Drawing 2D Metaballs with WebGL2



While many people shy away from writing vanilla WebGL and immediately jump to frameworks such as three.js or PixiJS, it is possible to achieve great visuals and complex animation with relatively small amounts of code. Today, I would like to present the core WebGL concepts while programming some simple 2D visuals. This article assumes at least some high-level knowledge of WebGL obtained through a library.

Please note: WebGL2 has been around for years, yet Safari only recently enabled it behind a flag. It is a pretty significant upgrade from WebGL1 and brings tons of new useful features, some of which we will take advantage of in this tutorial.

What are we going to build

From a high-level standpoint, to implement our 2D metaballs we need two steps:

  • Draw a bunch of rectangles with a radial linear gradient starting from their centers and expanding to their edges. Draw a lot of them and alpha blend them together in a separate framebuffer.
  • Take the resulting image with the blended quads from step #1, scan its pixels one by one and decide the new color of each pixel depending on its opacity. For example – if the pixel has opacity smaller than 0.5, render it in red. Otherwise render it in yellow and so on (a short shader sketch of this decision follows right after the captions below).
Rendering multiple 2D quads and turning them into metaballs with post-processing.
Left: Multiple quads rendered with a radial gradient, alpha blended and rendered to a texture.
Right: Post-processing on the generated texture and rendering the result to the device screen. Conditional coloring of each pixel based on opacity.
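
To make step #2 concrete before we build up to it, the per-pixel decision boils down to a fragment shader along these lines (a simplified sketch only – the real shader we write later uses mix() and two thresholds instead of an if statement, and u_blendedQuads is a placeholder name for the texture produced in step #1):

precision highp float;

/* The texture with the alpha blended quads from step #1 */
uniform sampler2D u_blendedQuads;

in vec2 v_uv;
out vec4 outColor;

void main () {
  float alpha = texture(u_blendedQuads, v_uv).a;
  if (alpha < 0.5) {
    outColor = vec4(1.0, 0.0, 0.0, 1.0); /* below the threshold - red */
  } else {
    outColor = vec4(1.0, 1.0, 0.0, 1.0); /* above the threshold - yellow */
  }
}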

Don't worry if these terms don't make a lot of sense just yet – we will go over each of the steps needed in detail. Let's jump into the code and start building!

Bootstrapping our program

We will start things by:

  • Creating a HTMLCanvasElement, sizing it to our device viewport and inserting it into the page DOM
  • Obtaining a WebGL2RenderingContext to use for drawing stuff
  • Setting the correct WebGL viewport and the background color for our scene
  • Starting a requestAnimationFrame loop that will draw our scene as fast as the device allows. The speed is determined by various factors such as the hardware, current CPU / GPU workloads, battery levels, user preferences and so on. For smooth animation we are going to aim for 60FPS.
/* Create our canvas and obtain its WebGL2RenderingContext */
const canvas = document.createElement('canvas')
const gl = canvas.getContext('webgl2')

/* Handle error somehow if no WebGL2 support */
if (!gl) {
  // ...
}

/* Size our canvas and listen for resize events */
resizeCanvas()
window.addEventListener('resize', resizeCanvas)

/* Append our canvas to the DOM and set its background-color with CSS */
canvas.style.backgroundColor = 'black'
document.body.appendChild(canvas)

/* Issue first frame paint */
requestAnimationFrame(updateFrame)

function updateFrame (timestampMs) {
   /* Set our program viewport to fit the actual size of our monitor with devicePixelRatio taken into account */
   gl.viewport(0, 0, canvas.width, canvas.height)
   /* Set the WebGL background color to be transparent */
   gl.clearColor(0, 0, 0, 0)
   /* Clear the current canvas pixels */
   gl.clear(gl.COLOR_BUFFER_BIT)

   /* Issue next frame paint */
   requestAnimationFrame(updateFrame)
}

function resizeCanvas () {
   /*
      We need to account for devicePixelRatio when sizing our canvas.
      We will use it to obtain the actual pixel size of our viewport and size our canvas to match it.
      We will then downscale it back to CSS units so it neatly fills our viewport and we benefit from downsampling antialiasing.
      We also need to limit it because it can really slow our program down. Modern iPhones have devicePixelRatios of 3. That means rendering 9x more pixels each frame!

      More info: https://webglfundamentals.org/webgl/lessons/webgl-resizing-the-canvas.html
   */
   const dpr = devicePixelRatio > 2 ? 2 : devicePixelRatio
   canvas.width = innerWidth * dpr
   canvas.height = innerHeight * dpr
   canvas.style.width = `${innerWidth}px`
   canvas.style.height = `${innerHeight}px`
}

Drawing a quad

The next step is to actually draw a shape. WebGL has a rendering pipeline, which dictates how the object you draw and its corresponding geometry and material end up on the device screen. WebGL is essentially just a rasterising engine, in the sense that you give it properly formatted data and it produces pixels for you.

The full rendering pipeline is out of the scope of this tutorial, but you can read more about it here. Let's break down what exactly we need for our program:

Defining our geometry and its attributes

Each object we draw in WebGL is represented as a WebGLProgram running on the device GPU. It consists of input variables and vertex and fragment shaders that operate on these variables. The vertex shader's responsibility is to position our geometry correctly on the device screen and the fragment shader's responsibility is to control its appearance.

It is up to us as developers to write our vertex and fragment shaders, compile them on the device GPU and link them into a GLSL program. Once we have successfully done this, we must query this program's input variable locations that were allocated on the GPU for us, supply correctly formatted data to them, enable them and instruct them how to unpack and use our data.

To render our quad, we need 3 input variables:

  1. a_position will dictate the position of each vertex of our quad geometry. We will pass it as an array of 12 floats, i.e. 2 triangles with 3 points per triangle, each represented by 2 floats (x, y). This variable is an attribute, i.e. it is obviously different for each of the points that make up our geometry.
  2. a_uv will describe the texture offset for each point of our geometry. They too will be described as an array of 12 floats. We will use this data not to texture our quad with an image, but to dynamically create a radial linear gradient from the quad center. This variable is also an attribute and will too be different for each of our geometry points.
  3. u_projectionMatrix will be an input variable represented as a 32bit float array of 16 items that will dictate how we transform our geometry positions described in pixel values to the normalised WebGL coordinate system. This variable is a uniform; unlike the previous two, it will not change for each geometry position.

We can take advantage of a Vertex Array Object to store the description of our GLSL program input variables, their locations on the GPU and how they should be unpacked and used.

WebGLVertexArrayObjects or VAOs are 1st class citizens in WebGL2, unlike in WebGL1 where they were hidden behind an optional extension and their support was not guaranteed. They let us type less, execute fewer WebGL bindings and keep our drawing state in a single, easy to manage object that is simpler to track. They essentially store the description of our geometry and we can reference them later.
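
In isolation, this record / replay pattern looks something like the following minimal sketch (the actual buffer and attribute setup being recorded is covered in detail below):

const vao = gl.createVertexArray()

/* Start recording: buffer bindings and attribute setup get stored in the VAO */
gl.bindVertexArray(vao)
/* gl.bindBuffer(...), gl.enableVertexAttribArray(...), gl.vertexAttribPointer(...) go here */
gl.bindVertexArray(null) /* Stop recording */

/* Later, at render time, a single bind restores the entire description */
gl.bindVertexArray(vao)
/* gl.drawArrays(...) */
gl.bindVertexArray(null)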

We need to write the shaders in GLSL 3.00 ES, which WebGL2 supports. Our vertex shader will be quite simple:

/*
  Pass in geometry position and tex coord from the CPU
*/
in vec4 a_position;
in vec2 a_uv;

/*
  Pass in global projection matrix for each vertex
*/
uniform mat4 u_projectionMatrix;

/*
  Specify varying variable to be passed to fragment shader
*/
out vec2 v_uv;

void main () {
  /*
   We need to convert our quad points positions from pixels to the normalized WebGL coordinate system
  */
  gl_Position = u_projectionMatrix * a_position;
  v_uv = a_uv;
}

At this point, after we have successfully executed our vertex shader, WebGL will fill in the pixels between the points that make up the geometry on the device screen. The way the space between the points is filled depends on what primitives we are using for drawing – WebGL supports points, lines and triangles.

We as developers do not have control over this step.

After it has rasterised our geometry, it will execute our fragment shader on each generated pixel. The fragment shader's responsibility is the final look of each generated pixel and whether it should even be rendered. Here is our fragment shader:

/*
  Set fragment shader float precision
*/
precision highp float;

/*
  Consume interpolated tex coord varying from vertex shader
*/
in vec2 v_uv;

/*
  Final color represented as a vector of 4 components - r, g, b, a
*/
out vec4 outColor;

void main () {
  /*
    This function will run on each pixel generated by our quad geometry
  */
  /*
    Calculate the distance of each pixel from the center of the quad (0.5, 0.5)
  */
  float dist = distance(v_uv, vec2(0.5)) * 2.0;
  /*
    Invert and clamp our distance from 0.0 to 1.0
  */
  float c = clamp(1.0 - dist, 0.0, 1.0);
  /*
    Use the distance to generate the pixel opacity. We have to explicitly enable alpha blending in WebGL to see the correct result
  */
  outColor = vec4(vec3(1.0), c);
}
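
To get a feel for what this produces, plug in a few values: at the quad center v_uv = (0.5, 0.5), so dist = 0.0 and c = 1.0 – a fully opaque white pixel. At the middle of an edge, e.g. v_uv = (1.0, 0.5), dist = 0.5 * 2.0 = 1.0 and c = 0.0 – fully transparent. The corners produce dist ≈ 1.41 and get clamped to 0.0 as well. The result is a soft, radial white-to-transparent falloff from the quad center.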

Let's write two utility methods: makeGLShader() to create and compile our GLSL shaders and makeGLProgram() to link them into a GLSL program to be run on the GPU:

/*
  Utility method to create a WebGLShader object and compile it on the device GPU
  https://developer.mozilla.org/en-US/docs/Web/API/WebGLShader
*/
function makeGLShader (shaderType, shaderSource) {
  /* Create a WebGLShader object with the correct type */
  const shader = gl.createShader(shaderType)
  /* Attach the shaderSource string to the newly created shader */
  gl.shaderSource(shader, shaderSource)
  /* Compile our newly created shader */
  gl.compileShader(shader)
  const success = gl.getShaderParameter(shader, gl.COMPILE_STATUS)
  /* Return the WebGLShader if compilation was successful */
  if (success) {
    return shader
  }
  /* Otherwise log the error and delete the faulty shader */
  console.error(gl.getShaderInfoLog(shader))
  gl.deleteShader(shader)
}

/*
  Utility method to create a WebGLProgram object
  It will create both a vertex and fragment WebGLShader and link them into a program on the device GPU
  https://developer.mozilla.org/en-US/docs/Web/API/WebGLProgram
*/
function makeGLProgram (vertexShaderSource, fragmentShaderSource) {
  /* Create and compile vertex WebGLShader */
  const vertexShader = makeGLShader(gl.VERTEX_SHADER, vertexShaderSource)
  /* Create and compile fragment WebGLShader */
  const fragmentShader = makeGLShader(gl.FRAGMENT_SHADER, fragmentShaderSource)
  /* Create a WebGLProgram and attach our shaders to it */
  const program = gl.createProgram()
  gl.attachShader(program, vertexShader)
  gl.attachShader(program, fragmentShader)
  /* Link the newly created program on the device GPU */
  gl.linkProgram(program)
  /* Return the WebGLProgram if linking was successful */
  const success = gl.getProgramParameter(program, gl.LINK_STATUS)
  if (success) {
    return program
  }
  /* Otherwise log errors to the console and delete the faulty WebGLProgram */
  console.error(gl.getProgramInfoLog(program))
  gl.deleteProgram(program)
}

And here is the complete code snippet we need to add to our previous code snippet to generate our geometry, compile our shaders and link them into a GLSL program:

const canvas = document.createElement('canvas')
/* rest of code */

/* Enable WebGL alpha blending */
gl.enable(gl.BLEND)
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)

/*
  Generate the Vertex Array Object and GLSL program
  we need to render our 2D quad
*/
const {
  quadProgram,
  quadVertexArrayObject,
} = makeQuad(innerWidth / 2, innerHeight / 2)

/* --------------- Utils ----------------- */

function makeQuad (positionX, positionY, width = 50, height = 50, drawType = gl.STATIC_DRAW) {
  /*
    Write our vertex and fragment shader programs as simple JS strings

    !!! Important !!!
    
    WebGL2 requires GLSL 3.00 ES
    We need to declare this version on the FIRST LINE OF OUR PROGRAM
    Otherwise it won't work!
  */
  const vertexShaderSource = `#version 300 es
    /*
      Pass in geometry position and tex coord from the CPU
    */
    in vec4 a_position;
    in vec2 a_uv;
    
    /*
     Pass in global projection matrix for each vertex
    */
    uniform mat4 u_projectionMatrix;
    
    /*
      Specify varying variable to be passed to fragment shader
    */
    out vec2 v_uv;
    
    void main () {
      gl_Position = u_projectionMatrix * a_position;
      v_uv = a_uv;
    }
  `
  const fragmentShaderSource = `#version 300 es
    /*
      Set fragment shader float precision
    */
    precision highp float;
    
    /*
      Consume interpolated tex coord varying from vertex shader
    */
    in vec2 v_uv;
    
    /*
      Final color represented as a vector of 4 components - r, g, b, a
    */
    out vec4 outColor;
    
    void main () {
      float dist = distance(v_uv, vec2(0.5)) * 2.0;
      float c = clamp(1.0 - dist, 0.0, 1.0);
      outColor = vec4(vec3(1.0), c);
    }
  `
  /*
    Construct a WebGLProgram object out of our shader sources and link it on the GPU
  */
  const quadProgram = makeGLProgram(vertexShaderSource, fragmentShaderSource)
  
  /*
    Create a Vertex Array Object that will store a description of our geometry
    that we can reference later when rendering
  */
  const quadVertexArrayObject = gl.createVertexArray()
  
  /*
    1. Defining geometry positions
    
    Create the geometry points for our quad
        
    V6  _______ V5         V3
       |      /         /|
       |    /         /  |
       |  /         /    |
    V4 |/      V1 /______| V2
     
     We need two triangles to form a single quad
     As you can see, we end up duplicating vertices:
     V5 & V3 and V4 & V1 end up occupying the same position.
     
     There are better ways to prepare our data so we don't end up with
     duplicates, but let's keep it simple for this demo and duplicate them
     
     Unlike regular Javascript arrays, WebGL needs strongly typed data
     That's why we supply our positions as an array of 32 bit floating point numbers
  */
  const vertexArray = new Float32Array([
    /*
      First set of 3 points are for our first triangle
    */
    positionX - width / 2,  positionY + height / 2, // Vertex 1 (X, Y)
    positionX + width / 2,  positionY + height / 2, // Vertex 2 (X, Y)
    positionX + width / 2,  positionY - height / 2, // Vertex 3 (X, Y)
    /*
      Second set of 3 points are for our second triangle
    */
    positionX - width / 2, positionY + height / 2, // Vertex 4 (X, Y)
    positionX + width / 2, positionY - height / 2, // Vertex 5 (X, Y)
    positionX - width / 2, positionY - height / 2  // Vertex 6 (X, Y)
  ])

  /*
    Create a WebGLBuffer that will hold our triangles positions
  */
  const vertexBuffer = gl.createBuffer()
  /*
    Now that we've created a GLSL program on the GPU we need to supply data to it
    We need to supply our 32bit float array to the a_position variable used by the GLSL program
    
    When you link a vertex shader with a fragment shader by calling gl.linkProgram(someProgram)
    WebGL (the driver/GPU/browser) decides on its own which index/location to use for each attribute
    
    Therefore we need to find the location of a_position in our program
  */
  const a_positionLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_position')
  
  /*
    Bind the Vertex Array Object descriptor for this geometry
    Each geometry instruction from now on will be recorded under it
    
    To stop recording once we're done describing our geometry, we simply need to unbind it
  */
  gl.bindVertexArray(quadVertexArrayObject)

  /*
    Bind the active gl.ARRAY_BUFFER to our WebGLBuffer that describes the geometry positions
  */
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
  /*
    Feed our 32bit float array that describes our quad to the vertexBuffer using the
    gl.ARRAY_BUFFER global handle
  */
  gl.bufferData(gl.ARRAY_BUFFER, vertexArray, drawType)
  /*
    We need to explicitly enable the a_position variable on the GPU
  */
  gl.enableVertexAttribArray(a_positionLocationOnGPU)
  /*
    Finally we need to instruct the GPU how to pull the data out of our
    vertexBuffer and feed it into the a_position variable in the GLSL program
  */
  /*
    Tell the attribute how to get data out of positionBuffer (ARRAY_BUFFER)
  */
  const size = 2           // 2 components per iteration
  const type = gl.FLOAT    // the data is 32bit floats
  const normalize = false  // don't normalize the data
  const stride = 0         // 0 = move forward size * sizeof(type) each iteration to get the next position
  const offset = 0         // start at the beginning of the buffer
  gl.vertexAttribPointer(a_positionLocationOnGPU, size, type, normalize, stride, offset)
  
  /*
    2. Defining geometry UV texCoords
    
    V6  _______ V5         V3
       |      /         /|
       |    /         /  |
       |  /         /    |
    V4 |/      V1 /______| V2
  */
  const uvsArray = new Float32Array([
    0, 0, // V1
    1, 0, // V2
    1, 1, // V3
    0, 0, // V4
    1, 1, // V5
    0, 1  // V6
  ])
  /*
    The rest of the code is exactly like in the vertices step above.
    We need to put our data in a WebGLBuffer, look up the a_uv variable
    in our GLSL program, enable it, supply data to it and instruct
    WebGL how to pull it out:
  */
  const uvsBuffer = gl.createBuffer()
  const a_uvLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_uv')
  gl.bindBuffer(gl.ARRAY_BUFFER, uvsBuffer)
  gl.bufferData(gl.ARRAY_BUFFER, uvsArray, drawType)
  gl.enableVertexAttribArray(a_uvLocationOnGPU)
  gl.vertexAttribPointer(a_uvLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
  
  /*
    Stop recording and unbind the Vertex Array Object descriptor for this geometry
  */
  gl.bindVertexArray(null)
  
  /*
    WebGL has a normalized viewport coordinate system which looks like this:
    
         Device Viewport
       ------- 1.0 ------  
      |         |         |
      |         |         |
    -1.0 --------------- 1.0
      |         |         | 
      |         |         |
       ------ -1.0 -------
       
     However as you can see, we pass the position and size of our quad in actual pixels
     To convert these pixel values to the normalized coordinate system, we will
     use the simplest 2D projection matrix.
     It will be represented as an array of 16 32bit floats
     
     You can read a gentle introduction to 2D matrices here
     https://webglfundamentals.org/webgl/lessons/webgl-2d-matrices.html
  */
  const projectionMatrix = new Float32Array([
    2 / innerWidth, 0, 0, 0,
    0, -2 / innerHeight, 0, 0,
    0, 0, 0, 0,
    -1, 1, 0, 1,
  ])
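  
  /*
    A quick sanity check of the mapping this matrix performs
    (it is laid out in column-major order, as WebGL expects):

    clipX = (2 / innerWidth) * pixelX - 1
    clipY = (-2 / innerHeight) * pixelY + 1

    So pixel (0, 0) maps to (-1, 1) - the top left corner -
    and pixel (innerWidth, innerHeight) maps to (1, -1) - the bottom right
  */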
  
  /*
    In order to supply uniform data to our quad GLSL program, we first need to enable the GLSL program responsible for rendering our quad
  */
  gl.useProgram(quadProgram)
  /*
    Just like the a_position attribute variable earlier, we also need to look up
    the location of uniform variables in the GLSL program in order to supply them data
  */
  const u_projectionMatrixLocation = gl.getUniformLocation(quadProgram, 'u_projectionMatrix')
  /*
    Supply our projection matrix as a Float32Array of 16 items to the u_projectionMatrix uniform
  */
  gl.uniformMatrix4fv(u_projectionMatrixLocation, false, projectionMatrix)
  /*
    We have set up our uniform variables correctly, stop using the quad program for now
  */
  gl.useProgram(null)

  /*
    Return our GLSL program and the Vertex Array Object descriptor of our geometry
    We will need them to render our quad in our updateFrame method
  */
  return {
    quadProgram,
    quadVertexArrayObject,
  }
}

/* rest of code */
function makeGLShader (shaderType, shaderSource) {}
function makeGLProgram (vertexShaderSource, fragmentShaderSource) {}
function updateFrame (timestampMs) {}

We have successfully created a GLSL program quadProgram, which is running on the GPU, waiting to be drawn with. We have also obtained a Vertex Array Object quadVertexArrayObject, which describes our geometry and can be referenced before we draw. We can now draw our quad. Let's augment our updateFrame() method like so:

function updateFrame (timestampMs) {
   /* rest of our code */

  /*
    Bind the Vertex Array Object descriptor of the quad we generated earlier
  */
  gl.bindVertexArray(quadVertexArrayObject)
  /*
    Use our quad GLSL program
  */
  gl.useProgram(quadProgram)
  /*
    Issue a render command to paint our quad triangles
  */
  {
    const drawPrimitive = gl.TRIANGLES
    const vertexArrayOffset = 0
    const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
    gl.drawArrays(drawPrimitive, vertexArrayOffset, numberOfVertices)
  }
  /*     
    After a successful render, it is good practice to unbind our
    GLSL program and Vertex Array Object so we keep WebGL's state clean.
    We will bind them again anyway on the next render
  */
  gl.useProgram(null)
  gl.bindVertexArray(null)

  /* Issue next frame paint */
  requestAnimationFrame(updateFrame)
}

And here is our result:

We can use the great SpectorJS Chrome extension to capture our WebGL operations on each frame. We can look at the entire command list with their associated visual states and context information. Here is what it takes to render a single frame with our updateFrame() call:

Draw calls needed to render a single 2D quad in the center of our screen.
A screenshot of all the steps we implemented to render a single quad. (Click to see a larger version)

Some gotchas:

  1. We declare the vertices positions of our triangles in a counter clockwise order. This is important.
  2. We need to explicitly enable blending in WebGL and specify its blend operation. For our demo we will use gl.ONE_MINUS_SRC_ALPHA as a blend function (multiplies all colors by 1 minus the source alpha value).
  3. In our vertex shader you can see we expect the input variable a_position to be a vector with 4 components (vec4), while in Javascript we specify only 2 items per vertex. That's because the default attribute value is 0, 0, 0, 1. It does not matter that you are only supplying x and y from your attributes. z defaults to 0 and w defaults to 1.
  4. As you can see, WebGL is a state machine, where you have to constantly bind stuff before you can work on it and you always need to make sure you unbind it afterwards. Consider how in the code snippet above we supplied a Float32Array with our positions to the vertexBuffer:
const vertexArray = new Float32Array([/* ... */])
const vertexBuffer = gl.createBuffer()
/* Bind our vertexBuffer to the global WebGL bind point gl.ARRAY_BUFFER */
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
/* At this point, gl.ARRAY_BUFFER represents vertexBuffer */
/* Supply data to our vertexBuffer using the gl.ARRAY_BUFFER binding point */
gl.bufferData(gl.ARRAY_BUFFER, vertexArray, gl.STATIC_DRAW)
/* Do a bunch of other stuff with the active gl.ARRAY_BUFFER (vertexBuffer) here */
// ...

/* Once you've finished your work, unbind it */
gl.bindBuffer(gl.ARRAY_BUFFER, null)

This is the complete opposite of Javascript, where this same operation would be expressed like this for example (pseudocode):

const vertexBuffer = gl.createBuffer()
vertexBuffer.addData(vertexArray)
vertexBuffer.setDrawOperation(gl.STATIC_DRAW)
// and so on.

Coming from a Javascript background, I initially found WebGL's state machine way of doing things by constantly binding and unbinding really odd. One must exercise good discipline and always make sure to unbind stuff after using it, even in trivial programs like ours! Otherwise you risk things not working and hard to track bugs.

Drawing lots of quads

We have successfully rendered a single quad, but in order to make things more interesting and visually appealing, we need to draw more.

As we saw already, we can easily create new geometries with different positions using our makeQuad() utility helper. We can pass them different positions and radii and compile each one of them into a separate GLSL program to be executed on the GPU. This will work, however:

As we saw in our update loop method updateFrame, to render our quad on each frame we must:

  1. Use the correct GLSL program by calling gl.useProgram()
  2. Bind the correct VAO describing our geometry by calling gl.bindVertexArray()
  3. Issue a draw call with the correct primitive type by calling gl.drawArrays()

So 3 WebGL commands in total.

What if we want to render 500 quads? Suddenly we jump to 500×3 or 1500 individual WebGL calls on each frame of our animation. If we want 1000 quads we jump up to 3000 individual calls, without even counting all of the preparation WebGL bindings we have to do before our updateFrame loop starts.

Geometry Instancing is a way to reduce these calls. It works by letting you tell WebGL how many times you want the same thing drawn (the number of instances) with minor variations, such as rotation, scale, position etc. Examples include trees, grass, a crowd of people, boxes in a warehouse, etc.

Just like VAOs, instancing is a 1st class citizen in WebGL2 and does not require extensions, unlike in WebGL1. Let's augment our code to support geometry instancing and render 1000 quads with random positions.

First of all, we need to decide how many quads we want rendered and prepare the offset positions for each one as a new array of 32bit floats. Let's do 1000 quads and position them randomly in our viewport:

/* rest of code */

/* How many quads we want rendered */
const QUADS_COUNT = 1000
/*
  Array to store our quads positions
  We need to format our array as a continuous set
  of numbers, where each pair represents the X and Y
  of a single 2D position.
  
  Hence for 1000 quads we need an array of 2000 items
  or 1000 pairs of X and Y
*/
const quadsPositions = new Float32Array(QUADS_COUNT * 2)
for (let i = 0; i < QUADS_COUNT; i++) {
  /*
    Generate a random X and Y position
  */
  const randX = Math.random() * innerWidth
  const randY = Math.random() * innerHeight
  /*
    Set the correct X and Y for each pair in our array
  */
  quadsPositions[i * 2 + 0] = randX
  quadsPositions[i * 2 + 1] = randY
}

/*
  We also need to augment our makeQuad() method
  It no longer expects a single position, but rather an array of positions
*/
const {
  quadProgram,
  quadVertexArrayObject,
} = makeQuad(quadsPositions)

/* rest of code */

Instead of a single position, we will now pass an array of positions into our makeQuad() method. Let's augment this method to receive our offsets array as a new input variable a_offset to our shaders, which will contain the correct XY offset for a particular instance. To do this, we need to prepare our offsets as a new WebGLBuffer and instruct WebGL how to unpack them, just like we did for a_position and a_uv:

function makeQuad (quadsPositions, width = 70, height = 70, drawType = gl.STATIC_DRAW) {
  /* rest of code */

  /*
    Add offset positions for our individual instances
    They are declared and used in exactly the same way as
    "a_position" and "a_uv" above
  */
  const offsetsBuffer = gl.createBuffer()
  const a_offsetLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_offset')
  gl.bindBuffer(gl.ARRAY_BUFFER, offsetsBuffer)
  gl.bufferData(gl.ARRAY_BUFFER, quadsPositions, drawType)
  gl.enableVertexAttribArray(a_offsetLocationOnGPU)
  gl.vertexAttribPointer(a_offsetLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
  /*
    HOWEVER, we must add an extra WebGL call to set this attribute to only
    change per instance, instead of per vertex like a_position and a_uv above
  */
  const instancesDivisor = 1
  gl.vertexAttribDivisor(a_offsetLocationOnGPU, instancesDivisor)
  
  /*
    Stop recording and unbind the Vertex Array Object descriptor for this geometry
  */
  gl.bindVertexArray(null)

  /* rest of code */
}

We need to augment our original vertexArray responsible for passing data into our a_position GLSL variable. We no longer need to offset it to the desired position like in the first example, since now the a_offset variable will take care of this in the vertex shader:

const vertexArray = new Float32Array([
  /*
    First set of 3 points are for our first triangle
  */
 -width / 2,  height / 2, // Vertex 1 (X, Y)
  width / 2,  height / 2, // Vertex 2 (X, Y)
  width / 2, -height / 2, // Vertex 3 (X, Y)
  /*
    Second set of 3 points are for our second triangle
  */
 -width / 2,  height / 2, // Vertex 4 (X, Y)
  width / 2, -height / 2, // Vertex 5 (X, Y)
 -width / 2, -height / 2  // Vertex 6 (X, Y)
])

We also need to augment our vertex shader to consume and use the new a_offset input variable we pass from Javascript:

const vertexShaderSource = `#version 300 es
  /* rest of GLSL code */
  /*
    This input vector will change once per instance
  */
  in vec4 a_offset;

  void main () {
     /* Account for a_offset in the final geometry position */
     vec4 newPosition = a_position + a_offset;
     gl_Position = u_projectionMatrix * newPosition;
  }
  /* rest of GLSL code */
`

And as a final step we need to change the drawArrays call in our updateFrame to drawArraysInstanced to account for instancing. This new method expects the exact same arguments and adds instanceCount as the last one:

function updateFrame (timestampMs) {
   /* rest of code */
   {
     const drawPrimitive = gl.TRIANGLES
     const vertexArrayOffset = 0
     const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
     gl.drawArraysInstanced(drawPrimitive, vertexArrayOffset, numberOfVertices, QUADS_COUNT)
   }
   /* rest of code */
}

And with all these changes, here is our updated example:

Even though we increased the number of rendered objects by 1000x, we are still making 3 WebGL calls on each frame. That's a pretty great performance win!

Steps needed so WebGL can draw 1000 quads via geometry instancing.
All WebGL calls needed to draw our 1000 quads in a single updateFrame() call. Note the number of needed calls didn't increase from the previous example thanks to instancing.

Post Processing with a fullscreen quad

Now that we have our 1000 quads successfully rendering to the device screen on each frame, we can turn them into metaballs. As we established, we need to scan the pixels of the picture we generated in the previous steps and determine the alpha value of each pixel. If it is below a certain threshold, we discard it, otherwise we color it.

To do this, instead of rendering our scene directly to the screen as we do right now, we need to render it to a texture. We will do our post processing on this texture and render the result to the device screen.

Post-Processing is a technique used in graphics that allows you to take a current input texture, and manipulate its pixels to produce a transformed image. This can be used to apply shiny effects like volumetric lighting, or any other filter type effect you've seen in applications like Photoshop or Instagram.

Nicolas Garcia Belmonte

The basic technique for creating these effects is pretty straightforward:

  1. A WebGLTexture is created with the same size as the canvas and attached as a color attachment to a WebGLFramebuffer. At the beginning of our updateFrame() method, the framebuffer is set as the render target, and the entire scene is rendered normally to it.
  2. Next, a full-screen quad is rendered to the device screen using the texture generated in step 1 as an input. The shader used during the rendering of the quad is what contains the post-process effect.

Creating a texture and framebuffer to render to

A framebuffer is just a collection of attachments. Attachments are either textures or renderbuffers. Let's create a WebGLTexture and attach it to a framebuffer as the first color attachment:

/* rest of code */

const renderTexture = makeTexture()
const framebuffer = makeFramebuffer(renderTexture)

function makeTexture (textureWidth = canvas.width, textureHeight = canvas.height) {
  /*
    Create the texture that we will render to
  */
  const targetTexture = gl.createTexture()
  /*
    Just like everything else in WebGL up until now, we need to bind it
    so we can configure it. We will unbind it once we are done with it.
  */
  gl.bindTexture(gl.TEXTURE_2D, targetTexture)

  /*
    Define texture settings
  */
  const level = 0
  const internalFormat = gl.RGBA
  const border = 0
  const format = gl.RGBA
  const type = gl.UNSIGNED_BYTE
  /*
    Notice how data is null. That's because we don't have data for this texture just yet
    We just need WebGL to allocate the texture
  */
  const data = null
  gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, textureWidth, textureHeight, border, format, type, data)

  /*
    Set the filtering so we don't need mips
  */
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)
  
  return targetTexture
}

function makeFramebuffer (texture) {
  /*
    Create and bind the framebuffer
  */
  const fb = gl.createFramebuffer()
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb)
 
  /*
    Attach the texture as the first color attachment
  */
  const attachmentPoint = gl.COLOR_ATTACHMENT0
  const level = 0
  gl.framebufferTexture2D(gl.FRAMEBUFFER, attachmentPoint, gl.TEXTURE_2D, texture, level)
  /*
    Unbind the framebuffer and return it so we can reference it when rendering
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, null)
  return fb
}
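
While we are at it, it's worth verifying that the framebuffer is actually complete before rendering into it. This is an optional sanity check (assertFramebufferComplete is a hypothetical helper, not part of the original demo), but it surfaces problems early, for example when a texture format is not renderable on the current device:

function assertFramebufferComplete (fb) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb)
  const status = gl.checkFramebufferStatus(gl.FRAMEBUFFER)
  if (status !== gl.FRAMEBUFFER_COMPLETE) {
    console.error(`Framebuffer is incomplete, status: ${status}`)
  }
  gl.bindFramebuffer(gl.FRAMEBUFFER, null)
}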

We have successfully created a texture and attached it as a color attachment to a framebuffer. Now we can render our scene to it. Let's augment our updateFrame() method:

function updateFrame () {
  gl.viewport(0, 0, canvas.width, canvas.height)
  gl.clearColor(0, 0, 0, 0)
  gl.clear(gl.COLOR_BUFFER_BIT)

  /*
    Bind the framebuffer we created
    From now on until we unbind it, each WebGL draw command will render into it
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
  
  /* Set the offscreen framebuffer background color */
  gl.clearColor(0.2, 0.2, 0.2, 1.0)
  /* Clear the offscreen framebuffer pixels */
  gl.clear(gl.COLOR_BUFFER_BIT)

  /*
    Code for rendering our instanced quads here
  */

  /*
    We have successfully rendered to the framebuffer at this point
    In order to render to the screen next, we need to unbind it
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, null)
  
  /* Issue next frame paint */
  requestAnimationFrame(updateFrame)
}

Let's take a look at our result:

As you can see, we get an empty screen. There are no errors and the program is running just fine – keep in mind however that we are rendering to a separate framebuffer, not the default device screen framebuffer!

Breakdown of our WebGL scene and the steps needed to render it to a separate framebuffer.
Our program produces a black screen, since we are rendering to the offscreen framebuffer

In order to display our offscreen framebuffer back on the screen, we need to render a fullscreen quad and use the framebuffer's texture as an input.

Creating a fullscreen quad and displaying our texture on it

Let's create a new quad. We can reuse our makeQuad() method from the above snippets, but we need to augment it to optionally support instancing and to accept the vertex and fragment shader sources as outside argument variables. This time we need only one quad and the shaders we need for it are different.

Take a look at the updated makeQuad() signature:

/* rename our instanced quads program & VAO */
const {
  quadProgram: instancedQuadsProgram,
  quadVertexArrayObject: instancedQuadsVAO,
} = makeQuad({
  instancedOffsets: quadsPositions,
  /*
    We need a different set of vertex and fragment shaders
    for the different quads we need to render, so pass them from outside
  */
  vertexShaderSource: instancedQuadVertexShader,
  fragmentShaderSource: instancedQuadFragmentShader,
  /*
    support optional instancing
  */
  isInstanced: true,
})

Let's use the same method to create a new fullscreen quad and render it. First our vertex and fragment shader:

const fullscreenQuadVertexShader = `#version 300 es
   in vec4 a_position;
   in vec2 a_uv;
   
   uniform mat4 u_projectionMatrix;
   
   out vec2 v_uv;
   
   void main () {
    gl_Position = u_projectionMatrix * a_position;
    v_uv = a_uv;
   }
`
const fullscreenQuadFragmentShader = `#version 300 es
  precision highp float;
  
  /*
    Pass the texture we render to as a uniform
  */
  uniform sampler2D u_texture;
  
  in vec2 v_uv;
  
  out vec4 outputColor;
  
  void main () {
    /*
      Use our interpolated UVs we assigned in Javascript to look up
      the texture color value at each pixel
    */
    vec4 inputColor = texture(u_texture, v_uv);
    
    /*
      0.5 is the alpha threshold we use to decide if a
      pixel should be discarded or painted
    */
    float cutoffThreshold = 0.5;
    /*
      "cutoff" will be 0 if the pixel is below 0.5 or 1 if above
      
      step() docs - https://thebookofshaders.com/glossary/?search=step
    */
    float cutoff = step(cutoffThreshold, inputColor.a);
    
    /*
      Let's use the mix() GLSL method instead of an if statement
      if cutoff is 0, we will discard the pixel by using an empty color with no alpha
      otherwise, let's use red with alpha of 1
      
      mix() docs - https://thebookofshaders.com/glossary/?search=mix
    */
    vec4 emptyColor = vec4(0.0);
    /* Render base metaballs shapes */
    vec4 borderColor = vec4(1.0, 0.0, 0.0, 1.0);
    outputColor = mix(
      emptyColor,
      borderColor,
      cutoff
    );
    
    /*
      Increase the threshold and calculate a new cutoff, so we can render
      smaller shapes again, this time in a different color and with a smaller radius
    */
    cutoffThreshold += 0.05;
    cutoff = step(cutoffThreshold, inputColor.a);
    vec4 fillColor = vec4(1.0, 1.0, 0.0, 1.0);
    /*
      Add the new smaller metaballs color on top of the old one
    */
    outputColor = mix(
      outputColor,
      fillColor,
      cutoff
    );
  }
`

Let's use them to create and link a valid GLSL program, just like when we rendered our instances:

const {
  quadProgram: fullscreenQuadProgram,
  quadVertexArrayObject: fullscreenQuadVAO,
} = makeQuad({
  vertexShaderSource: fullscreenQuadVertexShader,
  fragmentShaderSource: fullscreenQuadFragmentShader,
  isInstanced: false,
  width: innerWidth,
  height: innerHeight
})
/*
  Unlike our instances GLSL program, here we need to pass an extra uniform - a "u_texture"!
  Tell the shader to use texture unit 0 for u_texture
*/
gl.useProgram(fullscreenQuadProgram)
const u_textureLocation = gl.getUniformLocation(fullscreenQuadProgram, 'u_texture')
gl.uniform1i(u_textureLocation, 0)
gl.useProgram(null)
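
One detail worth spelling out: gl.uniform1i(u_textureLocation, 0) refers to texture unit 0, and gl.bindTexture() operates on whichever texture unit is currently active – unit 0 by default. Our single-texture demo can rely on that default, but with more than one texture we would have to select the unit explicitly first (a sketch of the general pattern, not needed in our case):

gl.activeTexture(gl.TEXTURE0) /* select texture unit 0 */
gl.bindTexture(gl.TEXTURE_2D, renderTexture) /* bind our texture to it */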

Finally we can render the fullscreen quad with the result texture as a uniform u_texture. Let's change our updateFrame() method:

function updateFrame () {
 gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
 /* render instanced quads here */
 gl.bindFramebuffer(gl.FRAMEBUFFER, null)
 
 /*
   Render our fullscreen quad
 */
 gl.bindVertexArray(fullscreenQuadVAO)
 gl.useProgram(fullscreenQuadProgram)
 /*
  Bind the texture we render to as the active TEXTURE_2D
 */
 gl.bindTexture(gl.TEXTURE_2D, renderTexture)
 {
   const drawPrimitive = gl.TRIANGLES
   const vertexArrayOffset = 0
   const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
   gl.drawArrays(drawPrimitive, vertexArrayOffset, numberOfVertices)
 }
 /*
   Just like everything else, unbind our texture once we are done rendering
 */
 gl.bindTexture(gl.TEXTURE_2D, null)
 gl.useProgram(null)
 gl.bindVertexArray(null)
 requestAnimationFrame(updateFrame)
}

And here is our final result (I also added a simple animation to make the effect more apparent):

And here is the breakdown of one updateFrame() call:

Breakdown of our WebGL scene and the steps needed to render 1000 quads and post-process them to metaballs.
You can clearly see how we render our 1000 instanced quads to a separate framebuffer in steps 1 to 3. We then draw and manipulate the resulting texture on a fullscreen quad that we render in steps 4 to 7.

Aliasing issues

On my 2016 MacBook Pro with retina display I can clearly see aliasing issues with our current example. If we were to add bigger radii and blow our animation up to fullscreen, the problem would only become more noticeable.

The issue comes from the fact that we are rendering to an 8bit gl.UNSIGNED_BYTE texture. If we want to increase the detail, we need to switch to floating point textures (32 bit float gl.RGBA32F or 16 bit float gl.RGBA16F). The catch is that these textures are not supported on all hardware and are not part of WebGL2 core. They are available through optional extensions, whose presence we need to check for.

The extensions we are interested in for rendering to 32bit floating point textures are:

  • EXT_color_buffer_float
  • OES_texture_float_linear

If these extensions are present on the user's device, we can use internalFormat = gl.RGBA32F and textureType = gl.FLOAT when creating our render textures. If they are not present, we can optionally fall back and render to 16bit floating point textures. The extensions we need in that case are:

  • EXT_color_buffer_half_float
  • OES_texture_half_float_linear

If these extensions are present, we can use internalFormat = gl.RGBA16F and textureType = gl.HALF_FLOAT for our render texture. If not, we will fall back to what we have used up until now – internalFormat = gl.RGBA and textureType = gl.UNSIGNED_BYTE.

Here is our updated makeTexture() method:

function makeTexture (textureWidth = canvas.width, textureHeight = canvas.height) { 
  /*
   Initialize internal format & texture type to default values
  */
  let internalFormat = gl.RGBA
  let type = gl.UNSIGNED_BYTE
  
  /*
    Check if the optional extensions are present on the device
  */
  const rgba32fSupported = gl.getExtension('EXT_color_buffer_float') && gl.getExtension('OES_texture_float_linear')
  
  if (rgba32fSupported) {
    internalFormat = gl.RGBA32F
    type = gl.FLOAT
  } else {
    /*
      Check if the optional fallback extensions are present on the device
    */
    const rgba16fSupported = gl.getExtension('EXT_color_buffer_half_float') && gl.getExtension('OES_texture_half_float_linear')
    if (rgba16fSupported) {
      internalFormat = gl.RGBA16F
      type = gl.HALF_FLOAT
    }
  }

  /* rest of code */
  
  /*
    Pass the correct internalFormat and texture type to the texImage2D call
  */
  gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, textureWidth, textureHeight, border, format, type, data)

  /* rest of code */
}

And here is our updated result:

Conclusion

I hope I managed to showcase the core principles behind WebGL2 with this demo. As you can see, the API itself is low-level and requires quite a bit of typing, yet at the same time it is really powerful and lets you draw complex scenes with fine-grained control over the rendering.

Writing production ready WebGL requires even more typing, checking for optional features / extensions and handling missing extensions and fallbacks, so I would advise you to use a framework. At the same time, I believe it is important to understand the key concepts behind the API so you can successfully use higher level libraries like threejs and dig into their internals if needed.

I am a big fan of twgl, which hides away much of the verbosity of the API, while still being really low level with a small footprint. This demo's code could easily be reduced by more than half by using it.
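
To give you a taste, here is roughly how the single quad setup from the beginning of this article could look with twgl. This is a sketch based on twgl's documented helpers (createProgramInfo, createBufferInfoFromArrays, setBuffersAndAttributes, setUniforms and drawBufferInfo), reusing the vertexShaderSource, fragmentShaderSource, vertexArray, uvsArray and projectionMatrix names from our demo:

import * as twgl from 'twgl.js'

/* Compiles both shaders, links them and looks up all attribute / uniform locations */
const programInfo = twgl.createProgramInfo(gl, [vertexShaderSource, fragmentShaderSource])

/* Creates the WebGLBuffers and remembers how each attribute should be unpacked */
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
  a_position: { numComponents: 2, data: vertexArray },
  a_uv: { numComponents: 2, data: uvsArray },
})

gl.useProgram(programInfo.program)
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo)
twgl.setUniforms(programInfo, { u_projectionMatrix: projectionMatrix })
twgl.drawBufferInfo(gl, bufferInfo) /* defaults to gl.TRIANGLES */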

I encourage you to experiment with the code after reading this article, plug in different values, change the order of things, add more draw commands and what not. I hope you walk away with a high-level understanding of the core WebGL2 API and how it all ties together, so you can learn more on your own.
