Making a Typography Motion Trail Effect with Three.js



Framebuffers are a key feature in WebGL when it comes to creating advanced graphical effects such as depth-of-field, bloom, film grain or various types of anti-aliasing, and have already been covered in-depth here on Codrops. They allow us to "post-process" our scenes, applying different effects on them once rendered. But how exactly do they work?

By default, WebGL (and also Three.js and all other libraries built on top of it) renders to the default framebuffer, which is the device screen. If you have used Three.js or any other WebGL framework before, you know that you create your mesh with the correct geometry and material, render it, and voilà, it's visible on your screen.

However, we as developers can create new framebuffers besides the default one and explicitly instruct WebGL to render to them. By doing so, we render our scenes to image buffers in the video card's memory instead of the device screen. Afterwards, we can treat these image buffers like regular textures and apply filters and effects before eventually rendering them to the device screen.
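Expressed in three.js terms, the core idea fits in a few lines. Here is a minimal sketch (assuming a renderer, scene and camera already exist) – the rest of this article builds this exact setup step by step:

// Create an offscreen framebuffer backed by a texture in video memory
const renderTarget = new THREE.WebGLRenderTarget(innerWidth, innerHeight)

// Render the scene into the offscreen framebuffer instead of the screen
renderer.setRenderTarget(renderTarget)
renderer.render(scene, camera)

// Switch back to the default framebuffer (device screen) and reuse
// the generated texture like any regular texture
renderer.setRenderTarget(null)
const screenMaterial = new THREE.MeshBasicMaterial({ map: renderTarget.texture })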

Here is a video breaking down the post-processing and effects in Metal Gear Solid V: The Phantom Pain that really brings the idea home. Notice how it starts with footage from the actual game rendered to the default framebuffer (device screen) and then breaks down how each framebuffer looks. All of these framebuffers are composited together on each frame and the result is the final picture you see when playing the game:

So with the theory out of the way, let's create a cool typography motion trail effect by rendering to a framebuffer!

Our skeleton app

Let's render some 2D text to the default framebuffer, i.e. the device screen, using three.js. Here is our boilerplate:

const LABEL_TEXT = 'ABC'

const clock = new THREE.Clock()
const scene = new THREE.Scene()

// Create a three.js renderer:
// 1. Size it correctly
// 2. Set default background color
// 3. Append it to the page
const renderer = new THREE.WebGLRenderer()
renderer.setClearColor(0x222222)
renderer.setClearAlpha(0)
renderer.setSize(innerWidth, innerHeight)
renderer.setPixelRatio(devicePixelRatio || 1)
document.body.appendChild(renderer.domElement)

// Create an orthographic camera that covers the entire screen
// 1. Position it correctly in the positive Z dimension
// 2. Orient it towards the scene center
const orthoCamera = new THREE.OrthographicCamera(
  -innerWidth / 2,
  innerWidth / 2,
  innerHeight / 2,
  -innerHeight / 2,
  0.1,
  10,
)
orthoCamera.position.set(0, 0, 1)
orthoCamera.lookAt(new THREE.Vector3(0, 0, 0))

// Create a plane geometry that spans either the entire
// viewport height or width depending on which one is bigger
const labelMeshSize = innerWidth > innerHeight ? innerHeight : innerWidth
const labelGeometry = new THREE.PlaneBufferGeometry(
  labelMeshSize,
  labelMeshSize
)

// Programmatically create a texture that will hold the text
let labelTextureCanvas
{
  // Canvas and corresponding context2d to be used for
  // drawing the text
  labelTextureCanvas = document.createElement('canvas')
  const labelTextureCtx = labelTextureCanvas.getContext('2d')

  // Dynamic texture size based on the device capabilities
  const textureSize = Math.min(renderer.capabilities.maxTextureSize, 2048)
  const relativeFontSize = 20
  // Size our text canvas
  labelTextureCanvas.width = textureSize
  labelTextureCanvas.height = textureSize
  labelTextureCtx.textAlign = 'center'
  labelTextureCtx.textBaseline = 'middle'

  // Dynamic font size based on the texture size
  // (based on the device capabilities)
  labelTextureCtx.font = `${relativeFontSize}px Helvetica`
  const textWidth = labelTextureCtx.measureText(LABEL_TEXT).width
  const widthDelta = labelTextureCanvas.width / textWidth
  const fontSize = relativeFontSize * widthDelta
  labelTextureCtx.font = `${fontSize}px Helvetica`
  labelTextureCtx.fillStyle = 'white'
  labelTextureCtx.fillText(LABEL_TEXT, labelTextureCanvas.width / 2, labelTextureCanvas.height / 2)
}
// Create a material with our programmatically created text
// texture as input
const labelMaterial = new THREE.MeshBasicMaterial({
  map: new THREE.CanvasTexture(labelTextureCanvas),
  transparent: true,
})

// Create a plane mesh, add it to the scene
const labelMesh = new THREE.Mesh(labelGeometry, labelMaterial)
scene.add(labelMesh)

// Start our animation render loop
renderer.setAnimationLoop(onAnimLoop)

function onAnimLoop() {
  // On each new frame, render the scene to the default framebuffer
  // (device screen)
  renderer.render(scene, orthoCamera)
}

This code simply initialises a three.js scene, adds a 2D plane with a text texture to it and renders it to the default framebuffer (device screen). If we execute it with three.js included in our project, we will get this:

See the Pen
Step 1: Render to default framebuffer
by Georgi Nikoloff (@gbnikolov)
on CodePen.

Again, we don't explicitly specify otherwise, so we are rendering to the default framebuffer (device screen).
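In three.js terms, "not specifying otherwise" corresponds to a null render target – a detail we will rely on explicitly later:

// Equivalent to the implicit default: render to the device screen
renderer.setRenderTarget(null)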

Now that we managed to render our scene to the device screen, let's add a framebuffer (THREE.WebGLRenderTarget) and render it to a texture in the video card memory.

Rendering to a framebuffer

Let's start by creating a new framebuffer when we initialise our app:

const clock = new THREE.Clock()
const scene = new THREE.Scene()

// Create a new framebuffer we will use to render to
// the video card memory
const renderBufferA = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

// ... rest of the application

Now that we have created it, we must explicitly instruct three.js to render to it instead of the default framebuffer, i.e. the device screen. We will do this in our program's animation loop:

function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)
  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
}

And here is our result:

See the Pen
Step 2: Render to a framebuffer
by Georgi Nikoloff (@gbnikolov)
on CodePen.

As you can see, we are getting an empty screen, yet our program contains no errors – so what happened? Well, we are no longer rendering to the device screen, but to another framebuffer! Our scene is being rendered to a texture in the video card memory, and that's why we see the empty screen.
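If you want to convince yourself the scene really is in there, three.js lets you read pixels back from a render target. A quick sanity check could look like this (purely a debugging aid, not part of the effect):

// Read back a single pixel from the center of renderBufferA –
// a non-zero value confirms the scene was rendered into it
const pixel = new Uint8Array(4)
renderer.readRenderTargetPixels(
  renderBufferA,
  Math.floor(renderBufferA.width / 2),
  Math.floor(renderBufferA.height / 2),
  1,
  1,
  pixel
)
console.log(pixel) // likely [255, 255, 255, 255] where the white text is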

In order to display this generated texture containing our scene back on the default framebuffer (device screen), we need to create another 2D plane that will cover the entire screen of our app and pass the texture as material input to it.

First we will create a fullscreen 2D plane that will span the entire device screen:

// ... rest of initialisation step

// Create a second scene that will hold our fullscreen plane
const postFXScene = new THREE.Scene()

// Create a plane geometry that covers the entire screen
const postFXGeometry = new THREE.PlaneBufferGeometry(innerWidth, innerHeight)

// Create a plane material that expects a sampler texture input
// We will pass our generated framebuffer texture to it
const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
  },
  // vertex shader will be responsible for positioning our plane correctly
  vertexShader: `
      varying vec2 v_uv;

      void main () {
        // Set the correct position of each plane vertex
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);

        // Pass in the correct UVs to the fragment shader
        v_uv = uv;
      }
    `,
  fragmentShader: `
      // Declare our texture input as a "sampler" variable
      uniform sampler2D sampler;

      // Consume the correct UVs from the vertex shader to use
      // when displaying the generated texture
      varying vec2 v_uv;

      void main () {
        // Sample the correct color from the generated texture
        vec4 inputColor = texture2D(sampler, v_uv);
        // Set the correct color of each pixel that makes up the plane
        gl_FragColor = inputColor;
      }
    `
})
const postFXMesh = new THREE.Mesh(postFXGeometry, postFXMaterial)
postFXScene.add(postFXMesh)

// ... animation loop code here, same as before

As you can see, we are creating a new scene that will hold our fullscreen plane. After creating it, we need to augment our animation loop to render the generated texture from the previous step to the fullscreen plane on our screen:

function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)

  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)

  // 👇
  // Set the device screen as the framebuffer to render to
  // In WebGL, framebuffer "null" corresponds to the default
  // framebuffer!
  renderer.setRenderTarget(null)

  // 👇
  // Assign the generated texture to the sampler variable used
  // in the postFXMesh that covers the device screen
  postFXMesh.material.uniforms.sampler.value = renderBufferA.texture

  // 👇
  // Render the postFX mesh to the default framebuffer
  renderer.render(postFXScene, orthoCamera)
}

After including these snippets, we can see our scene once again rendered on the screen:

See the Pen
Step 3: Display the generated framebuffer on the device screen
by Georgi Nikoloff (@gbnikolov)
on CodePen.

Let's recap the necessary steps needed to produce this image on our screen on each render loop (a condensed sketch follows the list):

  1. Create a renderTargetA framebuffer that will allow us to render to a separate texture in the user's device video memory
  2. Create our "ABC" plane mesh
  3. Render the "ABC" plane mesh to renderTargetA instead of the device screen
  4. Create a separate fullscreen plane mesh that expects a texture as an input to its material
  5. Render the fullscreen plane mesh back to the default framebuffer (device screen) using the generated texture created by rendering the "ABC" mesh to renderTargetA
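Put together, a condensed version of this whole setup could look like the following (a sketch reusing the names from the snippets above):

// 1. Framebuffer that lives in video memory
const renderBufferA = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

renderer.setAnimationLoop(() => {
  // 2. & 3. Render the "ABC" scene into renderBufferA
  renderer.setRenderTarget(renderBufferA)
  renderer.render(scene, orthoCamera)

  // 4. & 5. Feed the resulting texture to the fullscreen plane
  // and render that plane to the device screen
  renderer.setRenderTarget(null)
  postFXMesh.material.uniforms.sampler.value = renderBufferA.texture
  renderer.render(postFXScene, orthoCamera)
})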

Achieving the persistence effect by using two framebuffers

We don't have much use for framebuffers if we simply display them as they are on the device screen, as we do right now. Now that we have our setup ready, let's actually do some cool post-processing.

First, we actually want to create one more framebuffer – renderTargetB – and make sure both it and renderTargetA are let variables rather than consts. That's because we will swap them at the end of each render so we can achieve framebuffer ping-ponging.

"Ping-ponging" in WebGL is a technique that alternates the use of a framebuffer as either input or output. It's a neat trick that allows for general-purpose GPU computations and is used in effects such as gaussian blur, where in order to blur our scene we need to:

  1. Render it to framebuffer A using a 2D plane and apply horizontal blur via the fragment shader
  2. Render the resulting horizontally blurred image from step 1 to framebuffer B and apply vertical blur via the fragment shader
  3. Swap framebuffer A and framebuffer B
  4. Keep repeating steps 1 to 3, incrementally applying blur, until the desired gaussian blur radius is achieved (a sketch of this loop follows below)
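As a rough illustration of that blur loop (every name here – framebufferA, framebufferB, horizontalBlurMaterial, verticalBlurMaterial, blurScene, fullscreenPlane, blurIterations – is hypothetical and not part of this article's demo):

// A sketch of the blur ping-pong loop. The two materials are
// hypothetical – each would blur along one axis in its fragment shader
let readBuffer = framebufferA
let writeBuffer = framebufferB
const passes = [horizontalBlurMaterial, verticalBlurMaterial]

for (let i = 0; i < blurIterations * 2; i++) {
  // Alternate horizontal / vertical blur on each pass
  fullscreenPlane.material = passes[i % 2]
  fullscreenPlane.material.uniforms.sampler.value = readBuffer.texture

  // Write the blurred result into the other framebuffer
  renderer.setRenderTarget(writeBuffer)
  renderer.render(blurScene, orthoCamera)

  // Swap: the output of this pass becomes the input of the next
  const temp = readBuffer
  readBuffer = writeBuffer
  writeBuffer = temp
}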

Here is a small chart illustrating the steps needed to achieve ping-ponging:

So with that in mind, we will render the contents of renderTargetA into renderTargetB using the postFXMesh we created and apply some special effect via the fragment shader.

Let's kick things off by creating our renderTargetB:

let renderBufferA = new THREE.WebGLRenderTarget(
  // ...
)
// Create a second framebuffer
let renderBufferB = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

Next up, let's augment our animation loop to actually do the ping-pong technique:

function onAnimLoop() {
  // 👇
  // Do not clear the contents of the canvas on each render
  // In order to achieve our ping-pong effect, we must draw
  // the new frame on top of the previous one!
  renderer.autoClearColor = false

  // 👇
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)

  // 👇
  // Render the postFXScene to renderBufferA.
  // This will contain our ping-pong accumulated texture
  renderer.render(postFXScene, orthoCamera)

  // 👇
  // Render the original scene containing ABC again on top
  renderer.render(scene, orthoCamera)

  // Same as before
  // ...
  // ...

  // 👇
  // Ping-pong our framebuffers by swapping them
  // at the end of each frame render
  const temp = renderBufferA
  renderBufferA = renderBufferB
  renderBufferB = temp
}

If we render our scene again with these updated snippets, we will see no visual difference, even though we really do alternate between the two framebuffers. That's because, as it is right now, we don't apply any special effects in the fragment shader of our postFXMesh.

Let's change our fragment shader like so:

// Sample the correct color from the generated texture
// 👇
// Notice how we now apply a slight 0.005 offset to our UVs when
// looking up the correct texture color

vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));
// Set the correct color of each pixel that makes up the plane
// 👇
// We fade out the color from the previous step to 97.5% of
// whatever it was before
gl_FragColor = vec4(inputColor * 0.975);

With these changes in place, here is our updated program:

See the Pen
Step 4: Create a second framebuffer and ping-pong between them
by Georgi Nikoloff (@gbnikolov)
on CodePen.

Let's break down one frame render of our updated example:

  1. We render the renderTargetB result to renderTargetA
  2. We render our "ABC" text to renderTargetA, compositing it on top of the renderTargetB result from step 1 (we don't clear the contents of the canvas on new renders, because we set renderer.autoClearColor = false)
  3. We pass the generated renderTargetA texture to postFXMesh, apply a small offset vec2(0.005) to its UVs when looking up the texture color and fade it out a bit by multiplying the result by 0.975
  4. We render postFXMesh to the device screen
  5. We swap renderTargetA with renderTargetB (ping-ponging)

For each new frame render, we will repeat steps 1 to 5. This way, the previous target framebuffer we rendered to is used as an input to the current render, and so on. You can clearly see this effect visually in the last demo – notice how, as the ping-ponging progresses, more and more offset is applied to the UVs and the opacity fades out more and more.

Applying simplex noise and mouse interaction

Now that we have implemented the ping-pong technique and can see it working correctly, we can get creative and expand on it.

Instead of simply adding an offset in our fragment shader as before:

vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));

Let's actually use simplex noise for a more interesting visual result. We will also control the direction using our mouse position.

Here is our updated fragment shader:

// Pass in the elapsed time since the start of our program
uniform float time;

// Pass in the normalised mouse position
// (-1 to 1 horizontally and vertically)
uniform vec2 mousePos;

// <Insert an snoise (simplex noise) function definition here>

// Calculate different offsets for x and y by using the UVs
// and different time offsets to the snoise method
float a = snoise(vec3(v_uv * 1.0, time * 0.1)) * 0.0032;
float b = snoise(vec3(v_uv * 1.0, time * 0.1 + 100.0)) * 0.0032;

// Add the snoise offset multiplied by the normalised mouse position
// to the UVs
vec4 inputColor = texture2D(sampler, v_uv + vec2(a, b) + mousePos * 0.005);

We also need to specify mousePos and time as inputs to our postFXMesh material shader:

const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
    time: { value: 0 },
    mousePos: { value: new THREE.Vector2(0, 0) }
  },
  // ...
})
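The time uniform also has to be kept up to date on every frame. A minimal way to do that (a sketch, reusing the clock created during initialisation) is to update it at the top of the animation loop:

function onAnimLoop() {
  // Feed the elapsed seconds since program start to the shader
  postFXMesh.material.uniforms.time.value = clock.getElapsedTime()

  // ... rest of the render loop, same as before
}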

Finally, let's make sure we attach a mousemove event listener to our page and pass the updated normalised mouse coordinates from JavaScript to our GLSL fragment shader:

// ... initialisation step

// Attach mousemove event listener
document.addEventListener('mousemove', onMouseMove)

function onMouseMove (e) {
  // Normalise horizontal mouse pos from -1 to 1
  const x = (e.pageX / innerWidth) * 2 - 1

  // Normalise vertical mouse pos from -1 to 1
  const y = (1 - e.pageY / innerHeight) * 2 - 1

  // Pass normalised mouse coordinates to fragment shader
  postFXMesh.material.uniforms.mousePos.value.set(x, y)
}

// ... animation loop

With these changes in place, here is our final result. Make sure to hover around it (you might have to wait a moment for everything to load):

See the Pen
Step 5: Perlin Noise and mouse interaction
by Georgi Nikoloff (@gbnikolov)
on CodePen.

Conclusion

Framebuffers are a powerful tool in WebGL that allows us to greatly enhance our scenes via post-processing and achieve all kinds of cool effects. Some techniques require more than one framebuffer, as we saw, and it's up to us as developers to mix and match them however we need to in order to achieve our desired visuals.

I encourage you to experiment with the provided examples: try to render more elements, alternate the "ABC" text color between each renderTargetA and renderTargetB swap to achieve different color mixing, etc.
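As a starting point for that last suggestion, here is a sketch of one possible tweak (untested values, reusing the clock from the setup code) that re-tints the label inside the animation loop so each composited frame carries a slightly different color:

// Cycle the label tint through the hue spectrum over time;
// MeshBasicMaterial's color multiplies the white text texture
labelMesh.material.color.setHSL((clock.getElapsedTime() * 0.1) % 1, 0.5, 0.5)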

In the first demo, you can see a concrete example of how this typography effect could be used, and the second demo is a playground for you to try different settings (just open the controls in the top right corner).

