A shader is a program that runs on the graphics card to determine how an object should be drawn. Valve's Source engine, for example, uses shaders for everything in its 3D world and manipulates them through parameters stored in material files.
With Three.js, writing shaders is made much easier. You can write shader programs yourself, as the example shows, or use one of the many frequently used shaders that Three.js provides and apply the effect in just a few lines.
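As a hedged sketch of that "few lines" path: the code below assumes three.js and its example files for EffectComposer, RenderPass, ShaderPass, and a stock shader such as SepiaShader are loaded as globals, and that renderer, scene, and camera stand for your own objects.

```javascript
// Applying a stock Three.js shader as a post-processing pass. SepiaShader is
// just one example of the shaders shipped under examples/js/shaders.
function applyStockShader(renderer, scene, camera) {
  const composer = new THREE.EffectComposer(renderer);
  composer.addPass(new THREE.RenderPass(scene, camera)); // draw the scene first
  const effect = new THREE.ShaderPass(THREE.SepiaShader); // then run the shader
  effect.renderToScreen = true; // the last pass outputs to the canvas
  composer.addPass(effect);
  return composer; // call composer.render() each frame instead of renderer.render()
}
```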
In this post, we introduce how to use THREE.ShaderPass to apply shaders to video from the local camera, and how to read pixel data back from the rendered composer. We will take THREE.EdgeShader as an example and read the rendered edges from it.
We will proceed in the following steps:
1. Render local video to a plane in Three.js
1.1 Including files and setting up the environment
1.2 Getting video from the local camera using getUserMedia
1.3 Creating a plane and using the video as a texture
2. Applying THREE.EdgeShader to the video texture
2.1 Including shader files
2.2 Using THREE.EffectComposer
2.3 Reading pixels
By the end of this post, you will be able to implement the effect at demo 2.3 and count the non-zero pixels on the screen.
1. Render local video to a plane in Three.js
The result of this step is available at demo 1.3.
1.1 Including files and setting up the environment
If you are familiar with Three.js, this step may look extremely easy to you.
We include the three.js file from the build directory of the Three.js repo on GitHub, set up a minimal Three.js environment, and create a canvas to render into.
The result of this step is available at demo 1.1.
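The setup can be sketched roughly like this, assuming three.js is loaded as the global THREE and width/height are whatever canvas size your page uses:

```javascript
// Minimal environment sketch: scene, camera, renderer, and a canvas on the page.
function initThree(width, height) {
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(45, width / height, 0.1, 1000);
  camera.position.z = 500; // pull the camera back so the plane is in view
  const renderer = new THREE.WebGLRenderer();
  renderer.setSize(width, height);
  document.body.appendChild(renderer.domElement); // the canvas we render into
  return { scene, camera, renderer };
}
```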
1.2 Getting video from the local camera using getUserMedia
Next, we use navigator.getUserMedia to get access to the local camera.
The result of this step is available at demo 1.2.
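A sketch of the camera request follows. The original post uses the legacy callback form of navigator.getUserMedia; the code below uses the promise-based navigator.mediaDevices.getUserMedia that modern browsers expose. The video argument stands for your page's <video> element.

```javascript
// Request the local camera and pipe its stream into a <video> element.
function startCamera(video) {
  navigator.mediaDevices.getUserMedia({ video: true, audio: false })
    .then(function (stream) {
      video.srcObject = stream; // older code used video.src = URL.createObjectURL(stream)
      video.play();
    })
    .catch(function (err) {
      console.error('Could not access the camera:', err);
    });
}
```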
1.3 Creating a plane and using the video as a texture
Next, we need to draw the video onto the canvas using Three.js. You may ask why we don't just draw it with the 2D canvas API. The reason is that we will add shader effects afterwards, so we have to draw it using WebGL, which in our case means using Three.js.
We first create a canvas named videoImage to draw the video onto, and then create a videoTexture from it, which is used as the map of the material of a plane positioned to just cover the screen.
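A sketch of this step; the names videoImage and videoTexture follow the post, while the sizes are left as parameters for your own setup.

```javascript
// Draw the <video> into a hidden helper canvas, wrap that canvas in a texture,
// and map the texture onto a plane added to the scene.
function createVideoPlane(scene, width, height) {
  const videoImage = document.createElement('canvas');
  videoImage.width = width;
  videoImage.height = height;
  const videoImageContext = videoImage.getContext('2d');

  const videoTexture = new THREE.Texture(videoImage);
  videoTexture.minFilter = THREE.LinearFilter;
  videoTexture.magFilter = THREE.LinearFilter;

  const material = new THREE.MeshBasicMaterial({ map: videoTexture });
  const mesh = new THREE.Mesh(new THREE.PlaneGeometry(width, height), material);
  scene.add(mesh);
  return { videoImage, videoImageContext, videoTexture, mesh };
}
```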
Slow down and study the above code carefully, and ask yourself: how is video passed to all these elements?
The answer is that we need to modify the callback function video.onloadedmetadata. In the following lines of code, this.videoWidth and this.videoHeight are the dimensions of the local camera, and we set the canvas videoImage to the same size so that it renders only the video content, without any black borders. This is very important, since mismatched sizes can cause positioning problems later.
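A minimal sketch of that callback, assuming video is the <video> element and videoImage is the helper canvas:

```javascript
// Once the camera reports its real dimensions, resize the helper canvas to
// match so that it shows only video content, with no black borders.
function bindVideoSize(video, videoImage) {
  video.onloadedmetadata = function () {
    videoImage.width = this.videoWidth;   // `this` is the video element here
    videoImage.height = this.videoHeight;
  };
}
```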
Note that we set videoTexture.needsUpdate to true so that when our renderer draws the plane with videoTexture, it knows the texture has changed and should be re-uploaded.
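Putting it together, a per-frame update might look like the sketch below; renderFrame and its parameter names are illustrative, not the post's exact code.

```javascript
// Each frame: copy the current video frame into the helper canvas, flag the
// texture for re-upload, and render the scene.
function renderFrame(renderer, scene, camera, video, videoImageContext, videoTexture) {
  requestAnimationFrame(function () {
    renderFrame(renderer, scene, camera, video, videoImageContext, videoTexture);
  });
  if (video.readyState === video.HAVE_ENOUGH_DATA) {
    videoImageContext.drawImage(video, 0, 0);
    videoTexture.needsUpdate = true; // tell Three.js the canvas content changed
  }
  renderer.render(scene, camera);
}
```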
Now, we can see the local camera rendered at demo 1.3.
2. Applying THREE.EdgeShader to the video texture
2.1 Including shader files
Three.js's shader files live under examples/js/shaders in the repository; they are not part of the build directory. To use them, we need to include the corresponding files individually (typically postprocessing/EffectComposer.js, postprocessing/RenderPass.js, postprocessing/ShaderPass.js, postprocessing/MaskPass.js, shaders/CopyShader.js, and the shader itself, shaders/EdgeShader.js).
2.2 Using THREE.EffectComposer
Then, we create several passes to enable the shader effect. These passes work like a pipeline.
RenderPass renders the scene into a texture and passes it down the pipeline. EdgeShader runs an edge-detection algorithm over that texture, turning edges white and everything else black, and passes the result to CopyShader, which copies it to the screen. EffectComposer composes these three passes into a single pipeline.
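The pipeline described above can be sketched like this, assuming the post-processing example files are loaded as THREE globals:

```javascript
// Three-pass pipeline: render the scene, run edge detection, copy to screen.
function createEdgeComposer(renderer, scene, camera) {
  const composer = new THREE.EffectComposer(renderer);
  composer.addPass(new THREE.RenderPass(scene, camera));    // scene -> texture
  composer.addPass(new THREE.ShaderPass(THREE.EdgeShader)); // white edges, black elsewhere
  const copyPass = new THREE.ShaderPass(THREE.CopyShader);  // result -> screen
  copyPass.renderToScreen = true;
  composer.addPass(copyPass);
  return composer;
}
```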
Next, we call composer.render() in the animation loop instead of renderer.render(scene, camera).
The result of this step is available at demo 2.2. Now, you are supposed to get something like this… if you are as cute as me! :joy:
2.3 Reading pixels
Then, we can read the pixels from this canvas to get pixel information from GPU to CPU.
CAUTION!! This is extremely time-consuming!!
Well, I guess this is not a very common requirement, so when I searched for a solution I hardly found any; that is also why I wrote this post. I will use the pixel data to form vertices on the edges and build Delaunay triangles from them, so I suppose this large cost has to be paid.
In renderFrame, we add the following lines to read the pixels.
pixels is a one-dimensional array that contains colors in the order [red1, green1, blue1, alpha1, red2, green2, blue2, alpha2, ...]. So we can print the count of non-zero red components to make sure the pixels are read correctly.
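A sketch of both steps: readComposerPixels is an assumed helper name for the GPU read-back, and countNonZeroRed is the pure counting part.

```javascript
// Pull the composed frame back from the GPU. This stalls the pipeline, so do
// it sparingly. `renderer` is the THREE.WebGLRenderer the composer draws with,
// and the final pass must have renderToScreen = true so the default
// framebuffer holds the result.
function readComposerPixels(renderer, width, height) {
  const gl = renderer.getContext();
  const pixels = new Uint8Array(width * height * 4); // RGBA per pixel
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return pixels;
}

// Pure helper: count pixels whose red component is non-zero, i.e. edge pixels
// after the edge shader has turned edges white and everything else black.
function countNonZeroRed(pixels) {
  let count = 0;
  for (let i = 0; i < pixels.length; i += 4) {
    if (pixels[i] !== 0) count++;
  }
  return count;
}
```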
The result of this step is available at demo 2.3.
This post provides a solution for reading pixels back from a canvas rendered with THREE.EffectComposer.