
Rendering a real 3D image in the browser

In this article I want to continue the story of my experiments with a 3D monitor. The first article described how to output a stereo image from a video stream (in the VLC video player); now I will show how to get a stereo image right in the browser. For the demo I took the wonderful Three.js library, about which a lot has already been written on Habr: it lets you quickly and simply build beautiful WebGL web applications. Below I will show how to make the user see a deep 3D image rather than a flat projection.



As the starting point we will take the simplest example from Three.js
- a rotating cube. To make the 3D effect more pronounced, I added a gradual movement toward the viewer on top of the rotation.
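A minimal sketch of what that extra motion could look like; the speeds and amplitude here are illustrative assumptions, not the demo's actual values:

```javascript
// Sketch of adding a back-and-forth motion toward the viewer on top of
// the cube's rotation. The constants are assumptions for illustration,
// not taken from the demo.
function animateStep(t) {
  var rotX = t * 0.005;              // rotation, as in webgl_geometry_cube
  var rotY = t * 0.01;
  var z = 100 * Math.sin(t * 0.001); // oscillate along z, toward and away from the camera
  return { rotX: rotX, rotY: rotY, z: z };
}
```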
To get two views of our three-dimensional scene, we use the following trick
in the loop that draws each frame:
- render the scene from the camera not to the screen but to a texture
- move the camera (into the position of the second eye)
- render the scene to another texture
- now we have images for the left and right eyes; all that remains is to mix them so that on a 3D monitor the left eye sees the left image and the right eye sees the right one.

Now let's describe this in code.
(There is no point in walking through the base code of the webgl_geometry_cube example; I will describe only what I added.)

    function initORTscene() {
        // render targets (textures) for the right and left views
        rtTexture = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBFormat });
        ltTexture = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBFormat });

        // shader material that interleaves the two views line by line
        materialScreen = new THREE.ShaderMaterial({
            uniforms: {
                lRacurs: { type: "t", value: ltTexture },
                rRacurs: { type: "t", value: rtTexture },
                height:  { type: "f", value: window.innerHeight }
            },
            vertexShader: document.getElementById('vertexShader').textContent,
            fragmentShader: document.getElementById('fragmentShader').textContent,
            depthWrite: false
        });

        // full-screen plane on which the mixed image is drawn
        var plane = new THREE.PlaneGeometry(window.innerWidth, window.innerHeight);
        var offscreenMesh = new THREE.Mesh(plane, materialScreen);
        offscreenMesh.position.z = -1; // a little behind the camera

        sceneORT = new THREE.Scene();
        // orthographic camera, so the plane exactly fills the viewport
        cameraORT = new THREE.OrthographicCamera(window.innerWidth / -2, window.innerWidth / 2, window.innerHeight / 2, window.innerHeight / -2, -10000, 10000);
        sceneORT.add(offscreenMesh);
    }


The shaders
We pass the two textures and the frame height (in screen pixels) into them.
The vertex shader computes the position of each point and passes it to the fragment shader.
There we determine whether the current screen line is even or odd: for even lines we sample one texture, for odd lines the other (the first article describes how this approach lets a passive 3D monitor form a 3D image).
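To make the interleaving concrete, here is a plain-JavaScript sketch of the same parity test the fragment shader performs (the helper name is mine, not from the demo); vUv.y is taken at the fragment center, i.e. (row + 0.5) / height:

```javascript
// Plain-JS mirror of the fragment shader's even/odd test, showing which
// screen rows end up sampling which view.
function eyeForRow(row, height) {
  var vUvY = (row + 0.5) / height;                // normalized coordinate of the fragment center
  var d = Math.floor(height * (vUvY + 1.0)) % 2;  // mod(floor(height*(vUv.y+1.0)), 2.0)
  return d > 0.1 ? "right" : "left";              // odd lines: right view, even lines: left view
}

// Adjacent rows alternate between the two views, matching the
// line-interleaved layout of a passive 3D monitor.
```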

 <script id="fragmentShader" type="x-shader/x-fragment">
    varying vec2 vUv;
    uniform sampler2D rRacurs;
    uniform sampler2D lRacurs;
    uniform float height;

    void main() {
        // odd lines from the left view, even lines from the right;
        // height is a uniform holding the viewport height in pixels
        float d = mod((floor(height*(vUv.y+1.0))),2.0); // odd or even
        if(d > 0.1) {
            gl_FragColor = texture2D( rRacurs, vUv );
        } else {
            gl_FragColor = texture2D( lRacurs, vUv );
        }
    }
 </script>

 <script id="vertexShader" type="x-shader/x-vertex">
    varying vec2 vUv;

    void main() {
        vUv = uv;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
 </script>


Now we draw our scene:

    var x = camera.position.x;
    var faceWidth = 5; // distance between the eyes in scene units

    // render the right view to its texture
    camera.position.x = x + faceWidth / 2;
    renderer.render(scene, camera, rtTexture, true);

    // render the left view to its texture
    camera.position.x = x - faceWidth / 2;
    renderer.render(scene, camera, ltTexture, true);
    camera.position.x = x;

    // mix the two views on the full-screen plane and draw it to the screen
    renderer.render(sceneORT, cameraORT);


That's all.
Happy owners of passive 3D monitors can watch the demo (on a regular monitor it is, of course, not as impressive). The code can be found on GitHub.

I want to note that this is admittedly not 30 lines of code, but it is no more than 70, and that is all you need to produce a real 3D image.

The faceWidth parameter can be changed: the larger it is, the stronger the 3D effect, but also the more noticeable the geometric distortions.
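A hypothetical helper (not part of the demo) makes the relationship explicit: faceWidth is the stereo baseline, and each eye's camera is shifted by half of it from the original position.

```javascript
// Hypothetical helper showing how faceWidth splits into the per-eye
// camera offsets used in the render loop above.
function eyeOffsets(faceWidth) {
  return { right: faceWidth / 2, left: -faceWidth / 2 };
}

var o = eyeOffsets(5); // the demo's default baseline
// o.right === 2.5, o.left === -2.5; increasing faceWidth widens the
// baseline, strengthening the depth effect at the cost of distortion
```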

This code can be used with any scene written in Three.js (WebGL) to add real 3D to it; for example, here is a link to a game I wrote while studying JavaScript, which looks quite good in 3D.

Update: Thanks to KOLANICH for adding an anaglyph effect to this demo; now you can try watching it on any monitor with red-and-blue glasses. I do not have such glasses myself, so I cannot check, but if someone checks and finds bugs, I will accept pull requests.

Source: https://habr.com/ru/post/212297/

