In this post I want to talk about the idea, and a proof of concept, of adding real-world objects to virtual reality.
In my opinion, every major player on the VR market will implement the idea described here in the near future. The only reason it has not been done yet, I suspect, is the desire to roll out a perfect solution, and that is not so easy. For many years I have been mulling over the design of a hydraulic cockpit for a MechWarrior-style simulator.
Of course, I will never actually build it: it would require a significant investment, while it is quite clear that my interest would vanish as soon as the project was finished. I have nowhere to store the resulting contraption, and I lack the commercial streak to sell or rent it to someone.
But that does not stop me from periodically thinking through all sorts of designs. Earlier I planned to place many displays inside the cockpit: some of them would work as instruments, while the rest would emulate "windows" and viewing slits.
These days, however, another solution comes to mind: a VR headset (Head-Mounted Display). High-quality immersion is much easier to achieve with a headset, since there is no need to painstakingly polish the interior of a physical cockpit, and reworking the design is many times easier. But there is one catch.
A proper 'Mech control panel is a complex and interesting thing. The simplest authentic controller looks like this:
Dumbing things down in a serious simulator (say, controlling everything from a gamepad) is not an option. Suppose that modeling the control panel and placing it in the VR world is not a problem. Operating that many small toggle switches by touch, however, is a very bad idea. So what do we do, given that the user cannot see their own hands in the VR world?
Of course, there are special gloves, but today is not about them...
Depth cameras are developing rapidly at the moment. Microsoft was the first to bring them to a mass audience with its Kinect. Unfortunately, Microsoft decided that the Kinect was not economically viable and closed the project. The technology did not die with it, though, as one might think. Apple built a depth camera into its latest iPhone; it is this camera that is responsible for recognizing the owner's face. Microsoft did not abandon the technology either: VR headsets on the Windows Mixed Reality platform use inside-out tracking based on depth cameras.
The obvious solution is to bolt a depth camera onto the VR headset and overlay the captured geometry onto the VR world. Yet for some reason nobody does it.
The ZED Mini seems to be able to reconstruct the 3D world and is mounted on a headset, but I have not seen one in person, and all of its promo videos use the world information only to place 3D models into it, not the other way around. I suspect the problem is the low quality of the reconstructed model, which would become immediately obvious if you tried to render it.
Unfortunately, I had no way to mount the Kinect on the headset. The Kinect is huge and heavy, and a proper mount cannot be made without invasive changes to the headset's design. The headset is borrowed, so I cannot mangle it.
Therefore, for this mini-project I placed the Kinect vertically above the table. That option suits me completely: if the Kinect were installed in a virtual 'Mech cockpit, it would likewise be mounted above the control panel so that it detects only the player's hands and cuts off all other objects.
Let's move on to the project itself (there will be no line-by-line code analysis, only theory and a few pictures).
Using libfreenect2 and OpenNI we obtain a depth map, essentially a height map; a minimal sketch of that acquisition step is shown below. How do we visualize this height map? There are three obvious options in Unreal Engine, discussed right after the sketch.
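To make the acquisition step concrete, here is a minimal sketch of reading one depth frame through the OpenNI2 C++ API, assuming the libfreenect2 OpenNI2 driver is installed and the OpenNI2 headers are on the include path. The function name and error handling are illustrative, not the project's actual code:

```cpp
#include <OpenNI.h>
#include <vector>
#include <cstdint>

// Grab one depth frame from the first available device (Kinect v2 via the
// libfreenect2 OpenNI2 driver) and copy it into a simple height-map buffer.
bool GrabDepthFrame(std::vector<uint16_t>& OutDepth, int& OutWidth, int& OutHeight)
{
    if (openni::OpenNI::initialize() != openni::STATUS_OK)
        return false;

    openni::Device Device;
    if (Device.open(openni::ANY_DEVICE) != openni::STATUS_OK)
        return false;

    openni::VideoStream Depth;
    if (Depth.create(Device, openni::SENSOR_DEPTH) != openni::STATUS_OK ||
        Depth.start() != openni::STATUS_OK)
        return false;

    openni::VideoFrameRef Frame;
    if (Depth.readFrame(&Frame) != openni::STATUS_OK)
        return false;

    OutWidth  = Frame.getWidth();
    OutHeight = Frame.getHeight();

    // Each pixel is a distance in millimetres; 0 marks an invalid measurement.
    const openni::DepthPixel* Pixels =
        static_cast<const openni::DepthPixel*>(Frame.getData());
    OutDepth.assign(Pixels, Pixels + OutWidth * OutHeight);

    Depth.stop();
    Depth.destroy();
    Device.close();
    openni::OpenNI::shutdown();
    return true;
}
```

In the real plugin the stream is of course kept open and frames are read continuously, but the calls involved are the same.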
Option 1: a mesh with a height-map texture that offsets the vertices along Z.
The obvious and fastest option: the mesh stays completely static and only the textures change (which is very cheap).
Unfortunately, this method has a serious drawback: there is no way to attach physics to it. As far as the physics engine is concerned, such a mesh is perfectly flat and solid, a plain rectangle. Physics does not see that some of the vertices are transparent and clipped by the alpha test, and it does not see that the vertices are displaced along Z.
Option 2: build the mesh manually at a low level.
To do this we need to subclass UPrimitiveComponent and implement a new component with its own SceneProxy. This low-level approach gives the best performance; its main disadvantage is the rather high implementation complexity. If you were doing this properly, this is the option to choose. But since my goal was to get something working quickly and simply, I used the third option.
Option 3: an implementation based on UProceduralMeshComponent.
This is a component built into UE that lets you easily create a mesh and even immediately cook it for collision calculations.
Why would anyone use the second option instead of this one?
Because this component is not designed for dynamic geometry. It is tuned for a workflow where you hand it the geometry once (or at least rarely, and preferably not in real time), it slowly processes it, and then you work with the result quickly. That is not our case...
But it will do for a test. Besides, the scene is empty and the computer has nothing else to compute, so there is plenty of headroom. A rough sketch of building the mesh this way is shown below.
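For illustration, here is roughly how a depth map can be turned into a mesh with UProceduralMeshComponent. This is only a sketch under my own assumptions, not the plugin's actual code: the grid is downsampled, depth maps straight to Z, and collision is requested in CreateMeshSection. The function name and the scale constants are made up for the example:

```cpp
#include "CoreMinimal.h"
#include "ProceduralMeshComponent.h"

// Build a regular grid mesh from a depth map and hand it to the component.
// Depth is in millimetres; 0 means "no data" (such vertices are simply left at 0).
void BuildDepthMesh(UProceduralMeshComponent* Mesh,
                    const TArray<uint16>& Depth, int32 Width, int32 Height)
{
    const int32 Step = 4;            // downsample: one vertex per 4x4 depth pixels
    const float CellSize = 1.0f;     // cm between neighbouring vertices
    const float DepthToUnits = 0.1f; // mm -> cm

    TArray<FVector> Vertices;
    TArray<FVector2D> UVs;
    TArray<int32> Triangles;

    const int32 GridW = Width  / Step;
    const int32 GridH = Height / Step;

    for (int32 Y = 0; Y < GridH; ++Y)
    {
        for (int32 X = 0; X < GridW; ++X)
        {
            const uint16 D = Depth[(Y * Step) * Width + (X * Step)];
            Vertices.Add(FVector(X * CellSize, Y * CellSize, -D * DepthToUnits));
            UVs.Add(FVector2D((float)X / GridW, (float)Y / GridH));
        }
    }

    // Two triangles per grid cell.
    for (int32 Y = 0; Y < GridH - 1; ++Y)
    {
        for (int32 X = 0; X < GridW - 1; ++X)
        {
            const int32 I = Y * GridW + X;
            Triangles.Add(I);     Triangles.Add(I + GridW); Triangles.Add(I + 1);
            Triangles.Add(I + 1); Triangles.Add(I + GridW); Triangles.Add(I + GridW + 1);
        }
    }

    // bCreateCollision = true: the component also cooks a collision mesh,
    // which is exactly why this option was chosen over the height-map material.
    Mesh->CreateMeshSection(0, Vertices, Triangles,
                            TArray<FVector>(), UVs, TArray<FColor>(),
                            TArray<FProcMeshTangent>(), /*bCreateCollision=*/true);
}
```

The cooking of that collision mesh is precisely the slow part that makes the component a poor fit for per-frame updates, but, as noted above, for an otherwise empty test scene it is acceptable.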
Texturing the objects with their real image from the camera is not an option: real photographs stand out against the background of the virtual world.
So I decided to visualize them in the style of the SteamVR chaperone grid: I overlaid a blue grid texture with no fill and added an outline along the contour. It turned out quite acceptable. With fully transparent cells the hands read slightly worse, though, so I gave the squares a faintly noticeable bluish fill.
The screenshot shows the "smearing" of the geometry. It is caused by depth cameras being unable to handle surfaces angled at close to 90 degrees to the camera. The Kinect marks clearly degenerate pixels with the value 0, but unfortunately not all of them; some simply produce noise without being flagged. I applied a set of simple manipulations to remove the worst of the noise (a sketch of that kind of filtering is shown below), but I could not get rid of the smearing completely.
Note that this effect is especially noticeable when viewed from the side (the user sits in front of the table while the Kinect looks down from above). If the depth camera pointed along the user's gaze from a point close to their actual eyes, the smearing would be directed away from the viewer and would be much less noticeable.
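The "simple manipulations" on the depth data were along these lines. This is only a sketch of the idea, with thresholds and the exact rule chosen by me rather than taken from the original code: on top of the zeros the Kinect already reports, discard pixels whose depth jumps sharply relative to their neighbours, since those are usually the noisy edge samples that cause the smearing.

```cpp
#include "CoreMinimal.h"

// Very simple depth cleanup: invalidate pixels whose depth jumps too far
// from the neighbours on the left and above. Depth is in millimetres,
// 0 = already marked invalid by the camera.
void FilterDepth(TArray<uint16>& Depth, int32 Width, int32 Height, uint16 MaxJumpMm = 50)
{
    for (int32 Y = 1; Y < Height; ++Y)
    {
        for (int32 X = 1; X < Width; ++X)
        {
            const int32 I = Y * Width + X;
            const uint16 D     = Depth[I];
            const uint16 Left  = Depth[I - 1];
            const uint16 Above = Depth[I - Width];

            if (D == 0 || Left == 0 || Above == 0)
                continue; // nothing to compare against, or already invalid

            // A sharp jump relative to both neighbours is most likely an
            // edge pixel "smeared" between the hand and the table.
            if (FMath::Abs((int32)D - (int32)Left)  > MaxJumpMm &&
                FMath::Abs((int32)D - (int32)Above) > MaxJumpMm)
            {
                Depth[I] = 0; // treat it like a degenerate pixel
            }
        }
    }
}
```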
As you can see in the video, real hands work quite well inside the VR world. The only serious drawback is the displacement resolution.
The mesh does not morph smoothly into its new state; it is destroyed and re-created. Because of this, physics objects fall through it during sudden movements, so we move slowly and carefully:
I apologize for the dark video: I worked on the project in the evenings after work, and the footage looked fine in the camera preview. Only after all the equipment had been returned and the video transferred to the computer did it turn out to be very dark.
How to try it yourself
Something tells me that the number of people who simultaneously have an HMD (optional), a Kinect, the ability to work with UE, and the desire to try this project is quite small (zero?). So I see little point in uploading the source code to GitHub.
If you do want to try it: add the code as a regular plugin to any UE project. I could not figure out how to reference the .lib file via a relative path, so in OpenNI2CameraAndMesh.Build.cs the full path to OpenNI2.lib is specified (see the sketch below). Then place an ADepthMeshDirect wherever you need it. When the level starts, call the startOpenNICamera method from UTools. Do not forget that libfreenect2 is used to talk to the Kinect, which means the Kinect driver must be switched to libusbK according to the instructions on the libfreenect2 page.
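The Build.cs change mentioned above looks roughly like this. The install paths and module list here are placeholders I picked for the example, not the file verbatim; point them at wherever OpenNI2 lives on your machine:

```csharp
using UnrealBuildTool;

public class OpenNI2CameraAndMesh : ModuleRules
{
    public OpenNI2CameraAndMesh(ReadOnlyTargetRules Target) : base(Target)
    {
        PublicDependencyModuleNames.AddRange(new string[]
            { "Core", "CoreUObject", "Engine", "ProceduralMeshComponent" });

        // A relative path did not work for me, so the OpenNI2 SDK location
        // is hard-coded; replace it with your own install path.
        PublicIncludePaths.Add(@"C:\Program Files\OpenNI2\Include");
        PublicAdditionalLibraries.Add(@"C:\Program Files\OpenNI2\Lib\OpenNI2.lib");
    }
}
```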
UPD: at the beginning of the article I said that such a system would soon appear in every VR headset, but in the course of writing I somehow skipped over that point and never explained it.
So let me quote the comment I wrote below the post to cover it:
As for why such a system is needed in every VR system without exception: safety.
Today the boundaries of the play area are marked by a rough virtual cube. But few of us can afford to dedicate a completely empty space to VR. As a result, objects remain in the room, and some of them are dangerous. What I am sure will be done is a virtual rendering of the entire room shown to the player as a barely perceptible ghost, which on the one hand does not interfere with the perception of the game, and on the other keeps you from tripping over something and getting hurt.
P.S.: I want to express my deep gratitude to the company "that cannot be named outside the corporate blog", whose management provided me with the hardware to work on this project.