
Using an Intel RealSense Camera with TouchDesigner, Part 2


The Intel RealSense camera is a very useful tool for creating virtual reality and augmented reality projects. In this second part of the article, you will learn how to use the Intel RealSense camera nodes in TouchDesigner to set up real-time rendering or projection for multi-screen systems, single screens, 180-degree (full-dome) systems, and 360-degree virtual reality. In addition, data from the Intel RealSense camera can be sent to an Oculus Rift using the Oculus Rift TOP node in TouchDesigner.
This second part is devoted to the RealSense CHOP node in TouchDesigner.

The RealSense CHOP node in TouchDesigner provides access to the most important tracking functions of the RealSense F200 and R200 cameras, such as eye, finger, and face tracking. These functions are particularly interesting for real-time animation, or for driving animation from people's body movements and gestures. They seem to me most useful for performances by dancers or musicians, where a high level of interactivity between live video, animation, graphics, sound, and performance is required.

To get the TouchDesigner (.toe) files associated with this article, click here. A free copy of TouchDesigner is also available for non-commercial use. It is fully functional, but the maximum resolution cannot exceed 1280 x 1280.
Again, with support for the Intel RealSense camera, TouchDesigner becomes even more versatile and powerful.

Note: Like the first part of this article, the second part is intended for users already familiar with TouchDesigner and its interface. If you have no experience with TouchDesigner and plan to work through this article step by step, I recommend that you first review the documentation available here: Exploring TouchDesigner.
Note: For optimal results with an Intel RealSense camera, consider distance. This Intel web page lists the range of every camera model and recommendations for using the cameras.

Historical background


All the data provided by Intel RealSense cameras is very useful for creating virtual and augmented reality. Attempts at what the Intel RealSense camera now does date back to the 1980s: hand-tracking technology was developed then in the form of a glove that transmits data, invented by Jaron Lanier and Thomas G. Zimmerman. In 1987, Nintendo released the first glove-shaped game controller, connected by wire to the Nintendo game console.

The devices whose development led to the Intel RealSense camera were originally intended for animating performances: motion-capture technology converted a human performance into mathematical, that is digital, data. Motion capture has been used since the 1970s in research projects at universities, as well as for training in the military. One of the first animated films created with motion capture was the Sexy Robot video, created in 1985 by Robert Abel and his colleagues. Sexy Robot combined several techniques to obtain the information used to build and animate the digital robot model. First, a physical robot model was built; it was measured from all sides and the description of it was digitized: the RealSense camera achieves similar results when scanning objects. To compute the movement, points were painted on an actor, and their motion was mapped to the motion of a digital skeleton: this produced a vector animation that drove the digital model. The RealSense camera carries an infrared camera and an infrared laser projector, which provide the data for digital models and motion tracking. The tracking capabilities of the Intel RealSense camera are quite advanced: it can even track eye movements.

Intel RealSense cameras


There are currently two models of Intel RealSense cameras. They perform similar functions but differ in some ways: the Intel RealSense F200, for which the exercises in this article are designed, and the Intel RealSense R200.

The Intel RealSense R200 camera has important advantages due to its compact size. It is designed to be mounted on a tripod or on the back of a tablet. The lens therefore points not at the user but at the surrounding world, and thanks to improved optics the camera's field of view covers a wider area. The R200 also has improved depth measurement. It is very interesting for augmented reality projects, since it supports scene perception, which lets you add virtual objects to footage of a real-world scene. You can also overlay virtual information on the captured image live. Unlike the F200, the R200 does not support finger, hand, and face tracking. TouchDesigner supports both Intel RealSense camera models, the F200 and the R200.

Intel RealSense cameras in TouchDesigner


TouchDesigner pairs perfectly with the Intel RealSense camera: facial expressions and hand movements communicate directly with the software. TouchDesigner can use this tracking and position data directly, and it can also use the depth, color, and infrared data the camera transmits. Intel RealSense cameras are very light and compact, especially the R200 model, which can easily be placed next to performers, unnoticed by the audience.

Adam Berg, a researcher at Leviathan who works on projects using the Intel RealSense camera with TouchDesigner to create interactive installations, says: "Thanks to its compact size and simple design, the camera is great for interactive solutions. The camera is unobtrusive, and infrastructure requirements are simplified, since the camera does not require an external power source. We also liked the low latency of the depth image. TouchDesigner is a great platform to work in, from creating the initial prototype to developing the final version. It has built-in support for the camera, high-performance multimedia playback, and convenient shader capabilities. And of course the excellent support should also be noted."

Using the Intel RealSense Camera in TouchDesigner


In this second part, we look at the RealSense CHOP node in TouchDesigner.

RealSense CHOP node


The RealSense CHOP node handles 3D tracking and position data, and it carries two kinds of information. (1) World-space position, measured in meters (with millimeter-level precision available), used for translations along the x, y, and z axes. Rotations about the x, y, and z axes appear in the RealSense CHOP as Euler angles in degrees. (2) The RealSense CHOP also takes the pixels of the input image and converts them to normalized UV coordinates, which is useful for image tracking.
The RealSense CHOP node has two modes of operation: finger/face tracking and marker tracking.
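These channels can also be read directly in Python anywhere TouchDesigner accepts a script or expression. Below is a minimal sketch, assuming a RealSense CHOP named realsense1 with Hands World Position enabled; the channel naming follows the hand_r/wrist:tx pattern used in Demo 1 below:

    # Read the right wrist's world-space position from a RealSense CHOP.
    # Values are in meters relative to the camera (see above).
    rs = op('realsense1')
    wrist_x = rs['hand_r/wrist:tx'].eval()  # translation along x
    wrist_y = rs['hand_r/wrist:ty'].eval()  # translation along y
    wrist_z = rs['hand_r/wrist:tz'].eval()  # translation along z (depth)
    debug('wrist position (m):', wrist_x, wrist_y, wrist_z)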

Using the RealSense CHOP node in TouchDesigner


Demo 1: Using Tracking


This first, simple demonstration of the RealSense CHOP node shows how it can be connected to other nodes and used to track and create motion. Again, note that only a basic knowledge of TouchDesigner is needed for these demonstrations. If you have no experience with TouchDesigner and plan to work through this article step by step, I recommend that you first review the documentation available here: Exploring TouchDesigner.
1. Create the nodes we need and arrange them in a horizontal row in the following order: Geometry COMP, RealSense CHOP, Select CHOP, Math CHOP, Lag CHOP, Out CHOP, and Trail CHOP.
2. Connect the RealSense CHOP to the Select CHOP, the Select CHOP to the Math CHOP, the Math CHOP to the Lag CHOP, the Lag CHOP to the Out CHOP, and the Out CHOP to the Trail CHOP.
3. Open the Setup parameters page of the RealSense CHOP and make sure the Hands World Position parameter is set to On. This displays the location of the tracked hand joints in space; values are in meters relative to the camera.
4. On the Select CHOP's Select parameters page, set the Channel Names parameter to hand_r/wrist:tx, choosing it from the drop-down list to the right of the parameter.
5. In the Rename From parameter, enter hand_r/wrist:tx; then, in the Rename To parameter, enter x.


Figure 1. Channels from the RealSense CHOP node are selected in the Select CHOP node

6. On the Math CHOP's Range parameters page, set the To Range parameter to 0, 100. For a smaller range of movement, enter a number less than 100.
7. Select the Geometry COMP and make sure its Xform parameters page is open. Click the "+" button in the lower-right corner of the Out CHOP to activate its viewer. Drag the x channel onto the Translate X parameter of the Geometry COMP and select Export CHOP from the drop-down menu.


Figure 2. Here you add the animation obtained from RealSense CHOP

To render the geometry, you need a Camera COMP, a material (MAT; I used a Wireframe MAT), a Light COMP, and a Render TOP. Add these nodes to render the project.
8. On the Camera COMP's Xform parameters page, set the Translate Z parameter to 10. This shifts the camera back along the z axis so you can better see the movement of the created geometry.
9. Wave your hand in front of the camera and watch the geometric shape move in the Render TOP. (A Python sketch of this network appears after Figure 4.)


Figure 3. How the nodes are connected to each other. The Trail CHOP at the end lets you see the animation in graphical form


Figure 4. The Geometry COMP's Translate X value is exported from the x channel of the Out CHOP node, which was passed down the chain from the Select CHOP node
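For readers who prefer scripting, the same Demo 1 network can also be wired up in Python. This is a sketch, not the article's own method: the operator type names are TouchDesigner's standard Python OP types, and the parameter names (channames, renamefrom, torange1, and so on) are my assumptions for the dialog labels used in the steps above, so verify them in your build:

    # Build the Demo 1 chain: RealSense -> Select -> Math -> Lag -> Out.
    n = op('/project1')
    rs    = n.create(realsenseCHOP, 'realsense1')  # assumes the RealSense CHOP type is available
    sel   = n.create(selectCHOP, 'select1')
    math1 = n.create(mathCHOP, 'math1')
    lag   = n.create(lagCHOP, 'lag1')
    out1  = n.create(outCHOP, 'out1')

    rs.outputConnectors[0].connect(sel)
    sel.outputConnectors[0].connect(math1)
    math1.outputConnectors[0].connect(lag)
    lag.outputConnectors[0].connect(out1)

    sel.par.channames  = 'hand_r/wrist:tx'  # Channel Names (step 4)
    sel.par.renamefrom = 'hand_r/wrist:tx'  # Rename From (step 5)
    sel.par.renameto   = 'x'                # Rename To (step 5)
    math1.par.torange1 = 0                  # To Range low (step 6)
    math1.par.torange2 = 100                # To Range high (step 6)

    # Drive Translate X by expression instead of a drag-and-drop export.
    geo = n.create(geometryCOMP, 'geo1')
    geo.par.tx.mode = ParMode.EXPRESSION
    geo.par.tx.expr = "op('out1')['x']"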

Demo 2: Marker tracking with the RealSense CHOP


In this demonstration, we use the marker-tracking feature of the RealSense CHOP to show how an image can be used for tracking. You will need two copies of one image, a printed copy and a digital copy, and they must match exactly. You can start with a digital file and print it, or start with an image on paper and scan it to create the digital version.
1. Add a RealSense CHOP node to the scene.
2. On the Setup parameters page of the RealSense CHOP, set the Mode parameter to Marker Tracking.
3. Create a Movie File In TOP.
4. On the Movie File In TOP's Play parameters page, use the File parameter to load the digital image for which you also have a printed copy.
5. Drag the Movie File In TOP onto the Marker Image TOP field at the bottom of the RealSense CHOP's Setup parameters page.
6. Create the Geometry COMP, Camera COMP, Light COMP, and Render TOP nodes.
7. As in step 7 of Demo 1, export the tx channel from the RealSense CHOP and drag it onto the Translate X parameter of the Geometry COMP.
8. Create a Reorder TOP and connect the Render TOP to it. On the Reorder parameters page, set Output Alpha to One.
9. Hold the printed copy of the digital file in front of the Intel RealSense camera and move it. The camera tracks the movement and passes it to the Render TOP; the numbers in the RealSense CHOP change as well. (A Python sketch of this setup appears after Figure 6.)


Figure 5. The full layout of the marker-tracking demonstration


Figure 6. On the Geometry COMP's parameters page, the tx channel from the RealSense CHOP node is dragged onto the Translate X parameter
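The marker-tracking setup can be sketched in Python as well. The parameter names mode and markerimage below are assumptions based on the Setup page labels, and the menu token for Marker Tracking is a guess; check the actual names in the Parameter dialog before relying on this:

    # Sketch of the Demo 2 marker-tracking setup.
    n = op('/project1')
    rs    = n.create(realsenseCHOP, 'realsense1')
    movie = n.create(moviefileinTOP, 'marker1')

    movie.par.file = 'marker.jpg'     # digital copy of the printed marker (step 4)
    rs.par.mode = 'markertracking'    # assumed menu token for Marker Tracking (step 2)
    rs.par.markerimage = movie        # assumed name of the Marker Image TOP parameter (step 5)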

Eye tracking in TouchDesigner with the RealSense CHOP


The TouchDesigner palette includes, in its RealSense section, an eyeTracking template that can be used to track the movement of a user's eyes. The template uses the finger/face tracking of a RealSense CHOP node; the RealSense TOP node must be set to Color. In the template, green wireframe rectangles move with the person's eyes and are overlaid on the color image of the person from the RealSense TOP. Instead of open green rectangles, you can use any other geometric shapes or particles. It is a very convenient template; here is an image of it.


Figure 7. Notice that the eyes are tracked even through glasses

Demo 3, part 1: A simple way to set up full-dome or virtual reality rendering


In this demonstration we take a file and show how to present it as a full-dome render and as 360-degree virtual reality. I have already prepared such a file for download: chopRealSense_FullDome_VR_render.toe.

Brief description of how this file was created
In this file I wanted to place geometric shapes (a sphere, a torus, cylinders, and boxes) in the scene, so I created a SOP node for each of these shapes. Each SOP is attached to a Transform SOP to move (transform) its shape to a different part of the scene. All the SOP chains connect to a single Merge SOP, and the Merge SOP feeds the Geometry COMP.


Figure 8. The first step: the geometric shapes placed in the scene in the downloadable file
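As a Python sketch of the same layout (the shapes and positions here are placeholders, not the file's actual arrangement):

    # One Transform SOP per primitive, all merged into a single Merge SOP.
    n = op('/project1')
    merge = n.create(mergeSOP, 'merge1')

    shapes = [(sphereSOP, 'sphere1', (-2, 0, 0)),
              (torusSOP,  'torus1',  ( 2, 0, 0)),
              (tubeSOP,   'tube1',   ( 0, 2, 0)),
              (boxSOP,    'box1',    ( 0, -2, 0))]

    for i, (optype, name, pos) in enumerate(shapes):
        shape = n.create(optype, name)
        xform = n.create(transformSOP, name + '_xform')
        shape.outputConnectors[0].connect(xform)
        xform.par.tx, xform.par.ty, xform.par.tz = pos  # place the shape
        xform.outputConnectors[0].connect(merge.inputConnectors[i])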

Then I created a Grid SOP and a SOP to DAT node. The SOP to DAT was used to instance the Geometry COMP so that more copies of the geometric shapes could be added to the scene. I also created a Constant MAT, selected green, and turned on the Wire Frame parameter on its Common page.


Figure 9. The SOP to DAT node was created from the Grid SOP node
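The instancing step can also be sketched in Python. The Geometry COMP parameter names below (instancing, instanceop, instancetx/ty/tz) and the P(0)/P(1)/P(2) column names from the SOP to DAT are my assumptions based on current builds; confirm them against your version:

    # Instance the merged geometry across the points of a Grid SOP.
    n = op('/project1')
    grid  = n.create(gridSOP, 'grid1')
    table = n.create(soptoDAT, 'sopto1')
    table.par.sop = grid                # read point data from the grid

    geo = op('geo1')                    # the Geometry COMP from the previous step
    geo.par.instancing = True           # turn instancing on
    geo.par.instanceop = table          # table supplying instance transforms
    geo.par.instancetx = 'P(0)'         # columns holding x/y/z point positions
    geo.par.instancety = 'P(1)'
    geo.par.instancetz = 'P(2)'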

Then I created the RealSense CHOP node and connected it to a Select CHOP, in which I selected the hand_r/wrist:tx channel for tracking and renamed it x. I connected the Select CHOP to a Math CHOP so that the range could be changed, and connected the Math CHOP to a Null CHOP. It is good practice to always end a chain with a Null or Out node, so that new filters can be inserted into the chain more conveniently. Then I exported the x channel from the Null CHOP to the Scale X parameter of the Geometry COMP. This scales all the geometric shapes in the scene along the x axis as I move my right hand in front of the Intel RealSense camera.


Figure 10. Tracking data from the RealSense CHOP node creates real-time animation and movement of the geometric shapes along the x axis

To create a 180-degree full-dome render from this file:
1. Create Render TOP, Camera COMP and Light COMP nodes.
2. On the Render TOP's Render parameters page, select Cube Map from the Render Mode drop-down menu.
3. On the Render TOP's Common parameters page, set the Resolution to a 1:1 aspect ratio, for example 4096 x 4096 for 4K output.
4. Create a Projection TOP and connect the Render TOP to it.
5. On the Projection TOP's Projection parameters page, select Fish-Eye from the Output drop-down menu.
6. (Optional; without this step the file will have a black background.) Create a Reorder TOP, and on its Reorder parameters page set Output Alpha to One.
7. Everything is now ready either to run the animation live or to export a movie file; see the instructions in the first part of this article. You are creating a circular, fisheye-style dome animation: a circle inside a square frame.

For an alternative method, go back to step 2 and, instead of choosing Cube Map from the Render Mode drop-down menu, select Fish-Eye (180). Then do step 3 and, if you wish, step 6. The animation is now ready to run or to export.

To create a 360-degree virtual reality render from this file:
1. Create Render TOP, Camera COMP and Light COMP nodes.
2. On the Render TOP's Render parameters page, select Cube Map from the Render Mode drop-down menu.
3. On the Render TOP's Common parameters page, set the Resolution to a 1:1 aspect ratio, for example 4096 x 4096 for 4K output.
4. Create a Projection TOP and connect the Render TOP to it.
5. On the Projection TOP's Projection parameters page, select Equirectangular from the Output drop-down menu. This automatically sets the aspect ratio to 2:1.
6. (Optional; without this step the file will have a black background.) Create a Reorder TOP, and on its Reorder parameters page set Output Alpha to One.
7. Everything is now ready either to run the animation live or to export a movie file; see the instructions in the first part of this article. When you export a movie, you create a rectangular animation with a 2:1 aspect ratio for viewing in virtual reality glasses. (A Python sketch for switching between the dome and VR setups follows.)
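Here is the promised sketch for switching the same render network between the two outputs in Python. The menu tokens (cubemap, fisheye, equirect) are assumptions; you can list the real ones with op('render1').par.rendermode.menuNames:

    # Configure the Render TOP and Projection TOP for dome or VR output.
    render = op('render1')
    proj   = op('projection1')

    render.par.resolutionw = 4096        # square (1:1) resolution, step 3
    render.par.resolutionh = 4096
    render.par.rendermode  = 'cubemap'   # Render Mode: Cube Map, step 2

    proj.par.output = 'fisheye'          # 180-degree full dome...
    # proj.par.output = 'equirect'       # ...or 360-degree VR (2:1 equirectangular)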


Figure 11. Long orange Tube SOPs have been added to the file. You can add your own geometry to this file.

Output to the Oculus Rift from TouchDesigner with an Intel RealSense camera


TouchDesigner provides several downloadable templates that show how to set up an Oculus Rift in TouchDesigner. One of these templates, OculusRiftSimple.toe, can be found in the archive. To view the result in the Oculus Rift, the computer must of course be connected to one. Without an Oculus Rift, you can still build the file, view the images in the LeftEye Render TOP and RightEye Render TOP nodes, and display them in the background of the scene. I added Oculus Rift support to the file used in Demo 3, so the Intel RealSense camera animates the image I see in the Oculus Rift.


Figure 12. Here the left-eye and right-eye views are displayed in the background. Much of the animation in this scene is driven by tracking data from the RealSense CHOP node. The file used to create this image, chopRealSense_FullDome_VRRender_FinalArticle2_OculusRiftSetUp.toe, can be downloaded via the button in the upper-right corner of this article

Source: https://habr.com/ru/post/281318/
