
Derivative's TouchDesigner* is a popular platform and program used all over the world for creating interactive solutions and real-time animation for performances, as well as for displaying three-dimensional animation, for projection mapping, and, more recently, in virtual reality systems. With support for the Intel RealSense camera, TouchDesigner* becomes even more versatile and powerful. Also worth noting are the ability to import objects and animations from other three-dimensional packages into TouchDesigner* via .fbx files, and the ability to work with pre-rendered animated videos and images.
In this two-part article, I'll talk about the integration of the Intel RealSense camera into TouchDesigner and how it can be used. The demonstrations in the first part use the Intel RealSense camera with TOP nodes. The demonstrations in the second part use the CHOP nodes. The second part also explains the creation of virtual reality sequences and hemispherical panoramas using the Intel RealSense camera.
Both parts include animations and TouchDesigner* (.toe) files that you can download and explore. To get the TouchDesigner* (.toe) files, click here. In addition, a free copy of TouchDesigner* is available for non-commercial use. It is fully functional, except that the maximum resolution is limited to 1280 by 1280.
Note. There are currently two types of Intel RealSense cameras: the F200 close-range camera and the R200 long-range camera. Thanks to its ultra-compact size, the R200 camera is very convenient for performances and other scenarios where the camera should be hidden. Unlike the larger F200 model, the R200 does not support hand and finger tracking or marker tracking. TouchDesigner* supports both Intel RealSense camera models, the F200 and the R200.
To quote the TouchDesigner web page: “TouchDesigner* is a revolutionary software platform that lets artists and designers work with materials in an open, free-form environment. TouchDesigner* is the ideal solution for interactive multimedia projects that use video, sound, three-dimensional graphics, controller input, the Internet and databases, DMX lighting, environmental sensors, and generally everything imaginable. It is a powerful medium for mixing all these elements in infinite ways.”
I asked Malcolm Bechard, Senior Developer at Derivative, to comment on using the Intel RealSense camera with TouchDesigner *:
“Thanks to TouchDesigner*'s procedural, node-based architecture, Intel RealSense camera data can be immediately received, visualized, and then passed on to other nodes without spending time writing code. You can quickly prototype ideas and develop with instant feedback. The camera is represented by a node in TouchDesigner*, which means there is no need to close and recompile the application at each iteration of development. The Intel RealSense camera enhances TouchDesigner* by providing users with a significant number of ready-made modules, such as gesture recognition, hand tracking, face tracking, and depth data, all of which can be used for interaction. There is no need for low-level analysis of hand-position data to recognize gestures: that work has already been done.”
Using the Intel RealSense Camera in TouchDesigner
TouchDesigner* is a node-based program and platform that uses Python* as its primary scripting language. There are six families of nodes that perform different operations and have different functions: TOP (textures), SOP (3D geometry), CHOP (animation and sound), DAT (tables and text), COMP (3D geometry containers and nodes for creating 2D control panels), and MAT (materials). The TouchDesigner* developers, after consulting with Intel engineers, created two special nodes, the Intel RealSense TOP node and the Intel RealSense CHOP node, to integrate the Intel RealSense camera into the program.
Note. This article is intended for users already familiar with the TouchDesigner* program and its interface. If you have no experience with TouchDesigner* and intend to work through this article step by step, I recommend that you first review the documentation available here.
Note. For optimal results when using an Intel RealSense camera, keep distance in mind. On this Intel web page, you can see the ranges of all the camera models and recommendations for their use.
The Intel RealSense TOP Node
The TOP nodes in TouchDesigner* perform many of the operations commonly found in image-compositing programs. The Intel RealSense TOP node complements these capabilities with 2D and 3D data from the Intel RealSense camera. The Intel RealSense TOP node offers a number of settings for capturing different types of data.
- Color. Video from the Intel RealSense camera's color sensor.
- Depth. The calculated depth of each pixel: 0 means the pixel is 0 meters from the camera, and 1 means the pixel is at the maximum trackable distance or farther.
- Raw Depth. Values taken directly from the Intel RealSense SDK. Again, 0 means the pixel is 0 meters from the camera, and 1 means the pixel is at the maximum trackable distance or farther.
- Visualized Depth. A grayscale image from the Intel RealSense SDK that helps you visualize the depth. It cannot be used to accurately determine each pixel's distance from the camera.
- Depth to Color UV Map. UV values in a 32-bit floating-point RG texture (note that there are only two channels, red and green, and no blue) needed to align the depth image with the color image. You can use the Remap TOP node to align the images.
- Color to Depth UV Map. UV values in a 32-bit floating-point RG texture (again, only red and green, no blue) needed to align the color image with the depth image. You can use the Remap TOP node to align the images.
- Infrared. Raw video from the Intel RealSense camera's infrared sensor.
- Point Cloud. Literally a cloud of points in three-dimensional space (with X, Y, Z coordinates) created by the Intel RealSense camera's scanner.
- Point Cloud Color UV. Can be used to fetch the color of each point from the color image stream.
Note. You can download the RealSensePointCloudForArticle.toe file to use as a simple starting template for creating 3D animated geometry from Intel RealSense camera data. This file can be modified in many ways. Together, three of the Intel RealSense TOP node's image types, Point Cloud, Color, and Point Cloud Color UV, allow you to create three-dimensional geometry from points (particles) with the color image mapped onto it. This opens up many interesting possibilities.
Point cloud geometry. This is animated geometry created with an Intel RealSense camera. It works very well in live performances; for example, you can add the voice of a talking cartoon character. TouchDesigner* can also use audio data to drive real-time animation.
The Intel RealSense CHOP Node
Note. There is also an Intel RealSense CHOP node, responsible for 3D tracking and position data. We will discuss it in the second part of this article.
Demo 1. Using the Intel RealSense TOP Node
To follow the first TOP demo, download settingUpRealNode2b_FINAL.toe via the button at the top of the article.
Demo 1, part 1. You will learn how to set up an Intel RealSense TOP node and connect it to other TOP nodes.
1. Open the Add Operator/OP Create dialog box.
2. In the TOP section, click RealSense.
3. On the Setup parameter page of the Intel RealSense TOP node, set the Image parameter to Color in the drop-down menu. The Intel RealSense TOP node now displays whatever the camera is pointed at, just like a conventional video camera.
4. Set the Intel RealSense camera resolution to 1920 by 1080.
Configuring an Intel RealSense TOP node is easy.
5. Create a Level TOP node and connect it to the Intel RealSense TOP node.
6. On the Pre page of the Level TOP node, set the Invert slider to 1.
7. Connect the Level TOP node to an HSV to RGB TOP node, then connect that to a Null TOP node.
The Intel RealSense TOP node can be connected to other TOP nodes to achieve the desired image and effects.
Next, we will feed the created image into a Phong MAT (material) node so that it can be applied to various geometric shapes as a texture.
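The Level-then-HSV-to-RGB chain above can be sketched outside TouchDesigner as plain per-pixel math. This is an illustrative approximation, not TouchDesigner code: the Level TOP with Invert at 1 flips each channel, and the HSV to RGB TOP then reinterprets the incoming channels as hue, saturation, and value.

```python
import colorsys

def invert_then_hsv_to_rgb(r, g, b):
    """Approximate the Level TOP (Invert = 1) followed by the
    HSV to RGB TOP for one normalized RGB pixel."""
    # Level TOP with Invert = 1: flip each channel around 1.0
    r, g, b = 1.0 - r, 1.0 - g, 1.0 - b
    # HSV to RGB TOP: treat the channels as H, S, V and convert
    return colorsys.hsv_to_rgb(r, g, b)

# A pure white input pixel inverts to black and stays black
print(invert_then_hsv_to_rgb(1.0, 1.0, 1.0))
```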
Using Data from an Intel RealSense Camera to Create Textures for Geometry
Demo 1, part 2. This exercise shows how to use the Intel RealSense TOP node to create textures and how to connect them to a MAT node so that they can be assigned to geometry in the project.
1. Add a Geometry COMP node to the scene.
2. Add a Phong MAT node.
3. Drag the Null TOP node onto the Color Map parameter of the Phong MAT node.
The Phong MAT node uses Intel RealSense camera data for its Color Map parameter.
4. On the Render page of the Geo COMP node, enter phong1 in the Material parameter to use the phong1 node as the material.
The Phong MAT node uses data from an Intel RealSense camera for its Color Map parameter and is referenced in the Render/Material parameter of the Geo COMP node.
Creating a Box SOP Node and Texturing It with the Newly Created Phong Shader
Demo 1, part 3. You will learn how to assign the Phong MAT shader just created from Intel RealSense camera data to a Box SOP node inside the Geometry COMP.
1. Navigate into the geo1 node to its child level (/project1/geo1).
2. Create a Box SOP node, a Texture SOP node and a Material SOP node.
3. Delete the Torus SOP node that was already there, then connect the box1 node to the texture1 and material1 nodes.
4. In the Material parameter of the material1 node, enter ../phong1. This references the phong1 MAT node created at the parent level.
5. To place the texture on each face of the cube, set the Texture/Texture Type parameter of the texture1 node to Face and set the Texture/Offset parameter to .5 .5 .5.
At the child level of the geo1 COMP node, the Box SOP, Texture SOP, and Material SOP nodes are now connected. The Material SOP node gets its texture from the phong1 MAT node at the parent level (../phong1).
Animating and Instancing the Box Geometry
Demo 1, part 4. You will learn how to rotate the geometry using a Transform SOP node and a simple expression, and then how to instance the Box geometry. The result is a screen full of rotating cubes, each textured by the Intel RealSense TOP node.
1. To animate the rotation of the cube around the x-axis, insert the Transform SOP node after the Texture SOP node.
2. Place an expression in the X component (first field) of the Rotate parameter of the transform1 SOP node. This expression is frame-independent: it keeps running and does not start repeating when the timeline loops. I multiplied the value by 10 to increase the speed:
absTime.seconds * 10
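As a rough illustration (plain Python rather than a TouchDesigner parameter expression), the frame-independent rotation driven by absTime.seconds can be modeled like this; the wrap to 0-360 degrees is added here only for readability:

```python
def rotation_degrees(abs_seconds, speed=10.0):
    # Mirrors the absTime.seconds * 10 expression; the modulo keeps
    # the angle in a readable 0-360 range without changing the rotation
    return (abs_seconds * speed) % 360.0

print(rotation_degrees(1.0))  # 10 degrees after one second
```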
Here you can see that the cube rotates.
3. To create many cubes, go back to the parent level (/project1) and, on the Instance parameter page of the geo1 COMP node, set the Instancing parameter to On.
4. Add a Grid SOP node and a SOP to DAT node.
5. Set the grid parameters: 10 rows, 10 columns, and a size of 20 by 20.
6. In the SOP to DAT node's parameters, set SOP to grid1 and make sure the Extract parameter is set to Points.
7. On the Instance parameter page of the geo1 COMP node, enter sopto1 in the Instance CHOP/DAT parameter.
8. Fill in the TX, TY, and TZ parameters with P(0), P(1), and P(2), respectively, to specify which columns from the sopto1 node to use for the instance positions.
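The point layout the Grid SOP feeds to the instances can be sketched as follows. This is an illustrative approximation of a 10 by 10 grid, 20 units by 20 units, centered at the origin; the exact orientation and point ordering of the real Grid SOP may differ:

```python
def grid_points(rows=10, cols=10, width=20.0, height=20.0):
    """Approximate the point positions a 10 x 10 Grid SOP produces:
    points evenly spaced over a width x height plane centered at
    the origin. Each tuple corresponds to the P(0), P(1), P(2)
    columns that the SOP to DAT node exposes."""
    pts = []
    for r in range(rows):
        for c in range(cols):
            x = -width / 2 + c * width / (cols - 1)
            y = -height / 2 + r * height / (rows - 1)
            pts.append((x, y, 0.0))
    return pts

print(len(grid_points()))  # one instance position per grid point
```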

9. If you want the image from the Intel RealSense camera to come through unfiltered, disable or bypass the Level TOP and HSV to RGB TOP nodes.

Real-Time Rendering and Animation
Demo 1, part 5. You will learn how to set up the scene for rendering and display the image live or output it to a video file.
1. To render the project, add Camera COMP, Light COMP, and Render TOP nodes. By default, the camera renders all geometry components in the scene.
2. Move the camera back about 20 units along the Z axis. Leave the light at its default values.
3. Set the rendering resolution to 1920 by 1080. By default, the rendering background is transparent (alpha value is 0).
4. To make the background opaque black, add a Constant TOP node, change the Color parameter to 0, 0, 0 for black, and set the Alpha parameter to 1. You can choose any other color as well.
5. Add an Over TOP node; connect the Render TOP node to its first input and the Constant TOP node to its second input. The background pixels now get the value (0, 0, 0, 1), so they are no longer transparent.
Another way to set a TOP's alpha to 1 is to use a Reorder TOP node and set its Output Alpha parameter to Input 1 and One.
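The compositing step can be written out as per-pixel math. Below is a minimal sketch of the standard "over" operator for premultiplied RGBA pixels, which is the operation the Over TOP performs; the premultiplied-alpha convention is my assumption here:

```python
def over(fg, bg):
    """Composite a foreground RGBA pixel over a background RGBA pixel
    using the standard 'over' operator for premultiplied alpha."""
    fr, fgc, fb, fa = fg
    br, bgc, bb, ba = bg
    # Background contributes only where the foreground is transparent
    return (fr + br * (1 - fa),
            fgc + bgc * (1 - fa),
            fb + bb * (1 - fa),
            fa + ba * (1 - fa))

# A transparent render pixel over the opaque black Constant TOP
print(over((0.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0)))
```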
The scene now displays with an opaque black background.
Here you can see a full screen of textured rotating cubes.
If you prefer to output the animation to a file instead of playing it back in real time during a demonstration, open the Export Movie dialog box in the File section of the top panel of the TouchDesigner* program. In the TOP Video parameter, enter null2 for this particular example; otherwise, enter whichever TOP node needs to be rendered.
Here is the Export Movie panel with the null2 node. If there were also a CHOP audio node, I would place it in CHOP Audio right below null2.
Demo 1, part 6. One of the useful features of the TouchDesigner* platform is the ability to play animation in real time. This feature is especially useful when using the Intel RealSense camera.
1. Add a Window COMP node and, in its Operator parameter, enter the null2 TOP node.
2. Set the resolution to 1920 by 1080.
3. Select the desired monitor in the Location parameter. The Window COMP node displays the animation in real time on the selected monitor; using it, you can target any monitor or projector.
You can create as many Window COMP nodes as you need to display images on other monitors.
Demo 2. Using the Intel RealSense TOP Node's Depth Data
The Intel RealSense TOP node offers a number of other settings for creating textures and animation.
In Demo 2, we use the depth data to blur the image based on the depth values received from the camera. From the downloaded archive, use the RealSenseDepthBlur.toe file.
First, create an Intel RealSense TOP node and set its Image parameter to Depth. The depth image contains pixels with a value of 0 (black) if they are close to the camera and 1 (white) if they are far from the camera. The range of pixel values is determined by the Max Depth parameter, whose value is specified in meters. By default, this parameter is set to 5, which means that pixels 5 meters or farther from the camera are white, and a pixel value of 0.5 corresponds to a distance of 2.5 m. Depending on the actual distance between you and the camera, it may make sense to lower this value. In this example, we changed it to 1.5 m.
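The relationship between depth pixel values and meters described above is linear, so it can be sketched directly (plain Python, with hypothetical helper names):

```python
def depth_to_meters(pixel_value, max_depth=1.5):
    # A Depth-image pixel of 0 is 0 m away; 1 is max_depth m or farther
    return pixel_value * max_depth

def meters_to_depth(meters, max_depth=1.5):
    # Inverse mapping; anything beyond max_depth clamps to 1.0 (white)
    return min(meters / max_depth, 1.0)

print(depth_to_meters(0.5, max_depth=5.0))  # 2.5 m, as in the default case
```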
Next, we need to process the depth image to remove objects outside the range of interest. For this, we use a Threshold TOP node.
1. Create a Threshold TOP node and connect it to the realsense1 node. We want to cull all pixels farther than a certain distance from the camera, so set the Comparator parameter to Greater and the Threshold parameter to 0.8. Pixels with a value greater than 0.8 (corresponding to a distance of 1.2 m or more when the Max Depth parameter of the Intel RealSense TOP node is set to 1.5) become 0, and all other pixels become 1.

2. Create a Multiply TOP node; connect the realsense1 node to its first input and the thresh1 node to its second input. Pixels multiplied by 1 remain unchanged, while pixels multiplied by 0 become 0. The multiply1 node now contains nonzero pixels only in the part of the image that will control the blur, which we will set up next.
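The Threshold-plus-Multiply combination amounts to masking the depth image, as in this minimal sketch (illustrative Python, not TouchDesigner code):

```python
def mask_depth(depth, threshold=0.8):
    """Sketch of the Threshold TOP (Comparator = Greater, Threshold = 0.8)
    followed by the Multiply TOP, as described above: depth values beyond
    the threshold are zeroed, the rest pass through unchanged."""
    mask = 0.0 if depth > threshold else 1.0
    return depth * mask

print([mask_depth(d) for d in (0.3, 0.79, 0.9)])
```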
3. Create a Movie File In TOP node and select a new image in its File parameter. In this example, we select Metter2.jpg from the TouchDesigner* Samples/Map folder.
4. Create a Luma Blur TOP node; connect moviefilein1 to the first input of lumablur1 and multiply1 to the second input.
5. In the lumablur1 parameters, set White Value to 0.4, Black Filter Width to 20, and White Filter Width to 1. As a result, pixels where the second input's value is 0 are blurred with a filter width of 20, while pixels with a value of 0.4 or more are blurred with a width of 1.
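One way to picture the Luma Blur settings is as a mapping from the control map's value to a blur filter width. The endpoint values below match the parameters set above; the linear interpolation between Black Filter Width and White Filter Width is my assumption about how the node behaves between the two endpoints:

```python
def blur_width(luma, white_value=0.4, black_width=20.0, white_width=1.0):
    """Sketch of how the Luma Blur TOP might choose a filter width from
    the control map: Black Filter Width at luma 0, White Filter Width
    at luma >= White Value, interpolated in between (an assumption)."""
    if luma >= white_value:
        return white_width
    t = luma / white_value  # 0 at black, 1 at White Value
    return black_width + t * (white_width - black_width)

print(blur_width(0.0), blur_width(0.4))
```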
The network as a whole.
As a result, we get an image in which the pixels occupied by the user are sharp while all other pixels are blurred.
The background of the Luma Blur TOP output shows how blurred the image is.
Demo 3. Using the Intel RealSense TOP Node's Depth Data with the Remap TOP Node
From the downloaded archive, use the RealSenseRemap.toe file.
Note. The depth and color sensors of the Intel RealSense camera are physically located in different places, so the images they produce do not match by default. For example, if your hand is exactly in the middle of the color image, it will not be in the middle of the depth image but shifted somewhat to the left or right. Remapping with the UV map eliminates this problem by aligning and precisely overlaying the pixels. Note the difference between the aligned and unaligned TOP nodes.
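Conceptually, the Remap TOP performs a per-pixel lookup through the UV map. Here is a simplified nearest-neighbor sketch of that idea (illustrative Python; the real node samples textures on the GPU with filtering):

```python
def remap_depth_to_color(depth_img, uv_map, w, h):
    """For each output pixel, read the (u, v) pair from the RG channels
    of the UV map and sample the depth image at that location.
    depth_img is a row-major list of values; uv_map is a row-major
    list of (u, v) tuples with normalized 0..1 coordinates."""
    out = []
    for i in range(w * h):
        u, v = uv_map[i]
        x = min(int(u * w), w - 1)   # nearest-neighbor sample
        y = min(int(v * h), h - 1)
        out.append(depth_img[y * w + x])
    return out

# An identity UV map leaves a 2 x 2 depth image unchanged
depth = [0.1, 0.2, 0.3, 0.4]
uv = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
print(remap_depth_to_color(depth, uv, 2, 2))
```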
The Remap TOP combines the depth data from the Intel RealSense TOP node with the color data from the same node, using the depth-to-color UV data to place both in the same space.
Demo 4. Using the Point Cloud in the Intel RealSense TOP Node
From the downloaded archive, use the PointCloudLimitEx.toe file.
In this exercise, you will learn how to create animated geometry using the point cloud from an Intel RealSense TOP node and a Limit SOP node. Note that this approach differs from the Point Cloud sample file provided at the beginning of this article. The earlier example uses GLSL shaders, which makes it possible to create far more points, but that technique is more complicated and beyond the scope of this article.
1. Create a RealSense TOP node and set the Image parameter to Point Cloud.
2. Create a TOP to CHOP node and connect it to a Select CHOP node.
3. Connect the Select CHOP node to the Math CHOP node.
4. In the TOP parameter of the topto1 CHOP node, enter: realsense1.
5. In the Channel Names parameter of the Select CHOP node, enter r g b, separated by spaces.
6. In the math1 CHOP node, enter 4.2 in the Multiply parameter field.
7. On the Range parameter page, enter 1 and 7 in the To Range parameter fields.
8. Create a Limit SOP node.
To quote the wiki page at www.derivative.ca: “The Limit SOP creates geometry from the data passed in by the CHOP nodes. It creates geometry at each sample point. Using the Output Type parameter on the Channels page, you can create different types of geometry.”
9. On the Channels parameter page of the limit1 SOP node, enter r in the X Channel parameter, g in the Y Channel parameter, and b in the Z Channel parameter.
Note. Assigning the r, g, and b values to different X, Y, and Z channels changes the geometry that is formed, so you can experiment with the order later.
10. On the Output parameter page, set the Output Type parameter to Sphere at Each Point in the drop-down list.
11. Create a SOP to DAT node. In its SOP parameter, enter limit1 or drag the limit1 SOP node onto the parameter. Leave the Extract parameter at its default value, Points.
12. Create Render TOP, Camera COMP, and Light COMP nodes.
13. Create a Reorder TOP node, set its Output Alpha to Input 1 and One, and connect it to the Render TOP node.
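The numeric chain from a point-cloud sample to a sphere position can be sketched as follows. The helper name is hypothetical, and the assumption that the Math CHOP applies Multiply before the 0-1 to To Range remap is mine:

```python
def pointcloud_to_position(r, g, b, multiply=4.2, to_range=(1.0, 7.0)):
    """Sketch of the Math CHOP -> Limit SOP chain: each sample's
    r, g, b channels are scaled by Multiply, remapped from 0..1 to
    To Range, and used as the X, Y, Z position of a sphere."""
    lo, hi = to_range

    def chan(v):
        v *= multiply                # Math CHOP Multiply parameter
        return lo + v * (hi - lo)    # remap from 0..1 to To Range

    return (chan(r), chan(g), chan(b))

print(pointcloud_to_position(0.0, 0.0, 0.0))  # darkest sample maps to (1, 1, 1)
```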
When the image from the Intel RealSense camera changes, the geometry changes too. This is the final project.
The resulting images in the Over TOP node. By changing the channel order in the Limit SOP parameters, you change the geometry built from the point cloud.
In the second part of this article, we will discuss the Intel RealSense CHOP node and the creation of content for real-time performance and recording, for hemispherical panoramas, and for virtual reality systems. In addition, we will discuss the use of the Oculus Rift CHOP node, and we will talk about hand tracking, face tracking, and marker tracking.