You know, sometimes I am surprised by the bizarre structure of public opinion. Take, for example, 3D visualization technology. Virtual reality glasses: Oculus Rift, Google Glass. But there is nothing new here; the first virtual reality helmets appeared back in the late 90s. Yes, they were clunky, they were ahead of their time, but why didn't they cause the same WOW effect back then? Or 3D printers. Articles about how cool they are and how quickly they will take over the world have been appearing in the news a couple of times a week for the last three years. I don't argue: they are cool and they will take over the world. But the technology was created back in the 80s and has been progressing sluggishly ever since. 3D TV? 1915...
All these technologies are good and curious, but why so much hype over every sneeze?
What if I told you that over the last 10 years a 3D capture technology that is very different from any other has been invented, developed, and put into mass production? Moreover, it is already in widespread use, debugged, and available to ordinary people in ordinary shops. Have you heard of it? (Probably only specialists in robotics and related fields have already guessed that I am talking about ToF cameras.)

What is a ToF camera? In the Russian Wikipedia you will not find even a brief mention of what it is (there is an English article). The name says it all: a Time-of-Flight camera. The camera measures distance via the speed of light: it times how long a light signal emitted by the camera takes to come back after being reflected by each point of the scene. The current standard is a 320×240 pixel matrix (the next generation will be 640×480). Such a camera gives a depth accuracy of about 1 centimeter. Yes, yes: a matrix of 76,800 sensors with a timing accuracy on the order of 1/10,000,000,000 (10^-10) of a second. On sale. For 150 bucks. You may even be using one already.
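A quick back-of-the-envelope check of mine (not a figure from any datasheet) of why centimeter-level depth accuracy implies timing on the order of 10^-10 seconds:

```python
# Light travels to the object and back, so 1 cm of depth corresponds
# to 2 cm of extra path, i.e. roughly 67 picoseconds of extra flight time.
C = 299_792_458.0          # speed of light, m/s
depth_resolution = 0.01    # 1 cm
dt = 2 * depth_resolution / C
print(f"{dt:.1e} s")       # ~6.7e-11 s, i.e. on the order of 1e-10 s
```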
And now a little more about the physics, the principle of operation, and where you may have already met this beauty.
There are three main types of ToF cameras, each with its own way of measuring the distance at every point. The simplest and most intuitive is pulsed modulation, also known as direct time-of-flight imaging: a pulse is emitted, and the exact time of its return is measured at each point of the matrix:

In essence, the matrix consists of triggers that fire on the front of the incoming wave. The same approach is used in optical flash sync triggers, only here it is orders of magnitude more precise, and that is the main difficulty of the method: it requires very accurate detection of the arrival time, which calls for specific technical solutions (which I was not able to find described). NASA is currently testing such sensors for the landing modules of its spacecraft.
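To make this concrete, here is a minimal sketch (my own toy illustration, not code for any real sensor) of turning per-pixel return times into per-pixel distances:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_return_time(round_trip_time_s):
    """Pulsed (direct) ToF: the pulse covers the distance twice (there and back),
    so the distance to the object is half of the round-trip path."""
    return C * np.asarray(round_trip_time_s) / 2.0

# Toy 2x3 "matrix" of per-pixel trigger times around 67 ns (an object ~10 m away)
times = np.array([[66.7e-9, 66.9e-9, 67.2e-9],
                  [66.6e-9, 66.8e-9, 67.0e-9]])
print(depth_from_return_time(times))  # distances in meters, all close to 10 m
```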

But look at the pictures it produces:

The sensitivity is high enough for the pixels to trigger on light reflected from a distance of about 1 kilometer. The graph shows the share of pixels in the matrix that fired as a function of distance: about 90% still fire at 1 km:

The second method is continuous modulation of the signal. The emitter sends a modulated wave, and the receiver finds the shift at which this wave best correlates with what it receives. That shift gives the time the signal spent travelling to the object and back to the receiver.

Let the emitted signal be:

s(t) = cos(ωt),

where ω is the modulation frequency. Then the received signal looks like:

r(t) = b + a·cos(ωt + φ),

where b is a constant offset, a is the amplitude, and φ is the phase shift picked up during the flight. The correlation of the incoming and outgoing signals is then:

c(τ) = (a/2)·cos(ωτ + φ) + b

But computing the full correlation over all possible time shifts in real time, in every pixel, is quite hard. So a clever trick is used instead: the received signal is sampled in four neighboring pixels with phase shifts of 90° relative to each other, and the correlation is evaluated only at those four shifts:

A_i = c(τ_i), where ωτ_i = i·90°, i = 0, 1, 2, 3

Then the phase shift is:

φ = arctan( (A₃ − A₁) / (A₀ − A₂) )

Knowing the phase shift and the speed of light, we get the distance to the object:

d = c·φ / (2ω)
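Here is a small sketch of that four-tap calculation (my own illustration; the 20 MHz modulation frequency is just an assumed, typical value, not tied to any particular camera):

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # assumed modulation frequency f, Hz (omega = 2*pi*f)

def depth_from_four_taps(a0, a1, a2, a3):
    """Recover distance from the four correlation samples taken at 0, 90, 180, 270 degrees.

    A3 - A1 is proportional to sin(phi) and A0 - A2 to cos(phi),
    so atan2 recovers the phase regardless of amplitude and offset.
    """
    phi = np.arctan2(a3 - a1, a0 - a2)           # phase shift
    phi = np.mod(phi, 2 * np.pi)                 # fold into [0, 2*pi)
    return C * phi / (2 * (2 * np.pi * F_MOD))   # d = c*phi / (2*omega)

# Simulated check: an object 3 m away.
# Note: at 20 MHz the unambiguous range is C / (2 * F_MOD) = 7.5 m.
d_true = 3.0
phi_true = 2 * (2 * np.pi * F_MOD) * d_true / C   # phase acquired over the round trip
a, b = 1.0, 0.5                                   # received amplitude and offset
taps = [0.5 * a * np.cos(phi_true + k * np.pi / 2) + b for k in range(4)]
print(depth_from_four_taps(*taps))                # ~3.0
```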
Cameras of this kind are a bit simpler than those built on the first technology, but they are still complex and expensive. This company makes them, and they cost about four grand. But they do look cute and futuristic:

The third technology is "range gated imagers": essentially a camera with a shutter. The idea is dead simple and requires neither high-precision receivers nor complex correlation. A shutter sits in front of the matrix. Suppose it is ideal and closes instantly. At time 0 the scene is lit up; the shutter closes at time t. Then objects farther away than c·t/2, where c is the speed of light, will not be visible at all: the light simply does not have time to reach them and come back. A point right next to the camera is illuminated for the whole exposure time t and has brightness I. So every point in the scene ends up with a brightness between 0 and I, and this brightness encodes the distance to the point: the brighter, the closer.
It only remains to take care of a couple of "small" things: build the shutter's closing time and the matrix's behavior at that moment into the model, account for the non-ideal light source (for a point light source the relationship between distance and brightness is not linear), and handle the different reflectivity of materials. These are very large and complex problems, and the authors of such devices have solved them.
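As a toy illustration (deliberately ignoring everything just listed: the shutter is ideal, the illumination is uniform, all materials reflect equally), the brightness-to-distance conversion of such a gated camera comes down to a couple of lines:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_gated_brightness(brightness, full_brightness, gate_time_s):
    """Toy model of a range-gated imager with an ideal shutter.

    The scene is lit at time 0 and the shutter closes at gate_time_s.
    A point at distance d is collected for (gate_time_s - 2*d/C), so its
    normalized brightness is 1 - 2*d / (C * gate_time_s); inverting gives d.
    """
    b = np.clip(np.asarray(brightness, dtype=float) / full_brightness, 0.0, 1.0)
    return (C * gate_time_s / 2.0) * (1.0 - b)

# Example: with a 100 ns gate, everything farther than 15 m stays completely dark
gate = 100e-9
print(depth_from_gated_brightness([1.0, 0.5, 0.0], full_brightness=1.0, gate_time_s=gate))
# -> [ 0.   7.5  15. ]  meters
```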
Such cameras are the least accurate, but also the simplest and cheapest: their complexity lives in the algorithms. Want an example of what such a camera looks like? Here it is:

Yes, the second Kinect has exactly this kind of camera. Just don't confuse the second Kinect with the first (once upon a time there was a good and detailed article on Habr that still mixed them up). The first Kinect uses structured light, a much older, less reliable and slower technology:


It uses an ordinary infrared camera that looks at a projected pattern; the distortions of the pattern determine the range (a comparison of the methods can be found here).
But the Kinect is not the only representative on the market. For example, Intel is launching a camera for $150 that produces a 3D depth map. It is aimed at closer ranges, but it comes with an SDK for analyzing gestures in the frame. Here is another option from SoftKinetic (they also have an SDK, and they are somehow tied to Texas Instruments).



Truth be told, I have not yet come across any of these cameras myself, which is a pity and a shame. But I think and hope that within five years they will come into everyday use and my turn will come. As far as I know, they are already actively used for robot guidance and are being introduced into face recognition systems. The range of tasks and applications is very wide.