
Learning OpenCV on StereoPi: a depth map from video



Today we want to share a series of Python examples for OpenCV learners on the Raspberry Pi, specifically for the dual-camera StereoPi board. The ready-made code (plus a Raspbian image) will help you go through all the steps, from capturing a picture to getting a depth map from captured video.

Introduction


I want to stress right away that these examples are meant as a comfortable introduction to the topic, not as a production solution. If you are an advanced OpenCV user and have worked with Raspberry Pi boards, you know that for serious work it is preferable to write in C, and even to use the Raspberry Pi's GPU. At the end of the article I will touch on the bottlenecks of the Python solution and on performance in general in a bit more detail.

What we work with


Our hardware setup is as follows:


A StereoPi board with a Raspberry Pi Compute Module 3+ on board. Two of the simplest V1 cameras for the Raspberry Pi (based on the OV5647 sensor) are connected.

What's installed:


The process of installing all the software is beyond the scope of this article, so we simply suggest downloading a ready-made Raspbian image (links to the GitHub repository are at the end of the article).

Step One: Capturing a Picture


To do this, use the script 1_test.py

Open a console and go from the home folder to the folder with the examples:

cd stereopi-tutorial 

Run the script:

 python 1_test.py 

After launch, a preview of our stereo image is displayed. The process can be interrupted by pressing the Q key. Doing so saves the last captured image, which one of the following scripts will use to set up the depth map.

This script lets you make sure all the hardware is working correctly, and it also produces the first image for future use.

Here is the first script at work:


Step Two: Collect Images for Calibration


Speaking of a spherical cow in a vacuum: to get a good-quality depth map we would need two absolutely identical cameras whose vertical and optical axes are perfectly parallel and whose horizontal axes coincide. In the real world, however, all cameras differ slightly, and it is impossible to mount them perfectly. So a software calibration trick was invented: the two real-world cameras take a large number of pictures of an object known in advance (in our case, a printed chessboard), and a special algorithm then calculates all the "non-idealities" and tries to correct the pictures so that they are close to the ideal.

This script performs the first stage of the work: it helps you take a series of photos for calibration.

Before each photo the script starts a 5-second countdown. This is usually enough time to move the board to a new position, check that the chessboard does not go over the edges of either camera's view, and hold the board steady (so the photo is not blurred). By default the series size is set to 30 photos.

Run:

 python 2_chess_cycle.py 

Process:


As a result, we get a series of photos in the /scenes folder.

Cut the pictures into pairs


The third script, 3_pairs_cut.py, cuts the photos you have taken into "left" and "right" pictures and saves them in the /pairs folder. Strictly speaking, we could drop this script and do the cutting on the fly, but it is very useful for further experiments. For example, you can keep pairs from different series, process these pairs with your own scripts, or even substitute pictures taken with other stereo cameras as pairs.

Also, before each picture is cut, the script displays it, which often lets you spot unsuccessful photos before the calibration stage and simply delete them.
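Since a StereoPi capture is a single side-by-side frame holding both views, cutting a pair boils down to slicing the frame down the middle. A minimal sketch (the frame here is a synthetic stand-in for a captured scene):

```python
import numpy as np

# Stand-in for one captured side-by-side frame (height x width x channels).
frame = np.zeros((240, 640, 3), dtype=np.uint8)

half = frame.shape[1] // 2
left_img = frame[:, :half]    # left camera's view
right_img = frame[:, half:]   # right camera's view
# The real script would now save these, e.g. into the /pairs folder.
```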

Run the script:

 python 3_pairs_cut.py 

Short video:


The ready-made Raspbian image includes the set of photos and cut pairs that we used in our experiments.

Calibration


The 4_calibration.py script loads all the pairs with chessboards and calculates the corrections needed to rectify the pictures. It automatically discards photos in which no chessboard is found, so the run does not stop because of unsuccessful photos. Once all 30 pairs of pictures have been loaded, the calculation begins; it takes about a minute and a half in our case. When it finishes, the script takes one of the stereo pairs, "corrects" it based on the calculated calibration parameters, and displays the rectified picture. At this point you can evaluate the quality of the calibration.

Run the command:

 python 4_calibration.py 

Calibration script in work:


Depth Map Setup


The 5_dm_tune.py script loads the image taken by the first script along with the calibration results. It then displays an interface that lets you change the depth-map settings and see what changes. Tip: before tuning the parameters, capture a frame that contains objects at several distances at once: near (30-40 centimeters), at a medium distance (a meter or two), and far away. This will let you choose parameters for which close objects appear red and distant objects appear dark blue.

The ready-made image also includes a file with our depth-map settings. You can load our settings in the script simply by clicking the "Load settings" button.

Run:

 python 5_dm_tune.py 

Here is the setup process:


Real-Time Depth Map


The last script, 6_dm_video.py, builds a depth map from video using the results of the previous scripts (the calibration and the depth-map tuning).

Run:

 python 6_dm_video.py 

And here is the result:


We hope that our scripts will be useful in your experiments!

Just in case, I'll add that all the scripts handle keystrokes, so you can stop them by pressing the Q key. If you stop a script roughly, for example with Ctrl+C, the interaction between Python and the camera may break and you will have to reboot the Raspberry Pi.

For advanced users



Thank you for your attention, and here is the promised link to the source.

Source: https://habr.com/ru/post/446872/

