Recently I had the opportunity to take part in an interesting project. My sister is studying design at BUSAD, and her group was given the task of doing a project on the topic of interactive street installations. The idea we chose is fairly simple: an animation of a walking bear is shown on a screen, and everyone is invited to hit it with an improvised snowball launched from a slingshot. The result is shown in the video; if you are interested in the technical implementation, read on.
Description
The original idea was to use a Kinect for tracking. It seemed perfectly suited to the task: it has a decent built-in camera and can also track depth (and so determine the position of a body in three-dimensional space with reasonable accuracy). However, after brief testing the Kinect had to be abandoned: it cannot track objects at a distance, and the bright light from the projector interferes with its sensors.
Then I had the idea of using an ordinary webcam for tracking: place the camera next to the projector, point it at the screen, and use it to track the position of the ball in the plane of the screen. One problem remained, though: detecting the moment the ball collides with the wall. One option considered was an Arduino with a motion sensor, but in the end I decided to use a second camera, placed near the screen, as a motion detector. It registers the moment the ball flies toward the screen, and the impact coordinates are then taken a few milliseconds after that moment.
I decided to implement the software in C++ using the OpenCV library. It saves you from reinventing the wheel by providing ready-made functionality for capturing images from a camera and then processing them.
Ball Tracking
To determine the coordinates of the ball, I used the following algorithm:
1) Convert the image from RGB to HSV. This makes it easier to pick out similar colors, because unlike RGB, HSV stores hue, saturation, and brightness in separate channels.
2) Convert the image to binary (a bitmap): pixels closest to the target color (the color of the ball) become white, the rest black.
3) Filter out noise with a median filter.
4) Compute the average coordinate and the count of white pixels. If the count exceeds a threshold, the ball is present in the frame.
The resulting code is:
To detect the impact itself, I used the second camera as a motion detector: compute the difference between the previous frame and the current one, and if the difference exceeds a threshold, the ball has flown into the frame:
As a result, having the coordinates of the ball and information about the collision, the program passed them on to a second program responsible for the animation. The result of the project is shown in the first video, and below you can watch the technology being tested at home.