
An algorithm for motion detection by comparing two frames

Hello.
I want to share my work on image processing. I have recently been writing a home server for a "smart home" setup and started with video surveillance.
The task turned out to be not so trivial. I will write about the video surveillance system as a whole separately (if anyone is interested); here I would like to cover the topic of motion detection by comparing two frames.
This algorithm is needed to start and stop video recording from the cameras.
There is not much information on this topic online. I came up with the original algorithm myself (it is very simple), and the article "An algorithm for detecting shadows in a video image" helped me improve it.
In this article I will only cover the algorithm itself, without examples in any particular programming language. The whole algorithm is built on loops; everything is elementary and easy to recreate in your favorite language.
The examples in the description of the algorithm are given both as "live" images and as made-up tables (for clarity).

Basic algorithm



The advantages of this algorithm are ease of implementation, low resource consumption, and good sensitivity (it does not miss even the slightest movement).
I see only one drawback: it triggers on changes in illumination. Since the entire algorithm is based on color analysis, in cloudy weather you can easily watch blocks trigger across the whole image the moment the sun comes out from behind the clouds and the room becomes much brighter. In step 4 of the algorithm I showed a mask with an inflated "delta". In reality, on this example (see step 1) with a working delta, the mask covers almost the entire frame.
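The article gives the steps of the basic algorithm only as images, so here is a minimal sketch of how such a block-based two-frame comparison might look in Python. The block size, the threshold value, and all names (`move_mask`, `block`, `delta`) are my own assumptions, not taken from the article:

```python
# Hypothetical sketch of a block-based two-frame motion detector:
# split the frame into blocks, compare each block's mean color between
# two frames, and mark blocks whose difference exceeds a "delta".

def move_mask(frame_a, frame_b, block=16, delta=15):
    """Return a 2D boolean mask: True where a block's mean color
    changed by more than `delta` between the two frames.

    frame_a, frame_b: equally sized 2D lists of (r, g, b) tuples.
    """
    h, w = len(frame_a), len(frame_a[0])
    mask = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            # Sum per-channel differences over the block.
            diff = [0.0, 0.0, 0.0]
            n = 0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    for c in range(3):
                        diff[c] += frame_a[y][x][c] - frame_b[y][x][c]
                    n += 1
            # diff[c] / n is the difference of the blocks' mean colors;
            # sum the absolute per-channel differences and compare to delta.
            mean_diff = sum(abs(d) / n for d in diff)
            row.append(mean_diff > delta)
        mask.append(row)
    return mask
```

With a small delta even faint motion flips a block, which is exactly the oversensitivity to illumination changes described above.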

If you increase the delta too much (as in step 4 of our algorithm), this greatly reduces the sensitivity, and our motion sensor may stop reacting to dimly lit objects.
So I began looking for a solution, and to my delight I found a hint on Habr (see the link at the beginning of the text). I wanted the algorithm to trigger only on opaque objects, so that the translucent shadow of the girl and the light on the walls and floor would not set off our motion sensor.

Improved algorithm




In this algorithm, by filtering MoveMask with the MaskFilter filter, we remove most of the blocks in MoveMask that trigger on shadows or highlights. This cuts the sensor's false triggers by more than half.
Among the drawbacks: a large number of "deltas", and the complexity of both implementing and tuning the algorithm.
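The filtering steps are likewise shown only as images, but the underlying idea from the shadow-detection article is that a shadow (or highlight) dims or brightens all three color channels by roughly the same factor, while an opaque object changes their ratios. Here is a hedged sketch of that idea; the function names and the `ratio_delta` threshold are my own assumptions:

```python
# Hypothetical sketch of the MaskFilter idea: drop blocks whose color
# change looks like uniform dimming/brightening (a shadow or highlight)
# rather than a change in color ratios (an opaque object).

def is_shadow_or_highlight(mean_a, mean_b, ratio_delta=0.1):
    """True if the block's mean color in frame B looks like the mean
    color in frame A scaled by a single common factor."""
    ratios = []
    for a, b in zip(mean_a, mean_b):
        if a == 0:
            return False  # cannot estimate a scale factor
        ratios.append(b / a)
    # All channels scaled by (almost) the same factor => shadow/highlight.
    return max(ratios) - min(ratios) < ratio_delta

def filter_mask(mask, means_a, means_b):
    """Clear mask blocks whose change is explained by uniform scaling.

    means_a, means_b: per-block mean (r, g, b) for the two frames.
    """
    return [
        [cell and not is_shadow_or_highlight(means_a[i][j], means_b[i][j])
         for j, cell in enumerate(row)]
        for i, row in enumerate(mask)]
```

For example, a block going from mean (100, 150, 200) to (50, 75, 100) is a uniform halving (a shadow) and is filtered out, while (100, 100, 100) to (200, 50, 100) changes the channel ratios and survives the filter.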


The base algorithm versus the improved algorithm.
In this example, the shadow and the glare on the walls and windows were removed completely. The glare on the floor was too strong and did not disappear. But perhaps, for this room, adjusting one of the "deltas" would let the filter ignore the floor glare as well. Different rooms require different tuning of the algorithm.

How the algorithm could be improved further


Perhaps, if you convert RGB to HSV and work without the brightness channel, you can increase the accuracy of the algorithm. I have not yet gotten around to checking this.
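The HSV idea could be sketched like this, using the standard-library `colorsys` module: compare only the hue and saturation channels and ignore value (brightness), so uniform illumination changes contribute little to the difference. The distance function and its weighting are my own invention, only an illustration of the idea:

```python
# Sketch: compare two RGB pixels using only the H and S channels of HSV,
# ignoring V (brightness), so illumination changes matter less.
import colorsys

def hs_distance(rgb_a, rgb_b):
    """Distance between two RGB pixels (0..255 per channel) in H and S only."""
    ha, sa, _ = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb_a))
    hb, sb, _ = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb_b))
    # Hue is circular in [0, 1): 0.95 and 0.05 are close to each other.
    dh = min(abs(ha - hb), 1.0 - abs(ha - hb))
    return dh + abs(sa - sb)
```

A pixel that merely got darker or brighter by a common factor, e.g. (100, 50, 50) versus (200, 100, 100), has the same hue and saturation and a distance of zero, while a genuine color change such as red versus green produces a large distance.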

I hope my experience and this description of the algorithm will be useful to someone. And if you have anything to add, correct, or suggest, I will be glad to hear it.

Source: https://habr.com/ru/post/134635/

