
Tracking an object by its color using AForge.NET

Hello. Here comes the frequent phrase "my first post" :). In it I want to tell you about my small project for tracking an object by its color. This task now has a fairly wide range of applications, for example, the motion controllers of the Wii and PlayStation 3. The work is based on Andrew Kirillov's AForge.NET, a rather powerful framework for home-grown image processing.
The code does not claim to be the ultimate truth, and much has been simplified (in one place I even allowed a bit of duplication: I wrote my own class for fast pixel access, although AForge already has similar functionality). Nevertheless, the code works: it tracks the object, reports its location, and can dynamically recalculate the object's shade (in case the lighting changes).

For those interested, details are below the cut.


A short excursion into AForge.NET


The framework is a set of libraries, each designed to solve a particular class of tasks: image processing (AForge.Imaging), computer vision and motion detection (AForge.Vision), video sources (AForge.Video), neural networks (AForge.Neuro), genetic algorithms (AForge.Genetic), machine learning (AForge.MachineLearning), robotics (AForge.Robotics), and math (AForge.Math).

A set of examples ships with the library.

User interface


I did not write my project from scratch; I took the Vision\MotionDetector sample as a basis. It already knows how to connect to a webcam or a remote camera (via a JPEG/MJPEG URL) and can also open certain video files (which, I confess, I did not experiment with).

The original example is able to identify motion on a stream using several algorithms, the simplest of which is finding the difference between two consecutive frames.
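For context, this is roughly how such a detector is wired into the AForge motion API; a minimal sketch assuming the AForge 2.x classes (TwoFramesDifferenceDetector, MotionAreaHighlighting, MotionDetector), so the exact names are worth checking against the library version you use:

```csharp
using System.Drawing;
using AForge.Vision.Motion;

public class SimpleMotionExample
{
    // Simplest stock algorithm: the difference between two consecutive frames,
    // combined with a processing module that highlights the moving areas.
    private readonly MotionDetector detector =
        new MotionDetector(new TwoFramesDifferenceDetector(), new MotionAreaHighlighting());

    // Called for every frame coming from the video source; returns the amount of
    // motion in the range [0, 1] and draws the highlighting onto the frame.
    public float OnNewFrame(Bitmap frame)
    {
        return detector.ProcessFrame(frame);
    }
}
```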
The form code was heavily reworked and tailored specifically to this task. On the Motion tab, you select the object search algorithm.

Then select an object via the Define color tracking object form.

Information about the object is displayed in the status bar of the main form.

An additional setting, the color difference threshold, makes it possible to track not one exact color but a range of its variations.

The user can also specify whether the object's color should be tracked during processing as well (that is, not only is the object tracked by its color, but its current color is also recalculated as frames are processed).

The next goodie comes standard with the AForge motion detector: the object can be highlighted on the image in different ways.
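The highlighting style is chosen by the IMotionProcessing module passed to the MotionDetector. A minimal sketch, again assuming the AForge 2.x processing classes:

```csharp
using AForge.Vision.Motion;

// Each IMotionProcessing implementation renders the detected object differently:
// - MotionAreaHighlighting tints all matching pixels,
// - MotionBorderHighlighting outlines the matching regions,
// - BlobCountingObjectsProcessing draws rectangles around separate objects.
IMotionProcessing highlighting = new BlobCountingObjectsProcessing();
MotionDetector detector = new MotionDetector(new TwoFramesDifferenceDetector(), highlighting);
```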

Implementation


AForge.Vision.Motion.IMotionDetector is the interface used to search for differences between frames. The ColorDetection class, which performs the actual processing, implements this interface.
To interact with the user interface, an Initialize(Image image, Rectangle rect) method was added, which sets up the processing of subsequent frames. This is where information about the target object (the rectangle selected in the image) is collected: the dominant color of the selected area (the property that will serve as the basis for tracking from then on) and the position of the target.
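A minimal sketch of what such an initialization step can look like; the class and field names and the simple averaging below are my assumptions for illustration, not the project's exact code:

```csharp
using System.Drawing;

// Hypothetical sketch of the initialization step: remember the selected region
// and compute its dominant color as a plain average of the pixels inside it.
public class ColorTargetState
{
    public Color TrackedColor { get; private set; }
    public Rectangle ObjectRectangle { get; private set; }

    public void Initialize(Bitmap image, Rectangle rect)
    {
        long r = 0, g = 0, b = 0;
        for (int y = rect.Top; y < rect.Bottom; y++)
        {
            for (int x = rect.Left; x < rect.Right; x++)
            {
                Color c = image.GetPixel(x, y); // slow, but enough for a sketch
                r += c.R; g += c.G; b += c.B;
            }
        }
        int n = rect.Width * rect.Height;
        TrackedColor = Color.FromArgb((int)(r / n), (int)(g / n), (int)(b / n));
        ObjectRectangle = rect; // remember the target position
    }
}
```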

The IMotionDetector interface defines a small set of methods and properties through which a detector plugs into the rest of the framework; both are sketched below.
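For reference, this is roughly how the interface is defined in AForge.NET 2.x (reproduced from memory of the library, so the exact signatures are worth checking against the AForge documentation):

```csharp
using AForge.Imaging;

// AForge.Vision.Motion.IMotionDetector, roughly as defined in AForge.NET 2.x.
public interface IMotionDetector
{
    // Amount of motion in the last processed frame, in the range [0, 1].
    float MotionLevel { get; }

    // Mask of the regions where motion (here: the tracked color) was found.
    UnmanagedImage MotionFrame { get; }

    // Process a new video frame and update MotionLevel / MotionFrame.
    void ProcessFrame(UnmanagedImage videoFrame);

    // Reset the detector's internal state.
    void Reset();
}
```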

To update the information in the application's status bar, a GetProperties method was added.

In order to match not only the exact target color but also some of its shades, the DifferenceThreshold setting is applied via SetProperty.
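The idea behind the threshold is a simple per-channel comparison; the helper below is my own illustration, not the project's code:

```csharp
using System;
using System.Drawing;

public static class ColorMatch
{
    // A pixel is considered part of the object if every RGB channel lies
    // within DifferenceThreshold of the tracked color.
    public static bool MatchesTrackedColor(Color pixel, Color tracked, int differenceThreshold)
    {
        return Math.Abs(pixel.R - tracked.R) <= differenceThreshold
            && Math.Abs(pixel.G - tracked.G) <= differenceThreshold
            && Math.Abs(pixel.B - tracked.B) <= differenceThreshold;
    }
}
```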
The main frame processing takes place in the ProcessFrame function. The algorithm can be divided into the following steps (a code sketch follows the list):
  1. Expand the region where the object is assumed to be. The new position is searched for not across the whole frame but only in the area adjacent to the previous position. This makes the search more robust in the sense that the target will not be confused with another object of the same color in a different part of the image.
  2. Compute the object boundaries in this area by finding the extreme points of the object's dominant color (the allowed color deviation, DifferenceThreshold, is taken into account here).
  3. Build the MotionFrame "mask", which lets the MotionDetector highlight the target object in the image.
  4. Compute the "average color" and the size of the new object.
  5. If the object is too small (for example, in the current frame the target was completely hidden by another object), keep the position and color information inherited from the previous frame.
  6. Otherwise, remember the new position and boundaries of the object, and, if the algorithm also follows color changes (the bool DynamicColorTracking property), remember the newly computed color as well.
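Put together, the loop looks roughly like the sketch below. It reuses the ColorTargetState fields and the MatchesTrackedColor helper from the earlier sketches; searchMargin, differenceThreshold and minObjectPixels are assumed fields, and the real project uses fast pixel access instead of GetPixel:

```csharp
using System;
using System.Drawing;

public void ProcessFrame(Bitmap frame)
{
    // 1. Expand the previous object rectangle by a margin and clamp it to the frame.
    Rectangle search = Rectangle.Inflate(ObjectRectangle, searchMargin, searchMargin);
    search.Intersect(new Rectangle(0, 0, frame.Width, frame.Height));

    // 2. Find the extreme points of the pixels matching the tracked color,
    //    accumulating the sums needed for the average color (step 4).
    int minX = int.MaxValue, minY = int.MaxValue, maxX = -1, maxY = -1;
    long r = 0, g = 0, b = 0, count = 0;
    for (int y = search.Top; y < search.Bottom; y++)
    {
        for (int x = search.Left; x < search.Right; x++)
        {
            Color c = frame.GetPixel(x, y);
            if (ColorMatch.MatchesTrackedColor(c, TrackedColor, differenceThreshold))
            {
                minX = Math.Min(minX, x); maxX = Math.Max(maxX, x);
                minY = Math.Min(minY, y); maxY = Math.Max(maxY, y);
                r += c.R; g += c.G; b += c.B; count++;
                // 3. The real detector also writes these pixels into the
                //    MotionFrame mask so the MotionDetector can highlight them.
            }
        }
    }

    // 5. Too few matching pixels: the object is probably occluded, keep the old state.
    if (count < minObjectPixels)
        return;

    // 6. Remember the new position and, optionally, the new average color (step 4).
    ObjectRectangle = Rectangle.FromLTRB(minX, minY, maxX + 1, maxY + 1);
    if (DynamicColorTracking)
        TrackedColor = Color.FromArgb((int)(r / count), (int)(g / count), (int)(b / count));
}
```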

This completes the image processing.

Possible improvements


As already mentioned in connection with game-console controllers, their color usually contrasts with everything else in the frame. It is therefore possible to run a primary search for the target color across the whole frame (not only in the area adjacent to the previous position). This would make it possible to track an object that moves faster.
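A minimal sketch of that fallback, under the same assumptions as above (ScanForTrackedColor is a hypothetical helper that runs the color scan from the ProcessFrame sketch over a given region and returns the bounding rectangle, or null if nothing is found):

```csharp
using System.Drawing;

// Hypothetical re-acquisition strategy: search near the previous position first,
// and fall back to a full-frame scan when the object is lost (e.g. it moved too fast).
public Rectangle? LocateObject(Bitmap frame)
{
    Rectangle local = Rectangle.Inflate(ObjectRectangle, searchMargin, searchMargin);
    local.Intersect(new Rectangle(0, 0, frame.Width, frame.Height));

    Rectangle? found = ScanForTrackedColor(frame, local);
    if (found == null)
    {
        Rectangle fullFrame = new Rectangle(0, 0, frame.Width, frame.Height);
        found = ScanForTrackedColor(frame, fullFrame);
    }
    return found;
}
```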

Useful links on the topic


• AForge project website.
• AForge: an article on CodeProject.

P.S. Link to the source at depositfiles.
Link to the source in Google Docs.

UPD: Thanks for the karma, the post has been moved to the thematic blog.

Source: https://habr.com/ru/post/97345/

