
Motion Detection Based on OpenCV Bioinspired Module

image

This article will be useful for beginners who have just started using the OpenCV library and do not yet know all of its features. In particular, the bioinspired module of the OpenCV library can be used to build an adaptive motion detector. This motion detector works better at twilight than the usual subtraction of two consecutive frames.

Something about Retina


The OpenCV library contains the Retina class, which implements a spatio-temporal filter modelling the two information channels of the retina (the parvocellular and magnocellular pathways). We are interested in the magnocellular channel, which is in effect already a motion detector: it remains only to obtain the coordinates of the image regions where motion occurs, and to somehow avoid reacting to the interference that appears when the detector is shown a static picture.

image

Interference at the output of the magno channel in the absence of motion in the image

Code


First you need to include the bioinspired module and initialize it. In this example, the module is configured to work without color.
Connection and initialization of the module
```cpp
#include "opencv2/bioinspired.hpp"

cv::Ptr<cv::bioinspired::Retina> cvRetina; // retina model instance

// Create and configure the retina for the given frame size
void initRetina(cv::Mat* inputFrame)
{
    cvRetina = cv::bioinspired::createRetina(
        inputFrame->size(),                      // input frame size
        false,                                   // color mode: process grayscale
        cv::bioinspired::RETINA_COLOR_DIAGONAL,  // color sampling method (unused in grayscale mode)
        false,                                   // do not use retina log sampling
        1.0,                                     // image reduction factor (1.0 = no reduction)
        10.0);                                   // log-sampling strength (ignored when log sampling is off)
    // save the default parameters to a file
    cvRetina->write("RetinaDefaultParameters.xml");
    // load the parameters back (edit the file to tune them)
    cvRetina->setup("RetinaDefaultParameters.xml");
    // reset the internal buffers
    cvRetina->clearBuffers();
}
```


The default settings will be saved to the *RetinaDefaultParameters.xml* file. It may make sense to adjust them.

RetinaDefaultParameters
```xml
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
  <colorMode>0</colorMode>
  <normaliseOutput>1</normaliseOutput>
  <photoreceptorsLocalAdaptationSensitivity>0.89e-001</photoreceptorsLocalAdaptationSensitivity>
  <photoreceptorsTemporalConstant>5.0000000000000000e-001</photoreceptorsTemporalConstant>
  <photoreceptorsSpatialConstant>1.2999997138977051e-001</photoreceptorsSpatialConstant>
  <horizontalCellsGain>0.3</horizontalCellsGain>
  <hcellsTemporalConstant>1.</hcellsTemporalConstant>
  <hcellsSpatialConstant>7.</hcellsSpatialConstant>
  <ganglionCellsSensitivity>0.89e-001</ganglionCellsSensitivity>
</OPLandIPLparvo>
<IPLmagno>
  <normaliseOutput>1</normaliseOutput>
  <parasolCells_beta>0.1</parasolCells_beta>
  <parasolCells_tau>0.1</parasolCells_tau>
  <parasolCells_k>7.</parasolCells_k>
  <amacrinCellsTemporalCutFrequency>1.2000000476837158e+000</amacrinCellsTemporalCutFrequency>
  <V0CompressionParameter>5.4999998807907104e-001</V0CompressionParameter>
  <localAdaptintegration_tau>0.</localAdaptintegration_tau>
  <localAdaptintegration_k>7.</localAdaptintegration_k>
</IPLmagno>
</opencv_storage>
```

For myself, I changed a couple of parameters (colorMode and amacrinCellsTemporalCutFrequency). Below is a translation of the descriptions of some of the parameters that affect the magno output.

normaliseOutput - determines whether the output will be scaled to the range from 0 to 255 (true) or not (false).

colorMode - determines whether color is used for processing (true) or the grayscale image is processed (false).

photoreceptorsLocalAdaptationSensitivity - the sensitivity of the photoreceptors (from 0 to 1).

photoreceptorsTemporalConstant - the time constant of the first-order low-pass filter of the photoreceptors; use it to suppress high temporal frequencies (noise or fast motion). Expressed in frames; a typical value is 1 frame.

photoreceptorsSpatialConstant - the spatial constant of the first-order low-pass filter of the photoreceptors; use it to suppress high spatial frequencies (noise or thick contours). Expressed in pixels; a typical value is 1 pixel.

horizontalCellsGain - the gain of the horizontal cell network. If the value is 0, the mean value of the output signal is zero; if the parameter is close to 1, the brightness is not filtered and still reaches the output. A typical value is 0.

hcellsTemporalConstant - the time constant of the first-order low-pass filter of the horizontal cells; it is used to cut low temporal frequencies (local brightness variations). Expressed in frames; a typical value is 1 frame.

hcellsSpatialConstant - the spatial constant of the first-order low-pass filter of the horizontal cells; it is used to cut low spatial frequencies (local brightness). Expressed in pixels; a typical value is 5 pixels.

ganglionCellsSensitivity - the compression strength of the local adaptation output of the ganglion cells; set a value between 0.6 and 1 for best results. A higher value increases the sensitivity to low values, and the output saturates faster. The recommended value is 0.7.
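As an alternative to editing the XML file, the Retina class also exposes setupOPLandIPLParvoChannel() and setupIPLMagnoChannel() for setting the same parameters in code. A minimal sketch (the values below are the documented defaults, shown only for illustration, not the values I ended up using):

```cpp
// Sketch: configuring the retina channels programmatically instead of via XML.
cvRetina->setupOPLandIPLParvoChannel(
    false,  // colorMode: process grayscale
    true,   // normaliseOutput
    0.7f,   // photoreceptorsLocalAdaptationSensitivity
    0.5f,   // photoreceptorsTemporalConstant
    0.53f,  // photoreceptorsSpatialConstant
    0.0f,   // horizontalCellsGain
    1.0f,   // hcellsTemporalConstant
    7.0f,   // hcellsSpatialConstant
    0.7f);  // ganglionCellsSensitivity
cvRetina->setupIPLMagnoChannel(
    true,   // normaliseOutput
    0.0f,   // parasolCells_beta
    0.0f,   // parasolCells_tau
    7.0f,   // parasolCells_k
    1.2f,   // amacrinCellsTemporalCutFrequency
    0.95f,  // V0CompressionParameter
    0.0f,   // localAdaptintegration_tau
    7.0f);  // localAdaptintegration_k
```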

To speed up the computation, it makes sense to downscale the incoming image beforehand with *cv::resize*. To detect the presence of interference, you can use the mean image brightness or the entropy; in one of my projects I also counted the pixels above and below a certain brightness level. The bounding boxes can be obtained with the contour-search function. Under the spoiler is the motion detector code, which does not claim to be production-ready, but only shows one possible implementation.
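The calcEntropy function used by the detector is not shown in the article. One possible implementation is the Shannon entropy of the 256-bin intensity histogram; a minimal sketch over a raw 8-bit buffer (the function name and signature are mine, not from the article) might look like this:

```cpp
#include <cstddef>
#include <cmath>

// Hypothetical helper: Shannon entropy (in bits) of an 8-bit grayscale
// buffer, computed from its 256-bin histogram. Uniform noise gives a high
// value; a mostly-flat image with a few blobs gives a low one.
float calcEntropy8u(const unsigned char* data, std::size_t n)
{
    if (n == 0) return 0.0f;
    std::size_t hist[256] = {0};
    for (std::size_t i = 0; i < n; ++i) hist[data[i]]++;
    float entropy = 0.0f;
    for (int b = 0; b < 256; ++b) {
        if (hist[b] == 0) continue;
        float p = static_cast<float>(hist[b]) / static_cast<float>(n);
        entropy -= p * std::log2(p);
    }
    return entropy;
}
```

For a cv::Mat the same loop would run over `mat.data` with `mat.total()` pixels (for a continuous single-channel matrix).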

Motion Detection Code
```cpp
// Median filter window size (in frames)
#define CV_MOTION_DETECTOR_MEDIAN_FILTER_N 512

// History of per-frame statistics
static float meanBuffer[CV_MOTION_DETECTOR_MEDIAN_FILTER_N];
static float entropyBuffer[CV_MOTION_DETECTOR_MEDIAN_FILTER_N];
// Number of frames accumulated so far
static int numFrame = 0;

// Forward declarations
float getMedianArrayf(float* data, unsigned long nData);
float calcEntropy(cv::Mat* image); // image entropy (implementation not shown)

// Motion detector update
// inputFrame - RGB frame, CV_8UC3
// arrayBB    - output array of bounding boxes
void updateMotionDetector(cv::Mat* inputFrame, std::vector<cv::Rect2f>& arrayBB)
{
    cv::Mat retinaOutputMagno; // magno channel output
    cv::Mat imgTemp;           // temporary binary image
    float medianEntropy, medianMean;
    unsigned long i;

    // Feed the frame to the retina model
    cvRetina->run(*inputFrame);
    // Get the magno channel output
    cvRetina->getMagno(retinaOutputMagno);
    // Show the magno output (for debugging)
    cv::imshow("retinaOutputMagno", retinaOutputMagno);

    // Grow the statistics window until it is full
    if (numFrame < CV_MOTION_DETECTOR_MEDIAN_FILTER_N) {
        numFrame++;
    }

    // Statistics of the current frame
    float mean = cv::mean(retinaOutputMagno)[0];
    float entropy = calcEntropy(&retinaOutputMagno);

    if (numFrame >= 2) {
        // Shift the entropy history and store the new value
        for (i = numFrame - 1; i > 0; i--) {
            entropyBuffer[i] = entropyBuffer[i - 1];
        }
        entropyBuffer[0] = entropy;
        // Shift the mean-brightness history and store the new value
        for (i = numFrame - 1; i > 0; i--) {
            meanBuffer[i] = meanBuffer[i - 1];
        }
        meanBuffer[0] = mean;
        // Medians of the accumulated statistics
        medianEntropy = getMedianArrayf(entropyBuffer, numFrame);
        medianMean = getMedianArrayf(meanBuffer, numFrame);
    } else {
        medianEntropy = entropy;
        medianMean = mean;
    }

    // Motion is assumed when the entropy drops noticeably below its median:
    // structured moving blobs lower the entropy, uniform interference does not.
    // (Alternatively, the mean brightness could be compared instead:)
    // if (medianMean >= mean) {
    if ((medianEntropy * 0.85) >= entropy) {
        // Binarize the magno output
        cv::threshold(retinaOutputMagno, imgTemp, 150, 255.0, CV_THRESH_BINARY);
        // Find the contours of the moving regions
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(imgTemp, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        if (contours.size() > 0) {
            arrayBB.resize(contours.size());
            // Compute a bounding box for each contour
            float xMax, yMax;
            float xMin, yMin;
            for (unsigned long c = 0; c < contours.size(); c++) {
                xMax = yMax = 0;
                xMin = imgTemp.cols;
                yMin = imgTemp.rows;
                for (unsigned long z = 0; z < contours[c].size(); z++) {
                    if (xMax < contours[c][z].x) { xMax = contours[c][z].x; }
                    if (yMax < contours[c][z].y) { yMax = contours[c][z].y; }
                    if (xMin > contours[c][z].x) { xMin = contours[c][z].x; }
                    if (yMin > contours[c][z].y) { yMin = contours[c][z].y; }
                }
                arrayBB[c].x = xMin;
                arrayBB[c].y = yMin;
                arrayBB[c].width = xMax - xMin;
                arrayBB[c].height = yMax - yMin;
            }
        } else {
            arrayBB.clear();
        }
    } else {
        arrayBB.clear();
    }

    retinaOutputMagno.release();
    imgTemp.release();
}

// Quicksort (used by the median filter)
template<typename aData>
void quickSort(aData* a, long l, long r)
{
    long i = l, j = r;
    aData temp, p;
    p = a[l + (r - l) / 2];
    do {
        while (a[i] < p) i++;
        while (a[j] > p) j--;
        if (i <= j) {
            temp = a[i]; a[i] = a[j]; a[j] = temp;
            i++; j--;
        }
    } while (i <= j);
    if (i < r) quickSort(a, i, r);
    if (l < j) quickSort(a, l, j);
}

// Median of an array of floats
float getMedianArrayf(float* data, unsigned long nData)
{
    if (nData == 0) return 0;
    if (nData == 1) return data[0];
    float mData[nData]; // variable-length array (a compiler extension)
    for (unsigned long i = 0; i != nData; ++i) {
        mData[i] = data[i];
    }
    quickSort(mData, 0, nData - 1);
    return mData[nData >> 1];
}
```
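The median filtering of the mean/entropy history is what lets the detector ignore short bursts of interference: a brief spike barely moves the median of a long window, whereas it would shift an average immediately. A standalone sketch of the same idea using std::nth_element instead of the quicksort-based helper above (the function name is mine, not from the article):

```cpp
#include <algorithm>
#include <vector>

// Sketch: median of a non-empty window of per-frame statistics.
// The window is taken by value because nth_element reorders it.
float windowMedian(std::vector<float> window)
{
    std::nth_element(window.begin(),
                     window.begin() + window.size() / 2,
                     window.end());
    return window[window.size() / 2];
}
```

For example, a single outlier frame in the window, such as a camera flash, leaves the median unchanged; that is why the detector compares the current entropy against the median of the history rather than against the previous frame.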


image

An example of the motion detector in action.

Source: https://habr.com/ru/post/321618/

