
Acoustic Surveillance Systems



Surveillance systems are associated above all with video cameras and, as a rule, come without a microphone. Acoustic surveillance feels exotic, something from the world of submarines and spy thrillers. This is not surprising: vision is the main channel through which humans perceive information, and as long as an operator sitting in front of a monitor was the key element of any surveillance system, audio could add little to the picture.

However, the mass spread of computers and the development of artificial intelligence mean that more and more information is now analyzed automatically. Video analytics systems can already recognize license plates, faces, and human figures. Given the fairly modest abilities of the human ear, computer hearing may surpass humans much sooner than computer vision does. Sound has many advantages: a microphone does not have to be within direct line of sight and has no blind spots. A good microphone is cheaper than a good camera, and the data stream it produces is much smaller, which makes it easier to store and process in real time. Many situations that would require very non-trivial computer vision algorithms to recognize on video can be detected acoustically with a simple analysis of the sound level and spectrum.
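To make the last claim concrete, here is a minimal sketch of such a level-and-spectrum analysis, assuming a mono float signal normalized to [-1, 1]; the frame length and threshold are arbitrary illustrative values, not taken from any real system:

```python
import numpy as np

def frame_level_and_spectrum(samples, sample_rate=16000, frame_ms=50):
    """Split a mono signal into frames and return per-frame RMS level (dBFS)
    and magnitude spectrum. `samples` is assumed to be a float array in [-1, 1]."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)

    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    level_db = 20 * np.log10(rms)                      # RMS level in dBFS
    spectrum = np.abs(np.fft.rfft(frames, axis=1))     # magnitude spectrum per frame
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    return level_db, freqs, spectrum

def loud_events(level_db, threshold_db=20):
    """Crude event detector: flag frames whose level jumps well above the
    running noise floor (threshold_db is an assumed, illustrative value)."""
    noise_floor = np.median(level_db)
    return np.where(level_db > noise_floor + threshold_db)[0]
```

Even this naive thresholding already answers questions ("did something loud and unusual just happen, and in which frequency band?") that would take far more machinery to extract from video.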

In recent years, acoustic technology has begun to make its way into everyday life. So far the most widely deployed are gunshot detection systems, which are installed in high-crime neighborhoods. In the US, several dozen police departments have installed such systems.
Gunshot detection systems have a century of history. They were used as early as the First World War to localize enemy artillery. During World War II they served to warn of air raids, until radar displaced them toward the end of the war. The earliest systems did not even have microphones and resembled huge stethoscopes. Modern detectors are often used to counter snipers.

One of the leading manufacturers of civilian gunshot detection systems is ShotSpotter. A network of directional microphones is installed on building roofs, poles, and other elevated spots. Triangulation localizes the sound source to within a few meters. In troubled neighborhoods, only about 25% of the time does someone call the police after hearing shots, and it usually takes a few minutes. ShotSpotter alerts the precinct within seconds, with the exact location of the shot marked on a map.
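The article calls this triangulation; in practice such systems typically solve for the source position from arrival-time differences at several microphones. The sketch below is a minimal multilateration example, not ShotSpotter's actual pipeline; the sensor coordinates and timings are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, roughly, in air at 20 °C

def locate_source(sensor_positions, arrival_times, initial_guess=(0.0, 0.0)):
    """Estimate the (x, y) position of an impulsive sound source from the times
    it was heard at several sensors. Works on time differences of arrival
    relative to the first sensor, so the emission time need not be known."""
    sensors = np.asarray(sensor_positions, dtype=float)
    t = np.asarray(arrival_times, dtype=float)
    measured_dd = SPEED_OF_SOUND * (t - t[0])   # range differences vs. sensor 0

    def residuals(xy):
        dists = np.linalg.norm(sensors - xy, axis=1)
        predicted_dd = dists - dists[0]
        return predicted_dd - measured_dd

    result = least_squares(residuals, x0=np.asarray(initial_guess, dtype=float))
    return result.x

# Hypothetical example: four rooftop sensors (coordinates in meters) and the
# times (seconds) at which each heard the same bang; the timings are roughly
# consistent with a source near (40, 90).
sensors = [(0, 0), (120, 0), (0, 150), (120, 150)]
times = [0.287, 0.351, 0.210, 0.292]
print(locate_source(sensors, times, initial_guess=(60, 75)))
```

With four or more well-placed sensors and millisecond-accurate, synchronized timestamps, this kind of least-squares fit is what makes meter-level localization plausible.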

All acoustic information is sent not only to the police but also to the company's servers for analysis. This is needed to reduce the number of false positives from firecrackers, fireworks, and backfiring car mufflers. Each detection provides new training data for a machine learning system that learns to recognize the acoustic characteristics specific to a shot. For example, unlike other similar sounds, a gunshot produces not only the bang of powder gases escaping the barrel, but also a shock wave from the bullet moving at supersonic speed.
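ShotSpotter's real classifier is a proprietary, data-driven system; purely as a rough illustration of the two-impulse signature mentioned above, the sketch below flags recordings that contain two distinct transients a few milliseconds apart. All thresholds and time bounds are assumed values for the example:

```python
import numpy as np

def impulsive_peaks(samples, sample_rate=48000, window_ms=2, threshold_db=25):
    """Find short, loud transients (candidate shock-wave / muzzle-blast events).
    Returns the start times (seconds) of windows exceeding the noise floor by
    `threshold_db` (an assumed, illustrative threshold)."""
    win = int(sample_rate * window_ms / 1000)
    n = len(samples) // win
    frames = samples[:n * win].reshape(n, win)
    level_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    noise_floor = np.median(level_db)
    hits = np.where(level_db > noise_floor + threshold_db)[0]
    # Collapse runs of consecutive loud windows into single events.
    events = [hits[i] for i in range(len(hits)) if i == 0 or hits[i] != hits[i - 1] + 1]
    return [e * win / sample_rate for e in events]

def looks_like_gunshot(samples, sample_rate=48000):
    """Very rough heuristic: a supersonic shot tends to produce two distinct
    impulses (bullet shock wave and muzzle blast), whereas a firecracker or
    backfire usually produces one. The gap bounds below are assumptions."""
    events = impulsive_peaks(samples, sample_rate)
    if len(events) < 2:
        return False
    gap = events[1] - events[0]
    return 0.002 < gap < 0.2
```

A production system would feed features like these, along with spectral shape and multi-sensor context, into a trained classifier rather than a hand-tuned rule.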

Naturally, a shot from a weapon with a silencer will go unnoticed by the system, but according to the FBI such weapons are used by criminals in only about one case in a hundred. On the whole, the system shows good results and is already operating in several major US cities, as well as in the UK and Brazil.

Additional benefits come from pairing acoustic detectors with video cameras and infrared sensors. Steerable cameras can automatically turn toward the shot. Footage from such cameras, together with the exact time and location of the shot, can greatly assist in investigating the crime and serve as evidence in court.

The EU-funded EAR-IT project has much broader goals. Its network of acoustic sensors is intended not only for security, but also for analyzing vehicle and pedestrian traffic in cities and buildings, and for environmental monitoring. Microphones placed at an intersection can give fairly accurate information about the number, speed, and type of passing cars. On hearing an approaching siren, the system can adjust the traffic lights to let the emergency vehicle through without delay.
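EAR-IT's publications do not spell out a siren-detection algorithm here; as a hypothetical illustration of how an intersection node might flag an approaching siren, the sketch below looks for a sustained, sweeping tonal peak in a typical siren frequency band. The band limits and thresholds are assumed values:

```python
import numpy as np

def dominant_frequencies(samples, sample_rate=16000, frame_ms=100):
    """Dominant spectral peak per frame, plus its share of total frame energy."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n = len(samples) // frame_len
    frames = samples[:n * frame_len].reshape(n, frame_len) * np.hanning(frame_len)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    peak_idx = np.argmax(spec, axis=1)
    tonality = spec[np.arange(n), peak_idx] / (np.sum(spec, axis=1) + 1e-12)
    return freqs[peak_idx], tonality

def siren_present(samples, sample_rate=16000,
                  band=(500.0, 1800.0), min_tonality=0.2, min_frames=10):
    """Flag a siren if a strong tonal peak stays inside the assumed siren band
    and keeps sweeping in frequency for a sustained run of frames."""
    peaks, tonality = dominant_frequencies(samples, sample_rate)
    in_band = (peaks > band[0]) & (peaks < band[1]) & (tonality > min_tonality)
    sweeping = np.abs(np.diff(peaks, prepend=peaks[0])) > 5.0   # Hz change per frame
    return int(np.sum(in_band & sweeping)) >= min_frames
```

A detector like this could sit on the intersection node and send a priority request to the traffic-light controller when it fires, well before the vehicle is visible to any camera.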

Inside buildings, the noise level can be used to estimate the number of people in different parts of a room and, based on this data, adjust the ventilation and air conditioning, as well as build acoustic maps of the rooms that help plan renovations more rationally and optimize how the building is run. The sensor network will consist of many small, inexpensive microphones and a smaller number of nodes equipped with high-quality microphones and a processor for signal processing.
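As a toy illustration of the ventilation idea, the mapping below turns a zone's smoothed noise level into a ventilation step. The thresholds, step names, and example readings are invented for the example, not taken from the project:

```python
import numpy as np

# Hypothetical mapping from a zone's average noise level (dBFS) to a
# ventilation step; thresholds are illustrative assumptions only.
VENTILATION_STEPS = [(-60.0, "minimum"), (-45.0, "low"), (-35.0, "medium"), (-25.0, "high")]

def ventilation_for_zone(level_samples_db):
    """Pick a ventilation step from the smoothed noise level of one zone."""
    avg_db = float(np.mean(level_samples_db))
    setting = VENTILATION_STEPS[0][1]
    for threshold, name in VENTILATION_STEPS:
        if avg_db >= threshold:
            setting = name
    return setting

# Example: three zones with recent per-minute level readings in dBFS.
zones = {"lobby": [-38, -36, -35], "open_office": [-30, -28, -29], "storage": [-62, -61, -63]}
print({zone: ventilation_for_zone(levels) for zone, levels in zones.items()})
```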

Several European research institutions and organizations are involved in the project. Its long-term goal is the creation of "smart" cities and buildings that not only see, but also hear, everything that happens inside them.

Source: https://habr.com/ru/post/183166/

