
Visibility through the turbulent atmosphere. Computer correction of images of remote objects

The author's retelling of two publications, with a demo film.

A method is proposed for improving the visibility of distant objects observed through a randomly inhomogeneous atmosphere. It is based on real-time processing of sequential frames taken by a digital video camera with a long-focus lens. The film shows what are, to my mind, quite spectacular results.


An astronomical digression.


Anyone who has looked through a telescope at high magnification, for example at the Moon, has seen a trembling, blurry, ever-changing image that only occasionally "clears up" in the moments when the air fluctuations die down. Regions of air with differing density can be pictured as a set of weak, irregular, random lenses that appear and disappear at a great rate.
In astronomy, the classical methods of adaptive optics (wavefront sensors and high-speed deformable mirrors) are used to correct images distorted by atmospheric turbulence. These are quite complex and expensive systems, available only for large and extra-large telescopes, but they deliver a significant positive effect. Here is a case in point (taken from: Ralf Flicker, Methods of Multi-Conjugate Adaptive Optics for Astronomy, Lund Observatory, Sweden, 2003).


The left photo is a section of the starry sky taken with a long exposure. Random distortions, superimposed on one another, lead to significant blurring of the image, with the result that many stars are simply not visible while others merge together. In the center is the same part of the sky, photographed using an adaptive mirror. A wavefront sensor, aimed at a bright (reference) star just below the central cluster, measures wavefront distortions at several points of the entrance aperture. A computer calculates control signals and delivers them through high-voltage amplifiers to piezoelectric actuators. These actuators, pressing on a flexible mirror from behind at different points, bend it so as to compensate for the distortion. This procedure is repeated hundreds of times per second. In this way the image can be improved in a certain region around the reference star. As the middle photo shows, the image does not improve at the edges, because different parts of the image suffer distortions other than those measured by the sensor at the center. With certain tricks (multi-conjugate adaptive systems with several reference stars and mirrors) the area of the improved image can be expanded (the photo on the right). By the way, the angular size of this area on the sky is only 90 arcseconds, or about 1/20 of the lunar disk.

For a telescope used in "terrestrial" conditions there is, of course, no point in employing a technique as complex as an adaptive optical system. Not only because of its complexity and high cost, but also because there are no bright reference points from which the wavefront distortion could be measured: unlike stars, distant surface objects, as a rule, are extended and have low contrast. And also because turbulence in the surface layer is significantly stronger than at high-mountain observatories.
Therefore computational methods for improving the visibility of distant objects in terrestrial conditions, based on post-detector image processing, have, I believe, good prospects.
First, let us consider what kinds of distortion we are dealing with.

A simplified model of image formation in the atmosphere.


As a first approximation, one can adopt the model of imaging an object through a randomly inhomogeneous medium described in [1]. It includes three factors that distort the final image in different ways.

The first factor is random changes in the wavefront slope within the aperture. Simply put, large-scale irregularities in air density act as large weak prisms, tilting the light rays slightly in different directions, causing the image to tremble. Jitter can be removed by measuring the offset of the current frame relative to some reference frame and shifting the image back by the amount of this offset.

The second factor is small-scale density fluctuations, which lead to random (different in each frame) distortion of fine image details. Smart people [2, 3], drawing on the theory of atmospheric turbulence and statistical optics, suggest the following approach. Since, owing to the stochastic nature of the fluctuations, it is impossible to correct each frame individually, statistics must be applied. The time-averaged distortions can be computed theoretically, yielding an optical transfer function that is quite specific to a given state of the atmosphere. In other words, the averaged image is the result of an atmosphere-specific (non-Gaussian) smoothing filter. Applying the inverse filter to this image should, in theory, give an image of diffraction-limited quality.

The third factor is the diffraction distortion associated with the finite receiving aperture, about which nothing can be done. The ideal image of a point is never a point but a disk surrounded by rings (the so-called Airy pattern).

Thus, if we have a sequence of images of an object randomly distorted by the turbulent atmosphere, image recovery requires the following operations, performed in order: measuring the random displacements of the image, eliminating these displacements, averaging the centered images, calculating the inverse optical transfer function, and filtering the averaged image. Performing these operations continuously with each new frame, we gradually obtain the corrected image [4]. By applying the procedure to different parts of the frame, one can compute the optical flow (the displacement vector of each pixel as a function of time) over the entire image and thereby expand the corrected field of view [5].
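
To make this sequence concrete, here is a minimal sketch of the per-frame loop in Python (my own illustration, not the author's code; the helper names find_offset, accumulate and inverse_filter are hypothetical, and are sketched in the Algorithm section below):

```python
import numpy as np

def process_frame(frame, reference, box, R=0.03):
    """One pass of the correction loop for a new camera frame.

    frame     -- current grayscale frame (2-D float array)
    reference -- running averaged reference image
    box       -- reference square: (top y, left x, side) in pixels
    R         -- recursive averaging coefficient, of order 0.01-0.05
    """
    # 1. Measure the random displacement of the current frame
    #    relative to the reference (jitter measurement).
    p_min, q_min = find_offset(frame, reference, box)
    # 2. Shift the frame back and fold it into the moving average
    #    (jitter removal and accumulation of the useful signal).
    reference = accumulate(reference, frame, p_min, q_min, R)
    # 3. Remove the residual turbulence blur with the inverse
    #    optical transfer function.
    corrected = inverse_filter(reference)
    return reference, corrected
```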

Algorithm.


Offset correction. In the reference frame, choose a square several dozen pixels on a side containing more or less contrasting details. In the current frame we must find the square of the same size that best matches the reference one. The squares are compared by direct calculation of the mutual correlation function S(p, q):

S(p, q) = ∑ |I_n(x + p, y + q) − I_0n(x, y)|,

where I_n(x, y) and I_0n(x, y) are the pixel brightnesses of the current (n-th) frame and of the reference frame; x, y are the coordinates of the pixels within the square; p, q are coordinates within the search area, which is approximately the size of the reference square; the summation runs over all pixels of the square.
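
As an illustration, a direct computation of S(p, q) might look as follows in Python. This is my sketch, not the author's code: it uses an exhaustive search over the search area, whereas the article finds the minimum with the faster two-step method described next, and the square is assumed to lie at least `search` pixels away from the frame border.

```python
import numpy as np

def find_offset(frame, reference, box, search=10):
    """Return (p_min, q_min) minimizing S(p, q) over the search area.

    box    -- (top y, left x, side): reference square of a few dozen pixels
    search -- half-width of the search area in pixels
    """
    y0, x0, n = box
    ref_sq = reference[y0:y0 + n, x0:x0 + n].astype(float)
    best, p_min, q_min = np.inf, 0, 0
    for p in range(-search, search + 1):
        for q in range(-search, search + 1):
            cur = frame[y0 + q:y0 + q + n, x0 + p:x0 + p + n].astype(float)
            s = np.abs(cur - ref_sq).sum()  # S(p, q) from the formula above
            if s < best:
                best, p_min, q_min = s, p, q
    return p_min, q_min
```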
The minimum of S(p, q), which marks the point of best match, is found using the well-known two-step method for searching for extrema of multidimensional functions. The coordinates of the minimum, p_min and q_min, define the displacement vector of the current frame relative to the reference. The square from the current frame is then shifted onto the reference frame and summed with it, forming a new reference frame. This is how the useful signal accumulates in the reference square. The summation is performed recursively with a coefficient R of the order of 0.01–0.05:

I_0,n+1(x, y) = (1 − R) · I_0n(x, y) + R · I_n(x + p_min, y + q_min)

Roughly speaking, the averaging needed for the subsequent filtering takes place over a moving sample of several dozen frames. After that, the image inside the square becomes essentially stable, unlike the rest of the frame.
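
The recursive summation itself is short; here is a sketch (the whole-frame integer shift via np.roll is my simplification, and subpixel accuracy would require interpolation):

```python
import numpy as np

def accumulate(reference, frame, p_min, q_min, R=0.03):
    """Recursive averaging: I_0,n+1 = (1 - R)*I_0n + R*I_n(x + p_min, y + q_min)."""
    # Shift the current frame back by the measured displacement.
    shifted = np.roll(np.roll(frame, -q_min, axis=0), -p_min, axis=1)
    return (1.0 - R) * reference + R * shifted.astype(float)
```

With R = 0.03 the effective averaging window is about 1/R ≈ 30 frames, consistent with the "several dozen frames" mentioned above.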

Filtering, that is, removal of the residual blur left in the averaged image by small-scale turbulence. This residual blur of, say, a single bright point is a spot with a bell-shaped brightness distribution, whose width depends on the intensity of the turbulence and is several times larger than the diffraction disk. The job of the filter is to turn this diffuse spot back into a diffraction-limited Airy pattern. The filtering and the final result I(x, y) are obtained by a two-dimensional direct Fourier transform (operator F) of the averaged image I_0,n+1(x, y), multiplication of the resulting spectrum by the inverse optical transfer function H(x, y) calculated in [2, 3], and an inverse Fourier transform (operator F^-1):

I(x, y) = F^-1[F(I_0,n+1(x, y)) · H(x, y)],

where
H(x, y) = [arccos(r) − r · (1 − r²)^(1/2)] · exp[3.44 · rr^(5/3) · (1 − r^(1/3))];
r = (x² + y²)^(1/2) / D;
rr = (x² + y²)^(1/2) / d;
D is a quantity depending on the relative aperture of the optical system;
d is a quantity depending on the intensity of turbulence (the Fried parameter of the atmosphere).
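
In code, the filtering step might look like the sketch below. The frequency normalization and the placeholder values of D and d are my assumptions, and the gain cap h_max is my addition: as written, H grows without bound at high frequencies and would amplify noise, so a real implementation has to regularize it somehow.

```python
import numpy as np

def inverse_filter(avg_img, D=0.5, d=0.1, h_max=100.0):
    """I(x, y) = F^-1[F(I_avg) * H], with H built from the formulas above.

    D, d  -- placeholder constants for the diffraction cutoff and the
             Fried scale, in normalized spatial-frequency units
    h_max -- crude regularization: cap on the inverse-filter gain
    """
    ny, nx = avg_img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    rho = np.sqrt(fx**2 + fy**2)
    r = np.clip(rho / D, 0.0, 1.0)  # H vanishes beyond the aperture cutoff
    rr = rho / d                    # frequency in units of the Fried scale
    H = ((np.arccos(r) - r * np.sqrt(1.0 - r**2))
         * np.exp(3.44 * rr**(5.0 / 3.0) * (1.0 - r**(1.0 / 3.0))))
    H = np.minimum(H, h_max)
    return np.real(np.fft.ifft2(np.fft.fft2(avg_img) * H))
```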

Technical implementation.


Telephoto lenses (an MTO-1000, a Vario-Goir-1T, a Sigma) with entrance aperture diameters from 70 to 140 mm and focal lengths from 200 to 5000 mm served as the optical system. While I did not yet have a high-speed video camera, I had to use a Raster Technology RT-1000DC camera and observe only stationary objects. The main results in the film were obtained with it. As can be seen in the episode with the car license plate, one has to wait a few seconds until the image "develops."
The episodes with the tower and the building are an attempt to extend the corrected area to the entire frame. Here I had to work with pre-recorded video files, because the amount of computation is very large, and in general many problems remain unsolved.
The last episode of the film ("The Snowstorm") was filmed quite recently with a JAI RM-6740 network camera at a frame rate of 200 Hz; the corrected image is obtained in less than 1 second. In this episode an additional effect is observed in the correction area: suppression of noise in the form of falling snow.



Literature

1. Optics of the Atmosphere and the Ocean, 11, 522 (1998).
2. Goodman J. Statistical Optics (Moscow: Mir, 1988).
3. Fried D.L. J. Opt. Soc. Am., 56, 1372 (1966).
4. Quantum Electronics, 40 (5), 418–420 (2010).
5. Quantum Electronics, 41 (5), 475–478 (2011).

Source: https://habr.com/ru/post/173791/

