
On a peculiarity of the Kotelnikov theorem

I was inspired to write this article by the following task:

As is known from Kotelnikov's theorem, for an analog signal to be digitized and then reconstructed, it is necessary and sufficient that the sampling frequency be greater than or equal to twice the upper frequency of the analog signal. Suppose we have a sine with a period of 1 second. Then f = 1/T = 1 hertz, sin((2Ο€/T)Β·t) = sin(2πΒ·t), the sampling frequency is 2 hertz, and the sampling period is 0.5 seconds. Substitute instants that are multiples of 0.5 seconds into the sine: sin(2πΒ·0) = sin(2πΒ·0.5) = sin(2πΒ·1) = 0.
We get zeros everywhere. How, then, can this sine be restored?
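
This is easy to check numerically. Below is a minimal sketch in Python (the frequencies and the handful of samples are simply the ones from the task; NumPy is assumed to be available):

```python
import numpy as np

f = 1.0           # frequency of the sine, Hz
fs = 2.0          # sampling frequency, Hz (exactly twice f)
k = np.arange(6)  # a few sample indices

t = k / fs                           # sampling instants: 0.0, 0.5, 1.0, ...
samples = np.sin(2 * np.pi * f * t)  # the sine evaluated at those instants

print(samples)    # all zeros, up to floating-point rounding
```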


An Internet search did not yield an answer to this question; the most that could be found was various forum discussions with rather bizarre arguments for and against, up to and including references to experiments with various filters. It should be pointed out that Kotelnikov's theorem is a mathematical theorem, and it should be proved or disproved only by mathematical methods. Which is what I did. It turned out that there are quite a few proofs of this theorem in various textbooks and monographs, but for a long time I could not find where this contradiction arises, because the proofs were presented without many of the subtleties and details. I will also say that the very statement of the theorem differed from one source to another. Therefore, in the first section I give a detailed proof of this theorem, following the original work of the academician himself (V.A. Kotelnikov, "On the transmission capacity of the 'ether' and of wire in telecommunications", Materials for the First All-Union Congress on the technical reconstruction of communications and the development of the low-current industry, 1933).

We state the theorem as it is given in the original source:
Any function F(t) consisting of frequencies from 0 to f1 periods per second can be represented by the series

F(t) = Ξ£ [k = βˆ’βˆž … +∞] Dk Β· sin(Ο‰1Β·(t βˆ’ k/(2f1))) / (t βˆ’ k/(2f1))

where k is an integer; Ο‰1 = 2Ο€f1; Dk are constants depending on F(t).

Proof: Any function F(t) that satisfies the Dirichlet conditions (a finite number of maxima, minima and points of discontinuity on any finite segment) and is integrable from βˆ’βˆž to +∞, which is the case for the functions encountered in electrical engineering, can be represented by the Fourier integral:

F(t) = ∫ [0 … ∞] C(Ο‰)Β·cos(Ο‰t) dΟ‰ + ∫ [0 … ∞] S(Ο‰)Β·sin(Ο‰t) dΟ‰

i.e. as the sum of an infinite number of sinusoidal oscillations with frequencies from 0 to +∞ and amplitudes C(Ο‰)dΟ‰ and S(Ο‰)dΟ‰ that depend on the frequency. Here

C(Ο‰) = (1/Ο€) ∫ [βˆ’βˆž … +∞] F(t)Β·cos(Ο‰t) dt
S(Ο‰) = (1/Ο€) ∫ [βˆ’βˆž … +∞] F(t)Β·sin(Ο‰t) dt

In our case, when F(t) consists only of frequencies from 0 to f1, obviously

C(Ο‰) = 0 and S(Ο‰) = 0

for

Ο‰ > Ο‰1 = 2Ο€f1,

and therefore F(t) can be represented as:

F(t) = ∫ [0 … Ο‰1] C(Ο‰)Β·cos(Ο‰t) dΟ‰ + ∫ [0 … Ο‰1] S(Ο‰)Β·sin(Ο‰t) dΟ‰

The functions C(Ο‰) and S(Ο‰), like any other functions on the segment

0 ≀ Ο‰ ≀ Ο‰1,

can always be represented by Fourier series, and these series may, if we wish, consist of cosines alone or of sines alone, if we take for the period twice the length of the segment, i.e. 2Ο‰1.

Author's note: an explanation is needed here. Kotelnikov uses the freedom to extend the functions C(Ο‰) and S(Ο‰) so that C(Ο‰) becomes an even function and S(Ο‰) an odd function on the doubled segment with respect to Ο‰1. Accordingly, on the second half of the segment the values of these functions will be C(2Ο‰1 βˆ’ Ο‰) and βˆ’S(2Ο‰1 βˆ’ Ο‰). The functions are mirrored about the vertical axis at Ο‰ = Ο‰1, and the function S(Ο‰) also changes sign.

Thus

C(Ο‰) = Ξ£ [k = 0 … ∞] ak Β· cos(kπω/Ο‰1)
S(Ο‰) = Ξ£ [k = 1 … ∞] bk Β· sin(kπω/Ο‰1)

We introduce the following notation:

tk = kΟ€/Ο‰1 = k/(2f1) (the sampling instants)
Dk = (ak + bk)/2 and Dβˆ’k = (ak βˆ’ bk)/2 for k β‰₯ 1, with D0 = a0

Then

C(Ο‰) = Ξ£ [k = 0 … ∞] ak Β· cos(Ο‰Β·tk)
S(Ο‰) = Ξ£ [k = 1 … ∞] bk Β· sin(Ο‰Β·tk)

Substituting these series into the expression for F(t), we get:

F(t) = ∫ [0 … Ο‰1] Ξ£k ak Β· cos(Ο‰Β·tk)Β·cos(Ο‰t) dΟ‰ + ∫ [0 … Ο‰1] Ξ£k bk Β· sin(Ο‰Β·tk)Β·sin(Ο‰t) dΟ‰

Transform, using the product-to-sum identities (here b0 = 0):

F(t) = Ξ£ [k = 0 … ∞] ∫ [0 … Ο‰1] ( ((ak + bk)/2)Β·cos(Ο‰Β·(t βˆ’ tk)) + ((ak βˆ’ bk)/2)Β·cos(Ο‰Β·(t + tk)) ) dΟ‰

Transform again, combining the two terms into a single sum over all integers k (recall that tβˆ’k = βˆ’tk):

F(t) = Ξ£ [k = βˆ’βˆž … +∞] Dk ∫ [0 … Ο‰1] cos(Ο‰Β·(t βˆ’ tk)) dΟ‰

We integrate and replace Ο‰1 by 2Ο€f1:

F(t) = Ξ£ [k = βˆ’βˆž … +∞] Dk Β· sin(Ο‰1Β·(t βˆ’ k/(2f1))) / (t βˆ’ k/(2f1)),

which is exactly the series from the statement of the theorem. Substituting t = k/(2f1) shows, in addition, that DkΒ·Ο‰1 = F(k/(2f1)), i.e. the constants Dk are, up to the factor Ο‰1, simply the values of F(t) taken every 1/(2f1) seconds.
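
Before moving on, here is a quick numerical sanity check of the series just derived; this is a sketch, not anything from the original paper. A sine whose frequency lies strictly below f1 is sampled at 2f1 and then reassembled from its samples. Since DkΒ·Ο‰1 = F(k/(2f1)), the series is written in the equivalent normalized form with np.sinc (np.sinc(x) = sin(Ο€x)/(Ο€x)); the helper name cardinal_series, the 0.7 Hz test frequency and the truncation window are arbitrary choices:

```python
import numpy as np

def cardinal_series(t, samples, fs):
    """Evaluate sum over k of samples[k] * sinc(fs*t - k): the series from the
    theorem with Dk*w1 replaced by the sample values."""
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t[:, None] - k), axis=1)

f1 = 1.0                  # band edge of the theorem, Hz
fs = 2 * f1               # sampling frequency 2*f1
f_sig = 0.7               # test frequency, strictly below f1

k = np.arange(400)                         # the series is infinite; truncate it
x_k = np.sin(2 * np.pi * f_sig * k / fs)   # samples taken every 1/(2*f1) seconds

t = np.linspace(50, 150, 1001)             # evaluate far from the truncation edges
x_rec = cardinal_series(t, x_k, fs)
x_true = np.sin(2 * np.pi * f_sig * t)

print(np.max(np.abs(x_rec - x_true)))      # small; the residual comes from truncation
```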

An inaccuracy in the Kotelnikov theorem

The whole proof seems rigorous. So where is the problem? To understand it, we turn to one not very well-known property of the inverse Fourier transform. It states that, when transforming back from the sum of sines and cosines to the original function, the value of the function at a point of discontinuity will be

f(x) = ( f(x βˆ’ 0) + f(x + 0) ) / 2

that is, the restored function equals the half-sum of the one-sided limits. What does this lead to? If our function is continuous, to nothing. But if the function has a finite (jump) discontinuity, then its value after the direct and inverse Fourier transforms will not match the original value at that point. Let us now recall the step in the proof where the interval is doubled. The function S(Ο‰) is complemented by the function βˆ’S(2Ο‰1 βˆ’ Ο‰). If S(Ο‰1) (the value at the point Ο‰1) is zero, nothing bad happens. However, if S(Ο‰1) is not zero, the reconstructed function will no longer equal the original one, since at this point a discontinuity of size 2S(Ο‰1) appears.
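
The half-sum behaviour at a jump is easy to see with a textbook example (a sketch; the step function and the number of terms are illustrative choices, not taken from the article): the Fourier series of a unit step converges at the jump to 1/2, the midpoint of the one-sided limits, and not to either original value.

```python
import numpy as np

# Unit step on (-pi, pi): f(t) = 0 for t < 0 and f(t) = 1 for t > 0.
# Its Fourier series is 1/2 + (2/pi) * sum over odd n of sin(n*t)/n.
def step_partial_sum(t, n_terms):
    n = np.arange(1, 2 * n_terms, 2)          # odd harmonics 1, 3, 5, ...
    return 0.5 + (2 / np.pi) * np.sum(np.sin(np.outer(t, n)) / n, axis=1)

pts = np.array([-0.5, 0.0, 0.5])
print(step_partial_sum(pts, 5000))            # approximately [0.0, 0.5, 1.0]:
                                              # at the jump the series gives 1/2
```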
We now return to the original problem about the sine. As is known, the sine is an odd function, and its image under the Fourier transform is the delta function Ξ΄(Ο‰ βˆ’ Ξ©0) concentrated at its frequency Ξ©0. That is, in our case, when the sine has the frequency Ο‰1, we get:

S(Ο‰) = Ξ΄(Ο‰ βˆ’ Ο‰1), and the complementing function is βˆ’S(2Ο‰1 βˆ’ Ο‰) = βˆ’Ξ΄(Ο‰ βˆ’ Ο‰1)

Obviously, at the point Ο‰1 these two delta functions, the one from S(Ο‰) and the one from βˆ’S(2Ο‰1 βˆ’ Ο‰), add up to zero, which is exactly what we observe.
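
The same cancellation can be seen directly in the time domain: at the sampling rate 2f1, a sine at f1 leaves no trace in the samples, so two different signals can produce exactly the same sample values. A small sketch (the phase Ο€/6 and the amplitudes are arbitrary illustration choices):

```python
import numpy as np

f1 = 1.0
fs = 2 * f1                       # sampling exactly at twice the band edge
k = np.arange(8)
t_k = k / fs                      # sampling instants

x1 = np.sin(2 * np.pi * f1 * t_k + np.pi / 6)  # sine at f1 with phase pi/6
x2 = 0.5 * np.cos(2 * np.pi * f1 * t_k)        # a different signal altogether

print(np.allclose(x1, x2))        # True: identical samples, so no reconstruction
                                  # procedure can tell the two signals apart
```

The samples keep only the cosine (even) part of the component at the boundary frequency; the sine (odd) part, which is exactly what S(Ο‰1) carries, disappears.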

Conclusion

The Kotelnikov theorem is undoubtedly a great theorem. However, it must be supplemented by another condition, namely

S(Ο‰1) = 0

(the sine component of the spectrum must vanish at the boundary frequency Ο‰1).

With this formulation the boundary cases are excluded, in particular the case of a sine whose frequency equals the boundary frequency Ο‰1, since for it the Kotelnikov theorem with the above condition simply cannot be applied.
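
To close the loop with the original task: as soon as the sampling frequency is strictly greater than 2 hertz, the sine from the puzzle is no longer sampled at its zero crossings, and the series from the first section recovers it. A minimal sketch (the rate 2.5 Hz and the truncation window are arbitrary choices):

```python
import numpy as np

f = 1.0                            # the 1 Hz sine from the original task
fs = 2.5                           # strictly greater than 2*f

k = np.arange(500)                               # truncated stretch of samples
x_k = np.sin(2 * np.pi * f * k / fs)             # no longer identically zero

t = np.linspace(60, 140, 801)                    # stay away from the truncation edges
x_rec = np.sum(x_k * np.sinc(fs * t[:, None] - k), axis=1)
x_true = np.sin(2 * np.pi * f * t)

print(np.max(np.abs(x_rec - x_true)))            # small: the sine is recovered
```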

Source: https://habr.com/ru/post/197606/

