Hello friends! Recently I began studying error-correcting (noise-resistant) codes and simulating how they work, and I realized that accessible, human-friendly write-ups on this topic are scarce. There are serious books, but they are dense enough that studying them takes real time, while sometimes you need to pick up the basic ideas quickly. As an example, I can point to an article on Hamming codes; it helped me a lot when I was just starting to tinker with codes. Similar material is available here.
My post is aimed primarily at practitioners: those who want to use such algorithms and get a small working application in a reasonable amount of time.
I would also like to clarify that the Chase algorithm is a classic example of noise-tolerant coding, and these days it is rarely used on its own. But for understanding coding and for learning purposes it fits one hundred percent, maybe even two hundred.
Today there are many classes of error-correcting codes, differing from each other in structure, purpose, efficiency, encoding/decoding algorithms, and more.
Under one classification, codes can be divided into two groups: block codes, in which encoding and decoding are performed within a single block of bits, and tree codes, in which processing occurs continuously, without dividing the message into blocks.
A block code can be decoded with hard or soft decisions. In hard-decision decoding, the demodulator assigns each received bit a value of 0 or 1. A soft-decision decoder receives not just a binary 0 or 1, but also a confidence value associated with each bit. The decoder uses this soft information, and at the output we still get a hard decision.
In short, a hard decoder receives from the channel a vector of values 0 or 1, while a soft decoder receives a vector of values in (-∞; +∞). Soft decisions give the decoder more to work with, since they leave wider room to maneuver when correcting errors.
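To make the distinction concrete, here is a minimal sketch in Python. The sample channel values are made up for illustration: a hard decision keeps only the signs, while a soft decoder also sees the magnitudes, which tell it how reliable each bit is.

```python
# Received channel values (soft information): the sign carries the bit,
# the magnitude carries confidence. Sample values are made up.
r = [-1.02, 0.15, 2.3, -0.08]

# Hard decision: keep only the sign, throwing the confidence away.
hard = [1 if v > 0 else 0 for v in r]

# A soft decoder works with r directly; small |v| marks unreliable bits.
reliability = [abs(v) for v in r]

print(hard)         # [0, 1, 1, 0]
print(reliability)  # [1.02, 0.15, 2.3, 0.08]
```

Here bits 2 and 4 (values 0.15 and -0.08) are the ones a soft decoder would treat with suspicion, even though the hard decision reports them just as confidently as the others.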
The Chase algorithm is a soft-decision decoding algorithm for block codes.
The Chase algorithm is an understandable and relatively efficient algorithm for correcting errors (interference) in a transmitted message. Its main idea is to generate an array of candidate codewords that will contain the word closest to the received sequence. The algorithm relies on the fact that if the word produced by an ordinary hard decision contains an error, then one of the codewords "nearest" to it most likely matches the transmitted one.
The Chase decoder uses a hard decoder internally. We will use the Hamming decoder already mentioned; once again, its description can be found here. The only change in our case is that the check bits are moved to the end of the message.
1. Suppose we have an initial vector u = [1, 0, 0, 1];
2. We encode it using the Hamming algorithm and obtain c = [1, 0, 0, 1, 1, 0, 0];
3. Add an overall parity check: c[8] = (∑ c[i]) mod 2, the sum taken over the first seven bits. It is worth noting that the extended Hamming code is convenient for exactly this purpose, but that deserves a separate discussion.
As a result, we obtain c = [1, 0, 0, 1, 1, 0, 0, 1].
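The encoding step can be sketched in Python. The article does not spell out which parity equations its Hamming encoder uses, so the equations below are reconstructed from the worked example (they reproduce the codeword above); treat them as an assumption of this sketch.

```python
# Hamming (7,4) encoder with check bits at the end, plus an overall
# parity bit. The parity equations are reconstructed to match the
# worked example: p1 = d2^d3^d4, p2 = d1^d3^d4, p3 = d1^d2^d4.
PARITY_EQS = [(1, 2, 3), (0, 2, 3), (0, 1, 3)]  # data indices per check bit

def hamming_encode(u):
    """u: 4 data bits -> 8-bit codeword (4 data + 3 checks + overall parity)."""
    checks = [sum(u[i] for i in eq) % 2 for eq in PARITY_EQS]
    c = u + checks
    return c + [sum(c) % 2]          # overall parity over the first 7 bits

print(hamming_encode([1, 0, 0, 1]))  # -> [1, 0, 0, 1, 1, 0, 0, 1]
```

Other textbooks interleave the check bits with the data or use different equations; only the end-of-message placement matters for the rest of this example.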
4. We introduce some noise and get the vector r = [-1.02, -1.1, -2, 1.95, 0.98, -2.34, 0.73, 1.97]. (The transmitted vector in ±1 form is [1, -1, -1, 1, 1, -1, -1, 1], so the noise has flipped the sign in positions 1 and 7: two bit errors, more than a hard Hamming decoder alone can correct.)
5. We use the Chase-2 algorithm. It is characterized by the following method of generating the array of candidate codewords:
6. Demodulate the vector r to get the vector y: y[i] = 1 if r[i] > 0, otherwise 0; then discard the last (parity) bit.
y = [0, 0, 0, 1, 1, 0, 1].
7. Generate the array of test sequences t: t[i] = y ⊕ e[i], where the error patterns e[i] flip subsets of the least reliable bits, i.e. those with the smallest |r[i]| (here positions 1, 5, and 7).
We get:
t[1] = [0, 0, 0, 1, 1, 0, 1] - y ⊕ e[1],
t[2] = [0, 0, 0, 1, 1, 0, 0] - y ⊕ e[2],
t[3] = [0, 0, 0, 1, 0, 0, 1] - y ⊕ e[3],
t[4] = [0, 0, 0, 1, 0, 0, 0] - y ⊕ e[4],
t[5] = [1, 0, 0, 1, 1, 0, 1] - y ⊕ e[5],
t[6] = [1, 0, 0, 1, 1, 0, 0] - y ⊕ e[6],
t[7] = [1, 0, 0, 1, 0, 0, 1] - y ⊕ e[7];
This lets us probe the least reliable bits: we invert them according to the patterns e[i] and then choose the candidate that fits best (see the following steps). The reliable bits are not touched by these operations. If we used only hard decisions, we could not tell which bits are the least reliable, and so could not probe them.
8. Apply the hard decoder (in our case, the Hamming algorithm) to each t[i], keeping the check bits, and obtain the array of vectors s:
s[1] = [0, 0, 0, 1, 1, 1, 1], the hard decision for t[1],
s[2] = [1, 0, 0, 1, 1, 0, 0], the hard decision for t[2],
s[3] = [0, 0, 1, 1, 0, 0, 1], the hard decision for t[3],
s[4] = [0, 0, 0, 0, 0, 0, 0], the hard decision for t[4],
s[5] = [1, 0, 0, 1, 1, 0, 0], the hard decision for t[5],
s[6] = [1, 0, 0, 1, 1, 0, 0], the hard decision for t[6],
s[7] = [1, 1, 0, 1, 0, 0, 1], the hard decision for t[7];
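The hard decoder used in step 8 can be sketched as a syndrome decoder. The parity-check matrix below is reconstructed from the worked example (bit order d1 d2 d3 d4 p1 p2 p3), so it is an assumption consistent with this article rather than the one canonical Hamming matrix.

```python
# Hard-decision Hamming (7,4) decoder via syndrome decoding. H is
# reconstructed from the article's worked example.
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]
COLS = [list(col) for col in zip(*H)]   # columns of H, one per bit position

def hamming_decode7(t):
    """Correct at most one bit error in a 7-bit word, return the codeword."""
    syndrome = [sum(h * b for h, b in zip(row, t)) % 2 for row in H]
    s = list(t)
    if any(syndrome):
        # A nonzero syndrome equals the H-column of the erroneous position.
        s[COLS.index(syndrome)] ^= 1
    return s

print(hamming_decode7([0, 0, 0, 1, 0, 0, 1]))  # t[3] -> [0, 0, 1, 1, 0, 0, 1]
```

Running it over the seven test sequences reproduces s[1]..s[7] above, e.g. t[2] = [0,0,0,1,1,0,0] decodes to [1,0,0,1,1,0,0].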
9. Append the overall parity bit to each vector s[i]:
s[1] = [0, 0, 0, 1, 1, 1, 1, 0],
s[2] = [1, 0, 0, 1, 1, 0, 0, 1],
s[3] = [0, 0, 1, 1, 0, 0, 1, 1],
s[4] = [0, 0, 0, 0, 0, 0, 0, 0],
s[5] = [1, 0, 0, 1, 1, 0, 0, 1],
s[6] = [1, 0, 0, 1, 1, 0, 0, 1],
s[7] = [1, 1, 0, 1, 0, 0, 1, 0];
10. Modulate the vectors in the array s according to the formula s[i][j] = 2·s[i][j] - 1:
s[1] = [-1, -1, -1, 1, 1, 1, 1, -1],
s[2] = [1, -1, -1, 1, 1, -1, -1, 1],
s[3] = [-1, -1, 1, 1, -1, -1, 1, 1],
s[4] = [-1, -1, -1, -1, -1, -1, -1, -1],
s[5] = [1, -1, -1, 1, 1, -1, -1, 1],
s[6] = [1, -1, -1, 1, 1, -1, -1, 1],
s[7] = [1, 1, -1, 1, -1, -1, 1, -1];
11. Find the vector in the array s nearest to the vector r, using the squared Euclidean distance d as the metric:
d(s[1], r) = 21.96,
d(s[2], r) = 11.72,
d(s[3], r) = 16.64,
d(s[4], r) = 27.24,
d(s[5], r) = 11.72,
d(s[6], r) = 11.72,
d(s[7], r) = 25.00;
We got several identical metrics; in fact s[2], s[5], and s[6] are the same vector, so we can choose any of them. Take the first, s[2] = [1, -1, -1, 1, 1, -1, -1, 1].
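The metric from step 11 is a one-liner in Python; this snippet reproduces two of the values above from the modulated candidates and the received vector r.

```python
# Squared Euclidean distance between a candidate (in +/-1 form) and the
# received vector; reproduces the metric values from the example.
def sq_dist(s, r):
    return sum((a - b) ** 2 for a, b in zip(s, r))

r = [-1.02, -1.1, -2, 1.95, 0.98, -2.34, 0.73, 1.97]
s1 = [-1, -1, -1, 1, 1, 1, 1, -1]
s2 = [1, -1, -1, 1, 1, -1, -1, 1]

print(round(sq_dist(s1, r), 2))  # 21.96
print(round(sq_dist(s2, r), 2))  # 11.72
```

Since we only compare candidates against the same r, taking the square root would not change which vector wins, so the squared distance is the cheaper, standard choice.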
12. Demodulate s[2] to get the vector res: res[i] = 1 if s[2][i] > 0, otherwise 0.
res = [1, 0, 0, 1, 1, 0, 0, 1].
13. Discard the check bits of the vector res and obtain the original vector u = [1, 0, 0, 1], which is what we set out to transmit.
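The thirteen steps above can be collected into one compact end-to-end decoder. This is a sketch under the same assumptions as the earlier snippets: the parity-check matrix is reconstructed from the worked example, and, like the article, we probe the 3 least reliable positions.

```python
from itertools import combinations

# End-to-end Chase-2 sketch for the article's example.
# H is reconstructed from the worked example (bit order d1..d4, p1..p3).
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]
COLS = [list(col) for col in zip(*H)]

def hamming_decode7(t):
    """Correct at most one bit error in a 7-bit word."""
    syn = [sum(h * b for h, b in zip(row, t)) % 2 for row in H]
    s = list(t)
    if any(syn):
        s[COLS.index(syn)] ^= 1                        # flip the flagged bit
    return s

def chase2_decode(r, n_weak=3):
    y = [1 if v > 0 else 0 for v in r[:7]]             # step 6: hard demod
    weak = sorted(range(7), key=lambda i: abs(r[i]))[:n_weak]
    best, best_metric = None, float("inf")
    for k in range(n_weak + 1):                        # step 7: test patterns
        for subset in combinations(weak, k):
            t = list(y)
            for i in subset:
                t[i] ^= 1
            s = hamming_decode7(t)                     # step 8: hard decode
            s8 = s + [sum(s) % 2]                      # step 9: parity bit
            mod = [2 * b - 1 for b in s8]              # step 10: modulate
            metric = sum((a - b) ** 2 for a, b in zip(mod, r))  # step 11
            if metric < best_metric:
                best, best_metric = s8, metric
    return best[:4]                                    # steps 12-13: data bits

r = [-1.02, -1.1, -2, 1.95, 0.98, -2.34, 0.73, 1.97]
print(chase2_decode(r))  # -> [1, 0, 0, 1]
```

Note that this r contains two hard errors, which a plain Hamming decoder cannot fix, yet the Chase loop recovers the original data.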
Friends, I would like to note that the process of introducing interference into messages (and modeling it), mentioned at the beginning of this post, deserves a post of its own. I did not want to bloat this one with information that is useful but, in this context, extraneous.
This is my first post on Habr. If there is interest, in the future I would like to cover the Viterbi algorithm, threshold decoders, and possibly turbo codes. I would be grateful for constructive comments.
On this topic I recommend "Error-Correction Coding for Digital Communications" by G. Clark and J. Cain.
Source: https://habr.com/ru/post/208876/