Recently there have been several articles about neural networks on Habré. To broaden our horizons, I offer a description of a neural network built on non-classical principles, with which I have experimented very actively and productively. Instead of accumulating incoming signals, the neuron recognizes sequences of incoming signals.
Building a neuron
Let a neuron have several inputs and one output. The neuron, like the entire network, operates in clock cycles. The inputs of the neuron are ordered. During one cycle, the neuron receives a sequence of zeros and ones from all of its input synapses in the established order: first from the first, then from the second, and so on. The input sequence of one clock cycle is continued by the sequence of the next, so overall we can assume that the neuron receives a single continuous sequence of zeros and ones as input.
A neuron has a pattern; having met it anywhere in the incoming sequence, the neuron "fires": it emits a signal (a one) at its output in the cycle in which it encountered the pattern.
Training a neuron comes down to finding the pattern that the neuron must look for in its incoming sequence.
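The matching behavior described above can be sketched in Python. The class name and interface here are my own illustration, not from the article: the neuron keeps a sliding window over the continuous bit stream and fires in any cycle in which the window matches its pattern, even when the match straddles a cycle boundary.

```python
from collections import deque

class PatternNeuron:
    """Fires when its pattern occurs as a substring of the incoming bit stream."""

    def __init__(self, pattern):
        self.pattern = list(pattern)                 # e.g. [1, 0, 1]
        # Sliding window over the stream; old bits fall off automatically.
        self.window = deque(maxlen=len(self.pattern))

    def feed(self, bits):
        """Feed one clock cycle's worth of input bits.
        Returns 1 if the pattern was seen anywhere during this cycle."""
        fired = 0
        for b in bits:
            self.window.append(b)
            if list(self.window) == self.pattern:
                fired = 1
        return fired
```

Because the window persists between calls to `feed`, the stream is effectively continuous: a pattern that begins at the end of one cycle and ends at the start of the next is still recognized.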
Building a network. Energy
Let the neural network initially consist of two neurons, each connected to the receptors.
- For image analysis, receptors will traditionally be the pixels of the matrix on which the image is projected.
We will project two types of images onto the matrix: horizontal and vertical lines. We set ourselves the task of teaching the neural network to recognize these images, and so that one neuron responds only to horizontal, and the other to vertical lines only.
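As an illustration of this input encoding (the helper names `line_image` and `to_bits` are my assumptions, not from the article), a horizontal or vertical line on an n×n receptor matrix can be generated and flattened, row by row, into the bit sequence the neurons receive in one cycle:

```python
def line_image(size, index, horizontal):
    """A size x size binary matrix with one full row (or column) of ones."""
    return [[1 if (r == index if horizontal else c == index) else 0
             for c in range(size)]
            for r in range(size)]

def to_bits(image):
    """Flatten the matrix row by row into the ordered receptor sequence."""
    return [b for row in image for b in row]
```

Note that with this row-major ordering a horizontal line appears in the stream as a run of consecutive ones, while a vertical line appears as ones separated by a fixed stride of zeros; this difference is exactly what distinct patterns can latch onto.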
To do this, we first complicate the neuron's behavior. Suppose that when a neuron emits a signal (a one), it consumes energy (always the same amount, which we denote 1E, one unit of energy). Where does the neuron get the energy? From its surroundings. We assume that at each clock cycle a certain amount of energy enters the neural network and is distributed entirely among the neurons.
- Looking ahead: the amount of energy entering the network over a given interval should correspond to the number of response signals the network is expected to produce. That is, when only two types of images are fed to the input and we want an unambiguous reaction, one unit of energy must be fed into the network at each clock cycle.
So, at the beginning of a clock cycle each neuron has some non-negative energy potential. If the accumulated energy allows, the neuron, having met its pattern in the incoming sequence, "fires". When it fires, its energy potential decreases by one. During the cycle, the energy entering the network must be distributed among the neurons. The distribution rule affects the overall behavior of the system. We propose a simple rule: energy is distributed among the neurons inversely proportionally to their energy potentials. The neuron with the minimum potential receives more energy (possibly all of it) than the neuron with the maximum potential; neurons with equal potentials receive equal shares.
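One possible reading of this distribution rule, sketched as a helper function. The handling of zero potentials is my assumption (a neuron with zero potential takes all the energy, split evenly if several are at zero), since a literal inverse proportion is undefined there:

```python
def distribute_energy(potentials, energy):
    """Split `energy` among neurons inversely to their energy potentials.
    Neurons at zero potential (if any) take all the energy between them."""
    zeros = [i for i, p in enumerate(potentials) if p == 0]
    if zeros:
        share = energy / len(zeros)
        return [share if i in zeros else 0.0 for i in range(len(potentials))]
    weights = [1.0 / p for p in potentials]      # lower potential -> larger weight
    total = sum(weights)
    return [energy * w / total for w in weights]
```

With two neurons at potentials 1 and 3, the first receives three quarters of the incoming energy; with equal potentials each receives half, matching the rule stated above.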
Training
What should happen to a neuron that "fires" too often? Its energy consumption increases, and at some point the neuron meets its pattern in the input sequence but its energy potential is too low to produce a response. This means the pattern sequence occurs too often: the pattern needs to be changed.
- A pattern can be any sequence, and its length need not correspond to the number of the neuron's input synapses: it can be either shorter or longer.
In the case described, the pattern should be extended: we append to it the bit that followed the recognized pattern in the input sequence.
And what if a neuron is "silent"? Its energy accumulates. Suppose that reaching a certain upper threshold of the energy potential (say, 10E) is critical. Reaching the upper threshold means the neuron's pattern is irrelevant: the pattern must be shortened. Shortening the pattern costs a certain amount of energy, say 1E.
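The two learning rules above — extend the pattern when firing is blocked by a lack of energy, shorten it when the potential hits the ceiling — might be sketched like this. The constants and the function signature are illustrative assumptions; only the 10E ceiling and the 1E shortening cost come from the text:

```python
UPPER = 10.0         # critical upper threshold of the potential (10E, from the text)
SHORTEN_COST = 1.0   # energy consumed by shortening the pattern (1E, from the text)

def update_pattern(pattern, next_bit, potential, matched):
    """One learning step.

    - matched, but not enough energy to fire (potential < 1E):
      the pattern occurs too often -> extend it with the bit that
      followed it in the input stream;
    - potential reached the ceiling: the pattern is irrelevant ->
      shorten it, paying SHORTEN_COST.
    Returns the (possibly updated) pattern and potential."""
    if matched and potential < 1.0:
        pattern = pattern + [next_bit]
    elif potential >= UPPER and len(pattern) > 1:
        pattern = pattern[:-1]
        potential -= SHORTEN_COST
    return pattern, potential
```

Applied every clock cycle, these two opposing pressures push each neuron's pattern toward a length at which it fires at roughly the rate its energy income supports.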
Behavior
Thus, we have a dynamic system that is easy to implement.
Note that a system of just two neurons built on these principles can recognize patterns quite effectively. In the example with vertical and horizontal lines, the stabilized neural network produces correct results for any (!) horizontal and vertical lines, regardless of their thickness and position on the retina.