
The complication of memory in neural networks

While reading the article "21st century: what is life from the point of view of physics," I came across a description of memory in both living and non-living matter. By memory in living matter everyone understands what is meant; by memory in non-living matter, all the ways of storing information in your computer, and many others. From the point of view of the article's author, G. Ivanitsky, the hallmark of life is the use of memory for prediction. That would be all well and good, except that we have already created many prediction programs, and automata guided by such programs. I do not want to raise the philosophical questions here of where the boundary between living and non-living lies, whether robots are then alive, whether we ourselves are merely complex composite automata, and so on. Instead, using the material of the article, I will trace a chain of reasoning toward a problem that interests me more.

Darwin proposed heredity, selection, and variability as the driving forces of evolution. However, selection cannot be a driving force: it operates on what already exists, reducing diversity. Selection is a reduction operation. And instead of the term "variability" we should use the term "self-complication." But how is the mechanism of self-complication implemented?

As with any process, complication requires energy and a way of directing the use of that energy. The author does not give a direct answer as to how this should happen. Instead he draws what is, in his view, the closest possible analogy to the process: how a directed process and an accumulation of energy can arise from a random, symmetric, chaotic one. Why a symmetric chaotic process? Because the origin of life is considered at the molecular level, in a liquid characterized by thermodynamic equilibrium. And for simplicity and clarity of explanation, the symmetric chaotic process is reduced to a probabilistic binary game.
The probabilistic binary game is a simple coin toss. Such a game is symmetric: over an infinite number of tosses, the average probabilities of winning and losing are equal. The question is: can a player somehow break this symmetry in his favor?
The conditions of the game are quite strict: every coin toss starts the game anew, and all previous results are forgotten. The outcome of each new round does not depend on the previous ones. This means there is no guessing strategy that allows you to win; against a completely random coin, the optimal strategy of guessing a side guarantees, at best, a draw on average.
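To make this symmetry concrete, here is a minimal simulation sketch (my own, not from the article): a player who bets one unit per round on a fair coin. Over many independent games, the mean winnings per round should hover near zero, which is exactly the draw that the symmetric game guarantees.

```python
import random

def fixed_bet_game(n_rounds, seed=0):
    """Play n_rounds of a fair coin-toss game with a constant bet of 1.
    Returns the player's cumulative winnings (wins minus losses)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_rounds):
        # Each toss is independent: no memory, no strategy can help.
        total += 1 if rng.random() < 0.5 else -1
    return total

# Average over many independent games: the mean winnings per round
# stays near zero, reflecting the symmetry of the game.
results = [fixed_bet_game(1000, seed=s) for s in range(200)]
mean_per_round = sum(results) / (200 * 1000)
```

The exact figure fluctuates with the seeds, but it shrinks toward zero as the number of rounds grows.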

However, if the player has a memory of at least one round and the ability to change the size of the bet, then he can choose a bet-changing strategy that introduces an asymmetry in his favor. Viewed over time, the results of the games form a sequence of alternating clusters of wins and losses of various lengths. The strategy is to raise the bet from one unit in an arithmetic progression for as long as the player keeps winning, and to reset it to one unit after a loss. In total, the article argues, after applying such a strategy the win clusters bring in more than the loss clusters take away.
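A minimal Python sketch of this one-round-memory strategy as I read it from the text (the function name and the reset-after-loss detail are my own interpretation, not the article's code). Readers can run it themselves and examine how the win and loss clusters interact with the growing bets.

```python
import random

def progressive_bet_game(n_rounds, seed=0):
    """Coin-toss game with the bet-raising strategy described in the text:
    start with a bet of 1; after every win, raise the bet by 1
    (an arithmetic progression); after a loss, reset the bet to 1.
    Returns (total winnings, list of bets actually placed)."""
    rng = random.Random(seed)
    bet, total, bets = 1, 0, []
    for _ in range(n_rounds):
        bets.append(bet)
        if rng.random() < 0.5:   # win
            total += bet
            bet += 1             # memory of the last round raises the stake
        else:                    # loss
            total -= bet
            bet = 1              # forget the streak, start over
    return total, bets
```

The memory here is exactly one round deep: the current bet encodes nothing more than how the last streak has gone, which is the feedback the plain guessing game lacks.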

Such a strategy is the simplest example of a control system with a variable feedback coefficient. In our case the strategy was invented by the player; how it arose in nature is unknown. But it is obvious that the central idea of all living objects is a survival strategy, and a strategy requires a control system. In living organisms such systems are implemented at the molecular level in unicellular organisms, and at the cellular level, in the neurons of the nervous system, in higher forms of life.

The control system described above has a memory used to store the result of the last coin-toss round. With each new round the memory cell updates its value. Such a memory has a simple structure and is easily realizable on neural networks, but it lasts exactly one tact (one cycle) of operation. This may have been nature's first step in the evolution of the control systems of life.
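As a toy illustration of such a one-tact memory (a hypothetical minimal model of mine, not a construction from the article), the cell below holds exactly one value and overwrites it on every update:

```python
class OneTactMemory:
    """A memory cell that survives exactly one tact of operation:
    each update overwrites the previous stored result. This mimics
    the single-round memory used by the betting strategy above."""
    def __init__(self):
        self.last_result = None

    def update(self, result):
        """Store the new result, returning what was remembered before."""
        previous = self.last_result
        self.last_result = result
        return previous
```

Anything a controller wants to keep for longer than one cycle must be re-fed into the cell every tact, which is precisely the limitation that "permanent" memory in higher organisms overcomes.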

Today the most complex product of evolution is the human brain, consisting of a vast number of neurons. Neurons, and the connections between them, are structured in a very complex way. The map of the brain, which describes its structural parts and what they are responsible for, is still being refined and improved. And we are still far from compiling even a "connectome" of the human brain: a complete description of the structure of connections in the body's nervous system.

But one of the important properties of the human brain is "permanent memory," which allows us to learn and then apply strategies like the one described above, and others like it, without mastering them from birth. It is logical to assume that the structure of our brain makes such tricks possible. Suppose we throw aside as many complicating properties as possible and set ourselves the task of creating a model of a control system whose structure would allow it both to keep the simplest strategy in memory and to be guided by it. (I am drawn to a bold analogy with the expression "the system is aware of what it is doing.") Can we solve this problem on neurons? Solutions of a kind already exist in the form of expert systems, theorem-deriving programs, and the like, but not on neurons. And I have not heard of anyone seriously engaged in building such structures on neural networks. The last serious attempts to work with complex structures of neurons are the implementations of the connectome of the simplest roundworm. But the worm has no "permanent" memory structure; that property belongs only to higher forms of life. Or, from my point of view, there are the barely meaningful simulations of the simultaneous operation of a number of neurons slightly greater than that of an ordinary cat.

I have an article about how I modeled, on a neural network, a simple control system for an amoeba that moves toward a goal. Now I want to build a control structure on neurons that would meet the following requirements:

From my previous experience I can say that the task will probably be simplified even further, down to the most primitive level, while preserving the central idea of implementing "permanent" memory. And at worst, the result will take the form of yet another way that does not work.

Materials:
The article "21st century: what is life from the point of view of physics" by G.R. Ivanitsky in the journal "Uspekhi Fizicheskikh Nauk".
The "Popular Science" video program with a discussion of the article.

Source: https://habr.com/ru/post/119536/
