
The logic of consciousness. Introduction

Some time ago, the series of articles "The Logic of Thinking" was published on Habr. Two years have passed since then. During this time we have managed to make significant progress in understanding how the brain works and to obtain interesting simulation results. In the new series, "The Logic of Consciousness," I will describe the current state of our research and try to talk about theories and models that will interest those who want to understand the biology of the natural brain and grasp the principles of artificial intelligence.



Before starting, I would like to make a few comments that will be useful to remember while reading all the subsequent articles.



The situation with the study of the brain is special for science. In every other field of science there are basic theories that form the foundation on which all subsequent reasoning is built. Only in neuroscience is there still no theory that would explain, even in outline, how information processes take place in the neural structures of the brain. At the same time, a huge amount of knowledge about the physiology of the brain has accumulated, and very encouraging results have been obtained with artificial neural networks. Yet so far no one has managed to build a bridge from one to the other: what is known about biological neural networks relates very poorly to the artificial network architectures that exist today.


Do not be misled by the common claim that many ideas of artificial neural networks are borrowed from studies of the real brain. The borrowing is very general; by and large it comes down to the fact that both have neurons and both have connections between those neurons.



Therefore, the basic task of neuroscience today is not the construction of ever more advanced theories, but the search for initial explanations that would somehow tie together everything that is already known about the brain.



A neuron, if you do not look at it too closely, appears simple enough. There is the cell body and its dendritic tree; synapses are located on the body and on the dendrites; there is an axon. Synapses look like inputs, the axon looks like an output. There are spikes: impulses that arise in the body of the neuron and propagate along the axon. There, it would seem, is a ready-made basic building block, a standard element with a fairly simple principle of operation.





Diagram of a neuron (Mariana Ruiz Villarreal)



This is how McCulloch and Pitts reasoned when they proposed the formal neuron. Discard the insignificant, keep the essence, and you get a threshold adder: the signals at the synaptic inputs are summed, each with its own weight, a threshold function is applied to the result, and the output either produces a signal or it does not.





McCulloch-Pitts Formal Neuron



For such a neuron, its synaptic weights are everything. They determine what the neuron reacts to, that is, in essence, what it is. From such neurons you can assemble a neural network, and for such a network you can devise learning algorithms, that is, ways to select the weights of all the neurons so that the output of the network meets our expectations.
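
To make the formal neuron concrete, here is a minimal sketch in Python; it is my own illustration, and the weights and threshold are arbitrary values, not taken from any model discussed later.

```python
# A minimal sketch of a McCulloch-Pitts formal neuron:
# a weighted sum of the inputs followed by a hard threshold.
def formal_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs reaches the threshold, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Example: a neuron that fires only when at least two of its three
# equally weighted inputs are active (a 2-of-3 detector).
print(formal_neuron([1, 1, 0], weights=[1.0, 1.0, 1.0], threshold=2.0))  # 1
print(formal_neuron([1, 0, 0], weights=[1.0, 1.0, 1.0], threshold=2.0))  # 0
```

Everything such a neuron "knows" is contained in its weights and threshold; a learning algorithm is simply a rule for adjusting them.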



As a result, various architectures of artificial neural networks and various learning algorithms have appeared. But all of these architectures preserve the general idea embedded in the formal neuron itself: the idea of the "grandmother neuron." In 1969, Jerome Lettvin said: "If a person's brain consists of specialized neurons, and they encode the unique properties of various objects, then, in principle, there must be a neuron somewhere in the brain through which we recognize and remember our grandmother." A great many models that explain the work of the brain rely, to varying degrees, on the concept of the "grandmother neuron." Even when the discussion is not about a single neuron but about a neural ensemble, it is still implied that certain neurons localize the reaction to a certain phenomenon and can therefore be identified with it.



The concept of the "grandmother neuron" rests on two very strong arguments. First, a great deal of experimental data points to selective responses of neurons to appropriate stimuli. For example, the response of neurons in the primary visual cortex to certain visual stimuli has been demonstrated and studied thoroughly (Hubel, 1988). A "Jennifer Aniston" neuron was also found, which reacted both to Jennifer Aniston herself and to characters from the television series "Friends" (R. Quian Quiroga, L. Reddy, G. Kreiman, C. Koch, I. Fried, 2005).

Second, artificial neural networks, which are built entirely on the idea of the detector neuron (that is, the "grandmother neuron"), work well and show revolutionary results that inspire deep optimism.



But there are a few problems. The first, already mentioned above, concerns biology. The more we learn about the structure and operation of a real neuron, the less it resembles the formal neuron; in fact, one could say that a real neuron does not even remotely resemble its formal counterpart. An analogy suggests itself: judging the work of a neuron by its spikes is like drawing conclusions about the operation of a computer from changes in the overall brightness of its monitor.



The second problem is that if many smart people look for the solution to a problem for a very long time and do not find it, then most likely the trouble lies not in the solution but in the statement of the problem itself. Think of a black cat in a dark room. For neuroscience, one of the components of that problem statement is the concept of the "grandmother neuron," which, for the reasons described above, is perceived as experimentally proven.



Is it possible to abandon the concept of the "grandmother neuron," and what is the alternative? Suppose there are several memory cells. The "grandmother neuron" concept means that each cell has its own "grandmother," and a description in this approach is the contents of the memory cells, indicating how strongly each "grandmother" is expressed. When such an artificial neural network is modeled, each neuron corresponds to a specific feature, and the level of the neuron's activity shows how pronounced that feature is in the current description.



An alternative approach is possible: the memory cells, individually or jointly, can store a "grandmother code." If this code can be replaced by another or reproduced in another place in memory, then we arrive at the ideology of the computer and the computer program.
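
A toy contrast between the two schemes, in Python; the encodings below are my own arbitrary illustration and are not part of the author's model.

```python
# Localist ("grandmother neuron") representation: the cell's position
# identifies the concept, and its value says how strongly it is expressed.
localist_memory = [0.0, 0.9, 0.1]   # cell 1 is "grandmother", strongly expressed
GRANDMOTHER_CELL = 1
print(localist_memory[GRANDMOTHER_CELL])        # 0.9

# Code-based representation: the cells hold a code; the code itself,
# not the cell's position, identifies the concept, so it can be copied
# to another place in memory without losing its identity.
grandmother_code = (1, 0, 1, 1)                 # an arbitrary "grandmother code"
memory_a = {"slot_7": grandmother_code}
memory_b = {"slot_42": memory_a["slot_7"]}      # the same concept, reproduced elsewhere
print(memory_b["slot_42"] == grandmother_code)  # True: identity travels with the code
```

In the first scheme the "grandmother" is tied to a particular cell; in the second she is wherever her code happens to be.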



Computers, as we know, solve a wide range of information tasks quite successfully. In principle, it is rather tempting to describe the work of the brain in purely computational terms. But here there are two difficulties. First, it is not clear how the neurons of the brain could operate within the computer paradigm. If the brain is even remotely similar to artificial neural networks (neurons, connections, layers and all that), then logic gates, addressable memory and a program execution mechanism do not fit well into the existing picture of the brain.



The second difficulty: if the brain is ideologically similar to a computer, why have we so far failed to come up with good computer algorithms that implement artificial intelligence? When we model neural networks on a computer, we proceed from the assumption that the computer is only a modeling tool and that the whole point lies in the architecture of the neural networks. Accordingly, the hope is that by developing network architectures we will be able to approach the capabilities of the brain. But if you abandon the "grandmother neuron," and with it neural networks, and begin to explore the computer as an alternative model of the brain, the question arises: what is missing in modern computer architecture or programming ideology that would let us create something resembling a brain?



There are a couple of important points closely related to the question of the "grandmother neuron." The first is the question: do real neurons work with a digital code or with an analog signal? Much depends on the answer; in fact, it determines the ideology, "grandmother" or not "grandmother." If the work of the neurons is analog, for example if the frequency of the spikes or the interval between spikes encodes the activity level of the "grandmother," then all the paradigms of traditional neural networks apply. The pattern of neuron activity is a feature description, and each neuron's activity is a scalar quantity corresponding to a quantitative feature. The vectors describing the states of the different layers of the network, the connections between neurons, their weights and the form of the neurons' threshold functions define certain transformation functions. We can tune these functions using gradient descent, Hebbian learning, backpropagation, Boltzmann machines and the like. The main thing is that we can change the network parameters and the states of its neurons smoothly, in an analog way: a little more "grandmother" here, a little less "grandmother" there.



But if the signals of neurons form a digital code, then entirely different mathematics and entirely different methods are required. If you have your grandmother's phone number, you cannot dial it "almost exactly" and hope to reach "almost" your grandmother.
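
A small sketch of this difference, again my own illustration rather than the author's model: a slight perturbation of an analog quantity still gives "almost the same grandmother," whereas a single wrong digit in a discrete code gives a different code altogether.

```python
# Analog coding: a slightly perturbed firing rate is still "almost grandmother".
rate = 40.0                          # spikes per second (arbitrary value)
perturbed = rate * 1.02              # a 2% perturbation
print(abs(perturbed - rate) / rate)  # 0.02: nearly the same description

# Digital coding: one wrong digit is not "almost grandmother", it is someone else.
grandmother_number = "555-0142"
dialed_number = "555-0143"
print(dialed_number == grandmother_number)  # False
```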



The second point concerns the understanding of information. For a person, useful information carries meaning. Moreover, it is intuitively clear to us that it is meaning that determines the basic ideology of the information processes taking place in our heads. At the same time, the very concept of meaning remains rather poorly formalized. It is quite reasonable to assume that the real mechanism of the brain's neural structures should not only take the phenomenon of meaning into account but put it above everything else: meaning should lie at the very core of the neural architecture. For traditional neural networks, the question of meaning is rather awkward. The "grandmother neuron" is, ideologically, its own meaning of a sort: here I am, a specific "grandmother," in a specific sense. If you need another "grandmother," or this same "grandmother" but in a different sense, you need another neuron. What can be obtained from this approach seems to have reached its limit. If we want something more, then perhaps it is time to say goodbye to the "grandmother."



On the one hand, if we assume that neural networks do not resemble the brain, then the question arises: why do they sometimes work so well? On the other hand, if the brain is like a computer, can thinking really be reduced to algorithms?



The proposed series of articles is devoted to the description of a brain model in which, it seems to me, I managed to resolve all the contradictions described above quite neatly. It will show how the biological brain works, how biological memory works, and why it works that way. The concept of meaning will be formalized, and it will be shown how the architecture of the brain is perfectly suited to working with the meaning of information. The mechanisms that shape thinking and behavior will be described, and the mechanisms and role of emotional evaluation will be revealed.



And, perhaps, the main thing: all the key algorithms will be accompanied by working code. It seems strange even to us, but it all really does work very well :)



And finally, about the title of the series. The solution of one puzzle often opens the way to the solution of another. The riddle of consciousness appears to be a much broader task than that of understanding the information processes of the brain, but there is definitely a connection between the two. At the very least, once the information processes are understood, it becomes much easier to ask the right questions about consciousness. Quite definite consequences regarding the nature of consciousness follow from the proposed model. They do not answer every question about consciousness, but they set a direction for further reflection and experiments. It is this larger task that gave the series its name.



Alexey Redozubov



P.S. If someone wants to look a little ahead and do a useful job at the same time: a joint effort to translate the materials into English is currently under way (the coordinator is Dmitry Shabanov). American colleagues from Duke University will do the final editing, but the text needs to be translated well enough for them to grasp the meaning. The deadline for delivering the text to the publisher is pressing hard. If you have the opportunity and the desire to translate a few paragraphs, please join in.



UPD



The logic of consciousness. Part 1. Waves in the cellular automaton

The logic of consciousness. Part 2. Dendritic waves

The logic of consciousness. Part 3. Holographic memory in a cellular automaton

The logic of consciousness. Part 4. The secret of brain memory

The logic of consciousness. Part 5. The semantic approach to the analysis of information

The logic of consciousness. Part 6. The cerebral cortex as a space for calculating meanings.

The logic of consciousness. Part 7. Self-organization of the context space

The logic of consciousness. Explanation "on the fingers"

The logic of consciousness. Part 8. Spatial maps of the cerebral cortex

The logic of consciousness. Part 9. Artificial neural networks and minicolumns of the real cortex.

The logic of consciousness. Part 10. The task of generalization

The logic of consciousness. Part 11. Natural coding of visual and sound information

The logic of consciousness. Part 12. The search for patterns. Combinatorial space

Source: https://habr.com/ru/post/308268/


