
About consciousness and artificial intelligence

The idea of artificial intelligence gaining consciousness has become a commonplace of modern science fiction and cinema (it suffices to mention Asimov, Terminator, Ghost in the Shell, et cetera ad infinitum). Meanwhile, few science fiction writers have stopped to ask what consciousness actually is, how it arose in humans, and how an AI could acquire it.
In this essay, we would like to draw attention to one very interesting (and, in our opinion, highly plausible) definition of this phenomenon, given not by a science fiction writer or a philosopher, but by the evolutionary biologist Richard Dawkins in his book The Selfish Gene.

Dawkins views the human brain as a universal builder of mathematical models of reality. In Chapter 4, he writes:

The evolution of the capacity to simulate seems to have culminated in subjective consciousness. Why this should have happened is, to me, the most profound mystery facing modern biology. There is no reason to suppose that electronic computers are conscious when they simulate, although we have to admit that in the future they may become so. Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself.


This idea - consciousness as the inclusion of the architect in the model he has built - is, of course, far from certain (and quite likely unfalsifiable in Popper's sense). Still, some indirect arguments can be given in its favor. For example, in philosophy, reflection (the subject's turning of attention upon himself) is considered one of the most important acts of consciousness.
UPD. Another argument in favor of Dawkins's hypothesis is the existence of animatism (the belief in an impersonal animateness of objects and phenomena), typical of many (if not all) primitive societies. In the context of Dawkins's theory, animatism arises as an attempt to apply the habitual way of modeling the behavior of fellow tribesmen to other objects as well.
Assume that Dawkins's hypothesis is true. Let us try to look at AI through this definition of consciousness; having done so, we will inevitably conclude that no Skynet threatens us in the near future.

Indeed, AI is characterized by a clear separation between the subject area and the algorithm of the AI itself. This division descends, if you will, from the von Neumann architecture, with its distinction between program instructions and the data they operate on.
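
To make this separation concrete, here is a minimal Python sketch; the names WorldModel and predict are ours and purely illustrative, not taken from any real AI system. The algorithm is fixed code, the subject area is ordinary data, and nothing in the data refers back to the code.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    # The subject area: facts the system reasons about.
    facts: dict = field(default_factory=dict)

def predict(model: WorldModel, query: str) -> str:
    # The algorithm: fixed code that inspects the data area. It can
    # answer questions about model.facts, but the model holds no
    # representation of predict itself.
    return model.facts.get(query, "unknown")

m = WorldModel(facts={"sky": "blue"})
print(predict(m, "sky"))      # -> blue
print(predict(m, "predict"))  # -> unknown: the algorithm lies outside the model
```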

But even if we load into the AI's memory information about the principles of its own design, will the AI thereby gain consciousness? Unfortunately, no. This information - the algorithm of its own operation - is utterly useless to the AI, in the sense that no conclusions can be drawn from it that directly or indirectly improve the accuracy of the simulation and, consequently, the testability of the predictions derived from the model.
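
The point can be illustrated with a toy sketch (again with invented names, a thought experiment rather than a real system): even when the predictor's own source text is copied into its data area, it sits there as an inert string that no inference step ever consumes, so every prediction comes out exactly as before.

```python
import inspect

facts = {"sky": "blue"}

def predict(query: str) -> str:
    # Fixed lookup logic; it never reads, interprets, or executes
    # whatever self-description happens to be stored in facts.
    return facts.get(query, "unknown")

# Load the "principles of its own device" into memory: the algorithm's
# own source code, stored as plain data.
facts["own_algorithm"] = inspect.getsource(predict)

print(predict("sky"))                 # -> blue, unchanged by the self-description
print(predict("own_algorithm")[:11])  # -> "def predict": mere text, not behavior
```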

In his book, Dawkins is silent about why the brain needed to include itself in the subject area, that is, what benefit the architect gains from such an expansion of the model. We will try to answer this question.

A living organism needs to analyze its own behavior in order to predict the behavior of other individuals, above all members of its own species. Such a skill obviously enhances an individual's ability to live in a group. It is logical to assume that this ability is most in demand among animals that form more or less large hierarchical communities with intensive social ties - humans, for example. (UPD: the notes to Dawkins's book contain a reference to the research of Nicholas Humphrey, who came to the same conclusions.)
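
Here is a hedged sketch of this "use yourself as a model of others" idea; the policy and its parameters are invented for illustration and not taken from Dawkins or Humphrey. To predict a fellow tribesman, the agent runs its own decision rule on the other's observed situation, so the self-model pays off as a model of conspecifics.

```python
def my_policy(hunger: int, food_visible: bool) -> str:
    # The agent's own decision rule - the self-model.
    if food_visible and hunger > 5:
        return "approach the food"
    return "keep foraging"

def predict_other(observed_hunger: int, observed_food: bool) -> str:
    # Theory of mind by simulation: reuse my own policy, fed with the
    # other individual's observed state, to anticipate its behavior.
    return my_policy(observed_hunger, observed_food)

print(predict_other(observed_hunger=8, observed_food=True))  # -> approach the food
print(predict_other(observed_hunger=2, observed_food=True))  # -> keep foraging
```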

And this is the second weighty argument that no conscious AI can emerge within the framework of existing technologies: an AI does not communicate with its own kind and has no social relations at all, so the ability to analyze its own program is utterly useless to it.

So, the general conclusion is roughly this: if we accept Dawkins's hypothesis, then within the existing paradigms for building AI (and computers in general), AI will never acquire consciousness, no matter how many neurons we manage to model.



I hereby release the above text into the public domain.

Source: https://habr.com/ru/post/79110/

