
Will computers ever really think?



The concept of thinking machines lies at the heart of countless science-fiction books and films. Even more or less serious futurological forecasts occasionally proceed from the assumption that we will create not just artificial life, but artificial consciousness. The prospect is fascinating. Of course, in our dreams of thinking machines we take it for granted that their thinking will be built in the image of ours, with certain differences, such as not being burdened by the needs of a body, by emotions, and by the other difficulties of biological existence. But if you think about it, nobody guarantees that machines will think the way we do.


Have you ever wondered what we actually mean by "thinking"? On an intuitive level we can recognize our own, human, thinking, but what about animals? Do chimpanzees think? Crows? Octopuses?
Suppose intelligent alien life forms exist. Their intelligence may well be so different from ours that we would not even recognize it as intelligence. Who knows, perhaps aliens have already been somewhere nearby, but because of the fundamental differences between our minds they simply did not notice us, nor we them.

Of course, animals have cognitive abilities different from ours: using tools, understanding cause-and-effect relationships, communicating with other creatures, even recognizing purposeful thinking in others. Probably all of these should count as "thinking." Imagine that we managed to build a machine with all of the listed capabilities, that is, by our own standards, a thinking one. Then all that would remain is to congratulate ourselves and lie down on the sofa with a sense of accomplishment. But can the machine climb a step higher and think like a person? And if it can, how would we find out? Focusing only on the computer's behavior would be a mistake: if it acts as though it can think, that does not mean it actually does. It might be a skillful imitation, a kind of philosophical zombie.

At one time this very question, how to recognize intelligence in a machine, prompted Alan Turing to devise his famous test, in which a computer interacts with a person through text on a screen and must, in most cases, convince the live interlocutor that it too is a person. For Turing everything came down to the machine's behavior, not to its "inner digital life."
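As a loose illustration (not Turing's original formulation), a minimal Python sketch of such an imitation-game session might look like this: an interrogator exchanges text messages with a hidden respondent and then judges, from the transcript alone, whether it was human. The function and names here are hypothetical, purely for illustration.

```python
# A minimal sketch of an imitation-game session, not Turing's exact protocol.
# The "respondent" is any callable mapping a question to a text reply; it could
# be a person typing or a program. The interrogator only ever sees text.

def imitation_game(ask_interrogator, respondent, rounds=5):
    """Run a short text-only exchange and return the transcript and verdict."""
    transcript = []
    for _ in range(rounds):
        question = ask_interrogator(transcript)   # interrogator poses a question
        answer = respondent(question)             # hidden party answers in text
        transcript.append((question, answer))
    # The interrogator judges purely from the transcript: "human" or "machine".
    verdict = ask_interrogator(transcript + [("VERDICT?", "")])
    return transcript, verdict


if __name__ == "__main__":
    # Toy stand-ins: a scripted interrogator and a trivial "machine" respondent.
    questions = iter(["Do you have consciousness?", "What do you value?"])

    def interrogator(transcript):
        return next(questions, "machine")         # final call returns the verdict

    def machine(question):
        return "How would I know?"                # echoes the reply from the article

    log, verdict = imitation_game(interrogator, machine, rounds=2)
    for q, a in log:
        print(f"Q: {q}\nA: {a}")
    print("Verdict:", verdict)
```

The point of the sketch is simply that everything the interrogator can rely on is behavior expressed as text; nothing about the respondent's "inner life" enters the judgment.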

However, for some that inner life still matters. The well-known philosopher Thomas Nagel, in his article "What Is It Like to Be a Bat?", argued that consciousness cannot be identified with the brain. There is "something it is like" to have conscious experiences: there is something it is like for us to look at beautiful objects or to do something. A person is more than the set of states of their brain.

But then one can ask: could there be "something it is like" to be a thinking machine? Suppose we ever succeed in creating an intelligent computer. If, during a Turing test, a person asks it, "Do you have consciousness?", the answer might be, "How would I know?"

Calculations - and nothing else?


On the modern view, machine thinking would have to rest entirely on computation: operations per second and logical transitions. But we are not sure that thinking, or consciousness, is a by-product of raw computing power, at least not on binary computers. Could thinking be more than a set of algorithms? What else would be needed? And if it really is all a matter of computation, why is the human brain so weak at it? Most of us struggle to multiply a pair of two-digit numbers in our heads, never mind more complex tasks. Or is there some deep data processing going on in our subconscious whose side effect is to limit our conscious computational abilities (an argument in favor of so-called "strong AI")?

Compared to computers, our ability to manipulate raw data is feeble. But computers, in turn, do very poorly at things like language, poetry, speech recognition, interpreting complex patterns of behavior, and forming broad judgments.

If the abilities of computers differ so much from ours, how can we expect them to end up thinking like us? Perhaps in the future computers will acquire all the traits of human thought in exchange for losing some of their facility with arithmetic?

On belief, doubt, and value


If computers really do start thinking like people, then concepts such as "belief" and "doubt" will also be part of their minds. But what could "believing in something" mean for a computer, beyond the trivial case of acting without accounting for the probability of error? Will a computer ever genuinely doubt, yet overcome its doubt and act anyway?

For the human mind, the concept of "value" matters enormously; it can be considered one of our fundamental driving forces. What do we think about at any given moment, and why? Could a computer intelligence value anything, and if so, why?



It would be wonderful to create a computer that shares the human system of values. But, frankly, we ourselves cannot say with certainty what that system is, let alone how to program it. Moreover, if computers can program themselves, it may occur to them that values can be changed.

Considering how much effort and how many resources are being poured into creating artificial intelligence, now is the time to work out what we want a thinking computer to be. And for that, perhaps, we must first try to understand ourselves.

Source: https://habr.com/ru/post/367933/
