This topic is a translation of an article by Yvonne Raley from Scientific American Mind.

How long would it take you to add the numbers 3,456,732 and 2,245,678? Ten seconds? Not bad for a human. An average modern computer performs this operation in 0.000000018 second. And what about your memory? Can you remember a shopping list of 10 items? Of 20? Compare that with the many millions of items a computer can store.
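For the curious, here is a minimal Python sketch that checks the sum and roughly times the operation. It is illustrative only: results depend on your hardware, and an interpreted add carries far more overhead than the bare machine instruction the article's figure refers to.

```python
import timeit

# The sum itself: 3,456,732 + 2,245,678 = 5,702,410
assert 3_456_732 + 2_245_678 == 5_702_410

# Rough per-addition timing (hardware- and interpreter-dependent;
# a Python-level add is much slower than the raw CPU instruction)
total = timeit.timeit("a + b", setup="a, b = 3_456_732, 2_245_678",
                      number=1_000_000)
print(f"~{total / 1_000_000 * 1e9:.0f} ns per addition in Python")
```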
On the other hand, computers are slow to recognize faces that people identify instantly. Machines lack the creative spark of fresh ideas; they have no feelings and no fond memories of their youth.
But recent technical advances are narrowing the gap between the human brain and the circuit board. At Stanford University, bioengineers are reproducing the complex parallel processing of neural networks on microchips. Another development, a robot named Darwin VII, is equipped with a camera and a metal jaw so that it can interact with the outside world and learn like a young animal. Researchers at The Neurosciences Institute in La Jolla, California, modeled Darwin's brain on the brains of rats and monkeys.
These developments raise a natural question: if computer processing ultimately mimics natural neural networks, can cold silicon ever think in the full sense of the word? And how would we tell if it could? More than 50 years ago, the British mathematician and philosopher Alan Turing devised an ingenious strategy for approaching these questions, and that strategy has guided the design of artificial intelligence ever since - while also shedding light on human cognition.
The Starting Point: Testing Intelligence
So what do we mean by the word "think"? People commonly use the word to describe processes involving consciousness, intelligence, and creativity - unlike modern computers, which merely follow the instructions of the programs written for them.
In 1950, in an era before silicon microchips existed, Turing realized that as computers grew more capable, the question of artificial intelligence would eventually arise. In what is perhaps the most famous philosophical paper ever written on the subject, "Computing Machinery and Intelligence," Turing simply replaced the question "Can machines think?" with "Can a machine - a computer - pass the imitation game?" That is, can a computer converse so naturally that it fools a human interlocutor into believing another person is talking?
Turing took his idea from a simple parlor game in which a player, by asking a series of questions, had to determine the sex of a person in the next room. In Turing's version, the person in the next room is replaced by a computer. To pass what is now called the "Turing test," the computer must answer any question from the interrogator with the linguistic competence and subtlety of a human.
Turing concluded his seminal paper with the prediction that in 50 years (a deadline that has now arrived) we would be able to build a computer that played the imitation game so well that an average interrogator would have no more than a 70 percent chance of correctly identifying whether the respondent was a machine or a person.
Turing's prediction has not come true: no computer has passed his test. Why are things that come easily to people so hard for machines? To pass the test, a computer must demonstrate not just one ability (in mathematics, say, or oratory, or fishing) but many of them - as many as an ordinary person has.

So far, computers have limited architectures. Their programming lets them solve specific problems, and their knowledge bases cover only those tasks. A good example is Anna, the online consultant at IKEA: you can ask Anna about the store's goods and services, but she cannot tell you about the weather.
What else would a computer need to pass the Turing test? Clearly it needs a large vocabulary, with all its quirks and oddities, puns included. Taking into account the context in which a word is used is critical - and computers cannot easily recognize context. The word "bank," for example, can mean the edge of a river or a financial institution, depending on how it is used.
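A toy illustration of why context matters: the minimal Python sketch below picks a sense of "bank" by counting overlapping context keywords. The sense inventory and keyword lists are invented for illustration; this is a caricature, not a real disambiguation algorithm.

```python
# Hypothetical mini sense inventory: every keyword list here is
# made up for illustration.
SENSES = {
    "bank": {
        "river bank": {"river", "water", "shore", "fishing"},
        "financial institution": {"money", "loan", "account", "deposit"},
    }
}

def guess_sense(word: str, sentence: str) -> str:
    """Pick the sense whose context keywords overlap the sentence most."""
    words = set(sentence.lower().split())
    senses = SENSES[word]
    return max(senses, key=lambda s: len(senses[s] & words))

print(guess_sense("bank", "She opened an account at the bank"))
# -> financial institution
print(guess_sense("bank", "They sat on the bank of the river"))
# -> river bank
```

Even this toy fails the moment the telltale keyword is missing or phrased differently - which is precisely the computer's problem with context.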
Context is so important because it supplies background knowledge. A significant part of that knowledge is the identity of the questioner: adult or child, expert or amateur. And for a question like "Did the Yankees win the championship?" the year in which the question is asked matters enormously.
Background knowledge is useful in every case because it reduces the amount of computation required. Logic alone is not enough to answer a question such as "Where is Sue's nose if Sue is at home?" You also need to know that noses are usually attached to their owners. Nor is it enough to program the computer to simply answer "at home" for this type of question: it would then answer "at home" to "Where is Sue's backpack if Sue is at home?" as well, whereas the appropriate answer is "I don't know." And imagine how much harder the question becomes if Sue has recently had nose surgery; here the right response is a counter-question: "Which part of Sue's nose do you mean?" Attempts to write software that covers every possible case quickly lead to what scientists call a "combinatorial explosion" - an exponential growth in the number of options, and in the resources needed to handle them, as the size of the problem grows only linearly.
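To see the scale of the problem, suppose (purely for illustration) that an interrogator can choose among k possible questions at each turn and a conversation lasts n turns. The number of distinct dialogues a rule-writer would have to anticipate is k^n, and a few lines of Python show how fast that blows up:

```python
# Illustrative only: the number of distinct n-turn dialogues when the
# interrogator has k choices per turn grows as k**n.
for k, n in [(10, 5), (10, 10), (100, 10)]:
    print(f"k={k:>3}, n={n:>2}: {k ** n:,} possible dialogues")
# k= 10, n= 5: 100,000 possible dialogues
# k= 10, n=10: 10,000,000,000 possible dialogues
# k=100, n=10: 100,000,000,000,000,000,000 possible dialogues
```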
Human, or Merely Humanlike?
However, the Turing Test is subject to criticism. The New York University philosopher
Ned Block claims that the Turing imitation game tests one way or another only the behavior of a computer with respect to the identity of a person’s behavior (we are talking only about verbal and cognitive behavior). Imagine that we could program a computer with all the possible options for developing a conversation of clearly defined length. When the interrogator asks a question Q, the computer searches for a conversation in which Q is encountered and then issues the necessary answer, A. When the interrogator asks his next question, P, the computer searches for the lines Q, A and P and issues answer B, which follows from this conversation. Such a computer, according to Blok, will have the intelligence of the toaster, but it will pass the Turing test. One answer to Blok’s challenge is that the problem he raised for computers is also relevant to human behavior. Leaving aside the physical characteristics, the obviousness of the fact whether a person can think is the behavior that produces thought. And this means that we will never know exactly whether our interlocutor is speaking in the usual sense of the word. Philosophers call this the problem of "other minds."
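Block's hypothetical machine can be sketched in a few lines of Python. The script table here is tiny and hand-written for illustration; a real version would need the astronomically large table described above, which is exactly Block's point.

```python
# A toy version of Block's lookup-table machine. The "intelligence"
# lives entirely in the pre-scripted table, keyed on the whole
# conversation so far; the program itself just matches strings.
SCRIPT = {
    ("Hello!",): "Hi there, nice to meet you.",
    ("Hello!", "Hi there, nice to meet you.", "How are you?"): "Can't complain.",
}

history = []  # alternating questions and answers so far

def reply(question):
    history.append(question)
    answer = SCRIPT.get(tuple(history), "I don't know.")
    history.append(answer)
    return answer

print(reply("Hello!"))        # Hi there, nice to meet you.
print(reply("How are you?"))  # Can't complain.
```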
Does Anyone Here Speak Chinese?
A similar line of argument - the Chinese Room - was developed by the philosopher John Rogers Searle of the University of California, Berkeley, to show that a computer could pass the Turing test without understanding the meaning of the words it uses. To illustrate, Searle asks us to imagine that programmers have written a program that simulates an understanding of Chinese.
Imagine that you are the processor in that computer. You are locked in a room (the computer's case) full of baskets containing Chinese characters (the symbols that appear on the computer's screen). You do not know Chinese, but you have a large book (the program) that tells you how to manipulate these characters. The rules in the book, however, never say what the symbols mean. When Chinese characters come into the room (input), your task is to send characters back out of the room (output). For this you follow a further set of rules - rules corresponding to the simulation program designed to pass the Turing test. You do not realize that the characters coming into the room are questions and the characters you send back are answers. Moreover, those answers perfectly imitate the answers a native Chinese speaker might give, so from outside the room it looks as if you know Chinese. But of course you do not - just as a computer could pass the Turing test without actually thinking.
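In the same spirit, Searle's rulebook can be caricatured as a table of symbol-to-symbol rules. In this sketch a couple of real Chinese phrases stand in for the baskets of characters, but nothing in the program represents what any of them means - the lookup is pure shape-matching:

```python
# A caricature of Searle's rulebook: incoming symbol strings map to
# outgoing ones. The program never represents meaning; the comments
# below are for the reader, not for the machine.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(incoming):
    """Follow the rulebook mechanically, as the person in the room does."""
    return RULEBOOK.get(incoming, "请再说一遍。")  # fallback: "please repeat"

print(chinese_room("你好吗？"))  # looks fluent from outside the room
```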
To learn to think, a machine must have a chance to learn things for itself.
Will computers ever understand what these symbols mean? Cognitive scientist Stevan Harnad of the University of Southampton in England believes they can - but, like people, computers will first have to learn abstractions and their contexts by connecting them to the real, external world. People learn the meanings of words through a causal connection between themselves and the objects the symbols stand for: we understand the word "tree" because we have had real-life experience of trees. (Consider the case of the blind and deaf Helen Keller, who finally grasped the meaning of the word "water" when she felt water flowing from a pump over her hand.)

Harnad argues that for a computer to understand the meanings of the symbols it manipulates, it must be equipped with sensory apparatus - a camera, for example - so that it can actually see the objects the symbols represent. A project like little Darwin VII, the robot with a camera eye and metal jaws, is another step in that direction.
Harnad proposes a revised Turing test, which he calls the robotic Turing test. To earn the label "thinking," a machine must both pass the Turing test and be connected to the outside world. Interestingly, this addition captures one of Turing's own observations: the machine, he wrote in a 1948 report, should be allowed to "travel around the outside world" so that it could "have a chance to learn things for itself."
The Robot of the Future
The sensory equipment that Harnad considers crucial could give computer scientists a way to supply a computer with the context and background knowledge it needs to pass the Turing test. Instead of having all the necessary data entered by brute force, the robot would learn only what it needs to know to communicate in its environment.
Can we be sure that sensory access to the outside world will ultimately give a computer genuine understanding? That is exactly what Searle wants to know. But before we can answer, we will have to wait until machines actually pass Harnad's robotic Turing test. In the meantime, the model of intelligence embodied in the Turing test continues to provide an important research strategy for AI. According to Dartmouth College philosopher James Moor, the test's main strength is the vision it offers - "the creation of sophisticated, general-purpose intelligence that can learn." That vision sets a valuable goal for AI, regardless of whether a machine that passes the Turing test thinks the way we do, understands, or is conscious.