Creating AI will only improve our lives, experts say

Many people are afraid of robots and artificial intelligence. One could list any number of works of science fiction in which mechanical servants and AI are created, come to occupy an important niche in our lives, and then, for some reason, go out of control and turn against their creators.
Recently, large and influential figures in the world of science and information technology have begun to voice this primal fear. In particular, Tesla Motors CEO Elon Musk said that AI is "potentially more dangerous than nuclear weapons," and English physicist Stephen Hawking called AI "the greatest mistake of mankind."
Oren Etzioni, head of the Allen Institute for Artificial Intelligence, has been researching artificial intelligence for the past 20 years, and he is completely calm. Other experts in the field express similar views.
According to Etzioni, the well-known dark scenarios of AI development are wrong for one simple reason: they confuse intelligence with autonomy, assuming that a computer will begin to set its own goals of its own will, and that access to databases and computational power will help it defeat people.
Etzioni believes these two aspects are far apart. A calculator in a person's hands does not start doing calculations of its own accord; it remains a tool for simplifying computations that would otherwise take too long to do by hand.
In the same way, artificial intelligence is a tool for work that is either too complex or too expensive for us: analyzing large amounts of data, conducting medical research, and so on. AI requires human participation and management.
Frightening autonomous programs do exist: cyber weapons and computer viruses. But they have no intelligence. The smartest software occupies a very narrow niche of use, and a program that can beat a person at an intellectual game, as IBM's Watson did, has zero autonomy.
Watson is not burning with desire to play other television games; it has no consciousness. As John Searle put it, Watson does not even realize that it won.
Arguments against AI always operate in hypotheticals. For example, Hawking says that the development of full artificial intelligence could mean the end of the human race. The problem with such sentiments is that the emergence of full artificial intelligence in the next twenty-five years is less likely than humanity perishing from an asteroid striking the Earth.
Ever since the story of Frankenstein's monster, we have feared artificial servants, experiencing what Isaac Asimov called the "Frankenstein complex."
Instead of fearing that technology might turn against us, we would do better to focus on how AI can improve our lives.
For example, the Journal of the Association for Information Science and Technology reports that the global output of scientific data doubles every two years. No specialist, however motivated, can keep track of it all. Search engines return terabytes of data that a person could not read in an entire lifetime.
Therefore, scientists are working on AI that could answer, for example, the question of how a particular drug affects the bodies of middle-aged women, or at least narrow down the number of articles to search for an answer. We need software that monitors scientific publications and flags the important ones, based not on keywords but on an understanding of the information.
Etzioni notes that we are at a very early stage in the development of artificial intelligence: current systems cannot even read elementary-school textbooks, pass a test meant for a ten-year-old child, or understand a sentence like "I threw the ball out of the window, and it broke."
The work involves overcoming many difficulties, and skeptics overlook the fact that for many years to come AI will be weaker than a child. Cyber weapons fall outside the scope of this discussion, since they are not artificial intelligence.
Etzioni is echoed by Gary Marcus, professor of psychology and neuroscience at New York University and head of Geometric Intelligence. He also says that artificial intelligence is still too weak to justify such far-reaching conclusions, but he adds a remark about the price of bugs.
To cause billions of dollars in losses and kill people, a program does not need a mind or malicious intent: an ordinary trading robot can wreak havoc simply because of the errors it contains.
An error in the control software of a driverless vehicle can lead to an accident or even death, but that does not mean we should immediately abandon research in this area: such programs could save hundreds of thousands of lives a year.
However, the alarmists are partly right. Over the past decades, computers have surpassed humans at task after task, and it is impossible to predict what restrictions will be needed to minimize the risks of their activity. The problem is not machines seizing the world; the problem is the mistakes made in AI.
As the head of Pecabu, Rob Smith, writes in his article "What Artificial Intelligence Is Not," the term "artificial intelligence" will soon become another meaningless buzzword slapped on everything for the mere sake of using it. This has already happened with "cloud" and "big data."
The problem is not only this. Society sees AI as an absolute miracle of technology, powerful and expensive.
It should be remembered that AI is not the canonical image of HAL 9000's red light or the evil Skynet. Artificial intelligence is not conscious. It is just a computer program, "smart" enough to perform tasks that usually require human analysis. It is not a cold-blooded killing machine.
Unlike a person, an AI is not a living being, even if the program can perform tasks that people usually solve. It has no feelings, desires, or aspirations except those we put into it, or those it forms itself on the basis of input data.
Like a person, artificial intelligence can set tasks for itself, but their nature is rooted in the reasons for its creation. The role of a "smart" program is determined only by its creators, and it is unlikely anyone will start writing code to subjugate humanity or achieve self-awareness.
Only in science fiction does artificial intelligence want to multiply and replicate itself. Even if it is possible to create a program whose goal is to do harm, is that a problem of the AI itself?
Finally, computer intelligence is not a single entity but a community of specialized programs. Smith points out that in the near future AI will most likely be built as a network of subroutines providing computer vision, language communication, machine learning, motion, and so on. An artificial intelligence program is not "he," "she," or "it"; it is "they."
We are decades away from the singularity that Elon Musk so fears. Today IBM, Google, Apple, and other companies are developing a new generation of applications that can only partially replace the human element in many monotonous, time-consuming, and dangerous jobs. We should not cultivate fear of these programs; they only improve our lives. The only dangerous thing is the people who create artificial intelligence.