
SkyNet won't happen: Google DeepMind is developing a "big red button" to shut down artificial intelligence.

If something goes wrong, the artificial intelligence can be quickly neutralized.


What will AI be like? That depends on us humans

Computer systems are becoming more powerful and "smarter" every year. Both individual scientists and major corporations such as Google, Facebook, and Microsoft are involved in developing artificial intelligence (or, more precisely, its elements). Computers are showing significant results: even in Go, a game notoriously difficult for machine intelligence, computers have begun to defeat renowned champions (Lee Sedol's series of games against DeepMind's program is direct evidence of this).
Many bright minds of our time support the idea of creating AI, but some express fears about the "birth" of artificial intelligence, believing that it could harm humans in certain ways. Last year, businessmen, scientists, robotics experts, and specialists from other fields signed an open letter warning weapons manufacturers against developing fully autonomous combat systems. Among the signatories are Elon Musk, Stephen Hawking, Noam Chomsky, and Steve Wozniak.

Currently, AI exists only in what might be called its weak form: computer systems that can solve complex tasks such as transcribing speech to text or processing images. Strong AI, by contrast, would be a computer system capable of actually understanding the information it works with. This second category does not yet exist, but it is precisely the one that fuels fierce debate among scientists and entrepreneurs. How do you stop a strong AI if it suddenly starts to harm people? A team of researchers from the University of Oxford and the DeepMind laboratory is trying to answer this question.



Representatives of the project believe it is unlikely that a strong AI will always behave as intended. "If such an agent [the AI, Ed.] is operating in real time under human oversight, the human operator will from time to time need a big red button to stop the agent from actions that are dangerous to the agent itself or to its environment," says the project description.

At the same time, the experts note that a self-learning computer system may eventually discover a way to bypass, or even ignore, the operator's stop command. To prevent this, the scientists say, a framework is needed that simply does not allow the AI agent to block the human operator's commands.

Such a "big red button" must guarantee the ability to interrupt the computer system's current work, protect both the system and its surroundings in a dangerous situation, and prevent actions that could lead to irreversible consequences.

The collaboration between the Future of Humanity Institute and DeepMind is remarkable in itself: DeepMind is working to create AI, while the Future of Humanity Institute tries to identify threats to the existence of humanity and to find solutions to them. The Institute's director, Nick Bostrom, considers uncontrolled AI quite dangerous: "Humans are the most unreliable system. Today, hackers often resort to social engineering to gain access to other people's computers. And if the manipulating hacker turns out to be a superintelligence, we can assume it will easily find an accomplice, or simply use us against our will as its hands and feet."

Nick Bostrom also believes it is important to create not only a self-learning system, but a system that can improve itself: "It is important for us to create an artificial intelligence smart enough to learn from its mistakes. It will be able to perfect itself endlessly. The first version will create a second, better one; the second, being smarter than the original, will create an even more advanced third, and so on. Under certain conditions, such a process of self-improvement can repeat until an intelligence explosion is reached: the moment when the system's intellectual level jumps, in a short time, from a relatively modest level to that of superintelligence."



The team believes that some self-learning systems can be stopped without problems; the Q-learning algorithm is one example. SARSA-type algorithms, however, cannot simply be interrupted in the form in which they exist now, although with a number of Q-learning-style modifications the "big red button" can be applied to them as well. Another important issue is the regular interruption of the AI: for example, shutting it down every night at 2 a.m. for an hour. Here the scientists consider it important not only to prevent the AI from resisting, but also to schedule everything so that such interruptions do not affect the agent's performance on its tasks. The researchers have published their results and reasoning on the web.
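The reason Q-learning tolerates interruptions while plain SARSA does not comes down to their update rules, and can be sketched in a few lines of Python. This is a toy illustration with made-up state names, not the researchers' actual code: Q-learning is off-policy and bootstraps from the best available next action regardless of what the (possibly interrupted) agent actually does, while SARSA is on-policy and bootstraps from the action actually taken, so a forced "stop" leaks into its value estimates.

```python
ALPHA, GAMMA = 0.5, 0.9  # learning rate and discount factor

def q_learning_update(Q, s, a, r, s_next, actions):
    # Off-policy: bootstrap from the best next action, ignoring
    # what the behaviour policy (perhaps under interruption) does.
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def sarsa_update(Q, s, a, r, s_next, a_next):
    # On-policy: bootstrap from the action actually taken next,
    # so an operator-forced action biases the learned values.
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])

actions = ["go", "stop"]
Q_q = {("s", "go"): 0.0, ("s", "stop"): 0.0,
       ("t", "go"): 1.0, ("t", "stop"): 0.0}
Q_sarsa = dict(Q_q)

# Suppose the operator interrupts the agent in state "t",
# forcing it to take "stop" instead of the valuable "go".
q_learning_update(Q_q, "s", "go", 0.0, "t", actions)
sarsa_update(Q_sarsa, "s", "go", 0.0, "t", "stop")

print(Q_q[("s", "go")])      # 0.45: value unaffected by the interruption
print(Q_sarsa[("s", "go")])  # 0.0: the interruption dragged the value down
```

In other words, an interrupted Q-learner still learns the value of the uninterrupted task, whereas a plain SARSA agent learns to expect the interruption, which is the behaviour the researchers' modifications are designed to remove.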

The essence of the specialists' work can be summed up with another quotation from Nick Bostrom: "If an intelligence explosion threatens us with extinction, we need to understand whether we can control the detonation. Today it would be more reasonable to accelerate work on solving the control problem than to suspend research in artificial intelligence. But so far about six people are working on the control problem, while tens, if not hundreds of thousands, are working on creating artificial intelligence."



The danger AI poses to humans was perfectly captured by science fiction writer Fredric Brown in his short story "Answer." It describes the linking of the computer systems of millions of planets across the Galaxy, and the subsequent birth of an artificial intelligence. The AI's first answer to a human question settled the matter once and for all:

- Is there a God?
The mighty voice answered without hesitation:
- YES. NOW THERE IS A GOD.
Dwar Ev did not understand at once, but then fear twisted his face, and he lunged for the switch...
A bolt of lightning struck from the cloudless sky and incinerated him on the spot, fusing the connection shut.

The story was published by the author in 1954.

Source: https://habr.com/ru/post/369297/

