
DeepMind is recruiting specialists to protect against strong AI



The London-based research company DeepMind (a Google subsidiary) specializes in advanced artificial intelligence development that may one day grow into strong AI. In theory, a strong AI would be able to think, be aware of itself, and empathize (feel).


Obviously, a program with such capabilities could behave in ways its developers cannot predict; indeed, it would be deliberately built for autonomous operation. It is therefore essential to put the necessary safety measures in place in advance.

According to several sources, including profiles on the professional social network LinkedIn, DeepMind has begun recruiting for an AI safety department. The unit's task is to reduce the likelihood that a strong AI develops into a form that is dangerous to humanity and/or to itself.
DeepMind is one of many companies worldwide working on self-learning neural networks in the form of weak Artificial Intelligence. So far these programs are limited to narrowly specialized tasks: they play complex board games (Go) and help cut Google's electricity costs. But the ambitions of the British researchers do not stop there: in the future they aim to develop a universal AI system. As its website states, the company wants to "solve intelligence" in order to "make the world a better place." This fits neatly with Google's founding principle, "Don't be evil."

To reduce the chances of a dangerous form of strong AI emerging, the company has created an AI Safety Group (its founding date and headcount are not known). New hires include Victoria Krakovna, Jan Leike, and Pedro Ortega. Krakovna, for example, was taken on as a research assistant. Of Canadian-Ukrainian origin, she holds a PhD in statistics from Harvard University and won prizes at international school and continental student mathematics competitions. She interned as a developer at Google in 2013 and 2015, and later co-founded the Future of Life Institute in Boston, one of the world's leading organizations working on artificial intelligence safety.


Jan Leike also studies AI safety. He is listed as a research associate at the Future of Humanity Institute, and this summer he won the best student paper award at the Uncertainty in Artificial Intelligence conference. The paper concerns applying Thompson sampling to reinforcement learning (text of the paper).
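For readers unfamiliar with the technique, below is a minimal illustrative sketch of Thompson sampling on a Bernoulli multi-armed bandit, the simplest reinforcement learning setting. This is not the setup from Leike's paper, which treats far more general environments, and the arm probabilities are invented for the demo. The agent keeps a Beta posterior over each arm's reward rate, samples from those posteriors, and plays the arm whose sample comes out highest.

# A minimal, illustrative Thompson sampling loop for a Bernoulli bandit.
# Assumption for the demo: three arms with hidden reward probabilities
# true_probs; these numbers are made up, not taken from Leike's paper.
import numpy as np

rng = np.random.default_rng(seed=0)
true_probs = [0.3, 0.5, 0.7]     # hidden per-arm reward probabilities (hypothetical)
n_arms = len(true_probs)
successes = np.ones(n_arms)      # Beta(1, 1) uniform prior over each arm's rate
failures = np.ones(n_arms)

for _ in range(1000):
    # Draw one plausible reward rate per arm from its Beta posterior,
    # then act greedily with respect to the drawn samples.
    samples = rng.beta(successes, failures)
    arm = int(np.argmax(samples))
    reward = rng.random() < true_probs[arm]   # Bernoulli reward
    # Bayesian update of the chosen arm's posterior.
    successes[arm] += reward
    failures[arm] += 1 - reward

print("posterior mean estimates:", np.round(successes / (successes + failures), 3))

Over many rounds the sampling naturally balances exploration and exploitation: arms with uncertain posteriors still get drawn occasionally, while clearly better arms are played more and more often.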

Pedro Ortega holds a PhD in machine learning from the University of Cambridge.

Many scientists have warned of the potential danger of superintelligent Artificial Intelligence. British physicist and mathematician Stephen Hawking, for example, has said that underestimating the threat of artificial intelligence could be the biggest mistake in the history of mankind if we do not learn to avoid the risks.

Hawking and his co-authors warn of the danger that machines of superhuman intelligence will keep improving themselves with nothing able to stop the process, which would in turn trigger the so-called technological singularity.

Such technology would surpass humans and begin to manage financial markets, scientific research, people, and the development of weapons beyond our understanding. While the short-term effect of artificial intelligence depends on who controls it, the long-term effect depends on whether it can be controlled at all.

DeepMind, it seems, has taken the professor's words to heart and is putting the necessary safety measures in place. Hawking and his co-authors noted that little serious research into protection against strong AI is carried out outside non-profit organizations such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. In fact, these issues deserve far more attention.

Elon Musk has also warned of the potential danger of AI. A year ago, he and a group of like-minded people announced the founding of the non-profit organization OpenAI, which sees open research into strong AI as a way to hedge humanity's risks against a single centralized artificial intelligence.

The official announcement of the organization's founding says: "Because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest."

Today, both research organizations and large commercial corporations such as Google, Facebook, and Microsoft are working toward strong AI.

Source: https://habr.com/ru/post/372915/