
The threat of a "machine uprising" will be studied



One of the most widely discussed news items on the English-language Internet was an interview with philosophy professor Huw Price that accompanied the announcement of the imminent launch of the Center for the Study of Global Risk (CSER) at the University of Cambridge, and the topic of a “machine uprising” came up constantly whenever CSER was mentioned.

Even on news sites with a rather conservative audience, such as BBC News, the relevant stories have drawn several hundred comments each.

Let me remind you: the Center for the Study of Global Risks (Centre for the Study of Existential Risk, CSER) intends to explore the global risks potentially posed by biotechnology, nanotechnology, nuclear research, anthropogenic climate change, and developments in artificial intelligence. The Center's founders are philosophy professor Huw Price and astrophysics professor Martin Rees of the University of Cambridge, along with Skype co-founder Jaan Tallinn, who holds a degree in theoretical physics from the University of Tartu.
Humanity's progress today is characterized not so much by evolutionary processes as by technological development. It allows people to live longer, perform tasks faster, and cause destruction more or less at will.

Price and Tallinn are certain, however, that the growing complexity of computational processes will eventually lead to the creation of a single artificial intelligence (AI). The critical moment will come when this “universal mind” can independently write computer programs and develop the technology to reproduce its own kind.

“Take gorillas, for example,” Professor Price suggests. “The reason they are disappearing is not at all that people are actively destroying them, but that we control the environment in ways that suit us yet are detrimental to their existence.”

The analogy is more than transparent. “At some point, in this century or the next, we will have to face one of the greatest shifts in the history of mankind, perhaps even in the history of the cosmos, when intelligence goes beyond biology,” Professor Price predicts. “Nature did not foresee us, and we, in turn, should not take AI for granted.”

Most specialists in robotics and high technology greeted the professor's statements rather skeptically. Software failures and algorithmic errors are understandable, predictable, and relatively tangible; the human mind grasps them far more easily than the abstract threat of something that does not yet exist.

Nevertheless, the problems associated with AI worry more than just the Cambridge professoriate. Since 2001, a non-profit organization known as SIAI (the Singularity Institute for Artificial Intelligence) has been operating in the USA. Its field of interest includes the study of potential dangers associated with an “intelligence explosion” and the emergence of an “unfriendly” AI. One of the Institute's co-founders, Eliezer Shlomo Yudkowsky, is widely known for his research into the technological singularity (the point beyond which technological progress becomes inaccessible to human understanding).

In his paper “Artificial Intelligence as a Positive and Negative Factor in Global Risk” (available here in Russian), Yudkowsky writes: “One path to global catastrophe is someone pressing a button with a mistaken idea of what that button does; an AI could arise from just such an assembly of working algorithms in the hands of a researcher who lacks a deep understanding of how the whole system works... Not knowing how to build a friendly AI is not fatal in and of itself, if you know that you do not know. It is the mistaken belief that an AI will be friendly that marks an obvious path to global catastrophe.”

And a touch of science fiction made real: in August of this year, Business Insider reported on the creation of “cyborg tissue” by bioengineers at Harvard University. The results of the study were published in Nature Materials.

The construct is an intricate scaffold of nanowires and transistors onto which human tissue is grown. The cyborg tissue can monitor and transmit data such as a heartbeat. “This allows us to effectively blur the boundary between electronic, inorganic systems and organic, biological ones,” said the head of the research team, Charles Lieber.

Further reading: Tallinn and Price's article “AI: can we keep it in the box?”.

Source: https://habr.com/ru/post/161397/

