
What artificial intelligence researchers think about the possible risks associated with it

I became interested in the risks associated with AI back in 2007. At the time, most people's reaction to the topic was something like: "Very funny, come back when anyone other than internet crackpots believes this."

In the years that followed, several extremely intelligent and influential figures, including Bill Gates, Stephen Hawking and Elon Musk, publicly shared their concerns about AI risk, followed by hundreds of other intellectuals, from Oxford philosophers to MIT cosmologists to Silicon Valley investors. So we came back.

Then the reaction changed to: "Well, a few scientists and businessmen might believe it, but they can hardly be real experts in the field who actually understand the situation."
Hence statements like the Popular Science article "Bill Gates fears AI, but AI researchers know better":
After talking to AI researchers — actual researchers who struggle to make such systems work at all, let alone work well — it becomes clear that they are not afraid of superintelligence sneaking up on them, now or in the future. Despite all the frightening stories Musk tells, researchers are in no hurry to build panic rooms and self-destruct countdowns.

Or, as Fusion.net put it in "The case against killer robots, from a person actually building AI":
Andrew Ng builds AI systems for a living. He taught AI at Stanford, built AI at Google, and then moved to Baidu, the Chinese search giant, to keep working at the forefront of applying AI to real-world problems. So when he hears people like Elon Musk or Stephen Hawking — people who are not intimately familiar with today's technology — talking about AI potentially wiping out humanity, you can practically hear him facepalm.

Ramez Naam, writing on Marginal Revolution, says much the same thing in "What do AI researchers think about the risks of AI?":
Elon Musk, Stephen Hawking and Bill Gates have recently expressed fears that the development of AI could lead to a "killer AI" scenario and potentially to human extinction. None of them are AI researchers and, as far as I know, none of them have worked directly on AI. What do actual AI researchers think about the risks of AI?

He then quotes a number of hand-picked AI researchers — as do the authors of the other pieces — and stops there, without mentioning any dissenting opinions.

But dissenting opinions exist. AI researchers, including leaders in the field, have been actively raising concerns about AI risk and superintelligence from the very beginning. I will start by listing these people, as a counterpoint to Naam's list, and then explain why I don't consider this a "debate" in the classical sense that dueling lists of luminaries would suggest.

My criteria for the list are as follows: I mention only the most prestigious researchers — either professors at good institutions with many citations to their work, or highly respected industry scientists working at major companies with solid track records. They work in AI and machine learning. They have made several strong statements in support of some version of the view that a singularity, or serious risk from AI, is coming in the foreseeable future. Some have written papers or books about it; others have simply said they consider it an important topic worth studying.

If anyone disagrees with someone's inclusion on this list, or thinks I have missed someone important, let me know.

* * * * * * * * * *

Stuart Russell is a professor of computer science at Berkeley, winner of the IJCAI Computers And Thought Award, Fellow of the Association for Computing Machinery, Fellow of the American Association for the Advancement of Science, director of the Center for Intelligent Systems, holder of the Blaise Pascal Chair, and so on and so forth. He is co-author of "Artificial Intelligence: A Modern Approach", the classic textbook used in 1,200 universities around the world. On his website he writes:
The field of AI has developed for fifty years under the banner of the assumption that smarter is better. To this must be joined a concern for the benefit of humanity. The argument is simple:

1. AI is likely to be created successfully.
2. Unlimited success would bring both great risks and great benefits.
3. What can we do to improve the odds of reaping the benefits and avoiding the risks?

Some organizations are already working on these questions, including the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge (CSER), the Machine Intelligence Research Institute in Berkeley and the Future of Life Institute at Harvard/MIT (FLI). I serve on the advisory boards of CSER and FLI.

Just as nuclear fusion researchers regard the problem of containing fusion reactions as one of the central problems of their field, it seems inevitable that as the field of AI develops, issues of control and safety will come to the fore. The questions researchers are already beginning to raise range from the purely technical (foundational problems of rationality and utility, and so on) to the broadly philosophical.

On edge.org, he describes a similar point of view:
As Steve Omohundro, Nick Bostrom and others have explained, a mismatch between our values and those of increasingly capable decision-making systems can lead to problems — perhaps even problems on the scale of species extinction, if the machines turn out to be more capable than humans. Some believe there is no conceivable risk to humanity for centuries to come, perhaps forgetting that less than twenty-four hours passed between Rutherford's confident assertion that atomic energy would never be extracted and Szilard's invention of the neutron-initiated nuclear chain reaction.

He has also tried to raise these ideas in academic circles, pointing out:
I find that senior people in the field, who have never publicly expressed such fears before, privately think the problem should be taken very seriously — and the sooner we take it seriously, the better.

David McAllester is a professor and senior fellow at the Toyota Technological Institute at Chicago, affiliated with the University of Chicago, who previously served on the faculties of MIT and Cornell. He is a Fellow of the American Association for Artificial Intelligence, has published more than a hundred papers, has done research in machine learning, programming language theory, automated reasoning, AI planning and computational linguistics, and had a major influence on the algorithms of the famous Deep Blue chess computer. According to an article in the Pittsburgh Tribune-Review:
Chicago professor David McAllester considers it inevitable that fully automated intelligent machines will become able to design and build smarter versions of themselves — the event known as the [technological] singularity. The singularity would allow machines to become infinitely intelligent, leading to an "incredibly dangerous scenario," he says.

On his blog, Machine Thoughts, he writes:
Most computer scientists refuse to talk about genuine progress in AI. I think it would be more reasonable to say that no one can predict when AI comparable to the human mind will be achieved. John McCarthy once told me that when people ask him how soon human-level AI will arrive, he answers five to five hundred years. McCarthy was smart. Given the uncertainties in this area, it is reasonable to think about the problem of friendly AI...

In its early stages, artificial general intelligence (AGI) will be safe. However, the early stages of AGI will be an excellent testbed for AI in the role of a servant, and for other variants of friendly AI. Ben Goertzel also advocates an experimental approach in a good post on his blog. If an era of safe and not especially smart AGI lies ahead of us, we will have time to think about more dangerous times.

He served on the AAAI Panel on Long-Term AI Futures, the expert group devoted to the long-range outlook for AI, where he chaired the committee on long-term monitoring, and he is described as follows:
McAllester spoke with me about the approaching "singularity," the event at which computers become smarter than people. He would not name an exact date for its arrival, but said it could happen within the next couple of decades and will certainly happen eventually. In his view of the singularity, two significant milestones will occur: operational sentience, at which we can easily converse with computers, and the AI chain reaction, at which a computer can improve itself without help and then do so again and again. The first milestone we will notice in automated assistance systems that genuinely help us; later on, talking to computers will become genuinely interesting. And for computers to be able to do everything people can do, we will have to wait for the second milestone.

Hans Moravec is a former professor at the Robotics Institute of Carnegie Mellon University, the namesake of Moravec's paradox, and the founder of SeeGrid Corporation, which builds computer-vision systems for industrial applications. His paper "Sensor Fusion in Certainty Grids for Mobile Robots" has been cited over a thousand times, and he was invited to write the Encyclopædia Britannica article on robotics back when encyclopedia articles were written by the world's leading experts in a field rather than by hundreds of anonymous internet commenters.

He is also the author of "Robot: Mere Machine to Transcendent Mind", which Amazon describes as follows:
In this compelling book, Hans Moravec predicts that by 2040 machines will approach the intellectual level of humans, and that by 2050 they will have surpassed us. But while Moravec predicts the end of the era of human dominance, his vision of that event is not a gloomy one. Far from recoiling from a future in which machines rule the world, he embraces it, describing a startling view in which intelligent robots become our evolutionary descendants. Moravec believes that at the end of this process "the vast reaches of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria."

Shane Legg is a co-founder of DeepMind Technologies, the AI startup Google bought in 2014 for $500 million. He earned his PhD at the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Switzerland and also worked at the Gatsby Computational Neuroscience Unit in London. At the end of his dissertation, "Machine Super Intelligence", he writes:
If anything is ever to come close to absolute power, it will be a superintelligent machine. By definition, it would be capable of achieving a vast range of goals in a wide variety of environments. If we prepare for this possibility carefully and in advance, we will be able not only to avert catastrophe but to usher in an era of prosperity unlike anything that has come before.

In a subsequent interview, he says:
AI is now roughly where the Internet was in 1988. Demand for machine learning already exists in specialized applications (search engines like Google, hedge funds, bioinformatics), and it grows every year. I expect the field to become mainstream and visible around the middle of the next decade. An AI boom should happen around 2020, followed by a decade of rapid progress, possibly after a market correction. Human-level AI will be created around the mid-2020s, though many people will not accept that it has happened. After that, the risks associated with advanced AI will start to become practical concerns. I won't talk about a "singularity," but I do expect that at some point after the creation of AGI, crazy things will start to happen. That is somewhere between 2025 and 2040.

He and his co-founders Demis Hassabis and Mustafa Suleyman signed the Future of Life Institute's open letter on AI risks, and one of their conditions for joining Google was that the company agree to set up an AI ethics board to examine these issues.

Steve Omohundro is a former computer science professor at the University of Illinois, founder of the vision and learning group at the Center for Complex Systems Research, and the inventor of various important advances in machine learning and computer vision. He has worked on lip-reading robots, the parallel programming language StarLisp, and geometric learning algorithms. He now heads Self-Aware Systems, "a team of scientists working to ensure that intelligent technologies benefit humanity." His paper "The Basic AI Drives" helped launch the field of machine ethics by pointing out that superintelligent systems will converge on potentially dangerous goals. He writes:
We have shown that all advanced AI systems are likely to exhibit a set of basic drives. It is essential to understand these drives in order to build technology that ensures a positive future for humanity. Yudkowsky has called for the creation of "friendly AI." To achieve this, we need to develop the science of "utility engineering," which will enable us to design utility functions that give rise to the consequences we actually want. The rapid pace of technological progress suggests that these issues may become critical soon.

At the link you can find his papers on "Rational AI for the Greater Good."

Murray Shanahan received his PhD in computer science from Cambridge and is now a professor of cognitive robotics at Imperial College London. He has published in areas including robotics, logic, dynamical systems, computational neuroscience and the philosophy of mind. He is currently working on the book "The Technological Singularity", due out in August. Amazon's promotional summary reads:
Shanahan describes technological advances in AI, both those informed by biology and those designed from scratch. He explains that once human-level AI is created — a theoretically possible but difficult task — the transition to superintelligent AI will be very rapid. Shanahan considers what the existence of superintelligent machines could mean for such things as personhood, responsibility, rights and identity. Some superintelligent AIs might be built to benefit humankind; some might get out of control. (Siri, or HAL?) The singularity presents humanity with both an existential threat and an existential opportunity to transcend its limitations. Shanahan makes it clear that if we want a good outcome, we need to imagine both possibilities.

Marcus Hutter is a professor of computer science at the Australian National University. Before that he worked at the Dalle Molle Institute for Artificial Intelligence in Switzerland and at Australia's national ICT research centre (NICTA), doing research on reinforcement learning, Bayesian inference, computational complexity theory, Solomonoff's theory of inductive inference, computer vision and genomic profiling. He has also written a great deal about the singularity. In the paper "Can Intelligence Explode?" he writes:
This century may witness a technological explosion of a magnitude deserving the name singularity. The default scenario is a society of interacting intelligent agents in a virtual world, simulated on computers with hyperbolically increasing computational resources. This is inescapably accompanied by a speed explosion as measured in physical time, but not necessarily by an intelligence explosion. If the virtual world is populated by free, interacting agents, evolutionary pressure should produce agents of ever-increasing intelligence competing for computational resources. The end point of this evolutionary acceleration of intelligence could be a society of maximally intelligent individuals. Some aspects of such a singularitarian society can already be studied theoretically with today's scientific tools. Well before the singularity arrives, merely by imagining such a virtual society one can foresee shifts — a sharp drop in the value of an individual, for example — that could have drastic consequences.

Jürgen Schmidhuber is a professor of AI at the University of Lugano and a former professor of cognitive robotics at the Technical University of Munich. He has developed some of the most advanced neural networks in the world, works on evolutionary robotics and computational complexity theory, and is a fellow of the European Academy of Sciences and Arts. In the book "Singularity Hypotheses" he argues that "if current trends continue, we will face an intelligence explosion within the next few decades." When asked directly about AI risk in his Reddit AMA, he replied:
Stuart Russell's concerns about AI seem reasonable. Can we do anything to shape the impact of AI? In an answer hidden in a nearby thread, I point out: at first glance, recursively self-improving Gödel machines offer a way to shape a future superintelligence. The self-rewriting of a Gödel machine is optimal in a certain sense: it will only make changes to its own code that it can prove to be improvements according to its original utility function. That means you get a chance to set it on the right path at the start. But other people may equip their Gödel machines with different utility functions. They will compete, and in the resulting ecology of agents some utility functions will be better suited to our physical universe than others, and those will find a niche in which to survive.
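
To make the mechanism described in that quote concrete, here is a minimal conceptual sketch in Python of a proof-gated self-rewriting step. This is my own illustration, not Schmidhuber's formalism; the proof_searcher object and its methods are hypothetical placeholders.

# Toy sketch of the idea quoted above: adopt a rewrite of our own code only
# when a proof shows it improves expected utility under the ORIGINAL utility
# function. The proof searcher and its methods are hypothetical placeholders.
def godel_machine_step(current_program, utility_fn, proof_searcher):
    candidate = proof_searcher.propose_rewrite(current_program)
    if candidate is not None and proof_searcher.proves_improvement(
            candidate, current_program, utility_fn):
        return candidate       # provably better: switch to the new code
    return current_program     # otherwise keep running the old code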

Richard Sutton is a professor and iCORE chair at the University of Alberta. He is a Fellow of the Association for the Advancement of Artificial Intelligence, co-author of the most widely used textbook on reinforcement learning, and the discoverer of temporal-difference learning, one of the most important methods in the field.

In a talk at an AI conference organized by the Future of Life Institute, Sutton argued that "there is a real chance that even within our lifetimes" an AI of roughly human intelligence will be created, adding that this AI "will not obey us," will "compete and cooperate with us," and that "if we create superintelligent slaves, we will get superintelligent adversaries." He concluded that "we need to think through the mechanisms (social, legal, political, cultural) to ensure the desired outcome," but that "inevitably, ordinary humans will become less important." He has raised similar concerns in a presentation at the Gatsby Institute. And there is this line in a Glenn Beck book: "Richard Sutton, one of the greatest AI specialists, predicts an intelligence explosion somewhere around the middle of the century."

Andrew Davison is a professor of robot vision at Imperial College London, leader of the robot vision research group and of the Dyson robotics laboratory, and the inventor of the computer-based localization and mapping system MonoSLAM. On his website he writes:
At the risk of putting myself in an awkward position within certain parts of the academic community to which I hope I belong, since 2006 I have taken entirely seriously the idea of a technological singularity: that exponentially growing technology may lead to superhuman AI and other developments that will change the world utterly and surprisingly soon (perhaps within the next 20-30 years). I have been influenced both by reading Kurzweil's "The Singularity Is Near" (which I found sensationalist but on the whole compelling) and by my own observation of the remarkable recent progress in science and technology, especially in computer vision and robotics, the fields I am personally involved in. Modern methods of inference, learning and estimation based on Bayesian probability theory, coupled with the exponentially growing power of cheap computer processors, are beginning to display astonishing, human-like capabilities, particularly in computer vision.

It is hard to grasp all the possible consequences of this, positive or negative, and here I will try to stick to facts rather than opinions (though I am not in the super-optimist camp myself). I seriously believe this is something worth discussing with scientists and with the public. I will keep a list of "signs of the singularity" and update it: small items of technology news that reinforce my sense that technology is advancing faster and faster, while very few people are thinking about the consequences.

Alan Turing and Irving John Good need no introduction. Turing invented the mathematical foundations of computing and gave his name to the Turing machine, Turing completeness and the Turing test. Good worked with Turing at Bletchley Park, helped build one of the first computers, and invented many well-known algorithms, including a fast algorithm for the discrete Fourier transform. In "Can Digital Machines Think?" Turing writes:
Let us assume that such machines are a genuine possibility, and look at the consequences of constructing them. To do so would of course meet with great opposition, unless we have advanced greatly in religious toleration since the days of Galileo. The opposition would come from the intellectuals who are afraid of being put out of a job. It is probable, though, that the intellectuals would be mistaken. There would be plenty to do in trying to keep one's intelligence up to the standards set by the machines, for once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage, therefore, we should have to expect the machines to take control.

Working at the Atlas Computer Laboratory in the 1960s, Good developed this idea further in "Speculations Concerning the First Ultraintelligent Machine":
Let an ultraintelligent machine be defined as a machine that can far surpass a human at any intellectual task. Since designing machines is one such intellectual task, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and human intelligence would be left far behind. Thus the invention of the ultraintelligent machine is the last invention that man need ever make.

* * * * * * * * * *

What bothers me is that this list might give the impression that there is some kind of dispute in the field between "believers" and "skeptics," with the two sides tearing each other apart. That has not been my impression.

When I read the skeptics' articles, I keep encountering two arguments. First, we are still very far from human-level AI, let alone superintelligence, and there is no obvious path to getting there. Second, if you are demanding bans on AI research, you are an idiot.

I agree entirely with both points. So do the leaders of the AI risk movement.

A survey of AI researchers (Müller & Bostrom, 2014) found that, on average, they assign a 50% probability to human-level AI appearing by 2040 and a 90% probability to it appearing by 2075. On average, 75% of them believe that superintelligence ("machine intelligence that greatly surpasses the performance of every human in most professions") will arrive within 30 years of human-level AI. The survey's methodology leaves something to be desired, but if we take its results at face value, most AI researchers agree that something worth worrying about will appear within one or two generations.
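
As a rough illustration of how those figures combine — my own back-of-the-envelope arithmetic, not a calculation from the paper, treating both numbers as probabilities and as independent, which the survey does not claim:

# Naive combination of the survey figures quoted above.
p_hlai_by_2040 = 0.50        # reported 50% probability of human-level AI by 2040
p_super_within_30y = 0.75    # reported 75% figure for superintelligence within 30 years after that

p_super_by_2070 = p_hlai_by_2040 * p_super_within_30y
print(f"Naive joint estimate of superintelligence by ~2070: {p_super_by_2070:.0%}")
# prints 38% -- i.e. "something worth worrying about" within one or two generations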

Yet Machine Intelligence Research Institute director Luke Muehlhauser and Future of Humanity Institute director Nick Bostrom have both said that their own AI timelines are considerably later than those of the scientists in the survey. If you look at Stuart Armstrong's data on AI timeline predictions, it is clear that, on the whole, the estimates of when AI will arrive made by proponents of AI risk do not differ from those made by AI skeptics. In fact, the most distant prediction in that table belongs to Armstrong himself — and Armstrong now works at the Future of Humanity Institute, drawing attention to AI risk and to the need for research on the goals of superintelligence.

The difference between proponents and skeptics is not in their estimates of when we should expect human-level AI, but in when we should start preparing for it.

Which brings us to the second point. The skeptics' position seems to be that, although we should probably assign a couple of smart people to do preliminary work on the problem, there is no need to panic or to ban AI research.

The AI risk proponents, meanwhile, insist that although there is absolutely no need to panic or to ban AI research, we should probably assign a couple of smart people to do preliminary work on the problem.

Yann LeCun is perhaps the most ardent skeptic of AI risk. He was quoted extensively in the Popular Science article and in the Marginal Revolution post, and he has also spoken with KDnuggets and IEEE about the "inevitable singularity questions," which he himself describes as "so far away that we can write science fiction about them." But when asked to clarify his position, he said:
Elon Musk is very worried about existential threats to humanity (which is why he builds rockets to send people to colonize other planets). Even though the risk of an AI uprising is very small and very far in the future, we need to think about it and design precautions and rules. Just as bioethics committees emerged in the 1970s and 1980s, before genetics came into wide use, we need AI ethics committees. But, as Yoshua Bengio has written, we have plenty of time.

Eric Horvitz is another expert frequently cited as a leading voice of skepticism and restraint. His views have been reported in articles such as "Microsoft's research chief thinks runaway AI won't kill us" and "Microsoft's Eric Horvitz believes AI is nothing to fear." But here is what he said in a longer interview with NPR:
Horvitz: I really do believe that the stakes are high enough to justify spending time and energy actively seeking solutions, even if the probability of such events is low.

This, broadly speaking, coincides with the position of many of the most ardent AI risk advocates. With "opponents" like these, who needs allies?

The Slate article "Don't Fear Artificial Intelligence" also, surprisingly, gets a lot of this right:
Yann LeCun, head of Facebook's AI lab, summed up the idea concisely in a Google+ post in 2013: "Hype is dangerous to AI. Hype has killed AI four times in the last five decades. AI hype must be stopped." LeCun and others are right to fear hype. Failure to live up to the inflated expectations set by science fiction leads to serious cuts in AI research budgets.

Scientists working on AI are smart people. They have no interest in falling into the classic political trap of splitting into camps and accusing each other of panic-mongering or head-in-the-sand denial. By all appearances, they are trying to strike a balance between getting preliminary work started on a danger that still lies far in the future, and generating so much hype that it comes back to hurt them.

I am not saying there is no disagreement about how soon this issue needs to be addressed. It mostly comes down to whether one can say "we'll solve the problem when we get to it," or whether one expects a takeoff so sudden that everything spins out of control, and which therefore demands preparation in advance. I see less evidence than I would like that most AI researchers with opinions on the matter appreciate the second possibility. What can you say when even the Marginal Revolution article quotes an expert claiming that superintelligence is not much of a threat because "smart computers won't be able to set goals for themselves" — when anyone who has read Bostrom knows that this is precisely the problem.

There is still a great deal of work to be done. But cherry-picking articles in which "real AI experts are not worried about superintelligence" is not that work.

Source: https://habr.com/ru/post/402379/

