Hello, readers of Habrahabr! I recently learned that DeepMind, a company working on artificial intelligence (AI), was acquired by Google for $500 million. I started searching the internet for interviews with DeepMind's researchers and found a Q&A with Western experts, including Shane Legg of DeepMind, collected on the LessWrong.com website. Below is my translation of the interview with Shane Legg, which I found interesting. The second part of the article will include interviews with ten other AI researchers.
Shane Legg is a computer scientist and AI researcher who works on theoretical models of superintelligent machines (AIXI). He completed his PhD thesis, "Machine Super Intelligence", in 2008. He was awarded the $10,000 Canadian Singularity Institute for Artificial Intelligence Prize. A list of Shane's publications can be found at the link.
Interview with Shane Legg, July 17, 2011
Original article
Abbreviations used:
HLAI: human-level AI (also AGI, Artificial General Intelligence)
SAI: superhuman-level AI
Q1: Assuming that HLAI research is not halted by a global catastrophe, in what year do you think HLAI will be developed with a probability of 10% / 50% / 90%?
Explanation:
P(HLAI by year | no wars ∧ no catastrophes ∧ political and economic support) = 10% / 50% / 90%
Shane Legg: 2018, 2028, 2050.
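To make the notation above concrete, here is a minimal sketch (not part of the interview) that treats Legg's three estimates as points on a cumulative probability curve P(HLAI by year) and interpolates between them; the straight-line interpolation is purely an illustrative assumption.

```python
# Toy illustration, not from the interview: read Legg's answer as three points
# on a cumulative probability curve P(HLAI by year) and interpolate linearly.
import numpy as np

quantile_years = np.array([2018.0, 2028.0, 2050.0])  # his 10%, 50%, 90% years
quantiles = np.array([0.10, 0.50, 0.90])

def p_hlai_by(year):
    """Linearly interpolated P(HLAI by `year`); flat outside the given range."""
    return float(np.interp(year, quantile_years, quantiles))

for y in (2020, 2030, 2040):
    print(y, round(p_hlai_by(y), 2))
# e.g. P(HLAI by 2030) comes out to about 0.54 under this straight-line reading
```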
Q2: What is the probability that the development of AI will turn out badly, or extremely badly?
Explanation:
P(terrible consequences | badly done AI) = ?  # extinction of humanity
P(extremely terrible consequences | badly done AI) = ?  # suffering of humanity
Shane Legg: It depends a lot on how you define the terms. That said, it seems to me that human extinction will eventually come, and technology will play its part in it. (This probably means the extinction of the species Homo sapiens, not of intelligent life in general; translator's note.) But there is a big difference between this happening a year after the invention of HLAI and a million years after it. As for the probability, I don't know. Maybe 5%, maybe 50%. I don't think anyone can give a good estimate.
If by suffering you mean prolonged suffering, I think it is unlikely. If a superintelligent machine wants to get rid of us, it will do it fairly efficiently. I don't think we would deliberately create a machine to maximize the suffering of humanity.
Q3: How likely is it that HLAI will upgrade itself to massively superhuman AI (SAI) within hours / days / less than 5 years?
Explanation:
P(SAI within hours | HLAI running at human speed, 100 Gb internet connection) = ?
P(SAI within days | HLAI running at human speed, 100 Gb internet connection) = ?
P(SAI within < 5 years | HLAI running at human speed, 100 Gb internet connection) = ?
Shane Legg: "Human-level AI running at human speed" is a rather vague notion. Without a doubt, the machine will be better than a human at some things and worse at others. What exactly it is better at can lead to a big difference in outcomes.
In any case, I suspect that once HLAI is created, the developers themselves will scale it up to SAI; the machine will not do it on its own. After that, the machine will most likely engage in self-improvement.
How quickly could this happen? Perhaps very quickly, but it may also never happen: there may be nonlinear complexity limits, meaning that even theoretically optimal algorithms yield diminishing gains in intelligence as computing power is added.
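As a rough illustration of the diminishing-returns scenario Legg describes (a toy model of my own, not something from the interview), suppose capability grew only logarithmically with compute; then each additional block of computing power would buy a smaller and smaller gain:

```python
# Toy model, purely illustrative: if capability grows only logarithmically with
# compute, each additional block of compute yields a smaller absolute gain in
# capability -- one possible shape of the "diminishing returns" limit.
import math

def capability(compute, k=1.0):
    """Hypothetical sublinear scaling law: capability = k * log2(compute)."""
    return k * math.log2(compute)

prev = capability(10)
for compute in (20, 30, 40, 50):
    cap = capability(compute)
    print(f"compute {compute:>2}: capability {cap:.2f} (gain {cap - prev:+.2f})")
    prev = cap
```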
Q4: Is it important to figure out how to make AI friendly to us and our values (i.e. safe), and to prove it, before trying to solve the problem of AI itself?
Shane Legg: I think this is something of a chicken-and-egg question. At the moment we cannot reach general agreement on what intelligence is and how to measure it, and we cannot even agree on how HLAI should work. How can we make something safe if we don't really know how it will work? It may be useful to consider some theoretical questions. But without a concrete and grounded understanding of AI, I think an abstract analysis of these issues becomes very shaky.
Q5: How much money is needed right now to mitigate the possible risks from AI (in terms of advancing your personal long-term goals, for example surviving this century): less / enough / slightly more / much more / incomparably more?
Shane Legg: Much more. However, as is often the case with charity, simply pouring money into the problem is unlikely to solve it, and it may even make the situation worse. I really think the main question is not one of money but one of culture (emphasis added by the translator). I think changes in society will begin when there is progress in AI and people start taking the possibility of HLAI appearing in their lifetimes more seriously. Until then, I think serious study of AI risks will remain marginal.
Q6: Do risks from AI take priority over other existential risks, such as those associated with advanced nanotechnology? Explanation: which existential risks (such as the extinction of humanity) are most likely to have the greatest negative impact on your personal long-term goals if nothing is done to reduce them?
Shane Legg: For me it is the number one risk of this century, with the creation of a biological pathogen a close second.
Q7: What is the current level of awareness of AI risks, relative to the ideal level?
Shane Legg: Too low... but it is a double-edged sword: by the time the mainstream research community starts to worry about the problem, we may be facing some kind of arms race if large companies and/or governments are quietly panicking. In that case, things will most likely turn out badly.
Q8: Can you name a milestone such that, once it is reached, we will probably get HLAI within five years?
Shane Legg: That is a tough question! When a machine can play a fairly wide range of games, using a perceptual stream for input and output, and can reuse its experience across different games, I think we will be getting close.
July 17, 2011
Translator's note: I took the liberty of highlighting Shane Legg's opinion on culture: he considers the problem of resources for the invention of AI less important than the question of culture. When I try to apply this opinion to Russian reality, I have mixed thoughts: negative ones, because there is almost no cultural exchange across the whole of vast Russia, and rather positive ones, because developers who seriously value their lives will either leave the country or contribute to the development of the social sphere.