The term “artificial intelligence” was coined only about 60 years ago, and it is now far more than just a term. Many AI experts are trying to work out what the future of this technology will be. One of the central questions is the technological singularity: the moment when machines reach a level of intelligence that exceeds that of humans.
And although the singularity is still largely the stuff of science fiction, the possibility of it actually happening looks increasingly real. Corporations and startups, including Google, IBM and others, are actively working on artificial intelligence, and the results are already visible. For example, there is now a robot that looks like a person, can keep a conversation going, read emotions (or at least try to) and perform various kinds of tasks.
Probably the most confident of all the proponents of the view that the singularity is inevitable is Ray Kurzweil, director of engineering at Google. He believes that machines will become smarter than people around 2045.
SoftBank CEO Masayoshi Son, himself a well-known futurist, is convinced that the singularity will come within this century, around 2047. He is doing his best to hasten its arrival, creating his own projects and buying up others: SoftBank recently acquired the robotics company Boston Dynamics from Google. The company also invests billions of dollars in technology ventures.
Not everyone shares the optimism of those who want to accelerate the arrival of the singularity; some consider strong AI dangerous.
Among those who fear intelligent machines are Elon Musk, Stephen Hawking and other scientists and businesspeople. They argue that the emergence of strong AI would be the beginning of the end of human civilization. Here are the opinions of several experts.
Louis Rosenberg, CEO of Unanimous AI
“In my opinion, which I expressed at TED this summer, artificial intelligence will become sentient and overtake humans in its development; this is exactly what people call the singularity. Why am I so sure this will happen? It's simple. Mother Nature has already proved that a mind can emerge from a massive number of homogeneous computing elements (i.e., neurons) forming adaptive networks (i.e., a brain).
In the early 1990s I pondered this question, and it seemed to me that AI would exceed human capabilities around 2050. Now I believe it will happen sooner, probably as early as 2030.
I believe that an artificial intelligence created here on Earth is just as dangerous as an alien AI arriving from another planet. In either case, that AI will have its own values, morals, feelings and interests.
Assuming that the interests of an AI will coincide with ours is absurdly naive. To understand what a conflict of interests could lead to, just think about what humans have done to nature and to the other living creatures on Earth.
So we need to prepare for the inevitable appearance of a sentient AI with the same seriousness we would give to the arrival of a ship from another solar system: both are threats to the existence of our own species.
What can be done? I do not think we can delay the arrival of self-aware AI. We humans are simply incapable of holding back dangerous technologies. It is not that we lack good intentions; we just rarely understand the potential threat of our own inventions, and by the time we realize it, it is usually too late.
Does this mean we are doomed? For a long time I thought so, and I wrote two novels about the inevitable destruction of mankind. But now I think humanity will survive if we become smarter, much smarter, and manage to stay ahead of the machines.
Pierre Barreau, CEO of Aiva Technologies
I believe there is a serious misconception about how quickly “superintelligence” will appear, rooted in the assumption that the exponential growth of computing performance can be taken as a given.
First, at the hardware level, we have almost reached the limits imposed by Moore's law, and there is no certainty that emerging technologies such as quantum computing can keep increasing the performance of computer systems at the rate we have seen so far.
Second, at the software level, we still have a long way to go. Most AI systems need many training cycles to learn to perform a given task, whereas we humans learn far more efficiently: a few examples and repetitions are enough for us.
Today's AI is also very narrow in scope: systems focus on specific problems, such as recognizing photos of cats, dogs or cars, or composing music. So far there are no systems that can do all of these things at once.
This is not to say that we should not be optimistic about the development of AI. But it seems to me there is too much hype around the topic, and our illusions about what AI can and cannot do may soon be dispelled.
If that happens, a new “AI winter” may set in, bringing lower levels of funding for artificial intelligence research. That is probably the worst thing that could happen to the field, and everything must be done to prevent such a scenario.
So when will the singularity come? I think it depends on what is meant by the term. If we are talking about an AI passing the Turing test and artificial systems reaching human-level intelligence, that will happen around 2050. It does not mean the AI will necessarily be smarter than us.
If we are talking about an AI that is superior to humans in absolutely everything, then we first have to understand how our own mind works, and only then think about creating something that surpasses it. The human brain is still the hardest problem, one that even the best of the best have not yet solved, and it is certainly more complex than the most sophisticated neural networks or combinations of them.
Raja Chatila, head of the Institute for Intelligent Systems and Robotics (ISIR) at Pierre and Marie Curie University
The concept of technological singularity has neither a technological nor a scientific basis.
The main argument is the so-called “law of accelerating returns”, promoted by the prophets of the technological singularity, chiefly Ray Kurzweil. That law appears to stem from Moore's law, which, as we know, is not a scientific law at all: it is an empirical observation based on the development of the electronics industry.
We all know the limitations of Moore's law (the point at which we reach quantum scales, for example) and the fact that a change of architecture could change everything. This matters because it shows that Moore's law is not a law in any absolute sense.
Yet supporters of the singularity try to draw a parallel between the evolution of species and the evolution of technology without much justification. They believe that the constant growth of computing power will eventually yield an artificial intelligence that surpasses the human mind, and they assume this will happen somewhere between 2040 and 2050.
But ordinary computing devices are not a mind. We have about 100 billion neurons in the brain, and it is not just their number but above all their structure and the way they interact that allows us to think and act.
All we can do is create particular kinds of algorithms to achieve particular goals and solve particular problems (and call that intelligence). In reality, all these systems are very limited in what they can do.
My conclusion is that the singularity is a matter of faith, not science.
Gideon Shmuel, CEO of eyeSight Technologies
We spend a great deal of time trying to understand how to make machines self-learning in the broad sense. The challenge is that once such systems exist, they can learn extremely quickly, exponentially, and in a matter of hours or even minutes they will be able to surpass a human.
I would like to say that technology is neither good nor bad, that it is just a tool, and that the tool becomes good or bad only in the hands that hold it. But when it comes to the singularity, people and users have nothing to do with it; it concerns only the machines. They can slip out of our hands, and the only thing that can be said with certainty is that we cannot predict the consequences.
Plenty of science fiction books and films show us superintelligent machines destroying humanity, locking everyone up, or doing other things that neither you nor I would like.
What we really need to do is think about the directions in which AI technologies are developed. Take machine vision, for example: the risk there is relatively small. If a system can recognize objects and understand what they are, nothing bad will happen.
It is in our own interest to have machines that can teach themselves in order to understand what is happening around them. The risk lies in the intermediate layer, which takes in data and translates external factors into action.
These actions can be very fast, and they may play out in ordinary reality (AI-driven cars, for example) or in virtual reality (information processing, control of resources, identification, and so on).
Should we be afraid of this, especially the latter? Personally, I am afraid the answer is yes.
Patrick Winston, professor of computer science at MIT, AI specialist
I have been asked about this many times. For the past 50 years people have been saying that human-level AI is about 20 years away; in other words, every generation expects AI to arrive soon. My answer is much the same, and in the end it may even turn out to be true.
In my opinion, creating AI is nothing like, say, sending a man to the Moon. We already had all the technology needed to get to the Moon, but we have almost none of what is needed to create an AI. More technological breakthroughs are required, and right now it is hard to reason about them in terms of any particular time frame.
Of course, it all depends on how many scientists work on the problem of creating AI. A huge number of specialists are now involved in machine learning and deep learning, and perhaps some of them will figure out how the human mind works.
When will we get the machine equivalent of the Watson-Crick breakthrough? I think in 20 years, and in the end I believe it will happen.