
The future of artificial intelligence



In the previous article, we described the past and present of artificial intelligence: what AI looks like today, the differences between strong AI, weak AI, and AGI (artificial general intelligence), and some philosophical ideas about the nature of consciousness. Weak AI can already be found everywhere, in the form of software designed to perform specific tasks intelligently. The ultimate goal is strong AI, and it is true strong AI that would resemble what we know from popular fiction.

Artificial general intelligence (AGI) is the modern goal to which many researchers devote their careers. AGI does not need any kind of consciousness, but it must be able to cope with any data-related task we set before it. Of course, it is human nature to want to predict the future, and that is what we will do in this article. What are the best guesses we can make about what to expect from AI in the near future? What ethical and practical problems could arise with the emergence of a conscious AI? In that expected future, should AI have rights, or should it be feared?

The future of AI


Optimism among AI researchers about the future has shifted over the years, and even today's experts disagree. Trevor Sands, an AI researcher at Lockheed Martin, is cautious in his assessments:
Since AGI first emerged as a concept, researchers (and optimists) have argued that it is only a few decades away. Personally, I believe we will see AGI emerge within the next 50 years, because hardware is finally catching up with the theory, and more and more organizations see the potential in AI. AGI is the natural conclusion of existing efforts to research AI.

Within that time frame, even a sentient AI could appear, as Albert says (another AI researcher, who asked to be identified only by a pseudonym):

I hope to see it in my lifetime. At the very least, I hope to see machines smart enough that people argue about whether they have consciousness. What that really means is a harder question. If mind means "self-awareness", then it is not so difficult to imagine a smart machine with a model of itself.

Sands and Albert believe that today's research on neural networks and deep learning is the right path, one likely to lead to the creation of AGI in the near future. In the past, researchers focused either on ambitious attempts to create strong AI or on what was essentially weak AI. AGI lies between the two, and so far the results of neural network research look fruitful and will most likely lead to even more breakthroughs in the coming years. Large companies, Google in particular, clearly believe this will happen.

Implications and ethical problems of strong AI


Every discussion of AI raises two problems: how will it affect humanity, and how should we treat it? Literature can always be viewed as a good indicator of people's thoughts and moods, and science fiction is full of examples of both problems. Will a sufficiently advanced AI try to eliminate humanity, like Skynet? Or will AI need to be given rights and protections to avoid acts of cruelty like those found in A.I. Artificial Intelligence?

[Image: a scary AI]

In both cases, the implication is that the creation of a true AI will bring about a technological singularity: a period of exponential technological growth occurring over a short span of time. The idea is that an AI could improve itself, or produce ever more advanced AIs. Because this would happen quickly, radical changes could occur within a single day, resulting in an AI far more advanced than anything created by humanity. That could mean we end up with a super-intelligent, hostile AI, or with a sentient AI deserving of rights.

Malevolent AI


What if this hypothetical super-intelligent AI decides it does not like humanity? Or is simply indifferent to us? Should we fear this possibility and take precautions? Or are such fears nothing more than unfounded paranoia?

Sands says: "AGI will be revolutionary, and how it is applied will determine whether its impact is positive or negative. In much the same way, splitting the atom can be seen as a double-edged sword." Of course, here we are talking only about AGI, not about strong AI. What about the possibility of a sentient, strong AI emerging?

Most likely, the real danger lies not in a malicious AI but in an indifferent one. Albert considers the example of an AI given a simple task: "There is a story about the owner of a paper-clip factory who gives an AGI a seemingly simple task: maximize production. The AGI then applies its intellect and figures out how to turn the entire planet into paper clips!"

Albert rejects the possibility described in this absurd thought experiment: "You are telling me this AGI understands human speech and is super-intelligent, yet the subtleties of the request are beyond it? That it cannot ask clarifying questions, or guess that turning all people into paper clips is a bad idea?"

In other words, if an AI is smart enough to understand and execute a scenario dangerous to people, it should also be smart enough to understand that it should not. Asimov's Three Laws of Robotics could also play a role, although the question remains: can they be implemented in such a way that the AI cannot change them? And what about the well-being of the AI itself?

AI rights


On the opposite side of the problem is the question of whether AI deserves protection and rights. If a sentient AI appeared, should a person be allowed to simply switch it off? How should we treat it? Animal rights remain hotly contested, and there is still no consensus on whether animals possess consciousness or intelligence.

The same disputes will apparently unfold over AI beings. Would making an AI work day and night for the benefit of humanity amount to slavery? Should we pay it for its services? And what would an AI do with such payment?


The film is bad, the idea is good

It is unlikely that we will have answers to these questions any time soon, let alone answers that satisfy everyone. "How can we guarantee that an AI comparable to a human will have the same rights as a human? Given that such an intellect is fundamentally different from a human one, how do we define the fundamental rights of an AI? Moreover, if we consider AI an artificial form of life, do we have the right to take that life away from it (to switch it off)? These ethical questions need serious consideration before AGI is ever created," says Sands.

As AI research continues, these and other ethical questions will undoubtedly remain contentious. By all appearances, we are still quite far from the moment when they become pressing. But conferences are already being organized to discuss them.

How to participate


Research and experiments with AI have traditionally been the domain of scientists and researchers in corporate laboratories. In recent years, though, the growing culture of free information and open source has spread to AI as well. If you are interested in shaping the future of AI, there are several ways to get involved.

You can conduct your own experiments with AI using freely available software. Google offers a browser-based sandbox (TensorFlow Playground) for experimenting with simple neural networks. Open-source neural-network libraries such as OpenNN and TensorFlow are also available. They are not the easiest to use, but determined hobbyists can build quite a lot on top of them.
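To give a taste of what such an experiment looks like, here is a minimal sketch of a neural network learning the XOR function, written in plain Python with no external libraries (rather than using TensorFlow or OpenNN); all names and parameters here are illustrative, not taken from any of the libraries mentioned above.

```python
import math
import random

# A tiny two-layer neural network that learns XOR via backpropagation.
# Illustrative sketch only: 2 inputs -> 8 sigmoid hidden units -> 1 output.

random.seed(42)

# Training data: the XOR truth table
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

HIDDEN = 8
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    """Return hidden activations and the network's output for input x."""
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j])
         for j in range(HIDDEN)]
    out = sigmoid(sum(w2[j] * h[j] for j in range(HIDDEN)) + b2)
    return h, out

LR = 0.5
for _ in range(20000):
    for x, target in DATA:
        h, out = forward(x)
        # Gradient of squared error through the output sigmoid
        d_out = (out - target) * out * (1 - out)
        for j in range(HIDDEN):
            # Compute the hidden gradient before updating w2[j]
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= LR * d_out * h[j]
            w1[j][0] -= LR * d_h * x[0]
            w1[j][1] -= LR * d_h * x[1]
            b1[j] -= LR * d_h
        b2 -= LR * d_out

predictions = [1 if forward(x)[1] > 0.5 else 0 for x, _ in DATA]
print(predictions)
```

XOR is the classic first experiment because it is not linearly separable: a single-layer network cannot learn it, which is exactly why the hidden layer matters. The same structure, scaled up by the libraries above, underlies the deep learning research the article discusses.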



The best way, however, is to do what you can to advance professional research; in the US, that means advocating for science funding. AI research, like any scientific research, depends on uncertain funding. If you believe technological innovation is the future, then helping to secure research funding is a worthy cause.

Optimism about the development of AI has fluctuated over the years. We are currently at a peak, though that may well change. What cannot be denied is that the possibility of AI fires the public imagination, as science fiction and other entertainment make obvious. Strong AI may appear in a couple of years, or in a couple of centuries. The only certainty is that we will not stop moving toward that goal.

Source: https://habr.com/ru/post/370235/

