
Should we be wary of the development of artificial intelligence?



Recently it has become fashionable to fear that a highly developed artificial intelligence will one day rise against people, or harm them in some other way. World media regularly publish such forecasts, and similar thoughts have been voiced by well-known businessmen, scientists, and IT specialists. But there is another, optimistic, view of the future role of AI technologies.

The voices of the pessimists are getting louder, and their camp is being joined by more and more "agents of influence": individuals and entire companies to whom ever wider sections of the public listen. Here is a fairly recent example from The Guardian:


[Embedded video]
This video reflects the fears already circulating in society while amplifying them further. Until recently it was believed that many intellectual professions involving the creation of something new would not be subject to automation, for the simple reason that a sufficiently advanced artificial intelligence could not be built in the foreseeable future. Nevertheless, a recent survey showed that about a third of software developers in the West seriously fear that machines will take their place. These fears are fueled by academia: in 2013 the University of Oxford published the study "The Future of Employment", which warned that software development itself would soon be automated.

It is not surprising that, against this background, many of us would prefer that AI technologies not develop at all. Why cultivate something that may soon put you out of work?

However, many people hold a different view. Artificial intelligence is one of the most important IT technologies to have developed over the past decade, and for us AI is not a threat but a tool with remarkable capabilities. Roughly speaking, AI can be described as the ability of a computer to understand human questions, search for information in databases, and formulate accurate, understandable answers. It is also the ability of a computer to process huge amounts of data, make decisions, and take (or recommend) certain actions.
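To make this "question in, answer out" description concrete, here is a deliberately tiny sketch of such a pipeline in Python. Everything in it, the fact table, the names, the matching logic, is hypothetical and purely illustrative; real assistants like Siri or Watson are incomparably more complex:

```python
# A toy sketch of the "understand a question, search a database,
# formulate an answer" loop described above. All names and data
# here are illustrative, not taken from any real assistant.

FACTS = {
    "capital of france": "Paris",
    "boiling point of water": "100 degrees Celsius at sea level",
}

def normalize(question: str) -> str:
    """Crude 'understanding': lowercase and strip punctuation."""
    return question.lower().strip(" ?!.")

def answer(question: str) -> str:
    key = normalize(question)
    # 'Searching the database' is a dictionary lookup in this toy model.
    fact = FACTS.get(key)
    if fact is None:
        return "I don't know yet."
    # 'Formulating an accurate and understandable answer'.
    return f"The {key} is {fact}."

if __name__ == "__main__":
    print(answer("Capital of France?"))  # The capital of france is Paris.
    print(answer("Meaning of life?"))    # I don't know yet.
```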

Everyone knows such examples of AI as Siri, Cortana, and IBM Watson. The latter is a system consisting of a supercomputer and special software. Watson won the quiz show Jeopardy! and is also used to process large volumes of medical data in the treatment of cancer.

The AlphaGo system, developed by Google DeepMind, beat Lee Sedol, holder of the ninth professional dan and one of the best Go players in the world. Until then it was believed that computers would not be able to win at Go for another decade.

But all this is only the first step toward a truly powerful artificial intelligence, something like JARVIS from the film "Iron Man".



What opportunities will AI give us?



The Terminator is one of the most famous examples of AI (for now, only in the movies).

First of all, equal opportunities in a number of fields. Today the Google search engine provides equal access to the global information environment no matter where you live. Future artificial intelligence systems will likewise equalize people's access to all sorts of services, from health care to investment advice, or at least narrow the gap in such opportunities between residents of different countries.

AI will find a place in the exact sciences, finance, education, design, cooking, entertainment, and many other areas. The cost of AI "labor" will be extremely low, so we will be able to use it for free or very cheaply. In other words, the widespread introduction of advanced artificial intelligence technologies will make many services more accessible, improving the quality of life of the planet's ever-growing population, and perhaps bringing us closer to universal abundance.

(Almost) no reason for fear



Robots are just one possible embodiment of AI. It is by no means certain that we will communicate with AI through such machines. It may be that robots remain mere dumb assistants, while AI takes on the solution of algorithmic problems.

People tend to exaggerate and give free rein to their emotions, especially when it comes to new technologies. Besides, millions of years of evolution, that is, of survival, have endowed us with a fear of everything we do not understand. Who knows what to expect from your "smart" computer... And Hollywood and journalists actively feed our fears of the future.

In the 1970s, scientists discovered DNA restriction enzymes, which opened the way to genetic engineering. The world media immediately began frightening everyone with terrible laboratory-bred viruses and all sorts of mutants. Instead, we got more effective medicines and higher agricultural productivity. Remarkably, regulation in the field of genetic engineering was established at the initiative of scientists and doctors, not officials: in 1975 the Asilomar Conference was convened, at which the professional community developed principles and recommendations that to this day help genetic engineering develop and stay within ethical bounds.

And remember the hysteria that arose in the media after the cloning of Dolly the sheep in 1996. We were promised the imminent appearance of armies of cloned soldiers, the mass breeding of geniuses, and whole farms of specially grown organ donors. And where is all that now?

Threat Prevention



This is what the worst nightmares of the opponents of AI look like.

Of course, a strong AI that thinks no worse than a human is not at all the same thing as today's virtual assistants and voice search engines. Along with breathtaking opportunities we will also acquire certain risks and threats; any radically new technology is frightening. Stephen Hawking called artificial intelligence potentially the greatest mistake of mankind. Bill Gates admitted that "in a few decades artificial intelligence will become developed enough to be a cause for concern." Elon Musk, head of the high-tech companies Tesla Motors and SpaceX, has spoken of AI as nothing less than a "major existential threat."

However, Musk himself immediately decided to invest personally in a company developing AI, guided, in all likelihood, by the principle "if you can't beat the enemy, lead them." The entrepreneur's idea was supported by PayPal co-founder Peter Thiel and Y Combinator head Sam Altman, who helped fund the non-profit organization OpenAI, created to oversee the development of AI.

No one knows for certain whether existing technologies are enough for a full-fledged AI to emerge, or whether we must wait another dozen years for new computing capabilities to appear. But suppose the technologies we already know are exactly what is needed. The military will be the first to take an interest in creating artificial intelligence (DARPA, for example). Even if you create a friendly AI, nothing prevents some state from trusting its war hawks and creating a hostile one.

A future AI must understand the absolute value of human life. And before intelligent machines arrive, that value must be understood by their creators, the people responsible for developing the necessary technologies. OpenAI representatives believe the world needs, right now, a leading research institute that is not subordinate to any single government or commercial corporation, and whose research is available to the entire scientific community on an open-source basis.

OpenAI, headed by Ilya Sutskever, one of the world's best-known machine learning experts, is committed to ensuring that AI does not end up in the hands of any particular group pursuing its own goals. Its approach to superintelligence safety is to eliminate dangerous backdoors that could be introduced (even by mistake) at the stage when the AI is created.
This approach, endorsed by Elon Musk himself, assumes that good people will create a good AI, which will then be distributed to other good people. But let us not forget that over the last hundred years humanity has lived through two world wars and a cold war, created thermonuclear, chemical, and biological weapons, continues to take part in many local conflicts, and wages a global war on terror. Under such conditions the emergence of AI could be a significant destabilizing factor. Does civilization have other ways to protect itself?

We should not discount the simplest possibility: the creation of pseudo-AI. Technological progress cannot be stopped and scientific research cannot be banned, but both can certainly be steered in a particular direction. Something similar happened in its time with cloning. After Dolly the sheep appeared, the media periodically exploded with news of the first cloned human (something prohibited in almost all developed countries), but every such story turned out to be a hoax. In the same way, developing pseudo-intelligent algorithms that solve a narrow range of tasks under our control would shield humanity from possible threats.

At this stage, developing AI means solving applied problems. We can build a learning system that makes diagnoses better than a professional doctor, create a program that beats a human at any game, or make a construction robot that lays bricks faster than anyone. Some researchers believe this is where we should stop. It is enough to have one artificial intelligence that solves the Millennium Prize Problems and another that brews the most delicious coffee in the universe, and these would be two different intellects, among thousands upon thousands of others, never united into a single network. A toy version of such a narrow system is sketched below.
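As promised above, here is roughly what such a narrow, single-purpose system looks like in practice: a toy diagnostic classifier. It assumes scikit-learn and its bundled breast cancer dataset purely for illustration and has no relation to any real medical product:

```python
# A toy illustration of a narrow, single-task "pseudo-AI":
# a classifier trained on one dataset, useless outside it.
# Requires scikit-learn; the dataset and task are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# The system "diagnoses" only this one narrow problem; it has no goals,
# no self-preservation, and no ability to act outside its task.
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```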

The desire to destroy humanity, or even a single person, is largely a product of nature. Evolution has honed the instincts that help us survive and propagate our species. The instinct of self-preservation and the drive for dominance are part of what makes us the human, intelligent species. By not building such drives into artificial intelligence, we can protect ourselves from possible side effects of AI.

Real contribution



Perhaps, before creating an artificial mind, we should first fully understand how the natural one works? The world's largest project to study the human brain is due to be completed only in 2023.

It is worth noting here that one cannot simply issue every possible AI an ultimatum in the form of the Three Laws of Robotics. Any laws can be technically altered, and they are riddled with logical holes; in any case, they are not treated as axioms in the security world. Under certain conditions even a cute cat can be a threat to a person, let alone an AI, and the task of scientists at this stage is to minimize the risks and reduce the number of possible threats to humanity. The toy example below shows how quickly such hard-coded laws run into contradictions.
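To see why hard-coded laws are fragile, consider a toy rule engine, purely illustrative and not any real safety framework, in which two Asimov-style rules give contradictory verdicts on the same request:

```python
# A toy illustration of why hard-coded "laws" are fragile:
# two rules inspired by Asimov give contradictory verdicts
# for the same request. Purely illustrative, not a real framework.

def law1_no_harm(action: dict) -> bool:
    """'Do not harm a human': forbid anything flagged as harmful."""
    return not action["harms_human"]

def law2_obey(action: dict) -> bool:
    """'Obey human orders': permit anything a human ordered."""
    return action["ordered_by_human"]

def permitted(action: dict) -> bool:
    verdicts = [law1_no_harm(action), law2_obey(action)]
    # With no priority scheme the rules simply disagree, and any
    # tie-break (all(), any(), a fixed ordering) is itself a design
    # decision that can be wrong, or be changed later.
    return all(verdicts)

# A human orders an action that harms another human:
order = {"harms_human": True, "ordered_by_human": True}
print(law1_no_harm(order), law2_obey(order))  # False True: the laws conflict
print(permitted(order))  # False, but only under this arbitrary tie-break
```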

Besides OpenAI, there are several organizations in the world whose purpose is not only to create AI-like systems (such as the digital brain map of the European Human Brain Project), but also to work out safe conditions for the existence of a superintelligence.

Prague-based GoodAI was created by Marek Rosa, founder and manager of the game studio Keen Software House. The connection with the games industry is no accident: integrating AI technologies improves the quality of the games themselves and makes virtual worlds truly alive. GoodAI's mission, however, is more global: to create a genuinely good mind aimed at developing new technologies, scientific discoveries, space exploration, and much more. A year ago GoodAI reported several successes.

First came the Pong-playing AI project, able to learn to play Pong and Breakout from unstructured input. Second, the Maze game AI project was developed, capable of playing a video game that requires forming and pursuing consistent goals. In addition, the Brain Simulator project, in which users can try to design their own AI, was released free to the public.
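The "learning to play from unstructured input" these projects describe is typically some form of reinforcement learning. As an illustration of the family of techniques involved, here is a minimal tabular Q-learning loop with a toy environment. This is a generic sketch, not GoodAI's actual code, and the environment interface is an assumption modeled on the common Gym convention:

```python
# A minimal tabular Q-learning loop, the simplest relative of the
# methods used to learn games like Pong from raw input. Generic
# sketch only; not code from GoodAI or DeepMind.
import random
from collections import defaultdict

class Corridor:
    """Toy 'game': start at 0, reach position 4 for a reward of 1."""
    actions = [+1, -1]  # step right or left

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos = max(0, self.pos + action)
        done = self.pos >= 4
        return self.pos, (1.0 if done else 0.0), done

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn action values purely from trial-and-error reward."""
    q = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(episodes):
        state = env.reset()
        for _ in range(100):  # cap episode length
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # Standard Q-learning update toward the bootstrapped target.
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (
                reward + gamma * best_next - q[(state, action)]
            )
            state = next_state
            if done:
                break
    return q

q = q_learning(Corridor())
print(max(Corridor.actions, key=lambda a: q[(0, a)]))  # learned first move: +1
```

Nothing in this loop has goals beyond maximizing the game's reward signal, which is precisely the "narrow intellect" the previous section argues for.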

When Google acquired DeepMind (the developers of AlphaGo and, more generally, of networks capable of playing video games at a human level), one condition of the deal was that ethical questions and the safety of artificial intelligence be addressed. Investing more in AI safety than in developing AI itself is another way to head off threats, though some believe such research should be supported only at companies that were never intended to make a profit.

Take Nnaisense, for example, a young Swiss company. The startup, which develops AI based on training neural networks, was created by Jürgen Schmidhuber, scientific director of IDSIA, the Swiss artificial intelligence laboratory. The team began as a university project and is now ready to enter the "deep data" market. It is teams like this, with many years of research experience and no aim of short-term profit, that become the pioneers of artificial intelligence. Microsoft, Facebook, Google, and the other AI market leaders are not always able to quickly turn scientific research into a commercially successful project. Telling in this regard is the story of Boston Dynamics, the popular maker of "frightening" robots, which Google first bought and is now trying to sell.

Universities and companies that live on their founders' own money can afford much longer development cycles. Moreover, some researchers believe that freedom from the need to tailor AI toward making money is a necessary condition for creating a "good" AI.

There is even an entire research institute, the Singularity Institute for Artificial Intelligence, dedicated to creating an exclusively friendly AI. Many futurologists support the theory of friendly AI, including the famous Ray Kurzweil, a director of engineering at Google working on machine learning and natural language processing.

Fear has big eyes


A common mistake is to picture AI as some kind of animal. AI will be an incredibly powerful tool that expands our capabilities and our access to information resources and services. How we manage all this depends only on us.
Perhaps by striving to make AI simply an effective tool for solving global problems, rather than trying to recreate a semblance of the human mind in a digital environment, we will create a new, safe world. In any case, the future cannot be predicted, only guessed at. It is therefore worth supporting projects aimed at peaceful, scientific, research goals, and spreading accurate information about real successes in the field of AI as widely as possible. When the subconscious fear of AI turns into curiosity, humanity will be able to devote more effort to solving real problems in society, ecology, economics, and science.

You must agree that funding AI research and solving the ethical problems of computing is far more useful than spending billions on thermonuclear weapons and other purely military systems, threats far worse and far more real than any artificial intelligence.

Source: https://habr.com/ru/post/372017/
