At the threshold stood a huge mountain.
I fell to the bottom of my own cheek.
And that daisy has not yet grown
On which I shall read myself.
Robert Rozhdestvensky
Artificial intelligence, or AI, is a rapidly developing technology that deserves even more discussion than it already receives. It is advancing together with complementary technologies such as neural networks and machine learning (joined more recently by the Internet of Things, IoT), and, according to rumor, is even preparing to take over the whole world. With our direct help, of course. People talk and write about it incessantly. AI is already used in complex modeling, in games, in medical diagnostics, in search engines, in logistics, in military systems and in many other places, promising to cover, and perhaps thoroughly plow over, the entire post-industrial landscape in the foreseeable future. It has even begun writing literary works like this one: “Once upon a time there was a golden horse with a golden saddle and a beautiful purple flower in her hair. The horse brought the flower to the village, where the princess began to dance at the thought of how beautiful and good the horse looked.”
What can one say? Horses are like that. And princesses, as we know, are mostly to be found in villages... Still, AI has the ability to learn, unlike the unforgettable Lyapis Trubetskoy of Ilf and Petrov, whom the lines above bring to mind. The first pancake comes out lumpy not only in literature, so the rest of AI's admirers should brace themselves as well.
And yet AI is a mantra that technologists, academics, journalists and venture capitalists repeat from time to time to draw attention both to the problems of humanity and to their own beloved selves. Some experts, among them such well-known figures of science and business as Stephen Hawking, Bill Gates and Elon Musk, worried not so long ago about the future of AI, since the further development of AI technologies may open a Pandora's box in which AI becomes the dominant form of “life” on our planet. Other experts are occupied with developing ethical standards to curb the destructive power of AI (which, to be fair, has not yet destroyed anything) and direct it to serve the common good of civilization.
The Pentagon, for example, has already decided that AI is a key area in which maximum effort must be applied to keep China and Russia from taking the lead. To that end, the US is creating a dedicated AI center under the leadership of the Secretary of Defense.
Currently, a significant part of what is called AI in the public sphere is really so-called machine learning (ML). In particular, using Big Data, ML technology allows a computer program to learn from the collected data and to make predictions with increasing accuracy as it trains, for use in automatic (or human-supervised) decision-making. In general, ML is an algorithmic field combining ideas from statistics, computer science and many other disciplines to develop algorithms that do exactly that.
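To make the idea concrete, here is a minimal sketch of what “learning from collected data to make predictions” looks like in code. The fraud-detection framing and all the numbers are hypothetical, chosen only for illustration; a real system would involve far richer features, validation and monitoring.

```python
# A toy "learn from data, then predict" loop, the core of ML as
# described above. Features and labels here are invented examples.
from sklearn.linear_model import LogisticRegression

# Collected historical data: [amount_usd, hour_of_day]; label 1 = fraud
X = [[900, 3], [15, 14], [1200, 2], [40, 19], [7, 12], [2500, 4]]
y = [1, 0, 1, 0, 0, 1]

model = LogisticRegression().fit(X, y)      # the "learning" step

new_case = [[1800, 1]]                      # an unseen transaction
print(model.predict(new_case))              # automatic decision: fraud or not
print(model.predict_proba(new_case))        # with an estimated confidence
```

The more (and better) data such a model is fed, the more accurate its predictions become, which is exactly the property that let ML spread through fraud detection, supply chains and recommendations.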
ML, by the way, was not born yesterday. Its role in industry was broadly understood by the early 1990s, and by the end of the twentieth century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical problems of fraud detection, supply-chain prediction and consumer recommendations. As data volumes and computing resources grew rapidly over the next two decades, it became clear that ML would soon drive not just Amazon but virtually any company whose decisions could be tied to large-scale data. As ML-algorithm specialists collaborated with database and distributed-systems specialists to build scalable, reliable ML systems, the societal and environmental reach of the resulting systems expanded further still. It is this fusion of ideas and technological trends that is called AI today.
The term AI itself, on the other hand, appeared historically in the late 1950s, when the ideas of cybernetics (at least where it was not branded a “pseudoscience”) planted in researchers' eager souls the aspiration to realize, in software and hardware, an entity with intelligence close to the human intellect. The revolution seemed so close, and the artificial rational entity was surely destined to become one of us, if not physically then at least mentally. Science-fiction writers immediately picked up the new term, but in real life the creators of the “new entities” never came anywhere near the successes of the Lord, or simply of Nature (as atheists would say).
In those days the term AI was used in a “high-level” sense, referring to the human abilities to “reason” and “think”. According to experts, although almost 70 years have passed since then, all those high-level ambitions and ideas remain elusive and have received no software or hardware implementation. In contrast to the enthusiastic expectations of the past, the whole of today's “real” AI took shape mainly in areas of technology associated with low-level pattern recognition and motion control, and partly in statistics, in disciplines focused on finding patterns in data and making logically sound predictions. In other words, the long-awaited AI revolution has not yet happened.
However, unlike the human brain, atop which our intellect runs, AI does not depend on carbon atoms, protein-based life or any evolutionary restrictions. Because of this, it can learn and improve continuously and, in the end, will allow humanity to solve a host of pressing problems, from climate change to cancer. This opinion is shared, in particular, by Max Tegmark, a physicist at MIT and co-founder of the Future of Life Institute. In an interview with The Verge, Tegmark presented his vision of three evolutionary formats of life (on our planet).
Life 1.0 is characteristic of bacteria, which Tegmark calls “small atoms joined together in the simplest self-regulation algorithm”. Bacteria cannot master anything new during their lifetime, and the mechanisms of their operation are extremely primitive: they can only turn toward where there is more food. Their “software” (modern scientists now readily divide all things into software and hardware) develops only through evolutionary change.
Life 2.0 is embodied in people. Although a person's body, the “hardware”, is likewise rigidly defined and limited by evolution, humans have a significant advantage in the form of a more advanced mind, the “software”, which allows them to learn independently. Thanks to the ability to improve their software at their own discretion, acquiring knowledge rather than waiting for evolutionary development, people came to dominate this planet and created modern civilization and culture. Nevertheless, for all its advantages, our capacity for improvement has a limit. That is why, over time, life 2.0 will be crowded out by the less limited life 3.0 (though, given what was said a little above, such statements seem somewhat rash).
Life 3.0 is characterized by freedom not only from evolutionary but also from biological limitations. AI, unlike the previous formats, will be able to develop both its own software and its own hardware: for example, install more memory in itself to remember a million times more information, or acquire more computing power (by the way, it would be interesting to check whether Tegmark has a USB connector somewhere behind his ear). Unlike life 3.0, we who make do with life 2.0, although we can maintain our own heartbeat with a pacemaker or ease digestion with a pill, cannot make drastic changes to our bodies; at most a small correction with the help of plastic surgeons or implanted chips. We are not given the power to seriously increase our height or speed up our own brains a thousandfold. Human intellect runs on biological neural connections, and the volume of our brain is limited so that at birth the head can pass through the mother's birth canal. AI is limited by nothing and can improve itself endlessly, the scientist explains.
It seems, however, that Tegmark somehow fails to account for progress in genetic engineering: before long, people may learn to adjust their bodies, to grow long legs or tenacious tentacles, to enlarge the birth canal, or to add brains to those who sorely lack them.
Tegmark notes that many people today perceive the mind as a mysterious property of biological organisms. In his view, these ideas are mistaken. “From the point of view of a physicist, the mind is just the processing of information performed by elementary particles moving according to certain physical laws,” he says. The laws of physics in no way prevent the creation of machines far superior to humans in intelligence (though it would be nice to know more precisely what intelligence is). Moreover, Tegmark emphasizes, there is no evidence that reason depends on the presence of organic matter:
“I do not think there is some secret sauce that must contain carbon atoms and blood. Many times I have wondered what the limit of intelligence might be from the point of view of physics, and each time I have come to the conclusion that if such a limit exists, we are very far from it. We cannot even imagine it. Nevertheless, I am sure that it is humanity that will breathe into the Universe what will later become life 3.0, and that, from my point of view, sounds very romantic.”
In response, one would like to add, no less romantically: was anyone out there actually asking for us and our AI, with all our “quirks”, so to speak? And about those “quirks”: if life 3.0 is to know no limitations, it would be nice to find out in which ones exactly. In deception? In indifference? In meanness? Perhaps in the ability to kill? Individual members of the human race face exactly these problems regularly, and repeatedly succumb to temptation.
“We run up against the limitations of our own mind every time we conduct a study. That is why I believe that as soon as we manage to unite our own mind with AI, we will gain enormous opportunities to solve almost all problems,” says Tegmark. And, as we know, people have plenty of problems.
Well then, one need not look far for examples. Over the past 20 years, both industry and academia have made significant progress in what is called “intelligence amplification”, or IA. Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be considered an example of IA (it augments human memory and factual knowledge), as can natural-language translation (it augments a person's ability to communicate). Sound and image generation serves artists as a palette and an amplifier of creativity. Services of this kind may one day involve high-level reasoning and ideas, but today they do not: mostly everything comes down to matching datasets against patterns and performing numerical operations, as the toy example below illustrates. We may yet see cloud services of the InaaS kind (Intellect-as-a-Service) that help the user wise up in various fields of knowledge, but these will be merely an evolution of search engines, and in no way a substitute for human intelligence.
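As an illustration of IA in the search-engine sense, here is a minimal retrieval sketch: it augments “memory” by matching a query against a stored collection of texts. The mini-corpus is invented for the example; real engines add crawling, ranking signals and enormous scale.

```python
# Toy "memory amplifier": find the stored text best matching a query.
# Nothing here reasons; it is pattern matching over a dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "AI is used in medical diagnostics and logistics",
    "Machine learning finds patterns in large-scale data",
    "Princesses, as we know, are mostly found in villages",
]
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

query = vectorizer.transform(["patterns in data"])
scores = cosine_similarity(query, doc_vectors)[0]
print(docs[scores.argmax()])   # the best-matching "memory"
```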
There is also such a “smart” thing as “intelligent infrastructure” (II), in which networks of computation, data and physical objects coexist; it is beginning to appear in areas such as transport, medicine, trade and finance, all of them crucial for individuals and communities. The II concept sometimes comes up in conversations about the Internet of Things, but IoT usually refers to the simpler problem of getting “things” onto the Internet, not to the substantial set of problems around those “things”: analyzing their data streams, discovering their links to the external world, and interacting with people and other “things” at a much higher level of abstraction than raw bits. In any case, neither IA nor II is yet the “real” AI.
And what is "real" intelligence? Do I have to imitate him in the framework of creating AI? Of course, human intelligence is the only kind of intelligence that we know. But we also know that, in fact, people are not very well versed in certain judgments: we have our own omissions, prejudices and limitations. It happens people are wrong. Moreover, critically we did not evolve in order to carry out the types of large-scale decision-making that modern systems are faced with, prepared for the role of AI. Of course, one can reasonably argue that the AI ​​system not only imitates human intelligence, but also complements and corrects it, and then it will also be scaled to solve arbitrarily large problems facing humanity. But, sorry, this is already from science fiction. And such speculative arguments that have been feeding fiction for 70 years should not become the main strategy for the formation of AI. Obviously, IA and II will continue to develop, solving their particular problems, but without pretending to become a “real” AI. So far, we are very far from at least the implementation of the "human-imitative" AI.
Moreover, success in IA and II is neither sufficient nor necessary for solving the important AI problems. Take self-driving cars: implementing this technology requires solving a number of engineering problems that may have very little to do with human competences. An intelligent transport system (an II system) would more likely resemble today's air traffic control system than a population of loosely coupled, self-directed and frequently inattentive human drivers. More precisely, it will be far more complex than the current air traffic control system, at least in its use of huge volumes of data and adaptive statistical modeling to inform fine-grained decisions about every maneuver of every car.
Yet despite the generally optimistic attitude toward the future of AI, experts acknowledge that it carries serious risks. Stephen Hawking and others, I recall, believed that AI would be either the worst or the best phenomenon in the history of mankind. And when people discuss today's wholesale automation of jobs, they often forget that it is far more important to look ahead and understand what comes next.
On this subject Tegmark says: “The fact is that today we face questions that we must answer before the first superintelligence comes into being. They are quite complex; perhaps we will need 30 years or more to answer them. But as soon as we solve them, we will be able to protect ourselves from the threats. How can we ensure the reliability of future AI systems when today's computers are so easily hacked? How do we make an AI understand our goals if it becomes smarter than we are? What should an AI's goals be? Can artificial intelligence develop the high-minded goals that many American programmers hope for today, or will it suddenly start thinking like an ISIS supporter or a person from the Middle Ages? How will our society change after the invention of AI? When your computer freezes, you get upset because you have lost an hour of work. Now imagine we are talking about the onboard computer of the plane you are flying on, or about the system responsible for the US nuclear arsenal. That is a different order of bad.”
But who, for example, must answer if an AI, or a robot armed with one, performs an action that harms people? The action may be accidental, but this is one of many questions of AI autonomy and responsibility that society faces as its most advanced forms, say self-driving cars (perhaps the first robots we will learn to trust), drones or even combat systems, become more widespread. Specialists in AI and in law are trying to work this out, but they see no simple answer; the question remains legally difficult. How, for example, should responsibility be divided between the programmer and the owner, given that robots and AI learn from their environment?
According to Tegmark, to mitigate the risks arising from AI, discussions must be held more often, and all strata of society should take part in them, not just the scientists immersed in AI. After all, AI promises to change the very essence of our civilization and will affect literally everyone's life. Let everyone participate. It is telling that billions are invested in AI research today, while research into its safety is barely funded at all. Who would build a nuclear reactor without first designing its protection? Clearly everyone would win if states and corporations began investing more in AI safety research. Then, perhaps, something will come of it.
Now let us imagine an ideal information society, in which the environment is hung with sensors that regularly transmit accurate information and no one “hacks” them (in the sense that no one tampers with them). This vast surrounding world delivers to our high-tech company an enormous stream of data, which is processed in real time in data centers, transformed, visualized and so on, in order to appear in “human-readable” form, as sketched below. Then the person for whom this information is intended strains his brain, looks at it all and makes an operational decision, which he passes to another person, who embodies that decision in some kind of code. This optimized code then controls the production process, the company or even the state, and production in turn begins to improve something in the surrounding world, and so on around the loop.
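Here is a minimal sketch of that closed loop, with the human decision step already replaced by a trained rule, which is precisely the scenario discussed next. All names, readings and thresholds are hypothetical; a real system would involve telemetry pipelines, monitoring and safeguards.

```python
# Sensor -> processing -> decision -> actuation, with no human in the loop.
# Everything here is a stand-in chosen for illustration.
import random

def read_sensors() -> dict:
    """Stand-in for the stream of environmental telemetry."""
    return {"temp_c": random.uniform(20, 120), "pressure_bar": random.uniform(1, 9)}

def decide(data: dict) -> str:
    """The operational decision a technologist would otherwise make."""
    if data["temp_c"] > 100 or data["pressure_bar"] > 8:
        return "throttle_down"
    return "steady"

def actuate(command: str) -> None:
    """Stand-in for the code that controls the production process."""
    print(f"production line -> {command}")

for _ in range(3):          # the loop closes without anyone looking at a screen
    actuate(decide(read_sensors()))
```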
Does it not seem that a weak link has appeared in this story, and that the person is becoming simply superfluous? Experts note that a well-trained algorithm is already capable of making operational decisions on its own, especially in a more or less repetitive production process. Moreover, it can do so far better than the most experienced technologist (never mind the head of the transport department) who has put “life and health” into managing the production process well. It turns out that keeping a person in the decision-making chain is simply impractical. That is, without having built a “real” AI, mankind is already beginning to recognize its own redundancy in a number of familiar processes. What happens when a “real” or “classic” AI appears... Will that AI consider it necessary to keep feeding the “protein burden” of millions or even billions of unneeded idlers? After all, robots that feed on electricity need no agriculture, housing, household waste recycling, heating or water supply, and so on and so forth. Even without AI it is easy to work out what could be saved here. Has anyone besides the science fiction writers reckoned with such risks?
Maybe someday wireless technologies and cloud services will make it possible to endow anyone at all with intelligence. Then everyone around will become intelligent and educated. Only no one will need it. The AI will tell you: your train, comrades, has already left. What remains for us? Well, at least a song:
I dreamed of seas and corals.
I dreamed of eating turtle soup.
I stepped aboard a ship, and the ship
Turned out to be made of yesterday's newspaper...
Based on: radio.ru, The Verge, hightech.fm, vz.ru, anews.com, pcweek.ru, medium.com, Defense News
Alexander Golyshko, Ph.D., Systems Analyst
Source: https://habr.com/ru/post/423567/