
The dark secret at the heart of artificial intelligence

No one really understands how the most advanced algorithms do what they do. And that could be a problem.



Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at Nvidia, didn't look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn't follow strict instructions programmed by a human. It relied entirely on an algorithm that had taught itself to drive by watching people do it.

Getting a car to drive itself this way was an impressive feat. But it is also a bit unsettling, since it isn't completely clear how the car makes its decisions. Information from the sensors goes straight into a huge network of artificial neurons that process the data and deliver the commands required to operate the steering, the brakes, and other systems. The result matches what you would expect from a live driver. But what if one day it did something unexpected, such as crashing into a tree or stopping at a green light? As things stand now, it would be very hard to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can't ask it: there is no obvious way to build such a system so that it can explain its actions.

The mysterious mind of this vehicle points to a looming problem with AI. The underlying machine-learning technique, deep learning (DL), has proved very powerful at solving complex problems in recent years, and it is already used for tasks like image captioning, voice recognition, and language translation. There is hope that the same techniques will help diagnose deadly diseases, make multimillion-dollar decisions in financial markets, and do countless other things that could transform whole industries.

But this won't happen, or shouldn't happen, unless we find ways of making techniques like deep learning more understandable to their creators and more accountable to their users. Otherwise it will be very hard to predict when failures might occur, and failures will inevitably occur. That is one reason Nvidia's car is still experimental.
Already today, mathematical models are being used to help determine who gets parole, who is approved for a loan, and who gets hired. If you could get access to these models, it would be possible to understand how they reach their decisions. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inexplicable. Deep learning, the most common of these approaches, is a fundamentally different way of programming computers. "This problem already matters, and in the future it will only matter more," says Tommi Jaakkola, a professor at MIT who works on applications of machine learning (ML). "Whether the decision concerns investments, medicine, or military affairs, you don't want to rely on a 'black box' alone."

Some already argue that being able to interrogate an AI system about how it reached a decision is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to explain to users the decisions made by automated systems. This might be impossible, even for systems that look simple at first glance, such as apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done so in ways we cannot understand. Even the engineers who built these applications cannot fully explain their behavior.

This raises difficult questions. As the technology advances, we may soon cross a threshold beyond which using AI requires a certain leap of faith. Of course, humans can't always fully explain their own thought processes either, but we have found ways to intuitively trust and gauge people. Will that be possible with machines that think and make decisions differently from the way a human would? We have never before built machines that operate in ways their creators don't understand. What can we expect from communicating and living with machines that may be unpredictable and inscrutable? These questions took me to the cutting edge of AI research, from Google to Apple and many places in between, including a meeting with one of the greatest philosophers of our time.


In 2015, researchers at the Mount Sinai medical center in New York decided to apply deep learning to their extensive database of patient records, which contain hundreds of variables drawn from test results, doctor visits, and so on. The resulting program, which the researchers called Deep Patient, was trained on data from about 700,000 people, and when tested on new patients it proved surprisingly good at predicting disease. Without any expert guidance, Deep Patient had discovered patterns hidden in the data that seemed to indicate when patients were on the way to a wide range of ailments, including liver cancer. There are a lot of methods that predict disease "fairly well" from a medical history, says Joel Dudley, who leads the research team. But, he adds, "this one just turned out to be much better."

At the same time, Deep Patient is puzzling. It appears to recognize the early stages of psychiatric disorders like schizophrenia. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how the machine was doing it. He still doesn't know. The new tool offers no insight into how it reaches its conclusions. If a system like Deep Patient is ever going to help doctors, it should ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. "We can build these models," Dudley says ruefully, "but we don't know how they work."

AI was not always like this. From the beginning, there were two views on how understandable or explainable AI ought to be. Many thought it made sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine them. Others believed that intelligence would emerge in machines more easily if they took inspiration from biology, learning through observation and experience. That meant turning programming on its head. Instead of a programmer writing the commands to solve a problem, the program would generate its own algorithm based on example data and a desired output. The machine-learning techniques that have since evolved into today's most powerful AI systems followed the second path: the machine essentially programs itself.

At first this approach was of limited practical use, and through the 1960s and '70s it remained largely confined to the fringes of research. Then the computerization of many industries and the emergence of large data sets renewed interest in it. That spurred the development of more powerful machine-learning techniques, especially new versions of artificial neural networks. By the 1990s, neural networks could already automatically recognize handwritten text.

But it was not until the beginning of the current decade, after several clever tweaks and refinements, that deep neural networks demonstrated dramatic improvements in performance. Deep learning is responsible for today's explosion of AI. It has given computers extraordinary powers, like speech recognition at nearly a human level, something too complex to program by hand. Deep learning has transformed computer vision and radically improved machine translation. It is now being used to guide key decisions in medicine, finance, manufacturing, and far beyond.


The workings of any machine-learning technique are inherently less transparent, even to computer scientists, than those of a hand-coded system. This does not mean that all future AI will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can't simply look inside a deep neural network to see how it works. The network's reasoning is embedded in thousands of artificial neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, such as the brightness of a pixel in an image, and compute a new output signal. Those signals are fed, via a complex web of connections, to the neurons in the next layer, and so on, until the data has been fully processed. There is also a process called backpropagation that tweaks the calculations of individual neurons so that the network learns to produce the desired output.
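To make that description concrete, here is a minimal sketch, assuming PyTorch and an invented toy task: a small feed-forward network whose layers pass signals forward, with backpropagation adjusting the weights so that the output moves toward the desired labels. The layer sizes and the random "training data" are purely illustrative.

```python
# A minimal sketch of the mechanics described above, not any production system.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(784, 128),  # first layer: raw inputs (e.g. pixel brightness)
    nn.ReLU(),
    nn.Linear(128, 64),   # intermediate layer: combinations of earlier signals
    nn.ReLU(),
    nn.Linear(64, 10),    # final layer: one score per output class
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

# Fake training batch: 32 "images" and their labels, just to show the loop.
inputs = torch.randn(32, 784)
labels = torch.randint(0, 10, (32,))

for step in range(100):
    scores = net(inputs)            # forward pass: signals flow layer to layer
    loss = loss_fn(scores, labels)  # how far the output is from the target
    optimizer.zero_grad()
    loss.backward()                 # backpropagation: per-weight adjustments
    optimizer.step()                # update each neuron's computation
```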

The many layers of a deep network let it recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers pick out simple things like outlines or color; higher layers recognize more complex features like fur or eyes; and the topmost layers identify the dog as a whole. The same approach can be applied to the other kinds of input that let a machine teach itself: the sounds that make up words in speech, the letters and words that form sentences, or the steering movements required for driving.
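One way to glimpse those levels of abstraction is to record the activations of an early and a late layer of a pretrained image classifier. The sketch below does this with a torchvision ResNet-18; the choice of model and of the particular layers probed are assumptions made for illustration, not the systems discussed in this article.

```python
# Peek at the intermediate activations of a pretrained classifier.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations = {}

def save(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Early layer: responds to simple patterns such as edges and colors.
model.layer1.register_forward_hook(save("early"))
# Late layer: responds to larger, more object-like structures.
model.layer4.register_forward_hook(save("late"))

image = torch.randn(1, 3, 224, 224)   # stand-in for a real preprocessed photo
with torch.no_grad():
    model(image)

print(activations["early"].shape)  # e.g. torch.Size([1, 64, 56, 56])
print(activations["late"].shape)   # e.g. torch.Size([1, 512, 7, 7])
```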

Researchers have developed ingenious strategies for trying to capture and explain what happens inside these systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or a building. The resulting images, produced by the project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountains. The images proved that deep learning need not be entirely inscrutable: they showed that the algorithms home in on familiar visual features like a bird's beak or feathers. But the images also hinted at how different machine perception is from ours, since the computer might fixate on something a human would ignore. The researchers noted that when the algorithm generated images of a dumbbell, it also drew a human arm holding it. The machine had concluded that the arm was part of the dumbbell.
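Google's actual Deep Dream code is more elaborate, but the core idea of "running the algorithm in reverse" can be sketched as gradient ascent on an image: keep the network's weights fixed and change the pixels so that a chosen layer's activations grow stronger, exaggerating whatever the layer already "sees". The model, layer, and step sizes below are illustrative assumptions, not the original project's settings.

```python
# Sketch of dream-style visualization: optimize the image, not the weights.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

features = {}
model.layer3.register_forward_hook(
    lambda module, inp, out: features.update(value=out)
)

# Start from noise here; a real photo could be used instead.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(image)
    # Maximize the layer's activations: whatever the layer "sees" in the
    # image gets exaggerated, which produces the dream-like artifacts.
    loss = -features["value"].norm()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a displayable range
```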

Further progress has come from ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has tested deep neural networks with the equivalent of optical illusions. In 2015, Clune's group showed how certain images could fool a network into perceiving objects that weren't there, by exploiting the low-level patterns the network searches for. One member of the group also built a tool that works like a probe implanted in a brain: it targets a single neuron in the middle of the network and searches for the image that activates that neuron more strongly than any other. The images that turn up are abstract, highlighting the mysterious nature of machine perception.
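The group's fooling images were generated partly with evolutionary search, but a similar effect can be sketched with gradient ascent: start from pure noise and push one output neuron, a single class score, as high as possible, yielding an image the network classifies with confidence even though a human sees nothing in it. The model and class index below are arbitrary choices made for the sketch.

```python
# Sketch of single-neuron activation maximization, starting from noise.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)
target_class = 30  # an arbitrary ImageNet class index
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(300):
    optimizer.zero_grad()
    scores = model(image)
    loss = -scores[0, target_class]   # push this one neuron's activation up
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)

probs = torch.softmax(model(image), dim=1)
print("confidence for target class:", probs[0, target_class].item())
```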

We need more than a glimpse of AI's thinking, however, and there is no easy solution. It is the interplay of calculations inside the network that is critical to recognizing high-level patterns and making complex decisions, but those calculations are a quagmire of mathematical functions and variables. "If you had a very small neural network, you might be able to understand it," says Jaakkola, "but once it grows to thousands of neurons per layer and hundreds of layers, it becomes unknowable."

In the office next to Jaakkola's works Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. A couple of years ago, at age 43, she was diagnosed with breast cancer. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to aid oncological research or to guide treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond simple processing of medical records. She envisions using more of the raw data that is currently underused: "images, pathology, all this information."

After finishing her cancer treatment last year, Barzilay began working with students to develop a system for Massachusetts General Hospital capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to be able to explain its decisions. So she added an extra step: the system extracts and highlights the snippets of text that are representative of the patterns it has found. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammograms, and they want to give this system the ability to explain its reasoning, too. "We really need a process in which the machine and humans can work together," says Barzilay.

The US military is pouring billions of dollars into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here, algorithmic mystery is even less acceptable than in medicine, and the Department of Defense has identified explainability as a key obstacle.

David Gunning, a program manager at the Defense Advanced Research Projects Agency (DARPA), oversees the aptly named Explainable Artificial Intelligence program. A grizzled veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Autonomous ground vehicles and aircraft are being developed and tested. But soldiers are unlikely to feel comfortable in a robotic tank that doesn't explain its actions to them, and analysts will be reluctant to act on information that comes without an explanation. "It's often the nature of these machine-learning systems that they produce a lot of false alarms, so an analyst needs help to figure out why this or that recommendation was made," says Gunning.

In March, DARPA selected 13 projects from academia and industry for funding under Gunning's program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, the computer finds a few examples from the data set and serves them up as an explanation. A system designed to flag e-mails from terrorists, for example, might use many millions of messages in its training, but with the Washington team's approach it can highlight certain keywords found in a message. Guestrin's group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that mattered most.
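Guestrin's actual approach is more sophisticated, but the basic idea of explaining a prediction by highlighting influential words can be sketched with a simple perturbation test: remove each word from a message in turn and see how much the classifier's score drops. The classifier in this sketch is a stand-in; any model that returns a probability for a piece of text would do.

```python
# Toy word-importance explanation by leave-one-word-out perturbation.
from typing import Callable, List, Tuple

def keyword_explanation(
    message: str,
    score_fn: Callable[[str], float],   # probability the classifier assigns
    top_k: int = 3,
) -> List[Tuple[str, float]]:
    base = score_fn(message)
    words = message.split()
    importance = []
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        importance.append((word, base - score_fn(reduced)))
    # The words that matter most are those whose removal hurts the score most.
    return sorted(importance, key=lambda pair: pair[1], reverse=True)[:top_k]

# Hypothetical usage with some trained classifier `clf` and vectorizer `vec`:
# explanation = keyword_explanation(
#     email_text,
#     lambda t: clf.predict_proba(vec.transform([t]))[0, 1],
# )
```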

One drawback of this approach, and others like it, is that the explanations are simplified, meaning some important information may be lost along the way. "We haven't achieved the dream in which AI can have a conversation with you and is able to explain itself," says Guestrin. "We're still very far from having truly interpretable AI."

Nor does it have to be a situation as critical as cancer diagnosis or military maneuvers. Knowing AI's reasoning will be important if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration as his team tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn't discuss specific plans for Siri, but it's easy to imagine that when you receive a restaurant recommendation, you would like to know why it was made. Ruslan Salakhutdinov, director of AI research at Apple and a professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. "It's going to introduce trust into the relationship," he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps AI won't be able to explain everything it does. "Even if somebody can give you a reasonable-sounding explanation for their actions, it will still be incomplete, and the same is true for AI," says Clune of the University of Wyoming. "It might just be part of the nature of intelligence that only part of it can be rationally explained. Some of it is instinctual, or subconscious."

If that's so, then at some stage we may have to simply trust AI's judgment or do without it. And that judgment will have to incorporate social intelligence. Just as society is built on a contract of expected behavior, AI systems will need to respect and fit in with our social norms. If we are going to create robotic tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet Daniel Dennett, the renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of his latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence is the creation of systems capable of performing tasks their creators do not know how to do. "The question is, how do we prepare ourselves to use such systems wisely: what standards should we demand of them, and of ourselves?" he told me amid the clutter of his office on the university's idyllic campus.

He also had a word of warning about the quest for explainability. "I think that if we're going to use these systems and rely on them, then, of course, we need to get as firm a grip as possible on how and why they give us their answers," he says. But since there may be no perfect answer, we should be as cautious of AI's explanations as we are of our own, no matter how clever a machine seems. "If it can't do better than us at explaining what it's doing," he says, "then don't trust it."

Source: https://habr.com/ru/post/370445/

