(Informal review of David Mindell’s book The Rise of the Machines is Canceled! Myths about Robotics, Alpina non-fiction, 2017)
The book by David Mindell, "The Rise of the Machines is Canceled! Myths about Robotization", left a strong but ambivalent impression. First of all, it is worth looking at the notes, from which it is clear what a tremendous job the author has done in summarizing a great deal of material from very substantial sources. In a few words: robots under water, on land, in the air, in space, and on other planets, the Moon and Mars. Here, however, there is an omission: by now robots have flown to the edge of the solar system, but unfortunately they are not mentioned in the book. Still, what is mentioned allows the author to draw general conclusions about the prospects of robotics.
I completely agree with the main conclusion: absolute autonomy is a harmful myth, at least for the coming decades. Drawing on his own experience and that of others, the author shows in detail that today the most successful systems are those in which the interaction between human and automaton is fully worked out, not those that push the person out of the decision-making process. Personally, I became convinced of the correctness of this idea through my own modest example of game bots for the KR2HD game. The bot for planetary battles was supposed, in my co-author's opinion, to be completely autonomous, and that project is now stalled. In the new project, a bot that plays through the battle for Rogeria (a multi-stage battle that yields the lion's share of the points in the whole game), I chose a semi-automatic mode: relatively routine operations (some far from trivial, since pattern recognition is needed) are performed by the bot, but under certain conditions it does not try to "flash intelligence" and instead requests the player's intervention. It does not do this often: I managed to write the above on one computer while the bot was racking up points for me on the other. Since this approach has justified itself, I will describe it in more detail in a separate article; a rough sketch of the idea is given below.
There is no need to go far for annoying examples of programs trying to "show off their intelligence": in the old Microsoft Word 2000, with the default settings, it is hard to type the phrase "Loop with counter i ..." because Word immediately replaces "i" with "I".
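Here is that rough sketch: a minimal Python illustration of the semi-automatic scheme, not the actual bot code. All the names (capture_screen, recognize_state, perform_routine_step, ask_player) and the confidence threshold are invented placeholders.

```python
# Sketch of a semi-automatic bot loop: routine steps are handled automatically,
# but when recognition confidence is low or the state is unexpected, the bot
# stops and asks the player to intervene instead of guessing.

import random
import time

CONFIDENCE_THRESHOLD = 0.9  # below this, hand control to the player

def capture_screen():
    """Placeholder for grabbing a screenshot of the game window."""
    return object()

def recognize_state(screen):
    """Placeholder pattern recognition: returns (state, confidence)."""
    return random.choice(["routine", "unexpected"]), random.uniform(0.5, 1.0)

def perform_routine_step(state):
    """Placeholder for the clicks and keystrokes of a routine situation."""
    print(f"bot: handling routine state '{state}'")

def ask_player(state, confidence):
    """Pause automation and wait for the human to sort things out."""
    input(f"player needed (state={state}, confidence={confidence:.2f}); "
          "press Enter when done...")

def run_bot(steps=10):
    for _ in range(steps):
        state, confidence = recognize_state(capture_screen())
        if state != "routine" or confidence < CONFIDENCE_THRESHOLD:
            ask_player(state, confidence)   # no attempt to "flash intelligence"
        else:
            perform_routine_step(state)
        time.sleep(0.1)

if __name__ == "__main__":
    run_bot()
```

The essential design choice is that when the bot is unsure, it simply stops and waits for the player rather than improvising.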
Back to the book. While agreeing with its basic claim about the promise of non-autonomy, I should note that it was not an easy read because of the author's repetitions; the text is clearly and heavily redundant, but I nevertheless read every word. At the very end, criticizing the "Google-mobiles" (Google's driverless cars), the author once again explicitly lists the three myths of robotization:
Funny as it may seem, in its rhetoric a high-tech company like Google steps back into the twentieth century, archaically casting the driver as a passive observer. Their "new" approach falls victim to all three myths about robots and automation generated by the twentieth century: 1) automotive technology should logically develop toward full, utopian autonomy (the myth of linear progress); 2) autonomous control systems will free the driver from the obligation to drive (the myth of replacement); 3) autonomous machines can act completely independently (the myth of complete autonomy).
Having read the many stories in the book (in particular, that during every landing on the Moon, beginning with Neil Armstrong's, the astronauts switched off the automatic landing and landed manually, using the information from the on-board computer, and that Shuttle landings on Earth were similar), I agree with the author. However, a little further on the author describes a new project in which he participates: ALIAS, an automatic aircraft control system. Everything looks fine, but an ambitious goal has been set: to equip any aircraft with it with minimal effort, so as not to re-certify the aircraft from scratch and not to interfere with its design. In particular, to use computer vision to read information from the displays installed in the cockpit. After reading this, I clutched my head: I no longer understand anything and can only guess. Perhaps I misread it, but it seems the author wants to place a web camera in the co-pilot's seat, aim it at the display, and recognize the information from that display! This would both wildly complicate the system and greatly reduce its reliability. Wouldn't it be simpler to connect to the on-board computer with a USB cable and pull the digital stream directly, without any recognition? It may well be that any connection, even a read-only one, requires certification, but resorting to recognition just to avoid certification is absurd. My recognition-based bots are absurd in exactly the same sense: if the game had a COM interface, all of my bots' tasks would be solved trivially.
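To make the contrast concrete, here is a toy comparison in Python (in no way the actual ALIAS design; the altitude value, the per-digit error rate, and the function names read_from_bus and ocr_display are all invented for illustration) of reading the same number directly from a digital source versus through a camera-and-recognition step:

```python
# Toy comparison: a direct digital read is exact, while a camera-plus-OCR path
# adds a recognition stage that occasionally returns a wrong number.

import random

TRUE_ALTITUDE = 10525  # the value the avionics actually hold (invented)

def read_from_bus():
    """Direct digital read: the exact value, no recognition step."""
    return TRUE_ALTITUDE

def ocr_display(error_rate=0.02):
    """Camera + OCR: each displayed digit has a small chance of being misread."""
    digits = []
    for ch in str(TRUE_ALTITUDE):
        if random.random() < error_rate:
            ch = random.choice("0123456789")  # a recognition error
        digits.append(ch)
    return int("".join(digits))

if __name__ == "__main__":
    trials = 10_000
    bus_wrong = sum(read_from_bus() != TRUE_ALTITUDE for _ in range(trials))
    ocr_wrong = sum(ocr_display() != TRUE_ALTITUDE for _ in range(trials))
    print(f"direct digital reads wrong: {bus_wrong} / {trials}")
    print(f"camera + OCR reads wrong:   {ocr_wrong} / {trials}")
```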
Interestingly, throughout the book the author rarely says "AI", while stating that he will not discuss the question of whether a machine can think. Perhaps, contrary to the common view, the author does not consider pattern recognition problems to be AI tasks? The point is not the name but the fact that these are fundamentally different kinds of tasks. Simply put, in a sound computing environment two times two will always be four, but the same environment will not always correctly recognize the digit "2" on paper or on a monitor. A person recognizes images far better than a computer does, yet a person also makes mistakes. For example, not everyone can always make out every word that some band sings in a seemingly familiar language. And in the visual domain people are subject to illusions and mirages:
"Yesterday I had a hallucination; I was so frightened that I slept badly all night," a patient told me. "In the evening I walk into the room and see a man standing in the moonlight. I wondered who it could be. I came closer, and it was my robe hanging on the wall with a hat on top. And then I became even more frightened: if I am having hallucinations, it means I am seriously ill."
But there was nothing to be afraid of. It was not a hallucination but an illusion, that is, an incorrect, distorted perception of a real object. The robe and the hat merely looked like a man.
(Konstantin Platonov, Entertaining Psychology, RIMIS, 2011)
Another well-known example of difficult recognition is the captcha, which you run into on the Internet at every turn. Some of these scribbles are such that you have to press the "refresh captcha" button several times before you manage to prove that you are not a robot. Maybe someday machines will recognize all kinds of audio and visual images better than people, but it has not yet been proved that such tasks always have a solution. For now, practice shows that recognition is generally possible, but guaranteed freedom from errors is not.
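As a toy illustration of the "two times two" point above (purely invented: the 3x5 digit bitmaps, the noise level, and the nearest-template classifier are not from the book or from any real OCR system), here is a sketch in which arithmetic never fails while recognition of a noisy "2" sometimes does:

```python
# Exact computation vs. statistical recognition: 2 * 2 is always 4, but a
# nearest-template classifier on noisy 3x5 bitmaps misreads a "2" now and then.

import random

TEMPLATES = {
    "1": "010"
         "110"
         "010"
         "010"
         "111",
    "2": "111"
         "001"
         "111"
         "100"
         "111",
}

def add_noise(bitmap, flip_prob=0.15):
    """Flip each pixel with a small probability, as a scanner or camera might."""
    return "".join(
        ("1" if px == "0" else "0") if random.random() < flip_prob else px
        for px in bitmap
    )

def recognize(bitmap):
    """Classify by smallest Hamming distance to a template."""
    return min(
        TEMPLATES,
        key=lambda d: sum(a != b for a, b in zip(bitmap, TEMPLATES[d])),
    )

if __name__ == "__main__":
    assert 2 * 2 == 4  # always holds; no trials needed

    trials = 10_000
    errors = sum(recognize(add_noise(TEMPLATES["2"])) != "2" for _ in range(trials))
    print("2 * 2 == 4: always")
    print(f"noisy '2' misrecognized: {errors} / {trials}")
```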
As it happened, before reading Mindell's book I wanted to re-read
Navigator Pirx by Stanislaw Lem. It could be called a chronicle of the catastrophes the hero was involved in over his career, and AI figures in almost every one of them. As a result, questions arise that are similar to those in Mindell's book. One can only marvel at how Lem anticipated problems that are relevant to the modern development of robotics. Unfortunately, Mindell does not mention Lem, though the parallels would have been interesting: if the situations Lem invented are taken as models, many of them confirm Mindell's claims.
Of course, Lem did not foresee everything. He did not foresee hacking, viruses, or Trojan horses (he does model cases of robots behaving inadequately, but not as the result of a deliberately hacked operating system). It is strange, however, that in our time of constant catastrophes associated with hacks Mindell says nothing about them. In this respect he reminds me somewhat of Asimov, whose three laws of robotics guarantee the harmonious coexistence of people and machines. Yet non-autonomy, that is, control by a human operator, may fail to save us here: Mindell repeatedly notes that the line between autonomous and non-autonomous devices is gradually being erased, and that the same device can work in both autonomous and non-autonomous mode, like the on-board computer of the Apollo spacecraft mentioned above. At the same time it seems obvious that a robot into which a Trojan has been planted turns into a spy, and a robot infected with a virus may perform extremely inadequate and dangerous actions. Why does the book say nothing about this? Perhaps because such an all-too-real threat undermines the overly optimistic title of the book, which cancels the rise of the machines?