Fully automated devices, systems, and even entire production lines offer many advantages. But as practice shows, automation cannot be relied on completely: when it fails, we are rarely prepared. And this is not just about factories. The Internet of Things and driverless cars are also automated systems, and our lives may depend on how well they work. Remember the science fiction classic "2001: A Space Odyssey"? The film was shot back in 1968. For those who haven't seen it: during a crewed flight to Jupiter, the artificial intelligence HAL 9000, which controls every system on the spacecraft and makes the astronauts' lives radically easier, begins to malfunction seriously. The astronauts grow worried, HAL turns paranoid, and eventually it starts killing the people on the ship.
The moral of this story: when our lives come to depend on fully automated systems, it is highly desirable to keep a constant eye on them. Even if the filmmakers did not intend that message, it holds. The topic is already very relevant: although we have not yet created strong AI, we have long been using fully automated systems, and there are more of them every year. And every so often the news feed serves up stories that vividly illustrate our lack of oversight.
The PetNet outage
Petnet is an automatic feeder for cats and dogs that dispenses portions of food on a schedule. It connects to the Internet over Wi-Fi so that the owner can control it through a mobile application. If you wish, you can trigger an extra feeding (for example, when you feel guilty about leaving the animal in the care of a soulless machine for too long).
In addition, Petnet is tied to a branded food delivery service, and you can automate restocking with Amazon Dash: you don't need to order the next bag yourself, the feeder will do it for you (and when full-fledged robots for walking and grooming pets reach the market, we can forget about our pets altogether!). Among other things, Petnet tracks food and water consumption so that your pet doesn't get fat on automatic grub.
Sounds cool!
But when the Google-hosted service that the Petnet cloud depended on went down, about 10% of the feeders malfunctioned for roughly 10 hours. The manufacturer claims the outage did not affect the automatic feeding mechanism, but users lost the ability to dispense food manually or change the schedule, so some animals went hungry. The company emailed customers the advice "please feed your pets manually", yet many users had trusted the system's reliability and left on summer vacation.
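A common defensive pattern against exactly this kind of cloud outage is to keep the device usable offline: cache the last schedule received from the cloud and keep feeding on it even when the service is unreachable. Here is a minimal, purely hypothetical sketch (the function and parameter names are illustrative, not Petnet's actual API):

```python
import json

def load_schedule(fetch_from_cloud, cache_path):
    """Return the feeding schedule, preferring the cloud but surviving outages.

    fetch_from_cloud: a callable returning the schedule as a dict and raising
    OSError when the service is unreachable (both names are assumptions).
    cache_path: where the device stores its last known good schedule.
    """
    try:
        schedule = fetch_from_cloud()
        with open(cache_path, "w") as f:
            json.dump(schedule, f)       # refresh the local cache on success
        return schedule
    except OSError:
        with open(cache_path) as f:      # cloud down: use last known schedule
            return json.load(f)
```

With a design like this, a cloud failure degrades the product (no remote control, no schedule changes) without starving the animal, because the device itself still knows when to feed.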
The Nest thermostat overheats
Google's Nest Learning Thermostat became one of the first trendy Internet of Things devices. Its task is simple: automatically maintain the desired temperature in your rooms.
This summer in the USA turned out to be very hot, and some Nest units failed. The manufacturer published a statement saying that a "small number" of thermostats had gone offline, although they continued to function.
This was not Nest's first trouble: in January 2016, many devices exhibited a bug that drained their batteries rapidly and then shut them down. All of this happened during a serious cold snap on the US east coast, and many thermostat owners were left thoroughly frozen.
Fortunately, not a single death, illness, or injury has been registered from a home getting too hot or too cold because of a thermostat failure, but as homes fill with automated systems, the probability will only grow. Does this seem far-fetched? Imagine you are an elderly person with a weak heart, and in the summer heat your thermostat decides the house is too cool and throws a few more logs on the virtual stove. Or you catch a bad cold in winter, and the thermostat takes it into its head to switch off the heating at night and crank up the air conditioning instead.
Tesla Autopilot Error
In May 2016 a high-profile accident occurred: the autopilot of a Tesla Model S failed to notice a truck on the road. The car drove at full speed under the trailer (which sheared off its roof), broke through two fences, and stopped after hitting a pole. The driver died. Tesla was quick to point out that this was the first known fatality in more than 200 million kilometers driven by the company's cars in autopilot mode.
It should be said that on the six-level scale of driving autonomy (from 0, no automation at all, to 5, where a human never needs to intervene), Tesla's cars today sit at roughly level 3. Their autopilot can hold the lane, maneuver in an emergency, avoid side collisions, and park. And although the press focused on the autopilot's feature set, the failure occurred in the automatic emergency braking system, which was probably the direct cause of the accident. It is believed that Tesla's vision system could not distinguish the brightly sunlit white truck against the bright sky. Why didn't the driver himself brake or steer away from the collision? Most likely he relied too heavily on the autopilot and wasn't really watching the road.
Incidentally, a week later there was a second accident involving the autopilot, this time in a Tesla Model X: the system failed to notice a wooden post. Fortunately, there were no casualties.
Trust risks
Those familiar with reliability engineering know that humanity has not yet come up with anything better than duplicating the most critical systems. Duplication can even be threefold, so that an accident is averted even if both the primary and the backup fail. But this is a very expensive approach that increases the complexity and cost of the technology. And what are we to do with AI? Or with the algorithms embedded in our gadgets? How do you duplicate them? Is it possible at all? And how do you convince people to pay twice as much for a device that differs from its competitors only in being, hypothetically, less likely to fail?
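The threefold duplication mentioned above is known in reliability engineering as triple modular redundancy (TMR): three independent units do the same job, and a voter combines their outputs so that one faulty unit cannot corrupt the result. A minimal sketch of the voting idea, with made-up sensor values:

```python
from statistics import median

def vote(readings):
    """Majority vote over three redundant sensor readings.

    Taking the median masks a single faulty unit: the wild value ends up
    at an extreme and never wins the vote.
    """
    if len(readings) != 3:
        raise ValueError("TMR expects exactly three readings")
    return median(readings)

# One of three temperature sensors has failed and reports nonsense;
# the voter still returns a sane value.
print(vote([21.4, 21.5, 98.0]))  # -> 21.5
```

The catch the paragraph points at is that this trick works for sensors and actuators, where "the same job" is well defined, but it is far less clear how to vote among three copies of a learned model that may all share the same blind spot.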
Manufacturers of all kinds of devices, gadgets, and equipment promise us a bright automated future. Automakers are pouring effort into driverless cars, and the range of Internet of Things devices expands every month. Only the lazy are not experimenting with neural networks; tens of thousands of scientists and programmers are persistently building ever stronger artificial intelligence and ever more advanced algorithms, eager to entrust them with everyday chores and complex tasks alike.
But we, the users of all these high-tech benefits, should not forget that any system can fail, and the more complex it is, the higher the risk. Automating a process is merely an extra convenience, not a magical replacement for our own vigilance, control, and judgment.
For example, you cannot fully delegate the care of your animals to cloud services. If you go on summer vacation and leave your cat or dog in the good hands of an automatic feeder, you risk returning to find material for a taxidermist. No, automatic feeders are a useful and convenient thing. But it is better to arrange for a friend or neighbor to drop by at least once every couple of days to check on your livestock.
Nor can climate control, or the detection of smoke and carbon monoxide, be left entirely to automation: the elderly and the disabled, who cannot look after themselves, would then depend on equipment and algorithms never failing.
And here is a genuinely hard problem: what should we do about driverless cars in the near future?
The Tesla crash gave new impetus to the public debate over autopilot safety. The situation is complicated by the fact that the widespread rollout of automated emergency braking (AEB) systems may begin within a few years, which, automakers claim, should save many lives on the roads. Driver error also played its part in that tragic accident: at the very least he should have been watching the road as he approached the intersection, and ideally slowing down. Two accidents in a row show that today's automation is still far from perfect, and in no case can you entrust your life to it on the road and then stop paying attention.
Of course, progress will eventually produce very reliable driverless cars that perform wonders in testing, trials, and commercial service. But something tells us we should not expect the scenario in which cars simply turn into convenient capsules for getting around.
Most likely, using a driverless car will differ little from ordinary driving, except that you won't have to steer: the driver will still sit behind the wheel and watch the road while the autopilot drives. Watch, that is, ready to intervene, not chat on a smartphone or binge TV series.
No doubt corporations will assure us that a driverless car no longer needs a steering wheel, a brake pedal, or even a windshield, and that riding in one is far safer, all backed by terabytes of statistics. But even if accidents involving driverless cars drop by 90% compared with conventional cars, the remaining 10% still means crashes that claim many thousands of lives. So will driverless cars be a blessing? From the point of view of statistics, certainly.
If you left your child in a public park and went shopping nearby, then statistically speaking it is unlikely anyone would harm him. But you never do that, whatever the odds. So should parents entrust their children's lives to automated driving systems? It seems the best answer is to switch on the autopilot and keep watching the road, ready to grab the wheel or hit the pedal.
But this raises the problem of society's unreadiness for the new technology. Using it safely requires spreading, as a cultural norm, the idea that human attention must not be replaced by automated systems in critical processes where a failure could harm health or threaten life.
To borrow the words of HAL 9000 from "2001: A Space Odyssey": any harm that comes to our loved ones because we handed responsibility for their safety over to machines "can only be attributable to human error."