Automation is widely considered a blessing. Factory directors boast: "Our production is fully robotized; only one person sits at the console." Every technical novelty, from the machine tool to the aircraft, is smarter than its predecessor and demands less effort from the human. The world has embraced automation and sees no danger in it.
The main trouble with automation is unpredictability. When creating a technical device or a computer program, developers are guided by their own assumptions about who will use it, how, and under what conditions. But in real life not everything fits the norm, and so automation sometimes springs unpleasant surprises. Take a modern digital camera. The user does not need to set anything manually, yet sometimes it is impossible to take a picture: you press the shutter and the device does nothing. The automation has decided that conditions are not optimal for shooting. But sometimes you simply cannot afford to miss the shot! This is a typical example of the unpredictability of "smart technology."
And what if we are talking about controlling an aircraft? Few people know that during the flight of the Buran spacecraft (which took place entirely in automatic mode) the unexpected happened. Experts expected Buran to make a right turn on landing, but the ship suddenly banked left and flew across the runway. Ground services and test pilot Magomet Tolboev, who was escorting Buran in a MiG-25, were baffled. Fortunately, the craft landed safely. The reason for the strange maneuver was a strong crosswind; the developers later said the probability of such a case was no more than 3%. But it happened! Now suppose Tolboev had been piloting Buran. He would have had two options: take over control and switch off perfectly serviceable automation, or not interfere and become a hostage of the automation. But who could guarantee that the craft would not veer into the steppe and crash? Whatever the pilot does, he is "guilty" either way.
The problem has another side. In situations not foreseen by the developers, the automation can shut down working equipment, or "correct" or block the operator's actions, judging them to be erroneous. This already leads to serious accidents.
It is assumed that in case of a failure or breakdown the operator will take over control. But switching from automatic to manual mode is psychologically difficult: one must grasp the cause of the failure and act quickly. In automatic mode the operator is a passive observer, and it is hard for him to stay vigilant. This is how fatal errors occur. And the more complex the technology, the graver the consequences. The operator is not only a hostage of the automation; he also bears responsibility for what he did not do!
Automation threatens more than disasters. Here are some of the smaller troubles:
* automation demands special knowledge, so qualification requirements for personnel will grow;
* operators will gradually lose their manual control skills and, in an emergency, will be unable to perform the necessary actions;
* removing specialists from active control may make them feel insecure and lower their social status.
Automation has already caused many air crashes, and the risks keep growing: unmanned aircraft, car autopilots, and combat robots are appearing. Back in the 1970s our psychologists warned of the dangers of mindless automation. They proposed a solution: the optimal mode for controlling equipment is semi-automatic. The operator plays the leading role, maintains his skills, and in the event of a failure or accident finds it easier to take over control. Little is required of the developers of home appliances: the user should have easy access to the semi-automatic mode (today it is not easy to reach). With industrial installations it is harder: there, semi-automatic operation is only the first step toward a solution. In an atypical situation even top-class professionals make mistakes. The likelihood of accidents can be reduced by drawing on the work of engineering psychologists, for example, on means of active assistance. But to start with, the creators and users of smart machines must recognize the problem.
Harvard Business Review Russia, November 2007, p. 34