The fond dream of automation: some day the machine will finally relieve the human being of the difficult task of controlling equipment, and he will be able to rest from his labors. Vexing human errors will then be excluded forever, since machines do not get tired, do not suffer from stress, do not get distracted and do not forget anything. It seems that full automation is now within reach, especially against the background of the impressive growth in computing power. There are encouraging examples, too: thanks to automation, Airbus crews have been reduced from three people to two, and the computer has come close to replacing the co-pilot, while our cosmonauts often act as "passengers" of spacecraft that launch, dock and descend from orbit in automatic mode.
Alas, in reality things are not so good.
Paradoxically, everything comes down to the notorious human factor. After all, the "dumb" box called a computer has to be entrusted not merely with controlling some technical object, but ultimately with a person's life and the lives of many people, which may be cut short if this box balks and behaves unpredictably, or simply breaks down. And that this can happen is clear; otherwise the human being (pilot, cosmonaut, nuclear power plant operator or any other operator) at the console, and the console itself, would long since have become unnecessary. All hope rests on the person being able to back up the automation and take over control if anything goes wrong. He is like a goalkeeper in football: the last reserve, the last hope. There are unlikely to be many volunteers willing to fly on a fully automatic passenger plane without a crew.

Each technological improvement in automation, willingly or not, pushes the operator to the periphery of direct control of the equipment. This gives rise to another curious circumstance: the controls seem to be automatic, yet the person is still responsible for safety. It turns out that he is responsible for what he does not do! Naturally, no operator likes this.
In effect, the pilot becomes a passive observer, since he monitors the operation of the onboard systems more than he directly pilots the aircraft. Automation requires constant attention, "scanning," as pilots call it; the emphasis of the activity shifts to mental or, in psychological terms, cognitive processes. It turns out that automation does not live up to expectations: although manual tasks are eliminated, the complexity of control increases. The feeling of being excluded from the control loop can cause uncertainty and anxiety, or boredom, and former professional pride gives way to a sense of confusion, inferiority and lost dignity when the operator feels stupid next to the smart automation.
What, then, will be the role of the crew of such an aircraft? It may well be reduced to making announcements in the passenger cabin. And the grim joke of one old pilot will come true: to the question "How does the first pilot differ from a goose?" the answer is, "Geese sometimes fly."
When automated control systems are used, an inevitable question arises: even if the system becomes more reliable and acquires a friendly interface, should the operator follow its advice without hesitation, and will this not turn into an abdication of responsibility at the very moment when critical independent judgment is needed? Operators may come to rely on automation and, when they encounter an unexpected problem, try to ignore it instead of switching the automation off and taking manual control. On the other hand, people may be mistaken in assessing a dangerous or threatening situation, taking it for a safe one.
The evolution of automation in the space program took a different course than in aviation. Human capabilities for working in space were unknown, and so both the Soviet and the American projects for the first manned spacecraft, Vostok and Mercury respectively, were built around the priority of automatic control over manual control, which was regarded as a backup for abnormal situations. In effect, the human being was assigned the role of a stopgap, an "understudy" for potentially unreliable elements.
This unconditional reliance on automatics and distrust of the cosmonaut persisted in the Russian space program, unlike the American one, in later years as well. Many subsequent flights showed that the crew performs its function as a backup link in emergency situations far from flawlessly. The reason lay not in shortcomings of ground professional training, but in the cosmonaut's exclusion from the control process while in automatic mode.
It should be noted that many specialists, engineering psychologists and cosmonauts actively opposed the drive toward total automation. Shortly before his death, S. P. Korolev himself admitted the fallacy of the chosen course: "We have over-automated..."
In the 1980s the national space program encountered an unconventional type of automation failure, which forced a revision of the basic principles of ensuring reliability. The main such principle is considered to be redundancy: standby units are kept ready to replace failed equipment. The new type of failure was associated not with equipment breakdowns, but with inadequate operation of the automation when diagnosing the state of the onboard systems.
Thus, during the first Soviet-French space flight aboard Soyuz T-6 in 1982, in close proximity to the Salyut-7 orbital station, the automatics mistakenly diagnosed a failure of both the main and the backup units of the angular velocity sensors, which led to an emergency termination of the automatic rendezvous mode. The remaining approach was successfully performed by the crew in manual mode. As it turned out, the cause of the inadequate behavior of the automation was a slight difference between the calculated values of the ship's moments of inertia, supplied by the developers, and the real ones. Previously this difference had not affected control; it showed up only in a situation of a rapid, complex turn, which arose for the first time during this flight.
The calculated values of the moments of inertia were used in the programs of the onboard computer complex that automatically monitored the operation of these sensor units. When a mismatch arose during the turn between the calculated and measured values of the angular velocity, the monitoring program, relying on the established quantitative criteria for assessing reliability, classified the situation as a failure of the sensors themselves, first of the main unit and then of the backup one. In reality, the false diagnosis was the result of a complex inter-system interaction between the design features of the ship and the motion control system, an interaction the developers had not yet taken into account. It was therefore fundamentally impossible to foresee that the cause of the mismatch would be not a failure of the sensors, but the inaccuracy and inadequacy of the quantitative criteria in the automation programs.

This example clearly reveals the general logic behind the possible inadequacy of quantitative criteria in automatic control programs: a discrepancy between the measured and calculated parameters of a system's operation is interpreted by the automation as a failure of a unit, even though the unit is functioning normally, while the real reason for the mismatch lies in the ambiguity of formalizing inter-system interaction in the control models used by the developers.
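To make this logic concrete, here is a minimal sketch, in Python, of a threshold-based failure monitor of the kind described above. It is purely illustrative: the numbers, the threshold and the sensor names are hypothetical, and it does not reproduce the actual Soyuz onboard algorithms.

```python
# Illustrative sketch only: a threshold-based failure monitor.
# All values, names and thresholds are hypothetical.

FAILURE_THRESHOLD = 0.01   # rad/s: allowed mismatch between model and sensor

def predicted_rate(torque, moment_of_inertia, dt, current_rate=0.0):
    """Rigid-body prediction: omega_new = omega + (M / I) * dt."""
    return current_rate + (torque / moment_of_inertia) * dt

def diagnose(sensor_readings, model_rate):
    """Declare 'failed' every sensor whose reading disagrees with the model."""
    return [name for name, reading in sensor_readings.items()
            if abs(reading - model_rate) > FAILURE_THRESHOLD]

# The designers' (slightly inaccurate) moment of inertia vs. the real one.
I_design, I_real = 2000.0, 2150.0     # kg*m^2, hypothetical values
torque, dt = 50.0, 10.0               # an energetic, rapid turn

model_rate = predicted_rate(torque, I_design, dt)   # what the program expects
true_rate = predicted_rate(torque, I_real, dt)      # what the ship actually does

# Both the main and the backup sensor measure the true motion correctly,
# yet both differ from the (slightly wrong) model prediction by more than
# the threshold, so the monitor writes them off one after the other and the
# automatic mode is aborted, although no hardware has failed at all.
readings = {"main_rate_gyro": true_rate, "backup_rate_gyro": true_rate}
print(diagnose(readings, model_rate))
# -> ['main_rate_gyro', 'backup_rate_gyro']

# In a slow maneuver (smaller torque or shorter time) the same model error
# stays below the threshold, which is why it never showed up earlier.
```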
Essentially similar situations arose during the approach of the scientific modules Kvant-1, Kvant-2 and Kristall to the Mir orbital station. In all three flights the automatic approach was aborted because of an erroneous diagnosis of failures in various units of the rendezvous control system. Each module was successfully brought in and docked with the station on the second attempt, after the quantitative criteria in the corresponding control programs were changed or cancelled by commands from the ground. The failure of the automatic docking of the Progress M1-4 cargo ship to the International Space Station at the end of November last year, aborted because of a large roll in the final approach phase, also turned out to be due to imperfections in the software.
It turns out that the limited adequacy and ambiguity of quantitative criteria for the reliability of specific equipment, arising from the multiplicity and indirectness of the links between different systems and their mutual influence, can lead to situations the developers did not foresee. Their paradox is that, despite the automation's emergency diagnosis, the technical systems themselves continue to function normally! But then the basic principle of ensuring reliability, namely substituting backup units for failed ones, ceases to work, since the automation will shut down any number of perfectly serviceable backup units, however many there are. In such a case control can be maintained only if the operator backs up the automation, relying on qualitative rather than quantitative criteria for assessing reliability, criteria that permit a holistic analysis of the emerging situation.
Leaving cosmonautics aside, the difference between qualitative and quantitative criteria can be illustrated by the sensational 1997 chess match between Garry Kasparov and the Deep Blue computer. It is in chess, after all, that the notions of "material" (quantity) and "quality" coexist. Kasparov himself considers the number one problem for a computer to be its lack of flexibility: every machine has a rigid scale of priorities (that is, criteria) which cannot be changed during the game to suit a particular position. The machine always tries to translate quality and time factors into numbers, which are the mathematical equivalent of material.
As is well known, the world champion lost the match, and many believe that this event will affect the future relationship between man and computer. According to Kasparov, the main cause of his stress and uncertainty, apart from some important organizational issues, was the loss of the second game.
In the critical position the grandmaster sacrificed three pawns, which by the machine's reckoning corresponded to an advantage of +300 (each pawn counting as +100). In return for the sacrifices he obtained good tactical chances, although there was still no clear win. But the machine did not take the pawns; it chose a line with an advantage of +49, that is, it valued the positional drawbacks at -251. Kasparov goes on: "This is the most important moment in the development of computer technology, if it is true. And whether it is true can be proved very simply, with the help of printouts. You have to show us how the machine changed its mind. I want to know how they (the programmers. - A. K.) managed to explain to the machine that the value of the various positional effects and positional deficiencies equalled -251. In my opinion, this is impossible. Computers do not yet know how to compare material and quality."
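For illustration only, here is a minimal sketch of the kind of fixed-weight evaluation Kasparov is describing: every factor, material or positional, is converted into a number (in centipawns, one pawn = 100) and the program simply compares the totals. The features and weights below are hypothetical and have nothing to do with Deep Blue's real evaluation function.

```python
# Illustrative sketch of a fixed-weight chess evaluation: material and
# positional factors are all reduced to one number and compared. The weights
# are hypothetical and cannot be changed during the game.

MATERIAL = {"P": 100, "N": 300, "B": 300, "R": 500, "Q": 900}   # centipawns

POSITIONAL_WEIGHTS = {        # the machine's fixed "scale of priorities"
    "king_weakness": -40,     # per weak square around the own king
    "open_file_for_rook": 25,
    "passed_pawn": 30,
    "bad_bishop": -20,
}

def evaluate(material_balance, positional_features):
    """Return a single score: material plus weighted positional terms."""
    score = material_balance
    for feature, count in positional_features.items():
        score += POSITIONAL_WEIGHTS[feature] * count
    return score

# Line A: grab three pawns (+300 material) at a serious positional cost.
line_a = evaluate(3 * MATERIAL["P"], {"king_weakness": 6, "bad_bishop": 1})
# Line B: decline the pawns and keep a sound position.
line_b = evaluate(0, {"open_file_for_rook": 1, "passed_pawn": 1})

print(line_a, line_b)   # 40 55: the program picks the larger number, i.e. it
                        # declines the material, because the fixed weights
                        # happen to price the drawbacks that way.
```

The point of the sketch is not the particular numbers but the mechanism: whatever "quality" means to a human, the program can act on it only after it has been priced in the same units as material, by weights fixed before the game begins.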
After the match the position in question was repeatedly analyzed with various chess programs, and the pawn sacrifices were always accepted. This fact, together with the refusal of the computer's developers to provide printouts of the games, led Kasparov to conclude that the critical move was made not by the machine but by a human. He stresses: "Even the mere possibility of interfering in the machine's play, of interrupting the main line with the words 'Stop. Don't go there, give up this line,' would improve the computer's play incredibly. After all, my strategy was built on the assertion that the machine would make a positional error. I built my whole strategy on that. I played strange chess, not the way I usually play; I tried to exploit our human advantages over the machine. But if I am deprived of my main trump card, I am doomed." In essence, this is about man's advantage over the machine in the capacity for qualitative, meaningful thinking rather than quantitative, formal thinking.
Returning to cosmonautics: the thesis that control cannot be fully formalized might at first glance seem to be refuted by the creation of the reusable spacecraft Buran and its successful first test flight, carried out in fully automatic mode on November 15, 1988. Yet the details of even this flight to a certain extent confirm the assumption made.
As is known, during its descent and landing Buran was accompanied by a MiG-25 fighter piloted by test cosmonaut Magomet Tolboev. At the final stage the unexpected happened: on reaching the runway, Buran did not enter the expected right-hand landing turn but crossed over the runway (see the diagram on the left). Both the pilot and the ground controllers were bewildered for a while.
As it turned out, the reason for the strange maneuver was a strong crosswind. Now imagine that a human had been piloting not the escort plane but Buran itself. In the situation described he would have had only two options: either take control and land the craft manually, or leave the automation alone. In the first case it would have turned out that the pilot had switched off normally functioning automation, which could not be considered an entirely correct decision even after a successful landing and would have been judged a serious error had the outcome been unfavorable. In the second case, which is psychologically all but impossible for a pilot, he would simply have become a hostage of the automation. But then who could guarantee him that the automation would work correctly and that the orbital plane would not fly off somewhere into the steppe and break up there? After all, not only the most experienced test pilot, who had made dozens of flights on flying laboratories and on the Buran analogue, but even the ground control services could not immediately grasp the meaning of the maneuver the automatics had performed. A stalemate arises: whatever the pilot does, he is still "guilty." So even the normal operation of automation does not guarantee that a person will have no problems in controlling the equipment.

Automation is penetrating transport more and more. Many modern car models are equipped with satellite navigation systems and with computers that perform technical diagnostics and even direct control functions. Driver-assistance systems are becoming ever more active, and test versions of autopilots for driving in difficult urban conditions are already being built. Characteristically, the main obstacle to their introduction is not technical difficulty but the absence of laws under which a computer could be held liable (again the problem of responsibility).
What is the result? On the one hand, the probability of unforeseen situations makes it impossible to rely on automation completely. On the other hand, a person, though better at finding his way in difficult, non-standard situations, may stumble out of the blue. The operator's capacity for a qualitative, meaningful analysis of a situation is determined by his professional experience, knowledge and skills, his capacity for creative thinking, and his psychological readiness to make a responsible decision in extreme conditions. But even top-class professionals sometimes fail.
Nor is a person always able to substitute for automation in initially unknown and uncertain situations, in nonlinear and unstable processes of inter-system interaction. Under such conditions erroneous and unauthorized actions, violations of professional norms, and even outright refusal to act are possible.
The limited capacity of operators in an unforeseen situation of inter-system interaction was also revealed by the accident at the fourth unit of the Chernobyl nuclear power plant. Recall, incidentally, that there were no technical failures there, and one of the main causes of the accident, besides personnel errors, was the assumption by the designers of the planned experiment that the electrical and the nuclear processes were independent of each other.
Thus the main problem of automation is not really the computer or the software. The problem is largely psychological, since it brings into collision the generally irreconcilable positions of two professional groups: the developers of equipment and automation on the one hand, and operators on the other. The former (hoping to maximize reliability) ask: what can be automated, what prevents us from handing all functions over to the computer, and only then, what can be left to the human? The latter (hoping to get the greatest satisfaction from their work) ask: what tasks can a person perform himself, and in which can he not do without the help of a computer?
Since the problem of automating the control of equipment clearly has a psychological dimension, it falls within the scope of engineering psychology, one of the branches of psychological science.

[The remainder of the article is garbled in this copy and cannot be reconstructed; the surviving fragments show that it discussed engineering-psychology approaches to dividing functions between human and machine, with references to N. Wiener, P. M. Fitts and N. Jordan, and to the concept of operator workload.]
«» №12 26 2001