The article presents a methodological approach to determining the degree of influence of the human factor on the functioning of large information systems.
Introduction
Modern information technologies and innovative computer and telecommunication hardware and software solutions allow a new approach to the problems of creating, maintaining and upgrading large corporate information systems.
When considering such systems, one cannot ignore the role of the person whose labor, in fact, these systems are created to facilitate. A man-machine system, in which a person or a group of people interacts with a technical device in the process of producing material values, managing, or processing information, performs its tasks through the joint work of devices and people, both of which are regarded as integral components of the entire system. It should be noted that any such system is vulnerable because of its dependence on many different factors.
According to data from 1996 onwards published by the Corporation for Research in Contingency Planning for IMF Banks, 10% of the threats of information system failure come from operating personnel. According to other data from American sources, the overall degree of influence of the human factor on information systems is even higher, amounting to 30%, of which up to 18% are due to a careless or negligent attitude toward the processing or entry of information.
Equally important is the issue of protecting information systems from the threats to which they may be exposed, and of human participation in this matter. According to a survey conducted in Russia in 2005, unintentional employee errors were named the most serious threat [5].
A human-machine system is not an automaton; therefore, one of the decisive factors affecting its operation is the unpredictable human factor, and it is to the assessment of its role and importance that this work is devoted.
1. Basic concepts and definitions
Any large information system cannot operate entirely in automatic mode. There will always be operations that, by their very nature, are impossible or too "expensive" to automate. The more such operations there are, especially in the main technological chain of the information system, the more dependent it becomes on the individual properties of a person. We note a number of typical characteristics of a person interacting with an information system, on which his ability to make decisions in normal and emergency situations depends:
- adaptability;
- fatigability;
- ability to rest;
- possibility of making a mistake;
- ability to make decisions;
- ability to memorize information;
- ability to withstand information overload;
- ability to learn [1].
Let us consider a quantitative assessment of the influence of the human factor on such an important property as accessibility (or, equivalently, availability) of an information system.
The availability factor K_g is the probability that the system will be in a working state at an arbitrary point in time. It is a comprehensive characteristic of the reliability and maintainability of the system, determined by two indicators:
- T_o, the mean time between failures, and
- T_v, the mean time to recovery after a failure.

The availability factor is defined as

K_g = T_o / (T_o + T_v).

Availability (D), in contrast to the availability factor, is usually expressed as a percentage: D = K_g * 100%.
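As a small illustration (the figures below are assumed, not taken from the article), these two quantities can be computed directly from T_o and T_v:

```python
def availability_factor(t_o_hours: float, t_v_hours: float) -> float:
    """Availability factor K_g = T_o / (T_o + T_v)."""
    return t_o_hours / (t_o_hours + t_v_hours)

# Hypothetical figures: 2,000 h mean time between failures, 4 h mean recovery time.
k_g = availability_factor(2000.0, 4.0)
d = k_g * 100  # availability D as a percentage
print(f"K_g = {k_g:.4f}, D = {d:.2f}%")  # K_g = 0.9980, D = 99.80%
```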
The human factor also affects the accuracy, timeliness, and completeness of processing the information entered into and stored in the database of the information system. During long monotonous data entry, as fatigue sets in, a person begins to make input errors, to skip data, and to fail to keep within the time limits.
A characteristic such as fatigability is assessed as follows. When working under favorable conditions, average output in the final hours decreases by 6-7% for each hour the working day is extended beyond 6 hours (i.e., productivity is 94% in the seventh hour, 88% in the eighth, 81% in the ninth, etc.).
The degree of influence of the human factor on the accuracy of the data entered into the information system during monotonous execution of the input operation can be assessed using the values given in Table 1.
Table 1. The influence of the human factor on the accuracy of information input

| Indicator / hour of the working day | 1st - 6th | 7th | 8th | 9th | 10th | 11th |
|---|---|---|---|---|---|---|
| Performance (% of normal) | 100 | 94 | 88 | 81 | 74 | 67 |
| Share of error-free input | 0.96 | 0.90 | 0.85 | 0.78 | 0.71 | 0.64 |
| Real time of operation, including re-work (hours) | 6.25 | 1.11 | 1.18 | 1.28 | 1.4 | 1.56 |
| Accuracy of input results (allowing for logical checks and re-entry) | 0.999 | 0.996 | 0.994 | 0.991 | 0.988 | 0.985 |
| Upper confidence limit | 0.9995 | 0.998 | 0.997 | 0.995 | 0.993 | 0.991 |
| Lower confidence limit | 0.997 | 0.993 | 0.991 | 0.987 | 0.983 | 0.979 |
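If the fatigue figures of Table 1 are taken at face value, they can be folded into a simple lookup; the sketch below is illustrative only, with the table's error-free shares hard-coded and the throughput figure invented:

```python
# Share of error-free manual input by hour of the working day (Table 1, before re-entry).
ERROR_FREE_SHARE = {1: 0.96, 2: 0.96, 3: 0.96, 4: 0.96, 5: 0.96, 6: 0.96,
                    7: 0.90, 8: 0.85, 9: 0.78, 10: 0.71, 11: 0.64}

def expected_errors(records_per_hour: int, hours_worked: int) -> float:
    """Rough count of erroneous records entered over a working day, ignoring
    the logical checks and re-entry that would catch part of the errors."""
    return sum(records_per_hour * (1.0 - ERROR_FREE_SHARE[h])
               for h in range(1, hours_worked + 1))

print(expected_errors(100, 8))  # ~49 erroneous records over an 8-hour day
```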
One of the important issues in the problem under discussion is the "qualification" of the employees serving the information system. Employees with low qualifications, as well as newcomers, must go through stages of training and supervised work with the system, which, in turn, must be well documented.
2. Methodological approach to determining the influence of the human factor on the efficiency of the information system
Man, as a link in any man-machine system, certainly affects the reliability and efficiency indicators (completeness, reliability, timeliness of information processing) of the information system as a whole and of its individual subsystems and tasks; it is therefore necessary to take into account the influence of human errors on the system's reliability, as well as the psychological characteristics of a person as a link in this information system.
The influence of the human factor (operators, maintenance staff, service-center personnel, etc.) on the operation of an information system can be quantified by the degree of impact of personnel errors on the security and performance of the information system.
Many processes in human-machine systems contain the potential for human error, especially in cases where the time available to the operator for making decisions is limited. Although the likelihood that problems will develop unfavorably is often small, at times the actions of personnel are all that can keep an initial malfunction from progressing toward an emergency.
However, it is necessary to identify the various types of erroneous actions that may occur (a sketch encoding this classification follows the list), including:
a) an omission error, expressed in the failure to perform an action required by the information system;
b) a mismatch error, which may include:
- the situation when the required action is not performed properly (for example, failure to comply with the database administration regulations);
- an action performed with too much or too little effort, or without the required accuracy (for example, inaccuracies in filling out the entry form, inaccurate data entry, etc.);
- an action performed at an unsuitable time for it (for example, late entry of information, delay in processing information, etc.);
- an action performed with violation of the sequence of execution (for example, preparation of the final analytical report in case of an incomplete data processing process);
c) an extra action performed in place of or in addition to the required action (for example, repeated inputs of the same information, which may lead to discrepancies in the information or the appearance of duplicate data).
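For record-keeping during a survey of "manual" operations, this classification can be encoded directly; the names below are our own labels rather than terms from the cited sources:

```python
from enum import Enum

class OperatorErrorType(Enum):
    """Types of erroneous personnel actions (labels are illustrative)."""
    OMISSION = "required action not performed"
    IMPROPER_EXECUTION = "required action not performed properly"
    WRONG_EFFORT_OR_ACCURACY = "too much / too little effort, or insufficient accuracy"
    WRONG_TIME = "action performed at an unsuitable time (late entry, delayed processing)"
    WRONG_ORDER = "action performed out of sequence (e.g. report built on incomplete data)"
    EXTRANEOUS = "extra action instead of or in addition to the required one (e.g. duplicate input)"
```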
The degree of influence of the human factor on the reliability of the system can be assessed through the likelihood of errors in the process of manual data entry. Operator error is always associated with incorrect interpretation of the data the operator receives and analyzes. It is believed that for complex technical devices and complex computer tasks the error probability can reach 15%, while for simple technical devices and simple computer tasks it ranges from 1% to 5% [1].
The accuracy of the operator’s actions depends on many factors:
- lack of time (the frequency of errors in the processing of information is a logarithmic function of the rate of receipt of information);
- information overload (the number of errors increases with overload, in particular, with an increase in the number of information sources);
- degree of training (better-trained specialists make fewer errors on average);
- psychological characteristics of the person (in addition, work performed with interest usually contains fewer errors);
- "sensory hunger" (an increase in the error rate during long-term performance of monotonous work, caused by the low load on the sense organs).
An important role in reducing the number of errors is played by the degree of operator readiness. It is considered [1] that in the process of learning the frequency of errors tends to decrease, and this dependence can be approximated by the formula

q = q_c + (q_0 - q_c) * e^(-n / N),

where
- q is the error rate after training;
- q_0 is the initial error rate (before training);
- q_c is the steady-state (stationary) error rate (for trained operators);
- n is the accumulated number of input operations performed by the operator in previous training (work) cycles;
- N is the training constant, characterizing the duration of operator training.
When n = N, the difference (q_0 - q_c) decreases by 63% (since e^(-1) is approximately 0.37). It is considered [1] that the value of q_c is reached after 4-5 N. Hence, if n_1 denotes the number of information inputs at which q = q_c is reached, then N can be estimated as approximately n_1/4 to n_1/5. The obtained value of N determines the required number of information inputs constituting one cycle of training (familiarization) work with the information system.
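A minimal numerical sketch of this learning model (using the exponential form given above; the parameter values are assumed for illustration) shows how the error rate approaches its steady-state value:

```python
import math

def error_rate(n: float, q0: float, qc: float, N: float) -> float:
    """Exponential learning model: q(n) = qc + (q0 - qc) * exp(-n / N)."""
    return qc + (q0 - qc) * math.exp(-n / N)

# Assumed parameters: q0 = 0.15, qc = 0.018, training constant N = 200 inputs.
N = 200
for n in (0, 200, 400, 1000):
    print(n, round(error_rate(n, 0.15, 0.018, N), 3))
# 0 -> 0.15, 200 -> 0.067, 400 -> 0.036, 1000 -> 0.019
# At n = N the gap (q0 - qc) has shrunk by ~63%; by n = 4-5 N it is negligible.
```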
According to experimental data obtained during the processing of visual signals by operators [3], the following values of the above parameters were calculated:
- q_0 = 0.27 (beginners unable to work with the information system),
- q_c = 0.018 (operators who have completed 4 or more training cycles).
If we assume that there are no operators completely untrained in the information system, the error rate q_0 = 0.27 is not reached; as a maximum value one can take q_01 = 0.15 (see [2]).
Then the error rate of the manual input stage can be calculated as Q_rv = 1 - P_rv, where P_rv, the probability of error-free execution of the manual input stage, is estimated separately for each manual process; if the processes are sequential, the coefficients are multiplied, i.e.

P_rv = P_1 * P_2 * ... * P_M,

where
- M is the number of consecutive manual input processes,
- N_p is the number of operators for which error statistics are collected.
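Under this multiplicative assumption for sequential manual processes, the stage-level error rate can be sketched as follows (the per-process probabilities are invented for illustration):

```python
from math import prod

def manual_stage_error_rate(p_error_free: list[float]) -> float:
    """Q_rv = 1 - P_rv, where P_rv is the product of the probabilities of
    error-free execution of M consecutive manual input processes."""
    return 1.0 - prod(p_error_free)

# Three consecutive manual processes with assumed error-free probabilities.
print(round(manual_stage_error_rate([0.995, 0.99, 0.998]), 4))  # ~0.0169
```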
The probability of an operator error depends significantly on the rate at which information is received. According to [1], the probability of an error as a function of the rate of information receipt V (bit/s) can be represented by the formula

q_rv = 9.7 * 10^(-4) * V^1.77.
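Tabulating this formula for a few input rates (the rates chosen here are arbitrary) shows how quickly the error probability grows with the pace of incoming information:

```python
def error_probability(v_bits_per_s: float) -> float:
    """q_rv = 9.7e-4 * V**1.77, with V the rate of information receipt in bit/s."""
    return 9.7e-4 * v_bits_per_s ** 1.77

for v in (1, 5, 10, 20):
    print(v, round(error_probability(v), 4))
# 1 -> 0.001, 5 -> 0.0168, 10 -> 0.0571, 20 -> 0.1949
```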
The importance of assessing the impact of the human factor can be illustrated by the emergency that occurred during the operation of one of the large distributed information systems in August 2005, when the erroneous actions of an operator led to the destruction of the working database, and its restoration took several days. The situation arose because the operator, in violation of instructions, had not been making weekly backup copies of the database, arguing that the backup operation takes too long. Such accidents (the authors know of more than two dozen similar situations that occurred at different times on real large information systems) are a warning against risk assessments that focus exclusively on the technical and software components of information systems and ignore personnel errors.
In addition to determining the possibility of catastrophic situations caused by the human factor, it is useful to identify errors that reduce productivity and the effectiveness of solving tasks in the information system.
The methodological approach to determining the influence of the human factor may include the following steps:
- analysis of the task or subsystem of the information system;
- determining the degree of workload of tasks and subsystems by “manual” operations performed by personnel;
- identification of possible personnel errors;
- quantitative or qualitative determination of the influence of the human factor on the reliability of the information system and the accuracy of the information stored in it;
- recommendations on the automation of information system tasks aimed at reducing the influence of the human factor.
At the stages of examination of “manual” operations and detection of personnel errors, possible erroneous actions are identified and described during the execution of the task. Identification of personnel errors may include identifying possible consequences and causes of erroneous actions, as well as proposing measures to reduce the likelihood of this error, improve prospects for correcting and / or reducing the consequences of erroneous actions. The results of the “manual” operations survey and recommendations for their automation, thus, provide a valuable contribution to risk management in information systems, even if no quantitative assessment of the human factor is carried out.
A quantitative assessment of the influence of the human factor on the reliability and efficiency of an information system is intended to estimate the probability of correct execution of a task (P) or the probability of erroneous actions (Q = 1 - P). It can also include steps to assess the likelihood or frequency of particular sequences of undesirable events or undesirable outcomes.
The probability that an operator performs his task correctly when executing a manual operation with a mandatory check, depending on his degree of preparedness for working with the information system, is

0.985 <= P_rv <= 0.999,

or on average P_rv = 0.995.
In other words, the probability of error-free execution of a manual operation by a person (P_rv) will lie in the range from 0.985 to 0.999, depending on qualification, degree of fatigue, degree of work overload, etc., and the probability of an error (Q_rv) in the range from 0.001 to 0.015 (from 0.1% to 1.5% of the input data). A more complete dependence of P_rv on the duration of monotonous work is given in Table 1.
For manual data entry operations performed in a complex task (large information load, complex interface) without a control check, the values of P_rv will lie in the range from 0.85 to 0.982 [1, 2]. In other words, the probability of making an error (Q_rv) will be in the range from 0.018 to 0.15 (from 1.8% to 15%). In simple tasks Q_rv will be in the range from 0.01 to 0.05 (from 1% to 5%).
In general, for an information system and its main parts, it is important to identify the degree to which its individual tasks and subsystems depend on operations performed "manually" and to determine whether those manual operations can be automated. For operations that are difficult to automate for some reason (fundamental impossibility, high cost of automation), it is necessary to develop organizational or other measures that reduce the possible influence of individual human properties on the operation of the information system (documentation, training, development of brief operating instructions and emergency procedures).
The main opportunity to reduce the impact of the human factor on the system is the automation of operations in the system, the maximum reduction of mandatory operations performed by humans.
Of course, there are operations that cannot be automated or are too expensive to automate in terms of resources (for example, non-automatable semantic operations), but in such cases, as a rule, organizational and other measures can be taken to reduce the influence of the human factor.
If there is no data for determining the level of automation precisely, a rough qualitative assessment of the degree to which a task is loaded with "manual" operations can be used: "very high", "high", "medium", "low", together with an assessment of whether this is good or bad for the given task or subsystem. The proposed ratings are based on an estimate of the percentage of manual operations performed in the task, as well as the laboriousness of data entry, the difficulty of working with the user interface, and the pace of work.
The mathematical apparatus for estimating data reliability as a function of manual input errors, given in [2], allows us to compile a table of the dependence of manual input errors on the degree to which the task is loaded with "manual" operations (see Table 2). The probability of entering erroneous information lies within the indicated ranges and depends on the qualification of the operator, the degree of fatigue, and the rate of information input. Table 2 gives an estimate of possible erroneous data entry depending on external conditions.
Table 2. Approximate percentage of information containing errors depending on the degree of task load with manual operations (estimated share of data entry errors Q_rv*)

| Degree of task load with manual operations | Manual operation performed with verification | Manual operation performed without verification |
|---|---|---|
| Low | 0.0001 - 0.003 (0.01 - 0.3%) | 0.01 - 0.05 (1 - 5%) |
| Medium | 0.001 - 0.010 (0.1 - 1.0%) | 0.02 - 0.10 (2 - 10%) |
| High | 0.001 - 0.015 (0.1 - 1.5%) | 0.02 - 0.12 (2 - 12%) |
| Very high | 0.003 - 0.022 (0.3 - 2.2%) | 0.05 - 0.15 (5 - 15%) |

* excluding the effect of fatigue on work results
In turn, it is proposed to assess the load of a task (subsystem) with manual operations as follows (see Table 3). The table cells are filled in according to the following principle: depending on the rating indicated in the heading of the third column, its rows are filled with zero or one. The last row, containing the sum of all the previous ones, then characterizes the degree to which the task is loaded with manual operations.
Table 3. Assessment of the level of task load by manual operations

| No. | Characteristic | Rating: high - 1, low - 0 |
|---|---|---|
| 1 | Estimate of the number of manual operations performed in the task | 0 or 1 |
| 2 | The complexity of data entry | 0 or 1 |
| 3 | The difficulty of working with the user interface | 0 or 1 |
| 4 | The pace of "manual" work | 0 or 1 |
| 5 | Total (degree of load): | 1 - Low; 2 - Medium; 3 - High; 4 - Very high |
If each subsystem or task of the system is analyzed according to the above algorithm and Table 3 is filled in for it, then it is possible to evaluate the influence of the human factor, within the framework of the specific task (subsystem), on the accuracy of the input data (see Table 2) and on the reliability indicators of the system as a whole.
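The scoring procedure of Table 3, combined with the ranges of Table 2, lends itself to a small helper; the sketch below simply encodes the two tables (error ranges for checked and unchecked input), and the function and field names are our own:

```python
# Table 3: each of the four characteristics is rated 1 (high) or 0 (low);
# their sum gives the load level. Table 2 then gives the expected error range.
LOAD_LEVELS = {1: "low", 2: "medium", 3: "high", 4: "very high"}
ERROR_RANGES = {  # (with verification, without verification), as fractions
    "low":       ((0.0001, 0.003), (0.01, 0.05)),
    "medium":    ((0.001, 0.010),  (0.02, 0.10)),
    "high":      ((0.001, 0.015),  (0.02, 0.12)),
    "very high": ((0.003, 0.022),  (0.05, 0.15)),
}

def manual_load_assessment(num_ops: int, entry_complexity: int,
                           ui_difficulty: int, pace: int):
    """Return the load level and the (checked, unchecked) error ranges."""
    score = num_ops + entry_complexity + ui_difficulty + pace
    level = LOAD_LEVELS.get(score, "low")  # a score of 0 is treated as low load
    return level, ERROR_RANGES[level]

level, (checked, unchecked) = manual_load_assessment(1, 1, 0, 1)
print(level, checked, unchecked)  # high (0.001, 0.015) (0.02, 0.12)
```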
Using the data in Table 1, one can also determine the percentage of errors, taking human fatigue into account, depending on the duration of work.
Thus, the above methodological approach makes it possible to assess the degree of influence of the human factor both for the information system as a whole and for its individual functions and subtasks, using data that are easy to obtain without resorting to statistical research methods.
Conclusion
It is generally accepted that the main problems of creating and implementing information technologies in large organizational systems are associated with the influence of the human factor [4]. Moreover, it is safe to say that the absence of an assessment of this factor when analyzing the reliability, efficiency, and integrity of information systems reduces the accuracy of the result.
Literature
1. Druzhinin G.V. Man in Technology Models. Part I: Human Properties in Technological Systems. Moscow: MIIT, 1996. 124 p.
2. Akimova G.P., Soloviev A.V. Methodology for assessing the reliability of hierarchical information systems // A Systematic Approach to Information Management. Proceedings of ISA RAS, Vol. 23. Moscow: KomKniga, 2006. P. 18-47.
3. Tsibulevsky I. Erroneous Reactions of the Human Operator. Moscow: Sovetskoe Radio, 1979. 208 p.
4. Kireenko V.E. The human factor of corporate information systems (on the example of the Tomsk City Executive Committee) // Bulletin of Tomsk State University, No. 275, April 2002.
5. Vetlugin K. The human factor // Computerworld, No. 11, 2006.