
How to measure an interface. A quantitative criterion

From time to time I feel a natural urge to find out which piece of software is more convenient, or to prove a product's advantage with adequate, numerical arguments (rather than the usual hand-waving).

Having taken a serious interest in this topic, I spent a long time looking for solutions, and a year ago I wrote and defended a thesis, “Determining a Quantitative Assessment of the Quality of Human-Computer Interaction”. This article is about it.

There are several ways to determine how convenient an interface is.

Working techniques


User questionnaire


There are many internationally recognized questionnaires (SUMI, SUS, SEQ, etc.): lists of anywhere from one question to dozens, along the lines of “Did you find this system difficult to use?”. The thesis includes an overview of several of the most popular ones.
Experiments on live users


You can run a series of experiments and record numerical parameters (for example, task completion time, number of errors, or the average difficulty rating of the tasks).

Expert review


You can invite a reputable expert and hope that over their 5/10/20 years of experience they have learned a thing or two.

But these methods require hiring an expensive usability specialist (or even several) and rounding up users off the street.
So many developers, horrified by the prospect of interacting with end users, have come up with ways to assess the complexity of a system formally.

Techniques that do not require the participation of a specialist or users


* if you want to learn more about any of these methodologies, a review with references to the literature is in the thesis

Estimating the average time a user needs, using GOMS / KLM

Using published average operator times, you simply add up how long an average user would take to perform the basic tasks. The catch is that it is unclear who decides which user scenarios to calculate.
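As a rough illustration (not a calculation from the thesis itself), a KLM estimate can be sketched in a few lines of Python, using the classic average operator times; the "save via toolbar" scenario is invented:

```python
# Rough KLM (Keystroke-Level Model) estimate: sum standard operator
# times for the sequence of physical/mental actions a task requires.
KLM_TIMES = {
    "K": 0.2,   # keystroke (skilled typist)
    "P": 1.1,   # point with the mouse at a target
    "B": 0.1,   # press or release a mouse button
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(operators):
    """Total predicted time in seconds for a sequence of KLM operators."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical scenario "save a file via a toolbar button":
# mentally prepare, move hand to mouse, point at the button, click.
save_via_toolbar = ["M", "H", "P", "B", "B"]
print(f"{klm_estimate(save_via_toolbar):.2f} s")  # 3.05 s
```

Comparing such sums for two candidate designs of the same scenario is the usual way the model is applied.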

RGB-profile entropy

This evaluates the visual complexity of the system. Obviously, a program with ten panels and twenty buttons is more complex than the Google start page. Naturally, the method is very approximate, but it is simple and fast.
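A minimal sketch of the idea (my own illustration, not tied to any particular paper's exact formula): compute the Shannon entropy of the distribution of pixel colors, so that a screen full of varied colors scores higher than a mostly uniform one. The toy "screenshots" are invented.

```python
import math
from collections import Counter

def rgb_entropy(pixels):
    """Shannon entropy (bits) of the distribution of RGB values.
    More varied colors -> higher 'visual complexity' score."""
    counts = Counter(pixels)
    total = len(pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy screenshots: a nearly blank page vs. a busy one.
blank = [(255, 255, 255)] * 95 + [(0, 0, 0)] * 5
busy = [(r, g, 0) for r in range(10) for g in range(10)]
print(rgb_entropy(blank) < rgb_entropy(busy))  # True: busy page is "harder"
```

In practice the pixels would come from a real screenshot (e.g. loaded with an imaging library), but the scoring step is just this histogram entropy.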

Information efficiency

The ratio of the minimum amount of information needed to complete a task to the amount of information the user must actually enter. For example, if an informational modal window is shown with a single “OK” button, the information the user enters (the button click) is completely useless, which lowers the information-efficiency score.
It is not obvious how to decide whether information is needed or not in more complex cases. Most likely, you cannot do without an expert here either.
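The ratio from the paragraph above can be sketched as follows; treating each forced click as roughly one bit of input is my own simplifying assumption, not part of the original metric's definition:

```python
def information_efficiency(bits_needed, bits_entered):
    """Ratio of the minimum information a task requires to the
    information the user actually has to supply (both in bits).
    1.0 = no wasted input; 0.0 = the input carries nothing needed."""
    if bits_entered == 0:
        return 1.0  # nothing is asked of the user at all
    return bits_needed / bits_entered

# A yes/no choice (1 bit needed), but the user must first dismiss an
# informational "OK" popup (~1 extra bit of input carrying no needed
# information): efficiency drops by half.
print(information_efficiency(1, 2))  # 0.5
# The single-button "OK" window from the text: 0 bits needed, 1 entered.
print(information_efficiency(0, 1))  # 0.0
```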

XML Tree Analysis

The more complex the code describing the interface, the more likely the program is difficult for the user.
A very debatable claim; most likely it works only as a very rough estimate (Photoshop is more complex than Notepad, since there is more code).
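A crude version of this metric can be sketched with Python's standard library; the layout markup and the choice of "element count plus nesting depth" as the complexity proxy are my own invented example:

```python
import xml.etree.ElementTree as ET

def ui_tree_stats(xml_text):
    """Crude structural complexity of a UI described in XML:
    total element count and maximum nesting depth."""
    root = ET.fromstring(xml_text)

    def walk(node, depth):
        yield depth
        for child in node:
            yield from walk(child, depth + 1)

    depths = list(walk(root, 1))
    return len(depths), max(depths)

# Hypothetical markup for a small dialog.
layout = """
<window>
  <panel>
    <button/>
    <button/>
  </panel>
  <statusbar/>
</window>
"""
print(ui_tree_stats(layout))  # (5, 3): five elements, three levels deep
```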

Number of classes into which interface objects can be divided

I settled on this last, clumsily named method.
Before describing its essence, let me say up front that its scope is rather limited. When I first learned about it, it shone with a fabulous light, but while working on the thesis it somehow lost its luster and deflated. Most likely, the method can be applied to small widget-style applications, where the main tasks and fixed sequences of actions can more or less be identified. It is not suitable for creative applications with rich controls and a non-formalizable result (Photoshop, AutoCAD, etc.).

I based my work on the system-complexity assessment methodology of Tim Comber and John Maltby, as set out in Comber T., Maltby J.R. Investigating Layout Complexity. Proc. CADUI 1996.

We denote the complexity of the system as C. In accordance with Claude Shannon's theory of information entropy, as modified by Gui Bonsiepe, complexity is determined by the formula

C = −N · Σ p_i · log2(p_i), with the sum taken over the n classes,

where N is the total number of objects,
n is the number of object classes,
n_i is the number of objects in the i-th class,
p_i is the share of the i-th class's objects among all objects (p_i = n_i / N).

To apply this to video display terminals, we attribute complexity to both the size and the placement of objects:
C = C_S + C_D,
where C_S is computed over object-size classes and C_D over mutual-placement classes.

The complexity of an individual object:
C_O = C / N

Of course, this is a primitive approach. But it is one that at least tries to group objects by type. If the buttons are all the same size and in the same place, everything is simple. If all the controls are different, then the functions they perform are probably also diverse and specific.
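Putting the formulas above together, a minimal sketch in Python (the class counts are invented examples):

```python
import math

def layout_complexity(class_counts):
    """Comber-Maltby style complexity: C = -N * sum(p_i * log2(p_i)),
    where p_i = n_i / N is the share of interface objects in class i.
    Compute it separately for size classes and for placement classes
    and sum the results to get C = C_S + C_D."""
    N = sum(class_counts)
    return -N * sum((n / N) * math.log2(n / N) for n in class_counts if n)

def object_complexity(class_counts):
    """Average complexity per object: C_O = C / N."""
    return layout_complexity(class_counts) / sum(class_counts)

# Ten identical buttons fall into a single class: zero complexity.
print(layout_complexity([10]) == 0)           # True
# Ten controls of ten different sizes: maximal spread for N = 10.
print(round(layout_complexity([1] * 10), 2))  # 33.22, i.e. 10 * log2(10)
```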

The original algorithm evaluates only one screen, the application's main one. I evaluate the complexity of the entire sequence of screens a user must pass through to complete a task. A complexity score is calculated for each screen, and if some interface elements stay constant while the task is performed, a reduction factor is applied to them.

To tie the assessment closer to real life, I proposed introducing coefficients of user significance and task significance.
Users are divided into several groups depending on their needs and the tasks they perform; a significance coefficient is assigned to each user type.
It depends on:
- the number of users of this type,
- how often they use the product,
- the value of their time, or the marketing significance of this type.


C_uk = Σ K_tn · C_tn, summed over the tasks of the k-th user type,

where C_uk is the system's complexity for the k-th user type,
C_tn is the complexity of the n-th task,
K_tn is the importance coefficient of the n-th task for this user type.

Once we know how difficult the interface is to use for each group of users, we can calculate the final complexity: sum the per-type complexities multiplied by the user-type weighting coefficients.
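The aggregation described above can be sketched as follows; all coefficients and complexity values are invented for illustration:

```python
def user_type_complexity(task_scores):
    """C_uk: complexity for one user type -- task complexities C_t
    weighted by task-importance coefficients K_t.
    task_scores: list of (K_t, C_t) pairs."""
    return sum(k_t * c_t for k_t, c_t in task_scores)

def total_complexity(user_types):
    """Final score: per-user-type complexities weighted by the
    user-type significance coefficients K_u.
    user_types: list of (K_u, [(K_t, C_t), ...]) pairs."""
    return sum(k_u * user_type_complexity(tasks) for k_u, tasks in user_types)

# Hypothetical data: frequent "operator" users vs. occasional "admins".
operators = (0.8, [(0.7, 20.0), (0.3, 35.0)])  # two tasks
admins = (0.2, [(1.0, 50.0)])                  # one task
print(total_complexity([operators, admins]))
```

Comparing this single number between two interface variants, with the same user types and tasks, is the intended use.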

The main difference between this approach and the original method is that it is grounded in real people with real needs. A program cannot be abstractly complex; it can be difficult for people whose tasks it does not perform, performs slowly, or buries under unnecessary information. In essence, the concept of complexity is redefined around the tasks.

In short

So, I took a technique that determines interface complexity by measuring how many classes all the controls can be divided into and how many controls each class holds.
I proposed first identifying several user types and assigning each a coefficient based on their numbers, frequency of use, and the cost of their time.
For each user type, identify tasks with their own significance coefficients.
For each task, build the sequence of screens and score it using the original method.
The result is a number reflecting how simple the interface is for specific tasks performed by the relevant users.

The weak point: you still have to identify the users and tasks yourself. But once you figure out which user types there are and what tasks they have, you can quickly and cheaply estimate how much better or worse alternative versions are. It only works for fairly simple programs.

The usefulness: instead of a vague expert comment, you get numbers that can be compared.

References (abridged, filler removed)
  1. Danilyak V.I. The Human Factor in Quality Management: An Innovative Approach to Ergonomics Management. Moscow: Logos, 2011. — written by someone who worked on aircraft control-panel interfaces; a lot of padding, but there are interesting parts
  2. Sviridov V.A. The Human Factor. — Nafanin.deda.ru/human-factor/human-factor-spreads.pdf — the author writes about developing aircraft control systems. Not a lot of information, but as an autobiography it is interesting and somehow touching
  3. Randolph G. Bias, Deborah J. Mayhew. Cost-Justifying Usability: An Update for the Internet Age, Second Edition. Morgan Kaufmann, 2005. — about how much a bad interface costs; interesting statistics
  4. Stickel S., Ebner M., Holzinger A. The XAOS Metric — Understanding Visual Complexity as a Measure of Usability. HCI in Work & Learning, Life & Leisure, Springer, 2010, pp. 278–290. — about automatically determining complexity from the number and variety of controls
  5. Bevan N. International Standards for HCI and Usability // International Journal of Human-Computer Studies. — 2001. — 55 (4). — the ideas are all a bit vague, but Bevan's name turns up in reference lists all over the place
  6. Bevan N. Measuring usability as quality of use // Software Quality Journal. — 1995. — 4, pp. 115–140
  7. Sauro J. 10 Benchmarks For User Experience Metrics. — www.measuringusability.com/blog/ux-benchmarks.php — the whole site is interesting; the company has long been trying to measure usability. It doesn't come out perfectly either, but Sauro publishes scientific articles and is generally on top of the subject



Besides the specialized part, the thesis has a rather entertaining section with stories about all the horrors that have happened because of usability failures, a review of statistics on the financial damage that lousy interfaces cause, and a survey of all sorts of ways to evaluate usability (clumsily collected and translated from a dozen sources).
You can look through the table of contents, and if something catches your eye, download the thesis itself.

Contents of the thesis
1. Introduction
1.1. The place of ergonomics in the science of quality
1.2. Conceptual apparatus
1.3. The economic effect of increasing usability
1.4. Economic effect
1.5. Safety Impact
1.6. Examples
1.6.1. Economic effect
1.6.2. Safety Impact
The crash of Flight 965
Disabling of a warship
Crashes of remotely piloted aircraft
The 1972 air crash
Accident at the Three Mile Island nuclear power plant
1.7. The need to evaluate user interface properties. Conclusion
2. Review of literature. Existing solutions for software ergonomics evaluation
2.1. The history of the development of ergonomic standards of the interface
GOST 28195-89 Software Quality Assessment. General provisions
ISO standards
Conclusion
2.2. Existing criteria for general software certification
Testimonials from certification laboratories
Conclusion
2.3. Existing ways to evaluate interface ergonomics
2.3.1. Comparison and Compliance Estimates
Standardization and compliance with the working environment
Benchmarking
Examples
2.3.2. Expert review
Heuristic evaluation
2.3.3. User survey on the basis of interaction with the system
General requirements. Respondents
How many respondents are needed
Questionnaire by words
User loyalty (Net Promoter Score, NPS)
Software Usability Measurement Inventory, SUMI
System Usability Scale, SUS
Estimating task difficulty from a single question
2.3.4. Quantitative estimates based on experimental data
Number of mistakes
Average difficulty rating of the tasks
Single Usability Metric, SUM
Examples
2.3.5. Formal evaluation methods
Information Search
Information efficiency
KLM-GOMS models
Estimating the complexity of the system by Tim Comber and John Maltby
XAOS - Actions, Organizational elements, Summed entropy of RGB values
LOC-SS Complexity Measurement Model
Examples
2.4. Quality management system as the basis for the development of ergonomic interface
Identification of the interested person
Survey of stakeholders
Usage scenarios
3. Development of quantitative evaluation
3.1. Restrictions on the use of formal numerical evaluation
3.2. Preliminary research. Expert participation
3.2.1. Data source. Requirements for respondents
3.2.2. Definition of user types
3.2.3. Determining the proportions of the number of users of each type
3.2.4. Frequency Evaluation
3.2.5. Determining the cost of users' time, or their marketing significance
3.2.6. Calculation of the coefficient of significance of the user from the data, or the assignment of the coefficient by the expert
3.2.7. Ranking tasks for each type of user
3.2.8. Selection of the sequence of screens needed to solve the problem
3.3. Formal calculation
3.4. Class division
3.5. Sample Calculation Example
User research
Check
4. The economic part
4.1. Schedule construction
4.2. Constructing an algorithm for evaluating human-computer interaction
4.3. Cost estimate for the human-computer interaction evaluation
Conclusion
6. References

Source: https://habr.com/ru/post/166329/

