Translation of an article by Steven Hoober.
Many people have read and still refer to the 2013 article "How Do Users Really Hold Mobile Devices?". But since then a great deal of research and experimentation on applying various techniques in real products has been done, and many more articles have been written. Over the years we have learned far more about how people hold their smartphones and tablets and how they interact with them by touch. And all this often unexpected data is missing from the old 2013 article - which is its main problem. It made assumptions based on observations of desktop use, on standards written for older interaction methods, and on incidental situations and misinterpreted data. Thanks to further research and better analysis, we have been able to discard the mistaken assumptions and get to the truth.
Today many still refer to outdated, less accurate articles on this topic. Sometimes readers combine one piece of irrelevant data with another and draw mistaken conclusions from the mix. This article aims to put an end to that and to offer an up-to-date picture of how people interact with touch screens, and of how you can use this information to build a better product.
Trust the data, not your "feelings"
Unless you are very, very careful, it is easy to fall prey to misconceptions when analyzing your own observations.
You probably own a smartphone, browse the web on it, use a few favorite applications, and assume you understand how everyone uses their phones. But you are mistaken! Most of us use only one smartphone, and as a designer you are far more likely to own an iPhone - even though most people use Android devices.
In addition, we perceive many things subjectively, and plenty of rumor and misconception surrounds cognitive psychology, psychology in general, patterns, and design standards.
Touch is not a natural interaction paradigm for mobile devices, so we have to study how users actually behave and work out new touch-design paradigms. Moving from scroll-and-select phones, or from mouse and keyboard, to touch created a whole new layer of interaction problems to solve - some of them very hard, because touch interaction is still relatively new to us. We are still developing the appropriate interaction patterns, and the scope and depth of our understanding of how touch screens work is limited. Too often we make design decisions based on curiosities, opinions, personal mistakes, unverified information, and rumor.
Touch technology
Before turning to the results of the observations, let's briefly discuss the technology. Here are some key points about how touch screens work - and about their history - that you need to know. They will help you understand the behavior we see today and explain some of the problems found in the data.
Light pen
First came the light pen. It was the predecessor of the mouse as a pointing device for computers - and it is still in service. Today we call these devices styluses, but at first they were called light pens. The first production system to use one was SAGE, the giant Semi-Automatic Ground Environment network built for the US Air Force.
The Nintendo Duck Hunt light gun worked on the same principle: the pen was not a pointer but a light sensor tightly synchronized with the display's timing, which made it possible to tell which part of the screen the gun was aimed at.
By the end of the 1960s, the light pens available for desktop workstations were not too different from those used today. They allowed all familiar interactions to be performed, including pointing, copying, pasting, and using gestures.
Digital pens are still in active use, though they now rely on other technologies. Some can now sense pressure and tilt angle.
Infrared Touch Screens
Some of the first touch screens used a grid of infrared beams - running vertically and horizontally - to determine the position of a person's finger. In the 1980s such screens were used in ATMs and other public devices like museum kiosks.
Because infrared beams detect any object on the screen, these devices imposed a few simple rules of interaction: cuffs and paper had to be kept away from the screen, so infrared displays had a wide bezel and sat fairly deep inside it. The beams themselves were quite wide, so they registered the user's whole finger. Although such displays eventually learned to position touches quite accurately, most models assumed that the user was selecting fairly large areas of the screen. In effect, the whole screen was a grid of selectable zones, which meant buttons had to be large. But for the applications of that era these displays were perfectly suitable and reliable.
Resistive touch screens
Touch screens became widespread after resistive technology reached the market; it let people experience touch as a natural form of interaction. The name refers to how a press is registered: the top layer is flexible transparent plastic, and when the user presses it with a finger or stylus, a grid of very fine wires is pushed against the grid beneath it, which is how the press is positioned.
Such screens can be very responsive and accurate, but the flexible top layer forces a compromise between responsiveness and durability. The screen scratches and wears easily, and the top layer can even be torn off. Highly responsive systems are more fragile; more durable models are harder to use and may need a passive stylus so you can press hard enough.
Until recently this was one of the best touch-screen technologies for low-cost devices - and for certain environments - but the demand for greater responsiveness and better materials has pushed resistive screens out of the market.
Capacitive touch screens
Today, in 2017, when someone talks about a touch screen, they mean capacitive technology. Such screens are now everywhere: in smartphones, tablets, entertainment systems, cars, and kiosks, and they are finding their way into more and more other small gadgets.
Capacitive screens rely on the electrical conductivity of the human body. That is why they do not work with the old light pens as a stylus, why you have to take off your gloves, and why there are sometimes problems if your skin is too dry. The finger acts as a capacitor whose presence is measured at the nodes of a grid - built from layers of electrodes along the X and Y axes - sandwiched between the display glass and the protective plastic or glass layer.
Simplified diagram of capacitive screen layers:

Although high-resolution sensors already exist, they are used only in special devices such as fingerprint readers. Most touch screens use a very coarse grid, and more precise positioning is computed by interpolation.
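To make that concrete, here is a rough sketch - not any vendor's actual firmware, and with made-up grid pitch and signal values - of how a signal-weighted centroid can resolve a touch to a point finer than the physical sensor grid:

```typescript
// Illustrative sketch only: a weighted-centroid interpolation of the kind
// used to turn coarse capacitive grid readings into a precise point.
// The 5 mm grid pitch and the signal values below are assumed.

const GRID_PITCH_MM = 5; // assumed spacing between sensor lines

// signal[y][x] holds the capacitance change measured at each grid node
function interpolateTouch(signal: number[][]): { xMm: number; yMm: number } {
  let sum = 0, sumX = 0, sumY = 0;
  for (let y = 0; y < signal.length; y++) {
    for (let x = 0; x < signal[y].length; x++) {
      sum += signal[y][x];
      sumX += signal[y][x] * x;
      sumY += signal[y][x] * y;
    }
  }
  // The reported touch point is the signal-weighted center of the
  // activated nodes, so it can land between physical sensor lines.
  return { xMm: (sumX / sum) * GRID_PITCH_MM, yMm: (sumY / sum) * GRID_PITCH_MM };
}

// A finger covering two adjacent 5 mm columns unequally resolves to a
// point between them: { xMm: 6.25, yMm: 5 }
console.log(interpolateTouch([[0, 30, 10], [0, 60, 20], [0, 30, 10]]));
```

Because the reported point is a weighted average, it can fall between sensor lines - and it is also why electrical noise in the readings translates directly into jitter in the reported position.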
At a certain viewing angle you can see the vertical capacitive sensor lines:
It is not a perfect system. Several obstacles stand in the way of greater accuracy, including the complexity of the math, electrical noise, and trade-offs between thickness, weight, cost, and optical clarity. If a display were too sensitive, it would register the lightest brush of a finger or stylus, background electrical noise would interfere badly, and the smartphone would become hard to use.
A few years ago Motorola ran an experiment using a robot that precisely controlled the pressure, angle, and speed of various touches. Try it yourself: in a graphics editor on your smartphone, use a ruler and a stylus to draw parallel diagonal lines, as in the illustration below. Most likely they will not come out perfectly straight.
Demonstration of inaccurate interpretation of touches:
The uneven spacing between the lines is my fault - I'm not a robot. But the illustration shows genuine technology problems. The breaks in the lines are the result of touch-detection errors (probably caused by the thin stylus tip, so this may not happen with a finger). The curls and breaks at the ends of the lines reveal artifacts of the display's construction and the effect of electrical noise. The waviness of the lines comes from imprecise interpolation between the grid lines.
Size, pressure, and contact patches
The contact patch is the area where the finger touches the screen. It can vary greatly in size and shape depending on how the user touches - with the fingertip or the pad of the finger - and on how hard they press.
Contact patch:
Capacitive screens report only the center point of the patch - its geometric center. It does not matter how large the patch is, and there is no need to measure pressure, size, or anything else. Although many devices support multi-touch, and some can sense pressure, support for these features is inconsistent, so they are hard to put to good use. Unless you are building a graphics editor or a game, assume for now that touch screens do not sense pressure. It may seem counterintuitive, but it is important to realize that finger size has nothing to do with touch accuracy or sensitivity.
Because the old touch-screen standards are based on finger size, they are no longer relevant. For example, when the ISO standards were being developed, the market was dominated by infrared technology, which senses the whole finger, so the standards require touch targets to be 22 x 22 mm to accommodate large fingers. The authors did not conduct large studies of pointing accuracy.
When you decide to follow a standard, make sure you understand the basis of its specific recommendations. As technology advances, standards do not always keep up. The research behind a standard may be flawed, outdated, or applicable only to particular situations or technologies.
Outdated standards
ISO is not the only group promoting obsolete standards. Every mobile OS vendor and some OEMs promote their own touch-target sizes. Nokia borrowed a version of the old standards and never updated it. Microsoft is slightly better in this respect - it recommends leaving gaps between targets - but the core problem remains the small size of the targets themselves. Google and Apple use other sizes that appear to be based on what is convenient for their platforms rather than on human factors. Any standard that specifies pixels instead of physical dimensions is useless, because even device-independent pixels vary greatly from screen to screen, and they have nothing to do with the dimensions of the human body.
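The arithmetic behind that claim is easy to sketch. The device densities below are illustrative, but they show how the same pixel count maps to very different physical sizes:

```typescript
// Why pixel-based target sizes drift from physical sizes.
// Device names and densities here are illustrative, not measured.

const MM_PER_INCH = 25.4;

// Physical pixels needed for a target of a given physical size
function mmToPx(mm: number, ppi: number): number {
  return Math.round((mm / MM_PER_INCH) * ppi);
}

// Physical size of a target specified in pixels
function pxToMm(px: number, ppi: number): number {
  return (px / ppi) * MM_PER_INCH;
}

for (const d of [
  { name: "160 ppi phone", ppi: 160 },
  { name: "320 ppi phone", ppi: 320 },
  { name: "480 ppi phone", ppi: 480 },
]) {
  console.log(
    `${d.name}: 7 mm needs ${mmToPx(7, d.ppi)} px; ` +
    `a fixed 48 px target is ${pxToMm(48, d.ppi).toFixed(1)} mm wide`
  );
}
// 160 ppi: 7 mm = 44 px, a 48 px target = 7.6 mm
// 480 ppi: 7 mm = 132 px, a 48 px target = 2.5 mm
```

Density-independent units (dp, CSS px) compensate only approximately, because a device's declared density bucket rarely matches its true pixels per inch.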
It is not only target sizes that are obsolete; so are many other standards that touch on mobile devices. You might turn to the W3C's WCAG standards because they are simple, straightforward, and universal - but they do not apply to mobile. The W3C largely ignores mobile devices, especially in its accessibility standards. It assumes every computer is a desktop with a keyboard and a mouse, sitting at arm's length from the user's eyes. Its standards define pixel sizes based on the old 72/96 ppi (pixels per inch) assumption, without regulating the viewing angle, brightness, distance, and other conditions mobile users actually face.
Fortunately, the inadequacy of mobile standards will soon be history as we continue to research and promote better ones. The "ordinary computer" is no longer a desktop with a mouse and keyboard but a smartphone or tablet. Far more people own mobile devices than desktops, and the technologies involved, the contexts of use, and people's needs are very different.
Defining new standards
It is a problem when our patterns, heuristics, and UX data get mixed up with opinion and intuition. We are not artists but UX researchers and designers - at best, engineers and scientists. We should take that seriously.
Some time ago we observed the behavior of roughly 1,300 users in different countries: how they use their phones on the street, at bus stops, on trains, in airports, and in cafes. We also ran a meta-study of dozens of reports on touch and gesture use from the ACM Digital Library; one of them alone contained data on 120 million touch events, so there is plenty of statistical material. In addition, 651 observation sessions in schools, offices, and homes provided data on tablets and on different types of users. And remote unmoderated tests examined how people's touch behavior varies with input type and task.
Today we receive data from different countries and on different devices. Researchers collect information in different ways, in different situations, in relation to different use cases.
So everything in this article rests on a large body of research. Where something is still unknown, I will say so. But we already know a great deal about designing mobile applications for people, across many different devices and ways of using them. Now we need to document that knowledge and adopt it as new standards.
Science of touch
It seems that most designers who think at all about how people use mobile phones treat every smartphone as a small iPhone held in one hand and operated with the thumb. They still believe the thumb-reach diagrams, like the one below. They believe that all taps land at the bottom of the screen and that no one reaches for the top left corner.
A well-known - and wrong - diagram:
But field research shows that people constantly use the Back button. In fact, it is the most-used button on the screen, even when it sits in the upper right corner. So what do we actually know about the thumb? Let's start with the basics.
Below is the range of motion of the thumb's bones relative to the hand. The thumb's joints, tendons, and muscles also interact with the other fingers, especially the index finger. When the fingers are holding the device, the thumb's range of motion is constrained further. But by shifting their fingers relative to the device, users can change which area of the screen the thumb can reach.
In fact, the thumb sweeps - swinging away from and back toward the hand - pivoting not at the point where it visibly joins the hand but at the carpometacarpal joint. The thumb's other joints let it bend toward the screen but do not extend the range of that sweeping motion. The ability to bend matters because, despite the finger's three-dimensional freedom of movement, screens are still flat: only a limited slice of the thumb's range of motion can be projected onto the two-dimensional screen.
Basic observations on touch
The thumb is the strongest digit on the hand, so tapping with it means holding the device less securely. People understand this, so when jostling or vibration occurs or is expected, they instinctively clutch their mobile devices, holding them in one hand and pressing with the thumb.
Do people hold phones with two hands? No. People hold them in many different ways and often switch grips. Research observing people's behavior in a wide variety of contexts made it clear that we must account for every way of holding a device. Here are the six most common ways people hold and touch their phones:
Over time we were able to measure how often each method is used, and these figures have been repeatedly confirmed in later observations.
People hold phones differently depending on the device, their needs, and the context.
They switch grips unconsciously - they are hardly aware of doing it, and you cannot predict when they will.
- 75% of users touch the screen with only one thumb.
- Fewer than 50% hold the phone in one hand.
- 36% cradle the phone, using the second hand both for stability and to reach more of the screen comfortably.
- 10% hold the phone in one hand and tap with a finger of the other.
But these are just the basics. There are other ways of holding and using phones, such as resting them on a surface. Tablet use differs again, and behavior adapts to what the user is doing, both on the screen and beyond the device.
Probably the biggest surprise, and the most important observation, was that on mobile devices people do not scan the screen from top left to bottom right as they do on desktops. Nor, despite the thumb's limited range of movement, do they touch it in the opposite order, from bottom right to top left. Instead, people prefer to look at and touch the center of the screen. The illustration shows how touch accuracy varies across smartphone and tablet screens:
People read content best in the center of the screen and often scroll so that the part they are reading sits in the middle. Tapping is also most accurate at the center of the screen, so targets there can be smaller - down to 7 mm - while toward the corners they should grow to about 12 mm.
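As a sketch of how you might apply those two figures in practice, here is a hypothetical helper. The linear blend between 7 mm at the center and 12 mm in the corners is my own assumption; the research gives only the two endpoints:

```typescript
// A hypothetical sizing rule built on the figures quoted above:
// 7 mm targets at the screen center, about 12 mm in the corners.

function minTargetSizeMm(
  x: number, y: number,          // target position in mm
  width: number, height: number  // screen size in mm
): number {
  // Normalized distance from the screen center (0 = center, 1 = corner)
  const dx = (x - width / 2) / (width / 2);
  const dy = (y - height / 2) / (height / 2);
  const d = Math.min(1, Math.hypot(dx, dy) / Math.SQRT2);
  return 7 + (12 - 7) * d; // 7 mm at the center, 12 mm in the corners
}

// On an assumed 64 x 120 mm display:
console.log(minTargetSizeMm(32, 60, 64, 120)); // center -> 7
console.log(minTargetSizeMm(0, 0, 64, 120));   // corner -> 12
```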
The observations and subsequent analysis led to an important conclusion: people never tap precisely where they are aiming. There is always some error. Here is an example from a real study, in which participants had to tap a menu button overlaid with a crosshair:
Dozens of users took part in this study, and many of their taps landed far from the center of the target; some missed it entirely. Misses are the key point. Target sizes will never be perfect, so register every tap. Misses will always happen, so decide how much error is acceptable and live with it. The target sizes suggested above capture only 95% of taps.
You also need to head off the problems that stray taps cause, so accept faults, errors, and inaccuracy as facts of life. Keep the probability of error in mind and place dangerous or unrelated elements away from one another, eliminating or softening the consequences of accidental taps.
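One hedged sketch of what "register every tap" can mean in code: instead of discarding taps that land just outside a control, snap them to the nearest target within a tolerance. The shapes, names, and the 3 mm slop value below are all illustrative:

```typescript
// Resolve a tap to the nearest target within a tolerance, so near-misses
// still work. Coordinates are in mm; the 3 mm slop is an assumed value.

interface Target { id: string; x: number; y: number; w: number; h: number }

const SLOP_MM = 3; // assumed tolerance around each target

// Distance from a point to a rectangle (0 if the point is inside it)
function distanceTo(t: Target, x: number, y: number): number {
  const dx = Math.max(t.x - x, 0, x - (t.x + t.w));
  const dy = Math.max(t.y - y, 0, y - (t.y + t.h));
  return Math.hypot(dx, dy);
}

function resolveTap(targets: Target[], x: number, y: number): Target | null {
  let best: Target | null = null;
  let bestDist = Infinity;
  for (const t of targets) {
    const d = distanceTo(t, x, y);
    if (d <= SLOP_MM && d < bestDist) {
      best = t;
      bestDist = d;
    }
  }
  return best; // null = the tap was genuinely nowhere near a control
}

// A tap landing 1 mm outside the "send" button still resolves to it:
const ui = [{ id: "send", x: 10, y: 100, w: 10, h: 7 }];
console.log(resolveTap(ui, 9, 99)?.id); // "send"
```

For destructive actions the opposite design choice applies: give them no slop at all, and extra surrounding space, so that a miss aimed at a neighboring control cannot trigger them.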
Touch-friendly information design
Over time we were able to detect and confirm second- and third-order effects tied to these basics of how people interact with mobile touch screens.
List and grid views are used so often precisely because we focus on the center of the screen. These views work well, and people readily use them by simply scrolling and tapping. So always place the primary content in the center of the screen. Think of the applications you use most: when you launch one, you get a list of content - text messages, emails, stories, videos, photos, articles - and choose what to view or interact with.
Controls for secondary actions belong above or below the content. Tabs along the top or bottom edge of the content area help users switch views or sections. Action buttons let them create or find content. Hide less important - tertiary - functionality in menus, which users can usually open from any screen. The resulting hierarchy of information design is shown below:
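As a sketch of that hierarchy in markup - React is assumed here, and every name is illustrative - a typical screen might be structured like this:

```tsx
// A minimal sketch of the hierarchy above: primary content fills the
// scrollable center, secondary tabs sit at an edge, tertiary functions
// live in an overflow menu. Component and label names are hypothetical.

import React from "react";

function MessagesScreen() {
  return (
    <div style={{ display: "flex", flexDirection: "column", height: "100vh" }}>
      {/* Tertiary: overflow menu in the header, reachable but out of the way */}
      <header>
        Inbox <button aria-label="More options">menu</button>
      </header>

      {/* Primary: the content list dominates the center of the screen */}
      <main style={{ flex: 1, overflowY: "auto" }}>
        <ul>
          <li>Message one…</li>
          <li>Message two…</li>
        </ul>
      </main>

      {/* Secondary: section tabs along the bottom edge */}
      <nav style={{ display: "flex" }}>
        <button>Chats</button>
        <button>Calls</button>
        <button>Contacts</button>
      </nav>
    </div>
  );
}

export default MessagesScreen;
```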
You may have heard that the "hamburger" menu is a mistake and must be abandoned. That advice is too sweeping. Many menu design patterns are indeed not recommended, but some designers hold this opinion only because the icon is so often used badly.
If users navigate your site primarily through its sections, hiding that navigation in a menu is a bad decision. Tabs labeling the sections are more effective, and tabs sit at the secondary level where navigation belongs. Better still, put the key content in the center of the screen - or design the application so users don't have to wade through categories at all.
These rules have guided the design of mobile applications and sites for different types of users for a couple of years now, and they have fully proven themselves. Options and functions placed in menus work fine, and every participant in UX testing found what they needed within a few seconds - even users with no prior experience of mobile devices.