
3D interfaces are usually worse than 2D interfaces

The author of this article is John Carmack, an engineer working in computer science, aerospace engineering, and virtual reality, and co-founder and co-owner of id Software. Since August 2013 he has been Chief Technology Officer (CTO) of Oculus VR.


I sent this note to staff in 2017, and my position has only strengthened in the years since.

Last year I said that cylindrical panels were surprisingly effective and should be built into the VR Shell interface [the VR browser shell for viewing the web - translator's note]. Many people objected to giving up free placement of 3D interfaces and to further reducing the use of 3D interfaces in the future, but the objectively higher quality of the native TimeWarp projection is not just an abstract design question.
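To make the cylindrical-panel idea concrete, here is a minimal sketch (my illustration, not actual VR Shell code) of mapping a flat panel's 2D coordinates onto a cylinder centered on the viewer:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Map 2D panel coordinates onto a cylinder of the given radius centered
// on the viewer. u is arc length along the panel's width (meters),
// v is height (meters); u = 0 is straight ahead, positive u curves the
// panel around to the viewer's right.
Vec3 cylinderPoint(float u, float v, float radiusMeters) {
    float angle = u / radiusMeters;             // arc length -> angle
    return { radiusMeters * std::sin(angle),    // x: to the right
             v,                                 // y: up, unchanged
             -radiusMeters * std::cos(angle) }; // z: forward is -Z (GL convention)
}
```

Because every point of the curved panel sits at the same distance from the eyes, all of it is equally sharp and equally comfortable to read.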

Last week, while reviewing a job description before an interview, I noticed that one of the responsibilities of the project manager was: “Create a new 3D interaction paradigm to replace 2D.”
How so... Well, I will try to articulate more clearly the abstract arguments against the 3D interaction paradigm.

Obviously, a three-dimensional interface is needed for interacting with 3D objects, as in Medium, Quill, or 3D data visualization. But I argue that the bulk of web browsing, settings, and selection interactions benefits from being designed in 2D.

Splitting information across several depth layers is harmful because the eye has to refocus. This is easy to demonstrate in practice: if there is a poster hanging on the wall within view, try shifting your gaze between it and your monitor. Do this several times, then compare it with simply moving your eyes between the icons on the taskbar at the bottom of the monitor.
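To put rough numbers on the refocusing cost (a back-of-the-envelope illustration with assumed distances, not figures from the original note): accommodation demand in diopters is the reciprocal of the viewing distance in meters.

```latex
% Accommodation demand: D = 1/d, with d in meters.
% Assumed distances: monitor at 0.5 m, wall poster at 3 m.
D_{\text{monitor}} = \frac{1}{0.5} = 2.0\,\text{D}, \qquad
D_{\text{poster}} = \frac{1}{3} \approx 0.33\,\text{D}
% Each glance between the two layers demands a refocus of:
\Delta D = 2.0 - 0.33 \approx 1.67\,\text{D}
% Two icons on the same taskbar, both at 0.5 m: \Delta D = 0.
```

Shifting between the two depth layers costs roughly 1.7 diopters of accommodation per glance; shifting between two icons at the same depth costs none.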

In VR the situation is even worse, because you are fighting against the absence of an actual focus change. With a varifocal system we would only be emulating the bad parts of the real world, not improving on them. This matters fundamentally for the comfort of daily work.

Some hold the opinion that varifocal optics are a hardware feature necessary for good text readability. That is not right. It only matters for the readability of text at widely different distances, like a sheet of paper right in front of your eyes versus a billboard far away. An HMD's static optics can be focused anywhere, and we should set them to the UI distance.

When you can place the interface at any distance, as you can in VR, you would not place it at normal reading or monitor distance. Reading glasses exist precisely because older people can no longer focus at such close range. The exact focus distance of a relaxed eye varies from person to person, but it is usually several meters.

This is an advantage of VR! Focusing on a close monitor all day strains the eyes, and that strain can be removed. If you want information to be scanned as quickly and comfortably as possible, it should all be at the same distance from the reader, and not too close.

The depth cue (distance to an object) carries important information when you are making sense of an environment and moving relative to its elements. If you want to hit something with a spear or dodge a projectile, that information is valuable. UI actions almost never benefit from it.

Your perception of a 3D environment is built from a pair of 2D projections. If you don't move much relative to the environment, they remain essentially the same 2D projections. Even if you have built a truly three-dimensional interface, you still have to make sure the 3D elements do not occlude one another in projection.
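A rough way to see how little the two eye projections differ for UI at a comfortable distance (a sketch assuming a 64 mm interpupillary distance; the numbers are my illustration, not from the original note):

```cpp
#include <cmath>
#include <cstdio>

// Angular vergence (radians) for a point straight ahead: each eye is
// offset by ipd/2 from the midline, so the angle between the two eye
// directions is 2 * atan((ipd / 2) / distance).
double vergenceAngleRad(double ipdMeters, double distanceMeters) {
    return 2.0 * std::atan((ipdMeters * 0.5) / distanceMeters);
}

int main() {
    const double kPi = 3.14159265358979;
    const double ipd = 0.064;  // assumed average interpupillary distance, meters
    for (double d : {0.5, 1.0, 2.0, 3.0}) {
        double deg = vergenceAngleRad(ipd, d) * 180.0 / kPi;
        std::printf("UI at %.1f m -> vergence angle %.2f deg\n", d, deg);
    }
    // At a few meters the two eye images differ by only about a degree,
    // so a flat panel there is seen almost as a single 2D projection.
}
```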

I do think 3D offers some benefit in the design of individual UI elements: slightly convex 3D buttons that protrude from the surface, where otherwise you would have to rely on color changes or faux-3D effects such as bevels and shadows. 3D modeling of icons at the UI level is possible, but all of this usually stays within a few centimeters of the UI plane. Visual scanning and interaction are still fundamentally 2D, but there is an extra channel of information that the eye picks up naturally. It is, however, inconvenient to design for in an environment like VrShell.
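As a toy illustration of keeping relief shallow (hypothetical names and cap value, not VrShell code): a button is laid out in the panel's 2D coordinates and raised only a few centimeters along the panel normal.

```cpp
// Minimal 3D vector and helpers for panel-space layout.
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Place a button at 2D panel coordinates (u, v), raised slightly off the
// panel along its normal. The relief is capped so that scanning and
// interaction stay essentially 2D.
Vec3 buttonCenter(Vec3 panelOrigin, Vec3 panelRight, Vec3 panelUp,
                  Vec3 panelNormal, float u, float v, float reliefMeters) {
    const float kMaxRelief = 0.03f;  // assumed cap: ~3 cm off the UI plane
    if (reliefMeters > kMaxRelief) reliefMeters = kMaxRelief;
    return panelOrigin + panelRight * u + panelUp * v
         + panelNormal * reliefMeters;
}
```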

This does not mean VR interfaces should simply be “floating screens.” I disliked that our first Home design was basically a UI console floating in the middle of the view, with “safe zones” around it.

From a user interface point of view, the main advantage of VR is the ability to use the entire field of view and to let the interface extend beyond it, so the user can “peer” off to the sides.

I always urge designers to pull content selection away from the screen edges and to leave room at each edge so that, when looking straight ahead, you can see half of the next tile. If that half-tile interests the user, they will turn to look at the rest. In practice it is not very comfortable for users to interact with UI elements located in the corners, far from the center: unless you rotate your whole body, working for a long time at the edge of the view keeps your neck in a constantly turned position. Hence the idea: glance at a tile, and it scrolls to the center.
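A minimal sketch of that glance-then-scroll behavior (threshold and dwell values are assumptions for illustration, not from the original note):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Angle in degrees between the gaze direction and the direction from the
// head to a tile's center. Both inputs are assumed normalized.
float angleToTileDeg(Vec3 gazeDir, Vec3 toTileDir) {
    float dot = gazeDir.x * toTileDir.x + gazeDir.y * toTileDir.y
              + gazeDir.z * toTileDir.z;
    if (dot > 1.0f) dot = 1.0f;    // clamp against floating-point drift
    if (dot < -1.0f) dot = -1.0f;
    return std::acos(dot) * 57.29578f;  // radians -> degrees
}

// Scroll a half-visible edge tile toward the center once the user's gaze
// has dwelled on it, instead of making them keep their neck turned.
bool shouldScrollTileToCenter(float angleDeg, float dwellSeconds) {
    const float kGazeThresholdDeg = 10.0f;  // assumed "looking at it" cone
    const float kDwellSeconds = 0.3f;       // assumed dwell before scrolling
    return angleDeg < kGazeThresholdDeg && dwellSeconds > kDwellSeconds;
}
```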

Another key element is placing rarely used UI elements off to the sides of, and behind, the main screen. The void theater button in Netflix and the old “Skip New User Intro” button in Home are good examples of how easily an options menu can be moved out of the way.
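One simple way to express that layout (my sketch, with assumed angles and radius) is to position each panel by yaw angle on the same cylinder around the user:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Position a panel on a cylinder of the given radius around the user, at a
// yaw angle measured from straight ahead (0 deg = front, 180 deg = behind).
Vec3 panelOnCylinder(float yawDeg, float radiusMeters, float eyeHeightMeters) {
    float yawRad = yawDeg * 3.14159265f / 180.0f;
    return { radiusMeters * std::sin(yawRad),    // right of the user
             eyeHeightMeters,                    // at eye level
             -radiusMeters * std::cos(yawRad) }; // forward is -Z
}

// Example: main content at 0 deg, secondary options at +/-120 deg, and
// rarely used settings at 180 deg, all at the same comfortable radius.
```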

However, this requires slightly “weaning” users off their habitual behavior. Today's ways of hiding options in computer UIs are hardly intuitive either: how would I know that clicking some obscure icon opens a whole window of further options? Yet this is what trains people to hunt for hidden meaning in UI elements instead of simply looking around.

Source: https://habr.com/ru/post/453114/

