Have you noticed? A genuine revolution is unfolding before our eyes once again. No, no, not what you thought: no politics for the New Year! I'm talking about the interface revolution. Although some of its signs could be observed throughout 2010, the overall picture, I think, is far from obvious to everyone.

This article is a short overview of trends in the field of interfaces, in which I will try to convince you that very soon we will find ourselves in a fantastic future.

Let's start with the banalities and save the tastiest for last.
Multitouch
We have already grown used to this word. Multitouch is already on iPhones, on the touchpads and touchscreens of some laptops, on Microsoft Surface and on TouchTable touchscreens; in short, it is a fully working, commercialized technology. Not everyone owns a multitouch device yet, but for geeks the technology has become as commonplace as the cell phone.
Let me just remind you that this interface's triumphant march around the world began only five years ago (although it was invented back in the eighties). Take a look at the search statistics for the phrase "multi touch" on Google Trends:

In fact, this interface can (conditionally) be taken as the starting point of the revolution discussed below.
Stereoscopic and 3D imaging
Attempts to add a third dimension to the flat image are pushing a hundred years old. But over the last two years there has been a noticeable revival in this area: electronics manufacturers have produced so many models of autostereoscopic displays (that is, ones not requiring glasses) that a separate large review could be devoted to them. And yes, you can already
equip a 3D cinema at home for reasonable money.
In the context of this review, 3D visualization technologies are interesting not so much in themselves as in combination with the other human-machine interfaces discussed below. Remember the
Apple patent?
Augmented Reality
The technology of
augmented reality is also nothing new to Habr users: topics about it
appear on Habré regularly.
There is little doubt that augmented reality will become a firm part of our lives, although people only began to realize the potential of this technology two years ago. It is quite possible that very soon, thanks to it, virtual reality and the rough material world will merge into a single whole, and the concepts of "offline" and "online" will become meaningless and indistinguishable. After all, a prototype of the glasses familiar to anime fans from the TV series
Dennou Coil has already been
created.
Now let's turn to Google Trends again and see that the first stirrings in this area began about five years ago, just as with multitouch. Relatively wide popularity came to this technology even more recently, just two years ago.

Gesture Recognition
In June 2010, Microsoft introduced
Kinect to the world. In fact, interactive systems based on motion recognition had appeared a
little earlier, and some of them were even
integrated with 3D visualization tools. But Kinect opened a new page in the era of interfaces, for two reasons. First, thanks to a large number of sensors and competent algorithms, it achieved amazing accuracy and versatility of recognition. Second, thanks to PC compatibility and the availability of an SDK, a developer community immediately formed around the technology, and the
libfreenect project
appeared, whose goal is to develop Kinect drivers for the most common platforms.
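To give a feel for what the libfreenect community works with: the Kinect's depth camera reports raw 11-bit values, not meters, and hobbyists convert them with an empirical calibration published by the OpenKinect community. The sketch below is not official SDK code, just an illustration of that community-derived formula; with the libfreenect Python bindings you would obtain a raw frame from the device itself, while here we only convert a few sample values.

```python
def raw_depth_to_meters(raw: int) -> float:
    """Approximate distance in meters for a raw 11-bit Kinect depth value.

    Uses an empirical calibration from the OpenKinect community wiki;
    accuracy degrades at longer ranges.
    """
    if raw >= 2047:  # 2047 means "no reading" (shadow or out of range)
        return float("nan")
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

# A few sample raw values and their approximate distances
for raw in (600, 800, 1000):
    print(f"raw={raw} -> {raw_depth_to_meters(raw):.2f} m")
```

Larger raw values map to larger distances, and the relationship is strongly non-linear, which is why depth resolution drops off quickly far from the sensor.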

In addition to Kinect, there is another gesture-recognition project that geeks around the world are watching with bated breath. And this project is called...
...SixthSense

SixthSense was first presented at the
Computer-Human Interaction (CHI) 2009 conference in Boston, after which BBC News ran a
short article about the technology that went largely unnoticed by the general public. But the
exciting talk Pranav Mistry gave at
TED was noticed by many; a link to it
appeared on Habré.
The SixthSense technology combines a great many devices at once: a mobile computer, a camera, a pocket projector, a mirror, headphones, some clothespin-like clips, and funny colored caps on the fingers. And of course it would not work without clever software, which is the most interesting part. As a result, the real world and cyberspace interact in absolutely incredible ways. You can write a note by hand and paste it onto your desktop, drag a real paper document into your computer with a finger, or even type on a plain sheet of paper as if it were a typewriter.
However, that was just a demonstration; a real implementation of the technology would probably require thousands, if not millions, of lines of code. Representatives of the MIT Media Lab promised to open the SixthSense code to the developer community, but so far this has not happened. Nevertheless, there are no technical obstacles to implementing this technology, and if you combine it with Kinect, miracles will happen. At a minimum, you would no longer have to wear colored tape on your fingers. And what if you add 3D visualization technologies on top? Mm... but now I'm daydreaming, because pocket 3D projectors do not exist yet.

It's hard to believe, but the "Sixth Sense" is not the most interesting thing interface developers created for us in the first decade of the 21st century (at least in my subjective opinion). So let's move on.
Neurocomputer interface
A neurocomputer interface, or BCI (brain-computer interface), is a system that provides input to a computer directly from the brain, so to speak, bypassing the user's hands and other unnecessary peripherals. Simply put, control by the power of thought. The most developed and affordable device of this kind, as far as I know, is the
Emotiv EPOC (correct me if I'm wrong).
The most recent mention of the Emotiv EPOC on Habré dates
back to 2008. It was noted with regret then that sales of the device had been postponed to 2009. Nothing has been heard about the Emotiv EPOC on Habré since. Meanwhile, the device is already on sale. True, only in America so far, but when has that ever stopped Russian geeks? ;) Moreover, this gadget has a
developer community (admittedly a rather nominal one) and
6 SDK options, from the light free version to the fully loaded Enterprise Plus for 7.5 kilobucks. Unfortunately, if you want to debug on a real device and get documentation and API access, you will have to pay not the $300 an ordinary user pays, but $500 for the
Developer Edition. But that is not cosmic money either, and I hope a brave experimenter will turn up on Habré who will seize the chance to tinker with the interface of the future and describe the feat for future generations.
Admittedly, the Emotiv EPOC is not the only commercially available BCI device. There are at least the
Gamma Sys from g-tec and the
Neural Impulse Actuator from OCZ Technology.
Gamma Sys seems to be aimed more at research organizations than at ordinary users: on the manufacturer's website I found neither prices nor references to distributors; apparently you are expected to send a request for a quotation or something of the sort.
The Neural Impulse Actuator (NIA) looks more attractive to the end user: you can buy it for only
$100, and it even comes with manuals in Russian. It has no SDK, and official support for each game has to be awaited from the manufacturer (and only under Windows). True, knowledgeable people have
figured out that the NIA is an ordinary
HID device, and
you can write drivers for
it yourself. On the one hand this seems a plus, but on the other, as far as I understand, such an architecture seriously limits the device's capabilities compared to the EPOC, which is not just a fancy mouse but a full-fledged device for measuring the activity of various brain regions.
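Since the NIA presents itself as a standard HID device, its reports are just fixed-size byte packets, and "writing drivers yourself" largely means decoding those packets. The sketch below illustrates the general idea only: the field layout (three 16-bit channels plus counter and flag bytes) is invented for illustration, not the NIA's real report format, which would have to be reverse-engineered, for example by watching raw reports with a HID sniffer.

```python
import struct

def parse_report(report: bytes) -> dict:
    """Unpack a hypothetical 8-byte HID input report into named channels.

    The layout is an assumption for illustration: '<' = little-endian,
    three unsigned 16-bit samples, then a report counter and a flags byte.
    """
    ch1, ch2, ch3, counter, flags = struct.unpack("<HHHBB", report)
    return {"ch1": ch1, "ch2": ch2, "ch3": ch3,
            "counter": counter, "flags": flags}

# On a real system you would read such packets with a HID library
# (e.g. hidapi) after matching the device's vendor/product IDs.
# Here we fabricate one packet to show the round trip:
sample = struct.pack("<HHHBB", 512, 300, 1023, 7, 0)
print(parse_report(sample))
```

The appeal of the HID approach is exactly this simplicity: any platform with generic HID support can read the packets without a proprietary driver, which is also why the architecture offers less room for rich signal processing than a dedicated SDK.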

Usability
CCC-COMBO BREAKER! Now I would like to say a few words that may at first glance seem out of place. Let me remind you that this article began with talk of a revolution in interfaces, signs of which we have been seeing for the last five years or so. As you can see from the above, scientists and engineers have done a great deal to improve how users interact with digital devices. But everything I have written about so far concerned mainly new peripheral devices or ways of working with them. Now let's look at the concept of "software usability":

This topic has also become popular recently, and I have a feeling attention to it will keep growing. By now a huge amount of software has been created, covering almost any user need, and to compete successfully among their own kind, programs are now forced to have a simple, convenient, and aesthetically pleasing user interface.
Now let me return to the technologies and try to draw some general conclusions.
Conclusion
It seems we already have enough physical devices for the future to arrive; all we lack is software that makes using them simple and convenient. Of course, that too is a laborious undertaking, probably comparable to developing an operating system (how many years has
ReactOS been in the works now?). But where there's a will... Who knows, maybe among the people reading this article there is someone who will organize an open-source project to integrate all the listed devices with each other and create the Computer Interface of the 21st Century?
Finally, let's fantasize a bit. Imagine: you are sitting in a chair, but there is no computer in front of you (or rather, you don't see one). Instead, on your desk (an ordinary Ikea desk) you see virtual objects: text documents, windows with games, three-dimensional models, and you move them around the table with your hands. Or hang them in the air above the table. A pop-up window appears and then closes: you dismissed it by the power of thought. You open a drawer with your hand, and out pops a browser with Habr. The pages scroll by themselves as you read.
Remember the guys making the
cartoon about the Gypsy? They turn out their second (well, third) cartoon in six months. Compare: instead of a 3D editor, an audio editor, and editing software, they now have a single thought grabber. They don't even need to render 3D models. They just sit and imagine the cartoon in detail. Of course, there are subtleties and a technical process of their own here too... but productivity still turns out to be much higher.
Today such fantasies seem distant and irrelevant. But remember how you once discovered that everyone around you, yourself included, was using mobile phones and the Internet without seeing anything surprising in it? That is what the future is like. It arrives unnoticed.