This article is a translation of a post from Canonical's design blog.
With the arrival of products such as the Nintendo Wii, the Apple iPhone and Microsoft Kinect, developers have finally begun to realize that there are several ways a person can control a computer besides keyboards, mice and touch screens. Many alternatives exist these days, most of them based on hardware sensors, and the main difference between them lies in the software. Computer-vision-based solutions (such as Microsoft Kinect) rely on software that analyzes images captured by one or more cameras.
If you are interested in the technical side of this, we recommend looking at projects such as Arduino.
Use with Ubuntu
During some research a few months ago, we thought about how Ubuntu could behave if it knew more about its physical context: not only detecting the device's tilt (as an iPhone application can), but also sensing the user's presence.
This was not really a new concept for us; back in 2006 we experimented with sensing user proximity. We believe it is important to adapt screen content to the presence of the person watching it.
We have come up with a few scenarios that are far from fully developed or defined, but we hope they will at least open up some discussion or, even better, help launch some initiatives.
Full screen mode
If the user moves away from the screen while a video is playing, the video could automatically switch to full-screen mode.
If the user is not right in front of the screen, notifications could be shown full screen, so the user can read them from a different location (even far from the monitor).
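To make the scenarios above concrete, here is a minimal sketch of the decision logic: a hypothetical `choose_mode` function that picks a display mode from an estimated user distance. All names and the distance threshold are assumptions for illustration; in a real system the distance would come from a camera or depth sensor, not a plain number.

```python
from enum import Enum, auto

class DisplayMode(Enum):
    NORMAL = auto()
    FULLSCREEN_VIDEO = auto()
    FULLSCREEN_NOTIFICATION = auto()

def choose_mode(user_distance_m, video_playing, pending_notification,
                far_threshold_m=1.5):
    """Pick a display mode from the user's estimated distance.

    `user_distance_m` is a stand-in for a real presence sensor reading;
    the 1.5 m threshold is an arbitrary illustrative value.
    """
    user_is_far = user_distance_m > far_threshold_m
    if user_is_far and video_playing:
        # User stepped back while a video plays: go full screen.
        return DisplayMode.FULLSCREEN_VIDEO
    if user_is_far and pending_notification:
        # User is away: show notifications large enough to read from afar.
        return DisplayMode.FULLSCREEN_NOTIFICATION
    return DisplayMode.NORMAL
```

For example, `choose_mode(3.0, video_playing=True, pending_notification=False)` returns `DisplayMode.FULLSCREEN_VIDEO`, while the same call at a distance of 0.5 m returns `DisplayMode.NORMAL`.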
Since this is supposedly the year of 3D screens, we could not leave out a parallax effect for windows. A user gesture could also trigger the effect.
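The parallax idea can be sketched with one line of arithmetic: layers nearer to the viewer shift more with head movement than layers farther away. The function below is a hypothetical illustration (the names and the linear depth model are assumptions, not anything from the original post).

```python
def parallax_offset(head_offset_px, layer_depth, max_depth=10.0):
    """Horizontal shift for a window layer as the viewer's head moves.

    layer_depth = 0 means the layer sits at the viewer-facing plane and
    moves one-for-one with the head; layer_depth = max_depth means the
    layer is at the "back" and does not move at all. A simple linear
    falloff stands in for real projective geometry.
    """
    return head_offset_px * (1.0 - layer_depth / max_depth)
```

So a head movement of 100 px shifts a front layer (`layer_depth=0`) by 100 px, a middle layer (`layer_depth=5`) by 50 px, and the back layer not at all, which is the depth cue the effect relies on.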