One of the creators of Kinect will give a lecture at the Polytechnic Museum on June 3
Dear users! You have probably heard of the Kinect controller, which recently entered the Guinness Book of Records as the fastest-selling consumer electronics device. It turns out that Kinect can not only see and recognize a person's movements, but also hear and even understand them! In practice, the task is far from simple: when a person talks to the console, there is plenty of surrounding noise that must be accounted for and filtered out.
The sound component of Kinect is the work of Dr. Ivan Tashev, the lead architect in the Microsoft Research Speech Technology Group. On June 3 he will give an open lecture at the Polytechnic Museum entitled “Audio for Kinect: what is almost impossible.” To attend, you need to register; the registration page also has more detailed information about the event.

Beyond the technical details, the lecture will cover new scenarios for using Kinect and the possibilities it opens outside computer games for building a more advanced human-computer interface. Thanks to Kinect, user interface developers gain two additional interaction tools: gestures and speech. Speech is useful for selecting an item from a large list (“Play the Beatles song about a submarine”), while gestures are well suited to choosing from shorter lists, say, picking the right song among the four or five returned by that “fuzzy” voice query. Combining the two into a multi-modal user interface makes it possible to build more organic and intuitive ways of interacting with the computer.
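The speech-for-long-lists, gestures-for-short-lists heuristic described above can be sketched in a few lines. This is purely an illustrative toy, not Kinect's actual API: the function names, the word-overlap "fuzzy" matching, and the five-item cutoff are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a multi-modal selector: speech narrows a large
# catalog, gestures pick from the short list a fuzzy voice query returns.
# All names and the threshold below are assumptions, not part of Kinect.

GESTURE_MAX_ITEMS = 5  # assumed cutoff: a handful of items fits on screen


def suggest_modality(num_items: int) -> str:
    """Return the interaction mode better suited to a list of this size."""
    return "gesture" if num_items <= GESTURE_MAX_ITEMS else "speech"


def handle_query(catalog: list[str], query_words: list[str]):
    """Toy 'fuzzy' match: keep titles sharing any word with the query,
    then suggest how the user should pick from the result."""
    words = {w.lower() for w in query_words}
    matches = [t for t in catalog if words & {w.lower() for w in t.split()}]
    return matches, suggest_modality(len(matches))


catalog = ["Yellow Submarine", "Yesterday", "Submarine Dream", "Let It Be"]
matches, mode = handle_query(catalog, ["submarine"])
# Two titles match, so a gesture pick from the short list is suggested.
```

With a catalog of thousands of songs the initial selection would come back as "speech"; once the voice query trims it to a handful, the selector switches to gestures, mirroring the combination the lecture describes.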