In mid-May the Google I/O 2013 conference was held in San Francisco, where Google traditionally shows off its most promising products and tells developers about new tools and technologies that will become available in the near future. This time the conference was opened by Larry Page. Unfortunately, a new version of Android was not announced, but there were plenty of other interesting things. Here we will take a closer look at the session devoted to WebRTC, in which we even managed to take part. All the details, videos, and links are below.

The WebRTC session was led by Justin Uberti (WebRTC tech lead) and Sam Dutton. We have known Justin for quite a while: everyone in the WebRTC working group communicates a lot on a range of issues, and if you want to propose an improvement to the standard itself or to its implementation in Chrome, you can always reach Justin directly or via the mailing list. By the way, Google Chrome is currently the only browser where WebRTC functionality is already available for use (there is also Maxthon, but its implementation is still lame). In Firefox 21 it is hidden behind a flag in the browser settings, but Mozilla promises that in version 22 everything will be enabled by default; the approximate release date of FF 22 is mid-June. From mid-June, then, more than half of all desktop browsers in the world can be expected to support WebRTC. The standard itself is still in development, but much of it is ready and can already be used, which is exactly what we do.

Returning to the WebRTC session: as usual, there was an introductory part explaining what WebRTC is and what it is good for, followed by the most interesting part: new features, recent changes in the API, and examples of services that use the standard.
There is a separate session video in which, at the 25-minute mark, a call from WebRTC to the regular telephone network is shown using Zingaya as an example. We were very pleased that the folks at Google highlighted us and our work. The WebRTC working group is not that big, but this small number of people is now shaping the future of realtime communications for the next 10 years :)
For those too lazy to watch, we have picked out the key points:
From the JavaScript point of view, WebRTC consists of three APIs: MediaStream (+ getUserMedia), RTCPeerConnection (formerly just PeerConnection), and RTCDataChannel
You can now pass constraints to getUserMedia, which let you set the resolution, fps, the type of audio/video data, etc.
The Web Audio API can be connected to a MediaStream to process audio, but Web Audio is still in development and hidden behind a flag in the Chrome config
Experiments with screen-capture mode are going quite well, and it will most likely become part of WebRTC
RTCDataChannel is almost ready; it allows p2p data transfer over a secure channel and has several modes of operation, both reliable and unreliable
TURN is supported, and the first providers of WebRTC-compatible STUN/TURN servers have already appeared
Work on the Error Handling API is in full swing; in the meantime, a useful tool is already available in Chrome: chrome://webrtc-internals
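To make the three APIs above concrete, here is a minimal sketch of how they fit together. This is not code from the session, and the exact shapes were still in flux in 2013: Chrome at the time shipped the prefixed `webkitGetUserMedia`/`webkitRTCPeerConnection` names and the `mandatory`/`optional` constraints format, and the STUN server URL below is just the commonly used Google one, not something mandated by the standard.

```javascript
// Constraints let you request a specific resolution and frame rate
// (2013-era Chrome format with "mandatory"/"optional" sections).
var constraints = {
  audio: true,
  video: {
    mandatory: { minWidth: 1280, minHeight: 720 },
    optional: [{ minFrameRate: 30 }]
  }
};

function startCall() {
  var getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
  getUserMedia.call(navigator, constraints, function (stream) {
    // "stream" is a MediaStream from the local camera and microphone.
    var pc = new webkitRTCPeerConnection({
      iceServers: [{ url: 'stun:stun.l.google.com:19302' }]
    });
    pc.addStream(stream);

    // RTCDataChannel: p2p data over the same secure connection.
    // Reliable by default; { reliable: false } selects the
    // unreliable (UDP-like) mode mentioned above.
    var channel = pc.createDataChannel('chat', { reliable: false });
    channel.onopen = function () { channel.send('hello'); };

    // The offer/answer SDP must be exchanged via your own
    // signaling channel (WebSocket, XHR, etc.) -- WebRTC does
    // not specify signaling.
    pc.createOffer(function (offer) {
      pc.setLocalDescription(offer);
    });
  }, function (err) { console.error(err); });
}
```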
There are also problems with no obvious solution, for example, which video codec will be declared mandatory. Google and Mozilla chose VP8, but they are well aware that it is not the best choice on mobile devices; besides, Apple and Microsoft are clearly not happy with VP8 and would prefer to support H.264. So far the only way out seems to be making both codecs mandatory, but H.264 is not royalty-free (MPEG LA handles its licensing), which could complicate the process.
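The codec question surfaces directly in the SDP that RTCPeerConnection generates: each offer maps RTP payload types to codec names in `a=rtpmap:` lines (per the SDP format, RFC 4566). A rough, hypothetical helper (not part of any WebRTC API) for seeing which video codecs an offer actually contains:

```javascript
// Hypothetical helper: list the video codec names found in an SDP blob.
// SDP maps payload types to codecs via "a=rtpmap:<pt> <codec>/<clock>"
// lines; we only collect the ones inside the m=video section.
function listVideoCodecs(sdp) {
  var inVideo = false;
  var codecs = [];
  sdp.split('\n').forEach(function (line) {
    if (line.indexOf('m=') === 0) inVideo = line.indexOf('m=video') === 0;
    var m = inVideo && line.match(/^a=rtpmap:\d+ ([^/]+)\//);
    if (m) codecs.push(m[1]);
  });
  return codecs;
}

// Abbreviated sample SDP for illustration.
var sampleSdp = [
  'v=0',
  'm=audio 9 RTP/SAVPF 111',
  'a=rtpmap:111 opus/48000',
  'm=video 9 RTP/SAVPF 100',
  'a=rtpmap:100 VP8/90000'
].join('\n');

console.log(listVideoCodecs(sampleSdp)); // → [ 'VP8' ]
```

In a Chrome-generated offer of that era you would find VP8 in the video section; whether H.264 must appear alongside it is exactly the open question above.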