
The dev channel of Chrome has received WebRTC support.

Web Real-Time Communications (WebRTC) technology, announced in early May last year, has been integrated into the dev channel of Chrome and will most likely appear officially in one of the next browser releases.

WebRTC is an open framework, maintained by Google since its acquisition of Global IP Solutions, that implements real-time video and audio transfer between browsers. In practice this means that Chrome will be able to perform the same functions that Skype or similar plug-ins (such as Google Talk) currently perform, and from the developers' point of view, building such applications comes down to straightforward use of a JavaScript API. Interestingly, the Mozilla Foundation is contributing its own MediaStream Processing specification to WebRTC, which allows audio streams to be mixed programmatically and transmitted live video to be processed.

The key concept in the WebRTC API is the MediaStream object, a generic JavaScript interface for interacting with audio and video streams. To work with them, the developer needs a way to access the user's microphone and webcam, and the getUserMedia function serves this purpose. If the call completes successfully and access to the camera and microphone has been granted, the developer receives an instance of the MediaStream class, which is, in effect, an interface for working with the multimedia data.

HTML code of a page illustrating work with the WebRTC API:

<html>
<head>
<title>WebRTC</title>
</head>
<body>
<h2>Hello, world!</h2>
<video id="live" autoplay></video>
<script type="text/javascript">
video = document.getElementById("live")
navigator.webkitGetUserMedia("video",
    function(stream) {
        video.src = window.webkitURL.createObjectURL(stream)
    },
    function(err) {
        console.log("Error!")
    }
)
</script>
</body>
</html>


Here the webkitGetUserMedia function takes three parameters: the first is a string specifying that we want to work with video; the second is a callback invoked if the attempt to access the webcam succeeds; and the third is a function that will be called if access to the hardware fails for some reason.
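The dispatch between the success and error callbacks can be illustrated outside a browser. The sketch below uses a minimal mock of the navigator object (an assumption for illustration only; the real object is supplied by Chrome and actually prompts the user and opens the camera):

```javascript
// Minimal mock of navigator so the three-argument callback pattern
// can run outside a browser; the real implementation is Chrome's.
const navigator = {
  webkitGetUserMedia: function (media, onSuccess, onError) {
    // A real browser would ask the user for permission here;
    // the mock simply simulates a successful "video" request.
    if (media === "video") {
      onSuccess({ kind: "video-stream" }); // stands in for a MediaStream
    } else {
      onError(new Error("unsupported media type: " + media));
    }
  },
};

let result = null;
navigator.webkitGetUserMedia(
  "video",                                 // 1st: what to capture
  function (stream) { result = stream; },  // 2nd: success callback
  function (err) { result = err; }         // 3rd: error callback
);
console.log(result.kind); // → "video-stream"
```

The same shape carries over to the real API: only the first argument and the two callback bodies change.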

The call window.webkitURL.createObjectURL(stream) obtains a Blob URL (a File API element) for the video stream, after which the video is displayed in the video element.

According to the WebRTC specification, the getUserMedia function should ask the user whether they consent to the application accessing their webcam, in roughly the same way as the Geolocation API does.

[Sources: < 1 >, < 2 >]

Source: https://habr.com/ru/post/136605/

