
Making your personal Skype: step-by-step instructions for creating WebRTC applications




WebRTC enables real-time audio/video communication directly in the browser (Firefox and Chrome).



In this post, I will explain how to implement the simplest WebRTC application.



1. getUserMedia - access to media devices (microphone / webcam)



Nothing complicated: about 10 lines of JavaScript are enough to see and hear yourself in the browser ( demo ).


Create index.html:



<video autoplay></video>
<script>
  navigator.getUserMedia = navigator.getUserMedia ||
    navigator.mozGetUserMedia ||
    navigator.webkitGetUserMedia;

  navigator.getUserMedia({ audio: true, video: true }, gotStream, streamError);

  function gotStream(stream) {
    document.querySelector('video').src = URL.createObjectURL(stream);
  }

  function streamError(error) {
    console.log(error);
  }
</script>


You can apply CSS3 filters to the video element.



What is frustrating at this stage of WebRTC development is that I cannot tell the browser "I trust this site, always give it access to my camera and microphone"; I have to click Allow after every page open or refresh.



It is also worth a reminder that if you have given one browser access to the camera, another browser will get PERMISSION_DENIED when it tries to access the same device.



2. Signaling server



Here I depart from the order of most "WebRTC getting started" guides, which demonstrate WebRTC's capabilities on a single client as the second step; personally, that only added confusion to the explanation.



A signaling server is the coordinating center of a WebRTC application: it provides communication between clients, initializes and closes connections, and reports errors.



In our case, the signaling server is Node.js + socket.io + node-static; it will listen on port 1234.

node-static will also serve index.html, which keeps our application as simple as possible.



In the application folder, install the necessary:



 npm install socket.io
 npm install node-static


Download server.js into the application folder. I will not analyze the server code: it is simple enough to understand on its own and is not, strictly speaking, part of WebRTC. Just start the server:



 node server.js 
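Since server.js is not analyzed here, a rough sketch of the relay logic at its heart may help. This is an assumption about its structure, not the actual file: the real server also serves index.html via node-static and speaks socket.io, but the essential job is just forwarding every signaling message from one client to the other connected clients.

```javascript
// Hypothetical sketch of a signaling server's core: a relay that
// forwards every message from one client to all other clients.
function makeRelay() {
  var clients = []; // one send-function per connected client

  return {
    // Register a client; returns its id for later broadcasts.
    connect: function (send) {
      clients.push(send);
      return clients.length - 1;
    },
    // Forward a message from client `fromId` to everyone else.
    broadcast: function (fromId, message) {
      clients.forEach(function (send, id) {
        if (id !== fromId) send(message);
      });
    }
  };
}
```

In the real server the "send-function" would be a socket.io socket, but the forwarding pattern is the same: the server never interprets offers, answers, or candidates, it only relays them.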




3. WebRTC



Now we are ready to work with WebRTC directly. Download the complete working index.html; below I will take it apart piece by piece.



3.0 The standards have not been finalized yet, so for now we have to use vendor prefixes for different browsers:


var PeerConnection = window.mozRTCPeerConnection || window.webkitRTCPeerConnection;
var IceCandidate = window.mozRTCIceCandidate || window.RTCIceCandidate;
var SessionDescription = window.mozRTCSessionDescription || window.RTCSessionDescription;
navigator.getUserMedia = navigator.getUserMedia ||
  navigator.mozGetUserMedia ||
  navigator.webkitGetUserMedia;




3.1. getUserMedia


navigator.getUserMedia(
  { audio: true, video: true },
  gotStream,
  function (error) { console.log(error); }
);

function gotStream(stream) {
  document.getElementById("callButton").style.display = 'inline-block';
  document.getElementById("localVideo").src = URL.createObjectURL(stream);
  pc = new PeerConnection(null);
  pc.addStream(stream);
  pc.onicecandidate = gotIceCandidate;
  pc.onaddstream = gotRemoteStream;
}


We already covered this part at the very beginning of the post. What is new is that in gotStream we create a PeerConnection and attach two event handlers: onicecandidate fires when the browser finds a local ICE candidate (step 3.4), and onaddstream fires when the remote media stream arrives (step 3.6).





3.2. Call offer


A Call Offer initializes a WebRTC session. Both the Call Offer and the Call Answer (the next step) are in SDP (Session Description Protocol) format and describe the parameters (resolution, codecs, etc.) with which each client's media streams are initialized.
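For a feel of what is actually exchanged: an SDP description is plain text. A heavily shortened, illustrative offer (field values are hypothetical) might look like this:

```
v=0
o=- 20518 0 IN IP4 0.0.0.0
s=-
t=0 0
m=audio 9 RTP/SAVPF 111
a=rtpmap:111 opus/48000/2
m=video 9 RTP/SAVPF 100
a=rtpmap:100 VP8/90000
```

The m= lines declare the media streams (audio, video), and the a=rtpmap lines list the codecs the client can use.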



IMPORTANT: clients must grant access to their media devices BEFORE a Call Offer is sent.



function createOffer() {
  pc.createOffer(
    gotLocalDescription,
    function (error) { console.log(error); },
    { 'mandatory': { 'OfferToReceiveAudio': true, 'OfferToReceiveVideo': true } }
  );
}




3.3. Call answer


The callee sends a Call Answer immediately after receiving a Call Offer. Note that createOffer() and createAnswer() share the same success callback, gotLocalDescription: for one client the localDescription is created by createOffer(), for the other by createAnswer().



function createAnswer() {
  pc.createAnswer(
    gotLocalDescription,
    function (error) { console.log(error); },
    { 'mandatory': { 'OfferToReceiveAudio': true, 'OfferToReceiveVideo': true } }
  );
}
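The shared callback gotLocalDescription is referenced above but its body is not shown in the listings; a plausible sketch (an assumption, not the article's actual code) is that it applies the freshly created SDP to our own PeerConnection and relays it to the other client via the signaling server:

```javascript
// Hypothetical sketch: shared success callback of createOffer() and
// createAnswer(). It stores the new SDP locally and sends it to the peer.
function gotLocalDescription(description) {
  pc.setLocalDescription(description); // apply the SDP to our connection
  sendMessage(description);            // relay the offer/answer via the server
}

// Minimal stubs so the sketch can be exercised outside a browser:
var sent = [];
var pc = { setLocalDescription: function (d) { this.local = d; } };
function sendMessage(message) { sent.push(message); }

gotLocalDescription({ type: 'offer', sdp: 'v=0' });
```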




3.4. Ice candidate


ICE (Interactive Connectivity Establishment) candidates connect the clients: they establish the network path over which the media streams will be transmitted. We send each generated ICE candidate to the signaling server; handling of the messages coming back from the signaling server is described in the next step.



function gotIceCandidate(event) {
  if (event.candidate) {
    sendMessage({
      type: 'candidate',
      label: event.candidate.sdpMLineIndex,
      id: event.candidate.sdpMid,
      candidate: event.candidate.candidate
    });
  }
}




3.5. Processing messages from the signaling server


var socket = io.connect('', {port: 1234});

socket.on('message', function (message) {
  if (message.type === 'offer') {
    pc.setRemoteDescription(new SessionDescription(message));
    createAnswer();
  } else if (message.type === 'answer') {
    pc.setRemoteDescription(new SessionDescription(message));
  } else if (message.type === 'candidate') {
    var candidate = new IceCandidate({sdpMLineIndex: message.label, candidate: message.candidate});
    pc.addIceCandidate(candidate);
  }
});


The code is simple, but let me explain: an incoming offer means the other side wants to start a session, so we store it as the remote description and respond with createAnswer(); an incoming answer completes the offer/answer exchange, so we only store it; an incoming candidate is an ICE candidate found by the other side, which we add to our PeerConnection.
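The sendMessage helper used in the listings is the outbound counterpart of this 'message' handler, but its body is not shown; a plausible sketch (an assumption, with the socket stubbed so it runs outside a browser) is that it simply emits any signaling payload (offer, answer, or candidate) to the server, which relays it to the other client:

```javascript
// Stub transport standing in for the socket.io connection:
var emitted = [];
var socket = {
  emit: function (event, payload) { emitted.push({ event: event, payload: payload }); }
};

// Hypothetical sketch: every signaling payload goes out as a 'message'
// event; server.js relays it to the other client unchanged.
function sendMessage(message) {
  socket.emit('message', message);
}

sendMessage({ type: 'candidate', label: 0, id: 'audio', candidate: 'candidate:0' });
```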





3.6. gotRemoteStream


If everything went well, the onaddstream handler from step 3.1 fires with the media stream from the other party.

function gotRemoteStream(event) {
  document.getElementById("remoteVideo").src = URL.createObjectURL(event.stream);
}




Let me remind you that the entire source code of the application can be found here.






Source: https://habr.com/ru/post/198632/


