
Once, when I was feeling down and didn't want to do anything, I remembered that as a child I had badly wanted a video surveillance console like some movie villain's: he sits in a dark room and laughs while watching helpless little people trying to find a way out. Having refreshed those childhood memories, I decided to bring them to life, at least the part with the monitoring console, without the little people. And here HTML5, or rather the
Stream API, came to my aid.
Since I had previously used
getUserMedia to capture sound from a microphone, I assumed there would be no problems with video either, but problems surfaced anyway. That is, capturing a video stream was no problem at all; outputting data from several sources on one page at the same time turned out to be less simple than I had hoped.
So, let's start from the very beginning, namely capturing and outputting video from a single source. For this we will use the
getUserMedia function (part of the
Stream API), which is supported in fresh versions of all decent browsers, except, of course, IE.
Explanations
- All code examples below are written with AngularJS, since that is what I am writing in at the moment;
- All scripts are written to work in the Chrome and Opera browsers; the reason why is explained below.
getUserMedia
To access the webcam you need to ask the user for permission, and this is where
getUserMedia enters the scene. It takes three arguments (the call shape is sketched after the list):
- constraints - here we specify what kind of data we want to access; we will look at it in more detail below;
- successCallback - a callback that receives a LocalMediaStream object, our stream from the camera;
- errorCallback - a callback that fires if an error occurs while capturing the stream or if the user refuses to grant access to their device.
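Put together, the call shape looks like this (a sketch assuming the webkit prefix that Chrome and Opera required at the time):

```javascript
navigator.webkitGetUserMedia(constraints, successCallback, errorCallback);
```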
For output we will use a
video element, whose
src attribute is given a Blob URL created from the
LocalMediaStream object.
As a result, the simplest function for capturing a stream will look like this:
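A minimal sketch of such a function, assuming the prefixed webkitGetUserMedia exposed by Chrome and Opera at the time:

```javascript
function captureStream() {
    navigator.webkitGetUserMedia({ video: true }, function (stream) {
        // Create a video element
        var video = document.createElement("video");
        // Create a Blob URL from the LocalMediaStream and use it as the source
        video.src = window.URL.createObjectURL(stream);
        // Allow autoplay
        video.autoplay = true;
        // Insert the created element into the page
        document.body.appendChild(video);
    }, function (e) {
        console.error("Could not capture the stream", e);
    });
}
```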
Here the following happens:
- We create a video element;
- Using the createObjectURL function, we create a Blob URL from the LocalMediaStream object and pass it as the source of the video element;
- We allow autoplay;
- We insert the created element into the page.
The minimal task is solved: we have brought the stream from one camera onto our page. Now we need to output the streams from the rest of our cameras.
MediaStreamTrack
Of course, while trying to solve my problem I turned for help to the
MediaStreamTrack object, which is an interface for working with the streams of every multimedia device the browser can reach.
MediaStreamTrack is still quite a rare beast and is only found in the latest versions of
Chrome,
Opera and
Firefox. So why do we need it? To get information about the data sources.
So we have found the thread that leads to a solution. But no sooner had I felt the joy of a dream coming true than I realized I could not simply grab all the sources at once for output. After a frantic search for a solution, it turned out that in
Chrome and
Opera the
MediaStreamTrack object has a
getSources function, and it is our salvation. As the name implies, this function hands us information about all the audio and video sources.
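Since only fresh Chrome and Opera shipped this function at the time, a quick guard doesn't hurt (a minimal sketch; the final module below performs the same check via its support property):

```javascript
// Bail out early if the browser doesn't expose MediaStreamTrack.getSources
var supported = typeof MediaStreamTrack !== "undefined" && !!MediaStreamTrack.getSources;
if (!supported) {
    alert("Your browser does not support MediaStreamTrack.getSources");
}
```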
Well, let's find our cameras:
```javascript
getMediaSources: function () {
    var mediaSources = [];
    // "an" is angular here (see the module wrapper below)
    MediaStreamTrack.getSources(function (sources) {
        an.forEach(sources, function (val, key) {
            if (sources[key].kind === 'video') {
                mediaSources.push(val);
            }
        });
    });
}
```
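One caveat: getSources delivers its result asynchronously, so mediaSources above is still empty right after the call returns. In the final module this is solved by wrapping the call in an Angular $q promise, roughly like so:

```javascript
getMediaSources: function () {
    var defer = $q.defer();
    // Resolve the promise once the browser has enumerated the devices
    MediaStreamTrack.getSources(function (sources) {
        defer.resolve(sources);
    });
    return defer.promise;
}
```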
The
sources object that the
getSources function hands us is an array of objects with information about the data sources. Each of these objects contains the following fields (an illustrative entry is shown after the list):
- id - a unique identifier of the source, generated by the browser;
- kind - the type the source belongs to (audio or video);
- label - the device (source) label; in my case it was USB Video Device;
- facing - as far as I understand, this parameter is only relevant on mobile platforms and indicates the front or rear camera (it takes two values: user for the front camera and environment for the rear one).
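For illustration, the entry for my USB camera looked roughly like this (the id value here is made up):

```javascript
{
    id: "r4/Ahg6BlXqJwrXr8h3iCiGy50rf8AJ4CjI0cTYRBIQ=",
    kind: "video",
    label: "USB Video Device",
    facing: ""
}
```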
Solution
So, let's sum up what we can do now. We can get a list of all the sources together with their identifiers, and we can capture data from them and output it. It only remains to put it all together, and we get what we wanted.
We will have the following sequence of actions:
- On page load, we detect all the video sources using the MediaStreamTrack.getSources function;
- We list the sources on the page. We do this because permission to access each camera still has to be granted separately (this can be avoided if the page is served over https);
- When any source in the list is clicked, we capture its data using getUserMedia, create a video element for it and display it (if the same source is selected several times, a copy of the stream is simply created); a sketch of the click handler follows the list.
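The click handler itself is not part of the module below (it lives on the controller side, together with the HTML template); a hypothetical sketch of what it does:

```javascript
// Hypothetical handler: the button's id attribute carries the source identifier
$scope.startBroadcast = function ($event) {
    $room.addVideoPlace($event.target.id);
};
```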
Before giving the final working example, let's come back to the
webkitGetUserMedia function, namely its first argument,
constraints. The documentation says that the source types are passed there in the following format:
{"video": true,"audio":true}
This is obviously not enough for us, because we need at least to pass the source identifier. It turns out that instead of this simple object you can pass a so-called constraints object. Thanks to it, we can tune quite a few parameters, such as the frame rate and the resolution.
```javascript
var constraints = {};
constraints.video = {
    mandatory: {
        minWidth: 640,
        minHeight: 480,
        minFrameRate: 30
    },
    optional: [
        { sourceId: sourceid }
    ]
};
```
Our object is divided into two parts:
- mandatory - the hard constraints for our video; if they cannot be met, an exception is thrown;
- optional - optional parameters that will be applied to the stream if possible (i.e. if we specify here that we want a video signal at 60 frames per second instead of 30, and our camera can provide such a stream, then we get what we want; and if the camera cannot satisfy the condition, the video is output at 30 frames per second, matching the minFrameRate value from the mandatory block); a sketch follows below.
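A sketch of that fallback behaviour, assuming the prefixed constraints syntax of the time (where optional is an array of single-key objects):

```javascript
var constraints = {
    video: {
        mandatory: { minFrameRate: 30 },   // guaranteed floor; failing it throws
        optional: [ { minFrameRate: 60 } ] // applied only if the camera can deliver it
    }
};
```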
Of the parameters that can be tuned, I found these:
- frameRate - frame rate;
- aspectRatio - aspect ratio;
- minWidth - minimum width;
- minHeight - minimum height;
- sourceId - the unique source identifier;
- width - width;
- height - height.
Now everything is ready to write the final version of our module:
Module code:

```javascript
(function (w, d, an, mst, nav) {
    "use strict";

    angular.module("camersRoom", []).
        value("$sectors", {}).
        directive("ngVideoSector", ['$sectors', function ($sectors) {
            return {
                restrict: "A",
                link: function (scope, elem, attr) {
                    // Register the element under its sector name ("rec", "place")
                    $sectors[attr.ngVideoSector] = elem;
                }
            };
        }]).
        directive("ngRoomPlace", ["$room", "$sectors", "$compile", function ($room, $sectors, $compile) {
            return {
                restrict: "A",
                controller: function ($scope, $element) {
                    this.createViews = function (html) {
                        var videoBlock = $sectors.rec,
                            content;
                        videoBlock.append(html);
                        content = videoBlock.contents();
                        $compile(content)($scope);
                    };
                },
                link: function (scope, elem, attr, cont) {
                    if ($room.support) {
                        var mediaSources = [],
                            html,
                            count;
                        $room.getMediaSources().then(function (sources) {
                            an.forEach(sources, function (val, key) {
                                if (sources[key].kind === 'video') {
                                    mediaSources.push(val);
                                }
                            });
                            count = mediaSources.length;
                            if (count) {
                                html = $room.createSourcePreview(mediaSources);
                                cont.createViews(html);
                            } else {
                                scope.error = { show: true, text: "No video sources found!" };
                            }
                        });
                    } else {
                        scope.error = { show: true, text: "Your browser is not supported, use Google Chrome" };
                    }
                }
            };
        }]).
        factory("$room", ["$q", "$sectors", function ($q, $sectors) {
            var Room = function () {
                var methods = {
                    get support() {
                        return !!this.media;
                    },
                    set support(value) {
                        this.media = value;
                    }
                };
                an.extend(this, methods);
                // Support is determined by the presence of MediaStreamTrack.getSources
                this.support = mst.getSources;
            };
            Room.prototype = {
                _createVideoElement: function (stream) {
                    var video = d.createElement("video");
                    video.src = w.URL.createObjectURL(stream);
                    video.controls = true;
                    video.play();
                    $sectors.place.append(video);
                },
                getMediaSources: function () {
                    var defer = $q.defer();
                    mst.getSources(function (sources) {
                        defer.resolve(sources);
                    });
                    return defer.promise;
                },
                createSourcePreview: function (mediaSources) {
                    var htmlString = '',
                        i = 0;
                    an.forEach(mediaSources, function (val) {
                        i++;
                        htmlString += '<button class="video-preview" ng-click="startBroadcast($event)" id="' + val.id + '">Camera ' + i + '</button>';
                    });
                    return htmlString;
                },
                addVideoPlace: function (sourceid) {
                    var constraints = {};
                    constraints.video = {
                        mandatory: {
                            minWidth: 640,
                            minHeight: 480,
                            minFrameRate: 30
                        },
                        optional: [
                            { sourceId: sourceid }
                        ]
                    };
                    nav.webkitGetUserMedia(constraints, function (stream) {
                        this._createVideoElement(stream);
                    }.bind(this), function (e) {
                        alert("Failed to capture the stream!");
                    });
                }
            };
            return new Room();
        }]);
}(window, document, angular, MediaStreamTrack, navigator));
```
I will not include the code of the
html template here; everything can be viewed in the
demo and on
github.
That's all, thank you for your attention, and I hope that this article will be useful to someone.