Introduction
Hello everyone! A few months ago, the article “Broadcasting online video using nginx” was published on Habr, in which Aecktann described his experience of deploying the nginx module for video broadcasting that I have been developing: nginx-rtmp-module. Since then the project has been actively developed, and in this article I will describe it in more detail.
A streaming server delivers a video stream to the client: either a live stream or a pre-recorded video (VOD, video on demand). There are many video broadcasting technologies. Among them are traditional protocols such as RTMP and MPEG-TS, as well as the recently emerged adaptive HTTP streaming technologies: HLS (Apple), HDS (Adobe), Smooth Streaming (Microsoft) and MPEG-DASH. When choosing a technology, the main factor is client-side support, which is why broadcasting over RTMP is currently one of the most common options. HLS is supported by Apple devices, as well as by some versions of Android.
Building and configuring nginx-rtmp
To add the nginx-rtmp module to nginx, you specify it with the --add-module option when configuring nginx, just like any other module:
./configure --add-module=/path/to/nginx-rtmp-module
After building and installing, add an rtmp {} section to the nginx.conf configuration file. It goes at the root level of the config. For example:
rtmp {
    server {
        listen 1935;

        application myapp {
            live on;
        }
    }
}
For many cases, this simple configuration will suffice. It defines an RTMP application named myapp, in which we will later publish streams and play them back. Each stream will also have its own unique name. One important nuance about the configuration above: it is valid when the number of nginx workers equals one (as a rule, it is specified at the beginning of nginx.conf):
worker_processes 1;
To use live broadcasting with several workers, you need to specify the rtmp_auto_push on directive (see the section “Workers and local relaying”).
Publishing and playing a live stream
You can use Flash players (JWPlayer, FlowPlayer, Strobe, etc.) to publish and play video. However, ffmpeg (and ffplay) is often used to publish streams to the server and for testing. Let's start broadcasting the test file test.mp4 with the following command:
ffmpeg -re -i /var/videos/test.mp4 -c copy -f flv rtmp://localhost/myapp/mystream
Keep in mind that RTMP supports a limited set of codecs; however, the most popular ones, H264 and AAC, are among them. If the codecs in the test file are not compatible with RTMP, transcoding is required:
ffmpeg -re -i /var/videos/test.mp4 -c:v libx264 -c:a libfaac -ar 44100 -ac 2 -f flv rtmp://localhost/myapp/mystream
You can broadcast a stream not only from a file, but also from another source. For example, if some live MPEG-TS stream is available at video.example.com/livechannel.ts, it can also be wrapped into RTMP:
ffmpeg -i http://video.example.com/livechannel.ts -c copy -f flv rtmp://localhost/myapp/mystream
Example broadcast from a local webcam:
ffmpeg -f video4linux2 -i /dev/video0 -c:v libx264 -an -f flv rtmp://localhost/myapp/mystream
You can play a stream using ffplay with the following command:
ffplay rtmp://localhost/myapp/mystream
And finally, a simple example of using JWPlayer to play a stream in the browser (the full version is in the test/www directory of the module):
<script type="text/javascript" src="/jwplayer/jwplayer.js"></script>
<div id="container">Loading the player ...</div>
<script type="text/javascript">
    jwplayer("container").setup({
        modes: [{
            type: "flash",
            src: "/jwplayer/player.swf",
            config: {
                bufferlength: 1,
                file: "mystream",
                streamer: "rtmp://localhost/myapp",
                provider: "rtmp"
            }
        }]
    });
</script>
Video on demand
The module supports broadcasting video files in mp4 and flv formats. Example setup:
application vod {
    play /var/videos;
}
When playing, you specify the file name instead of a stream name; otherwise everything is the same as with live broadcasting:
ffplay rtmp://localhost/vod/movie1.mp4
ffplay rtmp://localhost/vod/movie2.flv
Relaying
When building distributed systems, it is important to be able to relay streams between a large number of servers for load balancing. The module implements two types of relaying: push and pull. Push relaying transmits a locally published stream to a remote server, while pull relaying fetches a remote stream to the local server. An example of push relaying:
application myapp {
    live on;
    push rtmp://cdn.example.com;
}
The moment publishing starts at rtmp://localhost/myapp/mystream, a connection to the remote server is created and the stream is published onward to rtmp://cdn.example.com/myapp/mystream. When local publishing stops, the connection to cdn.example.com is closed automatically.
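Several push directives can be specified in one application, so the same stream can be fanned out to multiple upstream servers at once; a minimal sketch (the CDN hostnames are placeholders):

application myapp {
    live on;
    # the published stream is pushed to both servers in parallel
    push rtmp://cdn1.example.com;
    push rtmp://cdn2.example.com;
}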
Pull relays perform the opposite operation:
application myapp {
    live on;
    pull rtmp://cdn.example.com;
}
In this example, as soon as a client wants to play the stream rtmp://localhost/myapp/mystream locally, a connection is made to rtmp://cdn.example.com/myapp/mystream and the remote stream is relayed to the local server, after which it becomes available to all local clients. The moment the last client disconnects, the connection is closed.
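The pull directive also accepts additional parameters; in particular, name= sets the remote stream name explicitly, and static makes the stream be pulled at server start rather than on demand. A sketch (the names are illustrative):

application myapp {
    live on;
    # pull a fixed remote stream at startup instead of on first play
    pull rtmp://cdn.example.com/myapp name=mystream static;
}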
Mobile Broadcasting (HLS)
For broadcasting to iPhone/iPad devices, as well as to newer Android versions, the HLS (HTTP Live Streaming) protocol is used.
The protocol was developed by Apple and is essentially an MPEG-TS/H264/AAC stream “sliced” into fragments delivered over HTTP, accompanied by a playlist in m3u8 format. Serving files over HTTP is something nginx does perfectly well, so all that remains is to create and update the playlist and the HLS fragments, and to clean up old fragments. This is what the nginx-rtmp-hls module does. It lives in the hls directory of the project but is not built by default, since it requires the libavformat library that ships with ffmpeg. To build nginx with HLS support, you need to add this module explicitly at configure time:
./configure --add-module=/path/to/nginx-rtmp-module --add-module=/path/to/nginx-rtmp-module/hls
It so happened that some time ago the ffmpeg project was forked, and now we have two projects, ffmpeg and avconv, which immediately led to library compatibility (or rather, incompatibility) problems. Building nginx-rtmp requires the original ffmpeg, while some Linux distributions have switched to avconv, which is not suitable for the build. For this case I have written detailed instructions.
To generate HLS, it is enough to specify the following directives:
application myapp {
    live on;
    hls on;
    hls_path /tmp/hls;
    hls_fragment 5s;
}
And finally, in the http {} section, configure serving everything related to HLS:
location /hls {
    root /tmp;
}
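In practice it also makes sense to declare MIME types for the playlist and fragments and to disable caching; a slightly more complete variant of the same location:

location /hls {
    # serve /tmp/hls/<stream>.m3u8 and the .ts fragments
    types {
        application/vnd.apple.mpegurl m3u8;
        video/mp2t ts;
    }
    root /tmp;
    add_header Cache-Control no-cache;
}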
Now we publish the stream mystream to the application myapp and in the iPhone browser type example.com/hls/mystream.m3u8 in the address bar. The stream can also be embedded in an HTML video tag:
<video width="600" height="300" controls="1" autoplay="1" src="http://example.com/hls/mystream.m3u8"></video>
Note that for playback on the iPhone the stream must be encoded in H264 and AAC. If the source stream does not meet these requirements, transcoding needs to be set up.
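For example, an incompatible source can be re-encoded on the fly with the exec directive described in the next section and published into a separate HLS-enabled application; a sketch with illustrative application names:

application src {
    live on;
    # re-encode the incoming stream to H264/AAC and republish it
    # into the hls application below
    exec ffmpeg -i rtmp://localhost/src/$name
        -c:v libx264 -c:a libfaac -ar 44100 -ac 2
        -f flv rtmp://localhost/hls/$name;
}

application hls {
    live on;
    hls on;
    hls_path /tmp/hls;
    hls_fragment 5s;
}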
Transcoding
When broadcasting video, it is often necessary to transcode the incoming stream to a different quality or to different codecs. This task is fundamentally different from distributing RTMP: unlike the latter, it involves heavy CPU load and large, intensive memory consumption, often relies on multithreading and is potentially unstable. For this reason, it should not be performed inside the main server process and is ideally carried out as a separate process. Fortunately, an excellent tool for this already exists: the same ffmpeg. It supports a huge number of codecs, formats and filters and allows the use of many third-party libraries; at the same time it is quite simple and actively supported by the community. The nginx-rtmp module provides a simple interface for using ffmpeg: the exec directive launches an external application at the moment the incoming stream is published. When publishing ends, the application is forcibly terminated; if it terminates on its own, it is restarted.
application myapp {
    live on;
    exec ffmpeg -i rtmp://localhost/myapp/$name
        -c:v flv -c:a copy -s 32x32
        -f flv rtmp://localhost/myapp32x32/$name;
}

application myapp32x32 {
    live on;
}
In this example, ffmpeg transcodes the incoming video to Sorenson H.263, resizes it to 32x32 and publishes the result to the application myapp32x32. You can specify several exec directives at once; they can perform arbitrary transformations of the stream and publish the results to other applications on both the local and remote servers. The directive supports several variables, including $app (application name) and $name (stream name).
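For example, two exec directives can produce two scaled-down variants of the same stream in parallel (the sizes and application names here are arbitrary):

application myapp {
    live on;
    # one ffmpeg process per output quality
    exec ffmpeg -i rtmp://localhost/$app/$name -c:v libx264 -c:a copy
        -s 640x360 -f flv rtmp://localhost/myapp360/$name;
    exec ffmpeg -i rtmp://localhost/$app/$name -c:v libx264 -c:a copy
        -s 320x180 -f flv rtmp://localhost/myapp180/$name;
}

application myapp360 {
    live on;
}

application myapp180 {
    live on;
}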
Workers and local relaying
As you know, nginx is a single-threaded server. To use all the cores of modern processors effectively, it is usually run with several workers. HTTP requests are mostly processed independently of each other, and only in some cases (for example, with a cache) is access to shared data required; such data is stored in shared memory.
With live broadcasting the situation is different: all client connections playing a stream obviously depend on the connection publishing that stream. Using shared memory here would be inefficient and too expensive; it would require synchronization and cause a large loss of performance. Therefore, to make several workers usable, an internal relaying mechanism over UNIX domain sockets was implemented. Such relays work practically the same way as ordinary external push relays. Local relaying is enabled by the following directive:
rtmp_auto_push on;
It must be specified at the root level of the configuration file. Note that local relaying is only needed for live broadcasts.
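In the context of nginx.conf it looks roughly like this (the directive lives at the root level, next to worker_processes):

worker_processes 4;

rtmp_auto_push on;

rtmp {
    server {
        listen 1935;

        application myapp {
            live on;
        }
    }
}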
Recording
It is often necessary to record published streams to disk. The module can record either selected data from the stream (audio, video, key frames) or the stream as a whole. You can set a limit on the file size as well as on the number of recorded frames. The following example enables recording of the first 128K of each stream:
record all;
record_path /tmp/rec;
record_max_size 128K;
Recording is done in flv format into the /tmp/rec directory.
Recording can also be controlled manually, turning it on and off with an HTTP request. This is done with the control module; more information is available on the project website.
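A minimal sketch of such a setup, assuming the control module is enabled in the http section (see the project documentation for the exact set of parameters):

# in the http {} section
location /control {
    rtmp_control all;
}

After that, recording of the stream mystream in the application myapp can be started with a request like http://localhost/control/record/start?app=myapp&name=mystream and stopped via the corresponding record/stop request.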
Authorization and business logic
In many cases you need to restrict or account for publish and play operations, according to the logic of the project the broadcaster is embedded in. The most common case is authorizing a user before giving them access to a video. To integrate the project's business logic into the broadcaster, the module implements HTTP callbacks such as on_publish and on_play. The server-side code receives all available information about the client, including its address, stream name, page address and so on. If HTTP status 2xx is returned, the callback is considered successful and the client proceeds; otherwise the connection is dropped.
on_publish http://example.com/check_publisher;
on_play http://example.com/check_player;
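As a trivial illustration, such a check can even be implemented in plain nginx configuration. With notify_method get; the callback arguments, including any extra arguments from the play URL (for example a token appended as rtmp://example.com/myapp/mystream?token=SECRET), arrive as query-string parameters; the token name and value here are made up:

# in the rtmp application:
#     notify_method get;
#     on_play http://example.com/check_player;
#
# in the http {} section:
location /check_player {
    # allow playback only if the client supplied the right token
    if ($arg_token = "SECRET") {
        return 200;
    }
    return 403;
}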
Statistics
At any given moment, thousands of clients may be connected to your server. Naturally, an interface is needed to see the list of them and the main characteristics of the streams they publish or play. It is also important that this information can be both inspected visually and processed programmatically. Such an interface exists for the nginx-rtmp module. To use it, add the following directives to the http section of nginx.conf:
location /stat {
    rtmp_stat all;
    rtmp_stat_stylesheet stat.xsl;
}

location /stat.xsl {
    root /path/to/stat.xsl/dir/;
}
The rtmp_stat directive enables returning an XML document with a complete description of the connected clients publishing or playing streams, along with the list of applications and servers. This document is convenient for programmatic processing but completely unsuitable for visual analysis. To be able to view the client list in a browser, the rtmp_stat_stylesheet directive sets the relative path to an XSL style sheet (stat.xsl). This file is located in the project root; nginx must be configured to serve it at the specified URL. The result can then be viewed in the browser.
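For programmatic processing it is enough to fetch the XML directly. For example, a rough way to count the streams currently known to the server (assuming xmllint is available; the exact element names can be checked against your own /stat output):

# number of <stream> elements in the statistics document
curl -s http://localhost/stat | xmllint --xpath 'count(//stream)' -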
It is also possible to forcibly drop client connections. This is done with the control module, which is not covered in this article.
Simple Internet Radio
Since the beginning of the article I have constantly used the word “video”. Of course, the module can broadcast not only video but also audio streams. Here is a simple example of an Internet radio station in bash that broadcasts mp3 files from /var/music. The stream can be played by the same simple JWPlayer embedded in a web page.
while true; do ffmpeg -re -i "`find /var/music -type f -name '*.mp3'|sort -R|head -n 1`" -vn -c:a libfaac -ar 44100 -ac 2 -f flv rtmp://localhost/myapp/mystream; done
Compatibility
The module is compatible with all the major software that speaks RTMP, including FMS/FMLE, Wowza and Wirecast; it has been tested with the most common Flash players (JWPlayer, FlowPlayer, StrobeMediaPlayback) and also works fine with ffmpeg/avconv and rtmpdump.
Load
The module uses nginx's asynchronous single-threaded server model, which allows for high performance. We use the module on Intel Xeon E5320/E5645 machines in single-worker mode, where it reaches the maximum throughput of the installed network cards: 2 Gbps. Module users confirm that the same ratio (2 Gbps per core) holds in local relaying mode with several workers. Practice shows that the broadcaster's performance is usually limited by the network rather than the CPU.
I have not run direct comparisons with other products; however, the “heavy” multithreaded FMS, Wowza and Red5, while more feature-rich, should, due to how they are implemented, lose noticeably to my solution in the number of simultaneously connected clients and in CPU load. This is confirmed by many users who have made such comparisons, including in the article I mentioned above.
Conclusion
In conclusion: the module is distributed under the BSD license. It builds and runs on Linux, FreeBSD and Mac OS X. The article covers only a small part of the nginx-rtmp-module functionality; those interested can learn more about the project at the links below.
I would be glad if the project seems interesting to Habr's readers.
Thanks to all!