I want to share a bit of my experience implementing video content delivery.
There is a service that distributes video content for viewing (i.e. downloading is not provided).
All content is divided into 2 categories:
1 - served in its entirety (the whole file), with pause and seeking back and forth;
2 - served as a single "virtual file" of the form [end of the first file] + [some number of whole files] + [beginning of the last file]. The format is mpegts and every set is encoded identically, so the pieces can simply be concatenated.
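The assembly of such a "virtual file" can be sketched as a list of (path, offset, length) pieces that are streamed back to back. This is purely illustrative; the function and variable names are my own assumptions, not the service's actual code:

```php
<?php
// Illustrative sketch: describe a "virtual file" as a list of pieces
// [path, byte offset, byte length] to be concatenated on output.
// Assumes at least two files; names here are hypothetical.
function virtual_pieces(array $files, int $startOffset, int $endOffset): array
{
    $pieces = [];
    $last = count($files) - 1;
    foreach ($files as $i => $path) {
        $size = filesize($path);
        if ($i === 0) {
            $pieces[] = [$path, $startOffset, $size - $startOffset]; // end of the first file
        } elseif ($i === $last) {
            $pieces[] = [$path, 0, $endOffset];                      // beginning of the last file
        } else {
            $pieces[] = [$path, 0, $size];                           // whole file
        }
    }
    return $pieces;
}
```

Because every mpegts set is encoded identically, emitting these pieces one after another yields a stream that players accept as a single file.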
These two categories differ logically (they are completely different things), have distinctly different URIs, and are physically stored in different places.
The original setup
A combination of nginx + Apache.
The first category was plain static file serving from nginx with a little tuning.
The second category went through an Apache + PHP script with a loop
while(!feof($fp)){ ... echo fread($fp, $buf) ... }
where $fp is a file pointer, with fseek() called beforehand where appropriate.
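A rough reconstruction of that loop, wrapped in a function for clarity (the name stream_from and the parameter names are mine, not the author's exact script):

```php
<?php
// Rough reconstruction of the "echo fread()" serving loop described above.
// Names are illustrative; this is not the author's exact script.
function stream_from(string $filename, int $offset, int $buf): void
{
    $fp = fopen($filename, 'rb');
    fseek($fp, $offset);              // honour the Range start, if any
    while (!feof($fp)) {
        echo fread($fp, $buf);        // every chunk passes through PHP's buffer
    }
    fclose($fp);
}
```

Note that each fread() chunk is copied into a PHP buffer before being echoed, which is the memory/CPU trade-off discussed below.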
What I didn't like
As it turned out, nginx is not well suited to serving static files under a large number of range-bytes requests (and that is exactly the kind of request online viewing mostly generates). AIO cannot be used to service such requests. As a result, a long queue to the disk builds up and clients often watch the video with stutters. Configuring a large number of buffers is useless; it just wastes memory.
I tried the latest version of nginx (1.12.2 at the time of writing), with --with-file-aio, --with-threads, and everything else I could think of. No effect.
And a pile of "echo fread()" calls in PHP is also very dubious. fread()'s output passes through an intermediate PHP buffer, so the script consumes at least that much memory. Read the file in small chunks and CPU load rises while throughput drops; read in large chunks and each request eats too much memory.
What is the result?
First of all, I dropped Apache in favour of php5-fpm. That alone gave a significant improvement in response speed and reduced memory consumption.
First category
For the sake of experiment, I decided to try serving this content with my own script.
In nginx:
location ~* /media/.*\..* {
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    include /etc/nginx/sites-available/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/m_download.php;
    root /var/www;
    send_timeout 1h;
}
m_download.php will not be quoted in full. Its core functionality is:
fpassthru($fd)
where $fd is a file pointer. Of course, you must first parse the HTTP_RANGE header, open the file, and seek to the requested offset.
fpassthru() sends the file "from the current position to the end", which is exactly what is needed here. All players play it correctly.
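A minimal sketch of that approach, handling only the common "bytes=N-" Range form (function names, and the exact headers emitted, are my assumptions rather than quotes from m_download.php):

```php
<?php
// Minimal sketch of serving "from the requested offset to the end" with
// fpassthru(). Only the "bytes=N-" Range form is handled; names are mine.
function range_start(?string $header): int
{
    if ($header !== null && preg_match('/bytes=(\d+)-/', $header, $m)) {
        return (int)$m[1];
    }
    return 0;                          // no Range header: serve from the start
}

function serve_tail(string $path, int $start): void
{
    $size = filesize($path);
    if ($start > 0) {
        header('HTTP/1.1 206 Partial Content');
        header("Content-Range: bytes $start-" . ($size - 1) . "/$size");
    }
    header('Content-Length: ' . ($size - $start));
    $fd = fopen($path, 'rb');
    fseek($fd, $start);                // position at the requested offset
    fpassthru($fd);                    // stream from here to EOF, no userland loop
    fclose($fd);
}
```

The point is that after fseek(), fpassthru() hands the rest of the file to the output layer in one call, with no userland read/echo loop.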
To my great surprise, it was this serving method that gave the result I needed. There are no disk queues (more precisely, the system cache is used, and with my SAS-3 12 Gb/s and await < 10 ms things are generally fine). Accordingly, requests are not kept waiting. Download speed (if you do download the file) is about 250 Mbit/s. Client stutters disappeared completely.
At the same time, memory usage dropped considerably, leaving more for the file cache. The script itself consumes about 0.5 MB of private memory while running. The executable code exists in memory in a single copy anyway, so its size does not matter.
Second category
(the one where several pieces of different files must be stitched together) also changed.
I abandoned the "echo fread()" loop.
Unfortunately, PHP has no function for directly outputting an arbitrary slice of a file: fpassthru() has no "how much to output" parameter, it always sends everything to the end.
I tried calling the system dd via passthru(). That is:
passthru('/bin/dd status=none if='.$fffilename.' iflag="skip_bytes,count_bytes" skip='.$ffseek.' count='.$buf_size);
And... lo and behold! The script's memory consumption is just over 0.5 MB, and the buffer size can be set to anything (it does not affect memory). The upload speed (with a 4 MB buffer) flies: the same 250 Mbit/s.
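Wrapped up as a function, the trick looks roughly like this (the helper name is mine; escapeshellarg() is added for safety, which the one-liner above omits):

```php
<?php
// Hypothetical wrapper around the dd trick: emit $count bytes of $filename
// starting at byte $seek, letting dd write straight to the output stream
// so nothing is buffered in PHP.
function dd_range(string $filename, int $seek, int $count): void
{
    $cmd = sprintf(
        'dd status=none if=%s iflag=skip_bytes,count_bytes skip=%d count=%d',
        escapeshellarg($filename),
        $seek,
        $count
    );
    passthru($cmd);    // skip_bytes/count_bytes require GNU dd
}
```

Calling dd_range() once per piece of the "virtual file" concatenates them on the wire without any intermediate PHP buffer.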
That is the whole story. In the end I had to give up serving content with nginx alone. It is still used, but only to pass requests on to php5-fpm.
In short, my IMHO: nginx serves static content well, but reads from disk poorly.
I would never have thought that serving files through a PHP script could turn out to be more efficient.
Finally, I should add that I was looking for any httpd with out-of-the-box AIO for range-bytes requests. It seems lighttpd version 2 may eventually have it, but that version is still unstable... I did not find anything suitable.