
Fooling Chatroulette in Linux - more flexible than in Windows

You all know about services like Chatroulette, which have proliferated across the Internet. The typical behavior of users of these services is, to put it mildly, depressing. Of course, it is very profitable for the manufacturers of the keyboards and mice being worn out by frantic clicking of the next button, but any normal homo sapiens should feel an explosion in the brain and a creative itch in the hands, demanding action to destroy the total symmetry and monotony in the behavior of those on screen. Besides, this is a whole 76800 pixels of advertising space. In short, something needs to be done about it, and this text tells the story of how to do such things (namely, feeding arbitrary video streams into chatroulettes) in Linux. Along the way you will learn: (1) how dd can be used for buffering, (2) how to grab any area of the screen or a video with ffmpeg, (3) the differences between vloopback and avld, (4) the effect of pinning processes to different hardware threads on an Atom, (5) how to use Xvfb for video compositing, (6) about a small social experiment, (7) and a few other things besides.



1. Linux
Of course, you could download some utility like the one at manycam.com and get some of these features in Windows. But those features, however diverse, are limited by the will of the authors of that program and of the plugins for it, and those plugins are not so easy to write. What makes Linux and the unixway good for these tasks, as a philosophy of information processing, is that you can simply and flexibly plug together all sorts of filters of varying degrees of fun and end up with an interesting result. There is more freedom, so everything below happens in Linux. But Windows users should of course also join the struggle for diversity in chatroulettes, if only with the help of that same ManyCam.com.

2. avld and vloopback

The first thing to do is to set up a video interface to flashplayer, through which all these chatroulettes work. The Linux flashplayer understands video4linux(2) video devices, so you need to set up such a virtual device. This can be done with special drivers. There are many of them in the wild, but I personally only managed to get avld and vloopback working. They are reasonably sane and even work on kernel 2.6.33.4. Another plus is that these drivers are (as far as I know) in the Ubuntu repositories, and definitely in the Arch Linux AUR, and they probably exist in Gentoo too - so they can be installed into the system properly, with no need for hand-rolled builds. I will not waste time describing how to install these packages, and will go straight to answering the question: why are we talking about two drivers?

Here is the answer. For most fun scenarios avld will be enough for you - it is very easy to use and as reliable as felt boots in winter, but it has a significant drawback: avld does not (yet) support multiple video devices; it creates exactly one virtual v4l device when the driver is loaded, which may not be enough. vloopback can create many devices (videopipes), but it is also capable of hanging flashplayer together with the browser, and it has to be used according to a rather shamanic script. You could, of course, ignore the complexity and use only vloopback, but it has one more serious drawback: it produces the video stream at whatever speed the reading process can accept it - fps is not controlled at all. Even if the process generating the video stream emits frames very rarely, vloopback will simply keep serving the same frame in the meantime. As a result, firefox reading from a vloopback device through flashplayer turns into a 100% CPU pig. So if you are having fun with vloopback on a laptop, it makes sense to switch the CPU to powersave mode, otherwise the system will heat up to no great use.

avld, by contrast, lets you set the fps, which keeps flashplayer's CPU gluttony in check and gives you extra control over the video.

So, you have installed the drivers, what to do with them next?

2.1. avld

First, you need to load the driver. This is best done with the command

  modprobe avld width=320 height=240 fps=30 


After that, a device /dev/videoN should appear in your system, where the number N depends on how many v4l devices the system already has. In theory, by the standard convention of Linux distributions, users in the video group get read and write access to this device, so do not forget to add yourself to it.

What the numbers in the module parameters mean should be clear, and the avld documentation says that these parameters can be changed at runtime by writing a new configuration into /dev/videoN :

  echo 'width = 640 height = 320 fps = 15' > /dev/videoN 


but I could not change the settings this way. You can, however, change them by simply reloading the module.

After loading the module with the necessary parameters (320x240 at 25 fps suits flashplayer well; with other sizes your mileage may vary) you can immediately start the browser, go to some chatroulette.com, right-click the flash plugin area and select the newly appeared camera in the settings (its name will begin with 'Dummy video device').

After turning on the camera you should see a black square of almost your own making. To fill it with colors, you need to sequentially write raw pictures - simply sequences of values encoding the pixel colors - into the device /dev/videoN . You can do it like this:

  ffmpeg -an -i some_video.avi -f rawvideo -s 320x240 -pix_fmt bgr24 - | dd obs=$((320*240*3)) of=/dev/videoN 


That is, some video file is taken, the video stream is extracted from it ( -an tells ffmpeg to ignore the audio tracks), decoded into a sequence of raw pictures of the right size with the right pixel color format, and written, in whole blocks of (320 * 240 * 3) bytes, into avld.
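A quick sanity check of that block-size arithmetic (a trivial sketch; the fps figure is just the one used when loading avld above):

```shell
# One 320x240 frame in bgr24 is 3 bytes per pixel
w=320; h=240; bpp=3; fps=30
echo "$((w * h * bpp)) bytes per frame"        # 230400
echo "$((w * h * bpp * fps)) bytes per second" # 6912000, about 6.6 MiB/s
```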

If the stream from ffmpeg were sent straight to /dev/videoN , nothing would come of it, because avld assumes that every frame is delivered to it whole, by a single write call on the corresponding file descriptor, while ffmpeg writes its output in small portions of a few kilobytes; avld would take each small piece for a whole frame and produce some kind of mush. That is what dd is for in the command above: the obs option sets the block size for writes to the output file. In other words, dd acts here as a buffer that accumulates a full frame and then spits it into avld as one indivisible block.
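The re-blocking is easy to observe with synthetic data: feed dd one frame's worth of bytes in small chunks and count what comes out the other side (a sketch; status=none assumes GNU dd):

```shell
# dd as a re-blocking buffer: 4 KiB dribbles in, frame-sized writes out
frame_bytes=$((320 * 240 * 3))
head -c "$frame_bytes" /dev/zero \
  | dd bs=4096 obs="$frame_bytes" status=none \
  | wc -c   # 230400: the full frame passed through intact
```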

That is really all there is to avld. You can write anything you like into /dev/videoN . It is also handy that avld remembers the last frame and keeps serving it to the application consuming the video stream from this v4l device. So you can write a single picture into /dev/videoN , and it will appear before the not-too-bright gaze of the chatroulette inhabitants.
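For example, a single solid-colored frame can be cooked up with nothing but printf (a sketch; for a real picture you would decode it to bgr24 with ffmpeg exactly as above):

```shell
# One solid-blue 320x240 bgr24 frame: every pixel is the bytes FF 00 00 (B, G, R)
i=0
while [ "$i" -lt $((320 * 240)) ]; do
  printf '\377\000\000'
  i=$((i + 1))
done > /tmp/blue.raw
wc -c < /tmp/blue.raw              # 230400: exactly one frame
# cat /tmp/blue.raw > /dev/videoN  # avld will keep showing it
```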

By the way: MEncoder writes its decoded video in raw format exactly frame by frame, which lets you use it without dd.

2.2. vloopback

The interface of this thing is noticeably more complicated than avld's. The reason is that vloopback was conceived not as a fake camcorder, but as a pipe-like device (like | in bash) for realtime video processing. Accordingly, this thing has a rather sophisticated set of options and a rather complicated software interface, which may require special utilities to configure the parameters of those same videopipes - which can, for example, scale video (right inside the kernel, yes... Linux can be strange). But if the goal is to have fun rather than master all the subtleties of vloopback, you can act simply.

First, of course, you need to load the module into the kernel:

  modprobe vloopback pipes=M 

where the pipes parameter asks vloopback to create the corresponding number of channels. After loading, vloopback reports where in /dev the names for the inputs and outputs of those channels were placed. You can see this information like so:

  $ dmesg | grep vloop
 [vloopback_init]: video4linux loopback driver v1.4-trunk
 [vloopback_init]: Loopback 0 registered, input: video0, output: video1
 [vloopback_init]: Loopback 0 Using 2 buffers
 [vloopback_init]: Loopback 1 registered, input: video2, output: video3
 [vloopback_init]: Loopback 1 Using 2 buffers 


Each channel has two names in /dev : the output end, naturally, is the one that pretends to be a v4l device, while the data for that device is written to the input end. The protocol for writing into a vloopback channel is simple, but not as trivial as with avld, and you have to use the utility with the harsh name mjpegtools_yuv_to_v4l to write a sequence of frames into the channel. This package is neither in the repositories nor in the Arch Linux AUR, so I personally had to build the utility from source, which is quite easy: (1) unpack the sources from www.filewatcher.com/m/mjpegtools_yuv_to_v4l-0.2.tgz.11065.0.0.html , (2) run make .
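A small awk one-liner turns that dmesg report into a readable channel map (a sketch; here the sample lines above are fed in via a here-doc for illustration, in real use pipe in `dmesg | grep vloop`):

```shell
# Print "write here, read there" for each vloopback channel
awk '/registered/ { gsub(",", ""); printf "channel %s: write /dev/%s, read /dev/%s\n", $3, $6, $8 }' <<'EOF'
[vloopback_init]: Loopback 0 registered, input: video0, output: video1
[vloopback_init]: Loopback 1 registered, input: video2, output: video3
EOF
# channel 0: write /dev/video0, read /dev/video1
# channel 1: write /dev/video2, read /dev/video3
```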

Now everything is ready to feed a video stream into a vloopback channel. However, vloopback requires gentle handling, and to keep anything from hanging you need to follow certain rules. First, always start the stream into vloopback first, and only then the application that will read from the video device. To stop, first close the reading application, and only then the one writing data into the vloopback channel. In short: like a stack.
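The stack discipline can be sketched as a tiny teardown script (an illustration of the ordering only; here `sleep` stands in for the real writer, the ffmpeg pipeline, and the real reader, the browser):

```shell
# Writer starts first and stops last; the reader is pushed and popped on top of it
sleep 60 & writer=$!   # stand-in for: ffmpeg ... | mjpegtools_yuv_to_v4l /dev/video0
sleep 60 & reader=$!   # stand-in for: the browser with flashplugin
kill "$reader"; wait "$reader" 2>/dev/null   # the reader goes down first...
kill "$writer"; wait "$writer" 2>/dev/null   # ...and only then the writer
echo 'torn down in stack order'
```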

In addition, vloopback requires the geometry and pixel format to be set; this can be done with mplayer. So, first the video stream into the vloopback channel:

  ffmpeg -an -i some_file.avi -f yuv4mpegpipe -s 320x240 -pix_fmt yuv420p - | mjpegtools_yuv_to_v4l /dev/video0 


At this point, the browser with flashplugin should not be running yet. Before opening any chatroulette in it, you need to set and check the output end of the video channel. This can be done with mplayer:

  mplayer tv:// -tv 'driver=v4l:width=320:height=240:device=/dev/video1' 


Once mplayer has exited, you can start the browser with some chatroulette and pick the newborn virtual camera. REMEMBER: for nothing to get stuck, first close the browser (or disable flashplugin), and only then stop writing frames into the channel.

3. Content.

So, we now know how to make virtual cameras that flashplugin will accept. But how to fill them - that is the question. Honestly, when I started all this, I had a burning desire to check: is it my face that is crooked, or do the people in these chatroulettes really just sit there to endlessly poke the next button?

So the first thing I wanted to do was become a 'man in the middle' between two users, that is, relay the video stream of the first to the second and, of course, vice versa. Digging that video stream out of flashplugin itself is probably possible via preloaded libraries, but that is long and hard. It is simpler to take the finished picture off the screen. ffmpeg helps with this (shown for avld):

  ffmpeg -f x11grab -s 320x240 -r 15 -i :0.0+x_offset+y_offset -f rawvideo -pix_fmt bgr24 - | dd obs=$((320*240*3)) of=/dev/videoN 


The -r parameter sets the fps - how often frames are taken from the screen. After a couple of tries at picking the right x_offset and y_offset you can achieve the desired crossover - and thereby ruin your mood, because it immediately becomes obvious that people really do sit there and dumbly hammer the next button regardless of whom they see in front of them (which suggests the hypothesis that all these chatroulettes are a scheme of mouse and keyboard manufacturers hoping for faster wear of certain buttons). Perhaps this behavior is a social analogue of Newton's first law: a body, in the absence of forces (that is, in a symmetric environment), keeps moving uniformly in a straight line.
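Rather than picking x_offset and y_offset by trial and error, you can read them off the flash window with xwininfo (run it and click the window). A sketch of extracting the numbers; for illustration it is fed a typical fragment of xwininfo output through a here-doc:

```shell
# Pull the window's absolute position out of an xwininfo report;
# live version: eval "$(xwininfo | awk '...')" after clicking the window
eval "$(awk '/Absolute upper-left X/ { print "x=" $4 } /Absolute upper-left Y/ { print "y=" $4 }' <<'EOF'
  Absolute upper-left X:  65
  Absolute upper-left Y:  24
EOF
)"
echo "+$x+$y"   # +65+24, ready to append after -i :0.0
```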

But then another question arises: how to attract the attention of people (preferably the attention of girls)? What works well, on the positive side, is broadcasting the desktop (for those not quite in the know, here is the beginning of the command):

  ffmpeg -f x11grab -s 1680x1050 -r 15 -i :0.0 ... 


with, say, a game of chess open on it, or something else more or less intellectual. You can, of course, also put your bright, thoughtful face on the desktop, showing the stream from the real camera in a separate window:

  mplayer -nosound tv:// 


But the extraneous windows get in the way, so one wants to composite the various video streams in memory and only then send the result to the virtual webcam. Again, you could hunt for a utility that does this, but you can also remember that X-Window itself provides quite good facilities for overlaying anything on anything.

4. Xvfb

To be able to draw into a virtual framebuffer, that framebuffer, of course, first has to be created and started:

  Xvfb :1 -screen 1 640x480x32 


this creates virtual display 1 with a new screen 1; which number is the resolution and which the color depth, obvious it should be (Master Yoda). Well, that is basically it - the rest is just as obvious. You can send a video stream of your desktop to this virtual display:

  export DISPLAY=:1.1; ffmpeg -f x11grab -s 1680x1050 -r 15 -i :0.0 -f yuv4mpegpipe -pix_fmt yuv444p -s 640x480 - | ffplay -f yuv4mpegpipe - 


at startup, ffplay will see the DISPLAY environment variable and figure out which X server it should draw on. (I use yuv4mpegpipe here to have fewer options to type on the receiving side of the pipe; clearly, converting the color to yuv444 and then drawing pictures from it adds CPU load, and it is more efficient to use -f rawvideo, but then the pixel format and frame size have to be spelled out everywhere.) Next, you can overlay your own video image on top of this disgrace:

  mplayer -nosound tv:// -tv 'width=160:height=120' -display :1.1 -geometry +$((640-160-10))+$((320-120-10)) 


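The yuv4mpegpipe-vs-rawvideo tradeoff mentioned above is easy to put into numbers - per-second traffic through the pipe at 640x480 and 15 fps (a back-of-the-envelope sketch):

```shell
# Pipe traffic per second for different pixel formats
w=640; h=480; fps=15
echo "yuv420p: $((w * h * 3 / 2 * fps)) B/s"  # 6912000 (1.5 bytes/pixel)
echo "yuv444p: $((w * h * 3 * fps)) B/s"      # 13824000 (3 bytes/pixel)
echo "rgb24:   $((w * h * 3 * fps)) B/s"      # same 3 bytes/pixel, but no extra colorspace hop
```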
Well, send a video stream to a virtual video camera:

  ffmpeg -f x11grab -s 640x480 -r 15 -i :1.1 -f rawvideo -s 320x240 ... 


OK, at this point we have picture-in-picture and can demonstrate our chess abilities to others, etc., etc. But alas, reality is harsh and pornographic: what hooks the inhabitants of chatroulettes (girls included) most of all is footage of couples making love to their webcams (yep, people like to peep).

5. Crop

But most of the clips of this sort that you can get on the Internet...

(By the way, here is a simple way to pull the video out of any flash player in firefox. The current flashplayer-10.0.42.5 works with video content like this: it starts downloading the movie through the browser cache, and while the movie is buffering, the corresponding file sits in the firefox cache; when buffering ends, the file is deleted - but it remains open, so flashplugin can still read it. You and I, however, have Linux, where of course you can make a hard link to such a file on the same disk. So if you want to keep a video: open its page, then go to the browser cache (clearing the cache beforehand makes the right file easy to spot among the new arrivals) and hard-link the video with the ln command - that is all, it is yours.)
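The 'deleted but still open' effect itself is easy to demonstrate with the shell standing in for flashplugin: once the name is gone, the data can still be copied out through /proc (a sketch of the underlying Linux mechanism, complementary to the ln recipe):

```shell
# Open a file on fd 3, delete its name, then rescue the bytes via /proc
exec 3> /tmp/buffered.flv
echo 'movie bytes' >&3
rm /tmp/buffered.flv               # the name is gone; the inode lives while fd 3 is open
cp "/proc/$$/fd/3" /tmp/rescued.flv
cat /tmp/rescued.flv               # movie bytes
exec 3>&-                          # now the inode really goes away
```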

So: most videos of dubious content on the Internet are hung with all sorts of logos and tickers, which you need to get rid of so as not to break the illusion for those watching. To get rid of them you naturally want to cut the clean video out of the full picture. The ffmpeg video filter crop is suitable for this. But this filter works sternly, and if you write something like:

  ffmpeg -an -i some_video.avi -vf crop=32:32:400:300 -f rawvideo -s 320x240 ... 


then you will be cursed at, because the rectangle with (upper-left - lower-right) coordinates (32,32) - (432,332) does not fit into a 320x240 frame: ffmpeg first scales the frame and only then tries to apply the crop filter to it. You could of course fire up bc -l and calculate everything needed to end up with a clean image of suitable size, but that is tedious; instead, you can use a pipeline:

  ffmpeg -an -i some_video.avi -vf crop=32:32:400:300 -f yuv4mpegpipe - | ffmpeg -f yuv4mpegpipe -i - -f rawvideo -s 320x240 -pix_fmt ... 


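Alternatively, a tiny helper spares you the bc session by checking whether a crop rectangle fits in a frame (a sketch; it assumes crop arguments in the x:y:w:h order used above, and the 720x576 source size is made up for the example):

```shell
# fits X Y W H FRAME_W FRAME_H - does the crop rectangle fit into the frame?
fits() { [ $(($1 + $3)) -le "$5" ] && [ $(($2 + $4)) -le "$6" ]; }
fits 32 32 400 300 320 240 || echo 'does not fit 320x240 - the curse from above'
fits 32 32 400 300 720 576 && echo 'fits a 720x576 source'
```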
Here one could already say: thank you all for your attention, and happy video creativity in the open spaces of chatroulettes. But I will indulge my graphomania a little longer - it has been a while since I wrote on Habr.

6. Atom users.

In general, the performance of even a not-very-powerful Athlon II X2 is more than enough for all these amusements - you can watch a couple of 720p movies at the same time on top of them. But a single-core Atom pulls this along poorly, and one wants to squeeze a few extra fps out of it somehow.

And here, despite all the apparent symmetry of its hardware threads, setting the affinity of different groups of processes can help. This is probably because the scheduler does not distinguish hardware threads, considers them completely equal entities, and freely migrates processes between them. BUT, because of the TLB, those threads must actually be different: bouncing a process between them forces TLB switches. If firefox gets interleaved with ffmpeg in its execution, then switching from one to the other on the x86 architecture requires completely flushing the TLB and reloading it with new data.

So performance improves a little if you pin with

  taskset -c 1 firefox 


firefox to the second hardware thread, and all the processes involved in any video processing to the first thread (number 0). Two or three frames per second can be won this way.
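The pinning is easy to verify right from the shell, since /proc reports the mask a process actually received (a sketch; the grep below is itself the pinned process):

```shell
# Run a command pinned to hardware thread 0 and show the affinity it got
taskset -c 0 grep Cpus_allowed_list /proc/self/status
# Cpus_allowed_list:	0
# The recipe from the text, in the same vein:
#   taskset -c 1 firefox
#   taskset -c 0 sh -c 'ffmpeg ... | dd ... of=/dev/videoN'
```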

7. Audio.

What I could not build out of ALSA was a virtual microphone. And that is despite ALSA having a native loopback driver, called aloop. It really does work as advertised: everything written into one of its channels can be read from the paired channel; jack-server works fine with it, and you can shuffle sound between applications in various ways. But I failed to assemble from this aloop something that would look like a virtual microphone. It is probably possible - I just ran out of patience and time to dig through those wooden ALSA configs.

That would open the road to audio advertising across the chatroulette expanses. If anyone pulls this off - write an article on Habr, I promise to boost your karma for it :)

8. Well, that's all.

I hope it was not dull and boring. Have fun, and all that.

Source: https://habr.com/ru/post/96016/

