
Meet GStreamer: Output Devices

Hello again, dear habrauser interested in GStreamer! Today we will talk about output devices (sinks) for various kinds of media data, write a primitive player that can listen to the radio and record the stream to a file, and learn a lot of new things.
An output device (sink) is an element that outputs a signal somewhere, be it a sound card, a file, a video card, or a network interface. At its core, an output device is the exact opposite of a data source, and, unlike data sources, output devices have only one pad, called sink.
Let's look at the output devices in more detail.

Let's go!


1. fakesink

This device is the counterpart of fakesrc: it does nothing, and is used to send the signal "into the void".
Honestly, I myself cannot think of a place where it could be used, so I do not find much use for this device.
Usage example:
gst-launch-1.0 fakesrc ! fakesink 
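That said, one place where fakesink can arguably be useful is benchmarking: with a real output device out of the equation, the pipeline runs as fast as the remaining elements allow. A sketch (the file path is just a placeholder):

 gst-launch-1.0 filesrc location=/foo/bar.mp3 ! decodebin ! fakesink sync=false

Here sync=false tells the sink not to synchronize buffers to the clock, so the command effectively measures raw decoding speed.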

2. fdsink

The fdsink device is used to output a stream to a file descriptor. Like fdsrc, it has only one parameter, fd, which must contain the number of the file descriptor; by default the stream goes to STDOUT. Naturally, there is little benefit from this element, and little reason to use it in real projects.
Usage example:
 gst-launch-1.0 filesrc location=/foo/bar.mp3 ! fdsink | gst-launch-1.0 fdsrc ! decodebin ! autoaudiosink 
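A minimal Python sketch of the same element, assuming we want to redirect the stream into a file opened by hand (the paths are placeholders):

 import os
 import gi
 gi.require_version("Gst", "1.0")
 from gi.repository import Gst

 Gst.init(None)

 ## Open a file and hand its descriptor to fdsink
 fd = os.open('/tmp/copy.mp3', os.O_WRONLY | os.O_CREAT)
 pipeline = Gst.parse_launch('filesrc location=/foo/bar.mp3 ! fdsink name=out')
 pipeline.get_by_name('out').set_property('fd', fd)
 pipeline.set_state(Gst.State.PLAYING)

 ## Wait for the end of the stream, then clean up
 bus = pipeline.get_bus()
 bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
 pipeline.set_state(Gst.State.NULL)
 os.close(fd)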

3. alsasink, pulsesink, oss4sink / osssink, jackaudiosink, autoaudiosink

These elements output the stream to a sound card through the corresponding sound subsystem (ALSA, PulseAudio, OSS/OSS4, or JACK). Of their parameters, only device is worth noting: it must contain the identifier of the sound card to which the stream will be sent. The only element in the list above that stands apart is autoaudiosink: it automatically chooses where and through which sound subsystem to output the stream, and therefore has no device parameter.
Usage examples:
 gst-launch-1.0 filesrc location=/foo/bar.mp3 ! decodebin ! alsasink device="hw:0"
 gst-launch-1.0 filesrc location=/foo/bar.mp3 ! decodebin ! pulsesink
 gst-launch-1.0 filesrc location=/foo/bar.mp3 ! decodebin ! autoaudiosink
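In code the device parameter is set like any other property; a minimal sketch, assuming an ALSA card exposed as hw:0:

 import gi
 gi.require_version("Gst", "1.0")
 from gi.repository import Gst

 Gst.init(None)

 ## alsasink lets us pin a specific card; autoaudiosink has no such property
 sink = Gst.ElementFactory.make('alsasink', 'sink')
 sink.set_property('device', 'hw:0')
 print(sink.get_property('device'))  # -> hw:0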

4. filesink

As you have probably already guessed, this device outputs the stream to a file. It can serve many purposes, for example recording a radio stream, the video stream from a webcam, or the audio stream from a sound card. On top of that, this device is simply indispensable when GStreamer is used as a file-conversion tool.
We will not consider the properties of this element in detail, since they are similar to those of the filesrc element we met in the previous article. The one difference is the append parameter, which appends the stream to the end of an existing file instead of overwriting it from the beginning.
Usage example:
 gst-launch-1.0 v4l2src num-buffers=1 ! jpegenc ! filesink location=/tmp/capture1.jpeg 

This example takes a single snapshot with the first device that supports v4l2 and saves it to /tmp/capture1.jpeg.
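The append parameter mentioned above can be tried like this (a sketch; running the command twice produces a chained Ogg file, one of the few formats where simple concatenation is valid):

 gst-launch-1.0 audiotestsrc num-buffers=100 ! audioconvert ! vorbisenc ! oggmux ! filesink append=true location=/tmp/out.ogg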
5. multifilesink

The multifilesink element is the exact opposite of the multifilesrc element we met in the previous article: it outputs the stream to a series of files. Its parameters are similar to those of multifilesrc, so we will not dwell on them.
Usage example:
 gst-launch-1.0 v4l2src num-buffers=10 ! jpegenc ! multifilesink location=/tmp/capture%d.jpeg 

This example creates 10 snapshots and saves them to the files capture0.jpeg through capture9.jpeg.
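If you need file names that sort correctly, the printf-style pattern in location can be zero-padded, just as with multifilesrc (a sketch):

 gst-launch-1.0 v4l2src num-buffers=100 ! jpegenc ! multifilesink location=/tmp/capture%05d.jpeg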
6. giosink

This element, in turn, is the exact opposite of giosrc: it outputs a stream to a file via GIO. Like giosrc, giosink has a location parameter, which contains the URI of the file the stream should be written to.
Usage example:
 gst-launch-1.0 audiotestsrc num-buffers=100 ! giosink location=file:///foo/bar.raw 

7. ximagesink and xvimagesink

These elements output the video signal via the X server. They can be used both for watching video "from the console" and for implementing video output in applications. The difference between them is small but real, and boils down to two points: ximagesink outputs through the X server alone, while xvimagesink uses the XVideo extension (libxv); xvimagesink also has a few more parameters. Let's go over them:
display

The name of the X display, for example :0, :1, :2, and so on.
pixel-aspect-ratio

This parameter sets the pixel aspect ratio, for example 2/1. The default is 1/1.
force-aspect-ratio

In some cases, explicitly specifying pixel-aspect-ratio may not take effect (when the "negotiation" between the elements ended with the original pixel-aspect-ratio being kept), and this property corrects that "problem".

The following properties are available only in xvimagesink.
brightness, contrast, hue, saturation

The names of these properties speak for themselves: brightness, contrast, hue, and saturation. Values can range from -1000 to 1000.
device

The index of the video card that should be used to display the video.
double-buffer

This property turns the use of double buffering on and off.
colorkey, autopaint-colorkey

These properties control the color key of the overlay on which the video is drawn. colorkey must contain a gint with the color code, and autopaint-colorkey enables automatically painting the overlay with that color.
Note:
The documentation does not explain what format the color is in, but most likely it is specified as RGB, according to the formula ((R & 0xff) << 16) | ((G & 0xff) << 8) | (B & 0xff).
draw-borders

This property enables or disables drawing black borders in the areas left empty when force-aspect-ratio is used.
Usage examples:
 gst-launch-1.0 videotestsrc ! ximagesink
 gst-launch-1.0 videotestsrc ! xvimagesink
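The properties described above are passed in the same way as for any other element; a sketch combining a few of them (the values are arbitrary):

 gst-launch-1.0 videotestsrc ! xvimagesink force-aspect-ratio=true brightness=100 saturation=-200

And if the note about colorkey above is correct, pure red, for example, would be (0xff << 16) = 16711680.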

8. aasink and cacasink

These elements are probably no longer relevant and will mostly be used either by old-school users or by those who want to show "what Linux can do", although I may be mistaken. Both elements display video through the libaa and libcaca libraries, that is, as ASCII art. There is only one difference between them: libaa draws black-and-white characters, while libcaca draws colored ones.
We will not dwell on the parameters of these elements, since there is no practical benefit from them (IMHO).
Usage examples:
 gst-launch-1.0 videotestsrc ! aasink
 gst-launch-1.0 videotestsrc ! cacasink

9. gdkpixbufsink

This element renders the video stream into a GdkPixbuf object, which is accessible through the read-only last-pixbuf property. What it might be needed for, I cannot even imagine.
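One plausible use, for what it's worth, is grabbing thumbnails: preroll a file, read last-pixbuf, and save it. A minimal Python sketch under that assumption (the paths are placeholders; videoconvert is added so gdkpixbufsink receives RGB data):

 import gi
 gi.require_version("Gst", "1.0")
 gi.require_version("GdkPixbuf", "2.0")
 from gi.repository import Gst, GdkPixbuf

 Gst.init(None)

 pipeline = Gst.parse_launch(
     'filesrc location=/foo/bar.mp4 ! decodebin ! videoconvert ! gdkpixbufsink name=sink')
 ## PAUSED is enough: the first frame is prerolled but playback does not start
 pipeline.set_state(Gst.State.PAUSED)
 pipeline.get_state(Gst.CLOCK_TIME_NONE)

 ## last-pixbuf holds the most recent frame as a GdkPixbuf
 pixbuf = pipeline.get_by_name('sink').get_property('last-pixbuf')
 if pixbuf is not None:
     pixbuf.savev('/tmp/thumbnail.png', 'png', [], [])
 pipeline.set_state(Gst.State.NULL)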

Examples


As an example, we will use the player from the previous article, with one new feature added: recording the stream to a file.
example2.py
 #!/usr/bin/env python2
 # coding=utf-8
 import gi
 gi.require_version("Gst", "1.0")
 gi.require_version("Gtk", "3.0")
 from gi.repository import Gst
 from gi.repository import Gtk
 from gi.repository import GObject
 import os
 import signal
 import argparse

 Gst.init("")
 signal.signal(signal.SIGINT, signal.SIG_DFL)
 GObject.threads_init()


 def parse_args():
     parser = argparse.ArgumentParser(prog='example2.py')
     parser.add_argument('--volume', help='Volume (0-100) (default: 100)',
                         type=int, default=100)
     parser.add_argument('--output', help='File to record the stream to (default: /tmp/out.ogg)',
                         type=str, default='/tmp/out.ogg')
     parser.add_argument('location')
     args = parser.parse_args()
     return args


 class RecorderBin(Gst.Bin):
     def __init__(self, name=None):
         super(RecorderBin, self).__init__(name=name)
         self.vorbisenc = Gst.ElementFactory.make("vorbisenc", "vorbisenc")
         self.oggmux = Gst.ElementFactory.make("oggmux", "oggmux")
         self.filesink = Gst.ElementFactory.make("filesink", "filesink")

         self.add(self.vorbisenc)
         self.add(self.oggmux)
         self.add(self.filesink)

         self.vorbisenc.link(self.oggmux)
         self.oggmux.link(self.filesink)

         self.sink_pad = Gst.GhostPad.new("sink", self.vorbisenc.get_static_pad("sink"))
         self.add_pad(self.sink_pad)

     def set_location(self, location):
         self.filesink.set_property("location", location)


 class Player():
     def __init__(self, args):
         self.pipeline = self.create_pipeline(args)
         self.args = args

         ## Subscribe to messages on the pipeline bus
         ## to react to events in the pipeline
         message_bus = self.pipeline.get_bus()
         message_bus.add_signal_watch()
         message_bus.connect('message', self.message_handler)

         ## Set the volume
         self.pipeline.get_by_name('volume').set_property('volume', args.volume / 100.)

     def create_source(self, location):
         """create_source(str) -> Gst.Element"""
         if not location.startswith('http') and not os.path.exists(location):
             raise IOError("File %s doesn't exists" % location)

         if location.startswith('http'):
             source = Gst.ElementFactory.make('souphttpsrc', 'source')
         else:
             source = Gst.ElementFactory.make('filesrc', 'source')
         source.set_property('location', location)
         return source

     def create_pipeline(self, args):
         """create_pipeline() -> Gst.Pipeline"""
         pipeline = Gst.Pipeline()

         ## Create the pipeline elements
         source = self.create_source(args.location)
         decodebin = Gst.ElementFactory.make('decodebin', 'decodebin')
         audioconvert = Gst.ElementFactory.make('audioconvert', 'audioconvert')
         volume = Gst.ElementFactory.make('volume', 'volume')
         audiosink = Gst.ElementFactory.make('autoaudiosink', 'autoaudiosink')

         ## The tee element duplicates the stream
         tee = Gst.ElementFactory.make('tee', 'tee')

         ## decodebin creates its pads dynamically,
         ## so we link it when a pad appears
         def on_pad_added(decodebin, pad):
             pad.link(audioconvert.get_static_pad('sink'))
         decodebin.connect('pad-added', on_pad_added)

         ## Add all the elements to the pipeline
         elements = [source, decodebin, audioconvert, volume, audiosink, tee]
         [pipeline.add(k) for k in elements]

         ## Link the elements according to the scheme:
         ##                                               +-> volume -> autoaudiosink
         ## *src* -> (decodebin + audioconvert) -> tee -> |
         ##                                               [ +-> vorbisenc -> oggmux -> filesink ]
         source.link(decodebin)
         audioconvert.link_pads('src', tee, 'sink')
         tee.link_pads('src_0', volume, 'sink')
         volume.link(audiosink)

         return pipeline

     def play(self):
         self.pipeline.set_state(Gst.State.PLAYING)

         recorder = RecorderBin('recorder')
         self.pipeline.add(recorder)
         self.pipeline.get_by_name('tee').link_pads('src_1', recorder, 'sink')
         recorder.set_location(self.args.output)

     def message_handler(self, bus, message):
         """Handle messages from the pipeline bus"""
         struct = message.get_structure()
         if message.type == Gst.MessageType.EOS:
             print('End of stream.')
             Gtk.main_quit()
         elif message.type == Gst.MessageType.TAG and message.parse_tag() and struct.has_field('taglist'):
             print('GStreamer received meta-data:')
             taglist = struct.get_value('taglist')
             for x in range(taglist.n_tags()):
                 name = taglist.nth_tag_name(x)
                 print('    %s: %s' % (name, taglist.get_string(name)[1]))
         else:
             pass


 if __name__ == "__main__":
     args = parse_args()
     player = Player(args)
     player.play()
     Gtk.main()


Note:
This example (like the example from the previous article) does not work on Ubuntu 13.10 and crashes with a segfault (see lp:1198375).
Let's look at what happens here. For convenience and logical separation, we create the RecorderBin container, into which we place three elements: vorbisenc, oggmux, and filesink. The vorbisenc and oggmux elements are needed to encode the RAW stream into the Vorbis format and to wrap it in an Ogg container, respectively. We will not dwell on containers (bins) in detail; just recall that a container is a complete element that performs some action within the pipeline.
Inside RecorderBin all three elements are linked sequentially, according to the scheme:

 vorbisenc → oggmux → filesink 

Next, we create the tee element, which is needed to duplicate the stream: as you remember, most elements have only one input and one output, and tee solves the problem that arises when the signal has to be "split" and sent to two different destinations (a sound card and a file in our case).
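The same split can be sketched entirely in gst-launch; note the queue elements after each branch of tee, which prevent one branch from stalling the other:

 gst-launch-1.0 audiotestsrc ! tee name=t t. ! queue ! autoaudiosink t. ! queue ! audioconvert ! vorbisenc ! oggmux ! filesink location=/tmp/out.ogg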
After that, we link the src_0 pad of the tee element to the sink pad of the volume element; in the play method, after setting the state to PLAYING, we add our RecorderBin to the pipeline and link the src_1 pad of tee to the sink pad of RecorderBin.
Logically, it would have been possible to link everything in create_pipeline, but for some reason GStreamer blocked the entire pipeline when the second sink element was added before the state was set to PLAYING, and I could not find a solution to this problem.
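My guess is that both tee branches were served by the same streaming thread, so each sink waited for the other to preroll. A hedged sketch of a possible fix, reusing the names from example2.py, is to put a queue at the head of RecorderBin, which gives the recording branch its own thread:

 ## Inside RecorderBin.__init__ (a sketch, not the author's solution):
 self.queue = Gst.ElementFactory.make("queue", "queue")
 self.add(self.queue)
 self.queue.link(self.vorbisenc)
 ## ...and ghost the queue's sink pad instead of vorbisenc's:
 self.sink_pad = Gst.GhostPad.new("sink", self.queue.get_static_pad("sink"))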

Conclusion


Today we reviewed almost all the available devices for stream output. In the following articles we will look at the so-called filters: elements that perform various kinds of stream processing. Filters include assorted encoders and decoders, muxers and demuxers, various audio and video filters, and other service elements.

Literature


  1. GStreamer Application Development Manual
  2. GStreamer Core Plugins Reference Manual
  3. GStreamer Base Plugins Reference Manual
  4. GStreamer Good Plugins Reference Manual


PS
Dear colleagues, friends, and ordinary habra-dwellers, I apologize for publishing this post six months late instead of the promised week. This time I will not announce an exact date for the next post: I am afraid of breaking a promise once again, but I will try to prepare it in the near future.

PPS: Sample sources are available on GitHub.

Previous article

Source: https://habr.com/ru/post/204014/

