
Simple GStreamer audio player

Recently I needed to implement a small audio player and, for various reasons, I chose the GStreamer library. I decided to share the knowledge I gained along the way; I hope the information below will be useful to someone.

So, let's begin


First of all, let's go over the basic concepts of GStreamer.

An element is the most important class of object in GStreamer. Elements can be linked together into a chain to form a so-called pipeline. Each element has a strictly defined function: reading a file, inputting or outputting data, and so on.

Pads are used to transfer data between elements. Each element can have one or more pads.

A bin (element container) combines a chain of elements. With a bin you can manage a group of elements as a single unit.
A pipeline is similar to a bin, except that in addition to elements it can also contain bins of elements.

You can read more about all of this in the documentation.

We now turn to the implementation of our class.

This is what our audio player class looks like:

audioengine.h
 #ifndef AUDIOENGINE_H
 #define AUDIOENGINE_H

 #include <gst/gst.h>
 #include <glib.h>
 #include <QObject>

 class AudioEngine : public QObject
 {
     Q_OBJECT
 public:
     AudioEngine(QObject *parent = 0);
     ~AudioEngine();

     int Init();
     void MusicPlay();
     void MusicPaused();
     void MusicStop();
     void AddFile(char *file);
     void SetVolume(gdouble val);
     gint64 GetMusicPosition();
     gint64 GetMusicDuration();
     void SetMusicPosition(gint64 pos);

 private:
     GstElement *pipeline;
     GstElement *source;
     GstElement *volume;
     gint64 pos;

     static void OnPadAdded(GstElement *element, GstPad *pad, gpointer data);

 private slots:
 };

 #endif // AUDIOENGINE_H



In the Init() function, we initialize the GStreamer library:

 gst_init(0, 0); 

This function accepts the argc and argv command-line arguments; in our case they can be omitted.
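
If we did want to forward the command-line arguments to GStreamer (so that it can parse its own options such as --gst-debug), a minimal sketch of a hypothetical main() would look like this:

 #include <gst/gst.h>

 int main(int argc, char *argv[])
 {
     /* gst_init() parses and removes GStreamer's own options from argc/argv */
     gst_init(&argc, &argv);
     return 0;
 }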

Next, we create a pipeline:

 pipeline = gst_pipeline_new("audio-player"); 

and the elements we need:

 source = gst_element_factory_make("filesrc", NULL);
 demuxer = gst_element_factory_make("decodebin", NULL);
 decoder = gst_element_factory_make("audioconvert", NULL);
 volume = gst_element_factory_make("volume", NULL);
 conv = gst_element_factory_make("audioconvert", NULL);
 sink = gst_element_factory_make("autoaudiosink", NULL);

The source element reads the audio file.
demuxer decodes the audio file (decodebin automatically picks the right demuxer and decoder).
decoder and conv convert the audio to another format.
volume controls the playback volume.
sink automatically detects the audio output device and sends the data to it.

It should be noted that the demuxer only creates pads for each stream at runtime, so we have to install a signal handler to link the demuxer to the decoder. The OnPadAdded() function will help us with this.

The signal handler is connected in our code like this:

 g_signal_connect(demuxer, "pad-added", G_CALLBACK(OnPadAdded), decoder); 
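
The article does not show the body of OnPadAdded(); a minimal sketch, following the usual dynamic-pad pattern from the GStreamer documentation (error handling omitted), could look like this:

 void AudioEngine::OnPadAdded(GstElement *element, GstPad *pad, gpointer data)
 {
     /* 'data' is the decoder element we passed to g_signal_connect() */
     GstElement *decoder = (GstElement *) data;

     /* link the newly created demuxer source pad to the decoder's sink pad */
     GstPad *sinkpad = gst_element_get_static_pad(decoder, "sink");
     gst_pad_link(pad, sinkpad);
     gst_object_unref(sinkpad);
 }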

Add all the created elements to the pipeline:

 gst_bin_add_many (GST_BIN (pipeline), source, demuxer, decoder, volume, conv, sink, NULL); 

and link the elements together:

 gst_element_link (source, demuxer);
 gst_element_link_many (decoder, volume, conv, sink, NULL);
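
Note that gst_element_link() and gst_element_link_many() return a gboolean, so it may be worth checking the result; a small sketch (the -1 return value is just an assumption about how Init() reports errors):

 if (!gst_element_link(source, demuxer) ||
     !gst_element_link_many(decoder, volume, conv, sink, NULL)) {
     g_printerr("Failed to link the pipeline elements\n");
     return -1;   /* assumption: Init() reports failure with a non-zero value */
 }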

The function for adding a file to the player looks like this:

 void AudioEngine::AddFile(char *file)
 {
     g_object_set(G_OBJECT(source), "location", file, NULL);
 }

The g_object_set() function sets properties on the source element. Here the location property holds the path to our file on the local machine, and the trailing NULL tells the function that there are no more property/value pairs.
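
For example, a caller could pass an ordinary filesystem path (the path below is, of course, hypothetical):

 AudioEngine engine;
 engine.Init();

 char path[] = "/home/user/music/track.mp3";   /* hypothetical path */
 engine.AddFile(path);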

The functions to start and pause playback look like this:

 void AudioEngine::MusicPlay()
 {
     gst_element_set_state(pipeline, GST_STATE_PLAYING);
 }

 void AudioEngine::MusicPaused()
 {
     gst_element_set_state(pipeline, GST_STATE_PAUSED);
 }

Here everything should be clear.
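
The header also declares MusicStop(), which the article does not show. A plausible implementation, assuming that stopping simply means resetting the pipeline, would set its state to NULL:

 void AudioEngine::MusicStop()
 {
     /* assumption: "stop" resets the pipeline (and the playback position) completely */
     gst_element_set_state(pipeline, GST_STATE_NULL);
 }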

Function for volume control:

 void AudioEngine::SetVolume(gdouble val)
 {
     g_object_set(G_OBJECT(volume), "volume", val, NULL);
 }
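
The volume property of the volume element is a double where 0.0 is mute and 1.0 is unchanged (100%) volume, so a caller (reusing the engine object from the example above) could do:

 engine.SetVolume(0.5);   /* half volume; values above 1.0 amplify the signal */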

The functions to get the track duration and its current position:

 gint64 AudioEngine::GetMusicDuration()
 {
     gint64 len;
     gst_element_query_duration(pipeline, GST_FORMAT_TIME, &len);
     return len;
 }

 gint64 AudioEngine::GetMusicPosition()
 {
     gint64 pos;
     gst_element_query_position(pipeline, GST_FORMAT_TIME, &pos);
     return pos;
 }

Keep in mind that these functions return time values in nanoseconds.
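
Since the values are in nanoseconds, they usually need to be converted before being shown in a GUI; GStreamer provides the GST_SECOND constant and the GST_TIME_FORMAT / GST_TIME_ARGS macros for this. A small sketch, again reusing the hypothetical engine object:

 gint64 pos = engine.GetMusicPosition();
 gint64 len = engine.GetMusicDuration();

 /* whole seconds, e.g. for a progress slider */
 int posSec = (int)(pos / GST_SECOND);
 int lenSec = (int)(len / GST_SECOND);

 /* or a formatted h:mm:ss string for debugging */
 g_print("%" GST_TIME_FORMAT " / %" GST_TIME_FORMAT "\n",
         GST_TIME_ARGS(pos), GST_TIME_ARGS(len));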

And finally, the function to change the position of the track:

 void AudioEngine::SetMusicPosition(gint64 pos)
 {
     gst_element_set_state(pipeline, GST_STATE_PAUSED);
     gst_element_seek_simple(pipeline, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH, pos);
     gst_element_set_state(pipeline, GST_STATE_PLAYING);
 }

To change the position we first pause playback, perform the seek, and then resume it.
gst_element_seek_simple() takes our pipeline, the time format, the seek flags, and the target position, which is again given in nanoseconds.
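
For example, to jump to the two-minute mark, the target time in seconds must first be converted to nanoseconds:

 engine.SetMusicPosition(120 * GST_SECOND);   /* seek to 2:00 */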

The full code, together with a small GUI implemented in Qt, is available at the link below.
Sources on GitHub

Thanks to all.

Source: https://habr.com/ru/post/204172/

