
Making the camera in Qt work on Android



Qt is already a decent environment for developing mobile applications, but some corners of it remain unfinished. For example, if you try to run the standard camera example, it works on Windows but not on Android. At the same time, the QML camera examples work fine. So camera support on Android is implemented, but full access to it is not exposed. What if we want the freedom of direct access to the video stream?

Studying the sources of the QtMultimedia module makes it clear that the limitations on working with the camera come from crutches that had to be hidden: workarounds put in place to route hardware output through OpenGL. Nevertheless, it is possible to get full access to the camera's video stream.

Before the explanation begins, a warning: you do not need to do everything described below just to obtain individual images. You can simply use the camera via QML and write your own component on top of it to capture individual frames; how to do that is described elsewhere.
To avoid writing everything from scratch, we take that same Qt example named "Camera Example" (the one in the screenshot) and make it work. To display the image it uses an object of the QCameraViewfinder class; we will write our own replacement, and for output we will have to use OpenGL.

For writing your own classes that output frames received from media objects, Qt provides the abstract class QAbstractVideoSurface with virtual functions through which the interaction takes place. We create a class based on it that will be responsible for receiving frames and call it CameraSurface. Frame output will be handled by a CameraSurfaceWidget class inherited from QOpenGLWidget. It would be tempting to merge these two classes, but inheriting from both QAbstractVideoSurface and QOpenGLWidget would mean inheriting QObject twice, which is not allowed.
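The split between the two classes can be sketched as follows. This is only a sketch under the article's naming: the virtual functions are the real QAbstractVideoSurface interface, while member names such as `_frame` and `_surface` are assumptions of mine, not taken from the example's source.

```cpp
#include <QAbstractVideoSurface>
#include <QOpenGLWidget>
#include <QVideoFrame>

class CameraSurface : public QAbstractVideoSurface {
    Q_OBJECT
public:
    // Formats we claim to accept; the camera backend picks one of these.
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType type
                = QAbstractVideoBuffer::NoHandle) const override;
    // Called with each new frame, possibly from a non-GUI thread.
    bool present(const QVideoFrame& frame) override;

    QVideoFrame frame() const { return _frame; }

private:
    QVideoFrame _frame; // copy of the last received frame
};

class CameraSurfaceWidget : public QOpenGLWidget {
    Q_OBJECT
public:
    explicit CameraSurfaceWidget(CameraSurface* surface, QWidget* parent = nullptr)
        : QOpenGLWidget(parent), _surface(surface) {}

protected:
    void paintGL() override; // draws _surface->frame()

private:
    CameraSurface* _surface;
};
```

Keeping the surface and the widget separate also cleanly separates the two jobs: one object receives frames in whatever thread QtMultimedia uses, the other draws them in the GUI thread.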

All the implementation code is available below; here I will only describe the key points. And just in case, you can read more about how to work with the QAbstractVideoSurface class in its documentation.

We receive each new frame in bool CameraSurface::present(const QVideoFrame& frame). The frame parameter is the new frame of our video stream. Data from the camera can arrive as a memory array (as on Windows or Symbian) or as a texture (on Android). If a texture arrives, do not even try to read it on the spot. Calling frame.handle() looks like it merely returns a texture id, but in fact it also performs a tricky lazy initialization of resources based on the OpenGL context of the calling thread. present() is not called in your rendering thread, so your OpenGL context is not current there. And do not let the const keyword in the declaration deceive you: the data inside is insidiously marked mutable. Just copy the frame and read its data while drawing.
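The rule above boils down to a very small present(). A sketch, assuming the member names used in this article; the `frameReceived` signal is hypothetical, any mechanism that triggers a repaint of the widget will do:

```cpp
// Only copy the frame here; touch its pixel data or texture handle later,
// inside paintGL(), where the correct OpenGL context is current.
bool CameraSurface::present(const QVideoFrame& frame)
{
    if (!frame.isValid())
        return false;

    _frame = frame;       // QVideoFrame is implicitly shared, so this is cheap
    emit frameReceived(); // hypothetical signal: ask the widget to repaint
    return true;
}
```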

But that is not all you need to know. When bound to the camera, our CameraSurface gets a hidden dynamic property "GLContext", and Qt expects you to write your OpenGL context into it. It is better to do this in the thread of the CameraSurface object, that is, via a slot invoked through Qt's signal/slot machinery. Then you must deliver an event announcing the write to "GLContext" to the object stored in the "_q_GLThreadCallback" property, and this event must have the code QEvent::User. Formally that is a custom event type, but you were never supposed to know about these crutches at all, so never mind. On Windows everything works without any of this, but on Android, if this is not done, the camera simply never starts sending frames.
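The "GLContext" write can be sketched like this. The method names scheduleOpenGLContextUpdate and updateGLContext are this article's own, not Qt API; the idea is that paintGL() calls the first one while the widget's GL context is current, and the queued invocation makes the actual property write happen in the CameraSurface object's thread, as recommended above:

```cpp
void CameraSurface::scheduleOpenGLContextUpdate()
{
    // Queued invocation: the slot runs in this object's thread.
    QMetaObject::invokeMethod(this, "updateGLContext", Qt::QueuedConnection);
}

void CameraSurface::updateGLContext() // private slot
{
    // Hand the OpenGL context over to QtMultimedia's Android backend
    // through the hidden dynamic property.
    setProperty("GLContext",
                QVariant::fromValue<QObject*>(QOpenGLContext::currentContext()));
}
```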

In short, the drawing code would be something like this:

    void CameraSurfaceWidget::paintGL() {
        if (!_surface->isActive()) {
            // The surface is not active yet: schedule writing our OpenGL context
            _surface->scheduleOpenGLContextUpdate();
            // Pull out the hidden callback object
            QObject* glThreadCallback =
                (_surface->property("_q_GLThreadCallback")).value<QObject*>();
            if (glThreadCallback) {
                QEvent event(QEvent::User);      // the "custom" event Qt expects
                glThreadCallback->event(&event); // deliver it directly
            }
            // After this handshake the camera starts sending frames.
        } else {
            QVideoFrame frame = _surface->frame();
            // ... read the frame data and draw it
        }
    }

As a result, we can process the stream and get the same interface on Android as on Windows. Data from a frame texture, by the way, can be pulled out using a Frame Buffer Object and glReadPixels (there is no glGetTexImage in OpenGL ES). And this is not the only way: you can also get frames through QVideoProbe, but then everything is apparently processed on the CPU, because it lags terribly. Better to just forget about that option.
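The FBO-based readback mentioned above can be sketched as follows. This assumes a current OpenGL context and that textureId, width and height were obtained from the frame; the helper name readTexture and the choice of RGBA8888 are mine, and error handling is omitted:

```cpp
QImage readTexture(GLuint textureId, int width, int height)
{
    QOpenGLFunctions* f = QOpenGLContext::currentContext()->functions();

    GLuint fbo = 0;
    f->glGenFramebuffers(1, &fbo);
    f->glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    // Attach the camera texture as the color attachment of the FBO.
    f->glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_TEXTURE_2D, textureId, 0);

    QImage image(width, height, QImage::Format_RGBA8888);
    // glReadPixels reads from the bound framebuffer, which now wraps our texture.
    f->glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, image.bits());

    f->glBindFramebuffer(GL_FRAMEBUFFER, 0);
    f->glDeleteFramebuffers(1, &fbo);
    return image;
}
```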

One more Qt oddity
And one last strange thing. If the frame format is Format_RGB32, the color channels actually come in BGR order; if the format is Format_BGR32, they come in RGB order. Something in Qt is mixed up.
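In practice this means a pixel loader has to swap the red and blue channels. A minimal illustration of such a swap on raw 32-bit 0xAARRGGBB pixels, in plain C++ and independent of Qt (the function name is mine):

```cpp
#include <cstdint>
#include <vector>

// Swap the R and B channels of 32-bit pixels stored as 0xAARRGGBB words.
// Useful when a frame labelled Format_RGB32 actually carries BGR data.
std::vector<uint32_t> swapRedBlue(const std::vector<uint32_t>& pixels)
{
    std::vector<uint32_t> out;
    out.reserve(pixels.size());
    for (uint32_t p : pixels) {
        uint32_t r = (p >> 16) & 0xFF;
        uint32_t b = p & 0xFF;
        out.push_back((p & 0xFF00FF00u) | (b << 16) | r);
    }
    return out;
}
```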

Download the corrected example here .

Source: https://habr.com/ru/post/254625/
