
Qt and the mobile camera. Part 2: MeeGo

Hello, Habr readers!

In the previous article I showed how to build a convenient Qt viewfinder for the camera on Symbian devices. Now it is time to make that code more universal and add support for MeeGo 1.2, which, as it turned out, is not a trivial task.

To briefly recap what we got in the previous part:
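Roughly, the wiring looked like this (a minimal sketch in the QtMobility 1.x API; the widget class name myViewFinderWidget, its members, and the myVideoSurface constructor arguments are placeholders, not the exact code from Part 1):

#include <QCamera>
#include <QMediaService>
#include <QVideoRendererControl>

void myViewFinderWidget::startCamera()
{
    m_camera  = new QCamera(this);
    m_surface = new myVideoSurface(this);   // constructor arguments as in Part 1

    // Attach our surface as the camera's video output via the renderer control.
    QVideoRendererControl *renderer =
        m_camera->service()->requestControl<QVideoRendererControl *>();
    renderer->setSurface(m_surface);

    m_camera->start();
}

The camera then delivers frames to myVideoSurface::present(), and that is where everything below happens.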
Now, why can't we simply take this widget and drop it into an application for the Nokia N9 (or N900, N950)? The problem is the image format the camera returns: UYVY. When our viewfinder starts, all we get is a message in the debug log about an unsupported format. I won't pretend I knew what UYVY was, but, thanks to foreign Maemo/MeeGo users, I managed to find a solution to the problem. Below I will give only excerpts from the first article's code, with the new modifications.
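For reference, the layout itself is easy enough to describe, even if the camera stack refuses to handle it for us. A comment sketch of standard packed 4:2:2, two bytes per pixel:

/*
 * UYVY is a packed YUV 4:2:2 format: every 4 bytes describe 2 horizontally
 * adjacent pixels that share one pair of chroma (U, V) values.
 *
 *   byte:   0    1    2    3    4    5    6    7   ...
 *   data:  U0   Y0   V0   Y1 | U1   Y2   V1   Y3  ...
 *          \--- pixels 0,1 --/ \--- pixels 2,3 --/
 *
 * So a line of `width` pixels occupies width * 2 bytes, the same as RGB16.
 */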

The solution is to convert UYVY into a more familiar and, most importantly, Qt-friendly RGB format. Oddly enough, the code we will use is already part of QtMobility and can be found in its source code (link at the end of the article). The plan for implementing the UYVY -> RGB16 converter is as follows:
  1. Add nominal UYVY support to our viewfinder, so that we actually get access to the frames instead of just reading the logs
  2. Implement the conversion function in the frame handler class
  3. Modify the QAbstractVideoSurface::present() method to use the new function
First we need to tell the system that UYVY is an acceptable format. To do this, add the QVideoFrame::Format_UYVY value to the list of formats returned by QAbstractVideoSurface::supportedPixelFormats():
>> myvideosurface.cpp
QList<QVideoFrame::PixelFormat> myVideoSurface::supportedPixelFormats(
        QAbstractVideoBuffer::HandleType handleType) const
{
    if (handleType == QAbstractVideoBuffer::NoHandle) {
        return QList<QVideoFrame::PixelFormat>()
                << QVideoFrame::Format_RGB32
                << QVideoFrame::Format_ARGB32
                << QVideoFrame::Format_ARGB32_Premultiplied
                << QVideoFrame::Format_RGB565
                << QVideoFrame::Format_RGB555
                << QVideoFrame::Format_UYVY; // the newly added format
    } else {
        return QList<QVideoFrame::PixelFormat>();
    }
}
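One caveat: if your surface from Part 1 also overrides QAbstractVideoSurface::start() and rejects formats that QImage cannot represent, it has to be relaxed in the same way, otherwise UYVY never reaches present(). A hedged sketch, assuming a start() override along those lines (adapt it to your own implementation):

bool myVideoSurface::start(const QVideoSurfaceFormat &format)
{
    const QImage::Format imageFormat =
        QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat());

    // Accept formats QImage understands directly, plus UYVY,
    // which we convert by hand in convertFrame() below.
    if (imageFormat != QImage::Format_Invalid
            || format.pixelFormat() == QVideoFrame::Format_UYVY) {
        return QAbstractVideoSurface::start(format);
    }
    return false;
}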

Next comes the most important part, the reason this whole article exists: the UYVY -> RGB16 conversion. For it we use a conversion function optimized for the ARM NEON SIMD unit. As I wrote above, you can find it in the QtMobility sources (src/multimedia/qgraphicsvideoitem_maemo5.cpp). However, people have had trouble transplanting this function into their own code (I admit, I only managed it on the second attempt), so I will give it in full. For the task at hand it is not necessary to understand this assembly snippet; I myself have no idea what goes on in it beyond the comments, so it is enough to simply copy it. You can add it to myvideosurface.cpp before the implementation of the myVideoSurface methods.
#include <stdint.h>

#ifdef __ARM_NEON__
/*
 * ARM NEON optimized implementation of UYVY -> RGB16 convertor
 */
static void uyvy422_to_rgb16_line_neon(uint8_t *dst, const uint8_t *src, int n)
{
    /* and this is the NEON code itself */
    static __attribute__ ((aligned (16))) uint16_t acc_r[8] = {
        22840, 22840, 22840, 22840, 22840, 22840, 22840, 22840,
    };
    static __attribute__ ((aligned (16))) uint16_t acc_g[8] = {
        17312, 17312, 17312, 17312, 17312, 17312, 17312, 17312,
    };
    static __attribute__ ((aligned (16))) uint16_t acc_b[8] = {
        28832, 28832, 28832, 28832, 28832, 28832, 28832, 28832,
    };
    /*
     * Registers:
     * q0, q1 : d0, d1, d2, d3 - are used for initial loading of YUV data
     * q2 : d4, d5 - are used for storing converted RGB data
     * q3 : d6, d7 - are used for temporary storage
     *
     * q6 : d12, d13 - are used for converting to RGB16
     * q7 : d14, d15 - are used for storing RGB16 data
     * q4-q5 - reserved
     *
     * q8, q9 : d16, d17, d18, d19 - are used for expanded Y data
     * q10 : d20, d21
     * q11 : d22, d23
     * q12 : d24, d25
     * q13 : d26, d27
     * q13, q14, q15 - various constants (#16, #149, #204, #50, #104, #154)
     */
    asm volatile (
        ".macro convert_macroblock size\n"
        /* load up to 16 source pixels in UYVY format */
        ".if \\size == 16\n"
        "pld [%[src], #128]\n"
        "vld1.32 {d0, d1, d2, d3}, [%[src]]!\n"
        ".elseif \\size == 8\n"
        "vld1.32 {d0, d1}, [%[src]]!\n"
        ".elseif \\size == 4\n"
        "vld1.32 {d0}, [%[src]]!\n"
        ".elseif \\size == 2\n"
        "vld1.32 {d0[0]}, [%[src]]!\n"
        ".else\n"
        ".error \"unsupported macroblock size\"\n"
        ".endif\n"
        /* convert from 'packed' to 'planar' representation */
        "vuzp.8 d0, d1\n" /* d1 - separated Y data (first 8 bytes) */
        "vuzp.8 d2, d3\n" /* d3 - separated Y data (next 8 bytes) */
        "vuzp.8 d0, d2\n" /* d0 - separated U data, d2 - separated V data */
        /* split even and odd Y color components */
        "vuzp.8 d1, d3\n" /* d1 - evenY, d3 - oddY */
        /* clip upper and lower boundaries */
        "vqadd.u8 q0, q0, q4\n"
        "vqadd.u8 q1, q1, q4\n"
        "vqsub.u8 q0, q0, q5\n"
        "vqsub.u8 q1, q1, q5\n"
        "vshr.u8 d4, d2, #1\n"   /* d4 = V >> 1 */
        "vmull.u8 q8, d1, d27\n" /* q8 = evenY * 149 */
        "vmull.u8 q9, d3, d27\n" /* q9 = oddY * 149 */
        "vld1.16 {d20, d21}, [%[acc_r], :128]\n" /* q10 - initialize accumulator for red */
        "vsubw.u8 q10, q10, d4\n"  /* red acc -= (V >> 1) */
        "vmlsl.u8 q10, d2, d28\n"  /* red acc -= V * 204 */
        "vld1.16 {d22, d23}, [%[acc_g], :128]\n" /* q11 - initialize accumulator for green */
        "vmlsl.u8 q11, d2, d30\n"  /* green acc -= V * 104 */
        "vmlsl.u8 q11, d0, d29\n"  /* green acc -= U * 50 */
        "vld1.16 {d24, d25}, [%[acc_b], :128]\n" /* q12 - initialize accumulator for blue */
        "vmlsl.u8 q12, d0, d30\n"  /* blue acc -= U * 104 */
        "vmlsl.u8 q12, d0, d31\n"  /* blue acc -= U * 154 */
        "vhsub.s16 q3, q8, q10\n"  /* calculate even red components */
        "vhsub.s16 q10, q9, q10\n" /* calculate odd red components */
        "vqshrun.s16 d0, q3, #6\n"  /* right shift, narrow and saturate even red components */
        "vqshrun.s16 d3, q10, #6\n" /* right shift, narrow and saturate odd red components */
        "vhadd.s16 q3, q8, q11\n"  /* calculate even green components */
        "vhadd.s16 q11, q9, q11\n" /* calculate odd green components */
        "vqshrun.s16 d1, q3, #6\n"  /* right shift, narrow and saturate even green components */
        "vqshrun.s16 d4, q11, #6\n" /* right shift, narrow and saturate odd green components */
        "vhsub.s16 q3, q8, q12\n"  /* calculate even blue components */
        "vhsub.s16 q12, q9, q12\n" /* calculate odd blue components */
        "vqshrun.s16 d2, q3, #6\n"  /* right shift, narrow and saturate even blue components */
        "vqshrun.s16 d5, q12, #6\n" /* right shift, narrow and saturate odd blue components */
        "vzip.8 d0, d3\n" /* join even and odd red components */
        "vzip.8 d1, d4\n" /* join even and odd green components */
        "vzip.8 d2, d5\n" /* join even and odd blue components */
        "vshll.u8 q7, d0, #8\n" //red
        "vshll.u8 q6, d1, #8\n" //greed
        "vsri.u16 q7, q6, #5\n"
        "vshll.u8 q6, d2, #8\n" //blue
        "vsri.u16 q7, q6, #11\n" //now there is rgb16 in q7
        ".if \\size == 16\n"
        "vst1.16 {d14, d15}, [%[dst]]!\n"
        //"vst3.8 {d0, d1, d2}, [%[dst]]!\n"
        "vshll.u8 q7, d3, #8\n" //red
        "vshll.u8 q6, d4, #8\n" //greed
        "vsri.u16 q7, q6, #5\n"
        "vshll.u8 q6, d5, #8\n" //blue
        "vsri.u16 q7, q6, #11\n" //now there is rgb16 in q7
        //"vst3.8 {d3, d4, d5}, [%[dst]]!\n"
        "vst1.16 {d14, d15}, [%[dst]]!\n"
        ".elseif \\size == 8\n"
        "vst1.16 {d14, d15}, [%[dst]]!\n"
        //"vst3.8 {d0, d1, d2}, [%[dst]]!\n"
        ".elseif \\size == 4\n"
        "vst1.8 {d14}, [%[dst]]!\n"
        ".elseif \\size == 2\n"
        "vst1.8 {d14[0]}, [%[dst]]!\n"
        "vst1.8 {d14[1]}, [%[dst]]!\n"
        ".else\n"
        ".error \"unsupported macroblock size\"\n"
        ".endif\n"
        ".endm\n"

        "vmov.u8 d8, #15\n"  /* add this to U/V to saturate upper boundary */
        "vmov.u8 d9, #20\n"  /* add this to Y to saturate upper boundary */
        "vmov.u8 d10, #31\n" /* sub this from U/V to saturate lower boundary */
        "vmov.u8 d11, #36\n" /* sub this from Y to saturate lower boundary */

        "vmov.u8 d26, #16\n"
        "vmov.u8 d27, #149\n"
        "vmov.u8 d28, #204\n"
        "vmov.u8 d29, #50\n"
        "vmov.u8 d30, #104\n"
        "vmov.u8 d31, #154\n"

        "subs %[n], %[n], #16\n"
        "blt 2f\n"
        "1:\n"
        "convert_macroblock 16\n"
        "subs %[n], %[n], #16\n"
        "bge 1b\n"
        "2:\n"
        "tst %[n], #8\n"
        "beq 3f\n"
        "convert_macroblock 8\n"
        "3:\n"
        "tst %[n], #4\n"
        "beq 4f\n"
        "convert_macroblock 4\n"
        "4:\n"
        "tst %[n], #2\n"
        "beq 5f\n"
        "convert_macroblock 2\n"
        "5:\n"
        ".purgem convert_macroblock\n"
        : [src] "+&r" (src), [dst] "+&r" (dst), [n] "+&r" (n)
        : [acc_r] "r" (&acc_r[0]), [acc_g] "r" (&acc_g[0]), [acc_b] "r" (&acc_b[0])
        : "cc", "memory",
          "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7",
          "d8", "d9", "d10", "d11", "d12", "d13", "d14", "d15",
          "d16", "d17", "d18", "d19", "d20", "d21", "d22", "d23",
          "d24", "d25", "d26", "d27", "d28", "d29", "d30", "d31");
}
#endif
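If, like me, you would rather not decipher the assembly, here is roughly what it computes, written as a plain C++ reference version. This is my own sketch using the common BT.601 integer approximation, so the coefficients are not identical to the NEON routine, and the function name is mine; to actually use it on a non-NEON build you would also have to drop the two __ARM_NEON__ guards shown in this article:

#include <stdint.h>

static inline uint8_t clamp255(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Scalar UYVY -> RGB565 conversion of one line of n pixels (n assumed even). */
static void uyvy422_to_rgb16_line_scalar(uint8_t *dst, const uint8_t *src, int n)
{
    uint16_t *out = (uint16_t *)dst;
    for (int i = 0; i < n; i += 2) {
        /* one macro-pixel: U0 Y0 V0 Y1 -> two RGB pixels sharing chroma */
        const int u  = src[0] - 128;
        const int y0 = src[1] - 16;
        const int v  = src[2] - 128;
        const int y1 = src[3] - 16;
        src += 4;

        for (int k = 0; k < 2; ++k) {
            const int c = (k == 0 ? y0 : y1) * 298;
            const uint8_t r = clamp255((c + 409 * v + 128) >> 8);
            const uint8_t g = clamp255((c - 100 * u - 208 * v + 128) >> 8);
            const uint8_t b = clamp255((c + 516 * u + 128) >> 8);
            /* pack into RGB565, the layout of QImage::Format_RGB16 */
            *out++ = (uint16_t)(((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3));
        }
    }
}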

So a single line of UYVY becomes 16-bit RGB. Now it remains to plug this conversion into our frame handler. I moved the whole conversion into a separate method, myVideoSurface::convertFrame(). It looks like this:

>> myvideosurface.cpp
QPixmap myVideoSurface::convertFrame(QVideoFrame lastVideoFrame)
{
    QPixmap lastFrame = QPixmap();

    if (!lastVideoFrame.isValid()) {
        return QPixmap();
    }

    if (lastVideoFrame.map(QAbstractVideoBuffer::ReadOnly)) {
#ifdef __ARM_NEON__
        if (lastVideoFrame.pixelFormat() == QVideoFrame::Format_UYVY) {
            QImage lastImage(lastVideoFrame.size(), QImage::Format_RGB16);

            const uchar *src = lastVideoFrame.bits();
            uchar *dst = lastImage.bits();
            const int srcLineStep = lastVideoFrame.bytesPerLine();
            const int dstLineStep = lastImage.bytesPerLine();
            const int h = lastVideoFrame.height();
            const int w = lastVideoFrame.width();

            /* the conversion itself, line by line */
            for (int y = 0; y < h; y++) {
                uyvy422_to_rgb16_line_neon(dst, src, w);
                src += srcLineStep;
                dst += dstLineStep;
            }

            lastFrame = QPixmap::fromImage(
                lastImage.scaled(lastVideoFrame.size(),
                                 Qt::IgnoreAspectRatio,
                                 Qt::FastTransformation));
        } else
#endif
        {
            QImage::Format imgFormat =
                QVideoFrame::imageFormatFromPixelFormat(lastVideoFrame.pixelFormat());
            if (imgFormat != QImage::Format_Invalid) {
                QImage lastImage(lastVideoFrame.bits(),
                                 lastVideoFrame.width(),
                                 lastVideoFrame.height(),
                                 lastVideoFrame.bytesPerLine(),
                                 imgFormat);
                lastFrame = QPixmap::fromImage(
                    lastImage.scaled(lastVideoFrame.size(),
                                     Qt::IgnoreAspectRatio,
                                     Qt::FastTransformation));
            }
        }
        lastVideoFrame.unmap();
    }
    return lastFrame;
}

The final touch is to update the myVideoSurface::present() method to use the new conversion:

>> myvideosurface.cpp
bool myVideoSurface::present(const QVideoFrame &frame)
{
    m_frame = frame;
    if (surfaceFormat().pixelFormat() != m_frame.pixelFormat()
            || surfaceFormat().frameSize() != m_frame.size()) {
        stop();
        return false;
    } else {
        observer->newImage(convertFrame(frame));
        return true;
    }
}

That's it: the callback now receives the image in RGB16, which can be handled exactly the same way as the RGB24 we got on Symbian. And since the conversion is optimized, this happens without any loss of application responsiveness. The viewfinder widget can then be used just as in the Symbian version.

Essentially the same image-processing approach we just implemented by hand is already used inside QtMobility; the question is only which platform gets it (as the sources show, inside Mobility this method is used on Maemo 5). So we can only hope that this conversion will eventually appear in the Mobility code for MeeGo as well.

P.S. By the way, at the time this code was written, Qt (4.7.2) still could not save RGB16 images to JPEG (even though the bug tracker claimed the bug was fixed). So in the callback, one more conversion step is needed before saving: RGB16 -> RGB888. It is done in one line:
 image = image.convertToFormat(QImage::Format_RGB888); 
Here image is the QImage object that arrived in the newImage() method. This conversion also has no noticeable effect on speed, so it is perfectly usable even when displaying the image on screen in real time.
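For completeness, a sketch of where that line ends up (the slot name saveSnapshot() and the member m_lastImage are my own placeholder names, not code from Part 1):

void myViewFinderWidget::saveSnapshot(const QString &fileName)
{
    QImage image = m_lastImage;                            // the frame delivered to newImage()
    image = image.convertToFormat(QImage::Format_RGB888);  // RGB16 -> RGB888, see above
    image.save(fileName, "JPEG");
}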

Happy converting!

References:

Source: https://habr.com/ru/post/129294/

