
Developer in the camera. Video

Apple constantly keeps developers on their toes.
The front cameras on the iPad and iPhone have sparked a new round of ideas among the makers of quick, throwaway apps. I did a little research of my own on these two-camera phones, and I invite anyone who is curious to read on.


Video capture in iOS 4.3+ has become as simple as an orange.

Four calls, and you own the pixels coming from either of the iPhone's two cameras.

A bit of code (designers, don't read)


First we declare our HabrahabrView.
The HabrahabrView class

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@class CaptureSessionManager;

@interface HabrahabrView : UIView <AVCaptureVideoDataOutputSampleBufferDelegate> {
    CaptureSessionManager *captureFront;
    CaptureSessionManager *captureBack;
    UIImageView *face;
}

@property (nonatomic, retain) CaptureSessionManager *captureFront;
@property (nonatomic, retain) CaptureSessionManager *captureBack;

@end


In the class implementation we add the mandatory delegate method captureOutput:didOutputSampleBuffer:fromConnection:, which will receive frames from the camera about 20 times a second.
The captureOutput function

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row and the buffer dimensions
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width  = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);

    // Wrap the BGRA pixels in a bitmap context and pull a CGImage out of it
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef image = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *resultUIImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Keep the image alive until the main thread has consumed it
    [resultUIImage retain];
    [self performSelectorOnMainThread:@selector(cameraCaptureGotFrame:)
                           withObject:resultUIImage
                        waitUntilDone:NO];
}

- (void)cameraCaptureGotFrame:(UIImage *)image
{
    face.image = [self fixOrientation:image];
    // decrement ref count
    [image release];
}
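
The last line hands the frame to fixOrientation:, a helper the article never shows. As a rough guess (my assumption, not the author's code), it probably just re-wraps the CGImage with an orientation flag that rotates the landscape sensor frame and mirrors it for the front camera:

// A guess at the missing fixOrientation: helper. The sensor delivers frames
// in landscape, so we re-wrap the CGImage with an orientation that
// UIImageView honours when drawing. Adjust the constant for your device.
- (UIImage *)fixOrientation:(UIImage *)image
{
    return [UIImage imageWithCGImage:image.CGImage
                               scale:1.0
                         orientation:UIImageOrientationLeftMirrored];
}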


Now that the preliminary work is done, we add the view on which our video will be displayed.
A small UIImageView, 58 by 70

face = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 58, 70)];
[self addSubview:face];


So we connect the rear camera.
Back camera ready

[self setCaptureBack:[[[CaptureSessionManager alloc] init] autorelease]];
[[self captureBack] addVideoInput:2 PView:self];
[[self captureBack] addVideoPreviewLayer];
[[[self captureBack] captureSession] setSessionPreset:AVCaptureSessionPresetLow];


And now the front one.
Front camera ready

[self setCaptureFront:[[[CaptureSessionManager alloc] init] autorelease]];
[[self captureFront] addVideoInput:1 PView:self];
[[self captureFront] addVideoPreviewLayer];
[[[self captureFront] captureSession] setSessionPreset:AVCaptureSessionPresetLow];
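
The 1 and 2 passed to addVideoInput:PView: are magic numbers: as the manager code below shows, 1 selects the front camera and anything else the back one. A tiny optional tweak, not in the original, is to give them names:

// Hypothetical named constants for the camera selector (not in the article).
enum { kCameraFront = 1, kCameraBack = 2 };

[[self captureFront] addVideoInput:kCameraFront PView:self];
[[self captureBack]  addVideoInput:kCameraBack  PView:self];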


Let's gather all the session plumbing into a separate class.
Boring code, feel free to skip

#import "CaptureSessionManager.h"
#import "HabrahabrView.h"

@implementation CaptureSessionManager

@synthesize captureSession;
@synthesize previewLayer;

- (id)init
{
    if ((self = [super init])) {
        [self setCaptureSession:[[[AVCaptureSession alloc] init] autorelease]];
    }
    return self;
}

- (void)addVideoPreviewLayer
{
    [self setPreviewLayer:[[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] autorelease]];
    [[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
}

- (void)addVideoInput:(int)camType PView:(HabrahabrView *)habraview
{
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *videoDevice = nil;
    // camType 1 selects the front camera, anything else the back camera
    NSInteger side = (camType == 1) ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
    for (AVCaptureDevice *device in videoDevices) {
        if (device.position == side) {
            videoDevice = device;
            break;
        }
    }
    if (videoDevice == nil) {
        videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    }
    if (videoDevice) {
        NSError *error = nil;
        AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
        if (!error) {
            if ([[self captureSession] canAddInput:videoIn])
                [[self captureSession] addInput:videoIn];
            else
                NSLog(@"Couldn't add video input");

            // Set up the output
            AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];

            // create a queue to run the capture on
            dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", NULL);

            // set up our delegate
            [videoOutput setSampleBufferDelegate:habraview queue:captureQueue];

            // configure the pixel format
            videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                         [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
                                         (id)kCVPixelBufferPixelFormatTypeKey,
                                         nil];
            if ([[self captureSession] canAddOutput:videoOutput])
                [[self captureSession] addOutput:videoOutput];
            else
                NSLog(@"Couldn't add video output");

            // the session retains the output, so drop our reference
            [videoOutput release];
        } else
            NSLog(@"Couldn't create video input");
    } else
        NSLog(@"Couldn't create video capture device");
}

- (void)dealloc
{
    [[self captureSession] stopRunning];
    [previewLayer release], previewLayer = nil;
    [captureSession release], captureSession = nil;
    [super dealloc];
}

@end
Done.
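
Only CaptureSessionManager.m is shown; the matching header is not in the article. Reconstructed from the calls above, it would presumably look something like this (a sketch, not the author's file):

// CaptureSessionManager.h -- a guess reconstructed from the implementation above.
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>

@class HabrahabrView;

@interface CaptureSessionManager : NSObject {
    AVCaptureSession *captureSession;
    AVCaptureVideoPreviewLayer *previewLayer;
}

@property (nonatomic, retain) AVCaptureSession *captureSession;
@property (nonatomic, retain) AVCaptureVideoPreviewLayer *previewLayer;

- (void)addVideoInput:(int)camType PView:(HabrahabrView *)habraview;
- (void)addVideoPreviewLayer;

@end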

Working with the cameras


Turn on the front camera
  [[captureFront captureSession] startRunning]; 


Everything works. As you can see from the capture code, you can request three different resolutions. For speed I chose the smallest one, AVCaptureSessionPresetLow (approximately 144 by 192 pixels). For our purposes this is enough, and the blurry low-res picture doubles as a free filter.
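
If you do want a bigger picture, a safe pattern (my sketch, not from the article) is to ask the session whether it supports the preset before setting it:

// Try a larger preset and fall back to Low if the device can't do it.
// AVCaptureSessionPresetMedium is roughly 480x360 on the hardware of that era.
NSString *preset = AVCaptureSessionPresetMedium;
if ([[captureFront captureSession] canSetSessionPreset:preset]) {
    [[captureFront captureSession] setSessionPreset:preset];
} else {
    [[captureFront captureSession] setSessionPreset:AVCaptureSessionPresetLow];
}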

How do we now switch to the rear camera? Stop the front one and start the back one.

[[captureFront captureSession] stopRunning];
[[captureBack captureSession] startRunning];


Naturally, I immediately wanted to run both at once. Alas, it is impossible. I tried switching cameras quickly instead, but there is a delay of about a third of a second, which causes nothing but irritation.
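
One mitigation worth noting (my assumption, not something the article does): startRunning and stopRunning block until the hardware is reconfigured, so performing the switch off the main thread does not remove the roughly 1/3-second delay, but it does keep the UI from freezing while the cameras swap:

// Switch cameras on a background queue so the blocking start/stop calls
// don't stall the main thread. The hardware delay itself remains.
- (void)switchToBackCamera
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [[captureFront captureSession] stopRunning];
        [[captureBack captureSession] startRunning];
    });
}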

The dream of overlaying the two live images had to be abandoned.



A fruit toy


But the dream of a silly little application survived. I decided to quickly put together a fruit toy: the image from the front camera is real, the oranges are virtual.

Oranges fall down the screen one after another. You have to catch them with your mouth and eat them. Whoever eats 7 oranges the fastest gets a prize.

All that remains is to write the mouth-detection routine.
Detecting a mouth is a primitive job: even I, a person who gets depressed at the words OOP and OpenCV, wrote this function quickly, using neither OOP nor OpenCV.
Hint: you know the position of the orange, so start from there.
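
The article keeps the actual detector secret, so here is purely my guess at what "start from the orange's position" could mean: sample the BGRA pixels of the current frame around the orange and treat a mostly dark patch as an open mouth. This is not the author's code.

// Naive "mouth detector" sketch: look at a 9x9 patch of the BGRA frame
// centred on the falling orange and count dark pixels. The threshold and
// patch size are arbitrary; tune them on a real device.
- (BOOL)mouthIsOpenAtPoint:(CGPoint)orangeCenter
                    pixels:(unsigned char *)pixel
               bytesPerRow:(size_t)bytesPerRow
                     width:(size_t)width
                    height:(size_t)height
{
    int dark = 0, total = 0;
    for (int dy = -4; dy <= 4; dy++) {
        for (int dx = -4; dx <= 4; dx++) {
            int x = (int)orangeCenter.x + dx;
            int y = (int)orangeCenter.y + dy;
            if (x < 0 || y < 0 || x >= (int)width || y >= (int)height) continue;
            unsigned char *p = pixel + y * bytesPerRow + x * 4;   // B, G, R, A
            int brightness = (p[0] + p[1] + p[2]) / 3;
            if (brightness < 60) dark++;
            total++;
        }
    }
    return total > 0 && dark > total / 2;   // mostly dark patch => open mouth
}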

"Aha," the attentive reader exclaims, "the detection routine was tuned solely on the author, under the lighting of his own workplace." You are right, but judging by the photos of users, the application works successfully from China to Brazil, from offices to apartments.

Of course, there was a great temptation to sneak a photo of the user the moment he eats the 7th orange. I always give in to temptations, because they may never come again. Calm down, lovers of morality: I added an "Allow sending my photo" switch. It is off by default.

I upload a small 58 by 70 picture of the player and secretly admire the results. Some very funny shots come in; sometimes I laugh for a good three minutes. There are some pretty characters, too.
For God's sake, don't spread the word about the photos; keep the corporate secret.
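
For completeness, here is a minimal sketch of what such an opt-in upload could look like; the article shows none of this code, and the preference key and URL are invented:

// Hypothetical opt-in upload. boolForKey: returns NO when the key was never
// set, which matches "off by default".
- (void)sendPlayerPhotoIfAllowed:(UIImage *)photo
{
    if (![[NSUserDefaults standardUserDefaults] boolForKey:@"allowPhotoUpload"])
        return;

    NSData *jpeg = UIImageJPEGRepresentation(photo, 0.7);
    NSMutableURLRequest *request =
        [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/upload"]];
    [request setHTTPMethod:@"POST"];
    [request setValue:@"image/jpeg" forHTTPHeaderField:@"Content-Type"];
    [request setHTTPBody:jpeg];

    // Fire and forget on a background queue so the game never stutters.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        [NSURLConnection sendSynchronousRequest:request returningResponse:NULL error:NULL];
    });
}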

I almost forgot: my record is 12 seconds. I'm getting ready for the Olympics.

See you in London, friends.

Source: https://habr.com/ru/post/148609/

