
Face detection in the iOS 5 SDK

The iOS 5 SDK has been available for a while now, but every iOS developer knows it is still too early to use the new APIs in shipping applications, since clients care about compatibility with older versions of the OS.
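(A common way to stay compatible with older iOS versions while still adopting a new class is a runtime availability check; the snippet below is my own illustration, not from the original article.)

```objectivec
// Sketch: weak-link CoreImage and probe for the class at runtime,
// so the same binary still runs on iOS 4, where CIDetector does not exist.
if (NSClassFromString(@"CIDetector") != nil)
{
    // iOS 5+: safe to use the new face-detection API
}
else
{
    // older iOS: hide the feature or fall back to another approach
}
```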

Still, I found a couple of goodies in the new SDK. The first to catch my eye was the UIViewController method viewWillUnload, which I needed so badly a few months ago.
The full list of iOS 5 innovations is here.
Among the additional frameworks, CoreImage — and in particular CIDetector.h — is of interest.

The CIDetector class was created to find and identify faces in an image, which is what we will now briefly try to do.

We use Xcode 4.2 with iOS 5 SDK.

Create a project

I highly recommend turning off “Use Automatic Reference Counting”.

Add the CoreImage framework to the project:

[Screenshot: adding the CoreImage framework to the project]

Create a UIViewController

#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h> // needed for CIDetector

@interface RootViewController : UIViewController <UIImagePickerControllerDelegate, UINavigationControllerDelegate>
{
    IBOutlet UIImageView *imageView;
    IBOutlet UILabel *label;
    CIDetector *detector;
}

@end


Load a picture using UIImagePickerController

- (IBAction)onImport:(id)sender
{
    UIImagePickerController *vc = [[UIImagePickerController alloc] init];
    vc.delegate = self;
    vc.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
    [self presentModalViewController:vc animated:YES];
    [vc release];
}
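The original article does not show the delegate callback that actually receives the picked image; a minimal sketch (assuming the imageView outlet declared above) might look like this:

```objectivec
// Hedged sketch of the picker delegate methods, not from the original article.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // Show the chosen photo so onRecognize can read imageView.image later.
    imageView.image = [info objectForKey:UIImagePickerControllerOriginalImage];
    [self dismissModalViewControllerAnimated:YES];
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
{
    [self dismissModalViewControllerAnimated:YES];
}
```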


I did not upgrade my device to iOS 5, so everything will be done in the simulator, importing from the photo library rather than the camera.

Detect the faces in the picture

- (IBAction)onRecognize:(id)sender
{
    detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                  context:nil
                                  options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                      forKey:CIDetectorAccuracy]];

    NSDate *date = [NSDate date];

    NSArray *features = [detector featuresInImage:
        [[[CIImage alloc] initWithCGImage:imageView.image.CGImage] autorelease]];

    NSTimeInterval ti = fabs([date timeIntervalSinceNow]);

    label.text = [NSString stringWithFormat:@"Time: %0.3f\nFaces: %i", ti, (int)[features count]];

    UIGraphicsBeginImageContext(imageView.image.size);

    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextDrawImage(ctx, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);

    for (CIFeature *feature in features)
    {
        CGRect r = feature.bounds;

        // CGContextSetStrokeColorWithColor avoids the color-space pitfalls
        // of passing raw components from CGColorGetComponents.
        CGContextSetStrokeColorWithColor(ctx, [UIColor yellowColor].CGColor);
        CGContextSetLineWidth(ctx, 1.0f);

        CGContextBeginPath(ctx);
        CGContextAddRect(ctx, r);
        CGContextClosePath(ctx);
        CGContextStrokePath(ctx);
    }

    imageView.image = [UIImage imageWithCGImage:UIGraphicsGetImageFromCurrentImageContext().CGImage
                                          scale:1.0f
                                    orientation:UIImageOrientationDownMirrored];
    UIGraphicsEndImageContext();
}
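The DownMirrored orientation in the last step compensates for the mismatch between the CIDetector coordinate system (origin at the bottom-left) and UIKit's (origin at the top-left). An alternative sketch, assuming the same `features` array as above, is to flip the context once before drawing and then keep the default orientation:

```objectivec
// Alternative to the UIImageOrientationDownMirrored trick.
UIGraphicsBeginImageContext(imageView.image.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Flip into Core Graphics coordinates (origin bottom-left):
CGContextTranslateCTM(ctx, 0, imageView.image.size.height);
CGContextScaleCTM(ctx, 1.0f, -1.0f);

// Now the image draws upright and the CIFeature rects need no mirroring:
CGContextDrawImage(ctx, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);
for (CIFeature *feature in features)
{
    CGContextStrokeRect(ctx, feature.bounds);
}

imageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```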


Result

According to the documentation, an instance of the CIFeature class only exposes the face's bounding rect and its feature type, but one can hope that once Apple fully absorbs what OpenCV can do, the class will be extended (IMHO).
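Strictly speaking, for CIDetectorTypeFace the objects returned in iOS 5 are instances of the CIFeature subclass CIFaceFeature, which also exposes approximate eye and mouth positions:

```objectivec
// Inspecting the extra data on CIFaceFeature (iOS 5), using the
// same `features` array returned by featuresInImage: above.
for (CIFaceFeature *face in features)
{
    if (face.hasLeftEyePosition)
        NSLog(@"Left eye at %@", NSStringFromCGPoint(face.leftEyePosition));
    if (face.hasRightEyePosition)
        NSLog(@"Right eye at %@", NSStringFromCGPoint(face.rightEyePosition));
    if (face.hasMouthPosition)
        NSLog(@"Mouth at %@", NSStringFromCGPoint(face.mouthPosition));
}
```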





EDIT: fixed an error in the onImport method.

Source: https://habr.com/ru/post/131121/

