Sunday, July 1, 2012

Face Detection using CIDetector (Note: Works with iOS 5.0 and later)

The face detection API is surprisingly simple to use. It really boils down to two classes: CIDetector and CIFaceFeature. CIDetector is responsible for performing the analysis of an image and returns a collection of CIFaceFeature objects describing the face(s) found in the image. You begin by creating a new instance of CIDetector using its detectorOfType:context:options: class method.
CIDetector *detector =
    [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
CIDetector can currently only be configured to perform face detection, so you'll always pass the string constant CIDetectorTypeFace for the type argument. The context and options arguments are optional, but you will typically provide an options dictionary describing the accuracy level to use. This is configured with the key CIDetectorAccuracy and a value of either CIDetectorAccuracyLow or CIDetectorAccuracyHigh. The high accuracy algorithm can produce far more accurate results, but it takes significantly longer to perform the analysis. Depending on what you need to accomplish, you may find the low accuracy setting produces acceptable results.
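For example, an options dictionary requesting the high accuracy algorithm could be built like this (a minimal sketch; swap in CIDetectorAccuracyLow if speed matters more than precision):

// Ask for the more precise (and slower) face detection algorithm.
NSDictionary *options =
    [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                forKey:CIDetectorAccuracy];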

Analyzing the Image

With a properly configured detector in hand, you're ready to analyze an image. You call the detector's featuresInImage: method, passing it the image to analyze. The Core Image framework doesn't know anything about UIImage, so you can't pass it an image of that type directly. However, UIKit provides a category on CIImage that makes it easy to create a CIImage instance from a UIImage.
UIImage *uiImage = [UIImage imageNamed:@"image_name"];
CIImage *ciImage = [[CIImage alloc] initWithImage:uiImage];
NSArray *features = [detector featuresInImage:ciImage];
The featuresInImage: method returns a collection of CIFaceFeature objects describing the features of the detected faces. Specifically, each instance defines a face rectangle and points for the left eye, right eye, and mouth. It only defines the center point of each feature, so you'd have to perform some additional calculations if you need to know a feature's shape, angle, or relative location.
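To give a rough idea of what's available, here's a minimal sketch that loops over the returned features and logs what was found. Keep in mind that Core Image reports coordinates with the origin at the lower-left corner of the image, so you'd need to flip them before drawing in UIKit.

for (CIFaceFeature *face in features) {
    // Bounding rectangle of the detected face, in Core Image coordinates.
    NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));

    // Each feature point is optional, so check the has* flag before reading it.
    if (face.hasLeftEyePosition)
        NSLog(@"Left eye: %@", NSStringFromCGPoint(face.leftEyePosition));
    if (face.hasRightEyePosition)
        NSLog(@"Right eye: %@", NSStringFromCGPoint(face.rightEyePosition));
    if (face.hasMouthPosition)
        NSLog(@"Mouth: %@", NSStringFromCGPoint(face.mouthPosition));
}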

Visualizing the Results

The following images show examples of the face detection API in action. The images illustrate the differences between the low and high accuracy settings, along with the approximate time it took to run the detection. The location of the detected features is not significantly different between the two images, but you'll notice the high accuracy setting took more than 10x longer to compute on an iPhone 4. It will likely take a fair amount of testing against a representative set of images to determine the appropriate accuracy setting for your app.



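To gather timings like these for your own images, one simple approach is to wrap the featuresInImage: call with a timer. Here's a minimal sketch using CACurrentMediaTime from QuartzCore; any monotonic clock would do.

CFTimeInterval start = CACurrentMediaTime();
NSArray *features = [detector featuresInImage:ciImage];
CFTimeInterval elapsed = CACurrentMediaTime() - start;

// Log how many faces were found and how long the analysis took.
NSLog(@"Detected %lu face(s) in %.2f seconds",
      (unsigned long)[features count], elapsed);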
I have put together a sample app containing images of several iconic faces. Flip through the images and run the analysis to see the face detection in action. You can run the sample on the simulator, but I’d recommend running it on your device so you can get a realistic sense for the performance. Enjoy!

Download iOS 5 Sample App: Sample Code
