Thursday, September 20, 2012

Integrating Facebook in iOS 6



Apple has unveiled iOS 6, and it has a big impact on Facebook users and, obviously, on developers. I tried my hand at integrating it into my app and am sharing my experience here; I think it will help a lot of people.


1. In order to use the Facebook integration we need to add the Social framework to our project. Select your project in the project navigator and then the project target. Go to the Build Phases tab and click the + button inside the Link Binary With Libraries section; in the window that appears, navigate to Social.framework and click Add.

2.  Open the controller file and add the import:

#import <Social/Social.h>
These two steps are all the project setup needed to integrate Facebook. Beyond that, the user must initially have a Facebook account signed in under the device's Settings app.
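Since the compose sheet simply won't be available without a configured account, it is worth guarding for that case up front. A minimal sketch (the alert wording is my own):

if (![SLComposeViewController isAvailableForServiceType:SLServiceTypeFacebook]) {
    // No Facebook account is configured in Settings; ask the user to add one.
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Facebook Unavailable"
                                                    message:@"Please sign in to Facebook in the Settings app first."
                                                   delegate:nil
                                          cancelButtonTitle:@"Ok"
                                          otherButtonTitles:nil];
    [alert show];
    [alert release];
}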

After that we need to write the code that posts to the user's wall.

1.     POST TEXT:


if ([SLComposeViewController isAvailableForServiceType:SLServiceTypeFacebook]) {
        // composeViewControllerForServiceType: returns a ready-made (autoreleased)
        // instance, so there is no need to alloc/init or release one ourselves.
        SLComposeViewController *faceBookSheet = [SLComposeViewController composeViewControllerForServiceType:SLServiceTypeFacebook];
        // Set the pre-filled text of the post
        [faceBookSheet setInitialText:@"Initial Text from iOS 6 facebook Demo"];
        // Specify a block to be called when the user is finished. This block is not
        // guaranteed to be called on any particular thread. It is cleared after being called.
        [faceBookSheet setCompletionHandler:[self facebookCompletionHandler]];
        // Present the FB sheet (presentModalViewController: is deprecated in iOS 6)
        [self presentViewController:faceBookSheet animated:YES completion:nil];
    }

A helper method is required that returns the handler to run when the user is finished:


 [faceBookSheet setCompletionHandler:[self facebookCompletionHandler]];

Inside facebookCompletionHandler:

- (SLComposeViewControllerCompletionHandler)facebookCompletionHandler {
    SLComposeViewControllerCompletionHandler resultFB = ^(SLComposeViewControllerResult result) {
        NSString *output = @"";
        switch (result) {
            case SLComposeViewControllerResultCancelled:
                output = @"Action Cancelled";
                break;
            case SLComposeViewControllerResultDone:
                output = @"Post Successful";
                break;
            default:
                break;
        }

        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Facebook Completion Message"
                                                        message:output
                                                       delegate:nil
                                              cancelButtonTitle:@"Ok"
                                              otherButtonTitles:nil];
        [alert show];
        [alert release];
    };
    return resultFB;
}

2.     POST IMAGE:

if ([SLComposeViewController isAvailableForServiceType:SLServiceTypeFacebook]) {
        SLComposeViewController *faceBookSheet = [SLComposeViewController composeViewControllerForServiceType:SLServiceTypeFacebook];

        // Attach an image to the post
        [faceBookSheet addImage:[UIImage imageNamed:@"yourImage.png"]];

        [faceBookSheet setCompletionHandler:[self facebookCompletionHandler]];

        [self presentViewController:faceBookSheet animated:YES completion:nil];
    }


3.     POST URL:


if ([SLComposeViewController isAvailableForServiceType:SLServiceTypeFacebook]) {
        SLComposeViewController *faceBookSheet = [SLComposeViewController composeViewControllerForServiceType:SLServiceTypeFacebook];

        // Attach a link to the post
        NSURL *url = [NSURL URLWithString:@"http://www.google.com"];
        [faceBookSheet addURL:url];

        [faceBookSheet setCompletionHandler:[self facebookCompletionHandler]];

        [self presentViewController:faceBookSheet animated:YES completion:nil];
    }


Tuesday, August 7, 2012

Shadow effects using custom CALayer shadowPaths

I recently had to improve the performance of a few views that utilized CALayer-based shadows on rounded-rect UIView objects. On this particular iPad application, when the device was rotated, the views rotated quite a lot slower than we would have hoped. It wasn’t a show-stopper, but the jerky rotation animation made it look cheap and unpolished. The easiest way to have our cake and eat it too was to set a custom CGPath on the layer’s shadowPath property. This tells Core Animation the exact shape of the shadow up front, instead of making it derive the shape from the layer’s contents, reducing the amount of work the rendering engine needs to perform.


// Add background tile
UIImage *bgImage = [UIImage imageNamed:@"embedded_bg.png"];
self.view.backgroundColor = [UIColor colorWithPatternImage:bgImage];
// Add the reference view
UIImage *image = [UIImage imageNamed:@"dccp.jpeg"];
UIImageView *imgView = [[UIImageView alloc] initWithImage:image];
[self.view addSubview:imgView];
imgView.center = self.view.center;
imgView.layer.shadowColor = [UIColor blackColor].CGColor;
imgView.layer.shadowOpacity = 0.7f;
imgView.layer.shadowOffset = CGSizeMake(10.0f, 10.0f);
imgView.layer.shadowRadius = 5.0f;
imgView.layer.masksToBounds = NO;
UIBezierPath *path = [UIBezierPath bezierPathWithRect:imgView.bounds];
imgView.layer.shadowPath = path.CGPath;
[imgView release];



The resulting image, as you can see above, has a shadow as you’d expect. But since we’ve declared the shape the path will have, the iPad can drastically improve its rendering performance.
Through that process, however, I decided to see what sort of effects I could pull off by passing in a path other than the default rectangular bounds of the layer. Since you can create any sort of path you want, I considered the different effects I could get away with by making non-rectangular paths and using them as shadows.

Trapezoidal CGPath


Trapezoidal shadow providing the illusion of depth
By carefully drawing a trapezoidal shape slightly beneath the view, you can give the illusion of depth.

CGSize size = imgView.bounds.size;
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(size.width * 0.33f, size.height * 0.66f)];
[path addLineToPoint:CGPointMake(size.width * 0.66f, size.height * 0.66f)];
[path addLineToPoint:CGPointMake(size.width * 1.15f, size.height * 1.15f)];
[path addLineToPoint:CGPointMake(size.width * -0.15f, size.height * 1.15f)];
[path closePath];
imgView.layer.shadowPath = path.CGPath;

Elliptical CGPath

 

Elliptical shadows create the illusion of a top-down light source
Just like the trapezoid, there are other effects you can achieve by playing with simple shapes used as shadows.

CGSize size = imgView.bounds.size;
// Place a thin oval just below the view to suggest a top-down light source
CGRect ovalRect = CGRectMake(0.0f, size.height + 5, size.width - 10, 15);
UIBezierPath *path = [UIBezierPath bezierPathWithOvalInRect:ovalRect];
imgView.layer.shadowPath = path.CGPath;


Paper-curl effect

 

Paper curl example using a curved path
By using a control point on a bezier curve, you can make the bottom side of the shadow curve inward, making it appear like the view is printed on paper that has been curled inward.

CGSize size = imgView.bounds.size;
CGFloat curlFactor = 15.0f;
CGFloat shadowDepth = 5.0f;
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(0.0f, 0.0f)];
[path addLineToPoint:CGPointMake(size.width, 0.0f)];
[path addLineToPoint:CGPointMake(size.width, size.height + shadowDepth)];
// Curve the bottom edge inward to suggest curled paper
[path addCurveToPoint:CGPointMake(0.0f, size.height + shadowDepth)
        controlPoint1:CGPointMake(size.width - curlFactor, size.height + shadowDepth - curlFactor)
        controlPoint2:CGPointMake(curlFactor, size.height + shadowDepth - curlFactor)];
[path closePath];
imgView.layer.shadowPath = path.CGPath;

More possibilities than can be covered

There are plenty of other possibilities, more than can be covered here. Creating CGPathRef objects, either with UIBezierPath or with Quartz 2D drawing functions, makes it easy to compose shadows step by step. You can also use a CGAffineTransform to stretch, scale, or rotate a path as needed. Once you realize what the possibilities are, you can add an extra degree of polish to your application with very little effort.
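For example, a rectangular path can be skewed and shifted before it is assigned, so the shadow appears to lean away from the view. A quick sketch, assuming the imgView from the earlier snippets (the skew and offset values here are arbitrary):

UIBezierPath *path = [UIBezierPath bezierPathWithRect:imgView.bounds];
// Shear the path horizontally, then push it sideways and down
CGAffineTransform skew = CGAffineTransformMake(1.0f, 0.0f, -0.5f, 1.0f, 0.0f, 0.0f);
CGAffineTransform shift = CGAffineTransformMakeTranslation(imgView.bounds.size.width * 0.25f, 10.0f);
[path applyTransform:CGAffineTransformConcat(skew, shift)];
imgView.layer.shadowPath = path.CGPath;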
If you want to play with the source used to create these examples, download ShadowTest.zip

Sunday, July 1, 2012

Face Detection using CIDetector (Note: works with iOS 5.0 and later)

The face detection API is surprisingly simple to use. It really boils down to two classes: CIDetector and CIFaceFeature. CIDetector is responsible for performing the analysis of an image and returns a collection of CIFaceFeature objects describing the face(s) found in the image. You begin by creating a new instance of CIDetector using its detectorOfType:context:options class method.
// The options dictionary (described below) selects the accuracy level
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                    forKey:CIDetectorAccuracy];
CIDetector *detector =
    [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
CIDetector can currently only be configured to perform face detection, so you’ll always pass the string constant CIDetectorTypeFace for the type argument. The context and options arguments are optional, but you will typically provide an options dictionary describing the accuracy level to use. This is configured with the key CIDetectorAccuracy and a value of either CIDetectorAccuracyLow or CIDetectorAccuracyHigh. The high accuracy algorithm can produce far more accurate results, but takes significantly longer to perform the analysis. Depending on what you need to accomplish, you may find the low accuracy setting produces acceptable results.

 

 

Analyzing the Image

With a properly configured detector in hand you’re ready to analyze an image. You call the detector’s featuresInImage: method, passing it an image to analyze. The Core Image framework doesn’t know anything about UIImage, so you can’t directly pass it an image of this type; however, UIKit provides a category on CIImage making it easy to create an instance of CIImage from a UIImage.
UIImage *uiImage = [UIImage imageNamed:@"image_name"];
CIImage *ciImage = [[CIImage alloc] initWithImage:uiImage];
NSArray *features = [detector featuresInImage:ciImage];
The featuresInImage: method will return a collection of CIFaceFeature objects describing the features of the detected faces. Specifically, each instance defines a face rectangle and points for the left eye, right eye, and mouth. It only defines the center point of each feature, so you’d have to perform some additional calculations if you need to know a feature’s shape, angle, or relative location.
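To make that concrete, here is a short sketch of how the returned features might be inspected (keep in mind that Core Image coordinates have their origin in the bottom-left corner, so you may need to flip them before drawing in UIKit):

for (CIFaceFeature *face in features) {
    // bounds is the detected face rectangle in the image's coordinate space
    NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));
    if (face.hasLeftEyePosition)
        NSLog(@"Left eye: %@", NSStringFromCGPoint(face.leftEyePosition));
    if (face.hasRightEyePosition)
        NSLog(@"Right eye: %@", NSStringFromCGPoint(face.rightEyePosition));
    if (face.hasMouthPosition)
        NSLog(@"Mouth: %@", NSStringFromCGPoint(face.mouthPosition));
}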

 

 

Visualizing the Results

The following images show examples of the face detection API in action. The images illustrate the differences between the low and high accuracy settings, along with the approximate times it took to run the detection. The location of the detected features is not significantly different between the two images, but you’ll notice the high accuracy setting took more than 10x longer to compute on an iPhone 4. It will likely require a fair amount of testing with a representative set of images to determine the appropriate accuracy setting for your app.
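If you want to measure this on your own images, a trivial timing harness (my own sketch, reusing the detector and ciImage from above) is enough:

CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
NSArray *features = [detector featuresInImage:ciImage];
CFAbsoluteTime elapsed = CFAbsoluteTimeGetCurrent() - start;
NSLog(@"Detected %lu face(s) in %.3f seconds", (unsigned long)[features count], elapsed);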



I have put together a sample app containing images of several iconic faces. Flip through the images and run the analysis to see the face detection in action. You can run the sample on the simulator, but I’d recommend running it on your device so you can get a realistic sense for the performance. Enjoy!

Download iOS 5 Sample App: Sample Code