
Simple Image Viewer for iPhone

Unfortunately, the iPhone SDK does not provide functionality like the built-in Photos application. I do not mean the image picker, but the viewer itself, where you can pan the image with your finger and zoom. Here I will try to explain how this functionality can be achieved and where to dig further to extend it ...


The first class that catches your eye is UIImageView, which can display an image loaded from a file. However, it has no built-in handling of user input, and, most importantly, it has a limit on the size of the image it can display. According to the documentation, this is only 1024x1024 pixels:

Images greater than 1024 x 1024 pixels in size are not supported.

Sadly ... However, we can create a UIImage from a CGImage, which has no such restriction, and a CGImage in turn can be created from a part of another image. So, let's create a simple Cocoa Touch application; the project wizard generates the AppDelegate class with the code that creates the application window. We need to modify it to display a UIImageView. The UIImage it displays is created with the class method:

+ (UIImage *)imageWithContentsOfFile:(NSString *)path;


Here path contains the path to the file that we need to display. I will not go into the details of the file layout of iPhone applications; it is easier to refer to the Application Sandbox section of the iPhone Programming Guide. To keep things simple, assume that the desired file is in the Documents folder. Below is how to build the path to that file; we add this code to the applicationDidFinishLaunching: method of the AppDelegate class:

// constructing path to the image
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *fileName = @"photo.jpg";
NSString *path = [documentsDirectory stringByAppendingPathComponent:fileName];


Next, we create a UIImage from that path, wrap it in a UIImageView, and add it as a subview of our window, remembering to release the view afterwards:

UIImage *image = [UIImage imageWithContentsOfFile:path];
UIImageView *anImageView = [[UIImageView alloc] initWithImage:image];
[window addSubview:anImageView];
[anImageView release];


Build and run. The image is on the screen; however, it does not respond to user input, and, as we know from the documentation, we cannot display an image larger than 1024x1024 pixels.

The way out is to create our own class, derived from UIImageView, that will handle user input and can be initialized from a CGImage. Add a new class to the project; this gives us two files, MyImageView.h and MyImageView.m. The first one looks like this:

#import <UIKit/UIKit.h>

@interface MyImageView : UIImageView {

}

@end


Second:

#import "windowImageView.h"

@implementation windowImageView

- (void) dealloc {
[super dealloc];
}

end


The name of the CGImage type we are going to use makes it clear that we will need the CoreGraphics framework. In Xcode, in the project file list, choose Add Framework ... and select CoreGraphics.framework.

Let's go ... First of all, we need a new method that initializes MyImageView from a CGImage object. In the interface block of MyImageView.h, before @end, we declare:

- (id)initWithCGImage:(CGImageRef)cg_image;
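
Besides this method, the implementation below relies on three instance variables: cg_raw_image, current_x_position and current_y_position (they are described after the listing). With those declared inside the curly brackets, MyImageView.h might look roughly like this; the int type for the position variables is my assumption, since the original listing leaves the braces empty:

#import <UIKit/UIKit.h>

@interface MyImageView : UIImageView {
    CGImageRef cg_raw_image;   // full-size source image
    int current_x_position;    // origin of the visible area within the image
    int current_y_position;
}

- (id)initWithCGImage:(CGImageRef)cg_image;

@end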


Let me remind you that instance variables are declared inside the curly brackets, and methods after them. Having declared this method, we now have to implement it. This is done in the @implementation block of MyImageView.m:

- (id)initWithCGImage:(CGImageRef)cg_image {
    cg_raw_image = CGImageCreateCopy(cg_image);

    // position view in the center of an image
    current_x_position = CGImageGetWidth(cg_image) / 2 - 320 / 2;
    current_y_position = CGImageGetHeight(cg_image) / 2 - 480 / 2;
    // getting image data for the area being displayed
    CGRect subImageRect = CGRectMake(current_x_position, current_y_position, 320, 480);
    CGImageRef cg_subimage = CGImageCreateWithImageInRect(cg_image, subImageRect);
    UIImage *imageToDisplay = [UIImage imageWithCGImage:cg_subimage];
    CGImageRelease(cg_subimage);
    if (imageToDisplay != nil)
    {
        if (self = [super initWithImage:imageToDisplay])
        {
            // initialization code here
        }
    }
    return self;
}


cg_raw_image, current_x_position and current_y_position are instance variables that we will use later when handling user input. The essence of this method is that we create a UIImage from a CGImage which is in turn cut out of the original image and matches the size of the phone screen, or the size of your view. Of course, the hard-coded numbers in this code should be replaced with calls that return the dimensions of the view in which the image is displayed. Note the mandatory call to CGImageRelease, which decrements the object's reference count and eventually frees the memory it occupies. The same must be done for cg_raw_image, in the dealloc method of the MyImageView class.
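
A minimal sketch of what that dealloc might look like, assuming cg_raw_image is the only Core Graphics object the view owns:

- (void)dealloc {
    // release the full-size CGImage copied in initWithCGImage:
    if (cg_raw_image != NULL)
        CGImageRelease(cg_raw_image);
    [super dealloc];
}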

We need to modify the AppDelegate to call the new method. applicationDidFinishLaunching: will now look like this:

- (void)applicationDidFinishLaunching:(UIApplication *)application {
    // no status bar in main window
    application.statusBarHidden = YES;
    // creating window
    self.window = [[[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]] autorelease];
    // constructing path to the image
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *fileName = @"photo.jpg";
    NSString *path = [documentsDirectory stringByAppendingPathComponent:fileName];
    NSURL *url = [NSURL fileURLWithPath:path];
    // creating CG image
    CGDataProviderRef provider = CGDataProviderCreateWithURL((CFURLRef)url);
    CGImageRef cg_image = CGImageCreateWithJPEGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
    if (cg_image != NULL)
    {
        MyImageView *anImageView = [[MyImageView alloc] initWithCGImage:cg_image];
        anImageView.userInteractionEnabled = YES;
        [window addSubview:anImageView];
        [anImageView release];
    }
    CGDataProviderRelease(provider);
    CGImageRelease(cg_image);
    // Override point for customization after app launch
    [window makeKeyAndVisible];
}


Here, instead of creating a UIImage directly, we create a CGImage through a JPEG data provider and use it to initialize MyImageView. After launching the program, we should get the same result as with a plain UIImage, but we have got rid of the limit on the number of pixels in the file. Pay attention to the line:

anImageView.userInteractionEnabled = YES;


This allows our new class to respond to touches. Let's go back to our class and add handling of user input. In the iPhone SDK this is done by overriding three methods:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;


At the moment we are only interested in touchesMoved:, because we want to drag the image with a finger. Add the touchesMoved: method to MyImageView:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    int x_dist = [touch previousLocationInView:self].x - [touch locationInView:self].x;
    int y_dist = [touch previousLocationInView:self].y - [touch locationInView:self].y;
    current_x_position = current_x_position + x_dist;
    current_y_position = current_y_position + y_dist;
    CGRect subImageRect = CGRectMake(current_x_position, current_y_position, 320, 480);

    CGImageRef cg_subimage = CGImageCreateWithImageInRect(cg_raw_image, subImageRect);
    UIImage *imageToDisplay = [UIImage imageWithCGImage:cg_subimage];
    CGImageRelease(cg_subimage);
    self.image = imageToDisplay;
}

Here we get a touch object that contains information about the user's touch. It lets us track the finger movement and calculate the distance, in pixels, by which the image should be shifted. On each event we create a new image from the corresponding part of the original one and display it by assigning it to the image property of MyImageView; the view redraws itself automatically.

Everything described above demonstrates the principles that allow you to achieve functionality similar to the Photos application. A lot still needs to be added here, including checking that the visible area does not go beyond the image bounds, pinch zooming with two fingers, and much more ...
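
To illustrate the first point, here is a minimal sketch of how the visible rectangle could be clamped to the image bounds before cropping; the helper name clampVisibleRect and the hard-coded 320x480 view size are assumptions for illustration, not part of the original code:

// keep the visible 320x480 window inside the source image (hypothetical helper)
- (void)clampVisibleRect {
    int max_x = (int)CGImageGetWidth(cg_raw_image) - 320;
    int max_y = (int)CGImageGetHeight(cg_raw_image) - 480;
    if (current_x_position < 0) current_x_position = 0;
    if (current_y_position < 0) current_y_position = 0;
    if (current_x_position > max_x) current_x_position = max_x;
    if (current_y_position > max_y) current_y_position = max_y;
}

Calling it in touchesMoved:, right after current_x_position and current_y_position are updated, would keep CGImageCreateWithImageInRect from being asked for pixels outside the source image.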

In addition, similar functionality can be implemented on top of UIView, from which UIImageView inherits. This gives the application more flexibility, but you will have to implement image drawing yourself in the drawRect: method.
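
A rough sketch of that approach, reusing the same cg_raw_image and position variables as above; the class name MyImageCanvas is hypothetical, this is only an illustration of where drawRect: fits in:

@interface MyImageCanvas : UIView {
    CGImageRef cg_raw_image;
    int current_x_position;
    int current_y_position;
}
@end

@implementation MyImageCanvas

- (void)drawRect:(CGRect)rect {
    // crop the visible part of the source image
    CGRect subImageRect = CGRectMake(current_x_position, current_y_position,
                                     self.bounds.size.width, self.bounds.size.height);
    CGImageRef cg_subimage = CGImageCreateWithImageInRect(cg_raw_image, subImageRect);
    // CGImage coordinates are flipped relative to UIKit, so flip the context before drawing
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, self.bounds, cg_subimage);
    CGImageRelease(cg_subimage);
}

@end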

Source: https://habr.com/ru/post/26771/

