
Reducing application size: proven methods

Introduction


One important aspect of mobile application development is size optimization. We all know from personal experience that the less an application weighs, the more readily it gets downloaded, especially when there is no Wi-Fi access point at hand and the speed and/or data allowance of the mobile connection leaves much to be desired. In addition, some stores impose limits on the size of a released application. For example, in the App Store, products up to 100 MB can be downloaded over the mobile Internet, but if an application exceeds this threshold, it can only be downloaded over Wi-Fi. On Google Play, an application heavier than 100 MB cannot be downloaded at all. In this article, we describe the methods and tricks our native iOS developers resort to in order to reduce the weight of a product, and add a few practical tips found on the web.


The main ways to reduce the size of the application


Graphic content


Design now plays a key role in any good application. If the interface is minimalistic or the product has a small feature set, this stage can be skipped. But if the project has rich functionality or supports several color schemes, there is no way to avoid a large number of images, with all the consequences that has for weight. Moreover, projects usually include image sets for the various device form factors by default, such as @1x, @2x, and @3x for iOS applications. Below are the methods we used in our applications to cope with an abundance of graphic content. Perhaps you apply some of them yourself.

One of the simplest approaches is to ship only the @3x image instead of all three scales. This method can hardly be called the best, since on devices designed for the @1x and @2x scales such images will not always look acceptable. Still, in the absence of anything better, this technique can noticeably reduce the size of a project with a large amount of graphics.
Another way is to add vector images instead of raster ones. On iOS, we exported images to PDF. Such a file often really does weigh less, but this does not work with every image. The catch is that vector graphics may render some image masks incorrectly, making them completely black or distorting the colors.

Now consider an application that has several color schemes (commonly called "skins"). The more color schemes an application supports, the more images it needs. If an image uses more than one color, you have to store a variant for each skin. However, when an image is monochrome, it can be made a template, and the code can then change its shade (tint color). On iOS, such a template can be created in two ways:

  1. set the Template Image in Xcode itself (see Figure 1);
  2. set template mode programmatically.



Fig. 1. Setting the Template Image render mode in Xcode.

 UIImage *templateImage = [[UIImage imageNamed:@"Back Chevron"]
     imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
 [backButton setImage:templateImage forState:UIControlStateNormal];
 [backButton setTintColor:[UIColor blueColor]];

where UIImageRenderingModeAlwaysTemplate is the template rendering mode.

Replacing Animated Images


Adding animation to applications is common. It draws the user's attention to the right interface objects and makes the interface less static, providing a more pleasant interaction experience. Some simple animations, such as moving an object from one part of the screen to another or a new window sliding up from below, can be done programmatically. Others, more complex, require rendering every frame of the animation. When we first had to add an animated image during development, we used one of the most common implementations: animating through an array of images. It looked like this:

 NSArray *gif = @[@"frame1", @"frame2", @"frame3", @"frame4", @"frame5",
                  @"frame6", @"frame7", @"frame8", @"frame9", @"frame10"];
 NSMutableArray<UIImage *> *images = [[NSMutableArray alloc] init];
 for (NSString *name in gif) {
     [images addObject:[UIImage imageNamed:name]];
 }
 imageView.animationImages = images;
 imageView.animationDuration = 0.3;
 imageView.animationRepeatCount = 1;
 [imageView startAnimating];

First, an array of image names is created; then a second array is filled with the images those names refer to. Next, a UIImageView is given the array of images, the animation duration, and the repeat count, and the animation is started. However, if there are many frames and each of them comes in three scales, this does not bode well for the size of the application. Having arrived at this sad result, we set about finding a way to use a GIF file instead of an array of pictures. Fortunately, on the Internet we came across the UIImage+animatedGIF category, which already handles all of this. It adds two methods to the UIImage class:

 + (UIImage * _Nullable)animatedImageWithAnimatedGIFData:(NSData * _Nonnull)theData;
 + (UIImage * _Nullable)animatedImageWithAnimatedGIFURL:(NSURL * _Nonnull)theURL;

The first method loads a GIF stored as data, and the second takes it directly from a URL to the resource, for example, from the application bundle. The GIF itself can be assembled from the same frames with any service for creating such files, where you set the frame rate, compression, and resolution. Properly chosen parameters will yield a GIF of acceptable weight. All that remains is to add it to the bundle and use one of the methods above.
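Loading a bundled GIF with the second method might look like the following sketch (the file name "logo.gif" and the imageView variable are assumptions for illustration):

```objectivec
#import "UIImage+animatedGIF.h"

// Load a hypothetical "logo.gif" from the app bundle and display it
// in an existing UIImageView; the returned UIImage carries the GIF's
// frames and duration, so the image view plays the animation itself.
NSURL *gifURL = [[NSBundle mainBundle] URLForResource:@"logo"
                                        withExtension:@"gif"];
UIImage *animatedLogo = [UIImage animatedImageWithAnimatedGIFURL:gifURL];
imageView.image = animatedLogo;
```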

However, a GIF file also takes up space, so we tried to implement all the animations programmatically. In the Audio Editor Tool, the start screen plays an animation of the AUDIO EDITOR logo appearing letter by letter. This animation was previously implemented with a GIF, but due to the high resolution of the image it weighed too much, so we decided to implement it with CABasicAnimation.

 CAGradientLayer *gradient = [CAGradientLayer layer];
 gradient.frame = animationLabel.bounds;
 gradient.colors = @[(id)[UIColor colorWithWhite:1 alpha:1.0].CGColor,
                     (id)[UIColor clearColor].CGColor];
 gradient.startPoint = CGPointMake(0.0, 0.5);
 gradient.endPoint = CGPointMake(0.1, 0.5);
 animationLabel.layer.mask = gradient;

 dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.99 * NSEC_PER_SEC)),
                dispatch_get_main_queue(), ^{
     gradient.colors = @[(id)[UIColor colorWithWhite:1 alpha:1.0].CGColor,
                         (id)[UIColor colorWithWhite:1 alpha:1.0].CGColor];
 });

 CABasicAnimation *startPoint = [CABasicAnimation animationWithKeyPath:@"startPoint"];
 startPoint.fromValue = [NSValue valueWithCGPoint:CGPointMake(0.0, 0.5)];
 startPoint.toValue = [NSValue valueWithCGPoint:CGPointMake(1.0, 0.5)];
 startPoint.duration = 0.9;
 [startPoint setBeginTime:0.1];
 startPoint.removedOnCompletion = NO;

 CABasicAnimation *endPoint = [CABasicAnimation animationWithKeyPath:@"endPoint"];
 endPoint.fromValue = [NSValue valueWithCGPoint:CGPointMake(0.1, 0.5)];
 endPoint.toValue = [NSValue valueWithCGPoint:CGPointMake(1.0, 0.5)];
 endPoint.duration = 1.0;
 [endPoint setBeginTime:0.0];
 endPoint.removedOnCompletion = NO;

 CAAnimationGroup *group = [CAAnimationGroup animation];
 [group setDuration:1.2];
 [group setAnimations:@[startPoint, endPoint]];
 [gradient addAnimation:group forKey:nil];

To make the logo appear letter by letter, as in the GIF, we used a gradient mask whose transparent region gradually shifts. First we created a gradient layer whose transparent color starts almost at the very beginning, then set this gradient as a mask on the logo's text layer, thereby making it transparent. The next step was to create an animation group with two animations: the first shifts the start position of the gradient and the second its end position, gradually making the text opaque. Note one caveat: it was important to set the removedOnCompletion property to NO, otherwise each animation would be removed on completion and the layer would snap back to its initial state.

Audio conversion


Our applications often use WAV audio files. Because WAV stores uncompressed audio, this format takes up a lot of space in a project. For this reason, we decided to first replace all WAV files in the bundle with the much lighter M4A, and then convert them back to WAV inside the application itself. Why not just use M4A? Because when a file of this format is played in a loop, there is a delay at the start of each cycle, as if there were a gap of silence. The final step is to save the converted file in the application directory after the first launch.

 + (void)convertAudio:(NSURL *)url toUrl:(NSURL *)convertedUrl {
     AVAudioFile *audioFile = [[AVAudioFile alloc] initForReading:url error:nil];
     AVAudioPCMBuffer *buffer =
         [[AVAudioPCMBuffer alloc] initWithPCMFormat:audioFile.processingFormat
                                       frameCapacity:(uint32_t)audioFile.length];
     [audioFile readIntoBuffer:buffer error:nil];
     NSDictionary *recordSettings = @{
         AVFormatIDKey : @(kAudioFormatLinearPCM),
         AVSampleRateKey : @(audioFile.processingFormat.sampleRate),
         AVNumberOfChannelsKey : @(audioFile.processingFormat.channelCount),
         AVEncoderBitDepthHintKey : @16,
         AVEncoderAudioQualityKey : @(AVAudioQualityMedium),
         AVLinearPCMIsBigEndianKey : @0,
         AVLinearPCMIsFloatKey : @0,
     };
     AVAudioFile *writeAudioFile = [[AVAudioFile alloc] initForWriting:convertedUrl
                                                              settings:recordSettings
                                                                 error:nil];
     [writeAudioFile writeFromBuffer:buffer error:nil];
 }

In this method, the file is read from the bundle at url and saved to the directory at convertedUrl. The file is loaded into a buffer and written from there to the new file with the required settings. Thus, after the first launch we use the more reliable but heavier WAV, while the download and install size of the application is significantly reduced.
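The first-launch flow described above could be sketched like this (the resource name "loop" and the AudioConverter class holding the method above are placeholders):

```objectivec
// Convert the bundled M4A to WAV once, on first launch, then reuse
// the cached WAV from the Documents directory on every later launch.
NSURL *sourceURL = [[NSBundle mainBundle] URLForResource:@"loop"
                                           withExtension:@"m4a"];
NSURL *documentsURL =
    [[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory
                                           inDomains:NSUserDomainMask].firstObject;
NSURL *convertedURL = [documentsURL URLByAppendingPathComponent:@"loop.wav"];

if (![[NSFileManager defaultManager] fileExistsAtPath:convertedURL.path]) {
    // AudioConverter is a placeholder for the class containing convertAudio:toUrl:
    [AudioConverter convertAudio:sourceURL toUrl:convertedURL];
}
// From here on, play convertedURL (e.g. with AVAudioPlayer) for gapless looping.
```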

Downloading files from the server


Downloading files from the server is essential for applications with a significant amount of content. A large number of music presets, image sets, and so on, which greatly inflate the size of the application, can be downloaded later. Of course, downloading each file individually would take a lot of time and traffic, so archives with everything needed are downloaded from the server and unpacked into the application directory. For unzipping, we use the SSZipArchive library (its repository is available on GitHub). The library can both pack files into an archive and unpack archives, but we are only interested in one method from its main class:

 + (BOOL)unzipFileAtPath:(NSString *)path
           toDestination:(NSString *)destination
         progressHandler:(void (^)(NSString *entry, unz_file_info zipInfo,
                                   long entryNumber, long total))progressHandler
       completionHandler:(void (^)(NSString *path, BOOL succeeded,
                                   NSError *error))completionHandler;

This method unpacks the archive at path into the destination path. While it is unpacking, progressHandler lets you perform any actions (for example, display unpacking progress), and completionHandler then reports that unpacking finished successfully or surfaces an error on failure.
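A call to this method might look like the following sketch (archivePath, documentsPath, and progressView are placeholders for your own paths and UI):

```objectivec
// Unpack a downloaded archive into the app's Documents directory,
// updating a progress bar per extracted entry and logging the outcome.
[SSZipArchive unzipFileAtPath:archivePath
                toDestination:documentsPath
              progressHandler:^(NSString *entry, unz_file_info zipInfo,
                                long entryNumber, long total) {
                  progressView.progress = (float)entryNumber / (float)total;
              }
            completionHandler:^(NSString *path, BOOL succeeded, NSError *error) {
                  if (succeeded) {
                      NSLog(@"Unpacked archive to %@", path);
                  } else {
                      NSLog(@"Unzip failed: %@", error);
                  }
              }];
```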

Conclusion


Ultimately, judging by the Mix Wave application, which weighs ~41 MB before installation and 281 MB after all the presets are downloaded, the methods described reduced the size of the application roughly sevenfold. Not a bad result, although there may well be more effective approaches. If you know of any, please share them in the comments.

UPD: Thanks to Dim0v for the useful comments about graphic content. We quote them below:

“First, app slicing works on devices with iOS 9 and higher. iTunes Connect recompiles the uploaded archive into several variants for different devices. Thus, for example, an iPhone 6 installing from the App Store will pull only @2x resources, and an iPad mini 1 only @1x. Therefore, if the product supports iOS 9+, following the advice to keep only @3x resources will have the exact opposite effect: nothing changes for the Plus-sized iPhones, but lower-resolution devices will have to pull @3x resources when @2x or @1x would have sufficed.

Secondly, the advice on converting raster images to vector also makes no sense. The only thing you save this way is disk space on the developers' machines. Xcode rasterizes vector images at build time, which is easy to verify by, for example, scaling a “vector” image on a device and seeing a wildly pixelated bitmap. I don't dispute that vector resources are convenient: they are easier for designers to export, and there is no need to keep all the resolution variants of a resource in sync when it changes, and so on. But converting existing raster images to vector purely to reduce the build size is pointless.”

Source: https://habr.com/ru/post/334314/

