
Hello, habrazhiteli!
In this article I want to share the experience of working with video that I gained on one of my recent iOS projects. I won't go deep into the details; I will describe only one task that could not be solved with a search on Habr, GitHub, or the rest of the Internet. The task was this: build a video scrubber, and not a simple one, but one that looks like the scrubber in the standard iOS 7 gallery.
Since the standard MPMoviePlayerViewController component was used to play the video, and it already supports seeking to any position, the main task was to extract frames from the video at regular intervals and lay them out on a UIView so that each one sits approximately under the corresponding position in the video. Running a little ahead, I'll say that along the way I had to solve two more problems: stuttering while generating the frames on the iPad, and the different slider lengths in portrait and landscape orientation.
So, first we need to figure out how to get images out of a video. AVAssetImageGenerator will help us here: this class is designed specifically for extracting images from arbitrary positions in a video. Let's assume our test file is named test.mov and lives in the app's Documents directory:
NSString *filepath = [NSString stringWithFormat:@"%@/Documents/test.mov", NSHomeDirectory()];
NSURL *fileURL = [NSURL fileURLWithPath:filepath];
An example of using AVAssetImageGenerator:
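The original example did not survive, so here is a minimal sketch of how AVAssetImageGenerator is typically set up from the fileURL above; the variable names (asset, generator) are assumptions, but they match how the rest of the article refers to them:

```objectivec
// Create an asset from the file URL and a generator for it.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
AVAssetImageGenerator *generator =
    [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
// Respect the video track's transform so frames are not rotated.
generator.appliesPreferredTrackTransform = YES;

// Grab a single frame, here at the 1-second mark.
NSError *error = nil;
CMTime actualTime;
CGImageRef cgImage = [generator copyCGImageAtTime:CMTimeMake(1, 1)
                                       actualTime:&actualTime
                                            error:&error];
if (cgImage) {
    UIImage *thumbnail = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // copyCGImageAtTime returns a +1 reference
}
```

Note that the returned actualTime may differ from the requested time unless you also tighten requestedTimeToleranceBefore/After, which is usually unnecessary for thumbnails.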
I had not worked with CMTime before, and to divide the duration into equal intervals it helps to understand what this data structure actually is. CMTimeMake takes two arguments: value and timescale. Having read the official documentation, I want to explain in simple words what these arguments mean for those who don't know.
First, timescale: this is the number of parts each second is divided into. It defines the precision with which we can address a point in time. For example, with a timescale of 10, we can address time with an accuracy of 1/10 of a second.
In turn, value specifies the desired point in time, measured in those timescale units. For example, for a 60-second video with a timescale of 10, to get to the 30-second mark the value must be 300.
To make the CMTime representation clearer: the number of seconds a CMTime describes is value / timescale. In the previous example, 30 seconds is 300 / 10. Once you can convert seconds to CMTime and back, this structure should cause no further trouble.
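The conversion described above can be sketched in a few lines (Core Media also ships helpers for it, so you rarely need to do the division by hand):

```objectivec
// 30 seconds at a timescale of 10: value = 30 * 10 = 300.
CMTime thirtySeconds = CMTimeMake(300, 10);

// Back from CMTime to seconds: value / timescale = 300 / 10 = 30.0.
Float64 seconds = CMTimeGetSeconds(thirtySeconds);

// The reverse helper builds a CMTime from seconds directly.
CMTime sameMoment = CMTimeMakeWithSeconds(30.0, 10);
```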
Moving on: now we need to know the length of the video. That's simple, because the asset object we created earlier already has the property we need.
CMTime duration = asset.duration;
Now we have everything we need to cut the video into a bunch of images, which raises the question of how many are needed in portrait and landscape orientation. The first thing to notice is the height of the scrubber in the standard iPhone and iPad gallery: it is almost the same on both devices, only the width differs. It is easy to see that the number of images equals the width of the slider divided by the width of one image. I decided to make the thumbnails square, 29x29 points. There is one subtle detail here: the generator's maximum size must be specified in pixels, so on a Retina screen the value is 58x58.
generator.maximumSize = CGSizeMake(58.0, 58.0);
For simplicity and convenience I put the thumbnail counts into defines:
#define iPad (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
#define ThumbnailsCountInPortrait (iPad ? 25 : 10)
#define ThumbnailsCountInLandscape (iPad ? 38 : 15)
Now everything is ready for generating the images. I made two separate arrays, because the thumbnails differ between portrait and landscape orientation:
NSMutableArray *portraitThumbnails = [NSMutableArray array];
NSMutableArray *landscapeThumbnails = [NSMutableArray array];
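The generation step itself can be sketched as a loop that splits the duration into equal intervals and grabs one frame per interval; the method name fillThumbnails:count: is my own illustration, not the API of the finished example:

```objectivec
// A sketch of the generation loop: for a given thumbnail count, step
// through the video in equal CMTime increments and collect one frame each.
- (void)fillThumbnails:(NSMutableArray *)thumbnails
                 count:(NSInteger)count
             generator:(AVAssetImageGenerator *)generator
              duration:(CMTime)duration
{
    for (NSInteger i = 0; i < count; i++) {
        // The i-th step: scale the duration's value, keep its timescale.
        CMTime time = CMTimeMake(duration.value * i / count, duration.timescale);
        CGImageRef cgImage = [generator copyCGImageAtTime:time
                                               actualTime:NULL
                                                    error:NULL];
        if (cgImage) {
            [thumbnails addObject:[UIImage imageWithCGImage:cgImage]];
            CGImageRelease(cgImage);
        }
    }
}
```

It would be called once per orientation, e.g. with portraitThumbnails and ThumbnailsCountInPortrait, then with landscapeThumbnails and ThumbnailsCountInLandscape.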
I don't think it is worth describing here how to lay the resulting images out in a row on a UIView, let alone how to pick them from the right array for the current device orientation. There is nothing complicated about it, and all of it can be seen in the finished example.
Lastly, I'd like to describe how I solved the stuttering problem. Since the scrubber is initialized when the controller loads, the transition animation to that controller was noticeably delayed. The simplest solution is dispatch_async: this extremely useful function lets you run the contents of a block asynchronously on a background queue, without blocking the UI.
Usage example:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    [videoScroller initializeThumbnails];
    dispatch_async(dispatch_get_main_queue(), ^{
        [videoScroller loadThumbnails];
    });
});
I think it is clear that videoScroller is our object: it prepares its data in the background and then loads it on the main queue.
A working example can be found here:
https://github.com/iBlacksus/BLVideoScroller

P.S. This is my first article. If it turns out to be interesting to the audience, I'm ready to keep sharing my experience; in particular, I'm planning an article on building a slider that lets you pick a text color from an arbitrary palette, which is just an image.