
Full-screen animation in an iOS game, or what to do when the graphics just won't fit into memory

I'll say right away that we wasted a lot of time on this project, but we gained useful experience which, I think, many will find interesting to read about so as not to step on the same rake. If you are interested in working with large animations on iOS, read on.


So, everyone has probably seen the game with the talking cat that repeats everything said to it in a funny voice? Well, our customer wanted to make a similar game, but with a different character.

Here I want to talk only about the graphics processing. I will not dwell on the rest, although along the way we also wrote an SDK for distorting the voice, which lets us, albeit quite roughly, control the opening of the character's mouth according to the spoken phrase, and which can easily be integrated into similar games.

During negotiations with the customer it turned out that the graphics for the animations were already finished, as full-screen JPEGs for every type of iOS device, including the iPad 3 with its wild screen resolution (remember, that is 2048x1536). Moreover, the animations had been rendered at 60 frames per second. Having tried to implement the game on their own, they ran into enough difficulties that they preferred to dump the job on someone else's head. Which is why they turned to us.

At first I was horrified by this approach to preparing the graphics, considering it extremely suboptimal.
My first thought was that the background and the character should have been separated.
On the other hand, the character's animations would then have to be stored with an alpha channel, so that they could later be composited onto the background, i.e. in PNG format, and that means graphics, and therefore an application, several times heavier.
Then I dropped that idea, having concluded that their variant was not so bad: JPEG-compressed graphics do not weigh much, and we could simply crop out the character so that later only this cut-out piece of the screen is animated, overlaid on a background, which can be one of the full frames of the animation.

Another thing that occurred to me immediately was that at this frame size texture atlases are out of the question: the maximum texture size supported by the devices (either 2048x2048 or 4096x4096, depending on the device and iOS version) will not hold even a couple of frames. So when loading an animation, we would load each frame separately from the file system.
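
Incidentally, the limit does not have to be guessed from the device model; OpenGL ES reports it at runtime. A minimal sketch of the query (ours, not from the project):

```swift
import OpenGLES

// Query the real texture size limit instead of hard-coding 2048/4096.
if let context = EAGLContext(api: .openGLES2) {
    EAGLContext.setCurrent(context)
    var maxTextureSize: GLint = 0
    glGetIntegerv(GLenum(GL_MAX_TEXTURE_SIZE), &maxTextureSize)
    // Either way, a single 2048x1536 frame leaves no room
    // for a second frame in the same atlas.
    print("Max texture size: \(maxTextureSize)x\(maxTextureSize)")
}
```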

It also seemed worth persuading them to cut the planned animation rate down to, say, 15 frames per second, which, in my experience, is quite enough for animating character movement. That, in general, succeeded easily. As a result, every fourth frame was taken as the basis.
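
In code, the decimation is just index striding; a trivial sketch, where `sourceFramePaths` is an assumed array of the 60 fps frame file names:

```swift
import Foundation

// `sourceFramePaths` stands in for the 60 fps source frames.
let sourceFramePaths = (0..<240).map { String(format: "frame_%04d.jpg", $0) }
// Keep every fourth frame: 60 fps source material becomes a 15 fps animation.
let keptFramePaths = stride(from: 0, to: sourceFramePaths.count, by: 4)
    .map { sourceFramePaths[$0] }
```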

It also turned out that the customers had no idea the Cocos2d engine existed, and they liked how smoothly and quickly their animation played back in OpenGL in the simple Cocos2d example I sketched for the demonstration.
We shook hands and got to work.

Attempt number one


First of all, we cropped all their animations so as to animate (read: keep in memory) only the part of the frame in which something actually changes. For simplicity, though, we decided to make all frames of an animation the same size: we determined the zone that frames the changing part across the entire animation and cut exactly that zone out of every frame.



To do this, we used a Photoshop action, edited for each animation to set the size of the crop zone, and applied it to the entire folder of frames.
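
The crop zone itself can also be found programmatically: take the union of the bounding boxes of all pixels that differ from the first frame. A hedged sketch of that idea (our illustration; `loadRGBA`, the naming, and the equal-size-frames assumption are ours):

```swift
import Foundation
import CoreGraphics
import ImageIO

// Decode an image file into a raw RGBA byte array.
func loadRGBA(_ url: URL) -> (pixels: [UInt8], width: Int, height: Int)? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil) else { return nil }
    let w = image.width, h = image.height
    var pixels = [UInt8](repeating: 0, count: w * h * 4)
    pixels.withUnsafeMutableBytes { buffer in
        let ctx = CGContext(data: buffer.baseAddress, width: w, height: h,
                            bitsPerComponent: 8, bytesPerRow: w * 4,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: w, height: h))
    }
    return (pixels, w, h)
}

// The smallest rectangle containing every pixel that changes
// anywhere in the animation (all frames assumed equal in size).
func changedRect(frameURLs: [URL]) -> CGRect? {
    guard let first = loadRGBA(frameURLs[0]) else { return nil }
    var minX = first.width, minY = first.height, maxX = -1, maxY = -1
    for url in frameURLs.dropFirst() {
        guard let frame = loadRGBA(url) else { continue }
        for y in 0..<first.height {
            for x in 0..<first.width {
                let i = (y * first.width + x) * 4
                if frame.pixels[i..<i+4] != first.pixels[i..<i+4] {
                    minX = min(minX, x); maxX = max(maxX, x)
                    minY = min(minY, y); maxY = max(maxY, y)
                }
            }
        }
    }
    guard maxX >= minX else { return nil }  // nothing changed at all
    return CGRect(x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1)
}
```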

Fully confident that we were on the right track, we processed all of their animations in this way and only then began to assemble test examples.

Not that our next problem was unexpected; we simply had not wanted to bother solving it before appreciating its scale.
It was clear that we could not preload all the animations into memory, yet according to the specification quite a few animations are triggered by the user and must therefore be ready for immediate playback at any moment. So at the very least those had to be kept in memory.
We ran several tests and soon found out that this was unrealistic: we exceeded the limit on usable memory.

Well, we decided, the only option is to load them right before playback. And that might have worked for short animations, but anything longer caused unacceptable loading delays of up to 10-12 seconds. Understandably so: somewhere along the way iOS has to decode the JPG into a raw bitmap and then hand it to OpenGL for use as a texture. In that form a 2048x1536 frame is already 12 MB (2048 * 1536 * 4 bytes of RGBA), even though the original image weighs less than 100 KB.
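
A sketch of what each per-frame load actually costs (our illustration, not the project's code): the JPEG is decoded into a raw RGBA bitmap and only then uploaded to OpenGL.

```swift
import UIKit
import OpenGLES

// Decode a JPEG and upload it as an OpenGL texture; the CGContext draw
// is the expensive decode step behind the multi-second delays.
func makeTexture(fromJPEGAt path: String) -> GLuint {
    let image = UIImage(contentsOfFile: path)!.cgImage!
    let w = image.width, h = image.height
    var pixels = [UInt8](repeating: 0, count: w * h * 4)  // 12 MB at 2048x1536
    pixels.withUnsafeMutableBytes { buffer in
        let ctx = CGContext(data: buffer.baseAddress, width: w, height: h,
                            bitsPerComponent: 8, bytesPerRow: w * 4,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: w, height: h))  // slow decode
    }
    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(w), GLsizei(h),
                 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
    return texture
}
```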

Another problem that arose here was the sheer volume these textures occupy in memory in decoded form. Some animations simply did not fit into memory at all. So we came to understand that we could not do without the good old PVRTC format.

Attempt number two


PVRTC (we used PVRTC4444) has several advantages over JPEG.
First, it requires no conversion on the device: the picture can be used as a texture immediately, which saves significant time when the graphics are first loaded. Another important fact is that PVRTC pictures weigh far less than the raw bitmaps that any JPEG image ultimately becomes in OpenGL, which means that even the longest animations can be kept in memory, several at a time.
But there are disadvantages too. The main one is the size of the application: although PVRTC files weigh less than raw bitmaps, they are many times larger than the corresponding JPEGs.
In other words, by winning on graphics preload speed, we significantly increase the weight of the application.

Another disadvantage is the restriction on geometric size and proportions: every PVRTC file must be square, with a side that is a power of two.
Having estimated the total volume of the files if we brought all our frames up to 2048x2048 for the iPad 3, we immediately abandoned that idea and decided to limit ourselves to two sizes: 512x512 frames for the iPhone and 1024x1024 textures for all iPads. During playback there is no noticeable difference in picture quality between the iPad 3 and the iPad 2 anyway, since the physical screen size is the same and, besides, the pictures change quickly.
So we had to somehow reduce all our graphics, with its chaotic frame sizes (they could be, for example, 1355x1278), to 1024x1024 (and 512x512 for the iPhone).

Without further ado, we simply decided to scale the already cropped graphics so that, with proportions preserved, it comes out 1024 pixels wide, with the remaining vertical space filled with transparency, and then stretch the loaded pictures back to their original size in code.
And in cases where the original height was greater than the width, we decided to append one more square at the junction, containing the missing part of the original image.
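
The packing arithmetic for a typical frame, as a sketch:

```swift
import CoreGraphics

let original = CGSize(width: 1355, height: 1278)  // a typical cropped frame
let scale = 1024.0 / original.width               // ≈ 0.76
let packedHeight = original.height * scale        // ≈ 966, fits one 1024x1024 square
// Frames taller than they are wide overflow the square; the overflow goes
// into a second square, and at runtime everything is drawn scaled by 1/scale.
```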



We had to spend two days scaling the pictures and converting them to PVRTC. For the conversion we used the utility bundled with Xcode plus a simple shell script we wrote to process all the files in a folder; for the scaling, of course, Photoshop with its actions.
An interesting detail: the weight of a PVRTC file is determined solely by its geometric dimensions. Each iPhone animation file occupied 128 KB, and each square for the iPad weighed 512 KB, regardless of the content.
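
Those numbers follow directly from the 4 bits-per-pixel arithmetic:

```swift
// PVRTC at 4 bits per pixel: the file size follows from geometry alone.
let iPhoneFrameBytes = 512 * 512 * 4 / 8     // 131_072 bytes = 128 KB
let iPadFrameBytes = 1024 * 1024 * 4 / 8     // 524_288 bytes = 512 KB
```
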
Of course, the size of the application grew dramatically, to an indecent few hundred megabytes, but animations now loaded almost instantly, and for most of them we could proceed simply: load at the moment they are needed, play, and immediately unload from memory. Still, there remained long animations that required a second or two of preloading, which, of course, hurt the customer's eyes.

Then we decided to do an "autopsy" and look at how the game about the cat was put together. Several important points emerged:
First, all of their animation graphics are full-screen JPGs, and monstrously heavily compressed at that; apparently, like us, the developers counted on the fact that during playback it is hard to discern picture quality.
Second, judging by the number of frames and the actual playing time of the animations in the game, they limited themselves to 5-6 frames per second.
We also found out that they used the same graphics for both iPads.

After that we decided to check one assumption.

Attempt number three


We assumed that these guys might simply be pushing all the animations through UIKit, without using OpenGL at all. And indeed, why would you need OpenGL if you are content with 5 frames per second?

For fun, we decided to try playing our original graphics with UIKit tools alone, by simply loading each full-size frame sequentially into a UIImageView.
It turned out that in this mode the iPad 2 keeps up at 30 frames per second, and the iPad 3 (despite an image four times larger) delivers a quite decent 20 frames per second!
And that is without preloading the entire animation: we simply load one frame, display it and immediately release it from memory, replacing it with the next, without inserting any delay between frames.
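
A minimal sketch of this approach (the names are ours, not the project's): only one decoded frame lives in memory at a time, and UIImage(contentsOfFile:), unlike UIImage(named:), does not cache what it loads.

```swift
import UIKit

// Play a full-screen animation by loading one JPEG per tick into a UIImageView.
final class FrameAnimator {
    private let imageView: UIImageView
    private let framePaths: [String]
    private var index = 0
    private var timer: Timer?

    init(imageView: UIImageView, framePaths: [String]) {
        self.imageView = imageView
        self.framePaths = framePaths
    }

    func play(fps: Double) {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0 / fps, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            // The previous UIImage is released as soon as it is replaced.
            self.imageView.image = UIImage(contentsOfFile: self.framePaths[self.index])
            self.index += 1
            if self.index == self.framePaths.count {
                self.timer?.invalidate()
                self.timer = nil
            }
        }
    }
}
```
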
What a discovery! We need not have done anything at all; everything would have worked fine as it was, the JPG graphics would have given a smaller application size, and nothing has to be kept in memory. With this approach an animation can be of any length, limited only by the desired size of the application.
As a result, we rewrote the game using this approach.

I would draw one conclusion from all this: if you have to deal with large animations and a not-too-high frame rate suits you, you may want to look towards UIImageView (perhaps even in a separate layer on top of your Cocos2d scene). It may well turn out to be the best solution.

In all fairness, I should note that I have omitted other important considerations that we had to take into account in our decisions. For example, the game had to be able to record what happens on screen as video, for which we initially wanted to use a popular framework that works only with Cocos2d, since it records video from OpenGL. This was one of the reasons we held on to Cocos2d. But in the end, since all the animations were implemented through UIKit, we had to implement the video recording ourselves.

Source: https://habr.com/ru/post/166447/