My name is Max, and I'm an alcoholic who has been developing for iOS for over 7 years now.
For the benefit of job seekers, I'll mention that I regularly conduct interviews with mobile developers for companies.
Among the candidates there are characters who smoke a hookah right during the Skype interview, try to google the questions on the go, want a 180k salary with 3 months of experience, behave as if they were mugging me on the street (complete with the appropriate vocabulary), and so on.
But in most cases, even with otherwise competent middle-level developers, there is one common gap: a lack of understanding of the principles of asynchronous task execution and hardware acceleration in iOS.
In this article I decided to explain, in simple words, how multithreading is used in iOS, so that after a single reading you can easily and confidently apply this knowledge in practice.
(If you're too lazy to read, there's a video version attached.)
There will be two articles: one dedicated to multithreading (this one), and a second to hardware acceleration: how to distribute the load evenly between the CPU and the GPU in order to get a perfectly smooth interface.
For those who want not only to learn how to apply the techniques, but also to comprehend the Zen behind them, there is an excellent article. It is, admittedly, still for Swift 3, but the essence has not changed since then.
SHOCK! The true causes of lags!
As one expert told me at an interview: the application slows down because the signal from the server cannot travel faster than the speed of light, and in that gap everything lags.
So there you have it, physics, you heartless bastard. The mystery is solved, we can all go home.
Brief practical theory
A practical theory is the kind of theory without which you are not a results-oriented practitioner, but simply an uneducated savage.
And before you start fantasizing about asynchrony, threads and other wonders, you need to answer one question: why parallelize anything at all? Here's the main thread, why not run everything on it? I hope the answer is obvious to most: because everything will slow down, hello?
And why is the main thread so special, then? Its exclusivity is that all interaction with the application from the outside happens on it: touch handling, notifications, system messages and so on.
And the main thing in our case is that the whole responder chain hangs on main thread:
UIApplication -> UIWindow -> UIViewController -> UIView.

All taps on the screen, all user interaction, arrive exactly there.
But okay, let taps be handled on the main thread, but damn it, Apple, why can't I, like Clint Eastwood, draw with both hands?
Because to get several threads talking to each other, you have to smear on a thick layer of synchronization machinery, and that is all extra junk and extra pressure on already scarce resources. Apple even introduced the Main Thread Checker to help avoid the exotic bugs caused by inhuman treatment of the main thread.
In general, the first rule: leave the main thread for UI, and UI for the main thread.
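Here's a minimal sketch of that rule in Swift; the view controller, the image view and the avatar-decoding work are made up for illustration. The heavy part runs on a global queue, and only the assignment to the image view comes back to main.

```swift
import UIKit

// Hypothetical screen: decode an avatar off the main thread,
// touch the UI only back on the main queue.
final class AvatarViewController: UIViewController {
    private let avatarView = UIImageView()

    func showAvatar(from data: Data) {
        DispatchQueue.global(qos: .default).async { [weak self] in
            let image = UIImage(data: data)      // the "heavy" part, off main

            DispatchQueue.main.async {
                self?.avatarView.image = image   // UI strictly on main
            }
        }
    }
}
```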
Well, where do the rest go, then?
iOS has plenty of tools for this: Thread, POSIX threads, GCD, OperationQueue.
Each has its own use, but for everyday life, which consists of banal tasks like "go to the server, fetch, save and display", GCD and OperationQueue are enough.
GCD is an Apple library for performing tasks in parallel. It consists of operations (tasks) and queues that hold those operations. The most banal FIFO collection of tasks. Of course, there are plenty more options, but we don't need them yet.
NSOperationQueue is the same queue, only high-level and OOP-flavored. In fact, it is just a nice wrapper over GCD; it has no functional advantages, although it did once.
The choice between the two is, in most cases, a matter of taste, with rare exceptions. Work with whichever you prefer. Personally, I prefer GCD because it is handier and adds no extra overhead.
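For comparison, here is roughly the same "do the work, then hand the result off" flow expressed with OperationQueue; the report-building function is a made-up placeholder, the point is the OOP-style API.

```swift
import Foundation

// Hypothetical heavy task standing in for "go to the server, fetch, save".
func buildReport() -> String {
    return (1...100_000).map(String.init).joined(separator: ",")
}

let workQueue = OperationQueue()
workQueue.qualityOfService = .utility
workQueue.maxConcurrentOperationCount = 2   // an OOP-style knob on top of GCD

workQueue.addOperation {
    let report = buildReport()
    OperationQueue.main.addOperation {
        print("Report ready: \(report.count) characters")
    }
}
```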
By the way, a piece of folklore has long circulated among developers that NSOperationQueue is allegedly no longer based on GCD, but was specially reworked for iOS and is therefore faster / higher / stronger. That is not the case; let me quote Apple:
Apple: (screenshot of the documentation quote)
So NSOperationQueue has no special advantages over GCD.
Priorities in GCD & NSOperationQueue
Let's run through the main components.
Each queue has a notion of the priority with which it receives resources. This is called quality of service, most often abbreviated as QoS.
The higher the priority, the faster tasks on that queue run and the more CPU time they are allocated. Yes, yes, faster too, you heard right. The system can also optimize processor wake-ups, thereby saving energy. This is useful to remember if you care about Low Power Mode, when the user is trying to save battery.
I wish the Yandex.Taxi developers would find this out. After all, you can save the battery in such a simple way instead of arranging "bitcoin mining" on my iPhone.
So what are the priorities? There are a few of them, and you should remember them all so they don't come back to bite you endlessly. Many will say that this supposedly isn't covered anywhere and you don't really need it at all.
And so, the priorities:
- userInteractive
- userInitiated
- default
- utility
- background
Main is not on the list, because it is not a priority but a separate queue for the main thread. It, by the way, also has a priority: userInteractive. So if, for example, you run your image shamanism on a separate queue with userInteractive priority, you will get very real lags, because a race for resources begins. The problems are smaller than if you just ran it on main, but harder to debug, because the lags will be intermittent.
(There is also unspecified, but that is exotic territory you are unlikely to ever run into.)
If you want to understand exactly how operations are shuffled between queues, see the article linked above.
So when should you use what?
- userInteractive - you don't really need it at all. It is tacitly reserved by the main thread, as I wrote above. Apple defines its scope as: operations critical for user interaction, taking no more than a fraction of a second. Sounds like a UI thing, doesn't it? In practice, I had only one task that had to compete with the interface in speed and required surgical precision, and it was not solved via GCD. In short, userInteractive is for the gods at Apple, not simple hard workers like us.
- userInitiated - local operations that need results right away, but without blocking the UI. For example, saving something to the database before moving to the next screen. I especially emphasize: local operations. The network does not belong here.
Suppose a loading spinner is spinning in the middle of the screen and you urgently need to prepare something in order to show the content. Obviously this cannot be done on the main thread, because the GUI will start to stutter, but there is no point in throwing it too deep into the background either, because that single spinner is the only thing on the whole interface. This is where userInitiated is used.
- default - the default priority. Apple disagrees with itself: the documentation says not to use it, while at WWDC, on the contrary, they call it the best priority for downloading images and other network communication. Having played with different QoS levels, I can say that default is best suited for downloading images or small files that shape the user's perception of the application. The difference between utility (the next level down) and default is really noticeable when working with images, especially when rendering. Default works much faster, yet does not compete with the interface for resources. My recommendation: keep all network business logic and images on default.
- utility - something not very high priority, but still needed in the near future. For example, processing bulky files, complex database manipulations, media conversion and so on. Simply put, when the task matters to the application, but a couple of extra seconds of waiting won't hurt. By the way, such operations are the first candidates for demoting to background when Low Power Mode kicks in.
- background - the most vegetable mode of them all. As they say, for those who have seen life and are in no hurry. Use it for saving energy or for ultra-heavy operations: downloading fat files, backups and the like. And if the user has turned on Low Power Mode and your operation was already at background priority, then maybe just drop it altogether, eh?
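To make the list above concrete, here is a rough mapping of those priorities to code. The task functions are made-up stubs; only the qos arguments matter.

```swift
import Foundation

// Made-up stand-ins for real work.
func saveDraft() { /* quick local database write */ }
func downloadProductImages() { /* network + image decoding */ }
func convertVideo() { /* heavy media conversion */ }
func uploadBackup() { /* fat file nobody is waiting for */ }

// userInitiated: small, urgent, local work the user is waiting on.
DispatchQueue.global(qos: .userInitiated).async { saveDraft() }

// default: network calls and images that shape the first impression.
DispatchQueue.global(qos: .default).async { downloadProductImages() }

// utility: needed soon, but a couple of extra seconds won't hurt.
DispatchQueue.global(qos: .utility).async { convertVideo() }

// background: the vegetable lane, backups, fat files, cleanup.
DispatchQueue.global(qos: .background).async { uploadBackup() }
```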
Practice in the real world
Speaking of application: if you use a third-party framework for a task, most tools either do the work on the queue they were called from, or let you specify one explicitly. If you can't find a way to specify the priority explicitly within 5 minutes, it is easier to simply wrap the call in dispatch_async and not worry.
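Something like this, where LegacyExporter is an imaginary stand-in for a third-party tool that gives you no control over its queue:

```swift
import Foundation

final class LegacyExporter {               // imaginary third-party class
    func export() -> URL? { return nil }   // stands in for real blocking work
}

let exporter = LegacyExporter()

DispatchQueue.global(qos: .utility).async {
    let fileURL = exporter.export()        // the blocking call, now off main

    DispatchQueue.main.async {
        print("Exported to \(String(describing: fileURL))")
    }
}
```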
Most importantly, note that callbacks are often called on the main thread for historical reasons. Sometimes you fire a request with default QoS and then trigger a database save in the completion block, forgetting that you are already back on main. And then you scratch your head wondering why the application is barely crawling.
So if there is no certainty, set a breakpoint in the block and look at the call stack. At moments like this it is better to double-check than to hunt for lags with the profiler later. I like asking about the profiler at interviews, by the way.
Main thread: (call stack screenshot)
Any other thread: (call stack screenshot)
In general, always pay attention to which thread an action runs on. It will save you a lot of time and nerves later.
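Besides a breakpoint, there are two quick sanity checks you can drop into a suspicious callback (a small sketch, nothing article-specific):

```swift
import Foundation
import Dispatch

func handleCompletion() {
    // Quick and dirty: just see where we landed.
    print("main thread? \(Thread.isMainThread)", Thread.current)

    // Stricter: crash in debug if this ever runs off the main queue
    // (useful for code that is about to touch the UI).
    dispatchPrecondition(condition: .onQueue(.main))
}
```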
One more nuance arises once you dive into asynchrony: how much should be split into separate operations? Where is the border? What are the consequences?
Philosophically, if something can be asynchronous, it may as well be. But let's be more pragmatic: if your application consists of a multitude of sub-second operations, first think about whether those trivialities can be merged into a larger task. If you spawn a separate operation for every sneeze, you will only get more lags.
For example: we have a table with products in a store. Each cell has a price, an avatar, a multi-line description. The price is localized (ruble symbol plus formatting), the description too (it has a certain text prefix). As a rule, the localized string is assembled right at the moment the values are set into the corresponding labels.
But can this be done asynchronously? First localize in the background, then put it into the label.
Well, that's a bad solution. A better option: for each product object, assemble the localized values immediately after the server request, writing the data into the corresponding fields of the entity.
It is especially useful to calculate the sizes of those same fields in advance and write them into the model. Yes, this is normal, even though it looks unusual.
Our team adopted this practice long ago: calculate cell heights explicitly when receiving data from the server and save them to the database. Or to an array, if you don't use a database, as long as it happens in advance and in the background. It is better to let your user watch a spinner for a fraction of a second than to admire frozen frames.
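A sketch of what such "prepared" data might look like; the model, fonts and paddings are hypothetical, the point is that everything is computed once, in the background, right after the server response.

```swift
import UIKit

struct ProductCellModel {
    let priceText: String     // currency symbol + formatting already applied
    let details: String       // localized description with its prefix
    let height: CGFloat       // measured once, in the background

    init(price: Decimal, description: String, cellWidth: CGFloat) {
        let formatter = NumberFormatter()
        formatter.numberStyle = .currency
        formatter.locale = Locale(identifier: "ru_RU")
        priceText = formatter.string(from: price as NSDecimalNumber) ?? "\(price)"

        let localized = NSLocalizedString("product.description.prefix", comment: "")
            + description
        details = localized

        // Measure the multi-line description now, not in cellForRowAt.
        let box = CGSize(width: cellWidth - 32, height: .greatestFiniteMagnitude)
        let textHeight = (localized as NSString).boundingRect(
            with: box,
            options: .usesLineFragmentOrigin,
            attributes: [.font: UIFont.systemFont(ofSize: 15)],
            context: nil
        ).height
        height = ceil(textHeight) + 72    // room for the price row and avatar
    }
}

// Built off the main thread, right after the server response:
// DispatchQueue.global(qos: .default).async {
//     let models = products.map {
//         ProductCellModel(price: $0.price, description: $0.text, cellWidth: width)
//     }
//     DispatchQueue.main.async { self.reload(with: models) }
// }
```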
And no need to worry about storage. These days, memory on an iPhone is a cheap resource and processor time is an expensive one. Worth remembering.
Conclusion: prepare the data for the interface in advance. It's cheaper and prettier that way.
And since you've already half forgotten everything by now, here are the questions to ask yourself to stop writing nonsense:
- Can the operation be done in advance in the background and the result cached?
- Which priority is better for the task?
- userInitiated : small and urgent actions
- utility or default : network tasks, rendering
- background : long processes
- On which thread are the callbacks called? Is the main thread overloaded? (easy to check via the call stack at a breakpoint)
In the next installment we'll tackle hardware acceleration. It sounds scary, but it will be easy.
P.S. I would be grateful for any feedback on the video. It was a first attempt; every minute literally took an hour to make.