
From Scheduler to Scheduler

See the other two articles in this series: Multitasking and Preemption: how to take the processor away.

A request to strict readers right away: if any term I use is unclear, ask, and I will explain what I meant. And if you prefer a different spelling or translation of a term, suggest it in the comments; I use the ones I happen to like.

So, the previous articles described the multitasking mechanism minus the scheduler itself — a.k.a. the sheduler, a.k.a. the skeduler, a.k.a. Vaska the Marked; sorry, I will use these terms interchangeably...
As I have already said, the scheduler is just a function that answers the question: which thread to put on the processor, and for how long.

By the way, on an SMP system the scheduler is no different from a single-processor one. In general, to make the interaction of entities on one or several processors easier to understand, it is simplest to imagine the following model: each processor has an "idle" thread (which runs if there is nobody else at all, and simply halts the processor until the next interrupt), and that thread constantly tries to "give away" the processor it nominally owns to other threads, choosing a thread via the scheduler.

Speaking of the scheduler, we cannot avoid talking about priorities.

Priority is a property of a thread (or process) that affects its competition with other threads for the processor.

Priority is usually described as a pair: <priority class, priority value within the class>.

To date, a fairly clear scheme for implementing priorities has taken shape. Thread priorities are divided into three classes:

  - Real time: while at least one such thread is ready to run, nothing from the lower classes gets the processor at all.
  - Normal (time-sharing): ordinary threads that share the processor according to their numeric priority.
  - Idle: threads that get the processor only when nobody from the classes above wants it.

By the way: they say Kernighan was once asked what he would do differently if he were writing Unix anew. "I would spell creat as create," the master replied. I would add a whole bag of other absurdities to the fix list, starting with nice — for some reason, the concept of priority in Unix is inverted: the lower the nice value, the higher the priority.

In this article we will stick to a more humane scale: higher numeric value = more processor.

Someone probably wants a look at the code already. Here it is:
The source of the Phantom scheduler.

Here we channel Pushkin a little :)
And now the frosts are crackling
And silvering amid the fields...
(The reader awaits the rhyme of "roses";
Well, here then, take it quickly!)


Speaking of Pushkin: I categorically do not claim any sacred knowledge in the field of writing schedulers. The example given is hackneyed and banal, though a quite reasonable implementation. Nevertheless, writing a good scheduler is a serious scientific problem; theses are written about it, and when real time is involved, everything becomes very serious indeed. In this article I only cover the basics. There are specialists in this country who know about three orders of magnitude more about the subject than yours truly.

With that said, let's continue.

A fairly traditional scheduler implementation assumes the existence of a so-called run queue — a queue of threads ready for execution. Actually, it is not necessarily a queue, and if it is, it is usually a set of queues, one per priority class and sometimes one per priority level.

Specifically, Phantom has three queues, one per class:

/** Idle prio threads */
static queue_head_t runq_idle = {0,0};

/** Normal prio threads */
static queue_head_t runq_norm = {0,0};

/** Realtime prio threads */
static queue_head_t runq_rt = {0,0};


In general, with respect to the processor a thread can be in one of three states:

  - running: it is on the processor right now;
  - ready: it would like to run and sits in the run queue;
  - blocked (sleeping): it is waiting for something and does not compete for the processor at all.

That is, the run queue holds the threads that would like to get onto the processor.

Hence, the work of the scheduler is to:

  1. Decide which priority class to run now. This is simple: if the realtime queue is not empty, run a realtime thread; otherwise, if the normal-priority queue is not empty, run a normal thread; otherwise run an idle thread. If there are none of those either, note that the processor is idle and send it into "eternal" sleep.
  2. Having settled on a class, choose the right thread to run within it.
  3. Assign the thread a time interval (quantum) for execution.


In general, the implementation is rather banal. Let's start with real time. The naive implementation looks through all ready threads and picks the one with the maximum numeric priority value. No time quantum is assigned, since it is not checked for threads of the real-time class.

  // Have some realtime?
  if( !queue_empty(&runq_rt) )
  {
      int maxprio = -1;
      phantom_thread_t *best = 0;
      phantom_thread_t *it = 0;

      queue_iterate(&runq_rt, it, phantom_thread_t *, runq_chain)
      {
          if( it->thread_flags & THREAD_FLAG_NOSCHEDULE )
              continue;

          if( ((int)it->priority) > maxprio )
          {
              maxprio = it->priority;
              best = it;
          }
      }

      if( best )
      {
          assert(t_is_runnable(best));
          return best;
      }
  }


An obvious improvement is not to scan all the threads in the scheduler, but to insert each thread into the queue in descending order of numeric priority. Then the scheduler simply takes the first thread off the queue and runs it.

Now for threads with the time-sharing priority class — that is, regular threads politely competing for the processor.

Note that the ticks_left field in the thread state structure determines how many 10 ms intervals the thread will hold the processor.

First, consider what the t_assign_time() function does:

  it->ticks_left = NORM_TICKS + it->priority; 


It checks whether all threads have spent their ticks_left, and if so, assigns them new ticks_left values: threads with higher priority get more processor time.

What does the scheduler itself do? It selects the thread with the highest priority and a non-zero remainder of processor time, and runs it:

  // Have normal one?
  if( !queue_empty(&runq_norm) )
  {
      // If no normal thread has ticks left, reassign
      // ticks and retry
      do {
          unsigned int maxprio = 0; // NB! not a negative number!
          phantom_thread_t *best = 0;
          phantom_thread_t *it = 0;

          queue_iterate(&runq_norm, it, phantom_thread_t *, runq_chain)
          {
              if( it->thread_flags & THREAD_FLAG_NOSCHEDULE )
                  continue;

              if( (it->priority > maxprio) && (it->ticks_left > 0) )
              {
                  maxprio = it->priority;
                  best = it;
              }
          }

          if( best )
              return best;

      } while(t_assign_time());
  }


When all threads have exhausted their remaining ticks, it asks t_assign_time() to hand out new ones.

In general, the full scan here is relatively redundant. Simply adding threads to the tail of the queue and taking them from the head would be fair enough. Scanning all the threads on every pick is obviously bad — don't do that. I will eventually rewrite this piece more optimally, for example as already described above for realtime.

By the way, when reading the scheduler code, keep in mind that threads do not always "eat up" their whole quantum — a thread may block on a synchronization primitive and give up part of its time not of its own accord. This is exactly why the priority-based selection matters: a high-priority thread, once unblocked, gets back onto the processor ahead of the other threads still finishing their quanta.

OK, let's move on to the idle priority class. We only get here if all threads in the previous classes are asleep or absent.

  // Have idle one?
  ret = (phantom_thread_t *)queue_first(&runq_idle);
  if( ret )
  {
      if( ret->thread_flags & THREAD_FLAG_NOSCHEDULE )
          goto idle_retry;

      // Just take first. Switched off thread will become
      // last on runq, so all idle threads will run in cycle
      ret->ticks_left = NORM_TICKS;
      return ret;
  }
  else
      goto idle_no;


Everything is simple here: we take the first thread we find and run it for the standard quantum. Since a thread goes to the tail of runq_idle when taken off the processor, all such threads run in a circle.

Finally, if there is nothing at all, we have a special "idlest" thread for that case.

  STAT_INC_CNT(STAT_CNT_THREAD_IDLE);
  percpu_idle_status[GET_CPU_ID()] = 1; // idle
  return GET_IDLEST_THREAD(); // No real thread is ready to run


Each processor has its own, simply because one thread cannot run in two places at once. Simple as mooing:

  while(1)
  {
      hal_sti();
      hal_wait_for_interrupt();
  }


That's almost everything.

What is not covered here:

Interactive thread priority boost: schedulers usually raise the effective priority of threads seen doing I/O with the console or some other entity that potentially hides an interactive user. This improves the perceived responsiveness of the system — the OS "lags less" from the human's point of view. Conversely, if a thread consistently burns its quantum to the end, the scheduler lowers its priority a little, considering it purely computational — with the same goal of improving responsiveness.

This, of course, applies only to threads of the normal priority class — nobody ever touches real-time priorities.

Real-time scheduling with a guaranteed share of the processor: hard real-time systems have a strict plan within which a thread or group of threads receives a precisely defined amount of processor time.

Priority inversion.

Suppose we have a terribly important thread R with maximum real-time priority, a thread I of the idle priority class that does insignificant nonsense, and an ordinary thread U that works with the user — reads commands from the console. A shell, for example.

The user is doing nothing, and thread U sleeps waiting for input.

Thread I gets the processor, decides to do its nonsense, and wants to allocate some memory. The kernel memory allocator locks a global mutex and starts allocating — running at idle priority, obviously.

At this moment thread R wakes up: it is time to adjust the position of the absorber rods in the reactor core. It also wants some memory.

(Let's not quibble about what U and R are doing on the same machine — U might be a TCP statistics server, for example.)

Naturally, R takes the processor away from I, but runs into the global mutex while allocating memory, and stops.

Thread I would now continue to run, but the user types a command and U gets to work, taking the processor away from I. Now, suddenly, the high-priority real-time thread R is waiting for thread U to finish, and the reactor blows sky high.

To prevent this, systems implement what I have not yet done in Phantom — priority inheritance.

It is formulated as follows: if a higher-priority thread blocks on a resource held by a lower-priority thread, the holder temporarily receives the priority of the blocked thread for the duration of the block.

That is, in our example, thread I should receive the real-time priority of thread R for the time it holds the allocator mutex, displace thread U, and finish the work that is blocking thread R. After unlocking the mutex, its priority should drop back to idle and the processor should go to R, as it should.

Now that is probably everything. :)

Source: https://habr.com/ru/post/282213/
