
Simple OS Scheduler Model

Not long ago I tried to find some information here about the Windows scheduler and, to my surprise, found nothing concrete about schedulers in general, so I decided to post this scheduler example in the hope that someone will find it useful. The code is written in Turbo Pascal with inline 8086 assembly.

What does the scheduler actually schedule?

The scheduler is the part of the operating system responsible for the (pseudo-)parallel execution of tasks, threads, and processes. It allocates processor time, memory, stack, and other resources to threads. The scheduler can forcibly take control away from a thread (for example, on a timer tick, or when a thread with a higher priority appears), or it can simply wait until the thread hands control back itself, either explicitly (by calling a certain system procedure) or implicitly (by terminating).
The first kind of scheduler is called preemptive, the second, accordingly, non-preemptive (cooperative).

Scheduling algorithms

There are several algorithms for both preemptive and non-preemptive schedulers. First, which mechanism is better? At first glance it seems obvious that preemptive multitasking always beats non-preemptive. But that is not so. Non-preemptive multitasking has a big advantage over preemptive where the processor must be kept constantly busy actually solving tasks rather than switching between them, preparing them, and waiting. Batch-processing systems are an example. A modern user-facing OS, on the other hand, is hard to imagine without preemptive multitasking (although before the release of Windows NT, everyone got along entirely without it).
So, the main algorithms for non-preemptive scheduling are: first come, first served; shortest job first; shortest remaining time next.

The first one, I think, is clear enough: first come, first served. It is the simplest but, unfortunately, not the most efficient algorithm: while one thread grinds through somebody's kilometre-long integral, another thread will wait an hour in the queue just to add two numbers.

An obvious way to increase efficiency is to move short threads forward in the queue. So first all the threads that only need to compute 2 * 2 are executed, and only then does that kilometre-long integral start. True, this too has its drawbacks: for example, a long thread interrupted by I/O will again end up at the back of the queue.
The third algorithm eliminates this situation: the threads that currently need the least amount of time are placed at the head of the queue.
In batch-processing systems, three scheduling levels are usually used: first a job queue is formed in disk memory (the admission scheduler); from there jobs enter main memory, where they are again ordered by one of the algorithms above. If there turn out to be too many processes, some are swapped back out to disk (the memory scheduler). Finally, the third scheduler, the CPU scheduler, organizes the threads' access to the processor itself.
Obviously, non-preemptive multitasking is not suitable for operating systems that must respond instantly to user actions. In addition, such a mechanism requires the programmer to understand the internal structure of the system, and makes him responsible not only for his own program but for the operation of the whole system. After all, nobody wants to use your product if it hangs the system for a week, right?
Now consider several preemptive scheduling algorithms:

The first algorithm is a direct analogue of the queue, or rather it is the queue (round robin): first come, first to get a time quantum, then back to the end of the queue. The one obvious but very annoying minus is that a thread that uses only 1/10 of its time slice gets as many turns at the CPU as a thread that burns the whole quantum.
The second algorithm solves this problem: each thread is assigned a priority inversely proportional to the fraction of the quantum it used, and threads with a higher priority get to the head of the queue sooner.
The third algorithm is characteristic of interactive (server) operating systems, where processing user requests comes first. Priorities are assigned to threads according to the task they perform: here, for example, handling HTTP requests would get the highest priority, processing operator commands somewhat less, and so on. With such an algorithm, a thread that rarely receives control can be compensated with a larger time slice, so the thread with the lowest priority and potentially the longest wait in the queue receives the most CPU time per turn.
Preemptive multitasking does not require the programmer to understand the subtleties of the scheduler's work, but it has its own pitfalls.

Critical sections

With processor time or the stack everything seems simple, but what if a thread needs, say, to print a kitten on the printer? And what if there are two such threads? With non-preemptive multitasking everything goes like clockwork: one thread prints its kitten, finishes, and hands control to the scheduler, which lets the second thread print. But if multitasking is preemptive, the scheduler may switch threads at a moment when neither has finished printing, and the output turns into something like this:

To prevent this from happening, the mechanism of critical sections is introduced. A thread that wants to occupy an indivisible resource informs the scheduler. If the resource is not yet occupied by another thread, the scheduler lets the thread continue and marks the resource as busy. If the resource is busy, the thread is placed in a queue, where it waits for the resource to be released. When it is done with such a resource, the thread must inform the scheduler that other threads may now use it. These two actions (the attempt to seize a resource and the notice that you are done with it) are called critical brackets. By the way, placed ineptly, they can produce a mutual blocking of threads, which is not always easy to diagnose quickly and can cause very real grief for the developer and the user after the product ships.
Deadlock

Suppose we have indivisible resources A and B and threads X and Y that want to use them. If some clumsy, not entirely competent programmer places the critical brackets like this:
Thread X:
    BorrowResource(A)
    BorrowResource(B)
    ...
    GiveResource(A)
    GiveResource(B)

Thread Y:
    BorrowResource(B)
    BorrowResource(A)
    ...
    GiveResource(B)
    GiveResource(A)

then after a while this situation will arise:


The tasty part

Well, now for what all of this was actually written. As already mentioned, our scheduler is written in Turbo Pascal.
The critical-section mechanism is implemented in the procedures EnterCritical() and LeaveCritical(). Recall once more: to enter a critical section, you check whether it is busy and, depending on the result, either seize it and let the thread continue, or put the thread into the queue and hand control to someone else.
procedure EnterCritical(num: TCriticalNum); { enter a critical section }
begin
  asm cli end;                          { disable interrupts }
  if num <= maxCritical then
  begin
    if criticalTable[num] <> 0 then
    begin
      inc(criticalTableWait[num]);      { register ourselves as waiting }
      while criticalTable[num] > 0 do
      begin                             { while the section is busy, }
        asm sti end;                    { let the other threads run }
        SwitchThreads;
        asm cli end;
      end;
      dec(criticalTableWait[num]);
    end;
    criticalTable[num] := 1;            { mark the section as taken }
  end;
  asm sti end;                          { re-enable interrupts }
end;

With LeaveCritical() everything seems clear:
procedure LeaveCritical(num: TCriticalNum); { leave a critical section }
begin
  asm cli end;
  if num <= maxCritical then
    criticalTable[num] := 0;            { mark the section as free }
  asm sti end;
  if criticalTableWait[num] > 0 then    { if somebody is waiting for it, }
    SwitchThreads;                      { give them a chance right away }
end;

The thread-switching procedure is written with assembler inserts, so you can see the moment of switching from one thread to another down to the machine instructions:
procedure TimerHandler(flags,cs,ip,ax,bx,cx,dx,si,di,ds,es,bp:word); interrupt;
{ Timer interrupt handler: saves the current thread's context and restores the next one's }
const
  ThreadNumber: byte = 0;   { typed constants act as static variables here }
  NumPrev: byte = 0;
  tsoffset: word = 0;
  mainSP: word = 0;
  mainSS: word = 0;
begin
  if not DisableHardwareEvents then
  begin
    asm pushf end;
    oldvec8;  { call the original INT 8 handler so the system clock keeps ticking }
  end;
  if preemptiveSwitch or DisableHardwareEvents then
    if (ThreadsRegistered > 1) or parallelStart then
    begin
      { a switch is needed }
      asm cli end;
      if ParallelStart then
      begin
        { first entry: remember the main program's stack }
        StepCount := 0;
        asm
          mov ParallelFinished,false
          mov mainsp,bp
          mov mainss,ss
        end;
      end;
      inc(StepCount);
      if {ts[ThreadNumber].active and} not ParallelStart then
      begin
        { save the current thread's context; skipped on ParallelStart,
          because at that moment there is no thread context to save yet }
        tsoffset := ThreadNumber * sizeof(TThreadStateWord);
        asm
          push ds             { careful: DS is still needed below! }
          mov ax,seg ts
          mov es,ax
          mov di,offset ts
          add di,tsoffset     { es:di -> this thread's state word }
          mov ax,ss
          mov ds,ax
          mov si,bp           { ds:si -> the registers saved on the stack }
          cld
          mov cx,12
          rep movsw           { copy the 12 saved register words }
          pop ds
          { es:di now points past the 24 copied bytes; SP and SS go next }
          mov ax,bp
          add ax,12*2         { BP plus the 12 saved words gives SP }
          stosw               { store SP }
          mov ax,ss
          stosw               { store SS }
        end;
      end;
      { pick the next active thread }
      NumPrev := ThreadNumber;
      repeat
        ThreadNumber := (ThreadNumber + 1) mod ThreadsRegistered;
      until (ThreadNumber = NumPrev) or TS[ThreadNumber].active;
      if ts[ThreadNumber].active and ((ThreadNumber <> NumPrev) or parallelStart) then
      begin
        { a thread was found, restore its context; the ParallelStart check lets
          thread 0 be started even when it is the only registered thread }
        tsOffset := (ThreadNumber + 1) * sizeof(TThreadStateWord) - 3;
        asm
          mov dx,ds
          mov ax,seg TS
          mov ds,ax
          mov si,offset TS
          add si,tsOffset     { ds:si -> the end of the thread's state word }
          std                 { read it backwards }
          lodsw
          mov ss,ax           { restore SS }
          lodsw
          mov sp,ax           { restore SP }
          mov bp,ax
          sub bp,12*2         { make room for the 12 register words pushed back below;
                                the interrupt epilogue will pop them into the registers }
          mov cx,12
        @m1:
          lodsw
          push ax
          loop @m1
          cld
          mov ds,dx
        end;
      end
      else if (not ts[ThreadNumber].active) and (ThreadNumber = NumPrev) then
      begin
        { no active threads left: restore the old vector and the main stack }
        setintvec($8,@oldvec8);
        asm
          mov parallelfinished,true
          mov ax,mainss
          mov ss,ax
          mov bp,mainsp
          mov sp,bp
        end;
      end;
      parallelstart := false;
    end;
  DisableHardwareEvents := false;
end;

The procedure is compiled with the interrupt directive, that is, it is an interrupt handler. It can be triggered both by hardware (the timer) and from software, by executing int 08h, like this:
procedure SwitchThreads;
begin
  if not parallelFinished then
    asm
      cli
      mov DisableHardwareEvents,1
      int 08h               { invoke the timer handler manually }
      sti
    end;
end;


We also need procedures for registering, starting, and stopping threads. If anyone is interested, see the RegistrThread, RunThread, and StopThread procedures in the source code.
That's all! Our scheduler is ready.
The sources, together with an example of a multi-threaded program written for this scheduler and a DOS Turbo Pascal, can be downloaded here . You can play around and see how differently the threads execute with preemptive and non-preemptive multitasking (the ExecuteRegisterThreads(true/false) procedure), simulate a deadlock, and verify that it is not always diagnosable (I once waited a minute for the deadlock to occur).
On systems newer than Win98, I recommend running it from DOSBox.

Source: https://habr.com/ru/post/154609/

