
CFS vs O(1) scheduler

I think many have heard that, in addition to the standard O(1) scheduler in Linux, there is now CFS (the Completely Fair Scheduler), which Ingo Molnar is working on. I would like to devote this note to a comparison of these two schedulers and a brief review of their basic algorithms. At the end of the note you can also read a little about FreeBSD's ULE scheduler.

preamble:

Some of you may have seen the slashdot article in which Jeff Roberson, the creator of BSD's ULE scheduler, commented on CFS:
Imagine that you are working with hundreds or even thousands of running threads; the cache misses incurred by a log n traversal can be quite expensive, especially since with that many threads you will also have far more scheduling events.


Indeed, extracting the next process from the tree sometimes requires rebalancing, and inserting a process into the tree requires walking one root-to-leaf path plus possible rebalancing. But this drawback only shows up at very large n, which is unlikely on a desktop but can well occur on loaded servers and large workstations. From this we can conclude: it is not worth installing this scheduler on loaded servers, workstations and other machines meant to run a large number of simultaneous processes/threads.
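
To make that cost concrete, here is a toy sketch of the CFS runqueue idea: tasks are keyed by vruntime, and "pick next" means extracting the leftmost (smallest-vruntime) node. This is my own illustration, not kernel code: the real CFS uses the kernel's self-balancing red-black tree (lib/rbtree.c) and caches a pointer to the leftmost node, while here a plain unbalanced BST stands in for it. Both insert and extract walk one root-to-leaf path, i.e. O(height), which is O(log n) on a balanced tree.

/*
 * Toy model of the CFS runqueue: tasks sorted by vruntime,
 * pick_next() extracts the task with the smallest vruntime.
 * A plain BST stands in for the kernel's red-black tree.
 */
#include <stdio.h>
#include <stdlib.h>

struct task {
    unsigned long long vruntime;   /* virtual runtime, the sort key */
    int pid;
    struct task *left, *right;
};

/* Insert walks one root-to-leaf path: O(height) comparisons. */
static struct task *insert(struct task *root, struct task *t)
{
    if (!root)
        return t;
    if (t->vruntime < root->vruntime)
        root->left = insert(root->left, t);
    else
        root->right = insert(root->right, t);
    return root;
}

/* Extract the leftmost task: again one descent, O(height). */
static struct task *pick_next(struct task **root)
{
    struct task **link = root;

    if (!*root)
        return NULL;
    while ((*link)->left)
        link = &(*link)->left;
    struct task *t = *link;
    *link = t->right;              /* splice the node out */
    return t;
}

int main(void)
{
    struct task *rq = NULL;
    unsigned long long vr[] = { 300, 100, 200 };

    for (int i = 0; i < 3; i++) {
        struct task *t = calloc(1, sizeof(*t));
        t->pid = i + 1;
        t->vruntime = vr[i];
        rq = insert(rq, t);
    }

    /* Tasks come out in vruntime order: the "fairest" one first. */
    for (struct task *t; (t = pick_next(&rq)); free(t))
        printf("pid %d, vruntime %llu\n", t->pid, t->vruntime);
    return 0;
}

It is exactly these O(log n) descents, repeated on every scheduling event, that Jeff's cache-miss argument is aimed at.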

And here is how this drawback shows up in numbers:

Ingo has said that with 1000 running tasks (a task being a process or a thread), the cost of context switching increases by about 20%. That corresponds to a tree about 10 levels deep. In the worst case another 5 levels can be added, which brings the increase to about 30%. This is not fatal, but you cannot call it good behavior either.
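
Those numbers line up with simple depth arithmetic: a balanced binary tree over 1000 tasks is about ceil(log2(1000)) = 10 levels deep, and going from 10 to 15 levels is 50% more traversal work, just as 20% -> 30% is a 50% relative increase. A back-of-the-envelope check (my own illustration, not taken from Ingo):

/* Sanity-check the depth arithmetic behind Ingo's 20% -> 30% figures. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    int n = 1000;
    double depth = ceil(log2(n));   /* ~10 levels for 1000 tasks */
    double worst = depth + 5;       /* +5 levels in Ingo's worst case */

    printf("balanced depth: %.0f levels\n", depth);
    printf("worst case:     %.0f levels (~%.0f%% more traversal work)\n",
           worst, 100 * (worst - depth) / depth);
    return 0;
}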


The disadvantages of the O(1) scheduler include its insufficiently “fair” distribution of CPU time and rebalancing across cores. Here is what Ingo himself says in a comment:

The easiest way to describe this is with a simple example: if I run glxgears on my computer, it uses exactly 50% of the CPU. If I use the SD scheduler (a scheduler built on the O(1) design) and run a CPU hog alongside glxgears, both tasks have to share the CPU: the CPU hog gets ~60% of the CPU time, and glxgears ~40%. If I use CFS, both tasks get 50% of the CPU time.
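
The mechanism behind that 50/50 split is easy to sketch: always run the task whose vruntime is smallest, and charge it for the time it ran. The following is a deliberately simplified model (the names, the 4 ms slice and the equal weights are my assumptions; real CFS scales the charge by a task weight derived from its nice level):

/* Minimal model of the "fair" rule: run the task with the smallest
 * vruntime, then charge it for the time it ran. Not kernel code. */
#include <stdio.h>

int main(void)
{
    /* task 0 = "glxgears", task 1 = "cpu hog"; equal nice, equal weight */
    unsigned long vruntime[2] = { 0, 0 };
    unsigned long ran[2] = { 0, 0 };
    const unsigned long slice = 4;  /* ms per scheduling decision */

    for (int tick = 0; tick < 1000; tick++) {
        int next = vruntime[0] <= vruntime[1] ? 0 : 1;
        vruntime[next] += slice;    /* charge the task that just ran */
        ran[next] += slice;
    }

    printf("glxgears: %lu ms (%lu%%)\n", ran[0], 100 * ran[0] / (ran[0] + ran[1]));
    printf("cpu hog:  %lu ms (%lu%%)\n", ran[1], 100 * ran[1] / (ran[0] + ran[1]));
    return 0;
}

With equal weights the two tasks simply alternate, which is exactly where the 50/50 split comes from.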


CFS really does distribute processor time better and spreads tasks across the cores (verified in my own practice).
Another plus of CFS is a noticeably lower response time than with O(1). This is confirmed by tests done by Mikhail Piatrovski; see the results here. So CFS can safely be used on a desktop, where there is no huge number of simultaneously running processes.

how to install CFS:

First of all, note that CFS is available for kernels starting from version 2.6.20.
You can get a patch for kernel versions >= 2.6.20 here.
Then go to the directory with the Linux sources and apply the patch:

# patch -p1 <sched-cfs-v2.6.2x.1-x.patch


A separate git repository has also been set up for CFS development; you can clone it from here.

a few words about ULE:

Each CPU has 3 run queues indexed by priority. Two of them are used to implement scheduling classes such as interrupt, real-time and timesharing; the third queue is used for the idle class.
The queue-switching policy is similar to the O(1) scheduler: when one queue becomes empty, it is immediately swapped with the other, so the switching time does not depend on the class of the processes (a sketch of this layout follows below).
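
Here is a rough sketch of that per-CPU layout, a paraphrase of the idea described above rather than FreeBSD source (the queue count per class, NPRI and all names are my assumptions):

/* Sketch of ULE's per-CPU queues: two queues ("current"/"next") serve
 * the interrupt, real-time and timesharing classes, a third serves the
 * idle class. When "current" drains, the two are swapped in O(1):
 * just a pointer swap, no per-thread work. Illustrative only. */
#include <stdio.h>

#define NPRI 64                     /* illustrative priority levels */

struct thread {
    int tid;
    int pri;
    struct thread *next;
};

struct runq {
    struct thread *heads[NPRI];     /* per-priority thread lists */
    int count;                      /* runnable threads in this queue */
};

struct ule_cpu {
    struct runq q[3];
    struct runq *cur, *nxt;         /* the two timeshare queues */
    struct runq *idle;              /* idle-class queue */
};

/* Take the highest-priority (lowest index) thread from a queue. */
static struct thread *runq_take(struct runq *rq)
{
    for (int p = 0; p < NPRI; p++) {
        if (rq->heads[p]) {
            struct thread *td = rq->heads[p];
            rq->heads[p] = td->next;
            rq->count--;
            return td;
        }
    }
    return NULL;
}

static struct thread *ule_choose(struct ule_cpu *cpu)
{
    if (cpu->cur->count == 0 && cpu->nxt->count != 0) {
        struct runq *tmp = cpu->cur;    /* constant-time queue switch */
        cpu->cur = cpu->nxt;
        cpu->nxt = tmp;
    }
    struct thread *td = runq_take(cpu->cur);
    return td ? td : runq_take(cpu->idle);  /* idle class runs last */
}

int main(void)
{
    struct ule_cpu cpu = { 0 };
    cpu.cur = &cpu.q[0];
    cpu.nxt = &cpu.q[1];
    cpu.idle = &cpu.q[2];

    struct thread a = { .tid = 1, .pri = 10, .next = NULL };
    struct thread b = { .tid = 2, .pri = 4,  .next = NULL };

    /* enqueue both onto the "next" queue, as with expired time slices */
    cpu.nxt->heads[a.pri] = &a; cpu.nxt->count++;
    cpu.nxt->heads[b.pri] = &b; cpu.nxt->count++;

    for (struct thread *td; (td = ule_choose(&cpu)) != NULL; )
        printf("run tid %d (pri %d)\n", td->tid, td->pri);
    return 0;
}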
ULE also uses flexible and efficient policies on SMP systems; according to Jeff, its spreading of tasks between CPUs (quoting) “is at least as ‘fair’ as the ‘completely fair scheduler’” does it.

more about ULE: ULE.pdf and here .

In fact, this was a trial note, the first in this blog. Hence a few questions for those who read it: would you be interested in reading about the implementation of CFS in (much) more detail, and is it worth translating the quotes inserted into these notes from English?
Thanks for your attention.

Source: https://habr.com/ru/post/13014/

