Greetings.
Fair warning: this post is full of the author's personal musings, which may well be wrong. The topic is broad and interesting, so discussion in the comments is welcome.
Only now did I realize that, first, I left out a lot of things worth knowing before writing any code, and second, there are far better ways to implement multitasking than the ones I mentioned in the previous issue.
I will also try to follow the many suggestions I received and give the articles a more systematic structure.
Even though this issue is pure armchair theorizing, let's make a plan.
1) Various thoughts on implementing multitasking.
2) Pros and cons of each approach.
3) Deciding how to build our own bicycle.
Let's get started
1) When I first started thinking about how best to approach the subject, a seemingly obvious, "sacred" idea crept up on me. We are talking here about the area where user processes execute (with ring 0 everything is simpler). Let's allocate them space starting at the 16 MiB mark: everything from there to the end of RAM is theirs. The question then becomes how to manage the processes and provide each with everything it needs. The "sacred" idea was the one you have already heard: divide RAM into N chunks (not necessarily equal), organize task queues over them, measure the "weight" of each program, and so on. You probably already want to say: "What next, author, will you sound a battle cry and rush off to pillage villages? This is all rather barbaric." I agree. Such a model is not viable, not extensible, and terribly inefficient, as the well-known comrade Tanenbaum wrote in his remarkable book. With this implementation of multitasking the following problems arise: small programs get less CPU time, and starvation is possible. Because program sizes differ while the partitions are static, competition becomes unequal. In principle this could be solved with strict accounting, but static partitions remain a headache either way.
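As a toy illustration of why static partitioning falls apart, here is a C sketch of a first-fit allocator over fixed partitions (all sizes and names here are made up for the example). The flaws show up immediately: a program larger than the biggest slot can never run, and smaller programs waste the slack in their slots.

```c
#include <stdint.h>

/* Hypothetical boot-time partition table: sizes are fixed forever. */
typedef struct { uint32_t base, size; int used; } partition_t;

static partition_t parts[] = {
    { 16u * 1024 * 1024, 1u * 1024 * 1024, 0 },  /* 1 MiB slot at 16 MiB */
    { 17u * 1024 * 1024, 4u * 1024 * 1024, 0 },  /* 4 MiB slot */
    { 21u * 1024 * 1024, 8u * 1024 * 1024, 0 },  /* 8 MiB slot */
};

/* First-fit: return the base of a free partition big enough
   for the program, or 0 if nothing fits. */
uint32_t partition_alloc(uint32_t prog_size)
{
    for (unsigned i = 0; i < sizeof parts / sizeof parts[0]; i++) {
        if (!parts[i].used && parts[i].size >= prog_size) {
            parts[i].used = 1;
            return parts[i].base;   /* wastes size - prog_size bytes */
        }
    }
    return 0;  /* unequal competition: a 9 MiB program never runs */
}
```

Note that a 512 KiB program occupies the whole 1 MiB slot, and once the slots run out, nothing can be done short of repartitioning.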
2) Then, after mulling it over, another idea came to mind: virtual memory.
Recall the paged organization of memory (Cthulhu bless it).
If we do not use static areas for tasks, a number of other questions arise:
1) How do we keep track of processes as they grow?
2) Can processes overwrite each other?
3) Does a process need a contiguous region of physical memory?
Let's deal with them.
1) To keep processes from running wild, it is enough to implement a memory manager (dispatcher) that "hands out rations" in the form of free pages. In addition, we give each process its own descriptors, which will keep its long fingers in check.
2) It is another of the dispatcher's duties to carefully ensure that one region of memory is not claimed by N processes at once.
3) Honestly, at first I was confused myself about WHERE code executes: in physical or in virtual address space (I know, it sounds strange). The answer is: in virtual space. Just picture the two worlds as in The Matrix: they are connected, and both are real. So do not treat virtual address space as something meaningless that merely extends the maximum amount of addressable RAM.
To understand this better, consider an example. Suppose there is a process whose pieces are scattered all over physical memory. But we can bind pages at particular virtual addresses to those physical addresses, right? A virtual address then sets off on its journey through the page directory and page tables, and at the end of the road lies a perfectly valid physical address. As a result, the process sees its addresses as sequential, even though its code may lie anywhere in physical memory.
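To make the walk concrete, here is a hedged C sketch of the address arithmetic. A single flat array stands in for the real x86 page-directory-plus-page-table pair the MMU walks; all names are illustrative, not part of any real kernel.

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* A toy "page table": maps a virtual page number (the middle 10 bits
   of a 32-bit address) to a physical frame number. */
typedef struct { uint32_t frame[1024]; } page_table_t;

/* Walk the toy table: split the virtual address into a page-table
   index and a 12-bit offset, then glue the offset onto the frame. */
uint32_t translate(const page_table_t *pt, uint32_t vaddr)
{
    uint32_t vpn    = (vaddr >> 12) & 0x3FFu;  /* page-table index */
    uint32_t offset = vaddr & 0xFFFu;          /* offset in the page */
    return pt->frame[vpn] * PAGE_SIZE + offset;
}
```

With this, virtual pages 0 and 1 can point at, say, physical frames 7 and 3: the process sees two consecutive pages, while the bytes actually live in two unrelated spots in RAM.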
The method described above is used in many operating systems today, so it is quite stable and flexible. The only difficulty lies in writing competent, high-quality code.
Now we have quietly arrived at the topic of the memory manager. It is generally considered the hardest part of the whole project. I even heard a joke about it:
"Why do osdev projects stall?
1) Got very busy at work;
2) Got married;
3) Tried to write a memory manager."
But let's not be scared; we will manage. The memory manager will have to "rule justly" so that idyll and a "golden age" reign in RAM:
1) Keep track of pages: free them in time, remap them.
2) Find new free pages on request (most often on a #PF, a page fault).
There are several algorithms for this, also described by Comrade Tanenbaum; we will look at them later.
3) If physical memory runs short, use swap. In this scenario the manager must find the least-used pages, push them out to swap, and later bring them back. In short, it will have to sweat quite a bit.
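Duty 2 — handing out free pages — can be sketched with a frame bitmap, one bit per 4 KiB physical frame. This is only an illustrative sketch under made-up sizes, not the manager we will actually write; all names are hypothetical.

```c
#include <stdint.h>

#define FRAME_COUNT 32768u   /* e.g. 128 MiB of RAM / 4 KiB frames */

/* One bit per frame: set = in use, clear = free. */
static uint32_t frame_bitmap[FRAME_COUNT / 32];

static void frame_set(uint32_t f)   { frame_bitmap[f / 32] |=  (1u << (f % 32)); }
static void frame_clear(uint32_t f) { frame_bitmap[f / 32] &= ~(1u << (f % 32)); }

/* Hand out the first free frame, or -1 if physical memory is
   exhausted (at which point the swap machinery would kick in). */
int32_t frame_alloc(void)
{
    for (uint32_t i = 0; i < FRAME_COUNT / 32; i++) {
        if (frame_bitmap[i] == 0xFFFFFFFFu)
            continue;                 /* these 32 frames are all taken */
        for (uint32_t b = 0; b < 32; b++) {
            if (!(frame_bitmap[i] & (1u << b))) {
                uint32_t f = i * 32 + b;
                frame_set(f);
                return (int32_t)f;
            }
        }
    }
    return -1;
}

void frame_free(uint32_t f) { frame_clear(f); }
```

The #PF handler would call `frame_alloc()`, map the returned frame at the faulting virtual page, and resume the task.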
Now let's describe in words how creating a new task will proceed (bearing in mind that this is not yet an efficient version):
1) Check whether there is even room for another task.
2) Load the program image into memory at a specific address.
3) Set up code, stack, and data descriptors for the task.
4) Ideally, create a separate page directory for the task, with everything that entails.
5) Transfer control to the program code.
6) If a #PF occurs, we look at which task was running and which address it touched when it faulted, appeal to the memory manager with our pleas, and, if we are lucky, receive another page of memory and return to the code.
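The steps above, minus the actual jump to user code, can be sketched in C roughly like this. Every helper here is a hypothetical stub standing in for real kernel work; none of these names come from an actual kernel.

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_TASKS 16

typedef struct {
    int      used;
    uint32_t page_dir;    /* physical address of the task's page directory */
    uint32_t entry_point; /* where control will be transferred */
} task_t;

static task_t tasks[MAX_TASKS];

/* Hypothetical stand-ins for the real work of steps 2 and 4. */
static uint32_t load_image(const void *image)
{
    (void)image;
    return 0x400000u;     /* pretend the image was loaded at 4 MiB */
}
static uint32_t create_page_directory(void)
{
    return 0x9000u;       /* pretend a fresh directory sits here */
}

/* Steps 1-5 from the list; returns a task id, or -1 if full. */
int task_create(const void *image)
{
    /* 1) is there even room for another task? */
    int id = -1;
    for (int i = 0; i < MAX_TASKS; i++)
        if (!tasks[i].used) { id = i; break; }
    if (id < 0)
        return -1;

    tasks[id].used        = 1;
    tasks[id].entry_point = load_image(image);        /* 2) */
    /* 3) code/stack/data descriptors would be set up here */
    tasks[id].page_dir    = create_page_directory();  /* 4) */
    /* 5) the scheduler would now transfer control to entry_point;
       6) later, any #PF for this task goes to the memory manager */
    return id;
}
```

Step 6 is deliberately left out: it lives in the page-fault handler, not in task creation.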
In the next issue we will look at how to write the notorious memory manager, since without it we cannot get anywhere.
What to read:
1) Andrew Tanenbaum, "Modern Operating Systems" (a very useful book).
2) Intel System Programming Manuals (you can't get anywhere without them).