
Preemptive multitasking in Z80 assembler

A slow processor and a small amount of RAM are no reason why preemptive multitasking cannot be implemented on a platform. In fact, the whole point of a multitasking environment is efficient use of processor time: the processor should not sit idle while some programs wait for an event, but should instead be running other programs. Even on platforms such as the ZX Spectrum (Z80 at 3.5 MHz, 48-128 KB of RAM) or 8-bit AVR microcontrollers, preemptive multitasking makes a lot of sense.

I present my own implementation of a multitasking dispatcher in Z80 (ZX Spectrum) assembler. It is not part of any OS and can be used on its own. There is nothing superfluous in it: only the organization of thread execution and synchronization between threads. The dispatcher can be used as part of a software project, as the basis for a more serious scheduler in an OS, or as teaching material.

Architecture of the implemented multitasking system


The architecture was inspired by concepts of the Windows NT kernel, which I studied in the ReactOS sources [2]. Only a minimum of these concepts has been implemented, enough to provide the essential features of multitasking. A fuller implementation is possible, but beyond a certain point additional features stop justifying their cost on small computers.

Threads

Threads [1] are the basic units managed by the dispatcher. Each thread has its executable code and its own stack, which holds subroutine return addresses and other data. The dispatcher switches execution from one thread to another so that, as far as possible, every thread gets to run the code it needs to run.
Each thread can be in one of three states: waiting, ready, and running. In the waiting state, the dispatcher does not let the thread's code execute until the event the thread is waiting for occurs. A thread in the ready state receives control from the dispatcher as soon as this becomes possible. Only one thread at a time can be in the running state, since there is only one processor in the system; its code is executed until the thread goes into the waiting state, or until preemption occurs, i.e. the dispatcher, on its own initiative, transfers control to another thread.

The number of threads in the system is fixed: new threads are not created and existing ones are not terminated. For microcontroller applications, or when the dispatcher is embedded in a single application program, this restriction is not significant, while it simplifies the dispatcher and speeds up its work.

Each thread has a priority. If several threads are in the ready state, the dispatcher selects the one with the highest priority for execution. In the current version of the dispatcher, a thread's priority cannot be changed at run time. Dynamic thread priorities are expensive to implement, although this feature is needed to solve the priority inversion problem [1].

Every thread in the system has a different priority. This means the dispatcher does not provide pseudo-parallel execution of equal-priority code by rapidly switching the processor from one thread to another (round-robin). In practice this limitation is not essential. Pseudo-parallel execution of several threads only slows each of them down; given the limited memory of these computers, it is better to run such programs sequentially. The main advantage of preemptive multitasking is not pseudo-parallel execution, but the efficient sharing of the processor between short tasks that handle requests from important sources (for example, reacting to a key pressed by the user) and long-running programs whose completion time is not critical (compilation, archiving). If you give a high priority to the thread that handles keystrokes and a low priority to an archiving thread, then from the user's point of view the editor will not feel any slower, while in the background the file gets archived as a bonus.

Wait objects (synchronization objects)

A thread can enter the waiting state on such an object by calling the corresponding dispatcher function. A wait object can be signaled or non-signaled. If it is signaled, the wait function returns immediately and the thread continues execution. If the object is non-signaled, the thread enters the waiting state, and other threads that are in the ready state are executed instead. As soon as the object becomes signaled, the waiting thread returns to the ready state and receives control from the dispatcher as soon as possible. Operating systems typically provide wait objects such as events, semaphores, mutexes, and others. In this dispatcher, two kinds of events are implemented: Notification Event (manual reset) and Synchronization Event (auto-reset).

IRQL

IRQL describes the state of the processor. In Windows NT the abbreviation stands for "Interrupt Request Level" [3], although this does not quite capture the meaning of the concept. The dispatcher described here has three IRQL levels. PASSIVE_LEVEL means the processor is currently executing the code of one of the threads; at this time the running thread may be preempted by another thread, or the processor may start handling a hardware interrupt. DISPATCH_LEVEL is the state the processor is in while executing the critical code of the dispatcher. For example, switching execution between threads is an operation consisting of several steps; while it is in progress, one cannot say that the processor is executing any particular thread - it is, as it were, "between them". For this reason, code running at DISPATCH_LEVEL cannot be preempted. Finally, the third level, DIRQL, corresponds to the processor handling a hardware interrupt.

Unlike Windows NT, my dispatcher has no variable in which the current IRQL is stored, nor functions that explicitly raise or lower it. But IRQL as a concept is implied by the design and objectively exists in the system, albeit implicitly.

User code executes either in threads (at PASSIVE_LEVEL) or in the user interrupt service routine (ISR) at DIRQL. The set of dispatcher functions available differs between IRQLs; violating the IRQL requirements leads to a system crash.

User code executing in threads must not disable interrupts. ISR code runs with interrupts disabled, so, conversely, it must not enable them. As for DISPATCH_LEVEL: in Windows NT interrupts are not disabled at this level, whereas in my dispatcher, for simplicity, interrupts are disabled at DISPATCH_LEVEL.

Dispatcher functions


The purpose of each function and a description of how it works are given below. Details of how parameters are passed to these functions are given in the comments in the source code and are not duplicated here, so as not to clutter the text. Wherever possible, the functions are named after the analogous functions of the Windows NT kernel [2, 3].

KeResetEvent

Resets (makes non-signaled) a wait object of type Event. Can be called at any IRQL.

KeSetNotifEvent

Signals a wait object of type Notification Event (manual reset). All threads that were waiting for this object to become signaled go into the ready state. If among them there is a thread with a higher priority than the current one, the current thread is preempted in favor of the highest-priority one.

The object will remain signaled until KeResetEvent is called.

The function can only be called at IRQL = PASSIVE_LEVEL.
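
As a minimal sketch of how a thread might use this pair of calls (the actual register convention for passing the event address is documented only in the source comments; the use of HL below and the event name are assumptions made purely for illustration):

        ; Signal a notification event from a thread (PASSIVE_LEVEL).
        ; ASSUMED convention: HL holds the address of the event object.
        LD   HL, PrintDoneEvent      ; hypothetical manual-reset event
        CALL KeSetNotifEvent         ; wake every thread waiting on it
        ; ... later, once the waiters have reacted ...
        LD   HL, PrintDoneEvent
        CALL KeResetEvent            ; return the event to the non-signaled state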

KeSetNotifEventFromIsr

The same, but for calls at IRQL >= DISPATCH_LEVEL. When this function is called from the ISR, the thread switch, if any, takes place after the ISR has finished.

KeSetSynchrEvent

Signals a wait object of the Synchronization Event (auto-reset) type. If threads were waiting on this object, one of them goes into the ready state and the object immediately returns to the non-signaled state; the remaining threads continue to wait. If no threads were waiting, the object stays signaled until some thread calls the wait function or KeResetEvent on it.

When several threads are waiting for such an object to be signaled, the order in which they leave the wait is not defined.
If the awakened thread has a higher priority than the current one, preemption occurs.

The function can only be called at IRQL = PASSIVE_LEVEL.

KeSetSynchrEventFromIsr

The same, but for calls at IRQL >= DISPATCH_LEVEL. When this function is called from the ISR, the thread switch, if any, takes place after the ISR has finished.

KeWaitForObject

Waits for a wait object (Event) to become signaled. If the object is already signaled when the function is called, the function returns immediately; for a Synchronization Event, the object is also reset to the non-signaled state. If the object is not signaled, the current thread goes into the waiting state, and the dispatcher executes the code of other threads that are in the ready state.

The function can only be called at IRQL = PASSIVE_LEVEL.
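
A hedged sketch of the typical worker-thread pattern built on this function (again, the register used to pass the event address and all names involved are assumptions of this example; the real convention is in the source comments):

        ; Worker thread loop around KeWaitForObject (PASSIVE_LEVEL only).
        ; ASSUMED convention: HL holds the address of the event object.
WorkerLoop:
        LD   HL, DataReadyEvent      ; hypothetical auto-reset event
        CALL KeWaitForObject         ; block until the event is signaled
        CALL ProcessData             ; hypothetical routine that handles the request
        JR   WorkerLoop              ; auto-reset: the event is non-signaled again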

User ISR

This, of course, refers to the ability to execute a user-defined interrupt service routine. It is not set apart as a separate dispatcher function, but the dispatcher's source code has a designated place for calling or inserting it. To interact with threads, this routine can use the KeSetNotifEventFromIsr and KeSetSynchrEventFromIsr functions and thereby wake up a thread, which may lead to another thread being preempted.

While the ISR is running, a separate stack area is used that does not belong to any thread. When a hardware interrupt is serviced, only the interrupt return address (2 bytes) is pushed onto the thread's stack. The other dispatcher functions are also frugal with the stack, so you can save memory by reserving only the minimum stack space for each thread.
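
A sketch of what a user ISR body might look like (where exactly it is hooked in and the parameter convention of KeSetSynchrEventFromIsr are defined in the source; the register use and the names below are assumptions):

        ; User ISR body, called from the dispatcher's ISR (DIRQL).
        ; Interrupts are already disabled here and must not be re-enabled.
        ; ASSUMED convention: HL holds the address of the event object.
UserIsr:
        CALL ReadKeyboard            ; hypothetical hardware polling routine
        LD   HL, KeyPressedEvent     ; hypothetical event some thread waits on
        CALL KeSetSynchrEventFromIsr ; wake one waiter; the switch happens after the ISR
        RET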

The remaining functions of the dispatcher are not intended to be called from user programs.

Configuring the dispatcher for a specific project


To use the dispatcher in a software project, you need to configure it. In the source text, fill in the threads data structure for each thread; an example is given in the source. The main thing to fill in is the stack address of each thread: the last two bytes of each thread's stack must contain the address of its entry point. You should also set the number of threads via the NUM_THREADS constant. The maximum number of threads in the system is 255.
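
The exact layout of the threads table is defined in the dispatcher source; the sketch below only illustrates the rule just stated, that the last two bytes of each thread's stack area hold its entry point (the sizes, the names, and the DEFS/DEFW directives are assumptions of this example):

        ; Illustrative thread stack reservation with pre-loaded entry points.
NUM_THREADS     EQU  2               ; number of threads in this project

Thread1Stack:   DEFS 62              ; stack body, size chosen by the project
                DEFW Thread1Entry    ; last two bytes = entry point of thread 1

Thread2Stack:   DEFS 62              ; stack body of the second thread
                DEFW Thread2Entry    ; last two bytes = entry point of thread 2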

At system start, all threads must be in the ready state. The last thread, which has the lowest priority, must never go into the waiting state. This thread is an analogue of the System Idle Process: it solves no useful task, but exists to "burn" unused processor time. It is recommended to have it execute the HALT instruction in a loop, as in the sketch below.
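
A minimal idle thread of this kind could look as follows (the label name is arbitrary; HALT simply waits for the next interrupt, and interrupts remain enabled at PASSIVE_LEVEL):

        ; Lowest-priority idle thread: burns unused processor time.
IdleThread:
        HALT                         ; sleep until the next hardware interrupt
        JR   IdleThread              ; then sleep again; any ready thread preempts us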

You should also configure the memory locations of the dispatcher itself, the interrupt vector table, and the dispatcher's ISR. The user ISR is called from the dispatcher's ISR.
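
The dispatcher source shows how exactly this is arranged; purely as a reminder of the standard ZX Spectrum IM 2 mechanism that such a configuration relies on, a setup might look roughly like this (all addresses and the DispatcherIsr name are chosen for illustration only):

        ; Illustrative IM 2 setup: a 257-byte vector table at 0FE00h filled with
        ; 0FDh, so the interrupt always vectors through 0FDFDh, where a JP to the
        ; dispatcher's ISR is placed.
SetupIm2:
        DI
        LD   HL, 0FE00h              ; start of the vector table
        LD   A, 0FDh
        LD   B, 0                    ; 256 iterations of DJNZ
Fill:   LD   (HL), A
        INC  HL
        DJNZ Fill
        LD   (HL), A                 ; 257th byte
        LD   A, 0C3h                 ; opcode of JP
        LD   (0FDFDh), A
        LD   HL, DispatcherIsr       ; dispatcher's ISR (name assumed)
        LD   (0FDFEh), HL
        LD   A, 0FEh
        LD   I, A                    ; high byte of the vector table address
        IM   2
        EI
        RET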

Allocating memory for wait objects, of which there can in principle be any number, is left to the user. These objects can be kept on the stack, in global static variables, or, if a heap manager is added to the project, allocated on the heap.
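
For example, the simplest option, a global static variable, might look like this (the actual size and initial contents of the event structure are defined in the dispatcher source; the value below is only a placeholder):

EVENT_SIZE      EQU  4               ; hypothetical size of one event object
DataReadyEvent: DEFS EVENT_SIZE      ; storage for the event used in the sketches above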

Problem situations


The execution environment implemented by the dispatcher is prone to the situations that are problematic for preemptive multitasking systems in general [1]: starvation, race conditions, and priority inversion. The first two problems can be avoided by proper system design, a sensible distribution of tasks across threads, a sensible choice of their priorities, and the use of synchronization primitives (above all, auto-reset Synchronization Events). The third is not solved in my dispatcher, since it has neither dynamic thread priorities nor Mutex-type synchronization objects. If this situation arises in a particular project, it must be taken into account and, if necessary, the above facilities added to the dispatcher.

Source


The source code of the dispatcher, together with descriptions of the data structures, function parameters, and other details, can be downloaded from GitHub.

Literature


1. Wikipedia, article "Multitasking"
2. ReactOS. Source code, "ntoskrnl" component
3. Windows WDK documentation. MSDN, "Kernel-Mode Driver Architecture"

Source: https://habr.com/ru/post/228049/

