
Another post in a blog dedicated to building a toy OS.
Last time I wrote about the need for a simple AHCI (SATA) driver. Before moving in that direction, I decided to sketch out a driver infrastructure: a common driver interface plus a refined storage device driver interface. Formulating these interfaces revealed a problem I had not paid attention to before: portability.
Platform-independent code (for example, most of the scheduler, helper code like kprintf, ...) was mixed with code written specifically for x86_64 (system descriptor tables, the APIC, interrupts, ...). Although nothing prevented me from formulating a driver interface tied to x86_64 (in particular, one free to operate on PCI addresses), it became clear that without a clean separation between platform-specific code and the common portable code, I would only make the situation worse. So I decided to go through everything written so far, separating the common code (in the root src/) from the platform-specific code (in src/x86_64/). This is what I have been doing for the last two weeks.
Let me briefly describe the code separation mechanism using the scheduler as an example. The scheduler interface src/schedule.h includes (#include) the special file src/x86_64/schedule.inc, which contains platform-dependent static inline functions (both interface and internal ones). All internal symbols (not part of the interface, yet not static) carry the "__" prefix. The main scheduler implementation lives in src/schedule.c; individual internal functions and the assembly code live in src/x86_64/schedule.c. Thus the scheduler code is "spread" across two directories. Of course, this complexity is only needed in the general case; many modules are laid out more simply. For example, for cpu_info (information about logical processors) the header is in src/ and the implementation is in src/x86_64/, while the fully platform-specific APIC code is placed entirely in src/x86_64/.
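To make the layering concrete, here is a minimal sketch of what the split might look like. The actual contents of these files are not shown in this post, so apart from the include mechanism and the "__" naming rule, the specific functions below are illustrative assumptions:

/* src/schedule.h - portable part of the scheduler interface (sketch) */
#pragma once
#include <stdbool.h>
#include <stdint.h>

/* Platform-dependent static inline functions (interface and internal)
   are pulled in from the platform directory. */
#include "x86_64/schedule.inc"

/* Portable interface, implemented in src/schedule.c. */
uint64_t get_ticks(void);

/* Internal but non-static symbol, implemented in src/x86_64/schedule.c,
   hence the "__" prefix (hypothetical example). */
void __schedule_timer_tick(int cpu);

/* src/x86_64/schedule.inc - x86_64-specific inline helpers
   (hypothetical example) */
static inline bool interrupts_enabled(void) {
  uint64_t rflags;
  __asm__ volatile("pushfq; popq %0" : "=r"(rflags));
  return (rflags & (1u << 9)) != 0;  /* IF bit of RFLAGS */
}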
Now about the promised sleep function. Unlike the mutex, implementing sleep required a (minimal) modification of the scheduler. The following functions were added to the interface part (src/schedule.h):
typedef void (*timer_proc)(uint64_t ticks);

uint64_t get_ticks(void);
timer_proc get_timer_proc(void);
uint64_t get_timer_ticks(void);
That is, the scheduler now also acts as a timer: it maintains a tick counter (incremented on internal timer interrupts) and calls a handler function once the tick count reaches a specified value. Consider the implementation of this mechanism (src/schedule.c):
static inline void handle_timer(int cpu) {
  if (cpu == get_bsp_cpu())
    ticks++;
  if (timer_ticks[cpu] && timer_ticks[cpu] <= ticks) {
    uint64_t prev_ticks = timer_ticks[cpu];
    timer_ticks[cpu] = 0;
    set_outer_spinlock(true);
    timer_proc_(prev_ticks);
    set_outer_spinlock(false);
  }
}
The handle_timer function is called on every timer interrupt. Although the ticks counter is incremented only on the bootstrap processor, the timer can be programmed independently for each logical processor. Wrapping the handler call in set_outer_spinlock is necessary so that a release_spinlock call inside the handler does not accidentally execute an STI instruction (remember, we are in interrupt context).
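As a usage sketch, a module that wants a callback after some number of ticks would register its handler and program the current CPU's deadline roughly like this. The listing above only shows the getters, so set_timer_proc is assumed to exist as the counterpart of get_timer_proc; set_timer_ticks is used by the sleep code below:

#include <stdint.h>
#include "schedule.h"  /* get_ticks() and the timer setters */

/* Handler invoked by the scheduler in timer-interrupt context on the CPU
   whose deadline expired. */
static void my_timer_proc(uint64_t ticks) {
  (void)ticks;  /* e.g. wake something up */
}

/* Register the handler and program a deadline for the current CPU. */
static void arm_timer_example(uint64_t delay_ticks) {
  set_timer_proc(my_timer_proc);               /* assumed setter */
  set_timer_ticks(get_ticks() + delay_ticks);  /* deadline in ticks */
}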
Now, using this extended scheduler functionality, we can implement sleep (src/sync.c):
struct sleep_node {
  struct sleep_node *next;
  thread_id thread;
  uint64_t ticks;
};

static struct sleep_data {
  struct sleep_node *tail;
  struct mem_pool pool;
  struct spinlock lock;
} sleeping[CONFIG_CPUS_MAX];

static void sleep_timer_proc(UNUSED uint64_t ticks) {
  struct sleep_data *slp = &sleeping[get_cpu()];
  if (slp->tail) {
    struct sleep_node *node = slp->tail;
    slp->tail = slp->tail->next;
    if (slp->tail)
      set_timer_ticks(slp->tail->ticks);
    resume_thread(node->thread);
    free_block(&slp->pool, node);
  }
}

err_code sleep(uint64_t period) {
  struct sleep_data *slp = &sleeping[get_cpu()];
  err_code err = ERR_NONE;
  acquire_spinlock(&slp->lock, 0);
  struct sleep_node *node = alloc_block(&slp->pool);
  if (node) {
    node->thread = get_thread();
    node->ticks = get_ticks() + period / CONFIG_SCHEDULER_TICK_INTERVAL;
    if (!slp->tail || slp->tail->ticks > node->ticks) {
The code above (the listing breaks off partway through sleep) needs some explanation:
1. Each logical processor has its own instance of the sleep_data structure. The per-processor pools are independent because mem_pool is not thread-safe.
This, by the way, is why the mutex code hides a serious bug: all mutexes share a single pool of mutex_node structures, whereas each mutex needs a pool of its own. I plan to fix that soon.
2. As the code shows, threads are inserted into the list ordered by wake-up time (in ticks); a sketch of the part of sleep that is cut off from the listing above is given after these notes.
3. The sleep_timer_proc function is the handler called by the scheduler in timer-interrupt context; its job is to wake up the right thread.
The rest should be fairly self-explanatory.
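For completeness, here is a rough sketch of how the truncated part of sleep presumably continues, based on the notes above: the node is inserted in wake-up order, the timer is reprogrammed when the new node becomes the earliest deadline, and the current thread is suspended. This is a reconstruction, not the original code; pause_this_thread and ERR_NO_MEMORY stand in for whatever suspend primitive and error code the kernel actually uses, and it picks up at the last line of the listing above:

    /* ...continuation of sleep() (reconstruction, not the original code) */
    if (!slp->tail || slp->tail->ticks > node->ticks) {
      /* New earliest deadline: put the node at the head and reprogram
         this CPU's timer. */
      node->next = slp->tail;
      slp->tail = node;
      set_timer_ticks(node->ticks);
    } else {
      /* Walk the list to keep it ordered by wake-up time. */
      struct sleep_node *prev = slp->tail;
      while (prev->next && prev->next->ticks <= node->ticks)
        prev = prev->next;
      node->next = prev->next;
      prev->next = node;
    }
  } else {
    err = ERR_NO_MEMORY;          /* assumed error code */
  }
  release_spinlock(&slp->lock);
  if (err == ERR_NONE)
    err = pause_this_thread();    /* assumed suspend primitive, the
                                     counterpart of resume_thread() */
  /* The real code has to guarantee the timer cannot fire between queuing
     the node and suspending the thread; how that is achieved depends on
     how acquire_spinlock and the suspend primitive interact with
     interrupts, which is not shown here. */
  return err;
}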