Once upon a time I wrote an article about the principles of designing applications for embedded systems. In it I said that there are two basic approaches - the infinite loop and a real-time OS. But recently I learned that there is a third approach - the so-called Time-Triggered Design.
Michael J. Pont's “Patterns for Time-Triggered Embedded Systems” book served as my introduction to the approach; for those interested, it is available at www.safetty.net/publications/pttes. I will try to outline the concept here.
The concept is based on the following ideas:
- there is a single periodic interrupt in the system - tick;
- tasks do not have priorities;
- control of the next task is transferred only after the completion of the task currently running.
This set of principles is also called a cooperative scheduler. A classic RTOS, by contrast, uses a preemptive scheduler.
As advantages, the author cites ease of implementation, very low overhead and, strange as it may sound, reliability.
As drawbacks - the need for more careful design. For example, one of the requirements is that each task's execution time be as short as possible - ideally, significantly shorter than the tick period.
Pseudocode demonstrating this principle.
void main(void)
{
    scheduler_init();
    add_task(Function_A, 2);
    add_task(Function_B, 10);
    add_task(Function_C, 15);
    scheduler_start();
    while (1)
    {
        dispatch_tasks();
    }
}
So far everything should be clear - we initialize the scheduler, add three tasks that will be executed at the specified intervals in ticks, start the scheduler, and enter the endless task-dispatching loop.
Structure describing the context of the task:
typedef struct
{
    void (* pTask)(void);   /* pointer to the task function */
    uint32 Period;          /* launch period in ticks */
    uint32 PeriodCur;       /* ticks left until the next launch */
    uint8 RunMe;            /* number of pending runs */
} task_descriptor_t;
Indeed, compared to an RTOS, the “overhead” is much smaller - a pointer to the function, the launch period, the current countdown of how many ticks remain until the next launch, and a counter of how many times the task should be run.
task_descriptor_t all_task_list[MAX_TASKS];
The task list is a regular array of predetermined length.
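The book's scheduler_init() and add_task() are not quoted above, but against such an array they almost write themselves. Here is a minimal sketch of mine (not the book's code; I assume a NULL pTask marks a free slot and use standard uint32_t/uint8_t instead of the book's typedefs):

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_TASKS 8

typedef struct {
    void (*pTask)(void);   /* pointer to the task function */
    uint32_t Period;       /* launch period in ticks */
    uint32_t PeriodCur;    /* ticks left until the next launch */
    uint8_t  RunMe;        /* number of pending runs */
} task_descriptor_t;

task_descriptor_t all_task_list[MAX_TASKS];

/* Empty the list: a NULL function pointer marks a free slot. */
void scheduler_init(void)
{
    size_t i;
    for (i = 0; i < MAX_TASKS; i++)
        all_task_list[i].pTask = NULL;
}

/* Register a task in the first free slot; returns the slot index,
   or -1 if the list is full. */
int add_task(void (*pTask)(void), uint32_t period)
{
    size_t i;
    for (i = 0; i < MAX_TASKS; i++) {
        if (all_task_list[i].pTask == NULL) {
            all_task_list[i].pTask     = pTask;
            all_task_list[i].Period    = period;
            all_task_list[i].PeriodCur = period;
            all_task_list[i].RunMe     = 0;
            return (int)i;
        }
    }
    return -1;
}
```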
The scheduler itself hooks the timer interrupt, configured to fire at a specified rate - for example, every 1 ms. That is the tick.
void scheduler_update(void) interrupt
{
    uint8 i;
    for (i = 0; i < MAX_TASKS; i++)
    {
        all_task_list[i].PeriodCur--;
        if (all_task_list[i].PeriodCur == 0)
        {
            all_task_list[i].PeriodCur = all_task_list[i].Period;
            all_task_list[i].RunMe++;
        }
    }
}
In the handler we walk through the entire task list, decrementing each task's count of ticks remaining until launch; when it reaches 0, we reload it and increment the run counter.
And finally - the dispatcher, which spins in the endless loop.
void dispatch_tasks(void)
{
    uint8 i;
    for (i = 0; i < MAX_TASKS; i++)
    {
        if (all_task_list[i].RunMe > 0)
        {
            all_task_list[i].pTask();
            all_task_list[i].RunMe--;
        }
    }
}
Again we walk through the task list; if a task's run counter is greater than zero, we start the task by directly calling its function and decrement the counter.
And that's really all!
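To check that the pieces actually fit together, the whole scheme can be simulated on a desktop machine. A sketch of mine: the interrupt qualifier is gone - scheduler_update() is simply called once per simulated tick - and two hard-coded tasks stand in for add_task().

```c
#include <stdint.h>

#define MAX_TASKS 2

typedef struct {
    void (*pTask)(void);
    uint32_t Period;
    uint32_t PeriodCur;
    uint8_t  RunMe;
} task_descriptor_t;

static int runs_a, runs_b;
static void task_a(void) { runs_a++; }
static void task_b(void) { runs_b++; }

static task_descriptor_t all_task_list[MAX_TASKS] = {
    { task_a, 2,  2,  0 },   /* every 2 ticks  */
    { task_b, 10, 10, 0 },   /* every 10 ticks */
};

/* On real hardware this would be the timer ISR. */
static void scheduler_update(void)
{
    int i;
    for (i = 0; i < MAX_TASKS; i++) {
        all_task_list[i].PeriodCur--;
        if (all_task_list[i].PeriodCur == 0) {
            all_task_list[i].PeriodCur = all_task_list[i].Period;
            all_task_list[i].RunMe++;
        }
    }
}

static void dispatch_tasks(void)
{
    int i;
    for (i = 0; i < MAX_TASKS; i++) {
        if (all_task_list[i].RunMe > 0) {
            all_task_list[i].pTask();
            all_task_list[i].RunMe--;
        }
    }
}

/* Run the scheduler for a given number of simulated ticks. */
void run_simulation(int ticks)
{
    int t;
    for (t = 0; t < ticks; t++) {
        scheduler_update();  /* the "timer interrupt" */
        dispatch_tasks();    /* the endless loop's body */
    }
}
```

Over 100 ticks a period-2 task runs 50 times and a period-10 task 10 times, as expected.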
Indeed, the implementation is ridiculously simple (and therefore easily portable anywhere). Indeed, it is better than an endless loop. Indeed, no synchronization primitives are required - no semaphores, queues, or critical sections. Indeed, no context switching is required.
But I don't like it. And here's why.
- The requirement of one and only one interrupt in the system means that all work with peripherals must happen in polling mode. That imposes its own limitations and worsens the system's response time.
- Walking through the task list means that, in the worst case, all preceding tasks in the list will be called before control reaches the last one. The reaction time to an external event is again far from predictable.
- If something goes wrong in one of the tasks, control may never reach the last one on the list. This is cooperative mode, and each task must return control to the dispatcher itself!
- There is a restriction on the execution time of a single task. It follows that for any more or less prolonged action, instead of simple linear code we will have to build state machines.
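To make the last point concrete, here is what "building a state machine" looks like for even a trivial long operation - my own sketch, not code from the book. spi_exchange_byte() is a stand-in that just logs bytes, and the 0x02 WRITE opcode and command layout are assumptions about a generic SPI EEPROM:

```c
#include <stdint.h>

/* Stand-in for a one-byte SPI exchange: on real hardware this would
   touch the SPI data register; here it just records the traffic. */
static uint8_t spi_log[8];
static int     spi_log_len;
static void spi_exchange_byte(uint8_t out) { spi_log[spi_log_len++] = out; }

typedef enum {
    ST_IDLE, ST_SEND_CMD, ST_SEND_ADDR_HI, ST_SEND_ADDR_LO,
    ST_SEND_DATA, ST_DONE
} write_state_t;

static write_state_t state = ST_IDLE;
static uint16_t addr;
static uint8_t  data;

/* Arm the state machine; the actual work happens one tick at a time. */
void x25_write_start(uint16_t a, uint8_t d)
{
    addr = a;
    data = d;
    state = ST_SEND_CMD;
}

int x25_write_done(void) { return state == ST_DONE; }

/* The periodic task: at most ONE byte on the bus per call, so every
   call stays far shorter than the 1 ms tick. */
void x25_write_task(void)
{
    switch (state) {
    case ST_SEND_CMD:
        spi_exchange_byte(0x02);                 /* assumed WRITE opcode */
        state = ST_SEND_ADDR_HI;
        break;
    case ST_SEND_ADDR_HI:
        spi_exchange_byte((uint8_t)(addr >> 8));
        state = ST_SEND_ADDR_LO;
        break;
    case ST_SEND_ADDR_LO:
        spi_exchange_byte((uint8_t)(addr & 0xFF));
        state = ST_SEND_DATA;
        break;
    case ST_SEND_DATA:
        spi_exchange_byte(data);
        state = ST_DONE;
        break;
    default:
        break;
    }
}
```

Writing a single byte now takes four ticks instead of one linear call - and real code would also need chip-select handling, the write-enable command, and status-register polling, each of them more states.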
For the last point there is a wonderful example in the same book. Besides focusing on schedulers, two thirds of the book are devoted to the basics of embedded systems, working with peripherals and protocols. Here is an example of working with SPI from that book.
void SPI_X25_Write_Byte(const tWord ADDRESS, const tByte DATA)
{
    /* ...body omitted here: a sequence of SPI_Exchange_Bytes() calls... */
}
Simple and clear linear code, it would seem. But when you try to use it in the design described above, the following problems appear:
- SPI_Exchange_Bytes() itself uses blocking wait loops for peripheral readiness - we only have polling, remember? (I will not quote the code, there is already plenty of it here; just believe that they are there.) And since a peripheral may suddenly fail, each wait loop has a timeout - which in this function is as much as 5 ms! As a result, this simple function makes five calls to the byte-exchange function over SPI, each of which can take up to 5 ms in the worst case. Remember the requirement that a task complete in a time significantly shorter than one tick (which is 1 ms here)? So now, instead of simple and clear code for working with an SPI EEPROM, I have to write a complex state machine so that no more than one byte is transmitted per call? And even a single SPI_Exchange_Bytes() call can take 5 ms in an unfavorable scenario, so to reduce the possible delay further I would have to rewrite even the simpler SPI_Exchange_Bytes() so that its timeout is spent not in one 5 ms chunk but in small 100 µs pieces across repeated calls? All I want to say is: "Are they serious?".
- In one of my real projects I need to transfer 1 megabyte of data over SPI to FLASH. Now let's count: if the task is called every millisecond, and in one call I cannot transfer more than 1 byte - how long will transferring 1 megabyte take? Roughly 17.5 minutes. Of course, you can get around this by transferring not one byte but several per call, but the code becomes even more complicated - I still have to make sure that the total time spent does not exceed, say, 300 µs, since the requirement that a call be shorter than the tick still holds!
- Not to mention that, if it were possible to enable an SPI interrupt, the task would be simplified even further - I would just write the data block into a buffer, send the first byte, and have the interrupt handler send each following byte from the buffer until the transfer is finished. But a second interrupt in the system would break the foundation of Time-Triggered Design, so forget about it.
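For comparison, here is a sketch of the interrupt-driven variant that the single-interrupt rule forbids. This is my code, not the book's; the "data register" and the ISR are simulated, since the real ones depend on the chip:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define TX_BUF_SIZE 64

/* Simulated SPI data register: writing it "starts" a byte transfer;
   here the byte is just recorded for inspection. */
static uint8_t spi_sent[TX_BUF_SIZE];
static size_t  spi_sent_len;
static void spi_write_dr(uint8_t b) { spi_sent[spi_sent_len++] = b; }

static uint8_t tx_buf[TX_BUF_SIZE];
static size_t  tx_len, tx_pos;

/* Start a block transfer: copy the data, push the first byte by hand,
   and let the "transfer complete" interrupt feed out the rest. */
void spi_send_block(const uint8_t *data, size_t len)
{
    memcpy(tx_buf, data, len);
    tx_len = len;
    tx_pos = 0;
    spi_write_dr(tx_buf[tx_pos++]);
}

/* On real hardware this is the SPI "TX complete" ISR; in this sketch
   it is called manually once per completed byte. */
void spi_tx_complete_isr(void)
{
    if (tx_pos < tx_len)
        spi_write_dr(tx_buf[tx_pos++]);
}
```

The CPU cost per byte drops to a few instructions in the handler, with no per-tick budget to respect - which is exactly why the restriction to a single interrupt hurts here.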
It is somehow strange that the authors did not show the SPI example above adapted to their own design. Scared, I guess.
The book also considers a so-called hybrid scheduler, where one more interrupt is enabled, in whose context a single high-priority task can run. But that does not change the general picture.
The second part of the book discusses how to adapt the design to a system of several microcontrollers. The main idea: the master microcontroller uses its timer as the source of the tick interrupt, while all the others use an external interrupt as their tick - the master generates it from its handler through a GPIO pin connected to the external-interrupt inputs of the other microcontrollers. This keeps all the microcontrollers synchronized. The idea is interesting, but unfortunately it does not do much to solve the problems described above.
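The master's side of that scheme fits in one line of its tick handler. A sketch under the assumption that pulsing some GPIO pin is what triggers the slaves' external interrupts (the pin here is simulated by a counter):

```c
#include <stdint.h>

/* Simulated GPIO line wired to the slaves' external-interrupt inputs;
   on real hardware this would be a register write toggling a pin. */
static uint32_t pulses_sent;
static void tick_pin_pulse(void) { pulses_sent++; }

static uint32_t master_ticks;

/* Master's timer ISR: do the usual local tick bookkeeping, then pulse
   the shared line so every slave receives the same tick. */
void master_timer_isr(void)
{
    master_ticks++;       /* the local scheduler_update() would go here */
    tick_pin_pulse();     /* slaves' external interrupt fires on this edge */
}
```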
In general, the approach probably has a right to exist - say, where a small microcontroller that is idle 99% of the time handles a couple of external events that do not require an immediate response within a specified time. On the other hand, a plain superloop works fine there too, and with it you can use multiple interrupts.
But for situations where there are more events, where it is much easier to work with peripherals via interrupts, where reaction time and stability are more or less critical, and where you need to squeeze maximum performance out of the controller - I will remain a supporter of an RTOS. Even if it brings a certain unpredictability and the need to use synchronization primitives correctly, the benefits of a stricter separation of tasks from each other and from the scheduler's quirks still outweigh that, it seems to me.