I want to talk a little about two approaches to designing embedded software: using a super-loop, or using an RTOS (Real-Time Operating System).
I think that in the course of the story it will also become clear in which cases the first is appropriate, and in which you cannot do without the second.
I hope it will be interesting to everyone who wants a look into the world of embedded development. For those who have already cut their teeth on embedded, there will most likely be nothing new.
First, a little theory (for those taking their very first steps).
We have a microcontroller, which is essentially a processor, some memory, and various peripherals, for example: analog-to-digital converters (ADCs), timers, Ethernet, USB, SPI - the exact set depends heavily on the controller and on the tasks to be solved.
You can, for example, connect a sensor to an ADC input - say, a temperature sensor that, when powered, converts temperature into a voltage measured by that ADC.
And to a controller output, called a GPIO (General-Purpose Input/Output) pin, you can, for example, connect an LED (or something more powerful, like a motor, through an amplifier).
Via SPI, RS232, USB, etc., the controller can communicate with the outside world in a more complex way - by receiving and sending messages using a predefined protocol.
In 90% of cases the software is written in C; sometimes C++ or assembly is used. That said, there are more and more opportunities to write in something higher-level, as long as it does not involve direct work with the peripherals and does not require the maximum possible speed.
To better picture what you have to deal with, here is a typical environment: the controller's flash (the analog of a hard drive) is 16-256 kilobytes, and the RAM is 64-256 kilobytes! And in such an environment it really is possible to run not only the application, but also a real-time operating system with full-fledged multitasking support!
The examples below are in pseudo-code, in places very similar to C, without implementation details where they are not essential for understanding.
So, the "super cycle" approach.
In this approach the program looks simple:
int main()
{
    while (1) {
        doSomething();
        doSomethingElse();
        doSomethingMore();
    }
}
An infinite loop in which the controller does, one after another, everything it needs to do.
The most interesting part of embedded systems is, of course, working with the peripherals (those same ADCs, SPI, GPIO, etc.). The controller can work with external peripherals in two ways: polling or interrupts. In the first case, if we want, for example, to read a character from an RS232 console, we periodically check whether a character has arrived until we get one. In the second case, we configure the RS232 controller to generate an interrupt when a new character appears.
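To make the difference concrete, here is a minimal sketch of both ways, in the same pseudo-C as the rest of the article (uart_data_ready() and uart_read_ch() are assumed helper names, not a real API):

/* Polling: keep asking the hardware until a character arrives. */
char read_ch_polling()
{
    while (!uart_data_ready()) {
        /* busy-wait: the processor does nothing useful here */
    }
    return uart_read_ch();
}

/* Interrupts: the hardware tells us when a character arrives. */
volatile char last_ch;
volatile int ch_received = 0;

interrupt void uart_rx_handler()
{
    last_ch = uart_read_ch();
    ch_received = 1;
    clear_interrupt_condition();
}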
A demonstration of the first approach (polling). Suppose we want to monitor the temperature and, if it exceeds a set limit, light an LED. It will look something like this:
int main()
{
    init_adc();
    init_gpio_as_out();
    while (1) {
        int temperature = readTemperature();
        if (temperature > TEMPERATURE_LIMIT) {
            turnLedOn();
        } else {
            turnLedOff();
        }
    }
}
So far everything should be simple and clear. (I will not show the temperature-reading function or the LED manipulations - that is not the point of this article.)
But what if we need to do something at a given frequency? In the example above, the temperature is checked as often as possible. What if we need, say, to blink the LED once a second? Or to poll a sensor strictly every 10 milliseconds?
Then timers come to the rescue (practically every microcontroller has them). A timer is configured to generate an interrupt at a given frequency. The blinking LED then looks something like this:
volatile int interrupt_happened = 0;

interrupt void timer_int_handler()
{
    interrupt_happened = 1;
    clear_interrupt_condition();
}

int main()
{
    init_timer(1_SECOND_INTERVAL, timer_int_handler);
    while (1) {
        if (interrupt_happened) {
            ledToggle();
            interrupt_happened = 0;
        }
    }
}
A peculiarity of working with interrupts is that the interrupt handler (the code called directly at the moment the interrupt occurs) should be as short as possible. Therefore the most common solution is to set a global flag variable in the handler (yes, yes, there is no escaping global variables, alas), check it in the main loop, and when it changes, do the main work required to handle the event.
This global variable must be declared with the volatile qualifier - otherwise the optimizer may simply throw away code that, from its point of view, is unused.
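A sketch of what can go wrong without it (hypothetical, but this is exactly the kind of code optimizers break):

int interrupt_happened = 0;   /* no volatile! */

int main()
{
    while (1) {
        /* The optimizer sees nothing inside the loop that changes
           interrupt_happened, so it may read it once, decide it is
           always 0, and optimize the check away entirely. The write
           done by the interrupt handler is then never noticed. */
        if (interrupt_happened) {
            ledToggle();
            interrupt_happened = 0;
        }
    }
}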
And what if we need to blink two LEDs, so that one blinks once a second and the other three times a second? We could, of course, use two timers, but with that approach we would run out of timers very quickly. Instead, we make one timer run at a much higher frequency and use software dividers in the program:
volatile uint millisecond_counter = 0;

interrupt void timer_int_handler()
{
    ++millisecond_counter;
    clear_interrupt_condition();
}

int main()
{
    init_timer(1_MILLISECOND_INTERVAL, timer_int_handler);
    uint timestart1 = millisecond_counter;
    uint timestart2 = millisecond_counter;
    while (1) {
        if (millisecond_counter - timestart1 > 1000) {   /* once a second */
            led1Toggle();
            timestart1 = millisecond_counter;
        }
        if (millisecond_counter - timestart2 > 333) {    /* three times a second */
            led2Toggle();
            timestart2 = millisecond_counter;
        }
    }
}
Note that we do not need to worry about the millisecond counter overflowing: since an unsigned type is used, the subtraction still gives the correct elapsed time.
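A quick worked example of why the wraparound is harmless (assuming a 32-bit uint):

uint timestart = 0xFFFFFF38;    /* 200 ticks before the counter overflows */
uint now       = 0x00000384;    /* 900 ticks after it wrapped to zero     */

/* Unsigned subtraction is modulo 2^32, so the difference is still the
   true elapsed time: 200 + 900 = 1100 ticks. */
uint elapsed = now - timestart; /* == 1100 */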
Now imagine that we have a debug console implemented on top of the RS232 interface (the most common solution in the embedded world!). We want to print debug messages there (visible when our controller is connected to a computer via a COM port). And at the same time we must poll a sensor connected to the controller at a strictly set (high) frequency.
And here the question arises: how do we implement such a banal thing as printing a line to the console? An obvious solution like
void sendString(char * str)
{
    foreach (ch in str) {
        put_ch(ch);
    }
}
will not do in this case. It will print the string, but it will also hopelessly violate the requirement to poll the sensor at a strictly specified frequency. We do everything in one big loop where all actions run sequentially, remember? The console is a slow device, and printing a string can take far longer than the required interval between successive sensor polls. The example below shows how not to do it!
int main()
{
    while (1) {
        …
        if (something) {
            send_string("something_happened");
        }
        …
        if (10_millisecond_timeout()) {
            value = readADC();
        }
    }
}
Another example: suppose you want to implement software overload protection. You add a current meter, connect it to the controller's ADC, and control the safety relay via one of the I/O pins. Naturally, you want the protection to trip as soon as possible after an overload occurs (otherwise everything simply burns out). But you have that same single big loop, in which all actions are performed strictly in order. The guaranteed reaction time to an event can never be less than the time of one loop iteration, and if the loop contains long-running operations, it is they that will set the system's reaction time to everything else.
And if an error creeps in somewhere in this loop, the whole system goes down with it, including the reaction to overload (which we really would not want to happen, would we?).
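For illustration, a sketch of what the protection looks like when squeezed into the super-loop (readCurrent(), relayOff(), and the other calls are assumed names). The worst-case reaction time is the sum of everything else in the loop:

int main()
{
    while (1) {
        if (readCurrent() > CURRENT_LIMIT) {
            relayOff();             /* must happen as fast as possible... */
        }
        send_string("status...");   /* ...but a slow console print,       */
        pollSensors();              /* sensor polling, and everything     */
        processUserCommands();      /* else delays the next current check */
    }
}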
Although with the first problem something can, in theory, be done. For example, replace the simplest but slow string-printing function with something like:
int position = 0;

int send_string(char * str)
{
    if (position < strlen(str)) {
        put_ch(str[position]);
        ++position;
        return 1;   /* still printing */
    } else {
        return 0;   /* done */
    }
}
And replace the simple call to this function with something like:
int main()
{
    while (1) {
        …
        if (something) {
            do_print = 1;
            position = 0;
        }
        if (do_print) {
            do_print = send_string("something_happened");
        }
        …
        if (10_millisecond_timeout()) {
            value = readADC();
        }
    }
}
As a result, we have reduced the time of one loop iteration from the time needed to print a whole string to the time needed to print one character. But in exchange for the primitive, at-first-glance-obvious string output function we had to add two state machines to the code: one inside the printing function (to remember the position), and one outside it (to remember that we are still printing the string over the next several iterations). Long live global variables, "dirty" functions, stored state, and other such wonderful things that can easily and very quickly turn code into unmaintainable spaghetti.
Now imagine that the system must simultaneously poll a dozen sensors, respond to several critical events requiring an immediate reaction, process commands arriving from a user or a computer, print debug messages, and drive a dozen indicators or actuators. Each of these actions has its own constraints on reaction time and on the polling or control frequency. Now try to stuff all of that into one sequential common loop.
Of course, all of this is doable. But I do not envy whoever has to maintain it even a year after it was written.
Another common problem of the super-loop design is the difficulty of measuring system load. Suppose you have this code:
volatile int interrupt_happened = 0;

interrupt void external_interrupt_handler()
{
    interrupt_happened = 1;
    clear_interrupt_condition();
}

int main()
{
    while (1) {
        if (interrupt_happened) {
            doSomething();
            interrupt_happened = 0;
        }
    }
}
The system somehow reacts to an external interrupt. The question is: how many such interrupts per second can the system handle? How busy will the processor be when processing 100 events per second?
It will be very difficult to measure how much time is spent processing events, and how much on polling the "did the interrupt happen?" variable. After all, everything runs in one loop!
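About the best you can do here is something crude, like counting loop iterations per second and comparing against an unloaded baseline - a sketch, reusing the millisecond_counter from the timer example above (report_loops_per_second() is an assumed name):

int main()
{
    uint iterations = 0;
    uint last_report = millisecond_counter;
    while (1) {
        if (interrupt_happened) {
            doSomething();
            interrupt_happened = 0;
        }
        ++iterations;
        if (millisecond_counter - last_report > 1000) {
            /* Fewer iterations per second than the idle baseline means
               more time went into doSomething() - but it is only a proxy. */
            report_loops_per_second(iterations);
            iterations = 0;
            last_report = millisecond_counter;
        }
    }
}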
And here comes the second approach.
Using a real-time operating system.
The easiest way to illustrate it is with the same example: polling a sensor at a given frequency while printing a long debug line to the console.
void SensorPollingTask()
{
    while (1) {
        value = SensorRead();
        if (value > LIMIT) {
            doSomething();
        }
        taskDelay(10_MILLISECOND_DELAY);
    }
}

void DebugTask()
{
    dbg_task_queue = os_queue_create();
    while (1) {
        char * str = os_queue_read(dbg_task_queue);
        foreach (ch in str) {
            put_ch(ch);
        }
    }
}

void OtherTask()
{
    other_task_init();
    …
    while (1) {
        …
    }
}

int main()
{
    os_task_create(SensorPollingTask, HIGH_PRIORITY);
    os_task_create(DebugTask, LOW_PRIORITY);
    os_task_create(OtherTask, MEDIUM_PRIORITY);
    os_start_scheduler();   /* never returns */
}
As you can see, there is no longer a single main infinite loop in main(). Instead, each task has its own infinite loop. (Yes, the function os_start_scheduler() never returns control!) And most importantly, the tasks have priorities. The operating system itself provides what we need: a task with high priority runs first of all, and one with low priority runs only when there is time left for it.
And while the reaction time to, say, an interrupt in a super-loop design is in the worst case the duration of the entire loop (the interrupt itself fires right away, of course, but the necessary actions cannot always be performed directly in the handler), with a real-time OS the reaction time equals the task switching time, which is small enough to treat as instantaneous. That is, the interrupt fires in one task, and as soon as the handler completes, we switch to the other task that was waiting for the event "fired" from the interrupt:
interrupt void overcurrent_handler()
{
    os_semaphore_give(overcurrent_semaphore);
    clear_interrupt_condition();
}

void OvercurrentTask()
{
    os_sem_create(overcurrent_semaphore);
    while (1) {
        os_semaphore_take(overcurrent_semaphore);
        DoOvercurrentActions();
    }
}
As for measuring CPU usage, with an OS this task becomes trivial. Every OS has an Idle task by default - the most voracious one, but also the lowest-priority one - which runs an empty infinite loop and gets control only when all other tasks are inactive. Counting the time spent in Idle is usually already implemented; it only remains to print it to the console.
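The idea in pseudo-code (os_get_idle_ticks() and os_get_tick_count() are hypothetical OS services here; FreeRTOS, for instance, offers similar run-time statistics):

void CpuLoadTask()
{
    while (1) {
        uint idle_before  = os_get_idle_ticks();
        uint total_before = os_get_tick_count();

        taskDelay(1_SECOND_INTERVAL);

        uint idle_spent  = os_get_idle_ticks() - idle_before;
        uint total_spent = os_get_tick_count() - total_before;

        /* CPU load is the share of time NOT spent in the Idle task. */
        uint load_percent = 100 - (100 * idle_spent) / total_spent;
        debug_print_cpu_load(load_percent);
    }
}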
Also, if you somehow fail to notice an error, only the task containing it "falls over" (tasks with lower priority may stop running too), while tasks with higher priority keep executing, providing at least the minimal vital functions of the device - for example, the overload protection.
Summing up: if the system is very simple and undemanding about reaction time, it is easier to build it on the super-loop model. If the system is going to become big, combining many different actions and reactions that are also time-critical, then there is no alternative to using a real-time OS.
Another plus of using an OS is simpler and more understandable code, since we can group the code by task and avoid the global variables, state machines, and other clutter that the super-loop design requires.
The minus of using an OS is that it requires more flash, RAM, experience, and knowledge (there is nothing truly complicated about it, yet multitasking is a priori more complex and less predictable than sequential code). You must have a good understanding of how a multitasking environment works, of thread-safe code, of data synchronization, and much more.
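A classic illustration of what "thread-safe" means in practice: if two tasks share data, access to it must be protected, for example with a mutex (the os_mutex_* calls are assumed names in the style of the examples above; every RTOS has an equivalent):

os_mutex_t sensor_data_mutex;
int latest_sensor_value;

void SensorTask()
{
    while (1) {
        int value = SensorRead();
        os_mutex_take(sensor_data_mutex);
        latest_sensor_value = value;       /* write under the lock */
        os_mutex_give(sensor_data_mutex);
        taskDelay(10_MILLISECOND_DELAY);
    }
}

void DisplayTask()
{
    while (1) {
        os_mutex_take(sensor_data_mutex);
        int value = latest_sensor_value;   /* read under the lock */
        os_mutex_give(sensor_data_mutex);
        displayValue(value);
        taskDelay(100_MILLISECOND_DELAY);
    }
}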
For the "play" you can take FreeRTOS - a free open source project, while quite stable and easy to learn. Although not uncommon, and commercial projects using this particular OSes.