
Testing embedded systems

I am a member of the Embox RTOS development project. Like most operating systems for embedded devices, Embox supports many hardware platforms, and the project also includes many services and libraries: ssh, telnet, Qt, and so on. We would like all of these services and libraries to stay in working condition on all the platforms we support.

I remember well the time when it was my job to keep Qt working. It was a nightmare: I would come to work in the morning and something would be broken again. I would start digging and discover that someone had fixed a bug in the network stack, and now Qt could not create a socket. In short, Qt broke almost daily and for the most unexpected reasons.

Naturally, the solution was to introduce automated testing of the various services into the project. How hard can it be to set up a server that tests all of this?
The main problem lies in the specifics of embedded systems: unlike general-purpose systems, tests have to run in an environment with particular hardware constraints. For example, such devices have little memory, so it is impossible to put an integration-testing tool inside the board itself. In other words, you have to test "from the outside." So let's get to the point.

Build and Run


As I said, we support several architectures, so to keep the project in working condition we first of all need to build it for the various platforms. For this we use the gcc compiler, which is known to generate code for many architectures. Doing this by hand is certainly not worth it. Fortunately, many tools exist under the general name Continuous Integration (Jenkins/Hudson, Integrity, Buildbot, and others) that automate building the project on new commits to the repository. We use Buildbot. Whenever some configuration fails to build, it is flagged on the build server. In principle, it could also automatically send angry letters to whoever broke the build, but we mostly sit in the same room and handle it by voice over the air.

The next problem, again caused by the cross-platform nature of the project, is running the system on the target platform, or at least on the target architecture. Here another open source project, QEMU, came to our aid: it supports all of the processor architectures we need and a fairly wide list of peripherals.

A brief digression: initially QEMU had a bug in its support for the Leon3 processor, and we used the tsim emulator instead. But we wanted uniformity, at least for the sake of automated testing, and since QEMU is an open project, we patched its source code to properly support Leon3. In later versions our changes were incorporated.

QEMU allows you to set various parameters at startup (the size of available physical memory, the network and video cards to use, an attached drive, and so on), and since we traditionally did not want to do this by hand, we wrote a script that analyzes the configuration of our project and starts the emulator with the necessary parameters.
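
A launcher of this kind might look roughly like the sketch below. This is not our actual script; the config file name, its key=value format, and the chosen QEMU flags are assumptions for illustration:

    #!/usr/bin/env tclsh
    # A rough sketch of a QEMU launcher, not the project's actual script:
    # it reads a hypothetical key=value config file and builds the QEMU
    # command line from it.
    proc read_config {path} {
        set conf [dict create]
        set f [open $path r]
        while {[gets $f line] >= 0} {
            if {[regexp {^(\w+)\s*=\s*(.*)$} $line -> key val]} {
                dict set conf $key $val
            }
        }
        close $f
        return $conf
    }

    # e.g. build.conf might contain: arch=arm, ram_mb=64,
    # image=build/embox, nic=lan9118 (all names assumed)
    set conf [read_config "build.conf"]
    exec qemu-system-[dict get $conf arch] \
        -m [dict get $conf ram_mb] \
        -kernel [dict get $conf image] \
        -net nic,model=[dict get $conf nic] -net user \
        -nographic >@ stdout 2>@ stderr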


Unit testing


As everyone probably knows, there are different types of testing: unit, regression, integration, and others. Unit testing is used by developers themselves and allows them to quickly check whether functionality has been broken after a change.

Fairly early in the project we realized that we would not get far without this type of testing. So, naturally after researching the existing solutions, we developed a small, lightweight framework for unit testing in the C language. Of the frameworks we studied, googletest had, in our opinion, the most convenient syntax. Unfortunately, it is a framework for C++, and we wanted to be able to write tests in plain C, so we developed a C analogue following the same syntax.

This framework turned out to be very successful: on the one hand, it requires virtually nothing from the platform it runs on (the only requirement is setjmp and longjmp support for the architecture in use), and on the other hand, it has a convenient syntax. Because of these minimal requirements, the framework can be used at the earliest stages of embedded development, with virtually no working environment. We therefore actively use tests not only for the software modules we develop, but also for testing the hardware, for example hardware timers, as well as system functions such as creating a thread in the kernel.

An example that tests the thread creation function:

TEST_CASE("thread_create should return -EINVAL if thread function is NULL") { struct thread *t; t = thread_create(0, NULL, NULL); test_assert_equal(err(t), -EINVAL); } 

And one more example, which checks that a timer interrupt actually fires:
    static void test_timer_handler(sys_timer_t *timer, void *param) {
        *((int *) param) = 1;
    }

    TEST_CASE("testing timer_set function") {
        unsigned long i;
        sys_timer_t *timer;
        volatile int tick_happened;

        /* Timer value changing means ok */
        tick_happened = 0;

        if (timer_set(&timer, TIMER_ONESHOT, TEST_TIMER_PERIOD,
                test_timer_handler, (void *) &tick_happened)) {
            test_fail("failed to install timer");
        }

        i = -1;
        while (i-- && !tick_happened) {
        }

        timer_close(timer);

        test_assert(tick_happened);
    }

I will not describe our framework in much detail here, as it is beyond the scope of this article; besides, I only used it, while abusalimov developed it. If anyone is interested, it deserves a separate article. I will just note that the enabled tests run automatically right at system startup.


Integration testing


After the unit tests pass, we proceed to integration tests. As mentioned above, running integration tests inside an embedded system is rather problematic, so I started looking into which tool to use for testing from the outside. I investigated several integration testing frameworks: TETware RT, OpenTest, tcltest, autotestnet, DejaGnu. Each of them turned out to have its own significant drawbacks.


At that time I was already familiar with googletest for C++, and I liked its syntax for writing unit tests: tests are written at the point where they are declared and do not need to be registered anywhere else, there are many assertions, and it is possible to define handlers that run before and after each test. And I thought: why not bring this convenience into integration testing for embedded systems?

Knowing the shortcomings of the existing solutions, I set myself the task of combining, as far as possible, the lightness of DejaGnu, the power of TETware RT, and the aesthetics of googletest.

As the implementation technology I chose the Tcl language with the Expect extension for test automation. Expect makes it easy to describe connecting to the embedded system, sending commands, and processing the results.
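
To give a feel for it, here is a minimal bare-Expect sketch, independent of the framework. The target address, the ":/#" prompt, and the command output are illustrative assumptions:

    #!/usr/bin/expect -f
    # A minimal Expect sketch, separate from the framework itself:
    # connect to the target over telnet, run a command, check the output.
    spawn telnet 10.0.2.16

    # Wait for the target shell prompt.
    expect ":/#"

    # Run a command on the target; its output comes back over the session.
    send "echo hello\r"
    expect {
        timeout { puts "target did not answer"; exit 1 }
        "hello" { puts "OK" }
    }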

Schematically, the principle of the framework is shown below: most of it runs on the host and sends commands over the network to be executed by the interpreter on the embedded system, after which the result is transmitted back and analyzed on the host.

[Image: scheme of the framework: the host sends commands over the network, the embedded system executes them and returns the output]

Hello embedded tester!

Let's look at what the simplest test looks like.

    package require autotest
    namespace import autotest::*

    TEST_CASE {echo "hello" prints hello} {
        test_assert_regexp_equal {echo "hello"} {hello}
    }

The first two lines include the autotest library I implemented, which is a Tcl package. Next, a test case is declared. The essence of the test is simple: the embedded system must correctly execute the echo "hello" command by printing hello. Let's see how it works.

Right away, note that all tests run on the host system, and we interact with the embedded system via TELNET. The TEST_CASE procedure first establishes a connection to the embedded system; for this, of course, the telnetd service must already be running inside it. So the scheme is this: we start QEMU with a prepared image of our OS and wait until telnetd starts. Then we can connect and run the tests.
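
The waiting step might look roughly like the sketch below (an assumption on my part, not the framework's actual code):

    # A sketch of waiting for the target (assumed, not the framework's
    # actual code): keep retrying telnet until telnetd inside QEMU accepts
    # the connection and shows the shell prompt.
    proc wait_for_telnetd {ip {retries 30}} {
        for {set i 0} {$i < $retries} {incr i} {
            spawn telnet $ip
            expect {
                ":/#"   { return }
                eof     { wait; after 1000 }
                timeout { close; wait }
            }
        }
        puts "telnetd never came up"
        exit 1
    }

On success the procedure simply returns, leaving the spawned telnet session open for the subsequent send/expect calls of the test.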

So, after the connection is established, the line test_assert_regexp_equal {echo "hello"} {hello} is executed. It sends the echo "hello" command to the embedded system. The embedded system executes the command, and all of its output automatically goes to the host (that's telnet!). The host in turn receives the result and compares it with the string hello.

Framework Overview

The framework contains a set of library procedures test_assert_*. They all follow the same scenario: send a command from the host to the embedded system, then get the command's output and compare it with the expected result. Below is the code of the test_assert_regexp_equal procedure used in the example above:

    proc test_assert_regexp_equal {cmd success_output} {
        send $cmd
        set cmd_name [lindex [split $cmd " "] 0]
        expect {
            timeout { puts "$cmd_name timeout\n"; exit 1 }
            -regexp "$cmd_name:.*" { puts "$expect_out(buffer)\n"; exit 1 }
            "$success_output" { }
        }
    }

If one of the test_assert_* procedures finishes with exit 1, information about the failed test is displayed (the file name and line number are printed) and control passes to the next test case.

Inspired by the convenience of googletest, I implemented the TEST_SETUP and TEST_SUITE_SETUP procedures (and similarly for TEARDOWN). One feature deserves attention: these procedures exist in both host and target flavors, TEST_SETUP_HOST and TEST_SETUP_TARGET. The difference is that the first is executed on the host, and the second on the embedded system.
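
A hypothetical usage sketch (the commands, the ":/#" prompt, and the TEARDOWN procedure name here are my assumptions, not code from the project):

    # A hypothetical setup/teardown sketch for illustration only.
    # Prepare a directory on the host before the test runs.
    TEST_SETUP_HOST {
        exec mkdir -p /tmp/test_data
    }

    # Prepare a directory on the embedded system.
    TEST_SETUP_TARGET {
        test_assert_regexp_equal "mkdir /mnt/test\r" ":/#"
    }

    TEST_CASE {the directory prepared in setup is visible} {
        test_assert_regexp_equal "ls /mnt\r" "test"
    }

    # Clean up on the target afterwards (TEARDOWN name assumed).
    TEST_TEARDOWN_TARGET {
        test_assert_regexp_equal "rm /mnt/test\r" ":/#"
    }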

Step-by-Step Test Debugging

Let's imagine for a moment that our "Hello embedded tester!" test failed, that is, on launch the framework reported that the test_assert_regexp_equal on line 5 did not pass. We want to find out what went wrong inside the embedded system. To do this, let's modify the test by adding the character b to the end of the line where the error occurred:

    package require autotest
    namespace import autotest::*

    TEST_CASE {echo "hello" prints hello} {
        test_assert_regexp_equal {echo "hello"} {hello} b
    }

Now the test runs up to the line marked with b. Immediately after entering the procedure, the framework reports the breakpoint by printing the line number, the file name, and the line itself, and then waits for the Enter key to be pressed. In other words, it is a kind of breakpoint. While the test is paused, you can, for example, set a breakpoint inside the embedded system itself and then continue test execution.
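
Under the hood, such a pause takes only a few lines of Tcl; here is a rough sketch (not the framework's actual code):

    # A rough sketch of how the pause can be implemented in Tcl;
    # this is not the framework's actual code.
    proc breakpoint {file line text} {
        puts "breakpoint at $file:$line: $text"
        puts -nonewline "press Enter to continue... "
        flush stdout
        gets stdin    ;# block until the user presses Enter
    }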

By the way, the standard debugging tools in Tcl do not allow setting a breakpoint on an arbitrary line inside a procedure; you are expected to make do with debug printing.

What to do "inside", and what "outside"?

So, I satisfied my curiosity and implemented the framework, added nice reporting of test results and logging to it, and wrote the first test, one for ping. Now it was time to start using it in combat conditions.

Ah yes, an important point: not all integration tests make sense to run "outside" the embedded system. For example, a ping test from the target platform to the host can perfectly well be performed from the "inside": just execute the ping command and check the exit status of the last command (in unix, this is 'echo $?'). This is implemented via the startup script, which configures the system after boot, for example setting up the network. Relatively simple integration tests that can run inside the embedded system are added at the end of the startup script.

An example of a startup script:

    "ifconfig eth0 10.0.2.16 netmask 255.255.255.0 hw ether AA:BB:CC:DD:EE:02 up",
    "route add 10.0.2.0 netmask 255.255.255.0 eth0",
    "route add default gw 10.0.2.10 eth0",
    "export PWD=/",
    "export HOME=/",
    "mkdir /mnt",
    "mkdir /mnt/fs_test",
    "test -t fs_test_read",

The last line of the script, "test -t fs_test_read", runs the file system test.

A natural question arises: which tests cannot be performed this way, and what do we do about them? The answer is simple. A test may, for example, need to process a program's output with utilities like grep and awk, and you obviously do not want to cram such utilities into an embedded system. Such tests are therefore performed "outside" using the framework, as sketched below.
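
For instance, a test of this kind could capture the target's output and post-process it on the host. This is a hypothetical sketch: the command, the expected address, and the output format are illustrative assumptions:

    # A hypothetical sketch (not a real test from the project): run a
    # command on the target, capture its output, and post-process it on
    # the host instead of running grep/awk inside the embedded system.
    TEST_CASE {ifconfig reports the configured IP address} {
        send "ifconfig eth0\r"
        expect {
            timeout { puts "ifconfig timeout"; exit 1 }
            -regexp {inet addr:([0-9.]+)} { }
        }
        # Host-side processing of the captured output.
        set ip $expect_out(1,string)
        if {$ip ne "10.0.2.16"} {
            puts "unexpected address: $ip"
            exit 1
        }
    }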

As a real example, here is the test for ntpdate. Ntpdate is a program that sets the date and time via the NTP protocol; it is implemented in our OS. The test verifies that the time set inside the embedded system matches the current time on the host.

The ntpdate test:

    namespace import autotest::*

    set host_date ""

    proc get_host_date {} {
        global host_date
        spawn date -u --rfc-3339=date
        expect -regexp {.{10}}
        set host_date $expect_out(0,string)
    }

    TEST_SETUP_HOST {get_host_date}

    TEST_CASE {ntpdate sets current date in UTC format correctly} {
        variable host_ip
        global host_date

        test_assert_regexp_equal "ntpdate $host_ip\r" ":/#"
        test_assert_regexp_equal "date\r" "$host_date"

        return 0
    }

The get_host_date procedure gets the current date on the host in UTC format. It is registered with the TEST_SETUP_HOST procedure, so it is called on the host. Then comes the TEST_CASE: it first runs the ntpdate $host_ip command on the embedded system, then the date command, and verifies that $host_date is contained in the command's output.

Results


As a result, the testing process described above can be schematically represented as follows:

[Image: scheme of the overall testing process]

And this is what our test bench looks like:

[Image: our test bench]

I would be glad if others shared their experience in organizing the testing of embedded systems.

Source: https://habr.com/ru/post/239387/

