
Testing embedded systems: an aspect that, for some reason, is rarely talked about

I was prompted to write this article by reading an article with a similar title, by a recent visit to Embedded World, and by my own development experience in this area.

For some reason, when people talk about testing in relation to embedded systems, they almost always mean a platform that lets you "cut off" the embedded system itself, so that the written code can be tested "independently of the hardware platform".

Of course, this approach has its place, and with it you can test and find a lot, but...
Take, as an example, a simple system: a microcontroller and an infrared temperature sensor connected to it via I2C. How are we going to test it?

What can be virtualized here without the test losing all meaning, if all the code essentially comes down to initializing the I2C peripheral, implementing the communication protocol with the sensor itself, and (for a multitasking environment) locking access to the shared resource?
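To make the limitation concrete, here is a minimal host-side test sketch in Python. The sensor address, register number, and scaling formula are hypothetical (loosely modeled on a typical IR thermometer chip): the fake bus lets us check register selection, byte order, and arithmetic, but the "temperature" is whatever we canned into the mock.

```python
class FakeI2CBus:
    """Stand-in for the real I2C peripheral; replays a canned sensor reply."""
    def __init__(self, canned_reply):
        self.canned_reply = canned_reply
        self.written = []  # log of (address, data) writes for inspection

    def write(self, addr, data):
        self.written.append((addr, bytes(data)))

    def read(self, addr, length):
        return self.canned_reply[:length]

SENSOR_ADDR = 0x5A       # hypothetical 7-bit sensor address
TEMP_REGISTER = 0x07     # hypothetical "object temperature" register

def read_temperature(bus):
    """Driver logic under test: select the register, read 2 bytes, scale."""
    bus.write(SENSOR_ADDR, [TEMP_REGISTER])
    raw = bus.read(SENSOR_ADDR, 2)
    value = raw[0] | (raw[1] << 8)   # little-endian 16-bit raw value
    return value * 0.02 - 273.15     # hypothetical raw-to-Celsius scaling

# The mock verifies protocol framing and arithmetic, nothing more:
bus = FakeI2CBus(canned_reply=bytes([0x7C, 0x39]))  # raw 0x397C
temp = read_temperature(bus)                        # ~21.17 C
```

Everything this test validates is our own code's bookkeeping; whether the real sensor, the real bus, and the real physics agree with it remains unknown.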

In my opinion, for proper testing you need to be able to read the temperature value from the sensor and, at the same time, obtain from some external source the actual temperature of the environment and/or of the object the sensor is pointed at, and then compare the two. That is, for proper "end-to-end" testing you cannot, in my view, do without a real board with the controller and the sensor, plus a communication interface to the outside world... In the most degenerate case we can assume that the temperature in a room where people work will be in the range of 18-30 °C and check that the reading falls within that interval. But if you need to verify accuracy, then without a thermal chamber, alas, that is not enough.
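That degenerate plausibility check can be sketched as follows; the device read is stubbed out here with a placeholder, but on a real bench it would go over the board's outward interface (serial, USB, Ethernet):

```python
# Sketch of the "room temperature" sanity check: without a reference
# instrument, the only assertion available is that the reading lands in a
# plausible interval.

ROOM_TEMP_MIN_C = 18.0
ROOM_TEMP_MAX_C = 30.0

def read_temperature_from_device():
    # Placeholder for the real read from the board under test over its
    # outward interface; the returned value here is made up.
    return 23.4

def test_temperature_is_plausible():
    t = read_temperature_from_device()
    assert ROOM_TEMP_MIN_C <= t <= ROOM_TEMP_MAX_C, f"implausible reading: {t}"

test_temperature_is_plausible()
```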

A real-life example: we once had to work with the ADG2128 chip, an 8x12 switch matrix with I2C control. As it turned out, the chip had an undocumented glitch: its I2C front end "woke" the chip not only when it received its own address at the beginning of a packet, but whenever it detected its address on the bus, even in the middle of a transfer. And I2C is, after all, designed precisely so that several devices can hang on one bus. So: an exchange with another device on the same bus is in progress, a byte matching the ADG's address slips by in the middle of the transfer, the chip wakes up and starts driving its own data onto the bus... All in all, it was an interesting bug, and its fix was an equally peculiar crutch, though it worked in the end.
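Such a bug is invisible to a host-side mock, because a mock implements the specification, not the silicon. A toy model of the difference (the address byte 0x71 here is a hypothetical stand-in for the chip's address, not the real part's):

```python
def spec_compliant_match(stream, dev_addr):
    """A correct device compares only the byte right after START."""
    return bool(stream) and stream[0] == dev_addr

def buggy_match(stream, dev_addr):
    """The observed glitch: the device matches its address anywhere
    in the byte stream, including mid-transfer data bytes."""
    return dev_addr in stream

# A write addressed to device 0x40; one *data* byte happens to equal the
# matrix chip's (hypothetical) address byte 0x71.
stream = [0x40, 0x10, 0x71, 0x22]   # [address phase, data bytes...]
```

A mock of the bus would behave like `spec_compliant_match` and never reproduce the collision; only real traffic on a real bus, with the real chip listening, exposes `buggy_match` behavior.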

So: how could such a glitch, or a similar one, be "caught" by a testing approach that does not include the embedded system itself with a "live" chip on it?

A couple more examples from the life of embedded systems. After yet another feature is added, the controller runs out of memory. Adding new functionality introduces race conditions or a deadlock. Or incorrect, yet entirely possible in practice, user actions from the series "connected the wrong thing / the wrong way / to the wrong place / at the wrong time" lead to similar consequences. Or the wrong configuration is sent to the device. Or the device itself, in a certain configuration, starts drawing more current than USB can supply. Or, when the device is connected to a laptop running on battery, there is no connection between its "ground" and the mains "ground" in the wall socket, and the measurements turn out amazingly inaccurate because of a bug in the circuit design...

In my opinion, proper, "full-fledged" testing is only possible by developing another device, a real, "iron" one, that emulates all the necessary external influences on the device under test; in addition, the test framework must be able to control both the device under test and the stimulus-emulating device.

When we were developing a DSLAM (a telecom device with a wide Ethernet uplink on one end and 32/64/128 DSL modems on the other), the test bench looked roughly like this: 64 modems connected to 64 ports of an L2/L3 traffic generator, with the uplink connected to another of its ports. The test script configured the DSLAM and the traffic generator, started the traffic, and checked the results.

When we developed a multi-channel oscilloscope, the test device looked like this: a box with 4 independent outputs connected to the inputs of the oscilloscope under test; each output could emulate any of the sensors the oscilloscope supports (such as a current clamp or a pressure sensor) and produce the values a real sensor would. The test scenario: set a combination of sensor types and generated values on the outputs, configure the device under test (select the sensor, the range, and so on), measure with it, and compare the results against the generated values.
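The scenario can be sketched roughly like this. All class and parameter names are hypothetical, and both the stimulus box and the device under test are stubbed here, so only the loop structure of the test is visible; on the real bench each stub call becomes a command to real hardware.

```python
class SignalBox:
    """Stand-in for the 4-output stimulus box that emulates sensors."""
    def __init__(self):
        self.outputs = {}

    def set_output(self, channel, sensor_type, value):
        self.outputs[channel] = (sensor_type, value)

class Oscilloscope:
    """Stand-in for the device under test; here it 'measures' perfectly."""
    def __init__(self, box):
        self._box = box

    def configure(self, channel, sensor_type, rng):
        self._config = (channel, sensor_type, rng)  # range selection stub

    def measure(self, channel):
        return self._box.outputs[channel][1]

def run_scenario(box, dut, cases, tolerance=0.01):
    """Sweep (channel, sensor, value, range) cases; collect mismatches."""
    failures = []
    for channel, sensor_type, value, rng in cases:
        box.set_output(channel, sensor_type, value)   # drive the stimulus
        dut.configure(channel, sensor_type, rng)      # set up the DUT
        measured = dut.measure(channel)
        if abs(measured - value) > tolerance * max(abs(value), 1.0):
            failures.append((channel, sensor_type, value, measured))
    return failures

cases = [
    (1, "current_clamp", 12.5, "20A"),
    (2, "pressure", 3.2, "10bar"),
]
box = SignalBox()
dut = Oscilloscope(box)
failures = run_scenario(box, dut, cases)
```

An empty `failures` list means every configured measurement came back within tolerance of what the box generated.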

All of this was integrated into the CI system: the current build was compiled and flashed onto the device, after which the testing described above was run.

The systems were used both during development, for regression testing, and in production, for testing each new device before shipping it "into the field".

Undoubtedly, such an approach is expensive, but for a long-running, complex, multi-functional project there is, it seems to me, no alternative to it. Without it lies the direct road to the "testing death loop": the number of required "manual" tests grows as new features are added, until even the simplest code change can no longer be made quickly: one hour for the change or bugfix, then a week of manual regression testing. Yes, the week is not a joke, alas.

Now we are building the testing system itself as a more or less universal modular platform; we'll see whether anyone else has a need for it...

Source: https://habr.com/ru/post/239403/

