
The Crying Engineer

Does your spouse understand how you think?


Engineering is an analytical profession in which, done correctly, everything can be verified by cold calculation. That is a given. What you feel about something doesn't matter; only the result does.

A recent article by Malcolm Gladwell in The New Yorker raises this question. It examines the role engineers play in automobile recalls. Remember the Pinto disaster? A rear-end collision could turn the car into a fireball. What was wrong with those engineers?

It turns out the numbers simply did not support the passionate calls for change. A major lawsuit, brought after three teenage girls died when their Pinto burned, was won by Ford.
The article gives non-engineers an excellent picture of how we think, how we make decisions and, more importantly, how we look at the world. It draws a stark contrast between the mindset of numbers-oriented people and the many others who decide based on how they feel.

Of course, our analytical side is not always appropriate. When we were raising our children, my wife asked why I always thought about what could go wrong with them. I replied: "I was trained in worst-case analysis."

This EE, like many firmware people, has spent the last four decades in the gap between designing circuits and writing the code that supports those circuits. The hardware side forgives neither emotion nor wishful thinking: electrons move exactly as theory predicts, or the design fails. Software, meanwhile, is stuck in the "but does it work?" phase.

In hardware we have a body of knowledge. Ohm's law. Maxwell's equations. Transistor physics. We can analyze hFE and other parameters to predict a transfer function, and we can compute Q and the resonant frequency when designing an oscillator that meets a specification.

The software world is murkier. It is hard to predict. A simple question: how long will this interrupt service routine, written in C, take to execute? Most of us cannot predict that. Fortunately, it can be measured. Unfortunately, few do it. How do you convert requirements into the amount of flash needed? How do you predict stack or heap size?

In hardware we can analyze abnormal situations. Temperature extremes, tolerances and fits, mating dimensions: all are mathematically understood.

Software is nowhere near as clear. Defects stay hidden, surfacing in the most unexpected places. A storm of interrupts is hard to analyze. Task scheduling is non-deterministic.

Then there are the fads. They sweep through software like California wildfires. Rarely are they judged by the engineering standard: cold, hard analysis. Unfortunately, software processes are very difficult to study. The academic literature is full of papers about different ideas, but the vast majority report experiments on a tiny code base written by a handful of developers, mostly students with little experience and certainly none in the real world. Engineers are rightly skeptical of conclusions drawn from these toy experiments.

Yet we do have a great deal of data that most of the software engineering community knows nothing about.

What is the best way to develop code? The question probably doesn't even make sense, given the enormous range of applications constantly being built. A program that one person will run twice has completely different needs from the code that controls the engines of an A380.

Consider the agile community. There are dozens of agile methods. Which ones work? Which is best? Nobody knows. In fact, there is little reliable data (outside of toy experiments) on how effective agile methods are compared with other approaches. That is definitely not a reason to throw agile out; I believe (despite the lack of analytical data) that some of the agile ideas are simply brilliant.

Some say that, of course, we lack data on agile, but the same is true of every other method. There is much truth in that: EE had centuries to develop its theory, building on the work of Georg Ohm and others, while software is a relatively new field. We still await a general theory of software.

But I think the question can be reframed. For example: "What are the two most effective ways to reduce defects and speed up development?"

I wonder what your answer will be.

It's very simple: formal inspections and static analysis. Why? Because we have the data: tens of thousands of data points, in fact. One source is "The Economics of Software Quality" by Capers Jones and Olivier Bonsignour, but there are many others.

We know that cyclomatic complexity really is a good way to measure aspects of test effectiveness.

We know, in fact, that the average programming team fails to remove 15% of the defects it injects. We know that firmware teams miss a third of the bugs they create.

We have data, a piece of an Ohm's law for software development. Yet in the embedded world only about 2% of teams use this data to steer their development methods.

I believe the data is out there, and I urge developers to seek it out and apply the results in their daily work.

W. Edwards Deming said it best: "In God we trust; all others must bring data."

Source: https://habr.com/ru/post/257739/
