
Source code as a way to think

A brief preliminary note: a detailed explanation would fill an average-sized book. Here everything is given schematically, briefly, and without details. The text is, admittedly, provocative, but before attacking the author, keep in mind that behind it stand twenty years of experience and a great deal of literature, both classical and IT-specific, that is little known.

There is a word that brings the industry huge losses every year. That word is: bug.

Bugs are supposedly virtual malicious creatures hiding inside programs. They have a will of their own. They sneak into the most critical places. They corrupt results, interrupt work, and do other nasty things.
Of course, honestly, this is nonsense. But if we infer a mental model from what programmers do and say, it turns out that virtual living beings really are being hunted, caught, identified, and destroyed.

A massive, global, endless game that nearly everyone in the industry plays with enthusiasm: testers, management, process organizers, and highbrow theorists alike.

Why does this happen? Because the industry fundamentally misunderstands what source code is and what it is for.

If you interview the experts, you get a hundred different opinions. But at bottom, once all the husk is discarded, the code is the fruit of creative effort and an expression of the author's genius. In the standard situation, whatever his actual level of professionalism, the programmer perceives himself as an almost holy genius creating an almost perfect product.

If you stop assuming the programmer is perfect, something interesting happens: the code ceases to be a finished result. It ceases to be a result at all. It becomes a reflection of the programmer's current understanding of the problem and of how to solve it.

The code precisely reflects, but does not describe. The latter is possible, but it requires restructuring the entire process, from recording formats all the way to brains.

Brains are critical. We need people of a special culture, people who are not afraid of looking like fools, and those are practically not found in IT.

Writing and saying what you actually think is always taken as a lack of tact, contempt for others, and rudeness. If someone puts a comment in his code like "Stupid idea. Does not work if N < 0. Fix ASAP.", he risks at least being looked at strangely. And if it lands in the genius programmer's area of responsibility, things will not be limited to petty hysteria, even if "stupid" is only implied by context. Or try writing something like "I don't understand why this works", then showing it to the boss and asking for a raise.

And, of course, it is much more profitable to say "We are fixing bugs in the communication module" than "While reading the documentation we missed a few critical points, and we will spend the week redoing everything from scratch."

Okay, let's leave it. Most people cannot stand this. It's frightening. Dropping one's self-esteem is frightening too. And losing face... And the bosses as well... In short, to hell with it, let's move on to the good stuff.


Quality is determined by how long an error is allowed to live in the system.

Jidoka

Actually, that is all there is on the topic. It's just that this concept has magically passed the European mentality by.

Before praising lean and kanban and praying over retellings of retellings in books whose authors explain how to perform the rituals of the cargo cult better, you should go back to the basics. And there it is said quite unequivocally: do not build castles on a rotten foundation. If the program does not work correctly, it must stop. Period.

Yes, if customers at a presentation see the program crash with a NullPointerException, it will not be good for business. There are a thousand and one more reasons proving why errors need to be hidden. But if we talk about code as a reflection of understanding, you can only move forward by correcting errors.

And the industry standard is precisely the tendency to hide or bury any system failures in huge logs that no one ever looks at voluntarily, and then to enthusiastically play at catching bugs.

And here we can mention another interesting principle.

Any interface should report an error at an adequate level.

The first thing I usually do on any project is write my own diagnostic module, because everything that already exists serves the needs of geniuses but is poorly suited to idiots who make mistakes. As a result I end up with a very powerful tool, one of whose main properties is adequacy.

An adequate level for an error in execution logic - program shutdown

Because failures in logic cannot be ignored. They need to be understood.

Yes, later, when the problem is understood, you can work around it, or throw out part of the process or the data. But to tell the programmer that he is not thinking correctly, the program has only one way: a complete stop.
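A minimal sketch of this fail-fast idea, in Python rather than the author's Perl (`fail` and `mean` are illustrative names, not his API): a logic error reports itself and stops the program instead of being smoothed over.

```python
# Sketch of "if the program does not work correctly, it should stop".
# `fail` and `mean` are illustrative names, not the author's Perl API.
import sys

def fail(message):
    """Report a logic error loudly and stop the program at once."""
    print(f"FATAL: {message}", file=sys.stderr)
    sys.exit(1)

def mean(values):
    # An empty list here means the caller's reasoning broke upstream,
    # so we stop and force understanding instead of guessing a result.
    if not values:
        fail("mean() called with no values; check the caller's logic")
    return sum(values) / len(values)
```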

It is terribly slow and terribly expensive. (And terribly scary.)

In theory.

In practice, this significantly straightens the path to the goal and cuts off wrong decisions early. (Just don't drag agile into this; in Russian it translates to the phrase "We are geniuses!")

An adequate level for a user error - the place where the data lives

If the user has produced garbage, he should see it in the program's output. It is not always possible to catch it during input and tint the offending field red in a dialog. If the input table has ten thousand rows, it is useless to swear, useless to stop, and equally useless to dump errors into a log or into pop-up messages. He will not notice. Or he will click past them.

You have to produce a message in a language the user understands, with an exact indication of the location, the cause, and the options for fixing it, and then not just write it to the log (which is not adequate to the error's level) but carry it through to the final results. So that when the user is indignant, "Why is this value empty here?", you quietly open the next column in the table and read aloud, expressively, exactly why.
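A sketch of "the error lives where the data lives", in Python with illustrative names (not the author's code): instead of being logged, each bad row gets its message carried into the results right next to the data it refers to.

```python
# Sketch of carrying user-facing error messages next to the data they
# refer to, instead of burying them in a log. Illustrative names only.
def process_rows(rows):
    """Return (value, error) pairs so every message travels with its row."""
    results = []
    for i, raw in enumerate(rows, start=1):
        try:
            results.append((float(raw), ""))
        except ValueError:
            # The message names the place, the cause, and the fix.
            results.append((None,
                f"Row {i}: '{raw}' is not a number; correct it and rerun"))
    return results
```

When such a results table is displayed, the error column sits right beside the offending value, so the user cannot fail to see it.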

However, if we have boxed software or a client project, we can forget about the users. The support service also needs something to earn money on. It is just desirable that it not be our mistakes.

Do not catch fleas - errors can be anticipated


The industry prays to Unit Testing. It is fashionable, it is progressive, it is agile. It's just that its purpose is to answer the question "I really am a genius, right?"

Naturally, fine-grained testing is necessary when the standard requires you to confirm every line of code, or when, five times a week, you must catch the results of colleagues' mischievous hands. But it does not move us forward. Nor does it reflect work on real tasks with real data.

When a programmer is not perfect, he does not need a mirror; it is important for him to understand what is actually going on inside. And here the easiest way is to make a working prototype or, in a real situation with real data, to stop at the right place and see what has happened at that stage. Naturally, the easiest time to do this is while the code is being written, because that is exactly when the programmer is making assumptions about what should and should not work.

Likewise, when an error is found, the easiest way to understand its conditions and causes is not to trudge step by step through the almost-perfect code, but to check the train of thought and build a theory, then add the necessary parameters to the log at the critical places and stop wherever something interesting happens.

That is why my most frequent debugging tool is the line

  die "OK!";

Naturally, in most cases the line above is a $diag->DBG(...) call that prints everything I wanted to know and everything I was curious about. Sometimes it's a couple of values, sometimes a half-megabyte structure, which is then carefully studied by hand or by a program.

Later, when everything becomes clear, you can remove it. However, it is more useful not to delete the trace but to hide it in a comment, because errors tend to recur, and then you will not need to reinvent what to look at and where.
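A hedged sketch of such a diagnostic helper in Python (the author's $diag->DBG(...) is Perl, so this interface is an assumption), including the habit of commenting out, rather than deleting, the trace call:

```python
# Sketch of a tiny diagnostic helper in the spirit of $diag->DBG(...).
# The class name and interface are illustrative, not the author's code.
import pprint
import sys

class Diag:
    def DBG(self, *values):
        """Pretty-print each value to stderr; also return the formatted
        text so a program can study large structures later."""
        formatted = [pprint.pformat(v) for v in values]
        for text in formatted:
            print(text, file=sys.stderr)
        return formatted

diag = Diag()
# diag.DBG(record)          # active while investigating
# # diag.DBG(record)        # later: hidden in a comment, not deleted
```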

Naturally, no language and no tooling supports this out of the box, because they are created by brilliant programmers, and brilliant programmers make no mistakes and always know how the world should be.

Errors should be caught where they are easiest to understand.


If we are expecting errors, we must make the program robust; that is, put up a fence wherever something bad can happen, even if "it cannot happen". Everything here ultimately boils down to three words: preconditions, postconditions, invariants. Naturally, most languages and libraries either do not support this or support it quite crookedly.

So you have to do something like this by hand:
  
 $diag->ASSERT(($max_depth >= 1), "Depth must be an integer >= 1");

It's simple: if the precondition holds, work continues. If we have received data we cannot work with, we stop.
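The same three fences can be sketched in Python (the author's code uses a Perl $diag->ASSERT helper, so the function and names here are illustrative), with a precondition on the input and a postcondition on the result:

```python
# Sketch of pre- and postcondition fences around one function.
# Illustrative names; not the author's Perl $diag->ASSERT.
def resize(buffer, new_len):
    # Precondition: refuse data we cannot work with.
    assert new_len >= 0, "Precondition failed: new_len must be >= 0"
    result = (buffer + [0] * new_len)[:new_len]
    # Postcondition: check our own promise before returning.
    assert len(result) == new_len, "Postcondition failed: wrong length"
    return result
```

One caveat: Python's bare `assert` disappears under `python -O`, which is one reason a hand-rolled helper like the author's ASSERT is sturdier for production fences.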

There is one more reason the diagnostic module has to be written from scratch, and it is not at all that standard tools lack the flexibility to redirect output to a file or a dialog box. The secret is in the text of the message.

It is a short message to the future: from the doubting author, at the time the program is written, to the unknown person who will have to figure out why the condition failed.

That may be the author himself the next day, or it may be a colleague in faraway Bangalore with unknown skills and unpredictable assumptions about what is happening inside the program. And for that case it is critically important not just to crow, but to transfer knowledge. Though even for the author, a couple of weeks after writing, this information comes as news again.

And here we arrive at the difference between reflection and description. In the typical case of correct use, the text addressed to the human will differ from what the code orders the computer to do. Below are a couple of examples from current code.
  
 $diag->ASSERT(defined($network->{$first_vertex}->{$second_vertex})
               , "Fatal error with deletion"
               );

 $diag->ASSERT(($main_name and $main_name !~ /^NA$/i)
               , "Main name is empty"
               );

The secret here is that in case of an error we get not just information about what broke and where, but information at the meta level: we can see why the author did what he did. This eliminates one of the unpleasant features of classic debugging, where a quick code fix creates a ripple effect, plugging the error in one particular place but causing others that are harder to detect and fix.

Most programmers do not check for errors, hiding behind a lack of time or the need to make the code fast. But underneath lies the inner conviction that there are no mistakes and never will be. And presumption is harmful: an optimized program that works incorrectly simply produces more garbage per unit of time.

Even a fairly sparse fence, if built thoughtfully and professionally, will catch at an early stage the errors that, with "industry-standard" quality and process organization, would be found only after tedious testing, or would surface at the customers'.

As a rule, most internal checks never fire. But if the "incredible" does happen, the program itself reports that there was no reason to turn up our noses, and that the most baseless fears have served their purpose.

That's basically all. One could go deeper for a long time, but that would be a text of other volumes and other quality.

Source: https://habr.com/ru/post/164277/

