
Tenfold selectivity

Everyone knows there are "10x" programmers who are ten times more productive than the average programmer. We can't measure productivity, so we don't know whether it's literally true. But there really are quite a few people who are unusually productive - enough to sustain the notion of the "tenfold programmer."

How do they achieve this?

It is often assumed that tenfold output comes from tenfold ability or tenfold knowledge. I don't think so. I'm not saying ability and knowledge are useless. But over many years I have noticed that the most important thing here is tenfold selectivity. The trick is to consistently avoid lousy work.
And by that I don't mean "intellectually unsatisfying." The definition of lousy work is that its result goes straight down the toilet.

I have done plenty of lousy work myself, especially when I was inexperienced and naive. (One of the great advantages of experience is that you become less gullible - and this more than compensates for school knowledge evaporating from memory.)

I'll give just one illustrative example of work that was hard, instructive, and went straight down the toilet: my fixed-point adventures of a decade ago.

Do you know what "fixed-point arithmetic" is? I'll tell you. It's when you work with integers but pretend they are fractions, implicitly assuming that the integer x actually stands for x / 2^N for some value of N.

To add two numbers, you simply compute x + y. For multiplication you need (x * y) >> N, because plain x * y would be x*y / 2^(2N), right? You also have to be careful that this stuff doesn't overflow, somehow deal with different N in the same expression, and so on.
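Here is a minimal sketch of the idea in C (my own illustration, not code from that port), using N = 16: addition stays plain integer addition, while multiplication is widened to 64 bits and shifted back down by N so the intermediate product doesn't overflow.

#include <stdint.h>
#include <stdio.h>

/* Q16.16 fixed point: the integer x stands for x / 2^16. */
typedef int32_t fix16;
#define FIX_N 16

static fix16 fix_from_int(int v)       { return (fix16)(v * (1 << FIX_N)); }
static fix16 fix_add(fix16 a, fix16 b) { return a + b; }   /* just x + y */

/* (x * y) >> N, computed in 64 bits so x*y / 2^(2N) doesn't overflow first. */
static fix16 fix_mul(fix16 a, fix16 b)
{
    return (fix16)(((int64_t)a * (int64_t)b) >> FIX_N);
}

int main(void)
{
    fix16 a = fix_from_int(3);            /* 3.0 */
    fix16 b = 1 << (FIX_N - 1);           /* 0.5 */
    fix16 p = fix_mul(a, fix_add(a, b));  /* 3.0 * 3.5 */
    printf("%f\n", p / (double)(1 << FIX_N));  /* prints 10.500000 */
    return 0;
}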

So back then I was porting software to a processor being developed in-house at our company. No floating-point hardware unit was planned for it: "we'll do everything in fixed point."

Here are some of the things I was doing back then:


For months and months I worked flat out, producing complex, debugged code - everything as usual.

But what I really should have done was:



Why did this whole epic degenerate into months of lousy work, instead of which I could have done something useful? Because I didn't know what was what; because I didn't realize I could argue with the boss; and because the work was creative and interesting. And it went down the toilet very quickly.

The hardest thing about "managing" these tenfold people - the ones everyone knows are very productive - is convincing them to take on a task. (Everything else is much easier: they themselves know how long things take; if they agree to do something, it will get done.)

You were expecting something else, admit it? I mean, if you're so productive, what do you have to worry about? You work fast; the worst that can happen is that nothing comes of it - and then you'll quickly move on to something else, right? I mean, it's the slow, not-so-productive people who ought to be picky - they are slower, so they have fewer chances to switch to something new - right?

But that's just an optical illusion: the more productive people don't work that fast - certainly not ten times faster. The reason they seem 10x faster is that almost nothing of what they do gets thrown away, unlike the heaps of work that others produce.

And thrown-away work doesn't count toward productivity. You think of someone as "the guy who made X", where X's usefulness is known to everyone - and forget about all the Y's that weren't particularly useful, despite the effort and talent that went into making them. Even if something else was to blame - the manager, or lack of time, or whatever.

Take well-known examples: you know Ken Thompson from C and Unix, but not really from Plan 9, and not from Go. If anything, the opposite is true - Go got your attention only because it was the brainchild of the guys who made Unix. You know Linus Torvalds even though Linux is a Unix clone and Git is, in fact, a BitKeeper clone - because these are clones of successful products, and they succeeded because they arrived at the right time.

The first thing you look at is not originality, nor how hard it was to write, nor how good the thing is by some specific criterion: you look at whether you can use it.

A tenfold programmer will, as a rule, wage a real war just to avoid doing something that will never be used.

One of these smart people once asked me about checkedthreads, which I had just finished: "Is anyone using this?" With that trademark irony. I said I didn't know; someone on Hacker News commented that he might try it.

I know it's a wonderful thing; with it you can find all the errors in your multi-threaded programs. But it's not a replacement for pthreads: you have to write your code against new interfaces - good, simple interfaces, but not the ones you already use. So few people are likely to be interested in my library; Helgrind and ThreadSanitizer may have plenty of false positives, but at least they work with the interfaces people already use heavily.
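To make "new interfaces" concrete, here is a hedged sketch of the kind of fork-join indexed loop such a checker verifies. The names parallel_for and index_fn are mine, invented for illustration, not the actual checkedthreads API; the point is only that when all parallelism goes through one call, the framework can rerun the iterations under different schedules and flag iterations that touch the same data.

/* Hypothetical fork-join "indexed for" interface, in the spirit of the post,
   NOT the real checkedthreads API. */
typedef void (*index_fn)(int i, void *ctx);

/* Trivial serial stand-in so the sketch runs; a real framework would schedule
   iterations across threads and instrument them to catch conflicting accesses. */
static void parallel_for(int n, index_fn f, void *ctx)
{
    for (int i = 0; i < n; ++i)
        f(i, ctx);
}

/* User code: each iteration touches only in[i] and out[i], so no two
   iterations write to the same place - exactly what the checker can verify. */
struct square_args { const float *in; float *out; };

static void square_one(int i, void *ctx)
{
    struct square_args *a = ctx;
    a->out[i] = a->in[i] * a->in[i];
}

void square_all(const float *in, float *out, int n)
{
    struct square_args args = { in, out };
    parallel_for(n, square_one, &args);
}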

Why did I do it then? Because the first version took only a day to write (I hadn't yet gotten around to nested parallel loops and all that), and I figured it would have a chance if I blogged about it (which is what I'm doing right now). If I wrote a few posts explaining how you can, in practice, track down errors in old-fashioned parallel shared-memory C code even more easily than in Rust, or Go, or Erlang, then maybe people would pay attention.

But even so, by the standards of the tenfold crowd I know personally, the odds of failure are too high - don't even try. Even though we use something like checkedthreads at work, and very successfully. In fact, the ironic question came from the guy who put a lot of work into that in-house version - precisely because that one is very likely to be used.

See? Don't do what is likely to fail - that's where the productivity is.

How do you choose what to work on? There are many things to consider:



You can easily extend this list; its main thread is the question: what is the probability that I will finish this thing and that it will then actually be used? The same question applies recursively to every function, sub-function, and line of code: does the whole become more useful thanks to them? And am I spending time on this when something else would bring more benefit?

Of course, it's more complicated still; some useful things are valued more highly than others, for various reasons. This is where Richard Stallman appears and demands that we call Linux "GNU/Linux" because GNU wrote most of the user programs. And although I'm not going to call the system "GNU/Linux", his claim unfortunately has some truth to it, in the sense that yes, unfortunately, some hard and important work is less visible than other hard and important work.

But fairness is a separate topic. In the end, tenfold productivity will hardly bring you tenfold compensation.
So there aren't many reasons to "cheat" and appear more productive than you are. What really makes people productive is an itch that gives them no rest, not some tangible rewards.

What I want to say is: to get more done, you don't so much need to succeed faster (though that doesn't hurt) as to fail less often. And not all failures come from a lack of knowledge or experience; most of them happen when a program is abandoned at a stage where it still can't be used - or when nobody was going to use it in the first place.

So I, as someone who has written a lot of code for the toilet bowl, believe that productivity should be achieved not so much by working as by not working - by not doing what will eventually end up in the trash.

Posted by: Yossi Kreinin
Original: www.yosefk.com/blog/10x-more-selective.html

Source: https://habr.com/ru/post/178553/

