Vlad wrote a note, "Review: 'bugs' in programs and 'mistakes' in texts", in which he describes how worried journalists are about their typos and slips, how carefully they polish their articles, and how ashamed they are when mistakes do get through. Then he cites some interesting figures:
About the press: At the same time, let's look at the facts. An issue of 40 to 116 pages, with roughly 12,000 characters per page (on average about 450,000 characters, 100,000 words, some 7-10 thousand lines, 10,000 sentences), was produced in a week by about 30 people. In total, we made approximately 0.3 serious errors per 1000 lines. Multiply or divide by a factor of two.
About software: Now let's look at the results of the IT sector's work, which is what the complaints were about. Programs are developed over years, and by some estimates (see the examples at www.osp.ru/os/2005/04/185558) the number of errors in commercial products (final release) runs up to 0.5 per 1000 lines.
... And although a typical program is comparable in volume to a year's stack of magazines (hundreds of thousands of lines)... the errors still remain. And yet hardly anyone tells programmers: be ashamed of your work, how could you! It's just: fix the bug, guys.
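To make the comparison concrete, here is a quick back-of-the-envelope check using only the figures quoted above (the per-issue error count is my own derivation, not stated in the note; 8,500 lines is simply a midpoint of the quoted 7-10 thousand):

```latex
% Press: ~8,500 lines per weekly issue at 0.3 serious errors per 1000 lines:
\[
  0.3 \times \frac{8{,}500}{1000} \approx 2.6 \ \text{serious errors per issue}
\]
% Software vs. press, both in errors per 1000 lines:
\[
  \frac{0.5}{0.3} \approx 1.7
\]
```

So the two rates differ by less than the "factor of two" margin the note itself allows.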
Of course, his article is really about a somewhat different question, but the idea of such a comparison seemed to me worth commenting on. So:
It seems to me that comparing errors in printed materials with errors in programs is simply not appropriate.
The mistakes Vlad is talking about, in programming... it's not just that they are rarely seen, nobody is even scolded for them, because almost all of them are caught at the compilation stage.
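A minimal sketch of what that means in practice (Go is used here purely as an illustration, not because the original mentions any particular language):

```go
package main

import "fmt"

func main() {
	totalErrors := 3
	fmt.Println("serious errors this week:", totalErrors)

	// If the identifier above were misspelled, e.g. fmt.Println(totalErors),
	// the program would not build at all: the compiler stops with
	// "undefined: totalErors", so a one-letter typo of the kind a
	// proofreader hunts for never reaches a single reader.
}
```

The typos that do survive into a released program are of a different kind entirely: code that compiles cleanly but means the wrong thing.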
The next line of defense is the compensatory machinery of the human brain, which, while reading, automatically converts typos back into the intended words, often without even informing the conscious mind.
Then there is the fact that a person takes in, on average, maybe 5-10 percent of an article, well, 50 if they read very carefully. So even semantic errors usually slip past the reader's attention and affect nothing.
And finally, most importantly: imagine that anyone reading an article in a newspaper or magazine began, like a mindless automaton, strictly executing the text to the letter. Can you imagine what a nightmare that would be? Goebbels's fondest dream, assuming, of course, that humanity survived the experience at all.
Human language is, by definition, meant for noisy channels and for sources of information that are, to put it mildly, suspect. Computer languages are meant for secure, reliable channels and for signal sources you trust as much as you trust yourself. Quite a difference, you must agree. I suspect that if human language were used the same way, even the most polished article in the most authoritative journal would still produce more "blue screens" than all of Windows, which, by the way, has very few of them left these days.
And finally, the natural question: can programs be made as "tolerant" of noisy channels as people are? They can. Probably. Someday. But then using such a program will be about as hard as managing people. Anyone who has managed people knows what that means.
The original, as always, is on the blog "Thoughts that couldn't be kept in my head..."