Every phenomenon has a “bad” side and a “good” side. There is no absolute evil, just as there is no absolute good.
In software development, a dogma is the claim that some technology (or technical solution) is an absolute good: that it has advantages without any disadvantages.
The main idea of this article is that such dogmatic thinking is very common in the industry, that it is harmful, and that it leads to sad results.
To begin with, programming is bad. Why? Because it is difficult. What is better? Construction.
Construction errors are visible immediately; programming errors are invisible.
Programming is a laborious process that requires a lot of knowledge and skill. Construction requires far lower qualification.
Imagine that 25 years ago you were given the task of programming a “dynamic page with pictures and text” that highlights links and, when you click on one of them, shows another “dynamic page”, and so on.
Imagine the difficulties involved in implementing such a program. On different platforms.
And now? Now any nerd will construct such a page from HTML and CSS in a day. Moreover, errors in the construction of such a page are, as a rule, visible immediately and easily corrected.
So programming is hard. That is, it is laborious, requires high qualification and a large team, hides non-obvious pitfalls, demands an architecture, and so on.
This difficulty is inherent in programming by its very nature. It is inevitable. This is the “dark side” of programming.
But there is a “bright side”: programming produces an undeniable good: programs whose behavior solves various tasks, operating systems, drivers, and so on.
A dogmatic view of programming (as a process as a whole) often comes down to a denial of its complexity.
This view is not always expressed explicitly. For example, some apologists, evangelists, and other patients like to argue that programming is “still difficult for now”, but that with the appearance of “this thing”, everything will change.
Let's look further at the dogma that “automatic memory management (Java) is better (simpler) than manual (C++), and that, in general, Java (C#) is better than C++”.
Manual allocation and release of memory is a difficulty.
However, there are certain techniques that allow you to manage this complexity, that is, to keep it under control.
For example: any resource (one that needs an explicit release) always belongs to some object. It is allocated when the object is created and returned to the system when the object is destroyed. Any object always belongs to another object. At the very top of this hierarchy sits the main owner of all objects: the application.
If ownership of an object must be shared, we introduce owners with a counter. The last owner (the one that drops the counter to zero) frees the object. Ownership of an object can also be explicitly transferred.
Why am I writing this here? To make the level of this complexity clear.
From my own practice I can say that these techniques solve 99% of the problems of manual memory management. No further tweaks or tricks are needed; only attentiveness.
And then automatic memory management arrives and “simplifies away” this complexity.
Is the complexity gone? Of course not. It has been transformed.
A garbage collector has appeared: a separate task that consumes resources at unpredictable moments (suspending your application). It needs to be tuned; that is, the collector introduces its own (additional) complexity.
The automatic destructor left the language, and the simplest task, closing a file in the same place in the code where it was opened, turned into a complexity. The finalizer does not solve it.
The most interesting thing is running out of memory. It turns out that in Java it has become _different_. If in C++ a lack of memory meant an actual shortage of memory, then in Java a lack of memory means... well, it is not clear what exactly! Another complication.
Phew. Well, I have been loaded up to my ears with new complexity. But what do I get in return? Am I at least finally free from memory leaks, now that garbage collection is automatic?
No. A Java program can still leak: any unintentionally retained reference keeps an object reachable, and the collector will never reclaim it.
What was the dogmatism here? And why did it happen? I'll try to explain.
Complexity requires high qualification.
Programming, being a difficult process, requires high qualification. That makes it expensive.
The explosive growth of IT has led to an explosive increase in demand for developers.
So a clever head decided that programming could be made simple, that is, doable by low-skilled developers.
Java is one such attempt. Java was conceived as a development tool for the low-skilled programmer (forgive me, today's Java gurus :-)).
Java stayed alive for other reasons: it has certain advantages that compensate for its shortcomings.
But is it possible to state unequivocally that automatic memory management is better than manual?
I leave this question at your discretion.
Is it possible to say unequivocally that Java is simpler than C++?
Here is an example: replace the non-printable characters in a string with spaces.
In C++ this task is not a task. At all. Why am I so sure? Because of Google!
Google cannot find a topic like “C++ fastest way to replace non-printable characters”. No such topic exists.
In Java, though, this is a task. It requires searching for a solution. It requires digging into Java internals.
Wait, how can that be?! After all, Java is the same “good”?! The same standard library, abstraction upon abstraction, OOP everywhere, and plenty of familiar syntax to type!
The bytes hid behind classes. That is good. And that is bad.
Another dogma: “XML is great for storing settings.”
Storing persistent settings (ones that survive the lifetime of the application itself) is a difficulty. Does XML resolve this complexity?
Do not read further, think.
XML is genuinely useful because of its hierarchical structure. It gives quick access to the desired branch (once the document is already loaded into memory). It extends easily. It is stored in a text file that a human can edit by hand.
So, suppose we have a 100-megabyte XML file with settings. I want to change one flag in the settings from “0” to “1”, that is, in effect, one byte. How many bytes will I have to write to disk to reflect this change?
100 (one hundred) megabytes.
Yes, XML is not bad for storing read-only settings. But the complexity associated with persistence has not gone anywhere; it was merely masked by the small size of typical XML files.
To read a tiny setting, you must read the entire file. To save a tiny change, you must write the entire file.
Why did this happen?
Let's isolate the most important “good” of XML.
It is a “human-readable” text format.
Can the good come without the bad? It cannot!
The “good” always arrives together with the “bad”: a text format is poor at large amounts of data. It cannot be updated in parts. It must be fully loaded into memory in order to read even the smallest piece of data.
Once again we see that the bad (that is, the complexity) has not gone anywhere.
If we want real persistence, an RDBMS comes to our aid. It controls this complexity, but it takes away the hierarchy, the human-readable format, and direct access to the settings (access only through a query language: SQL).
And so on.
So can programming be made simple? Is there a “silver bullet” that will let students create their own operating systems in a couple of days?
No, programming was and remains difficult.
Many technologies have emerged that hold different aspects of this complexity under control.
But it is impossible to say that programming has become simple.
Looking for ways to intelligently control complexity is good. Looking for a path without complexity is a waste of time.
PS: The examples in this article are intentionally simplified so that it does not become a multi-volume work. Of course, every topic touched on here is deeper and more multifaceted.