
There are only two programming paradigms: structured and object-oriented.

I prepared this article during the discussions of the Bridge pattern, but did not publish it then. I thought I had it figured out: Domain-Driven Design had been mentioned, and it seemed that the need for design and programming in the OOP style was not disputed by anyone. But over time I kept running into incomprehension. This will be a purely historical, theoretical article, without, of course, any attempt to cover the entire breadth of the topic. But it is a message to the young developer who reads the top-rated articles and cannot decide which principles and rules to follow, which are primary and which are secondary.

The title of this topic may now seem very controversial to many (and rather deliberately provocative, but for a good cause :)). Still, we will try to substantiate it here and to understand what properties a programming paradigm must have in order to have the right to be called a paradigm.

The only thing I ask: if you read it diagonally, be restrained in the comments.


What does Floyd tell us about paradigms?

The term "programming paradigm" was introduced by Robert Floyd (R. W. Floyd, [http://www.ias.ac.in/resonance/May2005/pdf/May2005Classics.pdf The Paradigms of Programming], Communications of the ACM, 22(8): 455-460, 1979; for a Russian translation, see the book: Lectures of Turing Award Laureates for the First Twenty Years (1966-1985), Moscow: MIR, 1993). In his 1979 lecture he says the following:

A familiar example of a programming paradigm is structured programming, which seems to be the dominant paradigm in programming methodology. It has two phases. In the first phase, top-down design, the problem is divided into a small number of simpler sub-problems. This gradual hierarchical decomposition continues until sub-problems arise that are simple enough to be dealt with directly. The second phase of the structured programming paradigm entails working upward from concrete objects and functions to the more abstract objects and functions used throughout the modules produced by the top-down design. But the paradigm of structured programming is not universal. Even its most ardent defenders would admit that it alone is not enough to make all difficult problems easy. Other, higher-level paradigms of a more specialized kind continue to be important. (This is not an exact translation but the author's compilation based on R. Floyd's lecture, adhering to his words as closely as possible; the wording has been modified and rearranged only to emphasize R. Floyd's main idea and present it clearly.)


He goes on to mention dynamic programming and logic programming, also calling them paradigms. Their peculiarity, however, is that they grew out of a specialized subject area: some successful algorithms were found, and corresponding software systems were built. He further says that programming languages should support programming paradigms. And he also points out that the structured programming paradigm is a higher-level paradigm:

The paradigm at an even higher level of abstraction than the structured programming paradigm is the construction of a hierarchy of languages, where programs in the highest-level language work with abstract objects and translate them into programs in the language of the next lower level.


Features of higher level paradigms

At the moment there is a tendency to treat all possible paradigms as standing at the same level, as interchangeable alternatives when creating software. But this is not so. Paradigms are not interchangeable.

As we can see, R. Floyd also distinguished higher-level paradigms from more specialized ones. What features of a paradigm allow us to say that it is higher-level? It is, of course, its applicability to a wide range of subject-domain tasks. But what makes a paradigm applicable to various subject-domain tasks? The question here is not about the peculiarities of a particular domain problem that can be solved by one approach or another. All the "paradigms" that propose to create algorithms in one specialized way or another are not paradigms at all; they are only particular approaches within the framework of a higher-level paradigm.

And there are only two high-level paradigms: structured programming and, at an even higher level, object-oriented programming. Moreover, these two paradigms contradict each other at the high level, while at the low level, the level of constructing algorithms, they coincide. Approaches (low-level paradigms) such as logic, dynamic, and functional programming may well be used within the framework of the structured programming paradigm, and some emerging specializations (aspect-oriented, agent-oriented, event-oriented) are used within the framework of the object-oriented programming paradigm. This does not mean that programmers need to know only one or two high-level paradigms; knowledge of other approaches is useful when a more specialized, lower-level problem is being solved.

But when you have to design software, you need to start with the higher-level paradigms and, if necessary, move down to the lower levels. And if the problem arises of which principles to prefer, the principles of lower-level paradigms must never dominate the principles of higher-level paradigms. For example, the principles of structured programming should not be observed to the detriment of the principles of object-oriented programming, and the principles of functional or logic programming should not violate the principles of structured programming. The only exception is the speed of algorithms, which is a problem of code optimization by compilers. Since it is not always possible to build perfect compilers, and interpreting higher-level paradigms is, of course, harder than interpreting low-level ones, sometimes one has to depart from the principles of the high-level paradigms.

But let us come back to our question: what makes a paradigm applicable to a wide range of subject-domain tasks? To answer it, we need to make a historical excursion.

Basics of Structured Programming Paradigm

We know that the ideas of structured programming arose after E. Dijkstra's report as early as 1965, in which he justified rejecting the GOTO operator. It was this operator that turned programs into unstructured ones (spaghetti code), and Dijkstra showed that programs can be written without this operator, with the result that they become structured.
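The idea can be sketched in a few lines of Python (a language that deliberately has no goto): any control flow that would once have been expressed with jumps to labels can be expressed with the structured forms of sequence, selection, and iteration. The function and data below are invented purely for illustration.

```python
def find_first_negative(values):
    """Structured search: one entry point, clearly marked exits, no jumps.

    In GOTO-era code the "found" and "not found" outcomes would typically
    be labels scattered through the routine; here selection sits inside
    iteration and each construct expresses a single idea.
    """
    for index, value in enumerate(values):
        if value < 0:
            return index   # selection nested in iteration
    return -1              # explicit "not found" result instead of a jump


print(find_first_negative([3, 5, -2, 7]))  # 2
print(find_first_negative([1, 2]))         # -1
```

The point is not the trivial algorithm but the shape: the reader can verify the routine by reading it top to bottom, which is exactly the property Dijkstra argued GOTO destroys.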

But theory is one thing, practice another. In this sense it is interesting to look at the situation as of 1975. It can be seen clearly in E. Yourdon's book ([http://www.az-design.ru/index.shtml?Projects&AzBook&src/005/02YE000 Yourdon E., Techniques of Program Structure and Design, 1975]). It is important to consider it because now, more than 30 years later, principles that were already well known then are being rediscovered and elevated to a new status. In the process, the historical context and the hierarchy of importance of these principles, which is primary and which is secondary, have been lost. This amorphous situation characterizes the state of today's programming very well.

But what was the situation then? As Yourdon describes it, everything begins with the answer to the question: "What does it mean to write a good program?" Here is the first criterion, the questions a high-level programming paradigm must answer. If a paradigm does not answer this question directly but instead tells you how to obtain certain interesting characteristics of your program, then you are dealing with a low-level paradigm, a programming approach.

At the dawn of programming there was an approach of evaluating programmers by the speed with which they wrote programs. Does that mean such a programmer writes good programs? Does he enjoy special respect from management? If the answer to the latter question is affirmative, then all questions of improving programming are of merely academic interest. But management may also notice that some super-programmers can produce programs very quickly, or write very efficient programs, yet these programs sometimes remain poorly structured: impossible to understand, maintain, or modify. And time is wasted on that, too.

A remarkable and quite characteristic dispute between programmers:
* Programmer A: “My program is ten times faster than yours, and it takes three times less memory!”
* Programmer B: “Yes, but your program is not working, but mine is working!”

But programs are constantly becoming more complex, so it is no longer enough for us that a program simply works. We need certain methods to verify the correctness of the program and of the programmer. Moreover, this is not about testing the program, but about a systematic procedure for checking the correctness of a program in the sense of its internal organization. That is, already then, in modern terms, they were talking about code review.

In addition, they already talked about the flexibility of a program: the ease of changing, extending, and modifying it. To achieve it, you must constantly answer questions of a certain type: "What will happen if we want to expand this table?", "What happens if one day we want to define a new change to the program?", "What if we have to change the format of this output data?", "What will happen if someone decides to enter data into the program in a different way?"

They also talked about the importance of interface specifications, i.e., a formalized approach to specifying the inputs, functions, and outputs that each module must implement.

In addition, central attention was paid to the size and independence of modules. Module independence was not treated as a whole but broken down into individual factors:
1. The logical structure of the program, i.e., the algorithm. If the whole program depends on some special approach, in how many modules will changes be needed when the algorithm changes?
2. The arguments, or parameters, of a module, i.e., changes to its interface specification.
3. Internal tables, variables, and constants. Many modules depend on shared tables; if the structure of such tables changes, we can expect the modules to change as well.
4. The structure and format of the database. This dependence is largely similar to the dependence on shared variables and tables mentioned above, with the difference that, from a practical point of view, it is more convenient to treat the database as independent of the program.
5. The program's modular control structure. Some people write a module without really thinking about how it will be used. But if the requirements change, what part of the module's logical structure will we have to change?
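Factor 3 in particular is easy to show in code. Below is a minimal Python sketch (the class, its data, and its callers are all invented for illustration) of a module that hides its internal table behind a stable interface, so that callers do not have to change when the table's layout does:

```python
class TaxTable:
    """A module hiding an internal rate table behind a stable interface."""

    def __init__(self):
        # Internal representation: could change from a list of tuples to a
        # dict, a file, or a database lookup without touching any caller.
        self._brackets = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]

    def rate_for(self, income):
        """Interface specification: income (number) -> marginal rate (float)."""
        for upper_bound, rate in self._brackets:
            if income < upper_bound:
                return rate


def tax_due(income, table):
    # This caller depends only on rate_for(); it survives any change
    # to the structure of TaxTable._brackets.
    return income * table.rate_for(income)


print(tax_due(30_000, TaxTable()))  # 6000.0
```

If `tax_due` instead read `table._brackets` directly, it would fall under factor 3: every change to the table's structure would ripple into every such caller.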

These and many other aspects (which we have not considered here) together form the idea of structured programming. Attention to these aspects is what makes structured programming a high-level paradigm.

Basics of the Object-Oriented Programming Paradigm

As we have seen, all the principles of organizing good programs are covered by structured programming. Could the emergence of one more principle, or a group of previously unknown principles, of writing good programs change the paradigm? No. It would only expand the methods and ideology of writing structured programs, i.e., the structured programming paradigm.

But if high-level paradigms are meant to answer the question of how to write a good program, and the appearance of a new technique or the consideration of new factors does not go beyond the boundaries of structured programming (since a program remains structured regardless of the number of techniques and factors), what, then, would go beyond the boundaries of this paradigm? Indeed, as we know from science, paradigms do not change quickly at all. Scientific revolutions occur rarely, when the previous paradigm simply cannot, in practice, explain the observed phenomena from the existing theoretical standpoint. We had a similar situation with the change from the structured to the object-oriented paradigm.

It is generally recognized that the reason for the emergence of the object-oriented paradigm was the need to write ever more complex programs, while the structured programming paradigm has a certain limit beyond which developing a program becomes unbearably difficult. Here, for example, is what Herbert Schildt writes:

At each stage in the development of programming, methods and tools appeared to curb the growing complexity of programs. And at each such stage the new approach absorbed all the best from the previous ones, marking progress in programming. The same can be said about OOP. Before OOP, many projects reached (and sometimes exceeded) the limit beyond which the structured approach to programming became unworkable. Therefore, to overcome the difficulties associated with program complexity, the need for OOP arose. ([http://www.williamspublishing.com/Books/978-5-8459-1684-6.html Herbert Schildt, C# 4.0: The Complete Reference, 2011])


To understand why it was object-oriented programming that allowed us to write more complex programs and practically removed the problem of hitting a complexity limit, let us turn to one of the founders of OOP, Grady Booch ([http://www.helloworld.ru/texts/comp/other/oop/index.htm Grady Booch, Object-Oriented Analysis and Design]). He begins his explanation of OOP with what complexity means and which systems can be considered complex. That is, he approaches the question of writing complex programs deliberately. Then he proceeds to the connection between complexity and the human capacity to grasp that complexity:

There is another major problem: the physical limitations of a person's ability to work with complex systems. When we begin to analyze a complex software system, it contains many components that interact with each other in various ways, and neither the parts of the system nor the methods of their interaction reveal any similarities. This is an example of unorganized complexity. When we start to organize the system in the process of designing it, we have to think about many things at once. Unfortunately, one person cannot keep track of all of this at the same time. Experiments by psychologists, such as Miller, show that the maximum number of units of information the human brain can simultaneously track is approximately seven, plus or minus two. Thus we face a serious dilemma. The complexity of software systems is increasing, but our brain's ability to cope with this complexity is limited. How can we get out of this predicament?


Then he talks about decomposition:

Decomposition: algorithmic or object-oriented? Which decomposition of a complex system is more correct: by algorithms or by objects? There is a catch in this question, and the right answer is that both aspects are important. Decomposition by algorithms focuses attention on the order of events, while decomposition by objects emphasizes the agents that are either the objects or the subjects of actions. However, we cannot design a complex system in both ways at once. We have to start dividing the system either by algorithms or by objects and then, using the resulting structure, try to look at the problem from the other point of view. Experience shows that it is more useful to begin with object decomposition. Such a start helps us better cope with organizing the complexity of software systems.
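The contrast Booch describes can be sketched in a few lines of Python. The domain (a sensor with an alarm threshold) and all names are invented for illustration; both halves produce the same result but organize it differently:

```python
# Algorithmic decomposition: the program is a sequence of processing steps.
def read_reading(raw):
    return float(raw)                       # step 1: parse input

def check_threshold(value, limit):
    return value > limit                    # step 2: evaluate condition

def report(value, alarmed):
    return f"{value}: {'ALARM' if alarmed else 'ok'}"   # step 3: format output


# Object decomposition: the program is a set of interacting domain agents.
class Sensor:
    def __init__(self, limit):
        self.limit = limit
        self.value = 0.0

    def accept(self, raw):
        self.value = float(raw)

    @property
    def alarmed(self):
        return self.value > self.limit


class Display:
    def show(self, sensor):
        return f"{sensor.value}: {'ALARM' if sensor.alarmed else 'ok'}"


value = read_reading("42.5")
print(report(value, check_threshold(value, 40)))  # 42.5: ALARM

sensor = Sensor(limit=40)
sensor.accept("42.5")
print(Display().show(sensor))                     # 42.5: ALARM
```

In the first half the unit of thought is a step in a procedure; in the second it is an agent (`Sensor`, `Display`) that a domain specialist would recognize, which is exactly why Booch recommends starting from objects.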


Thus, he too gives preference to object-oriented principles over structural ones, while emphasizing the importance of both. In other words, structural principles must be subordinated to object-oriented ones if the human brain is to cope with the complexity of the problems that arise. He further emphasizes the importance of the model:

The importance of building a model. Modeling is widespread in all engineering disciplines, largely because it implements the principles of decomposition, abstraction, and hierarchy. Each model describes a certain part of the system under consideration, and we, in turn, build new models on the basis of the old ones, in which we are more or less confident. Models allow us to control our failures. We evaluate the behavior of each model in ordinary and unusual situations, and then carry out appropriate modifications if something does not satisfy us. The most useful models are those that focus on objects found in the subject domain itself and form what we call an object-oriented decomposition.


Now, if you look more closely, it turns out that the object-oriented paradigm is nothing other than modeling in general, whose main aspect was most clearly expressed by Stanislaw Lem:

Modeling is an imitation of Nature that takes into account a few of its properties. Why only a few? Because of our inability? No. First of all, because we must protect ourselves from an excess of information. Such an excess, however, may also mean its inaccessibility. An artist paints pictures, but although we could talk to him, we will not learn how he creates his works. He himself does not know what happens in his brain when he paints a picture. The information about this is in his head, but it is not available to us. Modeling must simplify: a machine that can paint a very modest picture would tell us more about the material basis of painting, that is, the brain, than such a perfect "model" of the artist as his twin brother. The practice of modeling involves taking some variables into account and rejecting others. The model and the original would be identical if the processes occurring in them were the same. This does not happen. The results of the model's development differ from the actual development. Three factors can contribute to this difference: the simplification of the model compared to the original, properties of the model that are alien to the original, and, finally, the indeterminacy of the original itself. (a fragment of Summa Technologiae, Stanislaw Lem, 1967)


Thus, S. Lem speaks of abstraction as the basis of modeling. And abstraction is the main feature of the object-oriented paradigm. Booch writes about this:

Reasonable classification is undoubtedly a part of any science. Michalski and Stepp argue: "an integral task of science is to construct a meaningful classification of observed objects or situations. Such a classification greatly facilitates the understanding of the main problem and the further development of scientific theory." Why is classification so difficult? We explain this by the absence of a "perfect" classification, although, naturally, some classifications are better than others. Coombs, Raiffa, and Thrall argue that "there are as many ways of dividing the world into object systems as there are scientists who undertake the task." Any classification depends on the point of view of the subject. Flood and Carson give an example: "The United Kingdom... economists may see it as an economic institution, sociologists as a society, environmentalists as a dying corner of nature, American tourists as a tourist attraction, Soviet leaders as a military threat, and finally, the most romantic of us, the British, as the green meadows of our homeland."


He then talks about the choice of key abstractions, the ones we actually need:

Searching for and choosing key abstractions. A key abstraction is a class or object that is part of the vocabulary of the problem domain. The greatest value of key abstractions is that they define the boundaries of our problem: they highlight what is included in our system and is therefore important to us, and eliminate what is superfluous. The task of identifying such abstractions is specific to the problem domain. As Goldberg puts it, "the right choice of objects depends on the purpose of the application and the granularity of the information being processed."

As we have already noted, identifying key abstractions involves two processes: discovery and invention. We discover abstractions by listening to domain experts: if an expert talks about an abstraction, it is usually really important.
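A small, hedged illustration of what "vocabulary of the problem domain" means in code. The domain (a lending library) and every name in it are invented for this sketch; the point is that a librarian could read the class and method names and recognize them without a programmer's help:

```python
# Key abstractions drawn from a (hypothetical) library domain's vocabulary.
class Book:
    def __init__(self, title):
        self.title = title
        self.on_loan = False


class Reader:
    def __init__(self, name):
        self.name = name
        self.borrowed = []

    def borrow(self, book):
        # A domain rule, stated in domain terms: a book already on loan
        # cannot be borrowed again.
        if book.on_loan:
            raise ValueError(f"'{book.title}' is already on loan")
        book.on_loan = True
        self.borrowed.append(book)


reader = Reader("Ivanov")
book = Book("Summa Technologiae")
reader.borrow(book)
print(book.on_loan)  # True
```

`Book` and `Reader` are abstractions the expert would name unprompted; a class like `LoanRecordManagerImpl` would not be, which is the discovery-versus-invention distinction the text describes.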


So the object-oriented paradigm becomes a high-level paradigm and takes precedence over the principles of the structured programming paradigm, because it deals with modeling reality: it builds domain models in the language of the specialists in those domains. If you neglect this for the sake of writing a good program that is easy to modify and extend, with clear interfaces and independent modules, you fall back to the level of the structured programming paradigm. Your program will be good in every respect, but it will not be understandable, since it will not correspond to reality; it will be explained in terms known only to you, and a specialist who knows the subject area will not be able to understand the program without your help. In the end, the complexity will decrease only within a very narrow range, even though you have organized a good program. It is a program, but not a model. The absence of a model, or only a superficial sketch of one, will blow up your good program from the inside and prevent you from developing and maintaining it in the future. When you introduce classes whose abstractions do not exist, when those classes are purely technical and have nothing to do with the subject area, when they are introduced only to simplify the flow of interaction between other classes, your software grows "a beard", and if you do not keep refactoring such places, at some point the development of your software will stop and become impossible: you will have hit the limit of structured programming (did you think that merely using classes and objects protected you from that?).
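The difference between a purely technical class and a domain abstraction can be shown in a deliberately exaggerated Python sketch (every name here is invented):

```python
# Purely technical glue: this class exists only to route values between
# other objects; no domain expert would recognize or name it.
class DataFlowCoordinatorHelper:
    def __init__(self, source, sink):
        self.source, self.sink = source, sink

    def pump(self):
        self.sink.append(self.source.pop())


# A domain abstraction: "Order" is a word the business actually uses,
# and its behavior states a domain rule, not a wiring detail.
class Order:
    def __init__(self, lines):
        self.lines = lines  # list of (quantity, unit_price)

    def total(self):
        return sum(qty * price for qty, price in self.lines)


order = Order([(2, 10.0), (1, 5.0)])
print(order.total())  # 25.0
```

A system can tolerate a few helpers like the first class, but when they multiply and displace classes like the second, the code stops being a model of anything, which is the failure mode described above.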

upd. Having thought it over: the topic is touchy, so I will not comment. I have laid out the facts in the article and do not want to descend to the level of a holy war. If this did not make you think, well, then no luck this time. What would be genuinely constructive is if you wrote your counter-arguments in a separate article. I do not undertake to destroy mass stereotypes.

Oh, and to make it clear: I decided to publish this after the discussion in "Will we program Rosenblatt's perceptron?", where it became obvious that functional programming, given a bad model in OOP terms, works worse than anything else. And the boasts about super speed are a fiction; in fact, the right model is what matters. For some tasks (there are comparatively few of them) functional programming can be successful, but it should not be used everywhere, including where it gives nothing good. Or, put differently: can you write the piece discussed there ONLY in a functional style, so that it works faster than with OOP events?

Source: https://habr.com/ru/post/140613/

