There is a problem with how the SOLID design principles (formulated by Robert Martin) are usually described and interpreted. Many sources give their own definitions and even usage examples. Studying them and trying to apply them myself, I kept catching myself thinking that the magic of their use was never adequately explained. So, trying to see the internal gears, to understand them (and for me that means to internalize them), I laid them out on my own "shelves of terms". Perhaps it will be useful to someone else as well.
Let's begin "juggling the shelves" of this design approach.
A single piece of code should change only in service of a single purpose. If a section of code implements two tasks and changes for different reasons, that section should be duplicated, one copy per purpose. This point deserves emphasis because it requires a departure from the generally accepted principle of eliminating duplication.
The purpose of this principle is to eliminate the implicitly introduced errors that arise because the following invariant holds during development for a section of code, a procedure, a class, or a component (hereinafter [component], combining these concepts):
if a [component] has places of use that are irrelevant to the task the programmer is currently solving, it is very easy for him to forget to check that a change made to the [component] is compatible with those places of use.
Therefore, all places of use should lie within a single responsibility zone [Single Responsibility]; that is, they should all change and be taken into account together for any task the programmer solves.
The principle applies equally to a section of code and to a component, library, program, or software package used in several places.
Many sources present a class with only one "function" as the SRP ideal, and a "god object" class combining all the functions of an application as the antipattern. IMHO, a class with only one "function" is a demand for premature optimization of the code architecture: it pushes you to write sets of classes (code entities) from scratch, forgetting that when there is no more than one place of use, the programmer can evaluate a small amount of locally interacting code (within one class) faster than he can analyze the external communication of separate code entities, each responsible for its own "function". Nor is a "god object" in a tiny application such a grave crime: it lets development start, gathering all the necessary entities and, as they are written down, separating them from the external objects of the standard library and external modules (creating a living cell and isolating it with a membrane). As the project grows and develops, many techniques help maintain SRP; one of them is division into classes and minimization of the number of "functions" each class is responsible for (cell division and specialization within an organism).
Here I would like to list a set of techniques for maintaining SRP, but that work is not yet finished (I hope I will get around to it). Obvious areas in which to look for such techniques include:
Code development is optimally planned so that, to implement a new task, the programmer only needs to add new code, while the old code does not need to be changed. The code must be open (Open) for addition and closed (Closed) for change.
The goal of this principle is to minimize labor costs and eliminate implicitly introduced errors, given that the following invariant holds during development:
it is advisable to choose the implementation variant of a task that minimizes the programmer's time spent.
In software development practice, the cost of adding is most often much less than the cost of changing, which makes the benefit of the [Open-Closed] principle clear. At the same time, there are many techniques for keeping the program architecture in a state where implementing a new task reduces to merely adding [components]. This work on the architecture also takes the programmer's time, but practice shows that in large projects it costs much less than the approach of changing old procedures. And, of course, this description of development is an idealization: a task is almost never implemented purely by adding or purely by changing. Real-world tasks use a mixture of both approaches, but OCP emphasizes the benefit of leaning on addition.
And here I would like to list a set of techniques for maintaining OCP. Obvious areas in which to look for such techniques include:
This principle constrains how an implementation may extend the base interface [base], stating that every implementation of the base interface must behave the same way as the base interface. The base interface, in turn, fixes the behavior expected at its places of use. Any difference between the implementation's behavior and the expected behavior fixed by the base interface can lead to a violation of invariant [2].
This principle builds on and refines the design method based on abstraction. In this approach, an abstraction is introduced: some basic properties and behavior common to many situations are fixed. For example, the [component-procedure] "Move to the previous position" covers the situations "cursor in text", "book on a shelf", "element in an array", "feet in a dance", and so on. For this [component], certain prerequisites and behaviors are fixed (often from everyday experience and without formalization), for example: "there is a moving object", "it can be repeated several times", "the elements are ordered", "the elements have fixed positions". LSP requires that, when a new usage situation is added, all the prerequisites and restrictions of the base are satisfied for the [component]. The situation "a grain in a sugar bowl" cannot be described by this abstraction: the grain, of course, has a position, there are positions the grain previously occupied, and it could be moved back to them, but there are no fixed positions of elements.
The goal of this principle is to eliminate implicitly introduced errors, given that the following invariant holds during development:
a developed [procedure] implementing the base must satisfy all of the base's restrictions, including the hard-to-track implied ones (given informally).
Very often this principle is described using the example of a Rectangle ([base]) and a Square (implementation), i.e. the situation class CSquare : public CRectangle. The [base] declares operations on the width and the height (Set/Get Width, Set/Get Height). In the CSquare implementation, these Set operations are forced to change both dimensions of the object. I have always missed the explanation that the [base] "informally" carries the following restriction: width and height can be used independently. The CSquare implementation violates it, and at the places of use a simple sequence of actions relying on that independence, r.SetWidth(r.GetWidth()*2); r.SetHeight(r.GetHeight()*2); will, for the CSquare implementation, increase both dimensions 4 times instead of the 2 times expected for CRectangle.
IMHO, this principle points to the difficulty of tracking such informal constraints, which, given the enormous utility and the high frequency of the "base-implementation" development approach, deserves special attention.
These two principles are very similar in their requirements. Both implicitly assume the utility of using the smallest possible base interface as the means of interaction between two [components]: a "client" and a "server" (the names are chosen purely for identification). The shared information used by the [components] is concentrated in the base interface. One [component] (the "server") provides an implementation of the base interface; the other [component] (the "client") calls that implementation.
The goal of these principles is to minimize the dependencies between components, allowing their code to change independently as long as the base interface does not change. Independent component changes reduce complexity and effort, provided the components satisfy the requirements of SRP. This approach is possible because the following invariant holds during development:
the places of use of the base [component] do not require re-verification after changes are made to the [component] implementation.
At the same time, it is clearly advisable to minimize the "size" of the base interface by discarding unused functionality and restrictions, thereby constraining the [component] implementation less under LSP.
ISP emphasizes the need to separate (segregate) the "server" interface when not all of its published functionality is used by a given "client". In that case, only the [base] needed by this client is extracted, and the amount of mutually restrictive shared information is minimized.
And here I would like to list a set of techniques for maintaining DIP. Obvious areas in which to look for such techniques include:
Returning to the title, I will explain why I chose "not to understand". The negation was added in order to emphasize, through mistakes I have suffered myself, a rule I find very useful (IMHO): it is better not to understand a technology, and therefore not to use it, than to misunderstand it, take it on faith, spend your resources applying it, and in the end get no useful output except complacency and the chance to brag about being involved in a fashionable technology.
Thanks for your attention.
Source: https://habr.com/ru/post/444932/