As we discussed last time, design is not simple: you constantly have to keep a bunch of different options in mind and look for a compromise among many conflicting requirements, tearing your elegant solution to pieces. On the one hand, I want the solution to be easy to maintain, readily extensible, and highly performant, and it should be understandable not only to its author but to at least one other person; I want it to use little memory and not violate any of the 100,500 principles of OOP; and, most importantly, we want to finish it at least this year, although the manager keeps insisting it should have been ready a month ago.
Many of these criteria are not very compatible with each other, so sooner or later we come to the conclusion that good design is exactly that: trying to squeeze through dozens of conflicting requirements and find a reasonable compromise that best satisfies the weightiest of them, without forgetting that the weight of these requirements may also change over time.
Because of this ambiguity, you can find many examples where two different teams prefer different criteria: one group may consider the security of a solution the more important criterion, while the other prefers efficiency. This ambiguity leads to a whole zoo of technologies being used across projects, all of which can differ greatly in implementation details (no need to look far: even within the .NET Framework it is easy to find different solutions to the same tasks).
But enough philosophizing; let's look at some more or less specific examples.
Efficiency and maintainability
I think one of the most common compromises most developers face is the compromise between the efficiency (performance) of a solution and its maintainability.
Let's look at this example.
internal class SomeType
{
    private readonly int _i = 42;
    private readonly string _s = "42";
    private readonly double _d;

    public SomeType()
    {
    }

    public SomeType(double d)
    {
        _d = d;
    }
}
This is a fairly typical example: a class contains default values initialized at the point of field declaration. However, this leads to some bloating of the code, since the C# compiler converts it into something like this:
public SomeType()
{
    _i = 42;
    _s = "42";
}

public SomeType(double d)
{
    _i = 42;
    _s = "42";
    _d = d;
}
Thus, all fields initialized at declaration get their values even before the base class constructor is called. Thanks to this, we can access them even from virtual methods called from the base class constructor (but in any case, do not use this trick!), and all readonly fields are guaranteed to be initialized. However, we get this behavior at the cost of "bloated" code: the initialization is duplicated in every constructor.
Jeffrey Richter, in his excellent book "CLR via C#", gives the following advice: since field initialization at declaration can lead to code bloat, consider extracting a separate constructor that does all the basic initialization and calling it explicitly from the other constructors.
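A minimal sketch of that pattern, applied to the class above (the accessor properties are added purely so the result can be observed; they are not part of the original example):

```csharp
internal class SomeType
{
    private readonly int _i;
    private readonly string _s;
    private readonly double _d;

    // The "basic" constructor: all default initialization lives here, once.
    public SomeType()
    {
        _i = 42;
        _s = "42";
    }

    // Other constructors chain to it with ": this()" instead of the compiler
    // duplicating the field-initializer code into each constructor body.
    public SomeType(double d)
        : this()
    {
        _d = d;
    }

    // Accessors added for illustration only.
    public int I { get { return _i; } }
    public string S { get { return _s; } }
    public double D { get { return _d; } }
}
```

Note that with constructor chaining the readonly fields are still assigned exactly once per construction; the trade is that anyone reading `SomeType(double)` now has to follow the `: this()` call to find the default values.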
Obviously, here we face the classic compromise of readability and maintainability versus efficiency. In general, Richter's advice is quite reasonable; just do not follow it blindly. When resolving this dilemma, we must clearly understand whether the performance gain is worth the reduced readability (after all, you now have to find the right constructor every time you want to see the default values) and maintainability (what if someone adds a new constructor and forgets to call the basic one?). In most cases the answer will be: "No, it is not worth it!" But if the class is a library class, or is simply instantiated a million times, my answer will not be so unequivocal.
NOTE
You should not think that I am disputing Jeffrey Richter's opinion; you just need to understand clearly that Richter is cut from a different cloth than most of us. He is used to solving lower-level tasks where every millisecond counts, but this is not so important for most application developers.
Safety and efficiency
Another very common compromise in design is the choice between a struct and a class (between a value type and a reference type). On the one hand, structs up to a certain size (on the order of 24 bytes on the x86 platform) can significantly improve performance due to the absence of allocation in the managed heap. On the other hand, with mutable value types we can run into a number of very non-trivial problems, since the behavior may be far from what many developers assume.
NOTE
Many people consider mutable value types the greatest evil of modern times. If you do not understand what they are talking about, or simply disagree with this opinion, it is worth reading the article "On the harm of mutable value types"; perhaps after that your opinion will change ;)
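To make the danger concrete, here is a minimal sketch of the classic "lost update" with a mutable struct (the `Counter`/`Holder` names are invented for illustration):

```csharp
using System;

// A mutable struct: the usual source of copy-semantics surprises.
struct Counter
{
    public int Value;
    public void Increment() { Value++; }
}

class Holder
{
    // The property getter returns a COPY of the struct, not a reference.
    public Counter Counter { get; set; }
}

class Program
{
    static void Main()
    {
        var h = new Holder();

        // This calls Increment on a temporary copy; the stored value is
        // unchanged. (The compiler rejects `h.Counter.Value++` outright
        // with CS1612, but a mutating method call compiles silently.)
        h.Counter.Increment();

        Console.WriteLine(h.Counter.Value); // prints 0, not 1
    }
}
```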
Let's look at a more specific example. When implementing a collection's enumerator, the author must decide whether to implement it as a class or as a struct. In the first case we get a much safer solution (after all, an enumerator is a mutable type), and in the second, a more efficient one.
So, for example, the enumerator of the List<T> class is a struct, which means that in the following code snippet you will get behavior that will be unexpected to most of your colleagues:
var x = new { Items = new List<int> { 1, 2, 3 }.GetEnumerator() };

// Infinite loop printing 0: every access to x.Items returns a fresh
// copy of the struct enumerator, so MoveNext never advances the original.
while (x.Items.MoveNext())
{
    Console.WriteLine(x.Items.Current);
}
Most developers who see such behavior are quite reasonably outraged at the stupidity of the comrades from Redmond, who clearly decided to mock their fellow programmer. However, things are not so simple.
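For contrast, a sketch of the ordinary, safe way to consume the same enumerator: storing the struct in a local variable means every MoveNext and Current call operates on that one variable rather than on a fresh copy.

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var list = new List<int> { 1, 2, 3 };

        // A local variable holds the struct itself, so MoveNext/Current
        // both see and update the same enumerator state.
        var e = list.GetEnumerator();
        while (e.MoveNext())
        {
            Console.WriteLine(e.Current); // prints 1, 2, 3
        }
    }
}
```

This is essentially what a foreach loop does for you, which is why the struct enumerator is harmless in everyday code and bites only when it ends up stored behind a property, a readonly field, or an interface.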
In the life of any collection, sooner or later there comes a moment when someone wants to look at its contents. In some cases (for example, arrays and lists) an indexer can be used for this purpose, but in most cases the collection is iterated with a foreach loop (directly or indirectly). To most of us, one extra heap allocation per loop seems trivial, but the .NET platform is quite universal, and loops are among the most common constructs in modern programming languages. And if all this happens not on a four-core desktop processor but on a mobile device, this decision by the BCL developers no longer seems so absurd.
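A quick way to see the compiler's choice in action (a sketch; the boxing itself is not directly observable, but the enumerator's type is):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var list = new List<int> { 1, 2, 3 };

        // When foreach iterates a variable typed as List<int>, the compiler
        // binds to this public nested struct: no per-loop heap allocation.
        List<int>.Enumerator e = list.GetEnumerator();
        Console.WriteLine(e.GetType().IsValueType); // True

        // When the same list is iterated through IEnumerable<int>, the
        // struct enumerator is boxed into an object on the managed heap.
        IEnumerable<int> asInterface = list;
        using (IEnumerator<int> boxed = asInterface.GetEnumerator())
        {
            while (boxed.MoveNext())
            {
                Console.WriteLine(boxed.Current); // prints 1, 2, 3
            }
        }
    }
}
```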
The choice between a class and a struct (especially if the struct is mutable) is a very serious decision; the designer must understand exactly what he gains in one case and what he loses in the other.
Simplicity vs versatility
When it comes to higher-level design, the problem is often one of choice: how universal should our solution be, or is it enough to restrict ourselves to solving the specific task and only later move to a generalized solution?
Many programmers and architects intuitively believe that the best way to cope with changing requirements is universality: if today the customer needs only a toothpick, let's immediately make a Swiss Army knife with a toothpick function, just in case the requirements change!
In fact, neither we nor our customers need a universal solution in and of itself; all we need is to give our solution a certain flexibility that will allow it to be adapted when the requirements change. By and large, nobody cares how this flexibility is achieved: through simplicity or through universality; whether it takes a change in a configuration file, or adding or changing a couple of classes. All that matters is how much effort the team will have to spend on the new feature and what the consequences of that decision will be (whether the entire system collapses after such a change).
When it comes time to make this choice, deciding whether or not to build a "Swiss Army knife" from the very beginning, I tend toward a compromise. As I wrote in the note on reuse, the most effective approach to this problem is a simple solution from the start, which is generalized in one of the subsequent iterations, when it becomes necessary.
Using clever architectural constructs is the same premature optimization as unjustified use of clever language constructs. Universality for the most part implies additional complexity, and if your extension points are not aimed where the wind of change blows, you will end up with nothing but an overly complex and unnecessary solution.
When designing classes and methods, I use the following rule: any module, class, or method must "expose" the minimum amount of information. This means that by default all classes and methods should have the smallest possible scope of visibility: classes internal, methods private. It sounds like a statement from Captain Obvious, but very often we expose "well, just one more method; it won't get any worse". The initial solution should be as simple as possible: the fewer dependencies our clients have on our implementation, the easier life is for those clients and the easier it is for us to change our classes. Remember that encapsulation is not only about hiding a class's or module's implementation details; it also protects clients from unnecessary details.
Libraries and usability
There is a certain type of task whose solution I would not entrust to any single person. No, the point is not that I would not entrust the task to anyone but myself; there are simply tasks that are solved badly by one person, regardless of his level. Many tasks are solved much better jointly, but there is one kind of task for which a "second opinion" is simply necessary: the development of libraries and frameworks.
If you leaf through the wonderful book "Framework Design Guidelines", it becomes clear from the very first pages that the priorities of a library developer are shifted considerably compared to those of an application developer. If the application developer's main criteria are simplicity, ease of maintaining the code, and reducing time-to-market, then the library developer has to think not so much about himself as about his main client: the library's user.
The library developer may disregard all the principles of OOP if they contradict the main principle of a library: simplicity and intuitive use. A library may be quite complex to maintain, since every decision added to it can never (or almost never) be changed.
If, while designing an application, we can afford to make a mistake and change a dozen even public interfaces, everything becomes much more complicated once your class has a couple dozen external users. Martin Fowler has a wonderful article called "Published vs Public Interfaces", in which he draws a clear distinction between these two concepts. The cost of changing any "published" interface increases dramatically, which means a mistake made in the first versions of a library can and will haunt its developer for many years (a great example was recently described by Eric Lippert in "Foolish Consistency is Foolish"). It is for this reason that Microsoft is in no hurry to make hundreds, if not thousands, of very useful classes in the .NET Framework public: each new public class significantly increases the cost of maintenance.
The resolutions of all the compromises described above differ sharply when we move from applications to libraries. Micro-optimizations, extensibility, dirty hacks, the problem of breaking changes, consistency (even to the detriment of many other important factors): all of this is encountered in libraries all the time. That is why, for most advice related to software development, you need to clearly understand that it most likely applies to application development rather than to specialized libraries.
Conclusion
Most of the compromises we face can be divided into several categories. First, you need to understand clearly whether you are dealing with a framework (or a widely used reusable library) or an application. These two worlds are quite different, and priorities shift considerably when choosing between two compromise solutions.
Another very important criterion when choosing one solution or another is an understanding of long-term versus short-term benefits. One solution may be good for today's problem but will certainly add a number of problems in the future. Do not forget about "technical debt", and that such metaphors can convince not only colleagues but also the customer of the importance of the "long-term perspective" when making a particular decision.
And finally, do not forget that programming is an applied discipline, not an end in itself; experience, pragmatism, and common sense are three very useful tools for solving most problems.