
Big Ball of Mud, part 2

Continuation of the translation of the article "Big Ball of Mud".

THROWAWAY CODE


Also known as:
QUICK HACK
KLEENEX CODE
DISPOSABLE CODE
SCRIPTING
KILLER DEMO
PERMANENT PROTOTYPE
BOOMTOWN

A homeowner might build a temporary shed or carport, fully intending to tear it down and replace it with something more permanent. As time shows, such structures often live far longer than originally planned. Perhaps the money for a replacement never materializes. Or, once the new structure exists, there is always the temptation to keep using it "just a little longer."
The same thing happens when prototyping a system: you are not overly concerned with how elegant or efficient your code is. You know you only need the code to demonstrate a working prototype. Once the demo is done, the code will be thrown away and rewritten more carefully. As the demonstration approaches, there is an overwhelming temptation to load it with flashy but essentially useless features. Sometimes this strategy is a little too successful: the client, instead of sponsoring the next phase of the project, declares himself satisfied with the prototype.


You need to fix a small problem right away, quickly put together a prototype, or prove a concept.

Time, or rather the lack of it, is often the decisive force that drives programmers to write THROWAWAY CODE. Writing a good, thoughtful, well-engineered program takes more time than we have, or more time than the problem is worth. Very often programmers dash off a program with minimal functionality, promising themselves that they will produce a more refined, elegant version soon. They know perfectly well that building reusable components makes future problems much easier to solve, and that a well-designed architecture yields a system that is easy to maintain and extend.

Quick, dirty code is usually intended as a temporary measure. More often than not, though, the time to come back and do it properly never arrives. Weak spots accumulate in the code while the program, meanwhile, keeps growing rapidly.

Therefore: produce, by any means available, simple, expedient, throwaway code that adequately solves the problem at hand.

Throwaway code is often written as an alternative to reusing someone else's more complex code. When a deadline looms on the horizon, the certainty that you can cobble together a crude, sloppy program that works outweighs the unknown cost of learning and mastering someone else's library or framework.

Typically, programmers are not domain experts, especially at the start of their careers. Use case diagrams or CRC cards [Beck and Cunningham, 1989] can help them acquire this knowledge. Still, nothing helps a team understand the domain better than building a prototype.

When you build a prototype, there will always be someone who says, "that's good enough, ship it." One way to keep a prototype out of production is to write it in a language, or with a tool, that makes a production version impossible.

Advocates of extreme programming [Beck, 2000] often build quick, throwaway prototypes known as spike solutions. Prototypes help us understand how to work around problems, but a prototype should never be mistaken for a good design [Johnson and Foote, 1988].

Not every program needs to be a palace. A simple throwaway program is like a tent city, or a boomtown that sprang up around a newly discovered deposit; such a town has no need for fifty-year infrastructure, because in five years it will be a ghost town anyway.

The real problem with throwaway code appears when it is not thrown away after all.

Writing THROWAWAY CODE is an almost universal practice. Every software developer, at any level of experience and skill, has used this approach at least once. In the patterns community, for example, there are two pieces of "quick and dirty" code that have survived for a remarkably long time: the PLoP online registration code and the Wiki-Wiki Web pages.

In fact, the original EuroPLoP/PLoP/UP online registration code was a distributed, web-based application running on four different machines on two continents. Conference information was managed by a machine in St. Louis, while registration data was stored on machines in Illinois and Germany. The system could generate registration reports and even update the online list of attendees on the fly. It all began in 1995 as a "quick and dirty" collection of HTML, C demo code, and csh scripts. It was expected to be little more than an experiment, but, as so often happens, the project outgrew its creators' expectations. Today it is still the same collection of HTML, the same C demo code, and the same csh scripts. It is a good example of how "quick and dirty" code can take on a life of its own.

The original C code and scripts contained perhaps fewer than thirty original lines of code. Many lines were simply copied and pasted, differing only in the text they generated or the fields they checked.

Here is an example of one of the scripts that generated the attendance report:

echo "<H2>Registrations: <B>" `ls | wc -l` "</B></H2>"
echo "<CODE>"
echo "Authors: <B>" `grep 'Author = Yes' * | wc -l` "</B>"
echo " "
echo "Non-Authors: <B>" `grep 'Author = No' * | wc -l` "</B>"
echo " "

This script was slow and inefficient, especially as the number of registrations grew, but its great virtue was that it worked. Had attendance passed a hundred or so, the script would have performed badly and unreliably. However, since the conference venue could not hold more than about a hundred people, we knew registration would be capped anyway and did not expect trouble from the script. Although it was, on the whole, an inept way to attack the problem, it did what it was written to do and kept its users satisfied. Such practical limits are typical of quick code, and they are rarely documented. In quick code, by and large, almost nothing is documented; what documentation does exist is usually out of date and inaccurate.

The Wiki-Web code at www.c2.com also began as a CGI experiment by Ward Cunningham and likewise outgrew expectations. The name "wiki" is one of Cunningham's personal jokes. He borrowed the word from Hawaiian after noticing it on an airport shuttle bus while on vacation in Hawaii; it means "quick." Ward has since used the word for any of his "quick and dirty" projects. The Wiki Web is unusual in that any visitor may change anything that anyone has written before. At first glance this sounds like an invitation to vandalism, but in practice it has worked out rather well. In light of the system's success, its author has made further attempts to polish the project, but the quick and dirty Perl code remains the foundation of the whole system.

One might think that both systems are teetering on the edge, about to turn from little balls of mud into big ones. The registration system's C code has migrated from its original NCSA HTTPD server and still contains zombie code. The KEEPING IT WORKING strategy is the first thing that comes to mind whenever a decision about extending or improving either system must be made. Both systems would be decent candidates for RECONSTRUCTION, given the resources, interest, and audience. In the meantime, these systems, which still perform the tasks they were built for quite satisfactorily, remain as they are. Keeping them running takes far less effort and energy than rewriting them would. They continue to evolve PIECEMEAL, a little at a time.

You can try to contain the architectural decay caused by "quick and dirty" code by isolating it from the rest of the system, confining it to its own objects, packages, or modules. As long as such code is quarantined, its ability to compromise the integrity of the healthy parts of the system is minimized. This approach is discussed in the SWEEPING IT UNDER THE RUG pattern. Once it becomes clear that a supposedly disposable artifact is going to be around for a while, you can turn your attention to improving its structure, either through an iterative process of PIECEMEAL GROWTH or through a fresh design effort, as discussed in the RECONSTRUCTION pattern.
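
To make the quarantine idea concrete, here is a minimal Python sketch of what hiding quick-and-dirty code behind one narrow interface might look like. It is not taken from the systems described above; all names (RegistrationReport, _quick_and_dirty_count, and so on) are hypothetical.

import os

# The "messy" part: re-reads every file on every call, much as the
# shell script above re-ran grep over all registrations each time.
def _quick_and_dirty_count(directory: str, marker: str) -> int:
    count = 0
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        with open(path, errors="ignore") as f:
            if marker in f.read():
                count += 1
    return count

class RegistrationReport:
    """Clean facade: the rest of the system talks only to this class.
    If the messy code is ever rewritten (RECONSTRUCTION), only this
    module has to change."""

    def __init__(self, directory: str) -> None:
        self._directory = directory

    def total(self) -> int:
        return len(os.listdir(self._directory))

    def authors(self) -> int:
        return _quick_and_dirty_count(self._directory, "Author = Yes")

    def non_authors(self) -> int:
        return _quick_and_dirty_count(self._directory, "Author = No")

The scanning logic keeps all of its quirks and inefficiency, but callers depend only on the small facade, so a later rewrite would need to touch only this one module.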

PIECEMEAL GROWTH


Also known as:
NATURAL GROWTH
ITERATIVE-INCREMENTAL DEVELOPMENT

The Russian space station Mir was designed so that its configuration could be changed and new modules added over time. The base module was launched in 1986; the Kvant and Kvant-2 modules joined the complex in 1987 and 1989, respectively. The Kristall module was added in 1990. In 1995 came the Spektr module and a docking module (a module no one had even imagined back in 1986). Finally, the last module, Priroda, was launched in 1996. Being able to shuffle modules in this way allowed the complex to be reconfigured several times as it grew.

Urban planning cannot boast an unbroken record of success. The capital of the United States, Washington, was built according to a master plan by the French architect Pierre L'Enfant. The capitals of Brazil (Brasília) and Nigeria (Abuja) also began as cities on paper. Other cities, such as Houston, grew without any overarching plan. Each approach has its drawbacks. The radial street system of L'Enfant's plan, for example, broke down once the streets spread farther from the center. The absence of any plan, on the other hand, produced a patchwork quilt of residential, commercial, and industrial zones laid out almost at random.

Most cities are more like Houston than Abuja. A city may begin as a settlement, a trading post, a dock, or a railway stop. Perhaps these places offered gold or timber, access to transportation, or simply land that belonged to no one. In time, some settlements reached a critical mass of people, and a positive-feedback cycle set in. The town's success attracted craftsmen, merchants, doctors, and priests. The growing population could support infrastructure, civic institutions, and police protection, which in turn attracted still more people. With few exceptions (Salt Lake City comes to mind), the founders of such settlements never imagined they were laying the foundations of a great city. Their ambitions were modest and immediate.

Over the past few years it has become fashionable to criticize the "traditional" waterfall model of software development. The reader may feel that such attacks amount to beating a dead horse. Yet if the horse is dead, it is remarkably lively for a dead animal. Although many believe this approach discredited itself long ago, it spawned a legacy of processes and methodologies that survive to this day under various guises.

Before the waterfall model appeared, programming's pioneers used a simple, casual, relatively unorganized "code-and-fix" approach to software development. Given how modest the problems of that era were, this approach often worked well enough. However, the result of this lack of organization was, almost invariably, a big ball of mud.

The waterfall model arose in response to this swamp. While "code-and-fix" was adequate for small projects, it could not cope with larger ones. As software grew more complex, it was no longer enough to put a group of programmers in a room and tell them to write code. Large projects demanded careful planning and coordination of the whole team. Why, people asked, can't software be developed the way cars and bridges are, with a thorough analysis of the problem and a detailed up-front design? Indeed, studies of software development costs showed that fixing a problem during maintenance was almost always far more expensive than fixing it during design. Surely it was better to mobilize all resources and expertise up front to avoid maintenance costs later. It is undoubtedly wiser to lay all the plumbing before putting up the walls than to knock holes in them afterward. Measure twice, cut once.

One reason the waterfall model was able to flourish a generation ago is that computing and business requirements changed at a relatively slow pace. Hardware was extremely expensive, often dwarfing the salaries of the programmers hired to tend it. User interfaces were primitive by today's standards: you could have any user interface you wanted, as long as it was an alphanumeric "green screen." Another reason for the waterfall model's popularity was that it resembled practice in more mature engineering and manufacturing disciplines, which made it a comfortable fit.

Today's designers face an onslaught of ever-changing requirements. This is partly due to the rapid pace of technological change itself, and partly to rapid shifts in the business climate (some of them driven by technology). Customers are accustomed to more sophisticated software and want more choice and flexibility. Products that were once built from scratch by lone programmers must now be integrated with third-party code and applications. User interfaces have grown complex, both inside and out; indeed, an entire tier of the system is sometimes devoted to the user interface. Change threatens to outpace our ability to keep up with it.

Master plans are often rigid, misguided, and out of date. Users' needs change with time.

Change: The fundamental problem with top-down design is that real-world requirements are inevitably moving targets. You cannot hope to solve the problem once and for all, because by the time you finish, the problem will already have changed. You cannot simply do what the customers ask, because they often do not know what they want themselves. You cannot simply plan; you have to plan to be able to adapt. If you cannot fully anticipate what is going to happen, you must be prepared to be flexible and respond quickly.

Aesthetics: The goal of up-front design is to recognize and identify the significant architectural elements of a system in advance. In this view, a high-quality design elegantly and completely specifies the system's structure before a single line of code is written. Discrepancies between these plans and reality are treated as aberrations, and as errors on the designer's part: a better design would have anticipated such oversights. In the face of volatile requirements, hoping for such flawless design is as vain as hoping to sink the ball in the hole on the first stroke every time.

To avoid such embarrassment, a designer may try to hedge by proposing more complex, more general solutions to certain problems, knowing that others will bear the burden of actually building those artifacts. When these speculations turn out to be right, they are a source of power and satisfaction; this is the beauty (Venustas) that Vitruvius spoke of. Sometimes, however, the anticipated circumstances never arise, and the designer and developers have wasted effort solving a problem nobody ever had. In other cases, not only does the anticipated problem fail to materialize, but the solution itself must evolve in a different direction. In such cases speculative complexity becomes an unnecessary obstacle to later adaptation. Paradoxically, the pursuit of elegance can become an unintended source of complexity and clutter.

In its ugliest form, the desire to anticipate and head off change can lead to "analysis paralysis," as the thickening web of imagined contingencies grows to the point where the design space seems hopelessly constrained.

Therefore, pay constant attention to the forces that encourage change and growth. Let opportunities for growth be exploited locally, as they arise. Refactor unrelentingly.
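
As a small, hedged illustration of what "refactor unrelentingly" can mean at the local level, the Python sketch below (with invented names and an invented discount rule) shows one typical piecemeal improvement: a duplicated piece of logic is pulled into a single named function without changing behavior.

# Before: the same discount rule is copy-pasted into two functions.
def invoice_total_before(items):
    total = 0.0
    for price, quantity in items:
        amount = price * quantity
        if amount > 100:          # bulk discount, first copy
            amount *= 0.95
        total += amount
    return total

def quote_total_before(items):
    total = 0.0
    for price, quantity in items:
        amount = price * quantity
        if amount > 100:          # the same rule, second copy
            amount *= 0.95
        total += amount
    return total

# After: the rule lives in exactly one place; both callers shrink.
def line_amount(price, quantity):
    amount = price * quantity
    return amount * 0.95 if amount > 100 else amount

def invoice_total(items):
    return sum(line_amount(p, q) for p, q in items)

def quote_total(items):
    return sum(line_amount(p, q) for p, q in items)

Each such change is small enough to make safely while the system stays alive; accumulated over time, changes of this kind are what keep piecemeal growth from sliding into a big ball of mud.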

Good software attracts a wider audience, which in turn may place new demands on the program. These new requirements can often be met, but at the cost of solutions that cut against the grain of the original architecture. [Foote, 1988] called this architectural erosion "mid-life generality loss."

When designers must choose between building something elegant from the ground up and undermining the architecture of an existing system to solve a problem quickly, the architecture usually loses. This is a natural phase in a system's evolution [Foote and Opdyke, 1995]. It might be called the "messy kitchen" phase, during which parts of the system lie scattered across the counter, waiting for a cleanup that is always just about to happen. The danger is that the cleanup never comes. With real kitchens in commercial establishments, the health inspector will eventually step in. With software, alas, there is rarely any agency that intervenes to curb such neglect. Uncontrolled growth can ultimately become a malignant force, and the result of that neglect is a big ball of mud.

In his book How Buildings Learn, Stewart Brand [Brand, 1994] observed that what he calls "high road" architecture often produces expensive buildings that are hard to change, while vernacular, "low road" buildings such as bungalows and warehouses are far more adaptable. Brand also noted that function gradually reshapes form, and that buildings shaped by the people who use them adapt to change more readily. Similarly, with software, one is reluctant to defile another programmer's sanctum.

In The Oregon Experiment [Brand, 1994] [Alexander, 1988], Christopher Alexander wrote:

Large-lump development is based on the idea of replacement. Piecemeal growth is based on the idea of repair. Large-lump development is based on the fallacy that it is possible to build perfect buildings. Piecemeal growth is based on the healthier and more realistic view that mistakes are inevitable. Unless money is available for repairing these mistakes, every building, once built, is condemned to be, to some extent, unworkable... Piecemeal growth is based on the assumption that adaptation between buildings and their users is necessarily a slow and continuous business which cannot, under any circumstances, be achieved in a single leap.

Alexander also noted that our mortgage and capital-expenditure policies make large sums of money available up front, but provide almost nothing for maintenance, improvement, and evolution [Brand, 1994] [Alexander, 1988]. In the software world we commit our most talented, experienced people at the very beginning of the life cycle. Later, maintenance is left to junior staff, and resources can be scarce. The so-called maintenance phase is the part of the life cycle in which the price of the fiction of master planning is actually paid. Maintenance programmers are asked to cope with the ever-widening gap between the design as it was laid down and a continuously changing world. If it is true that deep architectural insight emerges late in the life cycle, then this practice deserves to be reconsidered.

Brand observed that maintenance is learning, and he distinguishes three levels of learning in the context of systems. The first is habit: the system dutifully performs its functions within the parameters for which it was designed. The second level comes into play when the system must adapt to change; here the system usually has to be modified, and its capacity to absorb such modification determines its degree of adaptability. The third level is the most interesting: learning to learn. With buildings, an example is adding an additional floor. Once a system has had to undergo a major structural change, it adapts, and subsequent adaptations become less painful.

Piecemeal growth can be undertaken opportunistically, as circumstances allow, starting with the existing, living, breathing system and working outward a step at a time, so as not to undermine the system's viability. You strengthen the program as you use it. Sweeping advances on all fronts are avoided. Instead, change is broken down into small, manageable pieces.

What is striking about piecemeal growth is the role played by feedback. Herbert Simon [Simon, 1969] observed that few of the adaptive systems forged by evolution or shaped by man rely on prediction (our primary means of coping with the future). He noted that adaptive mechanisms such as homeostasis (self-regulation) and retrospective feedback are far more effective. Homeostasis shields a system from short-term fluctuations in its environment, while feedback mechanisms respond to long-term discrepancies between the system's actual and desired behavior and adjust it accordingly. Alexander [Alexander, 1964] wrote at length about the roles that homeostasis and feedback play in adaptation.

If you can adapt quickly to change, predicting change becomes far less important. Brand observes that hindsight is better than foresight [Brand, 1994]. This kind of rapid adaptation underlies one of the mantras of extreme programming [Beck, 2000]: you aren't gonna need it.

Proponents of extreme programming advise you to pretend you are not as smart as you think you are, and to wait until your clever idea is actually needed instead of rushing to act on it. In the cases where you turn out to be right, fine: you saw it coming and know what to do. In the cases where you turn out to be wrong, you have spared yourself the effort of solving a problem that never existed.

Extreme programming relies heavily on feedback to keep requirements in sync with the code, emphasizing short (three-week) iterations and extensive, ongoing consultation with users about design and development priorities throughout the development process. Extreme programmers do not engage in extensive up-front planning. Instead, they produce working code as quickly as possible and steer these prototypes toward what users want, guided by feedback.

Feedback also plays a role in how coding tasks are assigned. Developers who miss a deadline are given a different task in the next iteration, no matter how close they came to finishing the previous one.

In extreme programming, testing is an integral part of the development process. Ideally, tests are written before the code itself, and the code is tested continuously as it is developed. There is something of a "back to the future" quality to extreme programming: in many ways it resembles the old "code-and-fix" approach. What distinguishes extreme programming is the central role that feedback plays in driving the system's evolution. That evolution is further aided by modern object-oriented languages and powerful refactoring tools.
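
As a hedged illustration of the test-first rhythm described above (the function and its behavior are invented for this example, not taken from the article), a single step might look like this in Python:

import unittest

# Step 1: the test is written first.  It fails at this point,
# because normalize_name() does not exist yet.
class NormalizeNameTest(unittest.TestCase):
    def test_strips_whitespace_and_capitalizes(self):
        self.assertEqual(normalize_name("  ada LOVELACE "), "Ada Lovelace")

    def test_empty_input(self):
        self.assertEqual(normalize_name(""), "")

# Step 2: the simplest code that makes the tests pass.
def normalize_name(raw: str) -> str:
    return " ".join(word.capitalize() for word in raw.split())

if __name__ == "__main__":
    unittest.main()

The test stays in the suite and runs continuously, so any later change that breaks this behavior is caught at once; that is the feedback loop the paragraph above describes.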

Proponents of extreme programming portray it as placing minimal emphasis on planning and up-front design, relying instead on feedback and continuous integration. We believe that some degree of up-front planning and design is not only important, but inevitable. No one really goes into a project blind. Foundations must be laid, infrastructure decided upon, tools selected, and a general direction set. A focus on an overall architectural vision and strategy belongs in the early stages.

Left unchecked, change will undermine a system's viability; orderly change will strengthen it. Change can engender malignant sprawl, or healthy, proper growth.

Over the past decade, a broad consensus has formed in the object-oriented community that objects emerge from an iterative, incremental, evolutionary process. See, for example, [Booch, 1994]. The SOFTWARE TECTONICS pattern [Foote and Yoder, 1996] examines how systems can cope with change incrementally.

The biggest risk associated with piecemeal growth is that it will gradually erode the overall structure of the system and inexorably turn it into a big ball of mud. A SWEEPING IT UNDER THE RUG strategy goes hand in hand with piecemeal growth: both patterns emphasize acute, local concerns at the expense of chronic, architectural ones.

To counter these forces, ongoing CONSOLIDATION and refactoring are essential. It is through such processes that local and global concerns are reconciled over time. This life-cycle perspective echoes the fractal model [Foote and Opdyke, 1995]. To quote Alexander [Brand, 1994] [Alexander, 1988]:

An organic process of growth and repair must create a gradual sequence of changes, and these changes must be distributed evenly across every level of scale. [In developing a campus] there must be as much attention to the repair of details - rooms, wings of buildings, windows, paths - as to the creation of entirely new buildings. Only then can the environment be balanced both as a whole and in its parts, at every moment of its history.

Other parts of the article

Part 1

Source: https://habr.com/ru/post/352486/

