Technical debt management practices in a single team
About a year ago, our team moved from a phase of rapid feature development to a calmer pace with a focus on quality. By that point, our products had accumulated a significant number of suboptimal solutions, ugly code, and outdated libraries. Something had to be done about all of it.
Since then, we have managed to build a process that makes the fight against technical debt predictable, painless, and inevitable.
In this article, I'll describe what we got as a result and how we achieved it.
My working definition of technical debt is the amount of work that needs to be done for the project to match the team's idea of beautiful code. Note that technical debt can arise not only from a liberal use of quick hacks in the design, but also from a shift in the idea of what is beautiful. For example, generally accepted industry practices have changed. Or the developers fell out of love with OOP and fell in love with functional programming. Or a once-fashionable framework is no longer what it used to be, and it has become hard to find specialists willing to write in it.
However, the main cause of technical debt is entropy in all its diversity: disabled unit tests, outdated comments that have lost touch with the code, unfortunate architectural decisions, implementations of features that nobody uses anymore, groundwork laid for a future that never came, and much, much more.
From this it follows that the emergence of technical debt is inevitable in any long-lived project.
Why is technical debt bad? It increases the cost of further development in a number of ways. These losses are sometimes called "interest on technical debt".
There are situations when paying this interest is cheaper than eliminating the technical debt.
One example is the desire to hit tomorrow's deadline at any cost, at the expense of development speed the day after tomorrow.
Sometimes a hastily hacked-together design is the objectively correct choice. In my practice, this was most pronounced when building demos for trade shows. The date of the event is fixed; miss an important exhibition, and the next attempt comes in a year. At the same time, a demo lets you show the product "by hand", carefully steering around all the bugs. As an engineer, I find such projects unpleasant, but the hacks in them are justified.
When you build a product that will live a long time, everything is different. Buying time with dubious technical decisions is expensive: the total cost has several components, the later of which are very easy to underestimate, and the most expensive of which you risk not thinking about at all.
When you meet a developer with a twitching eye at a technical conference, it may turn out that it was the nightmare of endlessly debugging a kludged-together design that brought him to that state.
The worse the technical debt situation, the stronger the temptation to throw away the entire codebase and rewrite everything from scratch. This is one of the classic mistakes that can kill a whole project.
The topic is covered so well in Joel Spolsky's well-known article that I see no reason to add my own arguments:
Things You Should Never Do Part I
It is not always easy to justify eliminating technical debt in terms of profit. The development team may be tempted to sidestep the awkward conversation and start a major refactoring without getting it approved: doing the work off-hours, in pauses between tasks, or "on the tail" of other tasks by inflating estimates.
What's bad about that? Oh, quite a few things.
After the team uses this recipe for a while, it is management whose eye begins to twitch.
There are a number of recurring patterns in how tasks live in projects.
Maxim Dorofeev talks about these things very well in his "Empty Inbox" technique.
To keep technical debt from accumulating, work on eliminating it should be organized with these principles in mind.
All tasks except the smallest go into the backlog. This gives them a chance to be done not only in spare time, but as part of planned work. Besides, such tasks are harder to lose track of entirely: the backlog is reviewed more often and more closely than TODOs in the code, sticky notes on monitors, abandoned wikis, tea-stained napkins with diagrams, and other such sources of information.
As long as such changes remain cheap in terms of development effort and required testing, we can gradually improve our code base without interrupting business tasks or introducing additional risk.
If a tree fell in the forest but no one heard it, did it make a sound? If there is bad code in the project but that module will never need to change, is there technical debt?
I think that a programmer finding an old module unpleasant to look at is not, in itself, a big problem. Much worse is what happens when new functionality needs to be added to that module, or old functionality extended. Compared to changing well-written code, such tasks overrun their original estimates more often (and by more) and contain more bugs. Sometimes far more. To protect ourselves from problems of this kind, we try to schedule refactorings so that they happen before new functionality is written in the same place.
If both the change in functionality and the refactoring look small, they can be done together. The empirically chosen task size for which this approach works best is three days of one developer's work or less. When it is clear there is more work than that, the task is split into a refactoring that preserves current behavior and the implementation of the new functionality.
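As an illustration of that split (my own sketch; the function and its names are hypothetical, not taken from our product), "refactor first with behavior preserved, then extend" might look like this: step one restructures a hard-to-extend function without changing its output, so existing tests keep passing; step two adds the new feature on top of the cleaner shape as a small, low-risk change.

```python
# Hypothetical example: a report formatter that must gain a new output format.

# BEFORE: one tangled function, hard to extend with a second format.
def format_report_v0(rows):
    lines = ["name,total"]
    for name, amounts in rows:
        lines.append(f"{name},{sum(amounts)}")
    return "\n".join(lines)

# STEP 1 -- refactoring with behavior preserved: separate computing
# the summary from rendering it. Output is identical to v0.
def summarize(rows):
    return [(name, sum(amounts)) for name, amounts in rows]

def render_csv(summary):
    return "\n".join(["name,total"] + [f"{n},{t}" for n, t in summary])

def format_report(rows, fmt="csv"):
    summary = summarize(rows)
    # STEP 2 -- the new functionality, now a small, isolated addition:
    if fmt == "markdown":
        return "\n".join(["| name | total |", "| --- | --- |"]
                         + [f"| {n} | {t} |" for n, t in summary])
    return render_csv(summary)

rows = [("tea", [2, 3]), ("coffee", [5])]
assert format_report(rows) == format_report_v0(rows)  # behavior preserved
```

Done in this order, the risky part (restructuring) is verified against the old behavior before any new behavior is introduced.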
Thus, the order in which technical debt gets eliminated is determined by the order of business tasks in the backlog.
The "rely on business priorities" principle has another application. One of the typical problems plaguing developers who try to write well is the difficulty of allocating time for performance optimization, maintainability improvements, and other things that do not directly appear in the work plan. For such improvements, you can almost always find a business need. Who doesn't want the system to work faster, run more stably, and cost less to maintain? All these benefits can be estimated, and on the basis of that estimate, improvement tasks can be put into the backlog along with all the others.
So if you want to optimize performance but have to fix yet another boring bug instead, it may simply be that you have not managed to explain the benefits of the optimization in a language the product owner understands.
Almost any code, except what was written recently, lags a little behind current ideas of beauty in style and architecture. When code has to be changed as part of some task, it is considered good practice to make all the safe improvements possible in the affected area. Such improvements are expected to make the code better without significantly increasing the cost of development or testing.
Thanks to this principle, code quality gradually improves in the background, even in places where no dedicated refactoring was planned. Moreover, the more often we work on some part of the system, the better that part's code becomes. A pleasant contrast to projects where developers spend the most time in the parts with the worst code.
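As a sketch of what such "safe improvements" might look like (my own illustration; the article does not enumerate them, and the code here is hypothetical), typical opportunistic cleanups are purely mechanical: clearer names, a confusing condition extracted into a well-named helper, type hints added. Behavior stays exactly the same, so the testing burden barely grows.

```python
# Hypothetical "clean as you go" pass over code touched during a task.
#
# BEFORE (as it might have looked):
#     def calc(d, f):
#         r = []
#         for x in d:
#             if x["a"] > 0 and not f or f and x["a"] > 10:
#                 r.append(x["a"])
#         return sum(r)
#
# AFTER: descriptive names, the hard-to-read boolean expression
# extracted into a helper, and type hints added. Same behavior:
# non-strict mode sums amounts > 0, strict mode sums amounts > 10.

def _passes_threshold(amount: int, strict: bool) -> bool:
    threshold = 10 if strict else 0
    return amount > threshold

def total_positive_amounts(entries: list[dict], strict: bool = False) -> int:
    return sum(e["a"] for e in entries if _passes_threshold(e["a"], strict))
```

Because each change is behavior-preserving and local to code already being touched, it rides along with the business task at almost no extra cost.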
One of the basic principles of Scrum says that at the end of each sprint the system must be in a stable state:
"The increment must be done by the end of the sprint, which implies that it meets the Scrum team's definition of done and is in usable condition. It must be ready to use regardless of whether the Product Owner decides to release it or hold it back."
Any work on the elimination of technical debt is done in compliance with this principle.
Large transformations are always decomposed so that each individual stage can be completed within one sprint. For example, we changed our build system in two stages (Angular 1.x: crouching webpack, hidden grunt).
We work with the VCS on principles close to classic gitflow. Development happens in feature branches, and so does testing. As a rule, such a branch lives no longer than one two-week sprint. A branch that lives longer almost always brings extra costs.
Our experience clearly confirms this pattern. Every time we failed to finish a big refactoring within two weeks, it came with pain and suffering. And the longer the task dragged on and the longer the branch stayed open, the slower the work went and the more problems arose.
The need to always stay a few steps away from a stable release creates one of the most difficult and interesting engineering tasks: finding the optimal decomposition of strategic plans. Large-scale changes must be broken into separate, independent steps, and it is desirable to start reaping the benefits as early as possible. The better this breakdown is done, the higher the chance of finishing the job.
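One common mechanism for keeping a large migration releasable at every step is a feature flag: land the new implementation early but dormant, flip the flag when it is trusted, and delete the old path in a later, equally small step. This is my own sketch of the idea (the article does not prescribe a specific mechanism, and all names here are hypothetical):

```python
# Hypothetical sketch: incremental migration behind a feature flag.
# Every stage ships with the flag off, so each sprint still ends in a
# stable, releasable state even though the migration is unfinished.

FLAGS = {"use_new_pricing": False}  # would normally come from config

def price_legacy(qty: int, unit: float) -> float:
    # Old, battle-tested implementation; untouched while the flag is off.
    return round(qty * unit, 2)

def price_new(qty: int, unit: float) -> float:
    # New implementation, merged to the mainline early but dormant.
    subtotal = qty * unit
    discount = 0.1 * subtotal if qty >= 100 else 0.0
    return round(subtotal - discount, 2)

def price(qty: int, unit: float) -> float:
    # The dispatch point: flipping the flag is a one-line, reversible step.
    if FLAGS["use_new_pricing"]:
        return price_new(qty, unit)
    return price_legacy(qty, unit)
```

The trade-off is a period of duplicated logic, so a final cleanup step that removes the flag and the legacy path should be planned into the backlog, not left to chance.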
Once per release, we do a detailed review of the technical backlog.
When business stories appear on the horizon, we do a technical analysis and link to each business story all the technical stories that would help implement it.
We do a similar review when preparing for sprint planning.
When I took the lead role on the team, I asked every developer and QA engineer what improvements to the product they most wanted to make. Most of the wishes concerned technical improvements to the platform and refactoring. As further experience showed, this set of wishes covered all of the product's key technical problems. So this practice can be used to quickly build a technical backlog from scratch, or to get a general picture of the technical debt situation in a project that is new to you.
The current backlog of technical tasks emerges from the practices described above and requires no separate effort or analysis. In addition, new ideas for technical improvements to the product go into the backlog too, added by whichever team member came up with them. The main thing at this stage is not to lose the idea; refinement and prioritization happen later, during work planning.
Source: https://habr.com/ru/post/337308/