
Ten deadly sins in estimating software development effort

Introduction


In this post I want to offer you, dear readers, a retelling of a webinar by a person whose name needs no introduction. To fit an hour-long webinar into a short post I had to condense the author's commentary considerably, which is why I deliberately do not mark the post as a "translation". This time Steve McConnell shares his experience in the form of short theses describing the worst mistakes in estimating software development effort. In 1998, readers of Software Development magazine named Steve one of the most influential people in the software industry, alongside Bill Gates and Linus Torvalds. Steve is the author of "Software Estimation: Demystifying the Black Art", one of the most popular books in the field. Admittedly, the webinar took place quite a while ago (June 2009), but the information presented there has not become outdated at all.

The post is structured as follows. The headings are translated fairly closely from the slides Steve showed; beyond that I try to convey only the main ideas so as not to overload the text. If you think I have stated some idea incorrectly, you are welcome to correct me in the comments.


Ten Almost Deadly Sins in estimating software development effort


To "warm up", Steve first lists the "almost deadly" sins: not the very worst ones, but still very serious. He gives them with almost no commentary.

Ten Deadly Sins in estimating software development effort


1. Confuse project targets with estimates


A typical situation looks like this. Management asks for an effort estimate, adding in passing that the project is planned to be shown at some annual exhibition abroad. That is: estimate how long it will take... but it has to be ready by then. Here the estimate gets mixed up with a project target ("show it at the exhibition by a fixed date"). The way out is to reconcile targets and estimates iteratively: for example, to hit the date you can cut the amount of functionality to be shown, so that everything is done on time.

2. Say "Yes" when you actually mean "No"


It often happens that the people at the table where estimates and deadlines are discussed split into two camps. On one side sit the developers: often introverted, young, and rarely gifted persuaders. On the other side sit extroverted and "experienced" sales managers, who are not merely persuasive but specially trained to persuade. In such a situation it is obvious that, regardless of the quality of the estimates, the one who "wins" is the one who knows how to convince, not the one whose estimates are more accurate.

3. Make promises at the early stage of the Cone of Uncertainty


Before you is the so-called Cone of Uncertainty.
[Figure: the Cone of Uncertainty, showing estimation error narrowing over project time]
It is a graph whose horizontal axis shows time and whose vertical axis shows the error bound on an effort estimate. As the graph shows, over time, as more and more data about the project becomes available (what exactly has to be built and under what conditions), the spread of the error becomes smaller and smaller.
The essence of the problem is that you must not make commitments at the moment in time (the leftmost part of the cone) when the error is still too large. Steve puts the "confidence" threshold at roughly 1.5x, i.e. the point at which the probable error is a factor of 1.5 both upward and downward. Making promises before that point means exposing yourself to too much risk.
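As an illustrative sketch (not from the webinar itself), here is how the cone can be turned into an estimate range. The multipliers are the values commonly quoted from McConnell's Cone of Uncertainty; the function and phase names are my own:

```python
# Commonly quoted Cone of Uncertainty multipliers (low, high) per phase.
# Phase names and the helper below are illustrative, not from the webinar.
CONE = {
    "initial concept": (0.25, 4.0),
    "approved product definition": (0.50, 2.0),
    "requirements complete": (0.67, 1.50),
    "ui design complete": (0.80, 1.25),
    "detailed design complete": (0.90, 1.10),
}

def estimate_range(nominal_effort, phase):
    """Return the (low, high) effort range for a nominal estimate,
    given how far along the project is."""
    low_mult, high_mult = CONE[phase]
    return nominal_effort * low_mult, nominal_effort * high_mult

# A 100 person-day nominal estimate at the very start of a project:
print(estimate_range(100, "initial concept"))  # (25.0, 400.0)
```

The point of the sketch: at the "initial concept" stage the honest answer is a 16x-wide range, which is exactly why a single-number commitment made there is so risky.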

4. Assume that underestimation has a neutral impact on project results.


The author emphasizes this idea repeatedly in his book (see the Introduction). Take a look at the chart below.
[Figure: cost of estimation error versus degree of under- or overestimation]
The left side of the graph is the underestimation zone, the right side the overestimation zone; the vertical axis shows the cost of the estimation error. The graph makes it clear that the cost of overestimation grows linearly (in line with Parkinson's Law), while the cost of underestimation grows like an avalanche as the shortfall in the estimate increases. With underestimation, the extra effort required is also far harder to predict than with overestimation.
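This asymmetry can be captured in a toy model (my own, not from the book): the penalty for overestimating is taken as linear in the slack, while the penalty for underestimating is modeled, arbitrarily, as growing quadratically in the shortfall:

```python
# Toy model of the asymmetric cost of estimation error.
# The shapes (linear vs. quadratic) are illustrative assumptions only.
def error_cost(actual_effort, estimate):
    """Cost of an estimation error, in the same units as effort."""
    error = estimate - actual_effort
    if error >= 0:
        # Overestimation: wasted slack, roughly proportional to the excess
        # (Parkinson's Law: work expands to fill the time available).
        return error
    # Underestimation: replanning, shortcuts and rework compound,
    # modeled here as a quadratic term on top of the raw shortfall.
    shortfall = -error
    return shortfall + shortfall ** 2

print(error_cost(100, 120))  # 20  (20% over: linear penalty)
print(error_cost(100, 80))   # 420 (20% under: avalanche)
```

The exact curve does not matter; the point is that equal-sized errors on the two sides of zero have very different costs, so "aim low, we can always catch up" is not a neutral strategy.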

5. Focus on estimation techniques when what you really need is the ART of estimating software development effort


Effort estimation is in essence not just a set of specific techniques but also the practice of applying them: a collection of approaches that have proven themselves. The art lies in using the right technique at the right time and in the right place.

6. Make estimates in the "Zone of Incredibility"


First we need to clarify what is meant by the zone of incredibility. For some project, imagine the following dialogue (note: heavily abridged):
- Could 12 developers finish the project in 10 months?
- Yes, probably, we answer.
- And 15 developers in 8 months?
- Well, yes, we answer; more likely yes than no.
- And 30 in 4?
- Unlikely: it becomes obvious that 30 people most likely will not manage to work together effectively in so short a time.
- 60 in 2 months?
- Now that is simply ridiculous! you answer...
- And 120 developers in 1 month?
- That is not funny at all. Sheer mockery...
The dialogue shows that "compressing" the schedule for a given amount of effort cannot go on indefinitely; there is a limit. The point of this item is not to make estimates beyond that limit: such estimates cannot be met. The compression limit, according to Steve, is somewhere around 25% off the nominal schedule.
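The 25% limit from the dialogue above can be sketched as a simple sanity check. The function name and the framing are my own; only the 25% figure comes from Steve:

```python
# Hedged sketch: flag proposed schedules that fall into the
# "zone of incredibility". Only the 25% limit is from the webinar.
MAX_COMPRESSION = 0.25

def is_credible(nominal_months, proposed_months):
    """A proposed schedule shorter than 75% of the nominal one
    cannot realistically be met, no matter how many people are added."""
    return proposed_months >= nominal_months * (1 - MAX_COMPRESSION)

print(is_credible(10, 8))  # True:  8 months >= 7.5
print(is_credible(10, 4))  # False: deep inside the incredibility zone
```

Note that the check deliberately ignores headcount: past the compression limit, adding developers no longer buys schedule, which is exactly what the dialogue illustrates.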

7. Overestimate the benefits of new methods and technologies.


Adopting a new technology involves a learning curve: the first project to use it pays the cost of mastering it. Steve's personal recommendation: assume that using a new technology for the first time will reduce development productivity. And once again the thesis: "no silver bullet".

8. Use only one method for estimating effort


Here the author warns against relying on a single estimation technique. When several different techniques are used and their results diverge, it is important to understand why the differences arose.
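A minimal sketch of that advice (my own construction, not from the webinar): collect estimates from several methods and flag large disagreement as something to investigate rather than average away:

```python
# Illustrative helper: compare estimates from several methods.
# The 25% tolerance is an arbitrary assumption for the example.
def compare_estimates(estimates, tolerance=0.25):
    """estimates: dict mapping method name -> effort estimate.
    Returns (relative_spread, needs_review)."""
    lo, hi = min(estimates.values()), max(estimates.values())
    spread = (hi - lo) / lo
    # A wide spread means the methods disagree; before trusting any
    # number, find out *why* they disagree.
    return spread, spread > tolerance

spread, flag = compare_estimates({"bottom-up": 100, "by analogy": 130})
print(spread, flag)  # 0.3 True -> investigate the disagreement
```

The design point is that the function reports the disagreement instead of silently blending the numbers: a blended figure would hide precisely the signal the author says to look at.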

9. Neglect specialized estimation software


Modeling with computer tools can improve the adequacy of estimates. Naturally, using specialized tools does not by itself guarantee reliable and adequate estimates, but used skillfully they can significantly improve accuracy. The author also gives a link to his company's website, where free tools for computer-assisted software estimation are available. One of the main advantages of specialized software is that its results look more convincing to the "consumers" of the estimates.

10. Hasty estimates


Last but not least comes a warning against hasty, unfounded estimates. It is important always to take at least a short timeout for even a small preliminary analysis before giving a number.

Conclusion


I will not try to convince you that everything Steve says is true; you are free to rely on your own knowledge and experience. Steve is a man of great knowledge and experience, but he is human, and humans make mistakes. If you think he is wrong somewhere, please write about it in the comments; it will be very interesting to discuss.

Source: https://habr.com/ru/post/75903/
