In the previous article, I raised the question of how convenient and necessary Gantt charts really are in software project planning. In many ways the chart complicates matters, and in inexperienced hands it can even harm the planning of a project. What method lets you estimate the project timeline no worse, and often better, while significantly reducing the cost of drawing up the project plan and keeping it up to date?
The methodology described here is borrowed from the Agile family and does not involve drawing up a plan in the usual chart-based sense. It rests on two basic concepts: the uniformity of iterations (the "yesterday's weather" principle: if the weather has settled, tomorrow it will be much like today) and the velocity of the team.
Technique

The essence of the technique is this: if at the end of each iteration your team has a working product (which is itself the measure of progress), then after several iterations you obtain an average, and reasonably accurate, estimate of your team's velocity. Whether velocity is measured in implemented functions (user stories) per iteration, per day, or in hours spent over a period is a secondary question.
In our system, team velocity is measured as the amount of work the team completes per day. It is primarily a team-level characteristic, not an individual one. This indicator absorbs the development phases, dependencies between tasks, resource loading, weekends, unplanned absences of participants and other risks; in short, everything that happened to the team during the iteration. With high probability, the next iteration will proceed at roughly the same pace.
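A minimal sketch of how such a velocity could be tracked, assuming work is sized in abstract points and each iteration is recorded as completed points plus working days. The names and data structure are illustrative, not part of the methodology itself:

```python
from dataclasses import dataclass

@dataclass
class Iteration:
    completed_points: float  # work actually finished and accepted in the iteration
    working_days: int        # working days the team spent on it

def team_velocity(history: list[Iteration]) -> float:
    """Average completed work per day across past iterations (a team-level metric)."""
    total_points = sum(it.completed_points for it in history)
    total_days = sum(it.working_days for it in history)
    return total_points / total_days

# Three past iterations ("yesterday's weather") give the expected pace for the next one.
history = [Iteration(21, 10), Iteration(18, 10), Iteration(24, 10)]
print(round(team_velocity(history), 2))  # 2.1 points per day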
In fact, you do not need a schedule at all, that is, a task tree with assigned participants. You need a set of functions to implement, an agreement within the team on the process for implementing each function (using TDD, automated testing, and so on), and data on your team's performance. That is all. Now you can determine, with good accuracy, when these functions will be done.
One more thing remains: different functions require different effort to implement. You could divide them into classes: simple, ordinary and complex. We went another way and estimate the complexity of each individual wish (feature or user story). This introduces an underestimation error, since it is hard for a team to assess a wish accurately in advance without fully understanding the subtleties of its implementation. On top of that, the team consists of people, and a wish may not be done correctly the first time, which the test results will reveal. Therefore another indicator is introduced: the error rate.
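For illustration only, here is one plausible way these two correction factors could be derived from past iterations. The exact definitions (actual-to-estimated ratio for underestimation, rework ratio for the error rate) are assumptions made for this sketch, not prescribed above:

```python
def underestimation_error(estimated: float, actual: float) -> float:
    """How much the team tends to underestimate: actual effort / estimated effort."""
    return actual / estimated

def error_rate(done_first_time: int, reworked: int) -> float:
    """Extra work caused by defects found in testing: total attempts / first-time passes."""
    return (done_first_time + reworked) / done_first_time

print(underestimation_error(estimated=40, actual=52))  # 1.3: about 30% underestimation
print(error_rate(done_first_time=20, reworked=4))      # 1.2: about 20% rework
```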
Result

Thus, the time your team needs to deliver a given set of wishes (features or user stories) is calculated with a simple formula: total estimate of the work * underestimation error * error rate / team velocity.
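A worked example of the formula with made-up numbers, reusing the illustrative factors from the sketches above:

```python
def estimated_days(total_estimate: float, underestimation_error: float,
                   error_rate: float, velocity: float) -> float:
    """Total estimate of the work * underestimation error * error rate / team velocity."""
    return total_estimate * underestimation_error * error_rate / velocity

# 60 points of wishes, ~30% habitual underestimation, ~20% rework, 2.1 points per day.
print(round(estimated_days(total_estimate=60, underestimation_error=1.3,
                           error_rate=1.2, velocity=2.1)))  # about 45 working days
```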
A significant advantage of this methodology is the transparency of the estimate: you see what needs to change in order to reach the goal. You abstract away all the complexities and subtleties of the development process; you do not predict effort, you observe it and extrapolate it to future iterations. Another advantage is independence from how the complexity of wishes is assessed. The team lead can estimate complexity, in which case he thinks only about implementation and the costs of all other phases are absorbed by the underestimation error. Or the team can estimate complexity, in which case the underestimation error tends toward 1 over time, and the team's responsibility for its own estimates tends toward its maximum.
Just think: there is no longer any need to break down tasks for the whole team, assign specific performers in advance, arrange dependencies, level resources, and, most importantly, rewrite the project plan whenever the schedule, deadline or cost changes. You learn the cost and timing of a scope, or of a change to it, instantly.
Restrictions

A few words about the drawbacks. Of course they exist, but they are small enough to be neglected entirely, or your process can be adjusted slightly to follow Agile principles.
The methodology assumes that the team does not change, and neither do the capabilities of its members. This often holds across several iterations, and perhaps releases. If you want to finish a project with your team, it should be reasonably cohesive and stable. If the team breaks up, or its members change, the accumulated indicators have to be calculated anew. The good news is that reasonably accurate indicators can be collected after two or three iterations.
It is assumed that the structure of the work is roughly the same in every iteration: an analyst works for some specific time, then each function goes through the same kind of development, testing and documentation. At the end of each iteration you should have a stable product; it must not be that the first iteration is all analysis, the second all development and the third all testing.
It is assumed that the team is effective: it can independently detect and resolve dependencies between tasks, it does not need a manager to push it along, and it understands the process of building a software product.
If your team has been building a web application that automates a store's operations, and is now asked to develop a trading application in C++, the team's indicators will most likely have to be rebuilt. Then again, such an undertaking is risky in any case :)