This post was inspired by the estimation of a large technology project in which I had the opportunity to participate. The estimation began disastrously: after weeks of meetings, working-group gatherings, and team-lead deliberation, the development team produced a timeline estimate with a spread of 14 months between the minimum and maximum project durations.
The project itself was a large feature for an already existing product; it was not an R&D project, where such a spread could plausibly be built into the project plan.
And while the finance department was already reaching for the machine gun, our project gang of four gathered for an urgent discussion of what to do with such a timeline: whether it was even possible to plan people's workload, account for risks, and deal with critical dependencies on other components. But perhaps the most pressing question was how valid such an estimate was, and whether we could help the development team estimate more accurately.
The first question was answered quickly enough: the development team had based its estimate on a large, detailed specification prepared by the product owner's team of product and project managers and a business analyst. A lot of time had gone into that specification, but, like all excessive waterfall documentation, it still failed to give a complete picture of the tasks and their details, which are what ultimately determine the development timeline.
We all knew that people are bad at estimating. We also knew that most of our development team's estimates on previous projects had missed reality by as much as six months.
We decided to try agile estimation techniques on a really big project in an environment where the cult of waterfall reigned, and see what happened. After all, nothing could be worse than a 14-month spread in the project timeline.
A quality estimate always takes time.
The idea that a developer can read a fifty-page document and produce a component-by-component estimate of a product has always seemed implausible to me. Even business analysts start yawning at most of the specifications they have to work with. Most are written in a dreary language that does nothing to make estimation easier. But the worst thing about them is that the original requirement, as formed in the product owner's head, has already been reshaped by the interpretation of the person who wrote the specification, and is then distorted again by the perception of the developer asked to estimate it.
A large and beautiful specification instills in project management the belief that everything will be fine. At the same time, it significantly reduces the chances that the project will be completed on time and in line with the product owner's expectations, because the people who have to do the real work - designers, developers, testers - are often not involved (or not involved enough) in shaping those requirements. If you already have a specification, set it aside and invest your time in designing the requirements together with the project team.
That is what we did. We set aside the tome of terms of reference that described how the product's components should work, and sat down with the team to design the requirements and estimate them. What did we try?
User story mapping
First of all, we abandoned feature-based estimates - attempts to estimate, say, a login screen or offline database search. We reformulated all requirements as user stories, defining acceptance criteria for them, thinking through test scenarios, and defining error-handling scenarios, so that an average story looked like this:
As a system administrator, I can edit the organization's work hours so that system users know the current company schedule.

Acceptance criteria
- I can specify the working days of the organization.
- I can specify the working hours of each working day.
- I can specify a break within each working day.

Tests
- Successful editing of working days
- Successful editing of the organization's working hours
- Unsuccessful editing of the organization's working hours
- Successful editing of breaks

Error handling
- Attempt to submit empty fields to the server
- No network when sending data to the server
- No response or server error
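When you have dozens of stories in this shape, it helps to keep them in a uniform structure. A minimal sketch of such a structure (the field names are my own, not taken from the project):

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """A user story with the attributes we attached to each one."""
    role: str                  # "As a <role>..."
    action: str                # "...I can <action>..."
    benefit: str               # "...so that <benefit>."
    acceptance: list[str] = field(default_factory=list)
    tests: list[str] = field(default_factory=list)
    error_handling: list[str] = field(default_factory=list)

    def headline(self) -> str:
        # Standard "As a..., I can..., so that..." formulation.
        return f"As a {self.role}, I can {self.action} so that {self.benefit}."

story = UserStory(
    role="system administrator",
    action="edit the organization's work hours",
    benefit="system users know the current company schedule",
    acceptance=[
        "I can specify the working days of the organization.",
        "I can specify the working hours of each working day.",
        "I can specify a break within each working day.",
    ],
    tests=["Successful editing of working days"],
    error_handling=["No network when sending data to the server"],
)

print(story.headline())
```

Keeping acceptance criteria, tests, and error handling as explicit fields makes it obvious when a story is missing one of them.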
We wrote more than 90 user stories, grouped them into 20 epics (higher-level stories), and organized everything into a huge map that began with the user's first steps in the system and ended with their exit from it.
We were tired. But as a result we knew exactly what to do: we had an estimate from the developers, who had essentially designed the system themselves and knew thoroughly how it would work (rather than knowing it secondhand); an estimate from the designers, who during the discussions had managed to produce high-quality prototypes of the entire UI; and an estimate from the testers, who, working closely with development and design, were able to point out the risks and bottlenecks where the initial estimate would inevitably slip due to increased testing time or simply a detail that had not been thought through.
Impact mapping
Most project plans assume that the world and the organization stand at attention while work on the project is underway. Impact (or effect) mapping lets you look more broadly, asking questions about the relationships between the product, its users, and stakeholders. Working as a team, and from time to time pulling in people from the third-party components our product interacted with, we built a map of the dependencies between the stories in our project and other teams' deliverables that we needed to receive by certain dates. With this information in hand, we were able to move the dangerous "pieces" of the project to its beginning, so as to have more control over them and not let them significantly shift our plans in the middle of the project.
Relative estimation and focus factor
People are bad at estimating in hours and days, and those estimates are even worse when mapped onto the calendar. On one day a developer or designer may work 8 hours without distractions or interruptions; on another, most of the day may go to meetings and phone calls.
To start with, we realized that at best our team works 30 hours a week: about 10 hours go to interim planning, meetings, communicating with each other, and all sorts of unplanned activity.
Then we determined a focus factor for each team member: for example, the lead developer can give our project 4 hours a day and spends the other 4 on functional management in his department, while the technical designer juggles two other teams and can devote no more than an hour a day to the project. For many, this information came as a surprise: all previous estimates had been made without a second thought on the assumption that everyone works 8 hours a day, 5 days a week.
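The effect of focus factors on real capacity is easy to quantify. A sketch with illustrative numbers (the per-person hours below are hypothetical, not the actual project figures):

```python
# Nominal week: 5 days x 8 hours. Each person has a focus factor:
# the fraction of a working day actually available to the project.
DAYS_PER_WEEK = 5
HOURS_PER_DAY = 8

team = {
    "lead developer": 4 / 8,      # 4 h/day on the project, 4 h on department duties
    "technical designer": 1 / 8,  # spread across two other teams, ~1 h/day here
    "developer": 6 / 8,           # ~2 h/day lost to meetings and unplanned work
    "tester": 6 / 8,
}

def weekly_capacity(team: dict[str, float]) -> float:
    """Project hours per week the whole team can realistically deliver."""
    return sum(f * HOURS_PER_DAY * DAYS_PER_WEEK for f in team.values())

nominal = len(team) * HOURS_PER_DAY * DAYS_PER_WEEK
real = weekly_capacity(team)
print(f"nominal: {nominal} h/week, real: {real:.0f} h/week")  # 160 vs 85
```

Even with generous assumptions, the gap between nominal and real capacity is striking, which is exactly why estimates built on "8 hours a day, 5 days a week" drift so far from reality.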
We began estimating tasks in abstract bananas and oranges instead of hours and days. Once we had the first data on our velocity (how many bananas and oranges we manage to deliver per week), we could translate the estimates and deadlines back into ordinary days, and later we corrected them regularly (velocity tends to change from iteration to iteration). It would be very interesting to hear a success story of explaining to top management why projects should be estimated in bananas or crocodiles rather than in hours.
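Translating abstract units back into calendar time is simple arithmetic once velocity has been measured. A sketch with made-up numbers:

```python
import math

# Measured over the first iterations: how many abstract units
# ("bananas") the team delivers per one-week iteration (hypothetical).
velocity_per_week = 12

# Total estimate of the remaining backlog, in the same abstract units.
backlog = 150

# Round up: a partially finished iteration still occupies the calendar.
weeks = math.ceil(backlog / velocity_per_week)
print(f"~{weeks} weeks at the current velocity")  # ~13 weeks
```

The point of the abstract units is that this division is redone with fresh velocity data every iteration, so the calendar forecast tracks how the team actually performs rather than how it hoped to perform.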
Retrospective estimation
After each iteration we measured our velocity and corrected the schedule. Since we had a well-coordinated and technically strong team, our velocity fluctuations were more than covered by the risk buffer the project manager had set aside.
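Keeping the minimum and maximum observed velocity turns each re-forecast into an honest range rather than a single date. A sketch of this recalculation, with illustrative numbers:

```python
import math

def forecast(remaining_points: int, velocities: list[float]) -> tuple[int, int]:
    """Optimistic and pessimistic iteration counts from observed velocities."""
    best, worst = max(velocities), min(velocities)
    return (math.ceil(remaining_points / best),
            math.ceil(remaining_points / worst))

# Velocities measured over the first three iterations (hypothetical data).
observed = [10.0, 14.0, 12.0]
low, high = forecast(120, observed)
print(f"{low}-{high} more iterations")  # 9-12 more iterations
```

As more iterations are measured, the range tightens for a stable team; for a team whose velocity swings widely, the widening range itself is a useful warning signal.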
Note that the velocity fluctuations of a team that is just starting to work together, or lacks the necessary experience, can be very significant. After all, any group of people has to go through the forming-storming-norming-performing stages to become a full-fledged team rather than a gathering of developers, designers, and testers.
Moreover, since we started by working on a part of the product rather than the whole thing, we were able to predict the completion dates of the entire project more and more accurately. We also managed to shake the faith in big upfront design, and we liked that.
Summary
We completed the entire estimation in 2 weeks. The retrospective adjustment took another couple of two-week iterations: based on the results of developing the first user stories, we corrected our velocity estimate. Since then we have worked through 6 iterations, and it seems that our progress still matches the estimate. Of course, it is too early to draw conclusions about the unconditional effectiveness of what we tried, but it is the best result we have had so far.
In the course of this work we also found that we had dealt a serious blow to waterfall in our organization, and that is a nice bonus on top of understanding when our project will be completed.
Used materials
Christoph Steindl (IBM), "Estimation in Agile Projects"
Tom DeMarco and Timothy Lister, all their work without exception
The collective wisdom of my brilliant colleagues
P.S. The most important piece of advice: take the laptops away from the participants during estimation sessions; they kill group dynamics. Give everyone plenty of colored paper, pens, and markers, gather around a large whiteboard, love your product, and you will succeed.