Over half of all software projects are late or over budget. But isn’t “late” just a failed expectation, and an expectation based solely on a team’s estimate? Is your team confronted with questions like when will it be done or how much will it cost? We tend to respond to these uncomfortable inquiries by putting our heads together and coming up with an educated guess.
Maybe there are easier, more transparent ways to make predictions about our software releases?
Why estimates don’t work
It would be great if our ability to give estimates worked for complex, abstract problems like most modern software projects. In reality, though, this is rarely the case.
Even if you are aware of your cognitive biases, like the optimism bias, you will end up making inaccurate predictions. Hofstadter’s law sums that up neatly:
Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.
What about swarm intelligence?
We can take the wisdom of the crowds as an argument in favour of group estimation techniques like planning poker. However, the wisdom of the crowds doesn’t play out all that well in smaller groups like your typical agile team. It works best when participants are heterogeneous in their ways of thinking and hold different information. In agile teams you might be lucky enough to have a diverse team, but you rarely have diverse information at the point you are asked for an estimate, which is often the very beginning of a project.
Emotions are contagious
Teams have complex dynamics which can also negate the wisdom of the crowds. There might be an urge to conform to the opinion of a strong leader on the team, essentially reducing the crowd’s wisdom to that of a single person. Besides that, our emotions are contagious. Imagine you’re working on a task that is going horribly wrong: your build pipeline breaks randomly, you’re wrestling with legacy code, and you discover hidden dependencies. After a day like that you will bring a much more pessimistic attitude to the table. This mood in turn infects your colleagues, a well-researched effect often seen in social networks and gatherings. Ultimately, your estimate is little more than a snapshot of a weak opinion.
Estimates are a bad investment
Coming up with estimates costs time and energy. They are an investment. Your expected return is an accurate prediction. Let’s assume for a moment that your estimates are in fact accurate. What if there is a way to come to the same prediction with less time and energy expended? Then there’s no reason to estimate. This way exists. But we’ll get to that later.
Let’s assume your estimates are not accurate. Then they are a form of waste and should be eliminated. By simply dropping estimates you already improve your return on investment because you’re wasting fewer resources. Those resources can of course be invested elsewhere, like delivering real customer value.
Battleground for interpersonal conflict
Your estimates can also create a ton of tension between the team and stakeholders. I want to emphasise again that your estimates are only a personal interpretation of perceived effort. They are by nature highly subjective. So when a stakeholder questions the estimate, it’s really the team’s opinion being questioned. This is a recipe for personal conflict which can fuel a destructive them vs. us narrative between management and teams.
Why do we even estimate?
In any case estimates don’t look like a great investment. Why do we keep doing it then?
We humans have a craving to make sense of the future. We try to control and predict our environment. In organisations this fuels the two big questions following every project: How long will it take and how much will it cost?
There was a time when guessing was good enough because there was no alternative. This is especially true for complex systems like the weather. Before we had weather stations all over the planet, you needed to make a guess based on your current knowledge and past experience to decide when best to plant your crops.
Many modern software projects are just as complex, and we naively relied on the same mechanisms to predict them. By now we have discovered better ways of predicting the future. Yet we kept the traditions and rituals that rely on our limited capability to estimate it.
So if we can’t trust ourselves to predict the future, who can we trust?
Throughput based estimations
Most agile teams use story point velocity to predict sprint scope and project releases. But story points don’t say much about real effort; they capture perceived effort, which makes them a guess. Instead of guessing the effort we can simply measure it and let more objective data guide us. We can count how many stories we actually get done each sprint and then make predictions based on that throughput. With this we remove the middleman from the process.
Before, you had 1) your actual performance (stories delivered), 2) your interpretation of that performance (story points assigned to each story) and 3) the prediction of future performance (stories to be delivered) based on your interpretation (story point velocity). Now you only have 1) your past performance (stories delivered) to 2) predict future performance (stories to be delivered).
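As a minimal sketch of this idea (all sprint numbers below are made up for illustration), a throughput-based forecast needs nothing more than your delivery history:

```python
# Hypothetical sprint history: stories actually delivered per sprint.
delivered_per_sprint = [5, 7, 6, 6, 8]

# Throughput is simply the average of what really got done.
throughput = sum(delivered_per_sprint) / len(delivered_per_sprint)

# Predict how long a (hypothetical) remaining backlog will take.
remaining_stories = 45
sprints_needed = remaining_stories / throughput

print(f"Throughput: {throughput:.1f} stories/sprint")      # 6.4
print(f"Forecast: ~{sprints_needed:.1f} sprints remaining")  # ~7.0
```

No interpretation layer, no pointing session: past delivery counts go straight into the forecast.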
Normalise your stories
There is an obvious problem with such throughput-based predictions: their accuracy depends on user stories staying relatively similar in size, and nothing in our process guarantees that. So what we need to do is establish a ceiling on story size.
Wait, haven’t we just invented another intermediary layer which again costs time and energy just to produce predictions? No, because smaller stories have two advantages in themselves. First, you deliver value to your customers earlier. Second, stories are kept to a reasonable size, which makes them more practical for the team to handle.
There are many ways to increase a team’s throughput by slicing work into smaller, more concrete chunks. One is to establish a definition of ready based on the INVEST principle, which has worked well in my experience.
A question of granularity
What’s the best metric to measure your throughput? We have to make a trade-off between availability and granularity. Generally you want the most granular throughput metric which is available for the time horizon of your prediction. Your granularity levels could look like this:
- Your quarterly plan is broken down into multiple epics (or projects)
- Each epic is broken down into user stories
- You break down your user stories into tasks
So the most granular throughput metric is your task throughput. But it’s unlikely that you will have every user story broken down into tasks at the beginning of a project. So you can’t use task throughput to predict project timelines. You can only use it on a smaller scale, like predicting the next sprint. However, you do have the initial user stories for your project. This is the most granular level which is also available and is therefore our ideal throughput metric.
Facing scope creep
So is this enough to create better release predictions? Theoretically yes, practically no. In the real world projects have the tendency to grow in scope over time. There are two reasons for that:
- Customers and stakeholders have new ideas
- You discover more work hidden in existing requirements
If you want to make accurate predictions you will have to start measuring how many stories you add to your backlog every sprint. Once we measure the backlog growth rate as well we have all the ingredients for realistic predictions:
- Our throughput
- Our backlog growth rate
- The current project scope
Predicting with throughput and scope creep
Let’s put it all together and try to predict a project. You know how many stories you get done in a week or in a sprint so you know your user story throughput. You also know how many stories your project contains.
Here is an illustration. Let’s say you deliver 6 stories per sprint, your project backlog grows by 2 stories each sprint, and the initial scope of the project is 60 user stories.
Most interesting: if you don’t consider the scope creep, you will naively assume that your project is done (red line meeting turquoise line) when in reality only two thirds of the actual timeline has elapsed.
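A minimal sketch of the arithmetic behind that chart, using the numbers above (6 stories delivered and 2 added per sprint, 60 stories initial scope); the helper function is my own, not a standard one:

```python
import math

def sprints_to_finish(scope, throughput, growth_rate=0):
    """Sprints until the backlog empties, assuming constant per-sprint rates."""
    net_burn = throughput - growth_rate
    if net_burn <= 0:
        raise ValueError("Backlog grows as fast as you deliver: no finish date.")
    return math.ceil(scope / net_burn)

naive = sprints_to_finish(60, 6)         # ignores scope creep
realistic = sprints_to_finish(60, 6, 2)  # accounts for 2 new stories/sprint

print(naive, realistic)  # 10 15
print(f"The naive deadline falls at {naive / realistic:.0%} of the real timeline")
```

The naive forecast says 10 sprints, but with scope creep the real answer is 15, so the naive deadline arrives when the project is only two thirds of the way through.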
The only way to win at estimates is not to estimate
You can apply the same principles at even higher levels. If you want to predict how many epics you can get done in a quarter, you can check your epic throughput and base your prediction on that. We have done exactly that on my team, with surprising success.
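The same arithmetic works one level up; here is a sketch with invented quarterly numbers:

```python
# Hypothetical: epics closed in each of the last four quarters.
epics_closed = [3, 4, 2, 3]
epic_throughput = sum(epics_closed) / len(epics_closed)  # 3.0 epics/quarter

planned_epics = 5
quarters_needed = planned_epics / epic_throughput

print(f"~{quarters_needed:.1f} quarters for {planned_epics} epics")  # ~1.7
```

The only thing that changes with the planning horizon is the unit you count; the method stays the same.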
Changing the narrative
When your prediction is questioned, you can now talk about the data that created it instead of the team’s opinion. We can discuss whether adding engineers would improve throughput. We can think of responsibilities the team could hand off to be more focused. But the most important tool we have is to de-scope the project. With a data-driven visualisation like this, it is easy for a stakeholder to understand how the project gets delayed with every new feature request. If a deadline really is that important, we have to tweak a few of those variables. The easiest one to tweak is the project scope.
Do we even need predictions?
Even though there are data-driven ways to make predictions, they are not perfect either. So I recommend taking the next step: question whether you need predictions and forecasts at all. You might realise that they aren’t actionable and are used solely to instill a false sense of control and certainty, something to ease the underlying fear of our frighteningly complex world. Instead of hiding that complexity, we can embrace it and be transparent; one way to do that might be lean roadmaps. Explaining this to customers, stakeholders or management is no easy feat. It relies on building honest relationships, something that can pay off manifold.
Outcomes over output
Your predictions focus on output. When will this feature be ready? When can we close this project? How much can we bid for this contract?
You can burn a thousand story points but deliver zero customer value
All of this is focused on output. The problem is that there is only a loose correlation between output and outcomes. You can burn a thousand story points but deliver zero customer value. By tracking our progress in output and giving predictions on future output, we are at the same time taking focus away from outcomes. Unfortunately this is the reality in most of the modern business world. But it does not have to stay that way. We can question the need for an output forecast when asked and instead explain the benefits of outcome-oriented work. When you think in outcomes you will naturally try to shorten your release cycles to deliver value as early as possible and cut out all the bullshit you can identify early on.
Your prediction is only one of many possible realities
Making predictions based on throughput rather than (un)informed guesses is a first step to better predictions. However, to truly make the most of our time we need to obsess more over outcomes and devalue output as a goal in itself. Take a step back, question your predictions and other rituals and discover the truly meaningful parts of your workflow.