No more estimates: a team’s story of transcending story points

Estimating with story points is a common practice. But is it worth the effort?

In this session I will share my team’s experience with story points and the outcome of a #noEstimates experiment. We will look at the issues we encountered with story-point-based estimates and our alternative approach without estimations. I will also share which metrics we used for better estimates and the lessons we learnt.

Theory: what are story points and why are they useful

  • @jamgregory: Another year, another #NoEstimates talk.

    One of these days, we’ll get rid of them :D

    Looking forward to seeing how @fidrelity’s team got on. #LeanAgileX

  • @jamgregory: @fidrelity What is a story point?

    It’s usually used to estimate the effort (amount of work, complexity, uncertainty) of a particular user story. #LeanAgileX

How do you agree as a team on story points? => planning poker to agree on an estimate, avoids bias

  • @jamgregory: Planning poker is a great way to agree on that estimate. The story point is kept secret until everyone is ready to reveal, and then any discrepancies can be discussed.

    Hopefully everyone agrees eventually on a story point for that story! #LeanAgileX

What can you do with story points => calculate velocity

we can predict many things with our velocity => sprint scope, projects, epics, releases, etc
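As a rough illustration of how velocity feeds those predictions (the sprint history, numbers, and helper name below are my own assumptions, not from the talk), the arithmetic boils down to:

```python
# Illustrative velocity-based forecasting; all numbers are made up.

completed_points_per_sprint = [21, 18, 24, 20]  # hypothetical sprint history

# Velocity: average story points completed per sprint.
velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)

def sprints_needed(remaining_points: float, velocity: float) -> float:
    # Predict how many sprints a remaining scope (epic, release) will take.
    return remaining_points / velocity

print(velocity)                      # 20.75
print(sprints_needed(83, velocity))  # 4.0 sprints
```

The same division underlies sprint-scope, epic, and release forecasts; only the "remaining points" input changes.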

The issue with story points

we talked about the estimates, not the story

we spent time and energy on the process, not the value

creating a shared understanding is hard

  • how you are doing it and why you are doing it
  • you need to invest time and energy to create familiarity with the process

emotions are contagious

  • your emotions will influence your estimation
  • your emotions will also influence your team members’ estimations: if you are pessimistic => pessimistic estimations

  • @jamgregory: It can be difficult to remove your emotions from the estimation process. If you’ve had a rubbish day, you’ll be more pessimistic. It’s not a particularly objective metric! #LeanAgileX

our velocity failed us

  • it was too granular => overcommitting

we couldn’t make a sprint commitment based on our velocity

Flawed by design

  1. We are actively hiding uncertainty

    => invest more in story preparation

    => adding more requirements upfront

    => more to estimate

    • @jamgregory: There’s a few problems with stories…

      They can hide uncertainty. They’re not a requirements document. Different engineers will implement things at different speeds. The Fibonacci sequence can make it difficult to get an accurate figure. #LeanAgileX

  2. Output over outcomes

    you can deliver 1,000 story points and still deliver zero customer value

  3. Obfuscation technique as defence from an overly controlling management

    estimates are treated as commitments and turned into a deadline

    from the outside you cannot see much of what is happening on the inside

    => fighting fire with fire

Putting the pieces together

#noEstimates movement

=> this made us start questioning our estimation process

retrospectives => how can we improve, how can we do this in a better way

story breakdown into tasks

=> small pieces, highest priority first

=> more granular velocity

  • @jamgregory: A solution to this is breaking down the stories into smaller tasks.

    These smaller tasks give you better visibility in your velocity - if your burndown chart isn’t moving, you know something is going wrong. #LeanAgileX

=> focus on the user story

=> simplifies sprint planning

=> drop planning poker

It helps if your stories line up with INVEST

  • independent
  • negotiable
  • valuable
  • estimable: sounds contradictory in a #noEstimates talk, but the one doesn’t exclude the other
  • small
  • testable

We created a story ceiling: a cap on story size

predict your backlog growth rate to answer:

  • When will it be done?
  • How much will it cost?

=> Velocity: 6 stories/sprint

=> Backlog growth: 2 stories/sprint
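With roughly equal-sized stories, those two rates are all the forecast needs: the backlog shrinks by the velocity but grows by the growth rate, leaving a net burn of 4 stories per sprint. A minimal sketch using the talk’s rates (the backlog size of 40 and the assumption that growth stays constant are mine):

```python
import math

velocity = 6          # stories completed per sprint (from the talk)
backlog_growth = 2    # new stories appearing per sprint (from the talk)
net_burn = velocity - backlog_growth  # 4 stories actually removed per sprint

def sprints_until_done(backlog_size: int) -> int:
    # Assumes the growth rate stays constant; an illustrative simplification.
    return math.ceil(backlog_size / net_burn)

print(sprints_until_done(40))  # 10 sprints
```

Multiply the sprint count by sprint length and team cost to answer “when will it be done?” and “how much will it cost?”.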

Key Takeaways

Does it scale? Yes

estimated vs unestimated epics

unestimated epics were all roughly the same size

=> throughput based forecasts

we humans are pretty bad at estimating; throughput-based forecasts are more accurate and take less of people’s time

trust your data!

question your rituals

=> relentlessly inspect and adapt: is there a better way of doing this?

for us the best tool was our retrospectives


When breaking down into tasks, how do you resolve the common issue of sequencing changes? For example, developer one creates branch 1 for task 1, but developer two needs branch 1 finished before starting task 2.

we ordered tasks in sequence

sometimes branching out from one task’s branch :/