Are we measuring the wrong things in scrum?

One of my favourite talks I’ve watched recently is Quality Coaching in the wild: when theory hits reality | Ministry of Testing. There was a very brief sentence from @amcharrett that tapped into a deep concern about how we measure the effectiveness of our Scrum sprints, namely:
“…Having autonomous teams who used Kanban rather than sort of limiting to number of points…”

Now we do estimate points, discuss velocity, etc., and try to use them as a guideline for planning. We also measure how well we stick to our commitments, i.e. if the team commits 70 points to a sprint, how good are we at honouring that commitment, i.e. the % done.
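Concretely, that commitment measure is just a ratio; a quick sketch with made-up numbers (not our real figures):

```python
# Made-up sprint numbers, purely illustrative.
points_committed = 70
points_completed = 55

commitment_honoured = points_completed / points_committed
print(f"Sprint commitment honoured: {commitment_honoured:.0%}")  # ~79%
```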

My deep concern is that, after all this time, we still have engineers who feel they have to map points to time so that they can give a meaningful value. If there are high-priority tickets to get done in the next sprint, the estimates don’t matter that much, as they go in anyway; we look at the velocity at the end and say “Well, it looks like it’ll be a challenge to complete the sprint”, rather than holding the sprint strictly to our target velocity.
Are we wasting energy worrying about points? If we deliver well and on time, does how many points we completed in a sprint matter? For those who use Kanban or any other process, how do you know that your engineering delivery process is working effectively?

4 Likes

I have had similar experiences to yours, and so have some others here.
My team got blamed for not completing sprints because we just needed a few more days, or even just hours, on some stories, so we could not close them in the old sprint and then closed them 1-2 days later in the new one. Combined with that was a lack of interest in and appreciation of what we achieved: the changes we made, the great product features we created.
And our sprint cycles have nothing to do with our release cycle. We have a B2B product which is connected to other software vendors. We cannot easily release to prod every 2-3 weeks, but have to coordinate in advance. We also have multiple test stages after our development cycle.
Because of that we sometimes deploy multiple times a week to different environments, but to prod only every 2-3 months.
And it is never the exact increment we got from the sprint at the very end of a sprint.

Some developers even judged their work as bad when we did not “complete” a sprint, and did not celebrate what they had achieved anyway.
At least that is happening far less now. Some have learned the lesson that a “(not) completed” sprint and story points don’t represent the actual work.

Because of all of this, I am considering switching to Kanban too. We have to do the work anyway, and sprints and story points give a false sense of safety and can also hurt the team.
What I have learned so far from my research is that in Kanban you measure throughput and can give a probability of how likely it is that an item of a certain size will be done within time X.
This is based on measurements, not on estimations (which are likely to be wrong).
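For anyone curious what that looks like in practice, here is a minimal sketch of a throughput-based Monte Carlo forecast. The weekly throughput numbers and the backlog size are made up; it only illustrates the idea of sampling past throughput instead of estimating:

```python
import random

# Hypothetical history: items finished per week (made-up numbers).
weekly_throughput = [3, 5, 2, 4, 6, 3, 4, 5]

def weeks_to_finish(backlog_size: int) -> int:
    """Simulate one possible future by re-sampling past weekly throughput."""
    done, weeks = 0, 0
    while done < backlog_size:
        done += random.choice(weekly_throughput)
        weeks += 1
    return weeks

def probability_on_time(backlog_size: int, deadline_weeks: int, runs: int = 10_000) -> float:
    """Share of simulated futures in which the backlog is finished by the deadline."""
    hits = sum(weeks_to_finish(backlog_size) <= deadline_weeks for _ in range(runs))
    return hits / runs

print(f"P(20 items done within 5 weeks) ≈ {probability_on_time(20, 5):.0%}")
```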

While I get where Scrum/sprints are coming from, I don’t consider them very appropriate for modern times. Maybe for beginners learning agility in a more structured environment.
In the days of waterfall (I have experience of that), developers developed for months until any first version of a product was shown to anyone for feedback, testers as well as customers.
Scrum changed that by making the waterfall last only a few weeks (and when you have testers on the team, they get in touch with first attempts even earlier). Changing from cycles of 6+ months to 2-3 weeks was a big improvement back then. You got feedback far earlier and wasted less effort going in the wrong direction.
But in my experience this is outdated for many contexts. The cycle of sprints doesn’t match the reality of many software projects.

I don’t advocate dropping reviews and retros at all. Let’s keep them every X weeks. But make the planning of development more continuous.
For that, Kanban has the pull principle and replenishment.

To me, Scrum feels like development with training wheels (which might be the right thing for some people). Let’s take those off.

2 Likes

Thanks for that, it’s great to see you’ve experienced the same. I’m trying to take a step back to see what matters and, like you say, I’m not seeing sprint velocity relate to quality. So I totally agree: we need to embrace the reality of how we build software, and I think Kanban is that reality.

2 Likes

Nice match! I’m glad that I can help you.

Also, don’t think too much about what the typical Sc(r)um velocity means: points per sprint, no matter how many hours/days people actually worked on the sprint content.
Points per person-day or per hour would make far more sense, but is harder to calculate.

Our PO invented something to do this. At the start of each sprint we give rough estimates of our availability in days for the sprint (which does not take illness into account). At the end he divides the achieved points by those person-days:
X points/day
That makes more sense, but it still feels like pretending, a wannabe measurement of throughput and planning. Theater for the management.
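For what it’s worth, the arithmetic is just a division; a made-up example (names and numbers are purely illustrative):

```python
# Hypothetical sprint: availability estimated at planning, points counted at the end.
availability_days = {"dev_a": 8, "dev_b": 10, "dev_c": 6}  # made-up availability
points_completed = 36

person_days = sum(availability_days.values())      # 24 person-days
points_per_day = points_completed / person_days    # 1.5 points per person-day
print(f"{points_per_day:.2f} points per person-day")
```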

Render unto Caesar what is Caesar’s.
Basically, the counting of coins is for coin-counters. Using them to make more coins and to get goods and services moving is almost orthogonal at times, even though they aren’t, really.

Knowing what work to do first and what last, what work to stop doing, and how long and how much time it will take, is fundamentally critical to the one thing engineers hate doing: optimising and reducing waste. So because we suck at estimating and reducing waste, we have Scrum. If we are lazy, we have Kanban. Kanban is good for many things, for example when you have billable customer hours.

1 Like

I disagree with that, if you mean it seriously.
I see the reduction of waste as independent of whether you use Scrum or Kanban.
Yes, one can use Kanban to bill many hours to a customer. But one can also do other things with it.
I have heard of people who do serious product and project development with it.

I see you implying that this is less doable, or not doable, in Kanban, on which I have a different opinion.

My team has a laziness problem. Everyone knows what the process “should be”, but there’s a clear struggle in getting there.
Good estimations require good planning and experience. The way some guys work is to start out with a PoC and then blend it into the actual work. Really weird.
I have never seen sprint targets achieved, because things almost always get delayed by some unexpected code complication. The only thing that comes out of our sprints is how much time we committed and how much we logged.

The problem, I think, lies on the left side of the SDLC. The better the left side, the easier the right side.

Years ago that would have been my response, but I learnt from a great mentor that judgement and understanding are mutually exclusive: if you understand, you won’t judge; you judge because you don’t understand.
So, taking a step back, I would say I may judge the cause to be that the team is lazy, but the reality is they’re not seeing the importance of the process. So, as a Quality Coach, I need to get in there and understand why.
Regardless of how much I may have invested in the process and believe I’ve done all the right things in collaborating with the team to build it, I need to be open to the possibility that there is a truth underneath their behaviour.

So the conclusion I came to was: does poor sprint velocity, estimation etc. = poor-quality software delivery? No, it doesn’t. That, fundamentally, is why I feel our measuring of the process has become an isolated metric of how well we are doing with our sprint planning, and the underlying reason why it’s difficult for people to buy into it.

3 Likes

Aptly put. The trouble we have is too few people, overwhelmed by work. But something good happened too: we realized the problem, and now we’re working with a model that fits our working habits, making our accountability more transparent.

1 Like

I was having a jibe. Kanban is useful, but as a tester it makes my life a nightmare.

In my last job, the team switched to Kanban, and after years of working in Scrum-like setups, the move to Kanban screwed up the “definition of done”. Because deployment and testing are cyclical activities, they lack places to give feedback and to drop in rework for bug fixes without creating yet another Kanban item for the bugfixes. If Kanban implements WIP limits, then sure, I’ll buy it for dev+test. But I’m not Kanban-trained, and I’ve never worked anywhere where it has worked swimmingly for development.

Unless teams sit down and have an honest talk about the software development lifecycle, and also have the freedom to make substantial process changes while they are running, time will always be consumed in small chunks here and there, so as to fill up the available space on the calendar.

1 Like

++Ninja award badge @ghawkes

I get your situation and understand how letting go of the existing guard rails causes you problems.
My team adopted a mindset of Stop Starting, Start Finishing, and this includes testing as well. I don’t carry out all the testing myself, but also consult with and supervise developers testing new features.
It helps us to have sub-tasks for everything, including testing.
Maybe you will get something out of my article When You Need A Test Column (very seldom). I plan to add a working example to make our solution easier to grasp.

That sounds like the core problem. I wish you energy to work on that.

2 Likes