What are some indicators/metrics that you have had a successful release?

The main metric I use right now is the number of items that were released (user stories/defects/etc.) and how many post-release defects were caused by the release. We have a way to manually track this in our organization, and we track it per release (every two weeks).
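For what it's worth, the calculation itself is simple. A minimal sketch of the per-release number, assuming a made-up data structure and numbers (our real tracking, as mentioned, is manual):

```python
# Hypothetical per-release data: items shipped and defects traced back to that release.
releases = [
    {"release": "2024-05-01", "items_released": 23, "post_release_defects": 2},
    {"release": "2024-05-15", "items_released": 18, "post_release_defects": 5},
]

for r in releases:
    # Defects per shipped item lets us compare releases of different sizes.
    rate = r["post_release_defects"] / r["items_released"]
    print(f"{r['release']}: {r['items_released']} items, "
          f"{r['post_release_defects']} post-release defects ({rate:.0%})")
```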

A real-time indicator we use is the web server logs. We have tooling in place to flag when a new (never-before-seen) error is logged. Our devs care about quality, and they will dig into those errors after releases.
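Conceptually, the "never been seen before" check is just a signature comparison. A rough sketch, assuming a simplified log format and a hypothetical normalisation step (our actual tooling is more involved):

```python
import re

# Error signatures already seen in earlier releases (hypothetical example entry).
known_signatures = {"NullReferenceException at OrderService.Submit"}

def signature(message: str) -> str:
    # Replace numbers (ids, timestamps, offsets) so repeats collapse to one signature.
    return re.sub(r"\d+", "N", message).strip()

def new_errors(log_lines):
    """Yield only error lines whose normalised signature has never been seen before."""
    for line in log_lines:
        if "ERROR" not in line:
            continue
        sig = signature(line.split("ERROR", 1)[1])
        if sig not in known_signatures:
            known_signatures.add(sig)
            yield line
```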

Curious as to what other teams use.

To start, I think there are useful metrics which may be used during testing. Used incorrectly, though, those metrics are often misleading. The story goes like this. Two testers happen to (beyond all probability) find the same 5 defects in a product. The first logs all 5 in the issue tracking tool and forgets about them. The second brings the defects directly to the rest of the design team and talks about them. While they are talking, the team determines that 2 of the defects are actually working as designed and the tester had misinterpreted the requirement. Another 2 are quickly fixed and need no logged issue; the programmer fixes them before the meeting is even finished. The last is an issue which the tester logs.

So who is the better tester? The one with 5 issues, or the one with 1?

With that kind of story in mind, my teams tend to use, among other tools, a different indicator of the success of the sprint/release. How do we feel about our progress? Do the testers feel that we have done good testing? Do the designers and programmers and architects feel like we have communicated our results effectively? Does the management have faith in our reported results? Have our customers given feedback?

These are (mostly) not measurable, thus we talk about the product and the processes instead. The team does this in the sprint retrospectives. The management does this through my constantly asking for feedback. The customers do this through buying more of what we are making. While emotions are often just as misleading as metrics, the chance is good (in my experience) that if the team has a positive view about what the testers are doing, then testing is proceeding well.

Now, there are some big prerequisites for tracking by emotion to be an effective tool. If nobody is actually looking at the tests, it’s easy to make the testing look deep while, in fact, it is shallow. That is why team members should involve themselves in processes which aren’t traditionally their own.

I will cut this off by saying that there are very clear disadvantages to using only emotional indicators (which I would be happy to discuss further), but given that I take the line “Individuals and interactions over processes and tools” to heart, I feel that at least discussing the merits and pitfalls is worth the time.


Right now we don’t have a formal metric.

The informal metric is “Do we hear screams and how soon after we finished deploying did they start?”

Some explanation: the software where I work is extensively used by the company’s customers (external users) and by company staff (internal users). If we have internal users telling us they can’t do critical things shortly after a release, we have a problem. If release day goes by (we typically release early in the morning) without any complaints from internal users, it’s a good release.

That is a great story/parable that I am sure I will be using in future talks with people. Thanks for sharing.

I totally agree with a lot of the points you’re making, including ‘Individuals and interactions over processes and tools’. I haven’t found any benefit in reporting defects found on a user story within a sprint as a way to track how we are proceeding with development. We do track how we are doing across teams sprint to sprint, such as whether we completed what we planned to complete.

I do think, though, that we can have great sprints, feel really good about our test efforts, release that same code we felt awesome about, and still ship low-quality software. If we missed a bug with no workaround and our customers are unable to use the software, I would consider that an unsuccessful release.

I guess what I am looking for is post-release indicators or metrics that teams track to demonstrate release quality (and track it over time).

We are in a similar boat, with both internal and external users. The web server log monitoring we put in place has helped us many times in tracking down bugs affecting external users that we have released.

I work for a restaurant chain with a relatively predictable daily order/gift card order system. We had metrics around releases based on the severity level of defects discovered within the first two weeks.

It took several days to agree upon the definitions of the severity levels, assign a point value to each, and then, though I don’t really agree with this part, decide how many points per release could accumulate before it was deemed a bad build.
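To make the mechanics concrete, here is a sketch of that kind of scoring (the point values and threshold below are invented for illustration, not the ones we actually agreed on):

```python
# Hypothetical severity weights and per-release budget.
SEVERITY_POINTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}
BAD_BUILD_THRESHOLD = 10

def release_score(defect_severities):
    """Sum the points for defects found in the first two weeks after release."""
    return sum(SEVERITY_POINTS[s] for s in defect_severities)

score = release_score(["medium", "medium", "low", "low"])
print("bad build" if score >= BAD_BUILD_THRESHOLD else "acceptable", score)
```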

Why don’t I agree? Well, if we hit a critical, i.e., business stops and there is no workaround other than restoring the prior build, that’s way more serious than 4 mid-level issues or 6 low-level items, yet under the point system the consequences are the same. I’d rather spend the time on high/critical issues that cause a real-world loss of revenue than on a menu that slightly truncates its text on an iPhone 5 :slight_smile:

Practically for me: after 3 days, it’s good, due to the number of users on the system. My current role has very few users, and the ones we do have aren’t adventurous, so even if I’ve missed something it could be weeks before they notice :)