QA Metrics in a Non-Traditional SDLC

Hello everyone,

I’m hopeful that someone here can provide some guidance and act as a guiding star. I joined this company less than a year ago; despite being in the market for over 21 years, it is not a startup but a job-seeker site. My role as QA Director involves building a QA team and a strategy, as the company has been operating without either for over 5 years and has only recently recognized the need for dedicated QA.

Now, the strategy isn’t the issue. In fact, it has been yielding positive results, evident in the smooth releases with minimal post-deployment issues. However, my concern lies in the need to quantify and showcase these improvements with data. The company doesn’t follow a traditional Software Development Life Cycle (SDLC), and one of the changes I proposed was adopting iterations. Given that we deploy updates frequently, I find it challenging to present the data generated during validations without inundating the entire company with numerous emails detailing our pass rates, number of bugs found, bugs fixed in production, and so on.

This is currently my primary concern. For instance, my regression set is still in the process of being scripted for automation. At this point, I only have the smoke tests automated.

I’m hoping that someone in the community can shed light on or suggest a path forward for addressing these challenges.

Thank you, Community.


Why do you feel you need to send emails? I think you may be asking about communication rather than metrics.

BUT. Nothing stops you from setting up a dashboard or a “feed” that shows the latest stats, with a way to see history. As for “metrics”, why not include unit tests alongside your automated smoke test metrics and put them into the same workflow/dashboard? And if you release really often, I have to ask: how do people keep up with all the manual testing?
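One lightweight way to start that “feed” is a script in your pipeline that turns each run’s test report into a few dashboard-ready numbers. A minimal sketch, assuming your smoke and unit test runners emit JUnit-style XML (the sample report and field names here are illustrative, not from the original post):

```python
import xml.etree.ElementTree as ET

def summarize_junit(xml_text):
    """Summarize a JUnit-style <testsuite> report into dashboard-ready numbers."""
    suite = ET.fromstring(xml_text)
    total = int(suite.get("tests", 0))
    failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    passed = total - failed
    return {
        "total": total,
        "passed": passed,
        "failed": failed,
        "pass_rate": round(100.0 * passed / total, 1) if total else 0.0,
    }

# Example: one run's results, ready to append as a row the dashboard charts over time.
sample = '<testsuite tests="40" failures="2" errors="0"></testsuite>'
print(summarize_junit(sample))  # {'total': 40, 'passed': 38, 'failed': 2, 'pass_rate': 95.0}
```

Appending one such row per run (to a CSV, a database table, or whatever your dashboard reads) gives you the “latest stats plus history” without sending a single email.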

But firstly, I’m forgetting my manners. Welcome, welcome @jorge_hidalgo to the most awesome Software testing community in the universe*

*(terms and conditions may apply, once you are in the club, withdrawal penalties take effect)


Alex Schladebeck once mentioned a simple metric: the number of calls to the help desk.


I have gone through the same scenario. I suggest starting by implementing QA best practices, from the test strategy through to the go/no-go decision. This will give clarity. Back it up with stats as well: as you capture stats over releases, you can use them to show trends and improvements.
PS: At the start you might meet resistance, but once your stats prove some improvement, others will support you too.
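Showing a trend from those per-release stats takes very little: keep the numbers in order and report the change release over release. A tiny sketch (the release names and pass rates below are made-up figures, not from the thread):

```python
def pass_rate_trend(releases):
    """Given (release, pass_rate) pairs in order, report the change per release."""
    trend = []
    for (_, prev_rate), (name, rate) in zip(releases, releases[1:]):
        trend.append((name, round(rate - prev_rate, 1)))
    return trend

# Hypothetical figures captured over three releases.
history = [("1.0", 82.0), ("1.1", 88.5), ("1.2", 93.0)]
print(pass_rate_trend(history))  # [('1.1', 6.5), ('1.2', 4.5)]
```

A consistently positive delta like this is exactly the kind of evidence that wins over the sceptics the PS mentions.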

Hi @jorge_hidalgo,

Perhaps you can seek inspiration from the following community members: