I’m hopeful that someone here can provide some guidance and act as a guiding star. I joined this company less than a year ago; it’s not a startup but a job-search site that has been in the market for over 21 years. My role as QA Director involves building a QA team and a test strategy, as the company has been operating without either for over 5 years and has only recently recognized the need for a dedicated QA team.
Now, the strategy isn’t the issue. In fact, it has been yielding positive results, evident in the smooth releases with minimal post-deployment issues. However, my concern lies in the need to quantify and showcase these improvements with data. The company doesn’t follow a traditional Software Development Life Cycle (SDLC), and one of the changes I proposed was adopting iterations. Given that we deploy updates frequently, I find it challenging to present the data generated during validations without inundating the entire company with numerous emails detailing our pass rates, number of bugs found, bugs fixed in production, and so on.
This is currently my primary concern. On top of that, my regression suite is still in the process of being scripted for automation; at this point, I only have the smoke tests automated.
I’m hoping that someone in the community can shed light on or suggest a path forward for addressing these challenges.
Why do you feel you need to send emails? I think you are asking about communications, not metrics maybe?
But nothing stops you from making sure there is a dashboard or a “feed” that shows the latest stats, with a way to see the history. As for metrics, why not include unit tests alongside your automated smoke test metrics and put them into the same workflow/dashboard? (There’s a rough sketch of the kind of thing I mean just below, after the formalities.) If you release really often, how do people get all the manual testing done, I have to ask?
But firstly, I’m forgetting my manners. Welcome, welcome @jorge_hidalgo to the most awesome Software testing community in the universe*
*(terms and conditions may apply, once you are in the club, withdrawal penalties take effect)
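Here’s a rough sketch of the sort of “feed” I mean, assuming both your unit and smoke suites can emit JUnit-style XML and that you have a Slack incoming webhook (or similar) to post to; the file names and webhook URL are placeholders, not anything from your actual setup:

```python
# Rough sketch: merge unit-test and smoke-test results into one "feed" post.
# Assumes both suites write JUnit-style XML and that a Slack incoming-webhook
# URL is available; all paths and URLs below are placeholders.
import json
import urllib.request
import xml.etree.ElementTree as ET

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder


def summarise(path: str) -> dict:
    """Sum tests/failures/errors across all <testsuite> elements in a JUnit XML file."""
    root = ET.parse(path).getroot()
    totals = {"tests": 0, "failures": 0, "errors": 0}
    for suite in root.iter("testsuite"):  # handles <testsuite> or <testsuites> roots
        for key in totals:
            totals[key] += int(suite.get(key, 0))
    return totals


def post_feed(results: dict) -> None:
    """Post a one-line pass summary per suite to the team channel."""
    lines = []
    for name, t in results.items():
        failed = t["failures"] + t["errors"]
        lines.append(f"{name}: {t['tests'] - failed}/{t['tests']} passed")
    payload = json.dumps({"text": "Latest test stats\n" + "\n".join(lines)}).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    post_feed({
        "unit": summarise("unit-results.xml"),    # hypothetical output paths
        "smoke": summarise("smoke-results.xml"),
    })
```

Drop something like that at the end of whatever pipeline runs the suites and the history lives in the channel; the same idea works with Teams, a wiki page, or wherever the company already looks.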
@jorge_hidalgo
I have gone through the same scenario. I would suggest implementing QA best practices end to end, from the test strategy through to the go/no-go decision. Back this up with stats: as you capture them over successive releases, they can be used to show trends and improvements as well.
PS: At the start you might get resistance, but once you prove some improvement with your stats, others will support you too.
Hi Conrad! … We already have 100% coverage for smoke and regression testing, and according to the strategy that we as a team are following, we focus our manual effort only on the new features that are going to be released and trust 100% in our automated efforts.
I’ve never worked anywhere, Jorge, where I had more than 95% code coverage, or even 100% of test cases automated. Although that is usually because I’m testing diverse platforms that do not automate very well in my job. But recently I took some inspiration to automate earlier in my process from this recent posting: https://www.ministryoftesting.com/articles/in-sprint-test-automation-on-agile-teams-yes-you-can
I’m guessing you work in a Kanban-style delivery setup, maybe? Perhaps understanding the business constraints that dictate that way of delivering can help you work out which metrics other Kanban teams have found useful.
If your product runs on many, many platforms that often cannot be automated through every scenario, even after making platform security changes, then you no longer have that goal of 100% of customer journeys automated. Often the sheer time to execute some scenarios (even, or especially, manually) can exceed 10 minutes to traverse a complete provisioning journey. So you pick your battles: when someone like Azure tells you an AD sync can take up to 10 minutes, you work with that and move on to other, more valuable pieces. You can only test the sync a few times, and you have to focus on testing your own code, not the third-party interactions, aside from the negative ones or black swans in there. Stubs do help, but sometimes a stub is a distraction.
You shift your goal to an 80/20 way of thinking and focus on the top 80% of use cases. You may also have a UI that has to render in dark, light, and high-contrast modes; automating those tests well is costly, so again you choose your battles, because we all know that security flows are probably 1000x more valuable than checking that the colour of a disabled button icon makes sense to the user seeing it. You break complexity down, but no, I’ve never worked anywhere where 100% was achievable. OK, maybe once, on one product, but generally it’s not my goal.
You don’t need to send numerous emails; instead, schedule regular, simple, informative reports highlighting the selected key metrics, e.g. a weekly summary sent to the relevant stakeholders.
Identify 2-3 critical metrics directly linked to product quality and user satisfaction, e.g. the percentage of bugs (critical, major, minor) found before vs. after release and the time taken to resolve the ones that reach production. Compare the values from release to release, etc.
Use tools or simple scripts to pull test execution results and bug reports automatically.
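For example, here is a minimal sketch of pulling the before/after-release bug counts from Jira’s REST search API, assuming Jira Cloud, a “Bug” issue type, a fix version per release, and an “Escaped” label on bugs found in production (the URL, credentials, label, and version are placeholders for whatever convention you actually use):

```python
# Minimal sketch: pull bug counts for one release from Jira and work out a
# defect-containment percentage. The JQL, label, and credentials are
# assumptions; adapt them to your own workflow.
import requests

JIRA_URL = "https://yourcompany.atlassian.net"  # placeholder
AUTH = ("you@yourcompany.com", "api-token")     # placeholder credentials


def count(jql: str) -> int:
    """Return how many issues match a JQL query (only the total is needed)."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total"]


if __name__ == "__main__":
    release = "2024.06"  # hypothetical fix version
    total = count(f'issuetype = Bug AND fixVersion = "{release}"')
    escaped = count(f'issuetype = Bug AND fixVersion = "{release}" AND labels = Escaped')
    caught = total - escaped
    containment = 100 * caught / total if total else 100
    print(f"{release}: {caught} bugs caught pre-release, {escaped} escaped "
          f"({containment:.0f}% defect containment)")
```

Run something like that from the release pipeline and auto-append the output to the weekly summary, so the trend builds up without any extra email writing.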
Use Jira (or any other tool’s) dashboards to display automated test coverage, bug trends over time, burndown charts, release stability metrics, etc.
Suggest retrospectives to discuss issues and bottlenecks in the QA process, SDLC, product, etc.
Automation is good, but focus it on regression testing so that you have more time for exploratory testing.
Think about how you can improve QA infrastructure as needed without using lots of additional resources and time.
It would be really cool to set up automated alerts for regression test failures and performance issues in production, alerts for any errors in prod, and some app health checks (you can use existing tools with the help of the dev team, or even write some simple scripts).
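On the “simple scripts” end of that, a scheduled health check can be as small as the sketch below; the /health URL, the latency threshold, and how the alert is routed are all assumptions, so wire the non-zero exit code into whatever scheduler or alerting (cron, CI, Slack, PagerDuty) the team already has:

```python
# Sketch of a scheduled production health check: hit the app's health endpoint
# and fail loudly if it errors or responds too slowly. URL and threshold are
# placeholders; the non-zero exit code is what the scheduler/alerting hooks into.
import sys
import time
import requests

HEALTH_URL = "https://www.example-jobboard.com/health"  # placeholder
MAX_SECONDS = 2.0  # alert if the response takes longer than this


def check() -> list:
    """Return a list of problems found (an empty list means healthy)."""
    problems = []
    start = time.monotonic()
    try:
        resp = requests.get(HEALTH_URL, timeout=10)
        elapsed = time.monotonic() - start
        if resp.status_code != 200:
            problems.append(f"status {resp.status_code}")
        if elapsed > MAX_SECONDS:
            problems.append(f"slow response: {elapsed:.1f}s")
    except requests.RequestException as exc:
        problems.append(f"request failed: {exc}")
    return problems


if __name__ == "__main__":
    issues = check()
    if issues:
        print(f"ALERT {HEALTH_URL}: " + "; ".join(issues))
        sys.exit(1)  # non-zero exit lets cron/CI/monitoring raise the actual alert
    print("healthy")
```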
Communicate with the dev team about improving unit test coverage and quality standards; suggest code review procedures and any process changes you think are relevant and useful.