My name is Damien and I am Lead QA working in Financial Services.
**I would like to know what analysis tools or collaboration techniques teams are currently using to help come up with a more targeted test strategy, with the goal of reducing the number of tests written for a new feature.**
I am trying to change the mindset of our team/department and move away from the “Test Everything” test strategy. As a result of that mindset, our test suite has grown to 16,000 automated Cucumber scenarios, which is now difficult to maintain and full of duplication. When working on a new feature, we end up writing tests just to demonstrate that it works with an existing feature. I would love to find a tool, or come up with a strategy, that can scientifically tell us which of these tests we do not have to write, and why.
I am keen to find out what teams are doing to help with the Analysis phase when working on a new feature.
Just last week I stumbled across this paper about “Change Driven Testing”, which might help you:
Also, as a side project, I am developing a tool myself which tackles exactly the problem you are describing.
The tool uses Git and static source code analysis to determine which tests of the defined test suite (manual or automated) should run in order to test the newly introduced changes. I am not done with the development yet, but I can provide a prototype if you are interested in trying it out. So far, however, the tool only works with Java applications.
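To make the idea concrete, here is a minimal sketch of change-driven test selection. The naming convention (a `FooTest` covers `Foo`) and all class names are my own invention for illustration, not how the actual tool works; a real pipeline would get the changed classes from something like `git diff --name-only HEAD~1` rather than a hard-coded list:

```java
import java.util.*;
import java.util.stream.*;

public class ChangeDrivenSelector {

    // Naive convention for this sketch: a test named "FooTest" covers class "Foo".
    static Set<String> selectTests(Collection<String> changedClasses,
                                   Collection<String> allTests) {
        return allTests.stream()
                .filter(t -> changedClasses.stream().anyMatch(c -> t.equals(c + "Test")))
                .collect(Collectors.toCollection(TreeSet::new)); // sorted, deterministic
    }

    public static void main(String[] args) {
        // In a real pipeline these would come from `git diff`, not a literal list.
        List<String> changed = List.of("Login", "SessionStore");
        List<String> suite   = List.of("LoginTest", "CheckoutTest", "SessionStoreTest");
        System.out.println(selectTests(changed, suite)); // [LoginTest, SessionStoreTest]
    }
}
```

The actual tool goes further than a naming convention, of course, by following call graphs through static analysis; this only shows the selection step at the end.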
That tool sounds interesting, Samuel. I don’t have a Java app, but I would be interested to follow your project. What is the best way to follow updates on it?
Hmm, to be honest, there is so far no way you could follow my project 😅
Anyway, I have thought about putting a beta version online to get feedback from the community and to share updates with people like you who are interested.
Sounds like a little project for the weekend 😉 I will keep you informed.
Hi - sorry for the delayed answer. I finally found some time to set up a website, so you can follow the progress of this project. Here I am demonstrating the usage on a demo project:
One hint: In general the demo works on a mobile device, but it is much more fun on a normal computer screen.
What exactly am I seeing here, @samuelm? Without having to read the code, are we talking about instrumentation, adding decorators to product code, or just using a domain- or language-specific tool to provide code churn insights in a way that a tester can really use?
It works by adding Java annotations (I think that is comparable to decorators in Python?) to product code. For instance, let’s assume you have a class Login that implements the authentication flow for a user. After you annotate this class with @Spec(id=“login”), all the relevant code for this class is calculated (for instance, which methods call this class, which methods those methods call, and so on). It is also compared with the code of the last commit (or any other commit). By doing so, relevant changes in the source are identified.
At the end, a report in HTML form is generated, which includes the following information:
- Whether the feature login changed or not
- How many lines / what percentage of the feature changed
- Which methods/classes/etc. changed or have an impact on this feature
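A minimal illustration of the annotation idea described above. The `@Spec` annotation here is a stand-in I wrote for this example, not the tool’s real API; the real tool would do the call-graph and commit analysis, while this just shows how an annotation marks a class as a feature and how that marker can be read back at runtime:

```java
import java.lang.annotation.*;

// Stand-in annotation for this sketch; retained at runtime so tooling can read it.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Spec {
    String id();
}

// The class under test is tagged with the feature it implements.
@Spec(id = "login")
class Login {
    boolean authenticate(String user, String password) {
        return user != null && !password.isEmpty(); // placeholder logic for the sketch
    }
}

public class SpecDemo {
    public static void main(String[] args) {
        // A change-analysis tool could discover annotated classes like this
        // and map source changes back to feature ids for the report.
        Spec spec = Login.class.getAnnotation(Spec.class);
        System.out.println("feature id: " + spec.id()); // feature id: login
    }
}
```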
Risk-based testing is an example of how you focus in on what’s important. Exploratory testing is another way to help you find problems that matter.
I have never really tried to use FMEA but have read about it a fair deal, @mikeharris. The FMEA model is quite exhaustive, but in software systems we have more data available to us to drive even better decisions. It is easy to drop into “a tool or an AI will do this for us if we can collect enough clean data”. For example, why score risk from 1 to 10 when a scale of 1-5 is around 200% faster to apply and just as accurate on average? I love scoring risks all together like this, numerically, because it lets us communicate those risks clearly.
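To show what numeric scoring looks like in practice, here is a tiny FMEA-style Risk Priority Number calculation on the 1-5 scale mentioned above. The factor names follow the classic FMEA triple (severity, occurrence, detection), but the example values are invented:

```java
public class RiskScore {

    // Classic FMEA Risk Priority Number: severity x occurrence x detection.
    // On a 1 (low) to 5 (high) scale per factor, RPN ranges from 1 to 125.
    static int rpn(int severity, int occurrence, int detection) {
        return severity * occurrence * detection;
    }

    public static void main(String[] args) {
        // Invented examples: a high-severity login risk vs a minor cosmetic one.
        System.out.println("login outage risk:  " + rpn(5, 3, 2)); // 30
        System.out.println("cosmetic UI risk:   " + rpn(2, 2, 1)); // 4
    }
}
```

The higher RPN goes to the top of the test-focus list; the point of the small scale is that the team can score a whole feature’s risks in minutes rather than arguing over 1-10 granularity.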
I feel that other good pointers to test focus areas do exist, like understanding the role of product architecture, environment, and dependencies. Good language skills and a grasp of the environment the code lives in are crucial to applying models like FMEA. I find it too easy to rely on experience, and that is unhealthy and not robust.
Mike, when I worked at a certain huge company, we had a lot of test case failure data as well as code static analysis tooling available to us. A rather clunky web service was cobbled together to show which test cases could be grouped together as likely to fail, based on which “component” you thought your changes were in: simply a history of how often a test would fail given a code change in an area. Some of that mapping was semi-manual, mind you, because feature and component mappings are often, but not always, directly connected and simple.
It was effective at “autonomously” eliminating huge batches of test cases (ones that were obviously irrelevant to a human observer), and when you have a CI/CD system that wants to run every single test for every single merged code change, it starts to make huge sense.
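The history-based grouping described above could be sketched roughly like this; the test names, counts, and threshold are all invented for illustration, and the real service was doubtless far more involved:

```java
import java.util.*;

public class HistorySelector {

    // Select tests whose historical failure rate, for changes to one component,
    // meets or exceeds a threshold. failuresPerTest counts how often each test
    // failed across the last totalChanges commits touching that component.
    static List<String> likelyToFail(Map<String, Integer> failuresPerTest,
                                     int totalChanges,
                                     double threshold) {
        List<String> selected = new ArrayList<>();
        for (var entry : failuresPerTest.entrySet()) {
            double failureRate = (double) entry.getValue() / totalChanges;
            if (failureRate >= threshold) {
                selected.add(entry.getKey());
            }
        }
        Collections.sort(selected); // deterministic order for the report
        return selected;
    }

    public static void main(String[] args) {
        // Invented history for the "login" component over its last 20 changes.
        Map<String, Integer> loginHistory =
                Map.of("LoginTest", 18, "CheckoutTest", 1, "SessionTest", 9);
        System.out.println(likelyToFail(loginHistory, 20, 0.25)); // [LoginTest, SessionTest]
    }
}
```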
@conrad.connected That sounds really interesting, and a good solution for understanding existing features. The team I wrote about was trying to understand the risks on a new feature, for which there was no data.
We’ve lately been trying to solve the problem of what to automate, and we started using a simple spreadsheet that Angie Jones talks about in her talk on the topic (Which Tests Should We Automate - Angie Jones – Sr. Automation Engineer, Twitter - YouTube). This really helped us take a more risk-based approach and identify the things that would have a significant impact on the customer and that we would look to fix immediately. We used it as a whole-team activity as well, not just QAs, so that developers started to understand that not all things are equally important!
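As a rough sketch of that kind of scoring spreadsheet: each candidate test is scored on a few factors and the sum decides automation priority. The factor names and the 1-5 scale here are my own simplification, not Angie Jones’ exact sheet:

```java
public class AutomationScore {

    // Each factor is scored 1 (low) to 5 (high); the sum (max 20) ranks
    // candidates for automation. Factor names are a simplification of the
    // spreadsheet idea, not the original sheet's exact columns.
    static int score(int customerImpact, int usageFrequency,
                     int failureProbability, int easeOfAutomation) {
        return customerImpact + usageFrequency + failureProbability + easeOfAutomation;
    }

    public static void main(String[] args) {
        int login        = score(5, 5, 3, 4); // core flow, heavily used
        int legacyReport = score(2, 1, 2, 1); // rarely used, hard to automate
        System.out.println("login:         " + login);        // 17 -> automate first
        System.out.println("legacy report: " + legacyReport); // 6  -> maybe skip
    }
}
```

The real value, as noted above, is less the arithmetic and more the whole-team conversation about why one candidate scores higher than another.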
Oh yeah, that “What to Automate Sheet” is excellent. A team I worked with happened to use that and it seemed to start good conversations. Kudos to Dorothy Graham and Angie Jones.
I am definitely late to respond to this thread, but aqua cloud ALM has protection from duplicates, and its performance doesn’t slow as the number of scenarios grows. So if you decide to switch from Cucumber, try to find something with duplicate protection.