We are gradually converting our test cases into test automation. So what kind of mechanism will help us measure the effectiveness of test automation?
If by “measure” you mean something that involves counting, then you can’t. Test cases aren’t comparable: some are big or small, very important or less important, time-consuming or fast. It doesn’t make sense to count them. Automated checks are the same way. You also can’t compare test cases with automated checks, because they’re not the same thing.
Effective automation (checking with tools) is about effective testing. In order to create automated checks you have to explore to find what’s best to automate. Checks written in automation should serve a test strategy, otherwise there’s no reason to write them. Testing cannot be automated, so this part you have to do yourself. How effective your automation is depends on how effective your testing is: whether you’re using automation to perform useful checks in the service of good testing, or to comfortably ignore problems in the product by obscuring them behind automation and automation reports.
So how to measure effectiveness becomes a very difficult question. Automation that fails to cover the high-risk areas of your product and project is not as effective as it should be, so you could measure coverage, at a high level, against some model of the product or against a risk map of the areas and concerns that worry you most. That doesn’t lend itself easily to hard numbers, but it will tell you if your automation is a waste of your time. You could also look at your passing and failing checks and determine whether they should be passing or failing: those that are swapped (red when it should be green, green when it should be red) show that the suite is checking your product incorrectly.
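To make the risk-map idea concrete, here is a minimal sketch (the risk areas, risk levels and coverage data are all made up) of putting a risk map next to the areas your suite actually touches. It won’t give you a precise number, but it will surface high-risk areas with no checks at all:

```python
# Minimal illustration with made-up risk areas: put a simple risk map next to
# the areas your automated checks actually touch, and flag the gaps.
risk_map = {                      # area -> perceived risk level (hypothetical)
    "payments": "high",
    "login": "high",
    "reporting": "medium",
    "profile settings": "low",
}
areas_with_checks = {"login", "reporting", "profile settings"}  # e.g. from suite tags

uncovered_high_risk = [
    area for area, risk in risk_map.items()
    if risk == "high" and area not in areas_with_checks
]
print("High-risk areas with no automated checks:", uncovered_high_risk)  # ['payments']
```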
So you could look at:
- Coverage (of some kind) - Is it checking the right things? Things we already checked in unit checks, or a subset of other testing?
- State and Data - is the suite setting the correct states and test data, allowing the checks to return accurate results?
- Mocking, stubbing and other lies - Is any partial check, whose setup and code exercise only a selected subset of the real-world product, selected in a way that keeps the result of the check accurate?
- Risk - is it checking the right kind of things? Do we adapt it as risks change?
- Runtime - is it taking too long?
- Maintainability - is it easy to maintain and update?
- Reported successes and failures - is it finding things to investigate? Are there false positives or false negatives, and how often?
- Check contents - Are the checks able to fail at all, and are they likely to? Do the names describe anything close to what they actually do?
- Repeatability - is the suite checking things that we want to invest time in repeatedly checking? Do they lend themselves to repetition, and are they reliably repeatable in ways that matter (no test is truly repeatable)?
- Reporting - are testers reliably interpreting, investigating and reporting the results of the automation?
These are just a few examples. I’d say only measure what you’re looking to change: if you want to make the suite quicker, then runtimes of the whole suite and of its parts are helpful to show which areas are taking a long time, and by how much you’ve reduced the runtime. That doesn’t show you anything about coverage or accuracy or risk, but it will help you if runtime is an issue for you.
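For example, if runtime is the thing you want to change, a minimal sketch along these lines (assuming a pytest-style suite; the part names and directories are made up) would show where the time actually goes:

```python
# Minimal sketch: time the whole suite and its parts to see where the time goes.
# Assumes a pytest-style suite; the directory names below are placeholders.
import subprocess
import time

SUITE_PARTS = {
    "checkout": "tests/checkout",
    "search": "tests/search",
    "admin": "tests/admin",
}

def timed_run(path: str) -> float:
    """Run one part of the suite and return its wall-clock runtime in seconds."""
    start = time.perf_counter()
    subprocess.run(["pytest", path], check=False)  # a red run is data, not a crash
    return time.perf_counter() - start

timings = {name: timed_run(path) for name, path in SUITE_PARTS.items()}
total = sum(timings.values())
for name, seconds in sorted(timings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:10s} {seconds:8.1f}s  ({seconds / total:.0%} of total)")
```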
The problem, really, is that in testing the reality is always more interesting and nuanced than the number.
I’m going to start with the simple - and probably way too obvious - answer: how many of your test cases that should be automated have been automated, expressed as a fraction or a percentage, whichever you prefer.
However, that will not give you a measure of the effectiveness of your test automation effort, merely the completeness of your conversion project.
The question of which tests should be automated has no simple, easy answer. Sometimes it’s not a case of “should this test be automated” but “where should this test be automated” - some tests are better pushed to the API level or lower, while others must stay at the UI level (and all of this falls apart if you’re testing a legacy product that can’t be unit tested and can’t be restructured into something more modern).
You also need to take into account the nature of your software. How fault tolerant is it? Are there parts of the system where minor bugs can be safely worked around? Are there areas that absolutely must function correctly at all times?
Then there’s the context - you will want to be far less risk-tolerant with software that runs airplanes or medical devices than you will with casual games. What does a major bug look like in your industry? Is it likely your automation will find one? Can you identify the 20% of your software that gets 80% of the use? Can you identify a critical path through your software that can’t have any errors or bugs, ever?
All these questions and things to consider help you to focus your automation efforts to where they will do the most good, although this is not necessarily a measure of effectiveness.
If you are using your automation to perform regression checks, you will want to focus most heavily on the critical paths and the most-used 20% of the software first, then expand. The effectiveness is still not measurable, because the effectiveness of automated regression lies in the assurance it provides that nothing you have tested is broken (which is why focusing regression on critical paths and the most-used modules is important). You can’t provide that assurance for the entire product, because in modern software there are effectively infinite paths through the system. Does the user flip between two modules 3 times? 50 times? 500 times? Is it worth testing that constant flipping between modules doesn’t break anything? It might be - but there’s more value in testing that navigation between the modules passes the correct information and that each module functions correctly.
Sometimes the hardest thing for testers to accept is that there often isn’t a “correct” answer. Only answers that are more or less helpful depending on circumstance.
@kayu In the end, the effectiveness of test automation comes down to the value of testing: how much money did you save by catching more bugs, compared to how much money you spent building the tests?
You can work with your CFO’s office to determine how much revenue was lost, or not gained, last year due to issues found in production. Divide this by the number of issues that leaked to production and you get the price of a bug.
We built a calculator some time ago to make this a bit simpler: ROI of Nocode Test Automation Calculator - testRigor AI-Based Automated Testing Tool
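To make the arithmetic concrete, here is the same calculation as a small sketch with entirely made-up numbers; the real figures have to come from your own production data and finance team:

```python
# Worked example with hypothetical numbers for the "price of a bug" idea above.
revenue_lost = 500_000           # $ lost or not gained last year due to production issues
issues_leaked = 40               # issues that reached production in the same period
price_of_a_bug = revenue_lost / issues_leaked          # $12,500 per bug in this example

bugs_caught_before_release = 25  # bugs the automated checks helped catch in time
automation_cost = 150_000        # building and maintaining the suite
net_savings = bugs_caught_before_release * price_of_a_bug - automation_cost
print(f"Price of a bug:      ${price_of_a_bug:,.0f}")   # $12,500
print(f"Estimated net value: ${net_savings:,.0f}")      # $162,500
```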
Thank you for the brief. So, according to this, we don’t have a direct, measurable mechanism for checking automation effectiveness.
Noted and understood: it’s difficult to measure test automation effectiveness.
I’m certain this has been mentioned before, but as a manual tester I would rate effectiveness by the time no longer spent on repeating test cases (mostly during regression tests). Surely not the only parameter to check, but an important one.
Evaluate the extent of test coverage achieved through automation. Measure the percentage of test cases that have been automated compared to the overall test suite. This helps ensure comprehensive coverage of the different functionalities.
It is important to note that providing an exact cost without specific project details is difficult. The best approach is to consult with a mobile app development company like MLSDev. They can evaluate your unique requirements, discuss the scope of the project, and provide a personalized cost estimate based on their expertise and experience in mobile app development.
This was based on experience or expertise, not on any formula.
I seem to see automation somewhat differently from some of the others who have replied. Some suggest that the value of automation is the value of the testing. I feel that is only partially correct, and it does not offer clear guidance on how to apply automation.
Automation in testing is like automation in anything else. Take automation in a hospital, for example. The hospital’s goal is not software that helps patients; the hospital wants to help patients, and would like software to make the best contribution it can to that goal. Those are not the same thing. So the hospital should create software that gives the biggest bang (value) for its buck (limited resources). That can be anything: it can be taking care of parts of the financial process that would otherwise take time away from care professionals, for example. For the hospital to decide how to use its resources, it will need an explicit, business-oriented goal.
If all you want is to measure something that will make management happy, you will always find something suitable. But if you want meaningful measurements, you need to aim them at the business-oriented goal for the automation. That is not the number of automated checks, or the percentage of regression automated. Ask the business what those numbers mean to them, or imagine how they would respond if you told them you had achieved the goal. If their eyes glaze over, you know you are talking IT to business folks and you need something else.
I know of only five business-oriented goals for automation in testing.
In my opinion, measuring test automation effectiveness requires a thoughtful and systematic approach. Here are some key factors of test automation that I believe should be considered:
1. Test Coverage: A significant aspect of measuring test automation is to evaluate how much of the application’s functionality is covered by automated tests. This can be measured by analyzing the test suite and identifying the percentage of test cases automated compared to the total number of test cases (a rough calculation sketch follows this list).
2. Test Execution Time: Another crucial aspect is to measure the time it takes to execute the automated test suite compared to manual testing efforts. By comparing the execution time, we can determine the efficiency gains achieved through automation.
3. Test Failure Analysis: It’s important to assess the frequency and nature of test failures. Analyzing the failures can provide insights into potential issues with the application or the test scripts themselves. Tracking the failure rate and identifying common patterns helps improve the quality of automated tests.
4. Defect Detection: Measuring the effectiveness of test automation involves evaluating its ability to detect defects. Comparing the number of defects identified through automated testing versus manual testing helps gauge the efficiency and accuracy of automated tests.
5. Maintenance Effort: Automation frameworks require regular updates and maintenance. Measuring the effort invested in maintaining the automation suite, including script maintenance, framework enhancements, and test data management, helps determine the overall effectiveness of automation.
6. Return on Investment (ROI): Calculating the ROI of test automation involves analyzing the cost savings achieved through reduced manual testing efforts, increased test coverage, and improved defect detection. By comparing the upfront investment in automation tools, frameworks, and resources against the benefits gained, we can assess the overall value.
7. Feedback from Stakeholders: Gathering feedback from stakeholders such as developers, testers, and project managers is essential. Their perspectives on the effectiveness of automation can provide valuable insights and help identify areas for improvement.
With the help of such factors and by establishing appropriate metrics, we can effectively measure and evaluate the impact of test automation in terms of coverage, efficiency, defect detection, maintenance effort, ROI, and stakeholder satisfaction.
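As a rough sketch of how points 1 and 2 above could be turned into numbers, here is a minimal example; every figure in it is a hypothetical placeholder, not a benchmark:

```python
# Hypothetical figures for automated coverage (point 1) and execution time (point 2).
total_test_cases = 480
automated_test_cases = 180
coverage_pct = automated_test_cases / total_test_cases * 100       # 37.5% in this example

manual_hours_per_cycle = 40.0      # estimated manual effort for one regression cycle
automated_hours_per_cycle = 6.0    # run time plus triage effort for the automated suite
effort_reduction_pct = (1 - automated_hours_per_cycle / manual_hours_per_cycle) * 100

print(f"Automated coverage: {coverage_pct:.1f}% of the known test cases")
print(f"Regression effort reduced by about {effort_reduction_pct:.0f}% per cycle")
```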