Complexity estimation, effort measurement

Hello everybody.
I would like to ask for your thoughts on measurements, estimation and test coverage.
I’m working as a test strategy manager in an industrial company which develops its own software for its products.
Testing is done manually (TestLink style), but also in an agile style with automated processes, integration into CI, etc.

The point is that many of our customers use many different configuration variations in their production systems.
E.g. they have 10+ variations with different OS, LCD screens, printers, payment terminals, etc.

Due to this fact, we have too many variations to cover with test scenarios. With 15+ customers, it grows to 100+ variations of test sets. We are looking for ways to reduce the number of tests, or to change the way we estimate the test effort.
Simply put, we are spending too much time on testing.

Currently, we are estimating testability with our own metrics:

  • (1-5) frequency of changes
  • (1-5) number of errors
  • (1-5) manual test effort
  • (1-5) reusability in other projects
  • (1-5) automation effort

Based on these we estimate a priority for every test scenario (a small sketch of one way the ratings could be combined is below).
So we have e.g. priority 5 (easy) for the printer and priority 1 (hard) for the payment terminal.
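For illustration, one way to combine the five ratings into a single priority score could look like the following; the equal weighting and the example values are simplified placeholders, not our exact formula:

```python
# Minimal sketch: combine the five 1-5 ratings into one priority score.
# Equal weights are an illustration only; adjust them to whatever the team agrees on.
def priority(change_frequency, error_count, manual_effort, reuse, automation_effort,
             weights=(1, 1, 1, 1, 1)):
    ratings = (change_frequency, error_count, manual_effort, reuse, automation_effort)
    return round(sum(w * r for w, r in zip(weights, ratings)) / sum(weights))

print(priority(5, 4, 5, 4, 5))  # e.g. printer scenario  -> 5 (easy)
print(priority(1, 2, 1, 2, 1))  # e.g. payment terminal  -> 1 (hard)
```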

Q1: Any ideas about complexity estimation?

Q2: Is there a reasonable way to manage such a big range of configurations and thereby reduce the number of tests?

Q3: Anybody with a similar problem?

I really appreciate any input, ideas or comments from you guys.

BR

Michal


Hi Michal,

I suggest using combinatorial test design to generate the test configurations. You select test values for each test factor: OS, LCD screens, printers, payment terminals, etc. Then the tool generates a set of test configurations which covers all the interactions: OS choices with screens, printers with terminals, etc. The number of configurations (test scenarios) is minimized.
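To make it concrete, here is a minimal sketch using the open-source allpairspy Python package (just one of many possible generators; the factor values are invented placeholders, not your real variations):

```python
# Sketch: generate pairwise configurations and compare the count with the
# exhaustive Cartesian product. Requires: pip install allpairspy
from allpairspy import AllPairs

factors = [
    ["Win10", "Win11", "Linux"],              # OS (placeholder values)
    ["LCD-10in", "LCD-15in", "LCD-21in"],     # Screen
    ["PrinterA", "PrinterB"],                 # Printer
    ["TerminalX", "TerminalY", "TerminalZ"],  # Payment terminal
]

exhaustive = 1
for values in factors:
    exhaustive *= len(values)  # 3 * 3 * 2 * 3 = 54 full combinations

pairwise = list(AllPairs(factors))

print("Exhaustive configurations:", exhaustive)
print("Pairwise configurations:  ", len(pairwise))
for i, config in enumerate(pairwise, 1):
    print(i, config)
```

Every pair of values from two different factors appears in at least one of the generated configurations, which is why the generated list stays much smaller than the full product.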

Q1: Any ideas about complexity estimation?

Try an example (a small one to start). The number of configuration scenarios generated indicates the complexity. Some tools also compute the number of combinations to be covered.

Q2: Is there a reasonable way to manage such a big range of configurations and thereby reduce the number of tests?

You can refine your example design with what-if changes, so you end up with a workable plan.

Q3: Anybody with a similar problem?

I have experience with these problems and can advise. There are several examples on my company’s service, Testcover.com. If you have an example to share, I’d be happy to look at it. You can reach me directly at sherwood@testcover.com.


Hi George. Thank you for the interesting ideas. I will keep them in mind during my research.

Due to this fact, we have too many variations to cover with test scenarios. With 15+ customers, it grows to 100+ variations of test sets.

Actually, you have an infinite number of tests. A test done one second later is a different test, although it might be the same in all the ways that matter to your testing. The point is that you need to have a good think about why you’re doing testing and what matters to you. All testing is sampling. Good testing is good sampling. So look at how you’re doing your sampling. That will help you control your testing based on the time and money you have available (the logistics of your test strategy).

Q1: Any ideas about complexity estimation?

Projects tend to change and warp depending on what happens during the project. You are trying to measure testability, and I have to recommend this to you: Heuristics of Software Testability. This will give you lots to think about in terms of how to deal with the difference between what you know and what you need to know (the epistemic risk gap).

If you want to look at testability risks in a structured way, you can find a set of approaches here: Heuristic Risk-Based Testing.

Q2: Is there a reasonable way to manage such a big range of configurations and thereby reduce the number of tests?

What would you consider “reasonable”? The easiest way to reduce your testing is to do no testing. I’m guessing you’d consider that unreasonable because you’d be lacking the information you need. You could change nothing, but I’m guessing you’d consider that unreasonable because your testing is too expensive.

So you could reduce the problem by reducing the information you need. This is improving the epistemic testability of the product by stating that you do not need to know certain things. Make a cost-benefit decision about dropping tests.

You could reduce the problem by improving the way you test. If you have sets of test cases, for example, you could change to a list of the things you need to know about and allow your testers to explore those things freely. They are sometimes called scenarios or test charters, if you’re searching for them. This has many advantages in that you’re more likely to find problems because your test tasks are less prescriptive, testers are more engaged with the system and learn and explore better, and testing tends to be done faster because the paperwork is much lighter. If you need that paperwork then I’d point you towards session-based test management.

You could test based on better information: knowing the nature of the bugs that tend to escape to production, which parts of the product are business-critical, what customers actually want from the product, which parts of the product are more complex, where your interfaces are, etc. This will lead you to perform better sampling of your product.

Fixing bugs helps testing go faster. Finding bugs is time-expensive for a tester, because the time spent investigating and reporting them could otherwise be spent testing. It also affects testability, because bugs hide other bugs and create unknowns. Because finding bugs is time-expensive, moving testing up front is a great way to improve this problem, and it has a massive impact in other ways by shortening feedback loops. Finding and fixing a problem during programming in a pairing session is MUCH cheaper than finding it during a testing session later on.

Look at issues in your test project. A bug is anything that threatens the value of the product. An issue is anything that threatens the value of the testing (or the project). “Cancel does not close dialog box” is a bug, while “I can’t access the test environment” is an issue. A tester needing information or resources is an issue, disruptions to the pipeline are an issue, poor availability of test builds is an issue. Ensure testers feel that they can (and do) improve these problems, including making requests of others to do so.

When you’ve done all of that, there are ways to reduce time faster than you reduce coverage, using combinatorics. All combinations of OS, LCD, printer and so on are too many, so you create tests that pair each value in one list with each value in all the other lists at least once. This is called “all-pairs” or “pairwise” testing, and you’ll need a tool for it. There’s a free one called allpairs.
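To give a feel for the saving: if, say, you had 4 operating systems, 5 screens, 6 printers and 3 payment terminals, exhaustive testing means 4 × 5 × 6 × 3 = 360 configurations, while an all-pairs set needs at least 6 × 5 = 30 (the product of the two largest lists) and in practice usually not many more. Those numbers are invented for illustration, not taken from your setup.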

I’m trying to sum up here how I manage testing and develop test strategies, which has been a lifetime of learning for me, but hopefully it’s of some help anyway.


Hi Michal,

When I read your description, it seemed likely that there would be constraints among the configuration factors. For example, an operating system would be used with some screens or terminals, but every OS might not be compatible with all the other variations. If the customers are choosing their own configurations, we might expect that not all of the possible variations are valid for testing, or for production work.

Here is an example to clarify the point. Suppose we need configurations to test 3 applications as follows.

  • OS: Windows, Linux
  • Browser: Chrome, Firefox, Internet_Explorer
  • Application: App1, App2, App3

There are 3 browsers and 3 applications, so we need at least 3x3 = 9 configurations to cover all browser-application pairs once. When we generate the 9 configurations, all the OS-browser pairs will be covered also. One of them will be the Linux-Internet_Explorer pair, and it will be configured with App1, App2 or App3. Let’s suppose the configuration is Linux-Internet_Explorer-App1.

This configuration is invalid: Internet_Explorer cannot be run with Linux. But if we skip this case, there is a valid pair that will not be covered: Internet_Explorer with App1. And Linux with App1 might not be covered either. It is possible to fix this simple test design manually, but generally, having a test case generator that conforms to constraints is much more efficient.
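As a sketch, here is how that example could be expressed with the open-source allpairspy package, using its filter_func hook so the invalid Linux/Internet_Explorer pair is never generated (allpairspy is just one option; any generator with constraint support will do):

```python
# Sketch: pairwise configurations under a compatibility constraint.
# Requires: pip install allpairspy
from allpairspy import AllPairs

parameters = [
    ["Windows", "Linux"],                        # OS
    ["Chrome", "Firefox", "Internet_Explorer"],  # Browser
    ["App1", "App2", "App3"],                    # Application
]

def is_valid(row):
    # row is a partial configuration in parameter order: [OS, Browser, Application].
    # Reject any configuration that pairs Internet_Explorer with Linux.
    if len(row) > 1 and row[0] == "Linux" and row[1] == "Internet_Explorer":
        return False
    return True

for i, config in enumerate(AllPairs(parameters, filter_func=is_valid), 1):
    print(i, config)
```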

Jacek Czerwonka maintains a list of almost 50 test case generators at pairwise.org. Many of these are free; some have support for constraints. As before, if you choose to try a free Testcover subscription, I will be glad to answer questions and help as needed.

George


Thank you very much, Chris, for the great explanation and overview.
The resources from James Bach are very helpful and I will study them well.
You know, I find it very useful to see how other people view this and what they have to say about it.

The way I see it is to re-evaluate the current state of our test scenarios,
and perhaps reduce or filter them down to the tests that are most important or most valuable for each customer.
I will study risk-based testing techniques in more depth.

Also, there are rumors that the marketing department will try to gently push our customers to unify or upgrade their hardware,
so the number of variations will decrease slightly.

BR Michal

Hi George, good point about valid customer configurations. We have already reduced the number of tests to a minimum,
but maybe there is still room to reduce it further.
Hopefully we will manage to gently persuade some customers to unify some of their configurations, and together with
risk-based testing techniques we might be able to reduce the number of variations.
Thanks for the hints.

BR Michal