I want to show the degree of coverage of equivalence partitions at the unit-test level. The tests are implemented in C++ with googletest, but hints on how to measure this coverage with any xUnit framework would probably help as well.
Background of the question:
I am obliged by the customer to use equivalence class testing (ECT) as a test design technique and to show the equivalence class (EC) coverage. Right now, the coverage is checked during a formal review of the test specification, but I’m looking to automate it based on the test implementation.
Welcome to the Ministry of Test community @antji .
Been mulling over your question for a few days. I can see some “regulation” or protocol perhaps having an impact here, and I assume part of the job is to understand why the customer wants white-box testing at all unless you are delivering source code to them. At that point, the obvious problem of relying too heavily on unit-test metrics creates a friction point and may reduce willingness to do any refactoring if those metrics drop at all. But I’m guessing you really want to run with the requirement the customer has given you, which is good, so long as it’s not your only goal.
I’m also assuming you have a code coverage tool and a good idea of what code coverage you have, because without being able to run at least some of your unit tests with coverage instrumentation enabled, you may still be flying in the dark. I’ve not used googletest and haven’t been intimately involved in analyzing code-coverage gaps myself, but my experience has been that helping to set up and run coverage in CI (Continuous Integration) is just as useful as driving up the number of unit tests running in CI. Both of these approaches will leave gaps, as you say, and neither directly points to where those coverage gaps are. I hope someone can chime in with tactics that might point to those gaps more confidently.
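For the coverage-in-CI side mentioned above, a typical GCC/googletest setup looks roughly like the fragment below. This is a sketch for a CI step; the file and target names are assumptions, and your build system will likely wrap these calls:

```shell
# Build the unit tests with coverage instrumentation
# (--coverage enables gcov counters in GCC/Clang).
# File names here are illustrative.
g++ --coverage -O0 -g my_module.cpp my_module_test.cpp \
    -lgtest -lgtest_main -pthread -o unit_tests

# Run the tests; this writes .gcda counter files next to the objects.
./unit_tests

# Collect the counters and render an HTML report.
lcov --capture --directory . --output-file coverage.info
genhtml coverage.info --output-directory coverage_html
```

The HTML report then shows line-by-line which code the unit tests never reached, which is a useful complement to (not a substitute for) the equivalence-class view.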
Thanks a lot, @conrad.braam. I have been thinking about your answer for quite some time.
Indeed, I’m familiar with code coverage. Personally, I agree with you that a code metric is more suitable for code than any test-case coverage metric, so based on your reply I reopened the discussion about this metric and actually managed to get it removed. The metric does not make sense; yes, I was only trying to fulfill the customer’s wish, although I knew it was insufficient and doesn’t really tell you anything.
So, thank you for unknowingly putting my head in the right place again.
I know it’s always hard to have good software-quality discussions with a customer, Antje; it’s hard enough to have useful tech-stack and requirements conversations with them.
I am hoping you gave them something in return for that win. Doing all the hard analysis work to get high coverage of just one kind of defect can feel like trying to drain the swamp just to make sure nothing got left unchecked in one specific area, and the point that being too thorough in one task means missing other, bigger risks is not hard to sell to a customer. At least you have built the customer’s trust in your skills, and they know you care.
A lot can be said for doing very deep dives into one area, we do learn a lot as testers. Personally I love the challenge of creating the process and tools to do these exercises. But a big reason I prefer not to is that when you raise 100 new defects from a very intensive exercise, it merely adds to the backlog of 1000 bugs we already have but don’t have time to address. I’m always trying to open a door, to have those stakeholder conversations about how they view the ecosystem the product operates in, the technology stack choices and even the design. Really glad the customer takes you seriously now Antje.