How do you get your engineers to add comprehensive test coverage and stick to patterns?

I’m fairly new to QA on my team and am finding it hard to get the engineers to stick to guidelines, which ends up with lots of back and forth when they raise PRs. How are people approaching this to avoid slack tennis? Does anyone else find this challenging?


Welcome to the community where we want to celebrate the best thing since sliced bread every day. Ha Ha.

@savagetoast I am assuming you mean getting your automation test engineers to stick to guidelines, or do you mean product coders sticking to test-writing standards? Either way, it’s challenging. Code coverage is almost never what it’s cracked up to be: whenever we measure a thing to death, we kill it. One only has to look at agriculture, where maximizing crop yield in the short term with simplified monoculture processes stresses not only the soil but actively encourages pests, which have now found an abundant supply of a thing they love that comes back twice a year. Heaven. Good measurements need to account for the health of your system, and bug-tennis/slack-tennis/PR-tennis is certainly not healthy.

My current secret (and this may change) is to work from the inside outwards. We embed the test engineers into dev teams, so testers go to all meetings, but testers also form cross-functional teams that bridge teams. And we try to have a culture that welcomes experiments and rewards discovery. All hard things to do.

Testers write/code all the non-unit tests, but we also get the devs to write system tests when we get overloaded, as a self-regulating and knowledge-transfer technique. So no, code coverage is sexy, but it’s never been a goal. Patterns, well, that’s harder. I find that JBGE (Just Barely Good Enough) often wins, and I’m OK with ugly code, for a short time. I have had a few code reviews where reaching agreement on how to make a test nice and pretty takes too long. I believe in emergent design, where people hopefully get tired of the eyesore and get in there and fix it themselves on a Friday afternoon.


Hey @savagetoast, you can use a static code analysis tool like SonarQube, Checkstyle, or FindBugs/SpotBugs to check common coding rules. With a code coverage tool like JaCoCo you can check how much code is covered by unit tests.

You can define quality goals in SonarQube and only accept merge requests that are compliant with these goals.
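As an illustration, the quality gate can also be enforced from the CI pipeline via the scanner configuration. A minimal sketch of a `sonar-project.properties`, assuming a standard Maven layout and JaCoCo’s XML report (the project key is made up):

```properties
# sonar-project.properties -- minimal sketch, adjust paths to your build
sonar.projectKey=my-service
sonar.sources=src/main/java
sonar.tests=src/test/java

# point SonarQube at the JaCoCo coverage report
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml

# make the scanner wait for the quality gate result and fail the
# pipeline when the gate is not met
sonar.qualitygate.wait=true
```

With `sonar.qualitygate.wait=true`, a merge request that breaks the defined quality goals fails its build job instead of relying on someone noticing it later.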

With a linter like SonarLint, a developer does not need to wait until the build job for the merge request is done. The local development environment can be connected to the same rule set that is defined in SonarQube and give feedback on rule violations while the source code is being written.

A next step could be “mutation testing”. It seeds defects (mutants) into your source code, e.g. by changing a “<” to a “>”. The unit tests should then fail, killing the mutants. If they do not, the unit tests are not sufficient.
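To make the idea concrete, here is a hand-rolled sketch of what a mutation testing tool (such as PIT) does behind the scenes. The class and method names are made up for illustration; a real tool would generate the mutant automatically:

```java
// Mutation testing by hand: the "mutant" is the original boundary
// condition with '>' flipped to '>='. A test suite that only checks
// values far from the boundary cannot tell the two apart.
public class DiscountRules {

    // Original rule: orders strictly above 100 qualify.
    static boolean qualifiesForDiscount(int orderTotal) {
        return orderTotal > 100;
    }

    // The mutant a tool would seed: '>' changed to '>='.
    static boolean qualifiesForDiscountMutant(int orderTotal) {
        return orderTotal >= 100;
    }

    public static void main(String[] args) {
        // A weak test far from the boundary passes for BOTH versions,
        // so it fails to kill the mutant.
        System.out.println(qualifiesForDiscount(200) == qualifiesForDiscountMutant(200)); // true

        // A boundary test distinguishes them: only the mutant returns
        // true for exactly 100, so this test kills the mutant.
        System.out.println(qualifiesForDiscount(100));       // false
        System.out.println(qualifiesForDiscountMutant(100)); // true
    }
}
```

The surviving-mutant report is exactly the signal you want: it points at the boundary cases your unit tests never exercised.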

One can also check architectural guidelines with a tool like “ArchUnit” to ensure that the developers stick to the architecture / framework.

There are a lot of tools available that support automated refactoring, e.g. Error Prone or OpenRewrite. Some IDEs also have options for that (e.g. “Save Actions” in Eclipse).

As a start, you need guidelines that describe how to write good unit tests that can also be understood, to a certain degree, by non-programmers. Using the language of the problem domain or the “business” for method and variable names supports readability. A defined structure like “Given-When-Then” or “Arrange-Act-Assert” also helps.
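A small sketch of what such a structured, business-language test can look like. `ShoppingCart` and its methods are invented for illustration, and plain Java is used instead of JUnit so the example stays self-contained:

```java
import java.util.ArrayList;
import java.util.List;

// A "Given-When-Then" / "Arrange-Act-Assert" structured test, with names
// taken from the problem domain so non-programmers can follow it.
public class ShoppingCartTest {

    // Tiny stand-in for a production class.
    static class ShoppingCart {
        private final List<Integer> itemPricesInCents = new ArrayList<>();

        void addItem(int priceInCents) {
            itemPricesInCents.add(priceInCents);
        }

        int totalInCents() {
            return itemPricesInCents.stream().mapToInt(Integer::intValue).sum();
        }
    }

    static void totalOfTwoItemsIsTheirSum() {
        // Given (Arrange): a cart with two items
        ShoppingCart cart = new ShoppingCart();
        cart.addItem(250);
        cart.addItem(150);

        // When (Act): the total is calculated
        int total = cart.totalInCents();

        // Then (Assert): it equals the sum of the item prices
        if (total != 400) {
            throw new AssertionError("expected 400 but was " + total);
        }
    }

    public static void main(String[] args) {
        totalOfTwoItemsIsTheirSum();
        System.out.println("test passed");
    }
}
```

The three comment markers give reviewers (and the “three amigos” mentioned below) a shared vocabulary when discussing what a test actually covers.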

It is important that the developers accept these tools, the coding rules, and the quality gates. Additionally, you need some colleagues who are responsible for regularly checking whether the coding rules still make sense or whether specific rules need to be turned off. You also need colleagues who are able to explain the rules to the others.

Without acceptance, maintenance, training and support it won’t work.

Ideally, the developers come to value those agreements and tools as a big help that supports them in their daily work.

All the topics mentioned above will help you get maintainable source code from a technical perspective. They create a “safety net” for the developers when the source code has to be changed in the future.

But there are still a lot of ways to cheat the tools, and there is still a gap between technically clean code and good tests that actually check the business rules.

For example, you can write unit tests that execute the source code but never check or assert the results of that execution. One can also disable the checking of coding rules within the source code so that the tools skip a rule.

For these cases, manual code reviews based on a code review checklist and the experience of a senior developer are helpful. Every merge request needs the approval of a colleague.

In addition, one can search the source code for common code smells, such as unit-test assertions that check whether “1 equals 1”, disabled rules, or other items from the code review checklist. It helps to make these findings transparent at an abstract/anonymized level for everyone and to show options for solving the issues.
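The “1 equals 1” smell in a concrete sketch, with invented names: both tests below “pass”, but only one of them would ever notice a bug in the code under test:

```java
// Two tests that both "pass": the first asserts a tautology and would
// succeed no matter what add() does; the second checks the real result.
public class TautologyExample {

    static int add(int a, int b) {
        return a + b; // if this were broken, smellTest() would never notice
    }

    // Smell: executes the code, then asserts "1 equals 1".
    static boolean smellTest() {
        add(2, 2);
        return 1 == 1; // always true, proves nothing about add()
    }

    // Better: asserts on the value the code under test returns.
    static boolean realTest() {
        return add(2, 2) == 4;
    }

    public static void main(String[] args) {
        System.out.println(smellTest()); // true even if add() were wrong
        System.out.println(realTest());  // true only while add() is correct
    }
}
```

A mutation testing run, or a simple grep for constant-only assertions, surfaces exactly this kind of test, which coverage numbers happily count as “covered”.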

If the source code looks nice from a technical perspective but the tests are still not comprehensible, I would propose switching from “slack tennis” to pair programming. QAs and developers can discuss user stories and test cases. Developers can also explain their (unit) tests to QAs, or maybe also to the Product Owner (see also “Three Amigos”). The three roles can close each other’s knowledge gaps by discussing the business rules, examples, or test design techniques.

It also makes sense to look at the reasons why the tests are not comprehensible. These can include time pressure, lack of skills/knowledge, lack of motivation, a missing quality culture, or bad user stories/missing acceptance criteria.

Issues with bad user stories can also be solved by QA, ideally before implementation starts. With good user stories and good acceptance criteria (see the INVEST mnemonic), it is easier for the developers to write helpful tests.

Do people still use Eclipse?


“lots of back and forth when they raise PRs”

This smells like something that pair and mob programming can help with.
By having continuous code review, it’s easier to focus on each small step
of coding, improving the code as you go.
