How Does Your Code Smell?

I’ve been a software test automation engineer for quite a few years, and in that time I’ve been asked to look at a lot of different automation. Early in my career, I learned about code smells the hard way. Now I’ve been tasked with refactoring existing automation on a couple of different projects, and with mentoring new automators to keep their code smelling fresh and clean.

Some of the things I’ve learned in my career: always raise complete, meaningful exceptions when an automated task fails; “assert(true)” never produces a meaningful failure, because it can never fail at all; and there is always, always a better way than using sleep().
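To make the assert(true) point concrete, here’s a rough sketch of what I mean in Python with pytest. The login_page fixture and its methods are made up purely for illustration:

```python
# Hypothetical pytest test; login_page stands in for an imagined page-object fixture.

def test_valid_login_shows_dashboard(login_page):
    login_page.sign_in("standard_user", "s3cret")

    # Smell: this can never fail, so it tells you nothing about the application.
    # assert True

    # Better: check the behaviour you actually care about, and explain the failure.
    assert login_page.dashboard_is_visible(), (
        "Expected the dashboard after a valid login, "
        f"but the current page title is '{login_page.title()}'"
    )
```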

I’ve also found that it’s best not to rely on your webdriver too heavily for automation. Lean on it only for the target behavior of your web application, and rely on backend services for repeatable tasks like data generation and even authentication.
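As a sketch of what that can look like (the URL, endpoints, and helper names below are invented for the example), data setup and login can go through the backend with something like requests, and only the resulting session cookies get handed to the browser:

```python
import requests

BASE_URL = "https://app.example.test"  # placeholder, not a real application


def api_login(username: str, password: str) -> requests.Session:
    """Authenticate once over HTTP instead of driving the login form in every test."""
    session = requests.Session()
    resp = session.post(f"{BASE_URL}/api/login",
                        json={"username": username, "password": password})
    resp.raise_for_status()
    return session


def create_order(session: requests.Session, sku: str) -> str:
    """Generate test data through the backend rather than clicking through the UI."""
    resp = session.post(f"{BASE_URL}/api/orders", json={"sku": sku, "qty": 1})
    resp.raise_for_status()
    return resp.json()["id"]


def start_browser_logged_in(driver, session: requests.Session) -> None:
    """Copy the API session cookies into the WebDriver so the UI test starts logged in."""
    driver.get(BASE_URL)  # the browser must be on the domain before cookies can be added
    for cookie in session.cookies:
        driver.add_cookie({"name": cookie.name, "value": cookie.value})
    driver.refresh()
```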

What are some lessons that you’ve learned? What bad practices have you found to be common? How did you learn to do better? Please share your experiences below!

4 Likes

For me, the biggest sign of a smell is code that’s not readable. If none of the team can read your code, chances are there’s a problem with it.

When I left a role where I was a heavy automator, there was talk of me writing handover documentation. Once the devs looked at the automation code, between the names I had given to variables and classes and the comments I had added, there was no need to write anything more.

It was littered with sleeps in the beginning, so the whole code base started out smelly, but over time I kinda got it tidied up.
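For anyone doing the same clean-up, the usual replacement for a sleep is an explicit wait. Here’s a rough before/after in Python with Selenium, assuming a driver already exists; the locator is made up:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Smell: wait a fixed five seconds and hope the element is there by then.
# time.sleep(5)
# driver.find_element(By.ID, "search-results").click()

# Better: poll until the condition is met (up to a timeout), and fail loudly otherwise.
wait = WebDriverWait(driver, timeout=10)
results = wait.until(EC.element_to_be_clickable((By.ID, "search-results")))
results.click()
```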

2 Likes

Flaky tests are best fixed or culled.

Flaky tests teach people to ignore red test builds, and that leads to bugs that slip in without anyone noticing until far too late.
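If a flaky test can’t be fixed straight away, it’s usually better to quarantine it explicitly than to leave the build red while it waits. In pytest that could look something like this sketch (the test name and ticket reference are invented):

```python
import pytest


@pytest.mark.skip(reason="Quarantined: intermittent timeout at checkout, see TICKET-123")
def test_checkout_sends_confirmation_email():
    ...
```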

1 Like

One curious thing I’ve realized is what I’d call “people smells” (not literally).
It happens to developers as well, but my own, admittedly limited, world-view finds it more often in test automation:

A lack of rigor in the quality of automation code, driven by cultural/team standards.
This smell usually shows up as sparse, soft comments in code reviews; the creation of code guardians (“the guy who understands the performance suite”, “the gal who knows how to deal with dynamic elements”); and little evaluation of how the automation code is used after being “released” (is it usable by developers in terms of feedback time? Does it really check things, or is it written just to run green?), among other things.

These kinds of attitudes then lead to code smells, for a number of reasons.

1 Like

One thing that has me nervous going forward is complex test code. This is definitely along the lines of readable code, but it can be taken further as well. We have tests from an engineer who wrote very complex tests that were difficult for them to maintain, let alone anyone else. This got me thinking: what would others do, or prioritize, to improve the quality of their test code?

  • Code reviews
  • Coding standards
  • Others?