Good automation habits are built through deliberate practice. I’ve learned that sustainable automation comes from treating tests like production code and understanding their purpose.
I automate only stable, high-value scenarios
I keep tests readable, small, and intention-revealing
I design for fast feedback, not maximum coverage
I maintain tests continuously, not “when they break”
These practices reduce noise, build trust, and make the output of automation more valuable.
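As a sketch of what “readable, small, and intention-revealing” can look like in practice, here is a hypothetical pytest-style check. The `Cart` class is a stand-in for whatever your system under test exposes, not a real API:

```python
# Hypothetical example of a small, intention-revealing test.
# `Cart` is a minimal stand-in for the system under test.

class Cart:
    """Minimal stand-in for the system under test."""
    def __init__(self):
        self._items = []

    def add(self, sku, quantity=1):
        self._items.append((sku, quantity))

    @property
    def total_quantity(self):
        return sum(q for _, q in self._items)


def test_adding_an_item_increases_the_quantity():
    # Arrange: start from a known, empty state.
    cart = Cart()

    # Act: perform exactly one behavior.
    cart.add("SKU-123", quantity=2)

    # Assert: one intention, one check — the name says what is verified.
    assert cart.total_quantity == 2
```

The test name reads like a sentence, the body fits on one screen, and each section (arrange, act, assert) does one thing — which is most of what “intention-revealing” means in practice.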
Curious to know if this resonates with you! What practices have you been following, and which automation tools have helped you?
This resonates a lot. In security automation, I’ve learned that blindly scanning everything just creates more vague, low-signal work. I focus on workflows that surface real risk and are easy to validate. I personally use tools like ZAP and ZeroThreat, which I’ve found helpful when tuned properly. Also, I don’t understand why people rely so much on automation when it’s just a tool, not the complete solution.
The second point concerns the value of individual checks (in this case: how they are written).
The fourth point concerns checks maintaining their value (when they break).
This focus on value is good. But consider the other two:
Whether ‘stable, high-value scenarios’ are indeed valuable depends on your automation goal. That goal is hopelessly underappreciated and, in the forms it is often given, has little or no relation to business value. So this is not as easy as it sounds.
Fast feedback over maximum coverage has the same issue: it can correspond to business value in your context, but it does not have to. Do not mistake fast execution for the goal of automation: it is a means to an end. Now what is that end?
The huge issue with the automation goal is that the goal you set guides your decisions. That is, after all, the whole point of having a goal. So if you have no clear goal, the business value of your automation will be unpredictable. If the goal is not aligned with business objectives, you will have similar issues, perhaps even worse. And ‘automating the regression test’ is, as any business representative will tell you, not exactly directly relatable to business objectives. Neither are most of the other goals I usually hear …
We live in a hyper-everything world… too much AI, too much scanning, and too many tools. To be honest, I feel that balance is key. Of course, you can never be too secure, but instead of scanning everything and adding a lot of automated noise to your security cadence, it is important to make sure your scanning workflows surface actual, verifiable (and fixable) risks. I’ve traditionally used ZAP and recently discovered ZeroThreat; both have helped me cut down on the noise. More than the scans themselves, what I love about these tools is their ability to give me targeted, prioritized lists of risks that I can address to improve my security posture. So yeah, automation alone is just not going to help.
The tips are too generic to be actionable.
I don’t find value in writing them at this level of detail.
For instance, what is “value”? It’s a term that each person will understand differently.
If you write “I automate only functional scenarios / regression scenarios”, that is different from “value”. We are in a QA professional forum; I would expect the content here to be more terminology-driven.
Same for the second bullet point. What does “readable” mean? It’s context-based. I would say: I use design patterns such as an object repository to keep all my SUT locators in one place, which eases code readability.
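The object-repository idea mentioned above can be sketched in a framework-agnostic way. This is a minimal illustration with hypothetical locator names, not tied to any specific tool: tests refer to logical names, and a UI change is fixed in one place.

```python
# Hypothetical sketch of an object repository: every locator for the
# system under test lives in one mapping, so tests never embed raw
# selectors and a UI change touches only this file.

LOCATORS = {
    "login.username": ("id", "username"),
    "login.password": ("id", "password"),
    "login.submit":   ("css", "button[type='submit']"),
}


def locator(name):
    """Look up a locator by logical name; fail loudly on typos."""
    try:
        return LOCATORS[name]
    except KeyError:
        raise KeyError(
            f"Unknown locator {name!r}; add it to the repository"
        ) from None
```

A test then reads `driver.find_element(*locator("login.submit"))` instead of repeating the CSS selector everywhere, which is the readability gain the commenter is pointing at.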
I could go on with further examples, of course, but I think I’ve made my point.
Please take this in the spirit of improvement.
We are Quality professionals; writing clear, terminology-based content is crucial to keep us learning and developing.
One thing I’ve been working on is making my tests as self-contained as possible: eliminating the need to set up test data before running tests, and incorporating API calls that check the status of test accounts and put the data into the proper state within the test itself. The goal is to eliminate any case where one test relies on the outcome of another test’s data changes.
Chaining tests is not the way to go if you have methods to avoid it.
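A minimal sketch of that self-contained approach, with hypothetical names throughout (`FakeAccountApi` stands in for a real API client; the account ID and statuses are invented for illustration):

```python
# Sketch of a self-contained test: instead of depending on data left
# behind by an earlier test, the test uses an API client to put the
# account into the exact state it needs before acting on it.

class FakeAccountApi:
    """Stand-in for a real API client used to arrange test state."""
    def __init__(self):
        self._accounts = {}

    def get_status(self, account_id):
        return self._accounts.get(account_id, "missing")

    def set_status(self, account_id, status):
        self._accounts[account_id] = status


def test_suspended_account_can_be_reactivated():
    api = FakeAccountApi()

    # Arrange: the test creates the state it needs via the API,
    # rather than assuming another test already did so.
    if api.get_status("acct-42") != "suspended":
        api.set_status("acct-42", "suspended")

    # Act: the behavior under test (simulated here through the same API).
    api.set_status("acct-42", "active")

    # Assert: the test verifies its own outcome, end to end.
    assert api.get_status("acct-42") == "active"
```

Because the test arranges, acts, and asserts against its own data, it can run in any order, alone or in parallel — which is exactly what breaks when tests are chained.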