So community, what bad habits have you seen, or have you overcome?
What are the most common discussion points on ways to do things? E.g. spaces or tabs for indentation
Share your best (horror) stories and help those newer to testing learn from you.
What are “bad automation habits”?
Trying to automate everything without a clear understanding of the product and its stability
Underestimating the effort required for maintenance
Automating only the UI, leading to tests that fail repeatedly due to brittle locators and long execution times
Ignoring failures and never checking how effective the automation results actually are
The automation pack grows as the project grows: new tests are added, but the previous ones are not maintained due to time constraints or the rush to add more tests
To overcome these, we:
Ran a session explaining why 100% automation is not possible and the maintenance effort required
Started API automation along with UI
Revisited failed tests after each release and created tasks to fix those failures and maintain the existing tests
Set up regular communication between the manual and automation teams; working closely avoids duplicating effort
I believe everyone early in their career is guilty of that: how much we love seeing a ‘green’ pipeline! Not because we did not want to do it properly, but because we were amateurs. Thanks to incredible communities like this one, everyone can benefit and learn a lot!
AutoTest only the simple things - a good cop-out, just to say you have ‘something’
Not using the “DRY” (Don’t Repeat Yourself) approach
Deleting ‘failing’ tests - to give a better report
Not ensuring single responsibility for methods
So over-complex that only the author understands it (who tests the tests?)
Constantly changing design patterns and automation software (low ROI)
Only running tests when you ‘think’ you need to - get them into the CI/CD pipeline
No plan of attack for reuse and low maintenance (POM, for example)
Relying solely on automation - it is not the ‘golden bullet’
AutoTest just for the sake of it - you need to define a ‘gain’ and a ‘purpose’
I will leave it at that, but I also ask: “Does the test serve any purpose, and if the area under test did fail, would it be an actual problem?” Stick to the critical path and high-risk areas.
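Since the Page Object Model (POM) comes up above as the plan of attack for reuse and low maintenance, here is a minimal sketch of the idea. Everything in it is hypothetical: `FakeDriver` is a stand-in for a real browser driver (e.g. Selenium's WebDriver), and the page class and locators are invented for illustration. The point is the structure: locators and page interactions live in one place, so a UI change means one fix rather than edits across every test.

```python
# Minimal Page Object Model sketch. FakeDriver is a hypothetical
# stand-in for a real browser driver; the structure is the point.

class FakeDriver:
    """Pretends to find elements and records every interaction."""
    def __init__(self):
        self.actions = []

    def find(self, locator):
        self.actions.append(("find", locator))
        return self  # a real driver would return an element object

    def type(self, text):
        self.actions.append(("type", text))

    def click(self):
        self.actions.append(("click",))


class LoginPage:
    """All locators and interactions for the login page live here.

    If the UI changes, only these locators need updating - no test
    body has to change.
    """
    USERNAME = "#username"  # hypothetical CSS locators
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.find(self.USERNAME).type(username)
        self.driver.find(self.PASSWORD).type(password)
        self.driver.find(self.SUBMIT).click()


# A test then reads as intent, not as a pile of locators:
driver = FakeDriver()
LoginPage(driver).log_in("tester", "s3cret")
print(("click",) in driver.actions)  # True
```

The gain is exactly the low-maintenance point made in the list: when a locator breaks, you fix one constant in one class instead of hunting through hundreds of test scripts.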
I was just listening to a discussion between Hilary Weaver @g33klady and Suman Bala @sumanbala during Module 17 of STEC.
Hilary has this to share:
Or the one that gets me every time is when a full-stack software engineer, really good at their job, tries to tackle test automation. And then I look at their code and they’re like, sorry.
Because they think, oh, it’s just code, I can just do it. But it’s also that you have to apply the testing principles. So that’s a myth that I see a lot with software engineers: that without the testing training, they think they can just automate, you know, whatever.
We had one framework we inherited that had 300 tests in it. They were 5 years old and kept alive. They’d found 2 defects in those 5 years on what was a core product. When I looked into the framework, the tests were running nothing like they would in production. But the fact that there were 300 tests gave those outside QA comfort.
So I dropped the framework and got the team to start building new tests with the mantra “I’d rather have 1 valuable manual test than 300 non-valuable automated tests”. That was still difficult to communicate, as some had bought into quantity = quality. In historic conversations like “Are you comfortable it’s been tested?”, the answer “Well, we’ve executed 300 automated tests and they’ve all passed” sounds a lot better to those outside QA than “Well, we’ve executed 10 valuable manual tests and they’ve all passed”… but that was the journey we had to start.