Can good engineering beat AI in test automation?

Hi everyone, I want to start a discussion about good engineering vs AI. We hear a lot about “AI-powered testing” these days, but I am curious how much of AI is actually useful versus hype. Where do you personally see real value: test generation, flaky-test reduction, defect prediction, or something else?

Also, do you think strong algorithmic approaches and good engineering practices are enough on their own, without heavy AI? Why or why not? I would love to hear real experiences and how you see AI shaping QA over the next few years.

2 Likes

I really worry, because this feels a bit like top-down development, which then became bottom-up; then along came unit testing, and TDD was soon the mantra of every developer. If AI can indeed write code in a focused language like COBOL, and has shaken up IBM shares by as much as 13%, then it’s worth taking note.

I doubt good engineering will beat AI on metrics the AI itself decides. For example, we already hear that every 10 lines of code contain at least one defect, so it should be no surprise that security folks are also taking note of the power in AI tooling that is now falling into hackers’ hands. https://cybermagazine.com/news/will-anthropics-claude-code-security-replace-cyber-tools My main use case for AI, however, is simply writing code: getting it to “help” me more, since just writing any code is helpful right now. But my bet is that without using these tools, we will definitely be writing weaker code, and writing it more slowly.

1 Like

As a note, I am very much on the side of “a lot of the AI hype is just hype”.

Right now, the best help I’ve had from AI is doing some scaffolding work as well as scanning code and helping me locate things to fix or update.

I do think Agentic AI is better than the LLMs on their own, and we’ll probably see some things get better, but I would be really cautious about anyone trying to sell you miracles with it.

I’d also be cautious about trying to use it to solve everything. A good example of “maybe don’t hook up an AI to something sensitive” that happened recently:
https://www.businessinsider.com/meta-ai-alignment-director-openclaw-email-deletion-2026-2

1 Like

I am first and foremost a manager of testing (small, medium, and large projects/programs). Over the years I have learned that the fundamental principles of test management have not changed: we exist to help the project find and mitigate risk, through collaboration, and by working with teams of dedicated, interested, talented people. As software systems become exponentially more complex, we are already past the point where resources alone can cope, and automation assistance is very much assumed and needed. The problem is that using the correct tools and techniques at the right time and cost, and for the right reasons, is becoming increasingly challenging. A lot of the time we seem to use automation and tooling because it’s cool and seemingly useful, without keeping an eye on the true goal: to find risk quickly and effectively, at reasonable cost.

AI can speed things up for sure (and almost certainly does a more thorough job if spec’d well), but once you lose sight of what it’s actually doing, and allow it (and yourself!) to wander off course, you end up adding risk unknowingly, which will at some point need to be mitigated downstream at greater cost. Those of us using Claude Code will know this: great things can be done, but you need to ride that horse hard to get a superb result! Way too many times I have found myself ratholing and fixing a problem that didn’t exist!

I guess what I am saying is: never stop managing expectations, and keep your eyes constantly on the end game. Too many folks wander off course at someone else’s expense.

3 Likes

I’d like AI to cover all automation. The way I see it, automation is a mechanical-strength activity, often focused on well-known things with limited ambiguity, and AI seems to be improving at it very quickly.

For now, most people seem to be using single agents rather than a group of test agents collaborating with each other. How good are the tests? What are the false positives and false negatives, and what do they miss? There is some of that risk at this point, but if they combine multiple agents, I think it will drop within a short period of time.

Scale, complexity, context, and business domain are the things experienced engineers flag as still needing a strong human at the helm. That is likely fair as of today; in six months I am not so sure, and in two years I suspect it will be a completely different game for a good percentage of the industry.

The UI layer will be interesting, and maybe costly, but it’s a layer I feel should have very light automation coverage. There are a lot of test engineers at that grind level who can create and modify scripts very well but are not at the architect level; AI can likely already outperform that level, leaving the good-engineering architect level intact for now.

There are going to be biases, jobs-at-risk concerns, and self-preservation, but I am not personally seeing major blockers that time will not overcome.

I am still wary of tools making false promises, though. Coming back to the speed of change, even those tools may manage to reach a stage of being valuable, only to become obsolete a few months later.

The risk remains, though: if we drop the human good-engineering aspect, does the bar drop, and is a lower bar going to be accepted as the norm going forward, in exchange for the other benefits AI can bring?

Even if the capability is there within a year, that does not mean the market will adopt it across the board; the mainstream test market has been notoriously slow to evolve and at times takes steps back.

I remain skeptical by nature, but yes, I do want AI to cover automation, so maybe I also have a level of optimistic bias on this front.

1 Like

I’m kind of a broken record on this subject, but AI is great if used as an assistive technology, not a replacement. I personally use it frequently to find gaps in stories, summarize documents and meeting transcripts, analyse our test coverage, etc. That really adds power.

Where I’m always hesitant is in generating test scripts and automated tests. Now, I’m not saying don’t do it, but the art of quality engineering is to manage and communicate risks that could impact success. Success in software is not measured by how quickly you put tests together and run them. I see too much focus on speed alone, from vendors, from bosses, and so on, to the point of abdicating responsibility and creativity. What we need to do is look at how we can be faster, better, cheaper, and ultimately successful. The answer, as always, is balance.

Using tools like AI as assistive tech to get you the answers, so you can make decisions and take ownership of them, is the way it should be. Well, it’s the way I’m approaching it.

1 Like

I really appreciate this perspective

I completely agree: AI should be assistive, not a replacement for engineering judgment.

In my experience, the real power comes when AI (or smart algorithms) help us:

  • Identify coverage gaps

  • Highlight potential risks

  • Suggest improvements

  • Surface patterns humans might miss

But the ownership of risk, quality decisions, and communication still belongs to the engineer.

That’s actually the direction I’m personally exploring — not “generate everything automatically,” but using assistive intelligence to:

  • Analyze locator stability

  • Detect flaky patterns

  • Validate contracts vs responses

  • Surface potential breakpoints before execution
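To make the “detect flaky patterns” bullet concrete: much of this can be done with plain engineering before any AI enters the picture. Below is a minimal sketch (names and data shape are illustrative, not from any specific tool) that flags a test as flaky when it has both passed and failed on the same commit, i.e., the code didn’t change but the outcome did.

```python
# Hypothetical sketch: flagging flaky tests from run history.
# Each record is (test_name, commit_sha, passed); this shape is an
# assumption for illustration, not a real framework's API.
from collections import defaultdict

def find_flaky_tests(runs):
    """Return tests that both passed and failed on the same commit."""
    outcomes = defaultdict(set)  # (test, commit) -> set of observed results
    for test, commit, passed in runs:
        outcomes[(test, commit)].add(passed)
    # Two distinct outcomes for the same code version -> flaky candidate.
    return sorted({test for (test, _), results in outcomes.items()
                   if len(results) == 2})

runs = [
    ("test_login", "abc123", True),
    ("test_login", "abc123", False),   # same commit, different result: flaky
    ("test_search", "abc123", True),
    ("test_search", "def456", False),  # failed only after a code change: not flagged
]
print(find_flaky_tests(runs))  # ['test_login']
```

A baseline like this gives you ownership of the signal: the AI layer can then prioritise or explain the flagged tests, but the decision rule stays auditable.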

Speed alone isn’t quality.
Balanced acceleration with accountability is.

I’m curious: where do you personally draw the line between assistive and over-automated?

1 Like