How do we keep up as quality and test professionals when code generation happens at lightning speed?

Hi, I was wondering what everyone’s thoughts are on keeping up when AI makes code generation lightning fast?

I was asked this question in an interview very recently, so I thought it might come up for others too - a space where we can share insights would be useful.

My personal thinking is to lean into what we’ve always done - quality planning and continuous improvement in particular. What are your thoughts?

3 Likes

Quality professionals keep up by focusing less on writing code or tests manually and more on risk analysis, validation strategies, and understanding system behavior. Even if AI can generate code quickly, it still needs thoughtful verification, edge-case testing, and real-world validation.
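To make that concrete with a hypothetical example (the function and its edge cases are invented for illustration): even when AI generates a helper in seconds, someone still has to design the checks at the boundaries.

```python
# Hypothetical example: an AI-generated helper that parses a
# discount like "15%" or "0.15" into a fraction. Generation was
# fast, but the edge cases below still needed a human to think of.

def parse_discount(value: str) -> float:
    """Parse a discount such as '15%' or '0.15' into a fraction."""
    text = value.strip()
    if text.endswith("%"):
        return float(text[:-1]) / 100
    return float(text)

# Edge-case checks a reviewer would add: boundaries, whitespace,
# and inputs the happy path never exercised.
assert parse_discount("15%") == 0.15
assert parse_discount("0.15") == 0.15
assert parse_discount(" 100% ") == 1.0  # whitespace + upper boundary
assert parse_discount("0%") == 0.0      # lower boundary
```

The generated code was "correct" for the demo input; the value a human adds is asking what happens at 0, at 100, and with messy input.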

Using smarter tools also helps. For example, I built a Chrome extension called Q-ARK to speed up common testing tasks like locator discovery, JSON validation, and API response analysis so testers can spend more time on quality decisions instead of repetitive work.
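As a rough sketch of the kind of repetitive check such tools can automate (this is illustrative Python, not Q-ARK’s actual code, and the schema is made up):

```python
import json

# Hypothetical schema: the fields and types we expect in a response.
EXPECTED_SCHEMA = {"id": int, "name": str, "active": bool}

def validate_response(body: str, schema: dict) -> list[str]:
    """Return a list of problems found in a JSON response body."""
    try:
        data = json.loads(body)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for field, expected_type in schema.items():
        if field not in data:
            problems.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            problems.append(f"wrong type for {field}: {type(data[field]).__name__}")
    return problems

print(validate_response('{"id": 7, "name": "widget"}', EXPECTED_SCHEMA))
# reports the missing 'active' field
```

Automating checks like this frees the tester to spend time deciding which responses and schemas actually matter.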

2 Likes

I tend to think the speed of code generation just shifts where QA adds value; it does not remove it.

When code becomes cheap to produce, the real bottleneck becomes understanding risk and behavior. Someone still needs to ask the uncomfortable questions about workflows, edge cases, system interactions, and whether the feature actually behaves correctly from a user perspective. AI can generate code quickly, but it does not automatically understand the messy context around a product.

Another thing that becomes more important is test design and coverage thinking. If developers can generate features faster, the test surface grows faster too. That means QA professionals who can quickly identify critical paths, risky integrations, and gaps in coverage become even more valuable.

I also think we’ll spend more time maintaining the testing ecosystem itself. Suites grow quickly when features multiply, and keeping tests organized, visible, and relevant becomes a challenge. Having structured ways to track scenarios, runs, and gaps helps teams keep pace as things scale, whether that’s through automation pipelines or tools like Tuskr that help keep coverage understandable as requirements evolve.

1 Like

In an interview my answer would likely vary a bit depending on the focus of the role. I had a discussion on roles recently where we talked about four different roles that often have a lot of crossover but a slightly different focus: QE leaning more towards build activities, SDET towards automation, QA towards process, and Tester towards discovery and investigation.

I suspect each of those would adjust slightly differently.

I specialise in discovery and investigation, but with AI I am not only looking at maintaining that level but also expanding it to cover more at an optimal pace. Here are a few things that may help. This work has always been risk-optimised, so no change there, even though risk can be a go-to emphasis point for this sort of question.

If an activity suits mechanical strengths, then it has an increased chance of being allocated to machines. Build and automation activities can be good matches.

Full access to code. Dev tools are further ahead than test ones: debugging, understanding what the code does, adding test IDs, quicker access to builds, building small tools to assist testing, and making local code changes to increase testability - for example, having a version translated to English.
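As one example of a small tool to assist testing (a hypothetical sketch using only the Python standard library), here is a quick audit for interactive elements missing a test ID:

```python
from html.parser import HTMLParser

# Hypothetical helper: scan markup for interactive elements that
# lack a data-testid attribute, supporting test-ID addition work.

class TestIdAudit(HTMLParser):
    INTERACTIVE = {"button", "input", "select", "a", "textarea"}

    def __init__(self):
        super().__init__()
        self.missing = []  # tags found without a data-testid attribute

    def handle_starttag(self, tag, attrs):
        if tag in self.INTERACTIVE and "data-testid" not in dict(attrs):
            self.missing.append(tag)

def audit(html: str) -> list[str]:
    parser = TestIdAudit()
    parser.feed(html)
    return parser.missing

sample = '<button data-testid="save">Save</button><input type="text"><a href="/">Home</a>'
print(audit(sample))  # the input and anchor have no data-testid
```

Ten minutes of tooling like this can surface testability gaps across a whole page rather than hunting for them manually.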

Developer automation becoming more standard, often with increased coverage. Increased stability and fewer common basic issues.

Leveraging risk research - for example, exploratory charter suggestions.

Discovery-focused agents pre-scanning dynamic app interaction - this one I’m still working out. Experiments with accessibility risk indicate that a reasonable amount of information, and some bugs, can be found by agents - this could fast-track discovery test sessions as a good input.

Reduce wasteful activities: eliminating test-case focus, avoiding bulky separate test management and planning, being selective about meeting attendance, and not spending time on ticketing when a two-minute conversation with a developer will get better results.

Avoid filler activities in the name of utilisation - this often means multiple projects in parallel with more time on strength areas.

Common-sense stuff - collaboration, early involvement, etc. - but do not make a big deal of these; if it feels forced, then it has too much focus.

Among all this, allocate time for learning - things are changing so fast. Much of the above could be superseded in a few months.

It’s also way too much for an interview without context, and it may have some elements the interviewer disagrees with. So far, though, I am not finding lightning code development speed an issue, so in an interview maybe pick one thing to start the discussion on and see where the conversation goes.

I must admit, if I got that question in an interview I would answer it by asking my own questions. First I’d want to break down the statement “code generation at lightning speed”. Who does the prompting of the coding agents in your organisation? What context do they consider within their prompts for building code solutions? Are things like usability, supportability, testability, and security considered? I would keep going until I understood their confidence in the quality of their code generation.

If the answers to those give cause for concern, then I would challenge: “So the code that you don’t understand is produced at lightning speed?” Then I would explain that our role has evolved to be very different in this AI-generation world. It’s no longer about keeping up with the code building; it’s about influencing responsibilities so that quality is built into the code through the prompting, and so that verifying the right thing is being built is shared across the roles. We’re the best people to coordinate and guide others to do more with quality, and of course we would get in and test too, but our testing would work alongside others rather than being a stage.

1 Like