How are you keeping up with AI-assisted development?

AI coding assistants have become the default standard for developers in the environment where I operate.

Developers are using tools like GHCP, Claude, etc. to speed up feature development.

The pace at which features are shipped has increased significantly.

And so, in some cases, have the bugs…

However, I see a gap between development acceleration and testing work, especially when testers are asked to follow traditional testing methods and maintain documentation, traceability, etc.

I feel that passive QA strategies are unsustainable.

Are you facing this too? What are your thoughts on this?

3 Likes

With AI assistants, you can find easy bugs like UI inconsistencies, automate scenarios faster, design test cases, create test data, and produce test documentation more quickly. Delegate whatever low-impact testing activities you can to AI assistants. Thankfully, management in my current org understands the importance of manual testing, which requires a lot of thinking and analysis, takes time and effort, and can't be rushed :slight_smile:

1 Like

This is a really interesting topic, given companies are expecting everyone to move at a faster pace with the assistance of AI.

With developers delivering quicker than ever before, we can use AI to speed up things such as writing and maintaining automation. But AI can't speed up that crucial exploratory testing, which is a human-driven process.

I have been looking at building out a framework that can automate exploratory testing, although this still won't take away the need for the human element.

Essentially, I have been using AI for mundane tasks, such as creating test data, and that is helping to speed up the manual side.
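Generating test data is exactly the kind of mundane task an AI assistant can scaffold in seconds. As a minimal illustration (the field names, value ranges, and function name here are my own assumptions, not from the post), a seeded generator in plain Python keeps fixtures reproducible between test runs:

```python
import random
import string

def make_test_users(n, seed=42):
    """Generate n randomized but reproducible user records for test fixtures."""
    rng = random.Random(seed)  # fixed seed so every run yields identical data
    users = []
    for i in range(n):
        # Random lowercase username; email derived from it for consistency
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        users.append({
            "id": i + 1,
            "username": name,
            "email": f"{name}@example.com",
            "age": rng.randint(18, 90),
        })
    return users

if __name__ == "__main__":
    for user in make_test_users(3):
        print(user["id"], user["username"], user["email"])
```

Seeding the generator is the important design choice: the data looks varied, but a failing test can be re-run against the exact same fixtures.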

2 Likes

I find it challenging to relate to this part of the context: "asked to follow traditional testing methods and maintain documentation, traceability, etc." I remember testers at a prior company being asked for a lot of documentation. Nobody read it except one manager, who used it to get a false sense of control over the team. We automated it and forgot about it; that was about 15 years ago.

I am wary that a lot of testers seem to be doing activities that lean towards machine strengths. The good part is that AI can now do a lot of these things in minutes; the downside is the question of what those testers will do now. In that case, test work would be even quicker than dev work, so it's not much of a worry if that's the testing being done.

Documentation and traceability can likely be covered well with AI, with reviews. You may lose some learning value, though.

On automation, testers may be using the same tools as developers, so they get the same gains, and maybe even more, since they have the product code as a source of truth. I'd like to hear more on this from dedicated automators.

Vibe automation: for those doing light automation, this so far works to keep up with the pace.

For testing focused on the unknowns and on a discovery/learning model, you can use AI as a buddy, but I'm not sure it's faster, or whether it should be faster. It could give you more risk ideas to investigate, or help you build prototype tools to investigate a risk more deeply. Faster may not be the goal, but you may find more than before. I've always found I've been able to adjust coverage to match developer speed, though.

Question: has anyone really noticed that the pace of features has significantly increased? Developers may be having more coffee breaks than usual while their AI does its thing. I am also not seeing an increase in issues, even though the developers I work with have embraced AI; they have increased their test coverage with it.

If this somehow results in more of the things that lean towards machine strengths being done by machines, freeing the tester up for more human-strength activities, I'd welcome an accelerated development model.

I have six products under test this week as a solo tester, and the accelerated model would allow me to stick to my strongest testing area, which also happens to be the one I enjoy the most. So, so far: yes, I'm keeping up with development.

This may change: embedded AI in products, maybe in all products, is becoming more normal. It will take me a few projects to make a call on how that impacts my testing.

1 Like

I agree. Here's the thing that really gets me at the moment with AI-assisted development. One thing I learnt early on is that quality is achieved and managed through a balance of cost, quality and timescale. The ideal is to get cheaper, better and faster. But it isn't always possible to improve all three, so you usually tackle one and make sure you don't compromise the other two.

One of the things I find most frustrating, and which I was discussing with a developer only yesterday, is this: how many developers are using AI-assisted development to be better? The only things I'm hearing from advocates and vendors at the moment are faster and cheaper. If the lust for focusing these tools on faster and cheaper continues, devs will slowly devolve ownership of their own code and leave quality as someone else's problem.

I think there is a danger that, after everything we've done to bring us closer to product managers and developers, we start becoming segmented again. So it's important for us to keep challenging advocates of AI-assisted development with questions like "How much better is your code than if you wrote it yourself?" and "How have you ensured the code follows your current coding standards?"

2 Likes