Will demand for QA grow with AI?

I read this LinkedIn post about AI and the demand for QA.

“Demand for QA will soon explode. The flood of AI-generated code will require rigorous testing and validation.”

And the author emphasised the following:

  • Increased emphasis on QA
  • Innovation in QA methodologies
  • Shifting QA roles and responsibilities
  • Software development blurring into specification writing
  • Emphasis on soft skills

I’m curious about all this and still trying to wrap my head around how the role of the tester may need to innovate and change with the growth of AI-generated code.

How about you? Where are you at? How do you think the role of the tester/QA person will adapt? Have you seen a change already when working with applications that have been supported with AI-generated code?

4 Likes

From first-hand experience, AI output is very unpredictable.

Large language models like ChatGPT are very, very big, and sometimes that leads to unpredictable behaviors.

For this reason people are creating more and more guardrails around generated content.

TBH - AI is a game-changer for all of us in the tech world, and others too!

Unlike traditional coding, AI is all about learning from data and making predictions. It’s opened doors to cool stuff like image recognition and language understanding, which were unthinkable before. But it’s not all roses - AI can be unpredictable, and that brings a new set of challenges. So yeah, with AI, the ‘impossible’ keeps getting redefined… and that’s unpredictable!

And when there’s unpredictability, there’s QA!

2 Likes

I agree and disagree in equal measure with the points made by this fellow on LinkedIn; I’ve probably been building software for too long.

But at least today’s weather prediction was spot on: it’s raining in normally dry Cambridge this morning. When anyone’s prediction amounts to “things will change”, I get bored; but the points I do side with are:

  • The responsibility of QA shifting, becoming more present in the “go-live” decision process. If only as a way of feeding production defects back more quickly, getting your QA team up to the coalface can only be good for everyone.
  • Emphasis on people skills is, for me, key. Maybe that’s because I’m getting old, but age has only taught me that people write the code together. No 10x developers, coding standards, frameworks, nor clever processes can guarantee victory; cohesive teams do. Especially in today’s isolated remote-working mode.

But I generally see AI as a thing of scale, not as a fresh enemy; rather as growing up and coming of age. QA are going to have to pick up the baton; if they don’t, then someone else will. And I’m talking about using large-dataset systems, learning to work with tonnes of statistics and turning them into product knowledge or insight. Gone are the days when users would report bugs; today they just swap platforms, and finding new ways to detect the “unreported” bugs that cause users to leave may be just one of the “large-dataset” tool areas we need to grasp. For example: use our test-workflow-analysis mindset to help orgs apply analytics to uncover workflow issues in a product (sketched below). I don’t think QA people will suddenly find they have more jobs in the marketplace; I rather believe they will have to work alongside a different set of people in the org.
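To make that example a bit more concrete, here’s a minimal sketch of what “analytics to uncover workflow issues” could look like. The event-log shape, step names, and sample data are all hypothetical:

```typescript
// Sketch: find the workflow step where most users silently drop off.
// The event-log shape and step names here are made up for illustration.
interface WorkflowEvent {
  userId: string;
  step: string;
}

const sampleEvents: WorkflowEvent[] = [
  { userId: "u1", step: "open-editor" },
  { userId: "u1", step: "add-item" },
  { userId: "u2", step: "open-editor" },
  // u2 never reaches "add-item": a silent drop-off, not a bug report
];

function dropOffByStep(events: WorkflowEvent[], steps: string[]): void {
  // Count distinct users reaching each step of the workflow.
  const usersAtStep = new Map<string, Set<string>>(
    steps.map((s) => [s, new Set<string>()])
  );
  for (const e of events) usersAtStep.get(e.step)?.add(e.userId);

  // Report the percentage of users lost between consecutive steps.
  for (let i = 1; i < steps.length; i++) {
    const before = usersAtStep.get(steps[i - 1])!.size;
    const after = usersAtStep.get(steps[i])!.size;
    const lost = before === 0 ? 0 : ((before - after) / before) * 100;
    console.log(`${steps[i - 1]} -> ${steps[i]}: ${lost.toFixed(1)}% drop-off`);
  }
}

dropOffByStep(sampleEvents, ["open-editor", "add-item", "checkout"]);
```

A big drop between two consecutive steps is exactly the kind of “unreported bug” candidate worth investigating.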

/edit My bias comes from how ChatGPT-4 is much like the iPod was to MP3. Nobody outside of tech circles knew what music compression would mean until someone commercialized it. MP3 as a format changed how artists distributed their art, and more recently how they built it. Some of the impacts will lag a lot.

2 Likes

My relevant experience is not with AI, but with testing a complex system we’ve designed and built with lots of internal rules that affect the result. The similarity to AI is that I test it by trying it out on examples and assess the results via high-level acceptance criteria and my own judgement, rather than specific requirements specifying the expected result. However, I also have the option to inspect the inner workings (to an extent) and manually configure certain variables and see how they affect results.

In addition to greater visibility during testing, if we don’t like the results for a specific input, we can identify which rules are causing it and design changes to improve the behaviour. I don’t see this being possible with AI, though as I said I don’t have experience with it.

One of the big problems I’m facing with testing this system, which would apply to AI too, is getting a good overview of behaviour and how it changes. Ideally I want to see results from many different inputs and an analysis/overview of key features, and the ability to compare today’s results to yesterday’s with any differences highlighted. I guess this big picture, as well as the details of the inner workings, are the things that may present difficulties when testing AI.
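For the “compare today’s results to yesterday’s” part, even something crude can help. A minimal sketch, assuming results can be serialised to strings; the `runSystem` stub here stands in for whatever actually produces a result:

```typescript
// Sketch: run many inputs through the system, keep today's results,
// and highlight any differences from yesterday's saved run.
import * as fs from "fs";

// Stand-in for the real system under test (hypothetical).
const runSystem = (input: string): string => input.toUpperCase();

function compareRuns(inputs: string[], baselinePath: string): void {
  // Yesterday's results, if we have them.
  const baseline: Record<string, string> = fs.existsSync(baselinePath)
    ? JSON.parse(fs.readFileSync(baselinePath, "utf8"))
    : {};

  const today: Record<string, string> = {};
  for (const input of inputs) {
    today[input] = runSystem(input);
    if (input in baseline && baseline[input] !== today[input]) {
      console.log(`CHANGED "${input}": was "${baseline[input]}", now "${today[input]}"`);
    }
  }

  // Today's results become tomorrow's baseline.
  fs.writeFileSync(baselinePath, JSON.stringify(today, null, 2));
}

compareRuns(["input one", "input two"], "baseline.json");
```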

1 Like

My fear is that, as was already the case before LLMs, people will underestimate the necessity of testing (and also overestimate automation).
Especially as LLM output sounds quite reasonable most of the time, which is dangerous when the content is just made up.

People should make more use of testing, but will they? :person_shrugging:
Therefore they at least have to acknowledge that LLMs are not a magic wand.

So far I have had no interaction with any AI, nor any necessity to.
With AI the subject of testing changes once more, but the basic principles will stay the same: finding problems which matter to humans.

2 Likes

It is always hard to predict what will happen, but it is still important to imagine what might happen. That’s why I’m enjoying reading this thread.

We can influence what happens with AI in QA if we think creatively about what might happen and what could happen, and use those ideas as we make decisions about where to spend our time now.

As everybody here already knows, we can influence how AI itself develops if we interact with it. Several months ago, I tried to get ChatGPT to write a poem in iambic pentameter and realized it couldn’t. So I tried to teach it. I started with “Write me a sentence with ten syllables in it.” It couldn’t, so I tried some more prompts, like writing words with certain numbers of syllables, etc., and got an idea of how well it could deal with counting syllables (very poorly at the time).
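If I were doing that probing today, I’d be tempted to automate the scoring. A minimal sketch, using a naive vowel-group heuristic and a stubbed-out `askModel` standing in for a real API call (both hypothetical):

```typescript
// Sketch: score an LLM's syllable counting with a crude heuristic.
// countSyllables uses a naive vowel-group count - wrong for plenty of
// words, but enough to tell "very poorly" from "pretty close".
function countSyllables(sentence: string): number {
  const words = sentence.toLowerCase().match(/[a-z]+/g) ?? [];
  return words.reduce((total, word) => {
    const groups = word.match(/[aeiouy]+/g)?.length ?? 1;
    // Rough English rule: a trailing silent "e" usually adds no syllable.
    return total + Math.max(1, word.endsWith("e") ? groups - 1 : groups);
  }, 0);
}

// Hypothetical model call - replace this stub with a real API client.
const askModel = async (prompt: string): Promise<string> =>
  "The quick brown fox jumps over the lazy dog";

async function probe(target: number, attempts: number): Promise<void> {
  let hits = 0;
  for (let i = 0; i < attempts; i++) {
    const reply = await askModel(`Write me a sentence with ${target} syllables in it.`);
    if (countSyllables(reply) === target) hits++;
  }
  console.log(`${hits}/${attempts} replies had ${target} syllables (by heuristic)`);
}

probe(10, 20);
```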

A couple of months later, I asked it to write me a poem in iambic pentameter, and it got pretty close! So it looks like lots of people were teaching it to do similar things.

How does that translate into how QA can use AI to improve quality? I’m not sure yet, but I’m not going to wait passively to find out. I’m going to go out and explore and use my imagination!

1 Like

From what I’ve seen so far, AI is already useful for giving an outline framework for test cases (our Tech Team are already using it for producing various scripts) and can have a pretty good stab at producing some automated tests (I’ve seen a few examples using Cypress), but what is produced is only a first cut and needs a thorough review and usually some tweaking.
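To give a flavour of what I mean, here’s a made-up example (not real AI output) of a first-cut Cypress test, with the kind of tweaks it typically needs marked as review comments:

```typescript
// Made-up example of a "first cut" AI-generated Cypress test,
// with review comments showing the tweaks a tester has to make.
describe("login form", () => {
  it("logs in with valid credentials", () => {
    cy.visit("/login"); // OK, but baseUrl must actually be configured
    cy.get("#username").type("user@example.com"); // tweak: our app uses data-testid, not #username
    cy.get("#password").type("password123"); // tweak: credentials belong in Cypress env vars
    cy.get("button[type=submit]").click();
    cy.contains("Welcome").should("be.visible"); // tweak: assert on the URL or a stable element, not copy text
  });
  // Missing entirely from the first cut: invalid-password case, locked-account case.
});
```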

Obviously AI is improving all the time (presumably the more it’s used for testing, the faster it will improve), but for now it’s only as good as the information it’s provided with (and sometimes not even then). I doubt it’s going to vastly increase the demand for QA; it’s more likely (in the medium term) to change the role of the tester towards providing more precise test case definitions, since at present AI is only as good as the instructions it’s given.

@sebastian_solidwork That’s my fear as well. I think the need for testing will increase, but the demand probably won’t.

1 Like

I’m not sure the premises in “the flood of AI-generated code will require rigorous testing and validation” are well-founded, even if they sound logical. Are LLMs concretely leading to a significantly higher rate of customer-facing digital products being launched? Are LLM tools leading to significantly higher release cadences for existing products? Sounds plausible to me, but is it true? Will it be true? And if a lot of new products are being launched, do they require rigorous testing or are they tentative prototypes to test the waters for a large number of possible business ideas?

Anyway, if we assume we work for a company that’s found a way to produce a “flood of AI-generated code” that meets the need for new features, change requests, and new products, will demand for QA increase? Thinking out loud:

If it’s a developer driving the code generation, I expect the tools that help them be more efficient (in scaffolding code, debugging issues, rubber ducking technical decisions, and so on) to be joined by tools that also make testing more efficient (in identifying issues, generating test scripts, running only relevant tests, and so on). Perhaps that arms race will keep everything in tune.
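The “running only relevant tests” part doesn’t need to be magic, either. A hypothetical first cut (made-up file names, naive string matching rather than a real dependency graph) could be as simple as:

```typescript
// Sketch: select only the tests whose source mentions a changed module.
// File names are hypothetical; real tools use dependency graphs, not grep.
import * as fs from "fs";

function selectRelevantTests(testFiles: string[], changedFiles: string[]): string[] {
  return testFiles.filter((testFile) => {
    const source = fs.readFileSync(testFile, "utf8");
    // Naive relevance check: the test imports a changed module by name.
    return changedFiles.some((changed) => source.includes(changed.replace(/\.ts$/, "")));
  });
}

// e.g. run only the specs touched by a change to cart.ts (hypothetical names):
console.log(selectRelevantTests(["checkout.spec.ts", "profile.spec.ts"], ["cart.ts"]));
```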

If it’s a business person driving a fully LLM-produced product, well, I will echo that “the need for testing will increase, but the demand probably won’t.” If that demand does increase, because the business person realises they need a better result than they’re getting from the LLM, then it won’t just be a need for human testing, but also for human development. Again the demand for testing matches the demand for development.

2 Likes

I’m inclined to agree with Anna (@sles12) that the need for testing might increase but the demand probably won’t.

About the only thing I can see truly increasing the demand for testing is C-suite folks starting to see testing as essential insurance rather than as a cost that gets tacked on at the end so they can say they’ve done due diligence.

It will probably take a fair few more bankrupted companies and dead patients before the idea starts to percolate, too.

And yes, I am rather cynical.

P.S. The Stack Overflow blog post on AI in coding is rather apt here: The hardest part of building software is not coding, it's requirements - Stack Overflow Blog

3 Likes

As QA/testers, when we use AI we have pros and cons, but how we are going to use it is the more challenging thing that needs to be discussed.

For example:

Say you want to create a billion test data records for your test activities, and you need to generate a script for that. Yes, you can go ahead, but is that script the more optimised one to use? If the script you write yourself takes 3 minutes for 1 billion records and the AI-generated script takes 1 minute, then AI is the more effective option for that QA activity.
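To sketch what that kind of optimisation often comes down to (batching writes instead of writing record by record; the record shape and file name here are made up):

```typescript
// Sketch: generate a large volume of test records quickly by batching
// writes. The record shape and file name are hypothetical.
import * as fs from "fs";

function generateRecords(count: number, path: string): void {
  const out = fs.createWriteStream(path);
  const batch: string[] = [];
  for (let i = 0; i < count; i++) {
    batch.push(`user${i},user${i}@example.com,${i % 100}`);
    if (batch.length === 10_000) {
      out.write(batch.join("\n") + "\n"); // one write per 10k records, not per record
      batch.length = 0;
    }
  }
  if (batch.length > 0) out.write(batch.join("\n") + "\n");
  out.end();
}

// Timing your own script against the AI-generated one answers the question:
console.time("generate");
generateRecords(1_000_000, "testdata.csv");
console.timeEnd("generate");
```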

As I mentioned, how we are going to apply AI to QA is the challenging decision.

I’ve not had to test any AI-generated code yet (to my knowledge, at least).

I think the sentiment that testers will be needed to test AI-generated code is right, but to describe it as a ‘flood’ - nah. Not yet. Plus if the code’s getting generated, what are the developers doing? They’ll test it too, right? RIGHT??

It’s an interesting time with AI. We’ve got writers and artists crying plagiarism whenever writing or artwork is generated by AI.

So why are developers not crying out when AI code is being generated?

I’m perplexed.

“Use AI to generate test data”… um, that makes no sense at all to me; surely it’s only going to generate cases that are popular in its training set, and so it likely won’t include any cases that are “unknowns”.

1 Like