AI in Manual Testing

How have you utilized AI in manual testing?

My company has set the expectation that we should increase our productivity by 50% using AI, without giving us any guidance on how to do that. We are also limited to using only approved AI tools such as Cursor or ChatGPT. I have been trying to use AI to create test cases, but I run into difficulties because our software has very specific, customer-driven uses and workflows.


I use AI all the time. It’s first and foremost in automation where I’ve made the largest gains in “productivity” (I spent some years as a Quality Manager trying to figure out wtf that even is). But for manual testing stuff, it’s basically like having a guy at my side I can put questions to.

As for “productivity”, it’s not that smart to measure it individually, and I hope your company doesn’t have such thoughts. It’s about delivering better stuff, with better quality, in a shorter time. And you’re just one piece in that puzzle.

Exactly how AI can assist you in your testing depends on the nature of what you produce, your company’s processes and so on. And on you.

Have you asked ChatGPT, Copilot and other AIs about this?

Hi Brooke,

AI without context won’t be helpful. Think of an AI as another tester on your team: how could they test if they don’t have know-how of the product, its users and its business scenarios? To make the most of it, first feed the AI your context and your business knowledge. Since the tools are approved by your company, as you mentioned, make it clear to the company that AI can’t help without being given data, and that this context is the data you will be providing.

Create a standardised approach for what the expectation actually is here. I won’t judge the outlook of the company or the team you are working with or for. Everyone has to use AI these days just to stay relevant in the market. However, as quality experts, it’s our job to guide them! :slight_smile:

I could say much more, but it comes down to this: create a standardised approach and discuss the expectations.

Good luck!


It could be worth having a discussion about which tools are useful for you.
You could use Cursor to help you automate some of the repetitive tests you currently run manually.
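For example, a repetitive manual check like “log in and confirm the dashboard loads” is the kind of thing Cursor could help you turn into a script. A rough sketch, assuming Playwright for Python; the URL, selectors and credentials are made up:

```python
# Sketch: a repetitive manual check turned into a script (the sort of thing
# Cursor can help you draft). URL, selectors and credentials are placeholders.
from playwright.sync_api import sync_playwright

def test_login_shows_dashboard():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.test/login")
        page.fill("#username", "test.user@example.test")
        page.fill("#password", "not-a-real-password")
        page.click("button[type=submit]")
        # The bit you would otherwise eyeball every release:
        assert page.is_visible("text=Dashboard")
        browser.close()
```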
Or you could use voice mode on ChatGPT, tell it what you’re doing during an exploratory test session, and get it to act as your scribe.
You don’t mention how you got it to create those test cases for you, but I imagine sharing a screenshot, documentation and acceptance criteria with it could yield better results, although I don’t think test case creation is a great use of AI. Brainstorming could be, and using it to put things into the right format for you could be a huge time saver.

I have found that the productivity gains from AI don’t come from expecting it to do my thinking, but from finding ways to offload the boring admin stuff. YMMV.

The main issue is context.

You have to ground the AI with enough context so it doesn’t hallucinate. What I usually do is copy and paste an entire user story + docs + design screenshots and ask the AI to write cases in When/Then format.
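To give a rough idea, here’s a minimal sketch of that kind of prompt, assuming the OpenAI Python client; the file names, model and prompt wording are just placeholders:

```python
# Sketch: generate When/Then test cases from a user story plus supporting docs.
# Assumes the OpenAI Python client with an API key in OPENAI_API_KEY;
# file names, model and prompt wording are illustrative, not a recommendation.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

story = Path("user_story.md").read_text()    # the user story and acceptance criteria
docs = Path("feature_docs.md").read_text()   # product docs to ground the model

prompt = (
    "You are a tester on our team. Using ONLY the context below, write test "
    "cases in When/Then format and flag anything you had to assume.\n\n"
    f"## User story\n{story}\n\n## Docs\n{docs}"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```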

The result does contain some hallucinations, but it also contains several things I might only have thought of a bit late. I then allow myself to brainstorm around each case. I won’t call them test cases, as that would be something different.

That said, it’s not a desirable situation if your company says “You did not use AI to speed up enough”.

We have explored Copilot and Atlassian’s Rovo agent. Speaking as a novice: the better the prompt you write, the better the chance of getting a decent output. I have found that running the same prompt on the same requirements can give you different results, which would probably slow you down!

The approach I’m using now is to write a set of tests first. Then I use an agent with detailed prompts to generate its own set. I go through that list and take from it whatever I may have missed.

The agent is prompted to look for edge cases and data variations, and it will search related material to generate regression tests. It hallucinates a bit, but I’ve learned to scan the results quickly.

You could argue that you are using AI to save the team time by having a better suite of tests that reduces the issues found in UAT or production :smiley:

I would suggest some of the following uses of AI alongside manual testing:

  • If you have a secured AI instance (i.e. your data isn’t being fed back to train the models), then give it plenty of context. There are ways to pre-load this.
  • Ask it for ideas on tools and techniques when you’re thinking “how would I test this?”
  • If you’re testing in a way that’s unfamiliar to you, e.g. mobile, ask it “what are the important considerations when testing a web application on a mobile device?”
  • Get it to generate test data for you, for example where XSS or SQL injection could be a risk (see the sketch after this list).
  • Brainstorm ideas, rather than getting it to actually create test cases.
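For the test data point above, here’s a rough sketch of how AI-suggested risky inputs could be dropped into a parametrised check; the function under test is a stand-in, and the payload list is just the kind of thing you might ask the AI to generate:

```python
# Sketch: AI-suggested "risky" inputs used as test data in a parametrised check.
# render_search_results is a stand-in for whatever your app does with user input;
# the payloads are the kind of data you might ask the AI to generate for you.
import html
import pytest

def render_search_results(query: str) -> str:
    # Stand-in for the real system under test: it should escape anything it echoes back.
    return f"<p>Results for: {html.escape(query)}</p>"

RISKY_INPUTS = [
    "<script>alert(1)</script>",        # reflected XSS probe
    "' OR '1'='1",                      # classic SQL injection probe
    "Robert'); DROP TABLE users;--",    # injection inside a quoted value
    "☃ ünïcødé ﷽",                      # awkward unicode handling
    "A" * 10_000,                       # very long input / length boundary
]

@pytest.mark.parametrize("payload", RISKY_INPUTS)
def test_risky_input_is_not_echoed_raw(payload):
    assert "<script>" not in render_search_results(payload)
```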

(I’m not great at prompting!)

I should add that Wizzo by Epic Test Quest is a really neat tool in the works: you’ll be able to give it context, and it uses that under the hood to build meaningful, context-rich prompts without you needing to.

Finally, I once had a side project where I fed the AI lots of data on when we, as an engineering team, had screwed up and why. I could then give it a story and it would remind me that the last time we worked on that area we forgot about compatibility with something, or that there were a number of bugs with Unicode handling.
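A rough sketch of the idea (not the actual project; the file names and note format are made up) is just to build the prompt from a plain-text log of past mistakes and paste it into whichever approved tool you have:

```python
# Sketch: build a "what did we get wrong here before?" prompt from a plain-text
# log of past team mistakes, ready to paste into an approved AI tool.
# File names and the note format are placeholders.
from pathlib import Path

past_mistakes = Path("past_mistakes.txt").read_text()  # e.g. "2023-04: checkout, forgot unicode names"
story = Path("new_story.md").read_text()

prompt = (
    "Below is a log of mistakes our engineering team has made before, followed "
    "by a new user story. Which past mistakes look relevant to this story, and "
    "what should we check this time?\n\n"
    f"## Past mistakes\n{past_mistakes}\n\n## New story\n{story}"
)
print(prompt)
```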
