Do you have a strategy for AI in your testing work? Is it defined already?

Do you already have an official AI strategy in place at your work?

What does it cover? Governance? Scope? Allowed tools? Something else?

Is AI-assisted testing welcomed in your workplace?

Do you have any thoughts on this topic?


Earlier, we were using Claude Code for automation. Apart from that, we have now started using BrowserStack for test case management, which comes with its own AI feature, so we use BrowserStack's AI tool to generate test cases as well and review them manually. In the backend it integrates the same LLMs we usually use, like ChatGPT 5.1 or Sonnet, so sometimes the results are the same as what we would get from the publicly available LLMs.

As of now, there is no restriction on using public AI tools like ChatGPT, Claude, or Gemini. However, we have to make sure we are not feeding any data into the prompt that could cause issues, so we remove sensitive details or replace them with pseudonyms when using free, publicly available AI tools.
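
To make that concrete, here is a rough sketch of what that pseudonymisation step can look like. The terms, names, and mappings below are made-up examples for illustration, not our real data or process:

```python
# Minimal sketch: swap sensitive terms for pseudonyms before a prompt goes
# to a public LLM, and keep a reverse map to restore them afterwards.
# All terms and pseudonyms here are hypothetical examples.

SENSITIVE_TERMS = {
    "AcmeCorp": "CustomerA",              # hypothetical customer name
    "project-orion": "project-x",         # hypothetical internal project name
    "jane.doe@acmecorp.com": "user@example.com",
}

def sanitise_prompt(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive terms with pseudonyms and return a reverse map."""
    reverse_map = {}
    for real, pseudo in SENSITIVE_TERMS.items():
        if real in prompt:
            prompt = prompt.replace(real, pseudo)
            reverse_map[pseudo] = real
    return prompt, reverse_map

def restore_response(response: str, reverse_map: dict[str, str]) -> str:
    """Swap pseudonyms back to the real names in the model's reply."""
    for pseudo, real in reverse_map.items():
        response = response.replace(pseudo, real)
    return response

if __name__ == "__main__":
    raw = "Write test cases for the AcmeCorp login flow in project-orion."
    clean, mapping = sanitise_prompt(raw)
    print(clean)  # "Write test cases for the CustomerA login flow in project-x."
```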

From a QA point of view, most teams I have seen do not have a formal AI strategy yet, and that can actually be fine in the early stages.

Where AI tends to land well is in assistive roles. It can help generate test ideas, summarize failures across runs, spot patterns in flaky results, or even help draft automation snippets. In practice, people reach for different tools depending on the need. ChatGPT is great for brainstorming or explaining tricky concepts, GitHub Copilot or Codeium help speed up script writing, and some test ecosystems now offer AI that surfaces gaps or summarizes what changed between releases.
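
As a rough illustration of the "spot patterns in flaky results" part, it helps to pre-aggregate run history before you hand anything to an LLM or a teammate. This is just a sketch with made-up test names and run data, not any particular tool's approach:

```python
# Label each test as stable, consistently failing, or flaky based on its
# recent CI run history. The run data below is invented for illustration.

RUN_HISTORY = {
    "test_login":    ["pass", "pass", "fail", "pass", "fail", "pass"],
    "test_checkout": ["fail", "fail", "fail", "fail", "fail", "fail"],
    "test_search":   ["pass", "pass", "pass", "pass", "pass", "pass"],
}

def classify(outcomes: list[str]) -> str:
    """Mixed pass/fail results across runs are the usual smell of flakiness."""
    failures = outcomes.count("fail")
    if failures == 0:
        return "stable"
    if failures == len(outcomes):
        return "consistent failure"
    return f"flaky ({failures}/{len(outcomes)} runs failed)"

for test, outcomes in RUN_HISTORY.items():
    print(f"{test}: {classify(outcomes)}")
```

A summary like that is also a much better prompt for an LLM than pasting raw logs, and it keeps the sensitive detail out of the prompt in the first place.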

On the teams I’ve been on, the ones that get value treat AI as support, not a decision maker. We keep simple rules like no sensitive data in prompts and always review AI output before it moves into any release conversation. That way AI helps cut down grunt work and keeps the focus on quality risk.

Part of that is having tools that let you quickly see test runs and context so the team can actually discuss risk instead of just counting passes and fails. Some of the newer AI test management tools that organize results and highlight shifts between builds (we use Tuskr) fit into that approach without adding ceremony.

We have a general AI strategy supported by general training and guidelines. Tools go through a sign-off process that involves our customers, since we are a SaaS provider. We even have a Head of AI who has been great to get involved in discussions and very supportive on the testing front. We also build AI solutions, so that affects the level of control we need to put in place.

All team members are expected to do an introductory course on the EU regulations; it's light and informal, but it raises awareness of things to consider.

No official testing strategy is in place, and it would vary from project to project, but we are trying to share things at our test guild meetings to get some level of consistency. It all moves so fast that if I wrote one this week, I suspect it would be out of date in three.