Emergence of new QA AI Specializations?

Hey folks,
With AI being used more and more in software lately, have you seen any new types of QA roles pop up? Like anything different from the usual testing jobs?
Also, does your company have any of these new roles?
Curious what kind of skills are needed too.
Thx :slight_smile:

4 Likes

For now, I only see the transformation of existing engineering roles as people use AI tools more and more.

But generally speaking, with AI technologies, we need to be prepared to test the output of those AI tools.

So we might see new specialised roles, like an AI-tool validator. Such validators would have deep domain knowledge in a chosen field, so they can validate AI output and make it better for other users.

3 Likes

We don’t have any special AI roles. All testers are asked to include AI in their tasks.
I think AI can be helpful for lots of different test roles.

I agree with Oleksandr. There will probably be new roles for testing AI itself. It’s a completely different kind of testing, as there are no clear actions with expected results; AI is a moving target. I still love the story of how people got around blocks on dangerous information by asking the AI to tell them a goodnight story the way their grandma from the explosives factory would.

1 Like

Maybe there will be an AI equivalent of the SDET: a role that’s a hybrid of QA and data science engineer (i.e. a developer who trains and develops AI models), focusing on whether the model is being trained and tested properly, and on automating the training and validation of the model.

2 Likes

Any tips on AI validation?

1 Like

I actually did recently see a role advertised for verifying AI outputs!

I believe it was in the MoT newsletter even. :slight_smile:

1 Like

After a little bit of research, here are real examples of emerging roles I found in today’s market:

  • AI Tester (TTC)
  • AI Automation Quality Engineer - Assistant Manager (Deloitte)
  • AI QA Tester (ADV Techminds)
  • Director, AI/ML Penetration Testing (NetSPI)
  • AI/ML Test Engineer (Aveva)
  • QA Engineer - Generative AI, Support Apps (Apple)

Here are some key responsibilities that I summarised from these roles:

  • Test Planning & Strategy – Define and execute test plans for functional, performance, and AI/ML-specific validations in collaboration with stakeholders.
  • Automation Development – Build and maintain test automation frameworks and CI/CD integrations using Playwright, Selenium, Python, and GenAI technologies.
  • AI/ML Model Validation – Test AI/ML models for accuracy, fairness, robustness, and bias; ensure quality of training data and model outputs (see the sketch after this list).
  • Performance, Security & Compliance – Conduct load, stress, security, and compliance testing, ensuring adherence to standards like GDPR and CCPA.
  • Documentation & Reporting – Maintain detailed test documentation, defect logs, and reports to support audits and feedback cycles.
  • Continuous Improvement & Learning – Stay up to date with emerging trends in AI/ML testing, automation tools, and QA best practices.
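
To make the model-validation bullet a bit more concrete, here’s a minimal sketch of what an automated accuracy-and-fairness gate could look like. The function name `validate_predictions`, the thresholds, and the two-group demographic-parity check are my own illustrative assumptions, not something taken from any of these postings:

```python
# Minimal sketch of the "AI/ML Model Validation" bullet: gate a model's
# predictions on accuracy plus one simple fairness metric. The thresholds,
# the two-group parity check, and all names are illustrative assumptions.
import numpy as np
from sklearn.metrics import accuracy_score

def validate_predictions(y_true, y_pred, group, acc_min=0.90, dp_max=0.10):
    """Fail if accuracy is too low or if the positive-prediction rate
    differs too much between two demographic groups."""
    acc = accuracy_score(y_true, y_pred)
    # Demographic parity difference: |P(pred=1 | A) - P(pred=1 | B)|
    dp_diff = abs(np.mean(y_pred[group == "A"]) - np.mean(y_pred[group == "B"]))
    assert acc >= acc_min, f"accuracy {acc:.2f} below threshold {acc_min}"
    assert dp_diff <= dp_max, f"parity gap {dp_diff:.2f} exceeds {dp_max}"
    return {"accuracy": acc, "dp_difference": dp_diff}

# Toy usage with made-up data: both groups get the same positive rate.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(validate_predictions(y_true, y_pred, group))
```

The point is just that "accuracy, fairness, robustness, and bias" can be turned into executable pass/fail gates that run in CI next to the usual functional checks.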
6 Likes

Thx Tybar, but I couldn’t find it.
I guess MoT doesn’t keep a jobs archive :woman_shrugging:

1 Like

Could you explain what exactly you’re asking about?

Yep, absolutely @al8xr, this is definitely one of the tasks mentioned in the job descriptions.

1 Like

Validating the correctness of responses from AI models, LLMs, or wrapper applications.
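
For a rough idea of what that can look like in practice, here’s a minimal sketch of assertion-style checks on a model response. `ask_llm` and `validate_response` are hypothetical names standing in for whatever model/wrapper API and checks you’d actually use:

```python
# Sketch of assertion-style validation for an LLM / wrapper response.
# `ask_llm` is a hypothetical placeholder for your real model or wrapper call;
# the checks below are examples, not an established framework.
import json

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with your actual API call.
    return '{"answer": "Paris", "confidence": 0.97}'

def validate_response(raw: str, required_keys=("answer", "confidence")) -> dict:
    """Check structure first, then basic content properties."""
    data = json.loads(raw)                          # 1. must be valid JSON
    for key in required_keys:                       # 2. required fields present
        assert key in data, f"missing key: {key}"
    assert data["answer"].strip(), "empty answer"   # 3. non-empty answer
    assert 0.0 <= data["confidence"] <= 1.0, "confidence out of range"
    return data

result = validate_response(ask_llm("Capital of France? Reply as JSON."))
assert "paris" in result["answer"].lower()          # 4. domain-specific oracle
```

Structural checks like these catch wrapper-level bugs cheaply; the hard part is the last line, the domain oracle that decides whether the answer is actually correct, which is where the deep domain knowledge mentioned earlier in the thread comes in.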

1 Like

Interested too! If I come across any tips, I’ll share them with you, Hanan.

2 Likes

Very interesting research. Thanks for sharing!

2 Likes