🤖 Day 22: Reflect on what skills a team needs to succeed with AI-assisted testing

During this “30 Days of AI in Testing” challenge, we’ve explored how AI can enhance various testing processes. Throughout this time, it’s been clear that to use AI effectively, we need more than just tools and platforms; we need a team with the right skills, mindset and expertise.

Today’s task invites you to reflect on the roles, responsibilities, and skills a dedicated team would need to successfully lead AI-assisted testing initiatives.

Task Steps

  • Consider Broader Skills: Identify the essential skills and expertise that could enhance a team’s effectiveness in AI-assisted testing. How can cross-disciplinary knowledge contribute to success?
  • Envision Key Roles: Reflect on the roles that would be useful in a testing team to leverage AI for testing effectively. Think in broader terms than traditional teams; for example, consider how a data scientist or Machine Learning (ML) engineer could fit into a team. What unique responsibilities could they take on to push AI/ML initiatives?
  • Define Responsibilities: For each role in your envisioned team, define a few potential responsibilities. You might include:
    • Developing ML models to generate test data or predict defects.
    • Guiding the integration of AI tools.
    • Creating AI-powered bots or assistants for automated testing.
    • Educating testers on AI concepts to encourage skill growth and interdisciplinary collaboration.
  • Share Your Ideal Team Setup: In reply to this post, share your envisioned team and the roles you see as important to succeed in AI-assisted testing. Consider including:
    • Key roles and their responsibilities
    • Essential skills required for each role
    • Rationale for including each role in your team
    • Potential collaboration opportunities between roles
  • Bonus Step: If you’re free today (Friday, 22 March 2024) from 13:00 to 14:00 GMT, join This Week in Testing: AI in Testing Special, our weekly free voice-only chat on LinkedIn, where @simon_tomes and @billmatthews will discuss this week in testing.

Why Take Part:

  • By sharing your ideal team setup, you can contribute to shaping a collective vision for the roles, expertise, and skills required to use AI in testing effectively.
  • Engaging in this task might reveal exciting new roles that resonate with your interests or aspirations in AI and testing. It’s a chance to consider how you can shape your skillset and career to align with these new opportunities.



Hi my fellow testers,

For today’s challenge I asked ChatGPT for some help as I was struggling to think of defined roles. This was its output:

Envisioned Team Setup for AI-Assisted Testing:

Key Roles:

  1. AI Testing Lead
  2. Data Scientist/ML Engineer
  3. Test Automation Engineer
  4. Domain Expert/Test Analyst


  1. AI Testing Lead:

    • Define the AI testing strategy and roadmap.
    • Coordinate with stakeholders to understand requirements and objectives.
    • Oversee the implementation and integration of AI tools into the testing process.
    • Ensure compliance with regulations and ethical guidelines for AI usage in testing.
    • Mentor team members and foster a culture of innovation and continuous learning.

    Essential Skills:

    • Strong understanding of both testing principles and AI concepts.
    • Leadership and project management skills.
    • Communication and stakeholder management abilities.
    • Knowledge of AI ethics and regulatory requirements.

    Rationale: The AI Testing Lead provides strategic direction and ensures alignment between AI initiatives and testing objectives. They bridge the gap between technical implementation and business goals.

  2. Data Scientist/ML Engineer:

    • Develop ML models for generating test data or predicting defects.
    • Optimize algorithms to enhance test coverage and efficiency.
    • Collaborate with the testing team to identify opportunities for AI integration.
    • Conduct data analysis to identify patterns and insights for improving testing processes.
    • Research and implement state-of-the-art AI techniques relevant to testing.

    Essential Skills:

    • Proficiency in machine learning algorithms and techniques.
    • Strong programming skills in languages like Python, R, or Java.
    • Data manipulation and analysis expertise.
    • Understanding of software testing methodologies.

    Rationale: The Data Scientist/ML Engineer brings expertise in machine learning and data analysis, essential for developing AI-powered solutions to enhance testing effectiveness and efficiency.

  3. Test Automation Engineer:

    • Integrate AI tools into the test automation framework.
    • Develop AI-powered bots or assistants for automated testing.
    • Design and implement test scripts for validating AI models and algorithms.
    • Collaborate with developers to ensure testability of AI-driven features.
    • Monitor and analyze test results to identify opportunities for improvement.

    Essential Skills:

    • Proficiency in test automation tools like Selenium, Appium, or Cypress.
    • Programming skills in scripting languages such as Python, JavaScript, or Ruby.
    • Familiarity with AI frameworks and libraries for automation.
    • Understanding of software development methodologies.

    Rationale: The Test Automation Engineer focuses on implementing AI-driven automation solutions, ensuring efficient and accurate testing of software systems with AI components.

  4. Domain Expert/Test Analyst:

    • Provide domain-specific knowledge and insights to the testing team.
    • Collaborate with AI experts to define relevant test scenarios and data sets.
    • Analyze business requirements and user stories to identify test cases.
    • Validate AI-driven features against domain-specific criteria and standards.
    • Communicate test findings and insights to stakeholders effectively.

    Essential Skills:

    • Deep understanding of the domain and industry standards.
    • Analytical and problem-solving abilities.
    • Excellent communication and collaboration skills.
    • Knowledge of testing techniques and methodologies.

    Rationale: The Domain Expert/Test Analyst ensures that AI-assisted testing aligns with business objectives and user needs, contributing domain-specific insights to optimize the testing process.

Collaboration Opportunities:

  • The AI Testing Lead collaborates with all team members to ensure alignment with overall testing strategy and objectives.
  • The Data Scientist/ML Engineer works closely with the Test Automation Engineer to integrate AI models into the automation framework.
  • The Test Automation Engineer collaborates with both the Data Scientist/ML Engineer and Domain Expert/Test Analyst to design and implement AI-driven test scenarios.
  • The Domain Expert/Test Analyst provides valuable input to the AI Testing Lead and Data Scientist/ML Engineer regarding domain-specific requirements and constraints.
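
On the "predicting defects" responsibility listed for the Data Scientist/ML Engineer: a team doesn't need a full ML model on day one. A common non-ML baseline is hot-spot analysis, ranking files by how often they have been touched by past defect fixes. A minimal sketch (the file names and fix history below are invented for illustration, not from the original post):

```python
from collections import Counter

# Hypothetical bug-fix history: each entry is a file touched by a defect fix.
# Ranking files by past fix frequency is a cheap baseline for "defect
# prediction" that a team can try before investing in an ML model.
fix_history = [
    "checkout/cart.py", "checkout/cart.py", "auth/login.py",
    "checkout/cart.py", "search/index.py", "auth/login.py",
]

def riskiest_files(history, top_n=3):
    """Return the top_n files most frequently involved in past defect fixes."""
    return [path for path, _ in Counter(history).most_common(top_n)]

print(riskiest_files(fix_history))
# → ['checkout/cart.py', 'auth/login.py', 'search/index.py']
```

If a baseline like this already steers review effort usefully, the Data Scientist's job becomes improving on it with richer features, rather than starting from zero.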

During the “30 Days of AI in Testing” challenge, we explored AI’s potential to enhance testing processes. It became increasingly apparent that unlocking AI’s full potential takes more than tools and platforms; it demands a team equipped with the right skills, mindset, and expertise.

In response to this task, I thought through the roles, responsibilities, and skills needed to lead successful AI-assisted testing initiatives:

  1. Broadening Skillsets: Recognising the importance of cross-disciplinary knowledge, I identified skills such as data analysis, programming proficiency, statistical reasoning, and domain expertise as key to team effectiveness. :mortar_board:
  2. Envisioning Key Roles: Beyond conventional testing roles, I envisioned a diverse team including data scientists, machine learning engineers, and AI specialists, each with distinct responsibilities for driving AI/ML initiatives forward. :rocket:
  3. Defining Responsibilities: For each key role in the envisioned team:
  • Data Scientist/ML Engineer: Develops ML models to generate test data or predict defects.
  • AI Specialist: Guides the integration of AI tools and creates AI-powered bots or assistants for automated testing.
  • Domain Expert/Test Analyst: Contributes domain knowledge to refine AI models and interpret testing outcomes.
  • Trainer/Educator: Educates testers on AI concepts, fostering skill growth and interdisciplinary collaboration. :books:
  4. Sharing the Ideal Team Setup: I emphasised collaboration between roles and the rationale for including each one: diverse expertise converging to meet the many challenges of AI-assisted testing.

By sharing my ideal team setup, I hope to contribute to a shared vision of the roles, expertise, and skills needed to use AI in testing effectively. The exercise was also a chance to explore new roles and to think about aligning my own skillset and career with emerging opportunities in AI and testing.


Hello guys,

Happy Friday!!! :crystal_ball:

Like @adrianjr, I too felt like asking AI for support on this to define clear roles, responsibilities and rationale.

Here’s Gemini’s response:

Building an Effective AI-Assisted Testing Team

Broader Skills for Success:

An AI-assisted testing team needs a strong foundation in various disciplines. Here are some essential skills:

  • Software Testing Expertise: Solid understanding of traditional testing methodologies and principles (exploratory, functional, etc.) remains crucial.
  • Data Analysis and Interpretation: The ability to analyze test results and identify patterns from large datasets is vital for effective AI utilization.
  • Problem-Solving and Critical Thinking: AI is a tool, and these skills are essential to understand its limitations and leverage it strategically.
  • Communication and Collaboration: Seamless communication across disciplines (testing, data science, development) is key to success.

Envisioning Key Roles:

Here’s a breakdown of some key roles in an AI-assisted testing team:

  • AI/ML Engineer/Data Scientist:
    • Responsibilities: Develops machine learning models for test data generation, defect prediction, and test automation.
    • Skills: Machine learning expertise, data analysis, model development and deployment.
    • Rationale: Provides the technical expertise to build and integrate AI tools into the testing process.
  • Software Development Engineer in Test (SDET): (Someone with both development and testing backgrounds)
    • Responsibilities: Collaborates with the AI/ML engineer to integrate AI tools into test frameworks and automates test execution.
    • Skills: Strong understanding of testing frameworks, scripting languages, and basic AI concepts.
    • Rationale: Bridges the gap between AI and testing tools, ensuring smooth implementation.
  • Test Automation Specialist:
    • Responsibilities: Creates AI-powered bots or assistants for specific testing scenarios and maintains existing automation scripts.
    • Skills: Expertise in test automation frameworks and scripting languages, with an eagerness to learn AI concepts.
    • Rationale: Automates repetitive tasks, freeing up human testers for higher-level activities.
  • AI Testing Specialist (New Role):
    • Responsibilities: Educates testers on AI concepts relevant to testing, monitors AI tool performance, and identifies areas for improvement.
    • Skills: Strong testing background with a deep understanding of AI and its application in testing.
    • Rationale: Champions AI adoption within the testing team, ensuring its effective use.
  • QA Lead/Test Manager:
    • Responsibilities: Oversees the overall testing strategy, fosters collaboration across teams, and ensures alignment with project objectives.
    • Skills: Extensive testing experience, leadership skills, and the ability to understand both AI and traditional testing approaches.
    • Rationale: Provides strategic direction, ensuring AI-assisted testing complements the overall testing strategy.

Collaboration is Key:

  • The SDET and AI/ML engineer collaborate to build and integrate AI tools seamlessly into the testing framework.
  • The Test Automation Specialist and AI Testing Specialist work together to identify automation opportunities using AI and develop/maintain AI-powered testing bots.
  • The AI Testing Specialist and Testers work together to educate and upskill the team on AI concepts relevant to testing.
  • All team members collaborate with the QA Lead/Test Manager to ensure the AI-assisted testing strategy aligns with project goals.

This diverse team, with its combined expertise, fosters a successful AI-assisted testing approach, ultimately leading to higher quality software.


Nice structure.

Interestingly, I had a few discussions last year about this type of role (they called it a Data Scientist in Test) where they hoped these specialists would help amplify the test teams through the use of AI but also to assist with the technical aspects of testing AI models/systems.

A challenge they were facing was how to encourage DS and ML people to focus on testing rather than building customer-facing products. It reminds me of the struggle many companies had when trying to attract Software Engineers into SDET roles - these were often seen as second-class developer roles.

Personally, I think if teams want to really embrace AI in Testing then building their own models is going to be key and so that needs people who are capable of framing problems in terms of data and AI/ML algorithms and building them - I think that is where the interesting innovation will come from.


Hello @sarah1 and fellow learners!

Thanks for this wonderful task. I had a good time envisioning these things for my future self/org needs.

Here is a detailed mindmap with my thoughts:

Also, here is a video I made explaining why these skills are important, what the key roles of the future are, and how they can collaborate.

Check it out here:

Do share your feedback with me.



Hi folks, it’s Friday :smiley:

So I decided to ask Bing Co-Pilot:
As a software manager I want to build a team for AI Automation testing. Give me the skills and roles required

To which the reply was:


  1. AI Test Engineer: Responsible for creating test cases and scenarios for AI systems.
  2. Automation Tester: Focuses on writing scripts for automated testing.
  3. Test Architect: Designs the testing framework and strategy.
  4. QA Analyst: Analyzes test results and reports on software quality.
  5. DevOps Engineer: Integrates testing into the CI/CD pipeline.


First thing: no one would put me in charge of managing anything.
We Irish have a saying: ‘why do today what you can put off to tomorrow’.
Me as a manager would be a fast lane to business collapse :smiley:

Most software teams would already include 2, 4 and 5.
It is somewhat interesting to see a new role being proffered, “AI Test Engineer”.
Would you really have a separate role, or would the skills required for such a role become a prerequisite for the QA Analyst and the Automation Engineer?

@billmatthews Bill you mentioned the struggles to get SDETs and some of the arrogance from developers towards the role.
I interviewed a number of candidates for an SDET role and really struggled to find a fit. Some developers did really well on the code challenge but lacked testing and team experience. There were also a lot of very experienced testers who saw code generation as development work. And some people with a “Consultant” title, but let’s not talk about how bad they were.
In the end we took 2 people from Support with years of domain knowledge and trained them up. The logic being: we had plenty of developers to build up their coding skills, plenty of QA to help with their understanding of testing, and coming into the team we had people we had worked with and who plugged our knowledge gap.

For a small to medium-sized business such as the one I work for, there won’t be the luxury of all these roles. I would see it much more as skills that all of QA will learn and adopt.
In the same way we took people from Support with domain knowledge, we would look to upskill those already in the teams.
Specialist AI roles would probably sit at the company level rather than in delivery teams.
I think that would be a sensible approach, and sellable to the bean counters.

Of course, if you are a company with unlimited resources…


This is a very complex question, and it looks like everyone will be introspecting on their skills and on what direction their career should be going :slight_smile:

Here is Gemini’s response:

Envisioning Key Roles:

Here’s a breakdown of some key roles in an AI-assisted testing team:

  1. SDET (Software Development Engineer in Test) in AI Testing:
  • Responsibilities: Develop and maintain automated tests using AI tools, create AI-powered test data, analyze test results and identify AI-specific issues.
  • Skills: Strong testing experience, scripting skills (Python, Java), understanding of AI testing frameworks.
  2. Data Scientist for AI Testing:
  • Responsibilities: Develop ML models for test data generation, defect prediction, and test case optimization. Analyze testing data to identify trends and improve AI models.
  • Skills: Expertise in Machine Learning, data analysis (Python libraries like pandas, scikit-learn), experience with data pipelines.
  3. AI Testing Specialist:
  • Responsibilities: Bridge the gap between testers and data scientists. Guide test automation with AI tools, translate testing needs into actionable tasks for data scientists.
  • Skills: Testing experience, knowledge of AI testing tools, proficiency in translating technical concepts between disciplines.

Collaboration Opportunities:

  • SDETs and Data Scientists: Collaborate on defining testing objectives, designing AI models for test data and defect prediction, and interpreting model outputs.
  • AI Testing Specialist and Testers: The specialist acts as a bridge, training testers on using AI tools and incorporating AI-powered test strategies.
  • All Team Members: Regular communication and knowledge sharing to ensure everyone understands the testing process, AI capabilities, and limitations.

Rationale for Each Role:

  • SDETs: Provide core testing expertise and adapt to AI-powered testing methodologies.
  • Data Scientists: Bring AI/ML knowledge to develop models that enhance testing efficiency and effectiveness.
  • AI Testing Specialist: Smooths collaboration between testers and data scientists, ensuring optimal use of AI in testing.

This is a foundational team structure. Additional roles like DevOps engineers for integration and infrastructure management can be added based on specific needs.

By combining these diverse skillsets and fostering collaboration, your AI-assisted testing team can leverage the power of AI to deliver high-quality software with faster turnaround times.
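
Several of these role descriptions mention "test data generation". Before an ML model is justified, a small rule-based generator often covers the need; a stdlib-only sketch (all field names and rules below are invented for illustration):

```python
import random
import string

def make_user(rng):
    """Generate one synthetic user record for test environments.

    Purely rule-based: no real customer data is involved, which also
    sidesteps the privacy concerns an AI-generated dataset can raise.
    """
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "username": name,
        "email": f"{name}@example.test",
        "age": rng.randint(18, 90),
        "is_active": rng.random() < 0.8,  # roughly 80% of generated users are active
    }

# Seeding makes the "random" data reproducible across test runs.
rng = random.Random(42)
users = [make_user(rng) for _ in range(5)]
for u in users:
    print(u["username"], u["email"], u["age"], u["is_active"])
```

The seeded generator gives the same records on every run, which keeps failures reproducible; an ML-based generator (the Data Scientist's territory) becomes worthwhile when the data must mimic real production distributions.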


In my small test team integrated into the development team, there are not many roles:
Both the Test Engineer(s) and the Test Lead / Quality Engineering Lead need to buy in.
And other stakeholders (like the Development Lead and/or Delivery Lead, and the Product Manager and/or Product Owner) should be involved.
Considering a “Growth Mindset”, all parties should receive proper AI training.


Since I favor a whole-team approach where everyone, regardless of specialty, participates in testing activities all the way around the holistic testing loop, I would have the whole delivery team brainstorm about what skills they would need to make the best use of AI/LLM tools. I would approach this holistically: not only testing tools, but AI assistants for coding and tools for other tasks like helping with documentation. How many different tools and skills do we need? Then, for each skill:

  • Is this skill already in the skillset of one or more team members? If so, plan ways to transfer those skills

  • Is this a skill we can learn? If so, plan a strategy. Someone on the team gets training, there is training for the whole team or multiple team members, we can just get a book and learn it… whatever it is.

  • Do we need to add someone to the team, part-time or full time, that has this skill? In my limited experience, it is super helpful to have someone with the data science skills. I can learn the basics of that but the deep skills seem harder to learn. One way to look at this would be to ask the question: “What’s the biggest obstacle to our team successfully benefiting from using AI/ML to help us create more value for the business and the customer?” And that might be a person with a specific deep skill set.

  • I would also have the team take a hard look at all the pitfalls and downsides of using AI/ML tools. Ethics, security, sustainability (are we damaging the planet?).

I’m making this sound like a huge big bang effort. I would start with these team conversations. Once some goals are in place - choose one goal at a time, work one baby step at a time, one small experiment at a time towards that goal.


Also - there’s some good information about skills and specialties you might want on your team, and what they are likely to cost in this short video: https://www.youtube.com/watch?v=0xENpeGTEZ0


I cannot see the Day 23 post yet.


Well, I have not yet seen a team that succeeded with AI-assisted testing, so I don’t know what sets such teams apart. But I imagine the following might be relevant:

  • Knowledge of various AI techniques and their strengths and weaknesses. The team should be able to pick the tool and technique appropriate for the problem at hand, not just the latest hot thing. I imagine each team member could have deeper expertise in one technique, so collectively they can make an informed decision.
  • They need to be domain experts in what they are testing. In my experience from previous days, an AI tool would leave certain areas uncovered, without even hinting they might exist. The team needs the ability to identify these gaps and find a way to fill them.
  • Great attention to detail and the ability to review. Again, my experience from previous days tells me that an AI tool might just come up with things that are not true. The team needs to identify and correct them. Hillel Wayne says that proofreading is hard.
  • The ability to create/train/deploy/maintain/run their own AI tools? I’m not so sure about this: clearly there is a benefit to rolling out your own solution, tailored exactly to the problems at hand; on the other hand, the costs of doing so are significant, and I just don’t see every single team doing that. I guess it also depends on how big the teams we are talking about are.
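
The review point above can partly be automated: before a human proofreads AI output, cheap checks can cross-reference it against ground truth the team already has. A toy sketch (the endpoint names are invented; a real check would diff AI suggestions against your actual API surface or codebase):

```python
# AI tools sometimes propose tests for things that don't exist.
# A cheap first-pass filter: compare AI suggestions against the real
# API surface, and route only the flagged items to human review.
real_endpoints = {"/login", "/logout", "/cart", "/checkout"}
ai_suggested_tests = ["/login", "/cart", "/wishlist", "/checkout", "/giftcards"]

verified = [e for e in ai_suggested_tests if e in real_endpoints]
hallucinated = [e for e in ai_suggested_tests if e not in real_endpoints]

print("verified:", verified)      # safe to build tests for
print("flagged:", hallucinated)   # likely hallucinations, needs human review
```

Checks like this don't replace the proofreading skill, but they shrink the pile a human has to read carefully.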

As for specific roles, responsibilities and setup, I like what @lisa.crispin is saying.


In response to today’s task, I consulted ChatGPT-4 and, during our conversation, added information about two key roles: an AI Ethics Expert and a Training and Development Manager.

The AI Ethics Expert is aimed at providing more professional guidance on the ethical implementation of AI in testing within the team, covering fairness, transparency, and privacy issues. The role of the Training and Development Manager is primarily to optimize interdisciplinary knowledge sharing efforts and make them more seamless.

The following is the team structure, key roles, and collaboration opportunities for my ideal AI-assisted testing team:

In envisioning a team dedicated to leading AI-assisted testing initiatives, it’s essential to integrate a blend of technical expertise, strategic thinking, and interdisciplinary knowledge. This approach not only leverages the core capabilities of AI and machine learning (ML) but also ensures these technologies are effectively integrated into testing processes, enhancing efficiency, accuracy, and innovation. Below, I outline a multidisciplinary team structure that encapsulates these principles, detailing key roles, responsibilities, essential skills, and potential collaboration opportunities.

Team Structure and Key Roles

  1. AI/ML Engineer

    • Responsibilities: Develop and maintain ML models for generating test data and predicting defects. Optimize algorithms for test automation tools and ensure the scalability of AI-driven testing solutions.
    • Skills: Proficiency in machine learning frameworks (e.g., TensorFlow, PyTorch), programming languages (Python, R), and understanding of software development lifecycle (SDLC).
    • Rationale: Their expertise is critical in creating intelligent testing frameworks that can learn from data, predict outcomes, and automate complex testing scenarios.
  2. Data Scientist

    • Responsibilities: Analyze testing data to uncover patterns, anomalies, and insights that could improve testing strategies. Work closely with AI/ML engineers to refine data models based on testing feedback.
    • Skills: Strong analytical skills, experience with big data technologies, statistical analysis, and data visualization tools.
    • Rationale: Provides the data-driven foundation necessary for AI-assisted testing, ensuring that models are trained on high-quality, relevant data.
  3. Test Automation Engineer

    • Responsibilities: Develop scripts and leverage AI-powered bots or assistants for automated testing. Integrate AI tools into existing testing frameworks.
    • Skills: Experience in test automation tools and frameworks (e.g., Selenium, Appium), programming skills, and an understanding of AI integration points.
    • Rationale: Bridges the gap between traditional testing methodologies and AI-driven approaches, enhancing test coverage and efficiency.
  4. Software Developer in Test (SDET)

    • Responsibilities: Collaborate with AI/ML engineers to ensure the testability of applications from the design phase. Embed AI-driven test scenarios within the development process.
    • Skills: Programming, debugging, CI/CD pipelines, and a solid understanding of both development and testing environments.
    • Rationale: Ensures that AI-assisted testing is seamlessly integrated into the development lifecycle, promoting early detection of defects.
  5. AI Ethics Specialist

    • Responsibilities: Guide the ethical implementation of AI in testing, including fairness, transparency, and privacy concerns. Develop guidelines for AI use in testing environments.
    • Skills: Knowledge of ethical AI practices, legal and regulatory frameworks, and interdisciplinary communication.
    • Rationale: Ensures AI-assisted testing initiatives are aligned with ethical standards and societal norms, mitigating risks associated with biased or unfair outcomes.
  6. Training and Development Lead

    • Responsibilities: Educate testers and other stakeholders on AI concepts, tools, and methodologies. Develop training programs that encourage skill growth and interdisciplinary collaboration.
    • Skills: Strong educational background, understanding of AI and ML concepts, and excellent communication skills.
    • Rationale: Promotes a culture of continuous learning and adaptation, ensuring team members stay abreast of AI advancements and best practices.

Collaboration Opportunities

  • Cross-functional Workshops: Organize workshops where AI/ML engineers and data scientists work directly with test automation engineers and SDETs to exchange knowledge and co-develop testing solutions.
  • AI Ethics Reviews: Conduct regular reviews with the AI Ethics Specialist to evaluate the impact of AI-driven tests and ensure compliance with ethical guidelines.
  • Joint Research Initiatives: Encourage collaboration between team members on research projects to explore new AI techniques or tools that could enhance testing processes.

This team setup not only harnesses the power of AI and ML in revolutionizing testing approaches but also ensures these technologies are applied responsibly, ethically, and effectively. Through interdisciplinary collaboration and continuous learning, such a team is well-equipped to lead AI-assisted testing initiatives successfully.

Blog post link: 30 Days of AI in Testing Challenge: Day 22: Reflect on what skills a team needs to succeed with AI-assisted testing | Nao's Blog


Hi, everyone,

For today’s challenge I found an insightful blog post on LinkedIn about important roles for AI testing, with a detailed overview of responsibilities.

It highlighted these as the most valuable roles for an AI testing team:

:one: AI Test Strategist
:two: AI Test Engineer
:three: AI Test Analyst
:four: AI Test Evaluator

Essential skills for the team:

AI and machine learning concepts
Programming skills
Problem-solving and critical thinking
Continuous learning and adaptability
Collaboration and communication skills




Thank you very much @lisa.crispin for sharing that great video.

As I saw some good overviews of skills needed in an AI-assisted-testing team, I looked for some information to give more depth to this topic.

Here are some of the articles/blogs I found

Building an Effective AI Team: Key Strategies for Success

How to build an AI team

How to create an AI team and train your other workers

Artificial Intelligence (AI) In Software Testing

How To Use AI in Software Testing: Functions, Benefits, and Tools

Role AI testing, in particular, ChatGPT in Software testing

How AI-based Testing Tools Automate Your Testing Process

How to Introduce AI Integration in Test Automation Projects

AI in testing: 13 essential resources for QA pros


Day 22

I think I like the idea of approaching this like an ‘enabling team’ in ‘facilitation’ mode, in Team Topologies terms. How would we spread the word and the practices of AI-assisted testing to our development teams?

Broader Skills

The following skills seem important to me:

  • Testing, and what makes good testing for a team in their context, rather than applying the same practices to every team; again, an enabling team helps by discovering context.
  • Ethical considerations - with AI in testing it becomes much less about whether we can do something and much more about whether we should. The potential for both good and harm is very large here, and an enabling team for AI in testing would help frame moral and ethical considerations. The risk here is most likely around drift; it seems to me that with AI it would be easy to gradually drift into dubious territory with what a team builds.
  • Observability - the testing role may become more of a watcher and evaluator, looking at prompts and answers from generative AI and evaluating them critically. You need to be able to see first, though, so tools like LangChain, specialist observability tooling like Honeycomb, and standard logging tooling (ELK stack etc.) would be useful.
  • Security and privacy - this covers both the technologists’ natural skepticism of AI and the potential leakiness of all models: how they can be deployed, which models to use for what, which need to be internally developed and deployed, and which can be used in a more public context.
  • Technology research - given that tools and technology change at a ridiculous rate, a significant portion of your AI-in-testing effort would be research: both tools and technology specific to testing, and new capabilities of generative AIs, architectures, patterns, and the rest.

An enabling team helps development teams, find new patterns, tools and technologies, but doesn’t dictate them. Therefore a high level of collaboration and advocacy is required.
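
The observability skill above starts with being able to see the prompts and answers at all. Even before adopting specialist tooling, a thin logging wrapper gets a team started; a hypothetical stdlib-only sketch (not LangChain or Honeycomb specifics, and `fake_llm` stands in for a real model call):

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("llm-observability")

def observed(llm_call):
    """Wrap any prompt->answer function so every exchange is logged for review."""
    def wrapper(prompt):
        start = time.monotonic()
        answer = llm_call(prompt)
        # Structured JSON lines are easy to ship into ELK or similar later.
        log.info(json.dumps({
            "prompt": prompt,
            "answer": answer,
            "latency_s": round(time.monotonic() - start, 3),
        }))
        return answer
    return wrapper

@observed
def fake_llm(prompt):
    # Stand-in for a real model call; always echoes a canned reply.
    return f"stub answer to: {prompt}"

fake_llm("Generate boundary tests for the cart total")
```

With every exchange captured as a structured record, the evaluator role described above has something concrete to review and trend over time.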

Envision Key Roles and Responsibilities

I think early on I would just have a small team and set a focus, rather than getting too heavy.

  • Engaging with teams to find their aspirations and problems that AI in testing could assist with. On top of this:
    • First priority - research and communication to teams of the research.
    • Second - Ethical considerations - help teams understand the implications of what they are building, and using AI in testing.
    • Third - Observability
    • Fourth - Security and privacy.

I might then switch these around depending on how it’s working out, by regularly asking the teams we have engaged with so far.

Ideal Team Setup

Probably a small team of 4 or so, drawn from a few disciplines. It’s hard to say exactly what those would be, but I guess I would try:

  • A developer who really values testing
  • An experienced DevOps person who is used to supporting testing
  • An exploratory tester with an interest in both what and how something is built.
  • A UX professional with an interest in user research to engage with teams and judge the impact of AI, both internally and externally.

Hi all

This is ChatGPT’s response:

  1. AI QA Engineer/Analyst:
  • Responsibilities:
    • Designing and implementing AI-driven testing strategies.
    • Identifying test scenarios suitable for AI automation.
    • Developing and maintaining AI-powered testing frameworks.
    • Ensuring the reliability and accuracy of AI-based testing processes.
  2. Test Automation Engineer:
  • Responsibilities:
    • Integrating AI algorithms into automated testing tools.
    • Collaborating with AI specialists to enhance test automation capabilities.
    • Identifying opportunities for optimizing test automation with AI techniques.
  3. Quality Assurance Tester:
  • Responsibilities:
    • Participating in the design and review of test cases incorporating AI.
    • Collaborating with AI QA engineers to refine testing strategies.
    • Conducting regression testing and ensuring the stability of AI-integrated systems.
  4. AI Data Analyst:
  • Responsibilities:
    • Collecting and preprocessing data for training AI models used in testing.
    • Evaluating the quality and relevance of datasets for AI-driven testing.
    • Collaborating with QA engineers to identify data requirements for AI testing.
    • Analyzing test results and providing feedback on the performance of AI models.
    • Ensuring the integrity and security of data used in AI QA processes.



Hi there :vulcan_salute:
Another late reply here :grimacing:

For this challenge, I thought about roles that could be helpful for my company that could use AI as a working tool:

  1. Data Scientist: To communicate with our clients and give them more information to make decisions, a Data Scientist can help us model our data and use their machine learning skills to implement reporting tools to show to clients.
  2. Chatbot developer: To help the support and success teams get clients’ reports faster and more effectively, this role would be a great acquisition.
  3. AI specialist: If everybody is using AI, we at least need someone to guide us and improve our workflows with it.
  4. Test automation engineer: Use AI to test the system, improve coverage and write test cases.

I think that’s it folks :wink:

1 Like