šŸ¤– Day 27: Assess your team's readiness to adopt AI-assisted testing

With our journey into AI in testing well underway, it's time to evaluate our team's readiness for embracing AI-assisted testing. In previous tasks, we've envisioned a strategic approach for AI in testing in our context and reflected on the necessary team skills and roles. Today, we aim to assess our team's current readiness and identify the steps needed to bridge any gaps to successfully adopt AI in testing approaches.

Task Steps:

  • Assess Current State and Identify Capability Gaps: Assess your team's current readiness for AI adoption by examining existing skills and your team's infrastructure and processes. Determine where your team might fall short in embracing AI-assisted testing. This could include:

    • Lack of understanding of AI and ML concepts.

    • Insufficient data for training AI models.

    • Inadequate infrastructure to support AI tools.

    • Processes that are not optimised for AI integration.

      Future tip: Emily Webber's Capability Comb matrix is an excellent framework for identifying gaps in your team's capabilities. It works best if you can collaborate with your team members to gather their perspectives and insights during the assessment (a minimal sketch of the idea follows these task steps).

  • Develop a Roadmap: Based on your assessment, draft a plan to address these gaps and work towards building AI in testing capabilities within your team over time. Your roadmap could include:

    • Training and upskilling programs in AI and ML.
    • Strategies for improving data collection and management.
    • Upgrades or adjustments to infrastructure to accommodate AI testing tools.
    • Process refinement to seamlessly integrate AI into testing workflows.
  • Share Your Insights: In reply to this post, summarise your team's readiness assessment and your proposed roadmap. Consider including:

    • The strengths your team possesses and the gaps you've identified.
    • The specific actions planned to bridge these gaps.
    • The challenges you anticipate and any strategies you plan to implement to overcome them.
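
To make the Capability Comb tip above concrete, here is a minimal sketch (in Python, purely illustrative; the capability names, people, scores, and threshold are all my assumptions, not part of Webber's framework) of turning team self-assessment scores into a list of team-wide gaps:

```python
# Purely illustrative capability-mapping sketch, loosely inspired by the
# Capability Comb idea: each team member self-assesses 0-3 against the
# capabilities we care about, and we flag team-wide gaps.
# Capability names, people, scores, and the threshold are all assumptions.

CAPABILITIES = [
    "AI/ML concepts",
    "Prompt engineering",
    "Test data management",
    "AI tool evaluation",
]

team = {
    "Asha":  {"AI/ML concepts": 1, "Prompt engineering": 2,
              "Test data management": 3, "AI tool evaluation": 1},
    "Ben":   {"AI/ML concepts": 0, "Prompt engineering": 1,
              "Test data management": 2, "AI tool evaluation": 0},
    "Carla": {"AI/ML concepts": 2, "Prompt engineering": 1,
              "Test data management": 1, "AI tool evaluation": 1},
}

THRESHOLD = 2  # a capability is a team-wide gap if nobody reaches this score

for cap in CAPABILITIES:
    best = max(scores[cap] for scores in team.values())
    status = "OK" if best >= THRESHOLD else "GAP"
    print(f"{cap:22} strongest score: {best} -> {status}")
```

Running it prints one line per capability, so the team can see at a glance which areas nobody currently covers.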

Why Take Part

  • Benchmark Your Progress: Gain a comprehensive understanding of your team's readiness for AI-assisted testing.
  • Plan for Success: Create a roadmap that will help successfully integrate AI into your team's testing practices.


6 Likes

Hey all,

We are great at checking software quality via manual testing & using Cypress to automate tests in sprints, making sure everything works smoothly. Things we can improve on include:

  • The team has limited knowledge of how AI can be applied in testing.
  • We might not have enough data to train these AI tools to be at their best.

Our focus areas, then, are:

  • Learning about AI
  • Data readiness
  • AI integration into the testing workflow

Some of our team members might be nervous about trying new things, and AI can be a bit technical. Asking questions of our peers and helping each other out will be helpful in this situation.

Integrating AI might have its downsides, as we discussed in earlier days' challenges: AI not getting our context, biases, and so on. So we'll start small with AI tools, and we'll involve everyone in choosing and using them. We'll also keep an eye on the AI tools to make sure they're working correctly.

3 Likes

Hi my fellow testers, my response to today's challenge is below.

**Assess Current State and Identify Capability Gaps:**

I know that some members of my team already use some form of AI, e.g. GitHub Copilot, and they have developed (and I test) a machine learning tool for geochemical data, so there is definitely some knowledge already there around AI and ML concepts.
I don't think we have the data we would need to train our own AI model, at least not in one place and joined up together.

**Develop a Roadmap:**

We would first need to assess where it makes sense in our software development process to have AI assistance, then research tools that could help in those areas.
If we were looking to train an AI model ourselves, then we would need to develop a data collection strategy and might need to improve the data quality so it looks more like the data a real user would use (a rough first check is sketched below).
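
To make the data-quality point concrete, here is a minimal sketch of a rough first check, assuming two hypothetical CSV files (a candidate training set and an anonymised sample of real user data) with overlapping numeric columns:

```python
# Rough first data-readiness check (illustrative; the file names and the
# mean-comparison heuristic are assumptions, not an established pipeline).
# It flags numeric columns whose values drift away from real user data.
import pandas as pd

candidate = pd.read_csv("candidate_training_data.csv")  # hypothetical file
real_sample = pd.read_csv("real_user_sample.csv")       # hypothetical file

for col in candidate.select_dtypes("number").columns:
    if col not in real_sample.columns:
        print(f"{col}: missing from the real-user sample")
        continue
    spread = real_sample[col].std()
    # Crude drift signal: difference in means relative to the real spread.
    drift = abs(candidate[col].mean() - real_sample[col].mean()) / (
        spread if spread and spread > 0 else 1.0
    )
    print(f"{col}: mean-drift {drift:.2f}, missing values {candidate[col].isna().mean():.0%}")
```

A high drift score or a large share of missing values would tell us where the collection strategy needs work before any model training.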

5 Likes

Hello everyone

Below is my take on the Day 27 task about team readiness for implementing AI in testing.

| Task Step | Actions Taken |
| --- | --- |
| Assess Current State and Identify Capability Gaps | - Conducted a thorough evaluation of existing skills, infrastructure, and processes.<br>- Identified a lack of understanding of AI and ML concepts among team members.<br>- Identified inadequate infrastructure to support AI tools effectively.<br>- Noted processes not optimised for seamless integration of AI into testing workflows. |
| Develop a Roadmap | - Implemented training and upskilling programs focused on AI and ML.<br>- Developed strategies for improving data collection and management processes.<br>- Initiated upgrades and adjustments to infrastructure to accommodate AI testing tools.<br>- Refined processes to seamlessly integrate AI into testing workflows. |
| Insights | - Emphasised the team's strengths, such as collaboration and commitment to continuous improvement.<br>- Outlined specific actions planned to bridge capability gaps.<br>- Anticipated challenges, including resistance to change and resource constraints. |

The table above provides a clear overview of the task steps, actions taken, and insights shared during the assessment of the team's readiness for AI-assisted testing and the development of a roadmap for successful AI integration into testing practices.

Thank you

5 Likes

Hello @testingchef

Strengths that the team possesses:

  • Technical skills in traditional testing methodologies.

  • Willingness to learn/adapt to new technologies.

  • Good collaboration and communication within the team.

Gaps Identified:

  1. Lack of Understanding of AI/ML Concepts:

    • Some team members may not have sufficient knowledge of AI and ML concepts.

    • Limited exposure to practical applications of AI in testing.

  2. Insufficient Data for Training AI Models:

    • The team faces challenges in accessing diverse and high-quality datasets for training AI models.

    • Existing data may not adequately represent real-world scenarios/edge cases.

  3. Inadequate Infrastructure to Support AI Tools:

    • Current infrastructure may lack the computational power or resources required for running AI algorithms efficiently.

    • Limited integration capabilities with existing testing tools and frameworks.

Roadmap for Bridging Capability Gaps:

  1. Training and Upskilling Programs in AI/ML - Organize workshops, seminars, and online courses to educate team members.
    Encourage participation in hands-on projects/hackathons.

  2. Strategies for Improving Data Collection and Management - Collaborate with stakeholders to identify and collect relevant data sources.
    Implement data governance practices to ensure data quality, privacy, and security.

  3. Upgrades/Adjustments to Infrastructure - Assess current infrastructure needs and explore options for upgrading hardware. Invest in tools and platforms that support AI testing.

Challenges Anticipated & Strategies to Overcome:

  • Lack of Resources

  • Resistance to Change

  • Integration Complexity

4 Likes

Iā€™ll keep the answer very short today.

The majority of my team is very skeptical about AI. They believe that the disadvantages outweigh the benefits, and they are unlikely to engage in any AI work until there is a working system that actually provides tangible benefits.

The rest of the team might be more sympathetic, but they would need a good idea for an AI system that is both achievable and would provide real benefits to our work. It's not like we have too much free time; the limited time we could spend on this would need to be assigned to something with a good return on investment, not pure experimentation.

4 Likes

It is very hard to get a team to adopt AI from zero, which is my current situation. At first, teammates may be scared by the unknown tech, afraid that "AI" is something very hard to understand; or, even if they have tried out some LLM tools like ChatGPT, they still don't know the possible applications of AI in testing.

In my team (which is small), I first shared my own examples of using AI at work with my teammates through some 1-on-1 sessions, sharing everything in person, to make sure each of them saw how easy the tools are to use and what kinds of tasks we can use AI tools to assist with.

Once they started to get interested in the topic, they would try out and explore the tools themselves (given enough time capacity). Then we can group up, share feedback with each other, and build up a process for common usages. Last but not least, we repeat the review sessions to keep the process improving.

So for teams really new to AI, like mine, instead of setting up all the processes for them, it is more important to build the team's interest in AI and to reserve capacity for the team to keep exploring useful AI tools.

4 Likes

I think this is a very short answer for me.
I work with legacy systems which we have manual and automated tests for.
I would suggest we are nowhere near ready and would be starting from ground zero.

Even in our current testing, we don't make much use of testing tools, even those for test management. We rely on tickets for manual testing and on building automated tests, which are spec'd by our QA Business Lead and developed by myself, with senior dev engineers reviewing my PRs.

So that probably breaks all the 'standards' for testing, but it works for us.
We did have access to test management tools, but we honestly found no value in them; it quickly became an exercise for the sake of doing it.
By abiding by software standards, principles, and patterns, our automated tests are readable and self-explanatory, as well as generic and future-proof.

So that is where we are, but it doesn't mean we can't look to and harness what is on offer from AI and ML.

I can see plenty of scope for AI and ML to assist us, but I think this will not be something decided within my Delivery Team, but rather at a Business level.

We are actively looking at the Tools at that level and how we can harness them.
While working through this 30 Days I have been in close contact with one of the Product Managers (the one who "volunteered" me for this), and with feedback we will look at formulating a company-wide strategy.

This was going to be one paragraph, but we Irish like to talk :smiley:

5 Likes

Strengths:

  1. Technical Proficiency: Our team has a solid foundation in software testing principles, including automation frameworks.
  2. Experience with Testing Tools: We are well-versed in using various testing tools for manual and automated testing.
  3. Adaptability: Team members have shown a willingness to learn and embrace new technologies in the past.

Lack of Understanding of AI and ML Concepts: Many team members are not familiar with advanced AI and machine learning concepts.

Training and Upskilling Programs:

  • Objective: To educate team members on basic AI and ML concepts.
  • Actions:
    • Arrange online courses on AI fundamentals.
    • Encourage team members to pursue relevant certifications.
    • Pair experienced members with AI enthusiasts for knowledge sharing.

Infrastructure Upgrades:

  • Objective: To enhance infrastructure to support AI testing tools.
  • Actions:
    • Assess current infrastructure for AI compatibility.

Process Refinement for AI Integration:

  • Objective: To adapt testing processes for AI-assisted testing.
  • Actions:
    • Conduct process mapping sessions to identify areas for AI integration.
    • Create guidelines for incorporating AI into existing testing frameworks.
    • Pilot AI tools in specific test scenarios to check effectiveness.

Challenges and Strategies:

  1. Resistance to Change:
  • Strategy: Foster a culture of learning and experimentation.
    • Highlight success stories of AI adoption in testing from other companies.
    • Encourage open discussions on the benefits of AI for testing efficiency.
  2. Data Privacy and Security Concerns:
  • Strategy: Ensure compliance with data protection regulations.
    • Involve legal and compliance teams in data acquisition and usage plans.
    • Implement strict protocols for data anonymization and encryption.
  3. Resource Constraints:
  • Strategy: Prioritize actions based on feasibility and impact.

4 Likes

Our organization has called 2024 "The Year of AI".
We are still in the studying phase:

  • We started internal training like "Navigating the AI Landscape in QA".
  • We attended EuroSTAR Crowdcast sessions last month.
  • We identified individuals as AI leaders and an IP point-of-contact for proof-of-concept work.

On top of that, my team members and I are using AI chatbots like ChatGPT and Copilot to solve small problems.

5 Likes

Hello, @testingchef and fellow learners!

Thanks for this challenge.

I feel that this 30-day challenge teaches you all the basics that one needs to learn to responsibly incorporate AI in their testing workflow.

Here is the complete playlist of my learnings (chapter by chapter) on AI in Testing that I have been posting on YouTube. Check it out here:

AI in Testing by Rahul's Testing Titbits - YouTube

Also, here is the video blog on today's task explaining the importance of each and every topic that's part of my learnings:

Do share your feedback, and if you like this playlist, do share it with your team.

Thanks,
Rahul

2 Likes

Hi, everyone,

here are my insights for today's challenge, which gave me a chance to practise evaluating my team's readiness for embracing AI-assisted testing:

The strengths my team possesses:

  • sufficient human resources
  • professional, specific, and strong knowledge in the field of IT
  • the ability to quickly assimilate new knowledge
  • motivation to apply new tools and techniques
  • a continuous learning policy
  • good team spirit, effective communication, and collaboration skills

The gaps I've identified:

  • lack of practical skills in working with AI tools
  • lack of specific theoretical knowledge in that field
  • no established methodologies or guidelines for working with AI tools, and no company policy for data protection and security

The specific actions planned to bridge these gaps:

  • develop the required strategies and plans for working with AI tools
  • carry out a needs analysis: in which areas could AI be used, and what tools would be most suitable
  • assess threats and opportunities, and communicate them to the team
  • organize introductory and ongoing employee training
  • promote responsible, safe use of AI tools

1 Like

Day 27

Assess Current State and Identify Capability Gaps

Let's start with the current state:

  • I think we would need to do more to find out where our current testing pain points are, to be honest, before even beginning on the AI in testing path. Perhaps we could start with:
    • Identifying scenarios, hidden requirements, and error-handling needs earlier, during story kick-off.
    • Selecting unit and widget tests during development.
    • Exploratory testing note-taking and guidance.
    • Identifying and adding integration (in Flutter tooling terms, basically end-to-end) test candidates.
  • The team is fairly ignorant of the current capabilities of how AI in testing could be used, so some education and research time required.
  • The team shows resistance to using Generative AI within the product, dismissing it as untrustworthy. Education on using models trained on data of our choosing seems important.
  • The team is heavily wedded to AWS, so a good place to start might be tooling in that area, as there is less friction. Maybe a spike-type activity with the AWS Bedrock set of tools.

Develop a Roadmap

I've always rather liked the Now/Next/Later format for roadmaps, where Now is small, Next is a little bigger, and Later is a park for the bigger, less certain ideas.

  • Now - a spike for AWS Bedrock capabilities (a minimal starting sketch follows this list), followed by a hack-day type exercise for adding a Generative AI to the product. We would use something like LangChain and LangSmith to build it, as they have good testing and observability capabilities. Basically, try to get the team excited by AI.
  • Next - investigate how to select unit and widget testing scenarios with a private model, possibly using our codebase for training, plus best practices for low-level testing in Flutter and beyond. I say this because Gen AI is good at structured tasks, less so with exploratory tasks.
  • Later - train our own model using data from GitHub and Jira about the product, including testing notes. Use that to generate test ideas, while training/creating guidance on prompt engineering. This is to aid exploratory testing.
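
For the Now item, here is a minimal sketch of the kind of call such a Bedrock spike could start from; this is an assumption on my part rather than the actual spike: the model ID and prompt are illustrative, and LangChain/LangSmith would layer on top of a call like this.

```python
# Minimal Bedrock spike starter (illustrative). Assumes AWS credentials
# with Bedrock access, `pip install boto3`, and that the chosen model is
# enabled in the account; the model ID below is an assumption.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model
    messages=[{
        "role": "user",
        "content": [{"text": "Suggest five widget-test scenarios for a "
                             "Flutter login form with email and password fields."}],
    }],
)

# The Converse API returns the assistant reply under output -> message.
print(response["output"]["message"]["content"][0]["text"])
```

Something this small is enough for a hack day: swap in different models and prompts, and judge whether the suggestions are worth pursuing.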

This might all change after assessing our current testing capabilities, but it's a starter.

2 Likes

Hi There

Assess Current State and Identify Capability Gaps:
My team comprises individuals with varying talents: some excel in manual testing, others are adept with automation tools, and a few are already utilizing ChatGPT. However, we lack standardized guidelines for the use of AI tools at the organizational level.

Develop a Roadmap:
Establish organization-wide guidelines.
Implement training programs and assess their impact.
Create a core team at the organizational level to raise awareness of AI benefits and use cases.
Consider recruiting additional skilled team members if necessary.
Focus on infrastructure development.

Challenges and Strategies:
Ensuring data security.
Preparing the team to embrace change.

Thanks
Vishnu

1 Like

Hi Everyone!
This is a great, well-laid-out exercise in practising the 'leadership qualities' of any individual.
Current Team Status:

  1. My team has many experienced people with hands-on experience in 'Manual Testing', so they are good at it; when it comes to learning something new, they are very hesitant and consider it a 'waste of time'.
  2. People in my team have tried, but due to a lack of self-exploration of the topics, or the challenges they encountered, they stopped working in the 'learning' direction.
  3. Some are too comfortable to move out of their 'Manual Testing' zone and try anything new.

Roadmap (for those whose team members show the same characteristic behaviour I mentioned above):

  1. Cover the basics of AI.
  2. Choose one tool at a time, for a while, so that everyone can learn, work, and help others at their own pace and can implement the tool in their day-to-day activity.
  3. Have presentations about the topic covering both the advantages and disadvantages of the tools in use, to make people confident about why and how they are using them.
  4. Share blogs and have discussions on them.

Strength: experience in manual testing.
Challenge: being too comfortable with long and tedious testing rather than using AI tools.