🤖 Day 3: List ways in which AI is used in testing

Welcome to Day 3 of 30 Days of AI in Testing! Today, we’re going to go deeper into the practical side of AI in Testing. Your mission is to uncover and list the many ways AI is changing our testing practices.

Task Steps

  1. Research to discover information on how AI is applied in testing.
  2. List three or more different AI uses you discover and note any useful tools you find as well as how they can enhance testing, for example:

Test Automation:
1. Self-healing tests - AI tools evaluate changes in the code base and automatically update tests with new attributes to keep them stable - Katalon, Functionize, Testim, Virtuoso, etc.

  3. Reflect and write a summary of which AI uses/features would be most useful in your context and why.
  4. Post your AI uses list and reflections in reply to this topic.
  5. Read through the contributions from others. Feel free to ask questions, share your thoughts, or express your appreciation for useful findings and summaries with a :heart:.

Why Take Part

  • Discover New Ways to Use AI: Finding out how AI is used in testing shows us new tricks and tools we might not know about. It’s all about discovering useful ways to support our everyday testing tasks.
  • Make It Work for You: Seeing which AI solutions fit what you’re working on helps you pick the best tools and solutions. It’s like choosing the right ingredients for your recipe.
  • Share the Smarts: When we all share what we’ve learned, we all get smarter together. Consider this a jigsaw, where everyone brings a piece of the puzzle.

:chart_with_upwards_trend: Take your learning to the next level. Go Pro!


Hello there :slight_smile: !

After a quick search through this article: “Top 18 AI Testing Tools in 2024”, I discovered some new tools, as well as some tools whose story I knew before AI :sweat_smile: .

I will definitely revisit Selenium to see what kind of changes it has after AI integration.
Another tool that I used before but didn’t have the time to integrate into my test project is Applitools. I tried to use a free tool to do visual testing, but it is not even comparable; Applitools seems to be fantastic, and I will definitely integrate it into my project to save time :sweat_smile: .
I tried to use Mabl before, and I don’t know if anyone has had the same experience, but once you get too used to coding, the no-code tools feel too rigid. Maybe it is my inexperience with the tools :grimacing: .

But for me, the tool that can be used most efficiently in software automation testing is GitHub Copilot, or any AI tool that helps while coding. At the beginning it gives you some random tips, but after some time it feels like it is reading your mind; right now it is saving me a lot of time while automating.

I am really looking forward to the comments about this topic :slight_smile:


Hi all,

My name is Bill. I have worked as a manual software test engineer in Ireland since 2017, and I’m hoping to learn how AI can be used to improve the STLC and maybe make some connections along the way. In my context, I’m currently focused on web testing for desktop applications.

Best regards



AI Applications in Testing

AI has significantly impacted software testing, offering a range of tools that enhance automation, accuracy, and efficiency. Here are three key AI applications in testing along with their benefits and notable tools:

  1. Test Automation:
  • Self-healing tests: AI tools like Testsigma, Mabl, and Functionize automatically update test attributes to ensure stability.
  • Useful Tools: Testsigma, Mabl, TestCraft, Testim.io, etc.
  2. AI-Augmented Test Techniques:
  • Test Script Generation: AI analyzes requirements and existing test cases to optimize scripts quickly.
  • Test Data Generation: AI generates and refines test data for comprehensive coverage.
  • Intelligent Test Execution: AI organizes tests efficiently across devices and environments.
  • Smarter Test Maintenance: AI minimizes redundant testing through self-healing mechanisms.
  • Root Cause Analysis: AI helps identify issues and their causes effectively.
  3. AI-Powered Automation Testing Tools:
  • Testsigma: Streamlined test development with natural language processing and cloud-based architecture.
  • TestCraft: AI-driven tool for manual and automated testing based on Selenium.
  • Applitools: Visual management and AI-powered UI testing platform.
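The self-healing idea above can be sketched in a few lines of plain Python: try a prioritized list of locator strategies and record which one worked, so the suite can update itself. This is a toy illustration of the concept only (the dict-based `dom` and the strategy names are invented for the example, not any vendor's API):

```python
def find_with_healing(dom, locators):
    """Try locators in priority order; return the element plus the
    locator that worked, so the test can persist the 'healed' choice."""
    for strategy, value in locators:
        element = dom.get((strategy, value))
        if element is not None:
            return element, (strategy, value)
    raise LookupError(f"No locator matched: {locators}")

# Fake DOM: the 'id' changed after a deploy, but the CSS path still works.
dom = {("css", "form > button.submit"): "<button>"}
locators = [("id", "submit-btn"), ("css", "form > button.submit")]

element, healed = find_with_healing(dom, locators)
# 'healed' tells us the suite should now prefer the CSS locator.
```

Real tools like Testim or Functionize layer ML-based element matching on top of this basic fallback idea, but the "record what worked and update the test" loop is the core of it.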

Summary of Relevance

In the context of software testing, the most useful AI features would be self-healing tests for maintaining stability, intelligent test execution for efficient testing across various platforms, and AI-augmented techniques like test script generation to speed up the testing process. These features can enhance testing by improving accuracy, reducing maintenance efforts, and accelerating the overall testing cycle. Incorporating these AI capabilities can lead to more robust and efficient software testing processes in any development environment.


Hello @sarah1

Diving into the realm of testing practices, AI :robot: has brought about significant changes, offering a plethora of innovative capabilities across various domains​:100:

  1. Test Automation:

    • :thinking: Imagine having tests that heal themselves! Tools like Katalon Studio, Functionize, Testim, and Virtuoso make this a reality by automatically updating tests to match changes in the code base, ensuring stability.
    • Moreover, predictive test maintenance tools like Mabl and Test.ai analyze test outcomes and suggest actions, saving testers valuable time and effort🙌
  2. Test Data Generation:

    • With AI-based data generation🛠️ tools such as Tricentis Tosca and GenRocket, testers can generate realistic test data effortlessly. This not only enhances test coverage but also improves accuracy.
  3. Visual Testing: :compass:

    • AI-powered visual testing offered by platforms like Applitools and Percy detects visual disparities in UI elements across different devices and resolutions.
      This enhances test coverage by ensuring consistency across varied environments.
  4. Defect Prediction:

    • Tools like DeepCode and DeepSource analyze code patterns to predict potential defects before they occur. This proactive approach aids in preventing bugs and ensuring a smoother development process.
  5. Performance Testing :clock7:

    • AI-driven performance testing solutions such as Apica and LoadRunner simulate real user behavior, helping identify performance bottlenecks. This optimization leads to enhanced application performance and user experience.
  6. Natural Language Processing (NLP) Testing:

    • For testing conversational interfaces, AI-based NLP testing tools like Botium and Testim are invaluable. They understand and validate natural language inputs, ensuring the robustness of these interfaces.
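To make item 2 (test data generation) concrete, here is a tiny sketch of the seeded-generation idea in plain Python. The names and domains are invented for the example; tools like GenRocket or Tricentis Tosca do this at far greater scale and realism, but the key trick - seeding the generator so failing runs are reproducible - is the same:

```python
import random

def generate_users(n, seed=42):
    """Generate deterministic, realistic-looking user records for tests."""
    rng = random.Random(seed)  # seeded so failures are reproducible
    first_names = ["Ana", "Bill", "Chen", "Dara"]
    domains = ["example.com", "test.org"]
    users = []
    for i in range(n):
        name = rng.choice(first_names)
        users.append({
            "id": i,
            "name": name,
            "email": f"{name.lower()}{i}@{rng.choice(domains)}",
            "age": rng.randint(18, 90),
        })
    return users

rows = generate_users(3)
```

Because the seed is fixed, calling `generate_users(3)` twice yields identical data, which makes flaky-test debugging much easier.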

AI features like self-healing tests and predictive test maintenance would be incredibly beneficial.
Working on a complex software project with frequent code changes demands stability and efficiency in testing.
These AI-powered capabilities promise to streamline our processes, allowing us to focus more on delivering high-quality software.


Hey @sarah1

Considering my daily usage of Cypress, BrowserStack and Percy for visual testing, the following AI features might be most useful:

  1. Intelligent Test Case Generation: With Cypress and BrowserStack, I can benefit from AI-powered test case generation to automate the creation of test cases based on my requirements/existing test scenarios. This can help in reducing manual effort and ensuring comprehensive test coverage across different browsers and environments.

  2. Predictive Analytics for Test Prioritization: Integrating tools like Testim/mabl with my existing testing setup can help prioritize tests based on their likelihood of failure, enabling me to focus my efforts on the most critical areas of the application.

  3. Self-healing Tests: While Cypress provides robust test automation capabilities, incorporating AI-driven self-healing tests can further enhance the stability and reliability of the test suite. Tools like Functionize/Testim can automatically update test scripts to adapt to changes in the application, reducing maintenance overhead.

By leveraging these AI features, I can streamline my testing workflow, improve test coverage & ensure the timely delivery :truck: of high-quality software products across different browsers & environments.
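The prioritization idea in point 2 can be illustrated with a simple failure-rate heuristic. A minimal sketch in Python - the history format is invented for the example, and real tools use much richer predictive models than this:

```python
def prioritize(tests, history):
    """Rank tests by recent failure rate - a crude stand-in for the
    predictive models that commercial tools apply."""
    def failure_rate(name):
        runs = history.get(name, [])
        # Unknown tests get a middling score so they still run early.
        return sum(runs) / len(runs) if runs else 0.5
    return sorted(tests, key=failure_rate, reverse=True)

history = {
    "checkout_flow": [1, 1, 0, 1],  # 1 = the run failed
    "login":         [0, 0, 0, 0],
    "search":        [0, 1, 0, 0],
}
order = prioritize(["login", "search", "checkout_flow", "new_test"], history)
# Flakiest / riskiest tests run first; historically stable ones run last.
```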


Hello Everyone!

Thanks @sarah1 for sharing the day 3 exercise.

Here are my responses to the tasks shared today:

  1. Research to discover information on how AI is applied in testing: I referred to this meetup recording on “Revolutionizing Testing with ChatGPT & AI” by askUI. Revolutionizing Testing with ChatGPT and AI: A Paradigm-Shift (youtube.com)

  2. Use cases & tools that I found valuable:

Use Cases:

  • Test Case Generation / Writing.
  • Test Data Generator
  • Test Checklist Generator
  • Test Report Generator - Ex: In HTML format.
  • Optimizing Test Code
  • Implementing Exception Handling
  • Bug reporting / Drafting


Tools:

  • Bing Copilot
  • Code GPT - VS Code Extension
  • Google Gemini
  • Yattie
  • Bugasura
  3. Here is the summary of my learnings from today:

Also, created a video sharing my task journey and learnings from Day 3: Usage of AI in Testing | Day 3 of 30 Days of AI in Testing - YouTube

Check it out too :slight_smile:

Looking forward to the feedback from fellow participants as well as reading their responses. Thanks!

Rahul Parwal


Hi everyone,

The possibilities to use AI in testing are really varied:
1.) Test cases & coverage based on requirements: AI is not only able to generate test cases; it can also add test cases a tester might not have thought of.
2.) Improved prioritization: AI can prioritize tests - critical functions of a system can be recognized and tested first.
3.) Continuous, faster and more accurate test execution: test cases that are created, automated and executed by AI improve and accelerate the test process and test results, and reduce costs.
4.) Improved test data management: masking of test data can be automated by AI.
5.) Defect analysis: AI can analyse test results and help find the root cause of a problem faster.

Examples for AI Tools:
Tool for generating test data: Mostly AI
Tool for Testmanagement: aqua ALM
Tool for no-code Testautomation: ACCELQ

For me as a tester, it would be interesting and helpful to work with tools that can create test cases and provide new ideas. No-code test automation tools are also interesting, as they give you an insight into automation.


Hi everyone,

AI can be applied in different ways to testing.

Introduction: AI and Machine Learning have reshaped the landscape of software testing, offering QA teams unprecedented capabilities.

AI/ML in Software Testing: The fusion of AI/ML with software testing enhances decision-making and testing efficiency. Approaches include building custom AI, leveraging foundation models through APIs, or using off-the-shelf tools.

1. Automated Smart Test Case Generation: AI, exemplified by ChatGPT, transforms test case creation. ChatGPT can generate Selenium unit tests from simple prompts, addressing the challenges of script creation and maintenance in agile environments.

2. Test Case Recommendation: ML learns user behavior, suggesting test cases aligned with real-world scenarios. This becomes a predictive analytics engine, aiding QA managers in making informed decisions.

3. Test Data Generation: AI simplifies the generation of comprehensive test data, addressing challenges in testing complex scenarios like global eCommerce.

4. Test Maintenance for Regression Testing: AI introduces a “Self-Healing Mechanism,” automatically adjusting test scripts during code changes, reducing the burden on testers in dynamic Agile environments.

5. Visual Testing: AI addresses challenges in automating visual testing by learning to identify and ignore insignificant visual differences, improving accuracy in detecting meaningful UI changes.
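The visual testing idea in point 5 boils down to comparing screenshots while tolerating insignificant pixel noise. A toy sketch of that thresholding idea, using nested lists as stand-in grayscale images - real tools like Applitools use learned models rather than a fixed threshold, so treat this purely as an illustration:

```python
def visually_equal(img_a, img_b, tolerance=0.01):
    """Compare two grayscale 'images' (lists of pixel rows), ignoring
    differences that affect less than `tolerance` of all pixels."""
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    if len(flat_a) != len(flat_b):
        return False  # different dimensions: definitely not equal
    changed = sum(1 for a, b in zip(flat_a, flat_b) if abs(a - b) > 8)
    return changed / len(flat_a) <= tolerance

base = [[0] * 100 for _ in range(100)]
noisy = [row[:] for row in base]
noisy[0][0] = 255  # one anti-aliasing artifact: 0.01% of pixels changed
```

A single stray pixel passes the comparison, while a genuinely moved or restyled UI element (hundreds of changed pixels) fails it, which is exactly the distinction naive pixel-perfect comparison cannot make.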

Benefits of AI/ML in Software Testing: AI accelerates test creation, enhances maintenance, provides clear recommendations, streamlines processes, and empowers testers without replacing them. It addresses challenges posed by evolving technologies.

Challenges of AI/ML in Software Testing: Challenges include ensuring training data quality, addressing unforeseen test case scenarios, balancing overfitting and underfitting, and combating model drift. Overcoming these requires careful planning and continuous monitoring.

Best Practices with AI/ML in Software Testing: Practices include gaining foundational AI/ML knowledge, being patient during AI development, mastering prompt engineering, and remembering that AI is a tool, not a replacement for testers.

Testing Using AI vs Testing For AI Systems: Distinguishing between testing using AI and testing for AI systems is crucial. Testing with AI involves leveraging AI models for testing purposes while testing for AI systems focuses on ensuring the AI models perform as expected.

Challenges of Testing AI Systems: Testing AI/ML models is complex due to intricate algorithms, nearly infinite possible results, their “black box” nature, susceptibility to adversarial inputs, and evolving behavior over time.

AI in software testing is not just a trend; it’s a necessity for staying competitive. As QA professionals embark on this journey, adapting and collaborating with AI technologies will unlock unprecedented testing efficiency and capabilities.

Resource: A Guide To AI/ML Testing For Software Applications


Since I’m new to automation, the information shared is quite helpful. I am grateful for your insightful remarks.


I only have experience with ReportPortal. The tool allows you to upload results from an automation suite and can automatically say whether a failure is caused by a bug in the product, a problem in the test environment, or a bug in the test suite.

I haven’t found it very useful - I aimed at a 100% test pass rate, and whenever that goal was not met I already knew the reason, or was investigating it. Most of the problems we encountered in the project were new (and not recurring), so when the tool categorized something as an “Environment issue”, that was not terribly helpful - I need to know where exactly in the environment the problem is, and how to fix it.

But I have to admit that I left this project soon after we finished the model training phase. Maybe my experience would have changed later.

This is recent and you might have heard about it, but Meta (Facebook) created a test generator that can create a new unit test, run it, check that it is stable and actually increases coverage, and submit it as a PR. Here’s an article discussing it (there’s also a link to the original paper inside): Meta’s new LLM-based test generator is a sneak peek to the future of development. It sounds impressive, but when you look at the numbers, it’s not really a no-brainer: 57% built and passed, and 25% were actually accepted by humans. In a larger sample, it was able to improve only 10% of “test classes”.

I guess 10% is not bad if you let that thing run in the background and notify people when it has an improvement proposal. But if a human needs to initiate the process, and it will be successful in only 10% of attempts, then I don’t see it being used - people will get demotivated too quickly.


Hi fellow testers,

Research to discover information on how AI is applied in testing - today I chose AI Testing: The Future of Software Testing

List three or more different AI uses you discover and note any useful tools you find as well as how they can enhance testing - Katalon advertises that its AI functionality helps with test script creation, test dataset generation and self-healing automated tests.

Reflect and write a summary of which AI uses/features would be most useful in your context and why. - Within my context, the most useful AI feature of the above three would definitely be self-healing automated tests, as I am pretty much solely responsible for the maintenance of multiple automated test suites, at both the UI level and the API level. It is incredibly time-consuming when some widespread but low-level feature is changed in the code, causing a lot of my tests to break; I then have to manually fix everything up one by one. I find these failures are often caused by controls that have changed, so having an AI identify which control has changed and then fix the tests up with a new working locator automatically would be amazing.


Hello :pray:

Reflecting on these AI tools for mobile test automation, I’m truly amazed by the innovation and efficiency they bring to the testing process. :star2:

Testim :robot:: The ability of Testim to speed up test creation and ensure stability using AI is remarkable. It simplifies the tedious task of identifying UI elements and adjusting tests dynamically, making test maintenance much more efficient. :arrows_counterclockwise:

Appium Studio by Experitest :iphone:: Appium Studio’s AI-driven solutions offer a comprehensive approach to mobile test automation. Features like automatic test recording and smart test maintenance greatly reduce the burden on mobile developers, allowing them to focus more on building quality apps. :hammer_and_wrench:

Test.AI :brain:: Test.AI’s specialization in AI-driven testing solutions is evident in its capability to automatically generate test scripts and identify UI elements. This facilitates faster and more accurate testing, enabling teams to deliver reliable mobile applications with confidence. :dart:

Bitbar :calling:: Bitbar’s AI-powered solutions, coupled with real device testing and automated test execution, significantly enhance testing efficiency. The AI-driven test optimization ensures high-quality releases, which is crucial in today’s competitive mobile market. :chart_with_upwards_trend:

Eggplant :egg:: Eggplant’s AI-driven testing platform offers robust capabilities for mobile testing across various devices and platforms. By leveraging AI and machine learning, Eggplant streamlines the entire testing process, empowering teams to deliver flawless mobile experiences. :egg:

Overall, these AI tools are revolutionizing mobile test automation, enabling developers to create high-quality apps more efficiently than ever before. By integrating AI capabilities into their testing processes, teams can improve productivity, increase test coverage, and ultimately deliver exceptional user experiences. :rocket:

Connect with Manoj Kumar B :star2::man_office_worker:


Hello there!

It is a wonderful opportunity to learn about AI from different points of view. I am working as a manual software test engineer with 10 years of experience, and I am new to automation and AI testing. It means a lot to learn how AI testing can be done and which resources are available.

I’d appreciate any suggestions or resources.

Thank you,


Hi, I think a good usage is to use AI for creating test scripts and tools. E.g. a colleague with a programming background who hasn’t coded for many years was able, with the help of Copilot, to create a script to extract some relevant information from an installer’s log file.


AI uses methods and techniques to increase test effectiveness. AI is applied in software testing to prioritize tests, generate test cases, predict defects, automate tests, and run regression testing effectively.
AI tools that can enhance testing
Mabl – Users can create automated tests with low code and reduce test maintenance with intelligent features like auto healing.
TestCraft – AI helps to generate automated tests and generates test scenarios.
Tosca – Codeless AI approach to optimize and accelerate the end-to-end testing

The AI uses that would be most useful for me are assessing test coverage and generating test cases to write end-to-end tests.


Hello everyone,

So far, we are all aware that every day new tools emerge with AI in their main engine. Some of the best-known current applications in terms of natural language, and the ones I have had the opportunity to apply in my work area, are:

1- GPT-3 (ChatGPT).
2- Azure AI.
3- Google BARD.

Starting from different approaches and capabilities, each of these tools has been developed by a different organisation, and the choice of the right artificial intelligence tool for use in QA will depend not only on technical needs but also on less-considered factors, such as the privacy of the data we use to feed the AI.

That said, a wide range of new challenges opens up for the QA of today and the future. In the area of testing, it is about identifying each of the QA processes in which AI can accompany us: automating a large part of routine and repetitive tasks, freeing ourselves to focus on more strategic aspects of greater added value that AI cannot replace.


AI is revolutionizing testing practices by enhancing efficiency and accuracy. Here are three key AI applications in testing:

1. Generating Test Cases from RFC Documentation

  • Use: Automates the creation of test cases from technical RFC documents using NLP.

  • Tools: Custom solutions with NLP libraries (e.g., NLTK, SpaCy) can be developed, leveraging AI platforms like IBM Watson.

  • Benefit: Saves time and ensures comprehensive coverage of specifications.

2. Integrating High-Fidelity Mockups for Test Case Generation

  • Use: Generates test cases by analyzing UI designs from tools like Figma.

  • Tools: Visual AI tools (e.g., Applitools) could be adapted for this purpose.

  • Benefit: Aligns tests closely with user experience and design intentions early in development.

3. Self-Healing Tests

  • Use: AI updates test scripts automatically to adapt to changes in the application.

  • Tools: Testim, Katalon, and Functionize are notable for self-healing capabilities.

  • Benefit: Reduces maintenance burden and improves test reliability.
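The RFC-to-test-case idea in point 1 can start much simpler than a full NLP pipeline: RFC 2119 keywords (MUST, SHOULD, etc.) already mark the normative, testable statements in a spec. A minimal keyword-extraction sketch in Python, meant as a seed for test cases rather than a complete solution - the sample `spec` text is invented for the example:

```python
import re

# RFC 2119 keywords mark testable requirements; longer alternatives
# ("MUST NOT") come first so they match before their prefixes ("MUST").
REQUIREMENT_RE = re.compile(
    r"\b(MUST NOT|MUST|SHALL NOT|SHALL|SHOULD NOT|SHOULD)\b")

def extract_requirements(rfc_text):
    """Pull sentences containing normative keywords as test-case seeds."""
    sentences = re.split(r"(?<=[.!?])\s+", rfc_text)
    seeds = []
    for sentence in sentences:
        match = REQUIREMENT_RE.search(sentence)
        if match:
            seeds.append({"keyword": match.group(1),
                          "requirement": sentence.strip()})
    return seeds

spec = ("The server MUST return 200 on success. "
        "Clients SHOULD retry after a 503 response. "
        "This section is informational.")
cases = extract_requirements(spec)
```

Each extracted requirement then becomes a candidate test case, with the keyword indicating its priority (MUST before SHOULD); a real pipeline would add NLP on top to handle cross-references and conditional clauses.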

Reflection and Application

Integrating AI for generating test cases from RFC documentation and mockups can significantly streamline testing processes, especially when combined with self-healing tests using tools like Testim. This approach ensures comprehensive test coverage, early UI validation, and maintains test relevance over time, enhancing overall testing efficiency and reliability.


Hello all.
I took a look at the AI tools currently available on the market, and most of them don’t instill confidence for me to rely on them. What I’ve noticed is that ChatGPT can handle a significant portion of the work when it comes to assisting with testing (test scenarios, code writing assistance …). Some may use a dedicated application for generating tests, while others may use a separate application for coding assistance. However, I am quite satisfied with what GPT offers.

Of all the tools I’ve read about, Azure AI has caught my interest the most. The tool shows promise; however, it’s still in development, and I see that it has integration issues. So, I would wait a little longer before diving deep into Azure AI.


Judging by some of the comments I see on this forum, am I the only one who thinks AI perhaps assists some of the authors? Surely the point of a forum is to escape the AI-generated blog posts that have taken over this past year or so and concentrate on humans? Am I gatekeeping unnecessarily? I guess I just find it a bit weird to see perfectly bullet-pointed lists springing up everywhere.