šŸ¤– Day 3: List ways in which AI is used in testing

In my context, some areas we are struggling with a little at the moment are:

  1. Missing certain use cases on device/browser combinations.

  2. Brittle end-to-end tests due to content changes on our website.

  3. Some test suites taking longer than we’d like.

  4. Alerts firing too frequently due to slow performance.

In my research, I found some potential ways that AI-assisted tools could help us with these things:

  1. I didn’t find a specific tool that could help with this, but AI-assisted tools in general could help us identify these types of scenarios based on usage in production.

  2. Self-healing tests could help with the brittleness of the tests we have by monitoring our site for changes to the UI and adapting our tests.

  3. I read about a tool called Launchable which analyses your test runs, sees which ones are most likely to fail, and runs these tests first. It can also provide insight into which tests are flaky as well as never-failing tests :face_with_peeking_eye:

  4. I came across a tool called BigPanda whose aim is to provide software teams with intelligent alerts. They acknowledge the large amount of data we have access to now and not all of it is relevant to every situation. They aim to gather relevant data for incident management to speed up analysis and fixing of the problem. Further down the line, it could analyse the frequency of the performance alerts we generate and recommend a better system to highlight these to us.
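
The predictive test selection behind a tool like Launchable (point 3) is proprietary, but the core idea, run the tests most likely to fail first, can be sketched in a few lines. This is a toy illustration under my own assumptions, not Launchable’s actual model:

```python
# Order tests so that those with the highest historical failure rate run
# first -- a toy version of predictive test selection. Real tools also
# weigh code changes, test timings, flakiness, and more.

def prioritise(tests, history):
    """tests: list of test names; history: {name: [True=passed, False=failed]}."""
    def failure_rate(name):
        runs = history.get(name, [])
        if not runs:
            return 0.5  # unknown tests get a middling score
        return runs.count(False) / len(runs)
    return sorted(tests, key=failure_rate, reverse=True)

history = {
    "test_login":    [True, True, True, True],    # never fails
    "test_checkout": [True, False, True, False],  # flaky
    "test_search":   [False, False, False, True], # usually fails
}
order = prioritise(["test_login", "test_checkout", "test_search"], history)
print(order)  # most failure-prone first
```

A never-failing test like `test_login` sorts to the back, which is also how such tools surface tests that may no longer be earning their keep.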

4 Likes

Hi there! Going to write mine first before reading the comments.
I watched half of this video this evening. https://www.youtube.com/watch?v=x4HV-zXy4t4
I really liked how they mentioned treating AI like a sidekick :smiley: That’s a good way to remember it.
They talked about Katalon Studio’s Jira plugin, which you can use to generate tests that validate the user story. I liked the mention of how even a BA could use it to check whether they have enough information in the user story. Then a tester could come along and use the tests as a starting block, and apply their experience/domain expertise to think: what else? I feel this use of AI could be very useful. I have observed teams spending a lot of time uplifting user stories in refinements, or lots of back and forth, so it could make the process more efficient.

5 Likes

Not had much time to do this fully but here’s some actual real uses of AI in Testing that I’ve experienced first hand:

  1. Defect analysis and duplicate identification. We’re building a tool that uses AI to identify, when a defect is being reported, whether it is similar to existing defects. The tool can also analyse defect databases and identify those with close matches. This tool is very useful for clients with huge backlogs and/or multiple teams working on the same product.

  2. Code generation. AI has replaced Google and Stack Overflow for many. It’s a lot quicker and more exact in its responses. As someone who doesn’t code much, I’ve used it to build a couple of little tools for launching apps, creating config files and sorting data.

  3. Assisting test case ideation. It’s really useful to think about how you’ll test something and write down all the key ideas. Then run the question through AI and see what it comes up with. Hopefully nothing you’ve missed, but sometimes things are so obvious you become blind to them. I find AI is great for a sense check and a different view on the same problem. I only ever go to AI after doing it myself though, and always scrutinise the AI suggestions.

2 Likes

GitHub Copilot is a revolutionary tool developed by GitHub in collaboration with OpenAI. Launched in 2021, Copilot leverages artificial intelligence to provide code suggestions directly within your development environment, making code writing and program creation easier. It functions as an extension for certain integrated development environments (IDEs), such as Visual Studio Code.

Here are some key features of GitHub Copilot:

  1. Code Autocompletion: Copilot automatically suggests code snippets as you type. It uses language models trained on large datasets to predict the next code based on context.
  2. Error Correction: In addition to suggesting new code, Copilot also helps identify and correct common errors in existing code.
  3. Test Generation: The tool assists in creating automated tests for your code, promoting good test-driven development (TDD) practices.
  4. Multilingual Support: GitHub Copilot provides suggestions in various programming languages, making it useful for developers working across different environments.
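
As a flavour of the test-generation feature (point 3), here is the kind of unit test an AI assistant will typically suggest when prompted with a small function. This is a hand-written approximation of such a suggestion, not actual Copilot output:

```python
# A small function under test, plus the style of tests an AI assistant
# tends to suggest from a prompt like "write tests for slugify".
import re

def slugify(text: str) -> str:
    """Lower-case, trim, and replace runs of non-alphanumerics with '-'."""
    text = re.sub(r"[^a-z0-9]+", "-", text.strip().lower())
    return text.strip("-")

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_punctuation():
    assert slugify("  C++ & Rust!  ") == "c-rust"

test_slugify_basic()
test_slugify_punctuation()
print("all tests passed")
```

Suggestions like these are a starting point: they cover the obvious paths, and the tester still adds the awkward edge cases the model missed.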

To integrate GitHub Copilot with Visual Studio Code, follow these steps:

  1. Installation: Ensure you have VS Code installed. Then, install the GitHub Copilot extension directly from the VS Code extension store.
  2. Activation: After installation, activate Copilot and log in with your GitHub account.
  3. Usage: Start typing your code as usual. Copilot will provide relevant suggestions as you write.

Exploring this tool can optimize your workflows and enhance your development experience. Give GitHub Copilot a try and discover how it can boost your productivity! :rocket:

References:
  • How to Use GitHub Copilot in VS Code?
  • Artificial Intelligence for Automated Testing - GitHub Copilot
  • GitHub Copilot: Artificial Intelligence for Testing | Udemy
  • How to Use GitHub Copilot in Automated Testing - YouTube

4 Likes

AI is increasingly employed in various ways to enhance and streamline the testing process. Here are several ways in which AI is utilized in testing:

  1. Test Automation: AI helps in creating and maintaining automated test scripts. Machine learning algorithms can learn from manual test cases and generate automated scripts, reducing the effort required for script development.
  2. Automated Test Case Generation: AI algorithms can analyze requirements and generate test cases automatically, identifying potential edge cases and scenarios that might be overlooked by human testers.
  3. Dynamic Test Data Generation: AI can generate diverse and realistic test data, ensuring comprehensive coverage of different scenarios and conditions during testing.
  4. Defect Prediction and Analysis: AI models can predict potential defects by analyzing historical data, code changes, and other relevant information. This helps prioritize testing efforts on areas more likely to have issues.
  5. Performance Testing: AI is used to simulate realistic user loads and behaviors in performance testing scenarios, helping identify and address bottlenecks and performance issues.
  6. Visual Validation Testing: AI-powered tools can compare screenshots and identify visual differences in user interfaces, helping ensure consistency across different devices and browsers.
  7. Natural Language Processing (NLP) for Requirements: AI, through NLP, can understand and process natural language requirements, assisting in the creation of test cases that align with the specified functionalities.
  8. Predictive Analysis for Test Planning: AI can analyze historical testing data and project timelines to optimize test planning and resource allocation, improving overall testing efficiency.
  9. Security Testing: AI tools can simulate cyber-attacks and analyze vulnerabilities in software applications, aiding in the identification and mitigation of security risks.
  10. Regression Testing Optimization: AI helps in selecting and prioritizing test cases for regression testing, focusing on areas of the application more likely to be impacted by recent changes.
  11. Automated Test Maintenance: AI can automatically update and maintain test scripts as the application evolves, reducing the effort required to adapt tests to changes in the software.
  12. User Behavior Simulation: AI can simulate user interactions with the application, helping testers understand how real users might engage with the software under various conditions.
  13. API Testing: AI can be applied to automate the testing of APIs, ensuring the proper functioning of the interfaces between different software components.
  14. Chatbot Testing: AI-driven chatbots used in applications or customer support can be tested using AI tools to simulate user interactions and assess the chatbot’s responses.
  15. Explainable Test Results: AI can provide insights into test results by explaining why a particular test case failed or succeeded, aiding quicker issue resolution.

These applications demonstrate the versatility of AI in testing, improving efficiency, accuracy, and coverage throughout the software development lifecycle.
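
Taking item 6 as an example, the heart of visual validation is comparing a baseline screenshot against a new one. Commercial tools layer perceptual AI models on top, but a minimal diff looks like this (screenshots are modelled as nested lists of pixel values purely to keep the sketch self-contained):

```python
# Minimal visual diff: compare two "screenshots" pixel by pixel and
# report what fraction changed. Real tools (Applitools, Percy, etc.)
# use perceptual models rather than exact pixel equality.

def visual_diff(baseline, candidate):
    assert len(baseline) == len(candidate), "screenshots must match in size"
    total = changed = 0
    for row_a, row_b in zip(baseline, candidate):
        for px_a, px_b in zip(row_a, row_b):
            total += 1
            if px_a != px_b:
                changed += 1
    return changed / total

baseline  = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
candidate = [[0, 0, 0], [0, 255, 255], [0, 0, 0]]  # one pixel differs
ratio = visual_diff(baseline, candidate)
print(f"{ratio:.1%} of pixels changed")
```

The AI part in real tools is deciding which of those changed pixels a human would actually care about, which is what keeps anti-aliasing and rendering noise from failing every run.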

4 Likes

Hi,
Here are the ways in which AI is used in testing:

1. Automated Test Script Generation:

  • AI can analyze existing test cases and user behavior to automatically generate new ones. This saves testers time and effort, especially for repetitive tasks.
  • AI can identify patterns and variations in user interactions, leading to more comprehensive test coverage.

2. Test Data Optimization and Management:

  • AI can generate realistic test data based on historical data or specific scenarios. This eliminates the need for manual data creation and ensures data quality.
  • AI can help optimize test data sets, reducing redundancy and ensuring the most relevant data is used for testing.

3. Image Recognition for Visual Testing:

  • AI algorithms can be used to automate visual regression testing, comparing screenshots of an application’s UI across different versions to identify any visual changes that might break functionality.
  • AI can detect visual defects like misalignment, incorrect colors, or UI element inconsistencies.

4. AI-powered Defect Detection:

  • AI can analyze logs, user behavior data, and application performance metrics to identify potential defects or bugs.
  • AI can learn from past defect patterns to predict and prevent future occurrences.

5. Smart Test Execution and Prioritization:

  • AI can prioritize test cases based on risk assessment, user impact, and historical data, focusing on areas most likely to contain defects.
  • AI can self-learn and adapt test execution strategies based on test results, optimizing the testing process over time.

6. Chatbot Testing and Conversational AI:

  • AI can be used to create test bots that interact with chatbots or virtual assistants within an application, testing their functionality and identifying conversational issues.
  • AI can simulate various user interactions and conversational flows to ensure a smooth user experience.

The use cases for AI in testing are constantly evolving. As AI technology advances, we can expect even more innovative ways to leverage its power for efficient and comprehensive software testing.
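
The defect-detection idea in point 4 can be illustrated with a toy risk score: rank source files by recent churn weighted by how many defects they have produced before. Real models use many more signals (ownership, complexity, coupling), but the shape is the same. The file names and weights here are made up for illustration:

```python
# Toy defect-prediction score: files with lots of recent changes AND a
# history of defects get the highest risk score.

def risk_scores(churn, past_defects):
    """churn: {file: lines changed recently}; past_defects: {file: defect count}."""
    return {
        f: churn.get(f, 0) * (1 + past_defects.get(f, 0))
        for f in set(churn) | set(past_defects)
    }

churn = {"checkout.py": 120, "search.py": 15, "login.py": 40}
past_defects = {"checkout.py": 4, "login.py": 1}
ranked = sorted(risk_scores(churn, past_defects).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked[0][0])  # highest-risk file
```

Even a crude score like this gives testers a defensible ordering for where to spend exploratory time after a big merge.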
Thanks

2 Likes

My thinking:

  • Test Data Generation: By providing AI tools with corresponding data rules, they can help generate test data that includes various scenarios. The corresponding article is: Test Data That Thinks for Itself: AI-Powered Test Data Generation

  • Defect Prediction: AI can analyze our historical data to predict areas of the codebase that are more prone to defects or project risks, thus allowing us to focus our testing efforts. The corresponding article is: How Can AI and Machine Learning Predict Software Defects?

  • Visual Testing: AI-driven visual testing tools (such as Applitools, Percy) can identify visual differences across various browsers and devices. The corresponding article is: Applitools: AI-Driven Test Automation

  • QA Knowledge Base: By feeding our existing QA knowledge base information to AI, we can train our own AI knowledge base bot to help improve the efficiency of the knowledge team.

  • QA Test Tool Development: AI assists us in developing testing tools.
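
The first bullet, handing an AI tool data rules and letting it produce varied test data, can be approximated deterministically too. A rough sketch where the rules are plain Python (the field names and rules are invented for illustration, not any real tool’s API):

```python
# Rule-driven test data generation: each field has a rule, and we emit
# rows that satisfy them. AI tools derive rules from natural language;
# here they are hand-written functions.
import random

RULES = {
    "age":   lambda rng: rng.randint(18, 99),
    "email": lambda rng: f"user{rng.randint(1, 999)}@example.com",
    "plan":  lambda rng: rng.choice(["free", "pro", "enterprise"]),
}

def generate_rows(n, seed=0):
    rng = random.Random(seed)  # seeded for reproducible test runs
    return [{field: rule(rng) for field, rule in RULES.items()}
            for _ in range(n)]

rows = generate_rows(3)
for row in rows:
    assert 18 <= row["age"] <= 99 and "@" in row["email"]
print(len(rows), "rows generated")
```

Seeding the generator matters: reproducible data makes a failing test re-runnable, which randomly generated data otherwise breaks.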

2 Likes

AI tools useful in my context (as a blackbox manual tester):

  1. Automated test case generation / Visual testing - Katalon
    Their team is working actively on the AI features with community support, and the onboarding flow / usability was smooth when I tried it out. Not sure how powerful it is yet, but it is a nice tool even without the AI features.

  2. Test case / Test data / Translation text / Error message generation - Google Gemini, ChatGPT
    Needs a lot of time to tune the prompts, and the results are often inaccurate or out of context, which takes a lot of time to review. However, it can generate output quickly and in a well-formatted way, especially when creating repetitive test cases.

2 Likes

Hello Everyone,

Here are my answers to day 3 tasks
Some areas where AI can be used in Software testing

  1. Test case generation: AI algorithms can generate test cases automatically based on the application’s specifications, requirements, and historical data. This helps in increasing test coverage and identifying edge cases that may not be obvious to human testers.
  2. Defect prediction: AI can analyze code changes, historical defect data, and other factors to predict areas of the code that are more likely to contain defects. This allows testers to focus their efforts on high-risk areas.
  3. Log analysis: AI can analyze log files generated by the software under test to identify patterns and anomalies that may indicate defects or performance issues.
  4. Test result analysis: AI can analyze test results to identify patterns and trends that may indicate the presence of defects or areas for improvement in the testing process.
  5. Test data generation: AI can generate test data automatically, ensuring that the data covers a wide range of scenarios and edge cases.
  6. Visual & Accessibility Testing: AI-powered visual and accessibility testing can help catch bugs that the human eye can miss.
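
The log analysis idea in point 3 can start very simply: scan log lines for error patterns and flag components with repeated failures. AI log tools cluster messages and detect anomalies statistically; this bare-bones sketch (with invented log lines) just counts and thresholds:

```python
# Bare-bones log analysis: count ERROR lines per component and flag any
# component with repeated errors. Real AI tools cluster similar messages
# and score anomalies instead of using a fixed threshold.
from collections import Counter
import re

LOGS = """\
INFO  auth    user logged in
ERROR payment card declined
ERROR payment gateway timeout
ERROR payment gateway timeout
INFO  search  query ok
ERROR auth    token expired
"""

def error_counts(logs):
    counts = Counter()
    for line in logs.splitlines():
        m = re.match(r"ERROR\s+(\w+)", line)
        if m:
            counts[m.group(1)] += 1
    return counts

counts = error_counts(LOGS)
suspects = [c for c, n in counts.items() if n >= 2]
print(suspects)  # components with repeated errors
```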

Below are some AI-powered tools.

  1. Applitools
  2. Katalon
  3. Testim
  4. GitHub Copilot

Thanks,
Akanksha

3 Likes

Hi Mirek,
Thanks for the sneak peek into the usage of AI tools in real testing scenarios. Up until now I’ve seen more people talk about the benefits of AI in testing than actual uses :sweat_smile:
As far as my experience goes, I’ve used ChatGPT to help build small widgets to perform repetitive tasks.

3 Likes

I hear you Dan,
That’s exactly what I thought. We need examples of people actually using the damned thing instead of listing the clichĆ©d pros and cons.

1 Like

I’ve been searching the internet for a testing tool with AI that can produce lots of data, and found this one: BlazeMeter
I have not used it in my testing tasks yet, but I would appreciate any feedback about it, or a recommendation on which tool suits my needs best :pray:

2 Likes

Great insights here.

1 Like

Reviewed article - https://www.code-intelligence.com/blog/ai-testing-tools
Some really interesting stuff!
For me (a functional ā€œall roundā€ tester, not an SDET), I sometimes struggle to understand how AI can help the functional tester without a lot of coding experience. Here goes:
I found the tools that generate test cases and coverage useful: they save time, but also capture some scenarios that a tester might not think of. Also those that provide test result analysis and help find the root problem quicker.
Some tools that stood out:

TestRigor: Identifying elements as they appear on screen and allowing testers to focus on what needs to be tested, rather than coding implementations and test maintenance.

Digital.ai Continuous Testing - I like that this tool considers performance and accessibility as well as functional tests, and again that it’s designed for non-coders.

Also Testsigma, a non-coder platform that can be utilised across many platforms. I think a couple of past colleagues in the same position as me used this and were successful.

3 Likes

Good morning,

AI uses I have found:

  1. Test case creation using LLMs via something like ChatGPT or Copilot. Requires good prompt engineering to get the right level of detail.
  2. Test automation - can be hooked up with Selenium, but also via tools that lessen the need for coding. This appeals to me with a heavily manual background.
  3. Test data creation - use of tools can generate realistic test data.

These 3 above are where I take the most interest, in terms of quick wins for my current role. From there I hope to build my knowledge and implement other, more complex uses.

2 Likes

Ok here’s my take.
All the mentions of tools that provide low-code solutions, along with buzzwords like self-healing tests and predictive analytics, are mostly paid offerings and may not actually suit your bespoke needs.

Here are some ways I have used AI to aid my testing:

  1. Test data generation:
    – I fed ChatGPT a setup of the kind of industry my software would be used in and then asked it to provide realistic data. The response was pretty good. I could have gone one step further and asked it to generate a script that feeds data like that into the software using the browser console.

  2. Small test scripts generation:
    – I asked ChatGPT to give me a code snippet in JS that checks for and prints unique identifiers that could be duplicated. I was impressed.
    – I asked ChatGPT for some Playwright code. I did this several times, and 60% of the results gave me code that didn’t work properly. Sometimes it suggested ā€œbuilt-inā€ methods that did not exist.
    – I asked ChatGPT to help me build an Excel sheet but ended up doing it myself.

  3. Identifying code issues:
    – This is where ChatGPT shines; it is able to suggest fixes and workarounds for lines of code throwing errors.

  4. Understanding code and asking for improvements:
    – Another great benefit of ChatGPT has been that it explains bits of code in great detail. I enjoy understanding DevOps concepts as well.
    – Clean code with error handling and comments is what ChatGPT does best, and I believe tools like Copilot will also help.

  5. Generating test cases:
    – Nope. It failed. See, every software out there caters to a large set of users and solves an even larger set of problems. It’s got limitations and business rules and what not. There are a ton of things your software will do that won’t go by the book. Feeding all of that into a prompt and then asking for suggestions tends to consume more time. Doing it bit by bit on a smaller level is, in my experience, better, as you’re able to guide the testing using your own judgement.
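
For anyone curious, the duplicate-identifier check in point 2 really does boil down to very little code. Here is my reconstruction of the same idea in Python (the original snippet was JS, and the sample IDs are made up):

```python
# Find identifiers that appear more than once in a list -- the same
# idea as the duplicate-check snippet mentioned above, in Python.
from collections import Counter

def find_duplicates(identifiers):
    return sorted(ident for ident, n in Counter(identifiers).items() if n > 1)

ids = ["INV-001", "INV-002", "INV-001", "INV-003", "INV-002"]
print(find_duplicates(ids))  # ['INV-001', 'INV-002']
```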

5 Likes

There are a growing number of new tools that are interesting, but I am yet to see sufficient evidence in the marketplace of how they really enhance testing. Some tools like LoadRunner and Jira claim some AI utility, but their true effectiveness is still out for review.
One area that I have seen actively and effectively used is building utilities that utilise the ChatGPT AI. A good video is Prompt engineering with spreadsheets

3 Likes

Hi Guys.

Create resilient end-to-end test:

  • Smart locators: enable multiple users/teams to work on one single application by allowing users to select and change a specific item
  • Dynamic locators: these use multiple attributes of an element to locate it on the page

Tool : Testim

Visual testing: finds visual bugs in apps and makes sure that no visual elements are overlapping, invisible, or off the page, and that no new unexpected elements have appeared

Tool: Applitools

Identify flaws and vulnerabilities with each code change: allowing more robust automated testing to occur throughout the development lifecycle, assuring high-quality code while seamlessly integrating with various coding environments.

Tool: Code Intelligence
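
The ā€œdynamic locatorā€ idea above, use several attributes of an element and fall back when one changes, can be sketched without any particular tool. Elements are modelled here as plain dicts standing in for DOM nodes; Testim’s real implementation is of course proprietary and weighs many more attributes with ML:

```python
# Self-healing locator sketch: try a list of attribute strategies in
# order until exactly one element matches.

def locate(elements, strategies):
    """strategies: list of (attribute, expected value) pairs, tried in order."""
    for attr, value in strategies:
        matches = [el for el in elements if el.get(attr) == value]
        if len(matches) == 1:
            return matches[0], attr
    return None, None

dom = [
    {"id": "btn-42", "text": "Buy now", "css": "btn primary"},
    {"id": "btn-43", "text": "Cancel",  "css": "btn"},
]
# The id changed in a new build, so the first strategy fails, but the
# text-based fallback still finds the button and the test "self-heals".
el, used = locate(dom, [("id", "btn-7"), ("text", "Buy now")])
print(used)  # the attribute that succeeded
```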

We have already seen that AI can help us a lot in different aspects of our daily work. In my case, visual testing would be very useful to compare new designs with existing ones, and of course all the elements on a specific page.
AI has come to help us a lot in saving time so we can focus on other aspects that we were not paying much attention to.

2 Likes

For now, I think generative AI tools can mainly be used to facilitate the test process. I only have hands-on experience with AI and exploratory testing, mainly ChatGPT 3.5.

In regards to exploratory testing:

  • Creating test data based on given specs
  • Starting off with a strategy
  • A base of test subjects

Generative tools are not trustworthy enough (yet) to be used for uncontrolled testing, in my opinion, and miss the deep context for testing an application properly.

2 Likes

I have picked 5 topics from the blog post ā€˜AI In Software Testing — How can AI be used in software testing?’ by Testsigma Inc. on Medium.

Automated Test Case Generation:

AI can generate test cases automatically based on specifications, code analysis, and historical data. This speeds up test creation and ensures comprehensive coverage, reducing the manual effort required.

I found a company called Taskade that states that they have an ā€˜AI Test Case Generator’ (URL: AI Test Case Generator | Taskade). It looks promising, but I need to look into it.

Tricentis mentions in the blog post ā€˜Myth vs. reality: 10 AI use cases in test automation today’ (URL: 10 AI Use Cases in Test Automation - Tricentis) a reality and a myth about automated test case generation.

Regression Testing Optimization:

AI can prioritize test cases for regression testing, focusing on the most critical areas affected by recent code changes. This optimizes testing efforts and reduces the time required for testing cycles.

Katalon comes with a new tool that is really useful for regression test optimization, see blogpost ā€˜Regression Testing: Embracing the Power of AI and Automation’ (URL : The Power of AI in Regression Testing | Katalon)

Defect Prediction:

AI can predict potential defects by analyzing code changes, commit history, and other relevant data. This allows testers to address issues proactively.

Validata shows on their website a good description of AI-powered Defect Detection and Prediction (URL: Validata Software - AI-powered Defect Detection and Prediction). Their tool Validata Sense.ai supports this effort.

Test Data Generation:

AI can generate diverse and realistic test data that covers different scenarios, ensuring thorough testing of various conditions.

Several vendors are out there to help with test data generation, like Testsigma (URL: https://testsigma.com), Datprof (URL: https://www.datprof.com/) and Mostly AI (URL: https://mostly.ai/).

Log Analysis:

AI can analyze log files to identify errors, exceptions, and patterns that may indicate issues, helping testers pinpoint defects quickly.

LogicMonitor has an interesting blog post about their log analysis tool: How to Analyze Logs Using Artificial Intelligence (URL: How to Analyze Logs Using Artificial Intelligence | LogicMonitor).

For me, these 5 topics are key in using AI support. Note that I’m not a user of these tools, but finding out about them for today’s exercise triggered my interest.

6 Likes