šŸ¤– Day 3: List ways in which AI is used in testing

Hello fellow testers. :smiley_cat:

I will start by sharing the site I find useful for discovering new AI tools for all sorts of things.

Here you can search for AI tools by category of interest; one of the categories is Software Testing.

So I am going to browse there and explore some interesting new AI tools for testing.

This is my list; I couldn’t resist sharing a bit more than just 3:

  • You can easily modify the screenshot of the app to match your vision (screenshot-to-design option) - related more to design, but sometimes it comes in handy to visualize how you would improve the UI of the app
  • chatGPT specialized for QA. Obviously I can use a QA expert on my team, even in AI form :slight_smile:
  • chatGPT with a focus on software testing, test automation and AI, based on the expertise and know-how of Jason Arbon. This is new to me: like having experts such as Jason at my fingertips at each and every moment. This is the next-level chatGPT compared to the previous one I listed. Crazy!
  • Test automation platform for Visual AI e2e tests supporting multiple platforms and browsers
  • UI automation without needing to deal with selectors, only human-readable scripts, and it supports cross-device automation
  • Low-code test automation solution with features ranging from auto-healing to API testing
  • Security testing tool that uncovers business logic defects to prevent application vulnerabilities
  • Set of tools designed to enhance digital accessibility. I think this is an important type of testing that is often marginalized for some reason.

I think listing the tools also gives a sneak peek at the kinds of ways AI can be used in testing :smiley_cat:

10 Likes

I’m not sure if that’s the case, but I tried using ChatGPT to help me with the research part of today’s task and found it to be genuinely useless. I reframed my questions several times in an attempt to obtain the information I needed, but it produced very little real value. I went back to good old-fashioned manual research.
Fine to use these tools in their assistive capacity, which I think is their strength, but wrong to just regurgitate their output without applying your own thoughts to the task. If there is anyone doing this (I hope you are wrong) then they won’t get much from the process themselves and their posts will create noise that will distract from those giving genuine insight.

3 Likes

I found this task really hard, as there is so much information out there it’s difficult to work out what is genuinely useful and what is hype. It’s even more difficult to work out which tools and aspects of AI could be applied to my specific context. For this reason, I won’t be making my own list, but I’ll be watching the posts of others with interest. Apologies for the cop out!

5 Likes

I have been reading through the contributions from everyone so far and adding to my bookmarks! :slight_smile:

The subject that has stood out for me was around self healing automated tests. So I started to read a bit more.

On the TestRigor blog there is this post.

I find the idea of self-healing tests a bit risky.
The main example given is around identifier names changing: a standard automated test would fail, but one using self-healing can update the identifier it uses and will just put out a notice about the discrepancy.

So in a Continuous Deployment workflow, this would mean your code can still go to production.

Take the basic example: an identifier name changed, the update was missed in one test case, and now the test fails even though nothing is functionally different. The AI can recognise this as a missed update and continue running the test suite. Even so, I find it risky.
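To make the mechanism concrete, here is a minimal sketch in plain Python of the fallback-and-notice behaviour described above. It is only an illustration: real self-healing tools use ML scoring over element attributes, and every name here is invented.

```python
# Illustrative sketch only: self-healing locator lookup in plain Python.
# "page" stands in for the DOM: a mapping from locator string to element.

def find_element(page, locator, fallbacks, log):
    """Try the primary locator; on failure, try fallbacks and log a notice."""
    if locator in page:
        return page[locator]
    for candidate in fallbacks:
        if candidate in page:
            # The test keeps running; the discrepancy becomes a mere notice.
            log.append(f"healed: '{locator}' -> '{candidate}'")
            return page[candidate]
    raise LookupError(f"no locator matched: {locator}")

# Example: the app renamed the id, yet the test still passes, with a notice.
page = {"submit-btn-v2": "<button>"}
notices = []
element = find_element(page, "submit-btn", ["submit-btn-v2"], notices)
```

The point of the sketch is the last part: the heal is only a log entry, so in a Continuous Deployment pipeline nothing stops the build.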

This might be because I was working in a highly regulated environment where these discrepancies could cause an audit issue.

The other example, a checkout flow changing, I do find more problematic. The re-ordering of the flow could actually be a bug, and if a self-healing test accepts it, the code could be in production for a little while before the notice from the test suite is reviewed.

Has anyone got proper experience with self healing tests in a regulated environment? I might be overthinking this.

3 Likes

I don’t think this is a cop out. You are joining in by observing and seeing if anything feels like it applies to your context.

Maybe there is a concept that interests you and you can have a deeper look at that. I might be doing it a bit differently too as you can see in my post :smiley:

4 Likes

Thanks @lisboalien for sharing the link. I haven’t used Selenium since my last automation practice a long time ago. After the introduction of AI in Selenium, I would love to see the new changes and their practical implementation.

2 Likes

Hah indeed. Everyone else’s posts are much more educated and academic than mine. I want to see practical examples. Early days yet, though :slight_smile:

2 Likes

Thanks @parwalrahul for sharing this :+1:

2 Likes

I was interested in the link which says that Selenium ā€œis now equipped with AIā€ but I can’t find any more detailed information on this. Is the statement true? If so, what exactly are the AI features of Selenium? Do you, or anyone else on this forum, have any insights on this?

1 Like

I totally agree, most no-code or low-code tools have limitations that grate when you’re used to writing your own frameworks.
I’d always recommend learning to write code over learning a tool, it’s just so much more versatile. It’s a bit like cooking - learn to use a knife properly, and you don’t need the 12 gadgets that clog up your kitchen to produce small cubes, medium cubes, matchsticks … etc.

7 Likes

This may be Selenium Sage - Robin Gupta did some work on integrating the Selenium documentation with chatGPT to have a Selenium-aware chatbot. He was using this to answer some newbie questions, but as with most open source projects, starting to fill spaces with generated text is not fully welcome. While he himself is perfectly capable of recognizing whether an answer is correct and helpful, and it would fast-track him to answers, we were not entirely certain that, left unattended, that would be the content to fill the space with.

I follow the Selenium project closely (as a member of the Selenium Project Leadership Committee) and have not noticed the core project introducing AI. However, Selenium drives browsers for most of the AI-powered tools, and since it is an open source library, it may be completely hidden from the user.

7 Likes

My personal use of specific test tools is very limited - I generally write test frameworks and automated tests, and have a very ambivalent relationship with no-code or low-code tools. I think those can be great to bridge the gap for someone who’s scared of writing code, but ultimately I see the role of the tester morphing towards becoming a more versatile member of the development team who’s capable of picking up other tasks than purely quality and test related ones.
To that end I think it’s useful for testers to develop at least a solid basic understanding of code, and once you’ve got that it’s just so limiting to use a tool.

All of that is a long-winded way of saying I mostly use GitHub Copilot, Gemini and ChatGPT to help me code faster and figure out gnarly syntax issues.

But because I’m here to learn, and want to approach this with an open mind, I used JarbonAI to tell me what else is out there, and I’ll have a look at some of the tools in the coming weeks.

3 Likes

Diving into how AI is shaking up the testing game:

  1. Seeing is Believing: Ever tried playing spot the difference with your app’s UI? Tools like Applitools are using AI to catch those sneaky visual bugs that slip past us. It’s like having eagle eyes for your app’s look and feel.
  2. Smart Tests that Fix Themselves: Imagine writing tests that adapt on their own when your app’s UI changes. That’s what Testim and Functionize are all about. It’s like your tests get a mini-brain upgrade to stay sharp without you micromanaging them.
  3. Bug Hunting Like a Pro: AI’s not just about cool tricks; it’s getting down to business finding bugs faster than a detective. With tools like Katalon, it’s like having a sidekick that’s always two steps ahead, making sure those pesky bugs don’t stand a chance.

After diving into all this, I’m pretty stoked about weaving some AI magic into my testing toolkit. Sure, it’s not going to do all the heavy lifting for us, but it’s like having a smart assistant to help out with the grunt work and keep us from missing out on the fun stuff. And hey, while AI’s learning the ropes, it’s a chance for us to up our game too, blending our know-how with new tech to push boundaries.

For anyone else geeking out over how AI can spice up our testing routines, diving into the details might spark some cool ideas for your own projects. Let’s keep pushing the boundaries and see where this AI journey takes us in testing! :star2:

4 Likes

Sticking to the 30-days challenge, Day 3. Practical applications, personal reflection rather than research:

Explaining code. Especially on a particularly tired day, while being aware that I cannot share secrets, I like to ask chatGPT to explain to me what changes in some pull request so that I understand what to test. Answers vary from useful to hilarious, and overly extensive.

Test ideas for a feature. Whenever I have completed an analysis of a feature to brainstorm my ideas, I tend to ask how chatGPT would recommend testing it. This works nicely on domain concepts that are not specific to this company, and I have a lot of those with standards and weather phenomena.

Manipulating statistics. I seem to be bad at remembering Excel formulas, but I do a lot of cross-referencing of test-generated results in Excel sheets. ChatGPT has been most helpful with formulas for manipulating masses of data in Excel.

Generating input/output data. Especially with Copilot, I get data values for parameterized tests: the same test, with multiple inputs and outputs generated. More of the effort goes into reviewing whether I like them and find them useful.
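As a hedged illustration of the parameterized-test idea above, here is the shape an assistant-generated input/output table might take. The function under test and every data row are invented for this sketch, and the generated rows still need the human review just mentioned.

```python
# Hypothetical sketch: an assistant-generated input/output table driving
# one parameterized test. Function and data values are invented.

def normalize_temperature(value, unit):
    """Convert a temperature reading to degrees Celsius."""
    if unit == "C":
        return value
    if unit == "F":
        return (value - 32) * 5 / 9
    raise ValueError(f"unknown unit: {unit}")

# Same test, multiple generated inputs and outputs.
CASES = [
    ((0.0, "C"), 0.0),
    ((32.0, "F"), 0.0),
    ((212.0, "F"), 100.0),
]

def run_cases():
    """Run every generated case; return how many were checked."""
    for (value, unit), expected in CASES:
        assert abs(normalize_temperature(value, unit) - expected) < 1e-9
    return len(CASES)
```

With pytest, such a table would typically feed `@pytest.mark.parametrize`, which is the shape assistants tend to generate.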

Generating (manual) test cases. I have seen multiple tools do this, and I hate hate hate it. I always turn off steps from test cases and write down only core insights I would want the future me to remember in 3 months.

Generating programmatic tests. Copilot does well with these at the unit testing level, but I am not sure I would want all that stuff available. Sometimes it helps in capturing intent. But I prefer approvals of inputs and outputs over handcrafted scripts anyway for unit-level exploratory testing.

Generating tests based on models. Has nothing to do with AI, but is a pre-AI practice of avoiding writing scripts and working with more maintainable state-based models. Love this, especially for long running reliability testing.

Generating a database full of test data. I have liked tools for this. I think they are not AI though, even when they claim to be. Having data that follows realistic patterns without being real people’s data is a genuine problem.

Refactoring test code. Works fine at least for Robot Framework and Python tests, as long as we first open the code we want to move things from. Trust the tool to be aware without that, and you suffer the duplication. We’ve been measuring this a bit, and Copilot seems to encourage duplication for us.

I’m sure there’s more and I just don’t have the energy for being systematic. :slight_smile:

16 Likes
  • Visual/screenshot testing definitely adds value beyond the coverage possible through regular automation, where most validations are limited to planned assertions. Tools like Applitools can greatly help in checking items that are not explicitly asserted. It comes in handy for testing in production, and we have been using it for some time now.

  • It also helps to simulate cross-browser tests much faster and more efficiently across different combinations instantly, and it avoids access restrictions since it simulates across different browsers.

  • I also wish for a utility that understands the errors happening on the UI during testing and learns to call out the root cause automatically (configuration, wrong input, DB error, defect), helping close gaps in the tester’s domain knowledge.

4 Likes

The area I have found most interesting is how AI is going to be used in test scenarios. There is always a fear that you, as a tester, will miss that doozy of a scenario that finds its way through the traditional ways of testing and ends up in production. Today I was reading about ā€œIntelligent Oraclesā€. This part of AI in testing is going to be most valuable in creating robust software. I am excited that AI testing now looks able to recreate what I would call ā€œChaotic Usersā€, something that traditional automation has been weak at. If anyone has a practical example of this working, I would love to hear from you.

A few weeks back I was given the rather mundane task of hunting through a large directory of documents and files to identify test evidence. All of these docs and folders were supposed to have an ID related to the backlog, so I started searching on those individual IDs from the top of the main directory and let the search crawl through them. After about an hour of this I was sick with boredom. Then a voice from my past rose up saying ā€œAutomate the boring stuffā€. The way the test machines are set up, there is no scope for any type of coding; the only option I had was shell scripts. The problem was that I had zero experience writing shell scripts. So I went to Bard (now Gemini) and asked: ā€œWrite me a shell script that finds files that contain the following ā€œXXXā€ in the title and output these results to a csv fileā€

The resulting code was mixed, to be honest: when I tried to run the script it threw all sorts of errors. However, I did manage to get it working with some refinements to my prompts and some editing of the script with the help of a more experienced colleague. The results could still have been better, but it returned enough information for me to work out what was there and what was not. So there was a considerable amount of rework, which was, or at least felt, time-consuming. But in the long run, I now have something I can share with the team that can potentially save their time.
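For what it’s worth, here is a minimal sketch of that kind of script, written from the description above. The pattern ā€œXXXā€ is the placeholder from the prompt, and the function name and CSV layout are my own assumptions, not the script the poster ended up with.

```shell
#!/bin/sh
# Sketch: list files whose names contain a backlog ID, as a small CSV.
# find_evidence PATTERN OUTFILE -- searches the current directory tree.
find_evidence() {
  pattern="$1"
  outfile="$2"
  printf 'path,filename\n' > "$outfile"
  find . -type f -name "*${pattern}*" | while IFS= read -r f; do
    printf '%s,%s\n' "$(dirname "$f")" "$(basename "$f")" >> "$outfile"
  done
}
```

Called as `find_evidence "XXX" evidence.csv` from the top of the main directory, it crawls the tree once instead of needing one manual search per ID.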

4 Likes

Day 3

List three or more different AI uses you discover and note any useful tools you find as well as how they can enhance testing.

I watched Daniel Knott’s videos on AI tooling on YouTube:

  • Recognising patterns in your tests and suggesting improvements/optimisations
  • Visual AI to see design differences in your mobile or web application.
  • Self-healing - when an ID has changed, the tool can adapt test cases accordingly.
  • Visualise user journeys to help focus your tests on what the users are doing.
  • Using natural language processing to turn plain text into the code for an automated test (or even test steps).
  • Analysing test run data to detect trends and suggest improvements and optimisations.

These include the usual suspects, Testim, Mabl, Applitools etc.
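Of those, the natural-language-to-test idea can be sketched in a few lines. A real NLP-based tool is vastly more flexible; every step name and action below is invented for illustration.

```python
# Toy sketch of turning plain text into executable test steps.
# Each known phrase maps to an action that mutates a shared "app state".

ACTIONS = {
    "open the login page": lambda state: state.update(page="login"),
    "enter valid credentials": lambda state: state.update(creds="valid"),
    "press sign in": lambda state: state.update(page="dashboard"),
}

def run_plain_text_test(steps):
    """Map each plain-text step to an action and apply it to the state."""
    state = {}
    for step in steps:
        ACTIONS[step.strip().lower()](state)
    return state
```

A real tool would match steps fuzzily instead of by exact phrase, which is exactly where the NLP comes in.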

Reflect and write a summary of which AI uses/features would be most useful in your context and why.

In my context I think:

  • Recognising patterns in your tests and suggesting improvements/optimisations - I spend a lot of time reviewing unit, integration and acceptance tests for consistency and code quality, so it would be nice to have this, as long as I could provide guidance in the prompt about the patterns I want to see.
  • Analysing test run data to detect trends and suggest improvements and optimisations - our tests literally run thousands of times a day, so reports are not much use. Gathering information over time about areas where tests are problematic (matched to the class the failure is tied to) would be really useful.

The others (self-healing, generating test steps, OCR) seem like they might be useful for teams with separate testing functions (other companies), but the need for most of them can be mitigated if the cross-functional development team works well together.

4 Likes

Hello all,

After a short investigation I found the following AI applications in software testing:

  1. Automated Script Generation
    This application makes creating test scripts effortless and faster by covering all important functionalities of the software under test.
    Tools: Testim, Katalon Studio.

  2. Test Case Optimization
    By analyzing testing data and identifying patterns, AI-powered tools allow testers to focus on the most critical cases. They can also recognize redundant test cases that should be eliminated, saving effort and time.
    Tools: Applitools, TestCraft.

  3. Automated Test Execution
    These tools execute test suites automatically, reducing human intervention and giving QA engineers more time for exploratory testing. They also perform defect analysis, identifying bugs to be fixed.
    Tools: Testim, Katalon Studio.

  4. Self-Healing
    These tools can detect tests that break for non-functional reasons (for example, a changed locator) and repair them automatically, keeping the test suite solid.
    Tools: Healenium, Testim.

I think that Healenium and TestCraft would be a good fit in a test automation process. I believe their capabilities can improve the efficiency and effectiveness of test suites. I also like that they have free plans and are compatible with common technologies and frameworks, so they would probably fit many projects, and QA engineers would have the opportunity to familiarize themselves with an AI-powered tool for free.

Link: AI in software testing

2 Likes

Hi, everyone,

:white_check_mark: AI has pattern recognition and image recognition capabilities that together help detect visual bugs by performing visual testing on applications. AI can recognize dynamic UI controls irrespective of their size and shape, and analyze them at the pixel level.
:white_check_mark: AI in testing increases the test coverage as it can check the file contents, data tables, memories, and internal program states seamlessly.
:white_check_mark: AI in testing helps with early and fast bug identification, which ultimately reduces defects in the product.
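The pixel-level comparison mentioned in the first point can be sketched without any AI at all; visual-AI tools layer perceptual tolerance and region awareness on top of this basic idea. An ā€œimageā€ here is just a list of rows of (R, G, B) tuples, an assumption made for the sketch.

```python
# Minimal sketch of pixel-level visual comparison, no AI involved.
# Compares two same-sized images and reports where they differ.

def diff_pixels(baseline, candidate, tolerance=0):
    """Return (x, y) coordinates where the images differ beyond tolerance."""
    mismatches = []
    for y, (row_a, row_b) in enumerate(zip(baseline, candidate)):
        for x, (px_a, px_b) in enumerate(zip(row_a, row_b)):
            # A pixel mismatches if any colour channel drifts past tolerance.
            if any(abs(a - b) > tolerance for a, b in zip(px_a, px_b)):
                mismatches.append((x, y))
    return mismatches
```

The `tolerance` knob is the naive stand-in for what AI-driven tools do far better: deciding which visual differences a human would actually care about.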

I found that there are 4 main categories of AI-driven testing tools:

Differential tools
These tools leverage AI and ML algorithms to identify code-related issues, security vulnerabilities, regressions, etc. This is achieved through code scanning and unit test automation, e.g. DiffBlue, Launchable.

Visual AI testing tools
Visual AI testing tools address the user-experience layer of testing and scale validations of the look and feel of a UI (user interface) across digital platforms (mostly mobile and web), e.g. Applitools, Katalon.

Declarative tools
These tools aim to enhance test automation productivity and stability. They leverage AI and ML and have significant abilities related to Robotic Process Automation (RPA), Natural Language Processing (NLP), Model-based Test Automation (MBTA), and Autonomous Testing Methods (AT). The main aim of these methods is to eliminate tedious, error-prone, repetitive tasks through smart automation, e.g. Tricentis.

Self-healing tools
Self-healing tools are mostly based on a record-and-playback mechanism, wherein the main ML engine resides in the self-healing of the recorded scripts, e.g. Mabl, Testim, Functionize, Perfecto.

2 Likes

I think I’ll start by trying out the tools mentioned by Marijana, as well as exploring others because there are already plenty available. I’ll dedicate some time to checking them out and seeing if they can be useful for my project.
I’ll write a post about it later once I’ve done all the research. :woman_technologist:

4 Likes