This was one of the earlier steps I was looking forward to, wondering how I'd handle it, and even though I'm submitting late after a busy day yesterday, I've combed through quite a few blog posts to get a better understanding of the areas where we can use AI for testing. While I won't list them all here, a trio of Medium blog posts by Rodrigo Alves Costa were the ones that intrigued me most: List: AI & Testing | Curated by Rodrigo Alves Costa | Medium
Many software testing tools and frameworks are being revised to integrate AI into their standard operations: the brands I encounter most often are Katalon, Selenium, Testsigma, and Applitools, among many more I have difficulty keeping in mind. While some of these AI integrations have more specific goals (such as visual locators for UI testing or test data generation), the most common applications of AI at this juncture seem to be:
Self-healing tests - a given, since it automates one of the most time-consuming parts of a software testing process (see the sketch after this list)
Automation test maintenance - in a similar vein, basic automation tests can be augmented and "taught" to AI as standard operations for any line of software, and complex automation segments can be automatically sustained through the self-healing attribute mentioned above
Analysis of extensive test data - in complex projects with a large amount of testing output, AI-based tools can gather and process the data faster and more efficiently than any tester can
Creating and updating unit tests - can ease the workload of developers (and testers, if they're involved) by standardizing the earliest unit test procedures
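To make the self-healing idea concrete, here is a minimal Python sketch, assuming a Selenium-style driver; the `find_with_healing` helper and the fallback locators are illustrative inventions, not any vendor's actual implementation:

```python
# Minimal sketch of "self-healing" element lookup: try the original locator,
# then fall back to alternatives an AI tool might have recorded for the same
# element (name, CSS, visible text). Purely illustrative.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, locators):
    """Return the first element matched by an ordered list of (by, value)."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # this locator broke; "heal" by trying the next one
    raise NoSuchElementException(f"No locator matched: {locators}")

# Usage: the primary ID plus fallbacks for when the ID changes.
# element = find_with_healing(driver, [
#     (By.ID, "submit-btn"),
#     (By.CSS_SELECTOR, "form button[type=submit]"),
#     (By.XPATH, "//button[normalize-space()='Submit']"),
# ])
```

The commercial tools presumably do something far more sophisticated (recording many attributes per element and scoring candidates), but the fallback idea is the core of it.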
I'm certain there are more, and likely more vital, applications that I haven't yet encountered. And it's apparent that the use of AI will continue to expand into the most essential areas of software testing.
For my own part, thinking about the game testing sessions I did, I can see that AI could seriously ease some of the more chore-like exercises, such as sitting through to the end of missions, trying different ways to beat (or break) a mission, making small changes to scripts and mission parameters and then replaying whole missions, and more. Learning more about this subject feels more vital than it did before - for my career, my personal growth, and for a better understanding of the future possibilities for both myself and the world.
List three or more different AI uses you discover and note any useful tools you find, as well as how they can enhance testing, for example: test plan creation and checking for vulnerabilities - Predictive Analytics, Security Testing, and Anomaly Detection. I've chosen ones that are paid and offer a free trial: https://www.loadmill.com/
Reflect and write a summary of which AI uses/features would be most useful in your context and why:
I've considered that one of the blockers in our team is knowledge, for new QAEs and existing ones alike: supporting the team and organization requires a good knowledge of the product being developed and of what your team does and offers to the market. It might seem simple, but it is not; the basic requirement is having knowledge and expertise of the product. An AI tool that suggests an API test plan covering happy and unhappy paths could benefit individuals who find it harder to understand how something can be tested and how to discover edge cases. Regarding vulnerabilities, we live in a digital era where cyberattacks happen every day, so this seems interesting and, for now, beneficial, but in the future AI will be attacked as well…
I'm late to the party. Here's a list that Rachel Kibler and I made last fall:
Creating a bug taxonomy
Identifying clusters of bugs
Enhancing bug reports
Checklists, anyone?
Test specifications
Prioritizing/focusing testing
Standardizing the format of bug reports, test cases, etc.
Create FAQs based on your testing notes
Solve code problems
Here are some from the How to Test a Time Machine book by Noemi Ferrara:
Grouping tests
Data analysis (e.g., grouping failures to understand which parts of the app have red flags)
Automated visual testing
Smart DOM finder to find the best suited element for an automated test
Testing by voice (voice recognition tools use AI)
Smart test balancer: what tests to run, and where and when to run them in the deployment pipeline
Produce text from voice or from visual input such as an image
I think ChatGPT should do 30 Days of Testing too. We should be checking to see if generative AI can do 30 days of testing better than we can, or at least adequately. Test the testers.
Sorry for my lateness: I did start a post yesterday, but ran out of time and energy to finish. However, that has proved a blessing, as I have now had time to read the other posts to date, and I value many of them, thanks. But the info overall is crying out to be analysed into a table of tools orthogonal to features: I imagine such a table already exists somewhere, but I haven't looked yet. I saw @mrukavina's ref to theresanaiforthat.com (TAAFT), where I will be heading soonish, thanks.
The following is shooting-from-the-hip rather, but I sense some groupings of tools:
existing tools which have added, or claimed to have added, AI features / capabilities;
new blockbuster tools, which seem to be mainly the mass-market LLMs plus maybe other generative AIs;
genuinely new focussed tools which should be of particular interest to software testing. I actually can't recall any of these in a hurry, but this would be my greatest interest (I think).
However… again I wish to try to do something different: taking a step back and viewing the landscape. I see three existing "cultures" of software testing, all of which could / can use AI:
traditional, aka "old-school" (still needed in domains such as safety-critical? Discuss);
agile & DevOps, heavily hefted to the "automation pyramid", and hugely tools-centric; and
context-driven, which as you may have seen from some former posts, tends to question some / many of the claims of AI vendors.
It seems to me that AI is now adding 4th and 5th cultures: those that are (4) enthusiastic to apply all kinds of AI to all kinds of testing, versus (5) seeking to genuinely break new paradigms of testing by questioning the whole ethos of AI and its opportunities / threats to humankind.
Echoing several respondents, I have noticed that "ChatBots" seem to like to chat (there's a clue in the name, obviously), but they offend one view of intelligence: they pretend to be emotionally intelligent by wasting verbiage and questioning their own accuracy. They appear to be pre-programmed to assault the Turing Test with lots of waffle (and, it seems, some precautionary censorship). There was a big story recently about Google Gemini misapplying some diversity / "anti-bias" targets…
Anyway, as a thought experiment: what will AI say about some of the trad "old-school" activities in software testing? I am aware that such concepts live on in agile & DevOps under new guises, e.g. quadrants, definitions of done, repetitions within user stories. (I do have a paid subscription to ChatGPT-4 etc. but have run out of time to exercise it this eve, so this is Microsoft Bing/Copilot):
Test strategy:
Search: AI offers "faster, clearer, easier, and budgeted"; "strategic platform", "leverage", "deliver more quality";
Copilot: <some obvious, old stuff about the document>;
Risk basis:
Search: "generative AI risk map", "how to make a threat model practical";
Copilot: "Generative AI tools, like ChatGPT, excel at producing large quantities of information. However, their accuracy can be variable. For instance, a Purdue University study found that ChatGPT answered 52% of software engineering questions incorrectly";
Search: "determine the necessary tests" [EEK?!]; "remove the redundant ones that create noise", "faster, higher accuracy" [WTF?]
Copilot: computer vision bots;
[I HAVE OMITTED TEST DATA & REGRESSION TESTING BECAUSE HUGELY COVERED IN OTHER POSTS];
Acceptance:
Search: "embark on a journey", "understand the pivotal role" [SNORT];
Copilot: "significantly transformed the landscape", "let's delve".
Sorry, the above is a whimsical quick snapshot, but I hope that you can see the challenges of ACTUALLY UNDERSTANDING and actioning the various things that AI suggests.
Research to discover information on how AI is applied in testing.
AI can save us time on data analysis and trend identification, so manual testers can focus on the most important tests to cover.
It can increase our speed and efficiency, delivering cost savings and improving the quality of the software.
List three or more different AI uses you discover and note any useful tools you find as well as how they can enhance testing, for example:
Test Automation:
Self-healing tests - AI tools evaluate changes in the code base and automatically update tests with new attributes to keep them stable - Katalon, Functionize, Testim, Virtuoso, etc.
Generate test cases from the AC - the tools evaluate the acceptance criteria and then create the test cases that satisfy them; they can also create BDD-style scenarios - ChatGPT.
Provide backend mock responses and fix some tests - the tool gives you hints and code samples for testing an API, which can save time writing, or even thinking - Postbot from Postman.
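To illustrate the mock-response idea, here is a minimal Python sketch of the kind of scaffold such assistants suggest; the endpoint and the `get_username` function are made up for the example:

```python
# Minimal sketch: test an API client against a mocked backend response,
# so no real network call is made. Endpoint and payload are hypothetical.
from unittest.mock import patch
import requests

def get_username(user_id):
    resp = requests.get(f"https://api.example.com/users/{user_id}")
    resp.raise_for_status()
    return resp.json()["name"]

@patch("requests.get")
def test_get_username_happy_path(mock_get):
    mock_get.return_value.status_code = 200
    mock_get.return_value.json.return_value = {"name": "Ada"}
    assert get_username(1) == "Ada"
```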
Reflect and write a summary of which AI uses/features would be most useful in your context and why.
I guess all of them are useful for me as I work with manual and automated tests.
Here are some examples of ways that AI could be used to enhance testing:
Self-healing tests - Evaluating code base changes and automatically updating attributes in test cases to reduce flaky tests. e.g. MagicPod, Tricentis Testim, EndTest, Autify
Visual Testing - Leveraging AI for analyzing visual differences between pages in the app. e.g. Percy, Applitools, Autify
Reporting and Analytics - Generating useful reports and actionable insights based on historical data gathered from test runs. e.g. Report Portal
Write Test Cases - AI can be used to generate test scripts from instructions written in plain English or by interacting with the app. e.g. TestRigor, Relicx
Based on my findings, most tools using AI have self-healing features and are working to improve their features by integrating AI more into their processes.
Same as Lara here: I think it's more useful to dedicate my time to exploring some of the tools and reviewing/recommending some of them.
I'll make sure to get back to this point and hopefully recommend something useful for us.
One of our developers explained his approach. He would use ChatGPT for pairing: while doing TDD, he would start by writing the test himself, and then ask ChatGPT to write the code that would turn the test green.
In the old days this was called ping pong.
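As a toy illustration of that ping-pong flow (the `slugify` example is made up): the tester writes the failing test first, then the model supplies the minimal code to turn it green:

```python
# Step 1 (human): write the failing test first.
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"

# Step 2 (model): the kind of minimal implementation ChatGPT might return.
def slugify(text):
    return text.strip().lower().replace(" ", "-")
```

Run with pytest; the point is that the human owns the specification (the test) and only delegates the implementation.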
For test reporting, when we write a long test report, we use ChatGPT for setting the tone, especially when writing in English when it is not your mother tongue.
That was a very interesting article, and you can see from it why AI should be viewed as an assistant for what we do.
In terms of application for myself, it is difficult to say, as the system I work with is a desktop legacy system, 25+ years old and looking and feeling it. However, it works well and clients love it.
For me, UI testing is manual, as these are old Windows Forms. The APIs are WCF. What we have is a bunch of algorithms and calculations for trading and risk management.
What piqued my interest was Precision Testing. I manage a monolithic set of regression tests, so I see this as an out-of-the-box winner for me.
I will look further into this, but if anyone has any ideas to share, that would be very much appreciated.
One way AI will be extremely useful is in tracking and comparing data. It can help us analyze the data and see trends quicker than we would be able to with the human brain alone.
To go along with the above benefit, we could also use it to see where the high-risk areas are.
Another time saver is having AI look at code changes and evaluate which tests need to be updated or run.
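As a crude illustration of that test-selection idea, here is a minimal Python sketch; in a real tool the file-to-test mapping would come from coverage data or from a model, but here it is hard-coded and all names are made up:

```python
# Minimal sketch of change-based test selection: map changed source files
# to the tests that cover them, and run only those.
changed_files = {"src/cart.py", "src/checkout.py"}

coverage_map = {
    "tests/test_cart.py": {"src/cart.py"},
    "tests/test_checkout.py": {"src/checkout.py", "src/cart.py"},
    "tests/test_login.py": {"src/auth.py"},
}

to_run = [test for test, sources in coverage_map.items()
          if sources & changed_files]  # any overlap with the change set
print(to_run)  # ['tests/test_cart.py', 'tests/test_checkout.py']
```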
The feature of AI that stands out the most to me when talking about QA is self-healing. While we can automate UI testing, it tends to come with quite a bit of maintenance.
Once AI can look back on historical data, predict what the issue is with any small change in the UI, and fix it, that will be a huge time saving when it comes to maintenance of automated tests.
AI Can Quickly Generate Test Data for Data-Driven Testing
AI algorithms can also analyze test results and provide insights on failures, trends, and areas that require further testing, enabling teams to continuously improve their testing processes.
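As a small illustration of the data-driven idea, here is a minimal pytest sketch; the email validator and the generated edge-case rows are assumptions for the example:

```python
# Minimal sketch: a parametrized test fed by the kind of edge-case table
# an AI tool could generate on request. The validator is deliberately simple.
import re
import pytest

def is_valid_email(value):
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value))

@pytest.mark.parametrize("value, expected", [
    ("user@example.com", True),
    ("user@example", False),       # missing top-level domain
    ("", False),                   # empty input
    ("user @example.com", False),  # embedded whitespace
])
def test_is_valid_email(value, expected):
    assert is_valid_email(value) is expected
```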
Most applicable:
All of them, to be fair, but this would be super useful given that our application is already 2.5 years old and we have both a desktop and a mobile version, which are separate.
AI-powered visual testing tools enhance the process by accurately identifying UI changes before and after deployment. Unlike traditional tools, AI considers changes that impact users, leading to more precise bug detection. This advancement alleviates the challenging task of manually spotting visual differences, streamlining the testing process significantly.
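For contrast, here is a minimal sketch of the traditional pixel-level comparison that such AI tools improve on, using Pillow in Python; the file names are assumptions, and this flags any pixel change with no notion of whether the difference matters to users:

```python
# Minimal classical visual diff: flag any pixel difference between a stored
# baseline screenshot and the latest one. Both images must be the same size.
from PIL import Image, ImageChops

baseline = Image.open("baseline.png").convert("RGB")
candidate = Image.open("latest.png").convert("RGB")

diff = ImageChops.difference(baseline, candidate)
if diff.getbbox() is None:  # None means the images are pixel-identical
    print("No visual changes detected")
else:
    print(f"Changed region (bounding box): {diff.getbbox()}")
```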
The possibilities for using AI in testing are really varied:
Mabl is an AI-driven test automation platform that includes features for automatically generating test documentation.
Testim Test Prioritization: Testim's AI-driven test prioritization feature uses machine learning algorithms to analyze various factors and prioritize tests based on their likelihood of finding defects and their impact on the application.
Eggplant's AI-driven testing platform offers robust capabilities for mobile testing across various devices and platforms. By leveraging AI and machine learning, Eggplant streamlines the entire testing process, empowering teams to deliver flawless mobile experiences.
In my testing context, the most useful AI features would likely be AI-powered test automation and intelligent test prioritization. These capabilities would enable us to automate repetitive testing tasks, focus on critical functionalities, and optimize test execution based on the risk and impact of code changes. By leveraging AI-driven techniques, we can enhance the efficiency of our testing processes, accelerate time-to-market, and deliver high-quality software products that meet user expectations.
AI features like self-healing tests and predictive test maintenance would be incredibly beneficial.
Working on a complex software project with frequent code changes demands stability and efficiency in testing.
So Iām going to repeat what I mentioned in Day 2 about the understanding around a problem.
I'm a big fan of The Glass Cage by Nicholas Carr and I've always liked the idea of algorithmic versus heuristic-based activities. A heuristic-based activity would be something that is creative and hard to define. For example, capturing the emotion of a beautiful skyline in a painting. Whereas algorithmic activities are more distinct in their actions. For example, making a cup of coffee.
I like this way of thinking because it connects to other ways of sense-making, such as Cynefin, which also talks about the difference between complex and clear problems. Heuristic activities work well in the complex space, whereas algorithmic activities work well in the simple space.
So what does this have to do with AI? I think it's important because it connects to the discourse around what AI can and cannot do. In the Large Language Model world there is this attitude that, because it generates, it is equivalent to heuristic problem solving; and whilst I think it can help in that space, LLMs are more effective with algorithmic problems. My reasoning is that algorithmic problems are based on known knowledge, which is what LLMs are trained upon, meaning they are more tuned to what is known, and to generating outputs in that space, than to being wholly heuristic-driven.
To bring this back to today's question: I think AI works best in the places in testing where algorithmic activities occur. Such as:
Generating boilerplate classes and objects for automation (see the sketch after this list)
Producing production code based on provided unit tests
Creating new data sets based on formalised data structures
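As a sketch of the first item in this list, here is the kind of boilerplate a model can comfortably generate: a Selenium-style page object. The `LoginPage` class and its locators are assumptions for illustration, not a real app:

```python
# Minimal generated-boilerplate example: a page object for a login form.
from selenium.webdriver.common.by import By

class LoginPage:
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

This is exactly the kind of well-defined, explainable task the paragraph below describes: the structure is conventional, so a model can produce it reliably.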
In a nutshell, if it's something that you can comfortably define and explain to another person, then AI is more likely to be effective than in a situation where we can't define and explain the problem.
I found an interesting article; I'll try to summarize it a bit:
AI and machine learning can be used in software testing to automate many steps of the testing procedure. They can analyze the application under test and generate test cases to cover its important parts. Software testers can save time and effort by relying on this, while always applying critical thinking to check that the most important parts of the program really are tested consciously. Among the capabilities the article highlights:
On-demand environments in the cloud make it easy to run tests in parallel across browsers, devices, and OSes, while the vendor takes care of the heavy lifting of setup and maintenance.
Easy no-code test recording plus powerful full-code scripting.
Map automated tests to existing manual tests with one-click integrations.
Optimize test coverage and run the right tests at the right time with dynamic test suites and smart scheduling.
Tests can be generated based on your own documented test cases.
Desktop testing, web testing, mobile testing, API testing.
Test code generation is an important aspect, with the ability to auto-generate tests.
ChatGPT and other LLMs can generate test code for you and migrate test code to another language; within IDEs, tools like GitHub Copilot are a great support, able to create test code from comments and create complete test scripts from test input. All three tools listed are able to support test code generation, each with a somewhat different focus and different boundaries.
I find one of the most valuable uses of AI in software testing is taking those time-consuming tasks with so many manual processes and having them completed for you at the click of a button! It's like having a free assistant!
Other great uses for AI in software testing: generating scripts, optimizing and executing test cases, and detecting and fixing defects.
All of these uses are great time savers and make your day just that much easier!
First thoughts: apparently it's used in a lot of the ways I had thought of before, but seeing the promises some of these tools make is impressive. On the other hand, though, a lot of the tools posted have a cool demo video you can watch, but many of them seem unexpressive and focused on marketing rather than on actual live demos of use cases. So my skeptical sense is tingling.
Test Case Generation: as mentioned several times in the replies above, ChatGPT and the like can help write test cases and test steps. Especially useful for juniors and new entrants to the industry. Be cautious about feeding the bot sensitive data or code.
Test Plan Creation: similar to the above, just for test plan documents. It can create an entry point for preparing a master or release test plan. But no chatbot can know your entire project with all its management and technical details, so you need to re-prompt a few times and probably fill in the information yourself. Other software has the potential to automatically generate test plans based on click-and-record features, or just from a description of the features under test.
Automatic Test Code Generation: great potential, because the quality of code auto-generated by AI will only increase from now on. Great for local tests, and tools like Copilot, Postman AI, and so on can increase speed in test implementation. Never blindly copy and paste code and then push it to production! It is important to understand the concept of the code you are going to use: know why it does what it does, and how. Also great for beginners to learn code and for everyone to upskill.
Automatic Code Improvement: similar to the above, just for refining existing code. It's going to get interesting when tools do not rely on training data (with maybe dubious origins?) but instead learn your style and your project's architecture and conventions.
Automatic Test Generation by UI Clicking: a click-and-record feature on steroids. Navigate your app in the UI, and tests, test code, and a test plan are automatically saved.
Self-Healing Tests: we all hate spending so much time on maintenance, don't we? I share the same reservations as someone mentioned above; there is potential for some things to slip through our fingers. But the addition of single parameters, or changes to existing ones, is very common, and that could easily be handled by it.
Trend Analysis in Data: shows huge potential when comparing change failure rates, environment analysis, areas where tests fail often, and analyzing root causes (and comparing them over periods/test runs). As usual with any metrics, be careful that they do not become a tool for blaming or finger-pointing.
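A minimal Python sketch of that grouping idea, assuming made-up fields in a test-results export:

```python
# Minimal sketch: count failures per app area across test runs to surface
# hotspots. The record fields are assumptions about a results export.
from collections import Counter

runs = [
    {"test": "checkout", "area": "payments", "passed": False},
    {"test": "login",    "area": "auth",     "passed": True},
    {"test": "refund",   "area": "payments", "passed": False},
]

failures_by_area = Counter(r["area"] for r in runs if not r["passed"])
print(failures_by_area.most_common())  # e.g. [('payments', 2)]
```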
Self-Education: while not a professional course or an educational institution, chatbots can help you gain first-hand insights into many topics, including QA- and tech-related ones. One can easily get a basic grasp of most concepts explained by chatbots (though it still requires further exploration). They can be an alternative to static wikis, thanks to the possibility of asking questions and framing responses in custom formats/wordings. Beneficial for juniors, new entrants, and project managers.