🤖 Day 13: Develop a testing approach and become an AI in testing champion!

Day 13 is here! We’ve covered a lot of ground in a short period of time. We’ve examined various ways that AI could support testing and empower testers. We’ve examined some of the risks inherent in using AI, and we’ve experimented with some tools.

Today, we will focus on how the information we have collected could be used to improve our overall approach to testing. AI in Testing won’t happen by itself - it needs AI in Testing Champions.

Today’s Task:

  • The As-Is: Consider your team’s current testing practices, how work flows from feature to delivery, and the role of testing in that flow.
    • Consider testing related activities such as:
      • Test Data Management
      • Test Design
      • Test Planning and execution
      • Managing Defects
      • Test Reporting
    • Which areas are most challenging or time-consuming? Which areas need improving?
  • Where does AI add value?: Based on your experiences in the challenge so far and using contributions from others, consider:
    • Where would AI add the most value in your workflow?
    • Pick one area of improvement (or more if you want) that you want to focus on
    • How would you use AI in that area, and what would the impact be?
    • What AI Risks does it introduce, and how would you mitigate them?
  • Become an AI in Testing champion: Imagine you need to convince your peers, manager or company to invest in AI in Testing. Based on your ideas from the previous tasks, create a visual or short report that outlines your approach.
    • Capture the current situation and challenge(s)
    • Show where AI in Testing could improve the workflow
    • Outline any risks and how they can be mitigated
    • Describe how your proposals will improve the current situation.
  • Share your approach with your fellow AiT Champions: share your ideas by replying to this post.
    • Reminder: Don’t include anything that is sensitive to your company

Why Take Part

  • Become an AI in Testing Champion: The adoption of AI in Testing needs people to understand how it fits into testing and champion its use. This task helps you develop the skills to become an AI in Testing Champion for your organisation.


4 Likes

Hi my fellow testers, here is my response to today’s challenge:

Which areas are most challenging or time-consuming? Which areas need improving?

The first area that immediately comes to mind is the maintenance of our test automation suites, as they can be very fragile to change. I think I would also benefit from AI assistance in thinking up test cases, but it would need to know my exact context.

Where does AI add value?

  • I would add AI to our test automation in the hope that it could analyse where controls have changed and adapt to that change. For example, if a control were renamed, the AI could fix the affected tests on the fly, so I wouldn’t need to fix them manually.

  • The two risks I mentioned in a previous day’s challenge, data privacy and context awareness, would apply here. Both would need to be resolved before I could use AI in this area: the AI would be of no use without awareness of my context, and without protection for confidential data I could not give it that context.

Become an AI in Testing champion: Imagine you need to convince your peers, manager or company to invest in AI in Testing. Based on your ideas from the previous tasks, create a visual or short report that outlines your approach.

The current test automation situation is unsustainable. In a worst-case scenario where controls change, the tests are fragile and can take weeks to fix. I propose that we add an AI tool to our test automation so that tests which break for these sorts of reasons can be auto-fixed by the tool’s self-healing feature. This would free up a lot of time, which I could instead spend on more valuable tasks such as exploratory testing and impact analysis. The AI tool we choose would need to protect the confidentiality of our data and be aware of my context before it could be added to our workflow.
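The fallback idea behind self-healing locators can be sketched in a few lines. This is a simplified illustration, not any particular tool’s implementation: the page model, the locator strings and the `find_with_healing` helper are all hypothetical, and real self-healing tools inspect the live DOM and rank candidate locators with ML rather than walking a fixed fallback list.

```python
def find_with_healing(page, locators):
    """Try each locator strategy in priority order; return the first match.

    `page` is a simplified dict of locator -> element. Real self-healing
    tools inspect the live DOM and score candidates by attribute
    similarity instead of using a hand-written fallback list.
    """
    for locator in locators:
        element = page.get(locator)
        if element is not None:
            return locator, element
    raise LookupError("no locator matched; the test needs manual repair")

# Example: the button's id was renamed, so the suite falls back to the
# (hypothetical) CSS path locator instead of failing outright.
page = {"css=.checkout > button.submit": "<button>"}
strategies = ["id=submitBtn", "css=.checkout > button.submit"]
matched, element = find_with_healing(page, strategies)
```

The point of the sketch is only that a test keeps running when its primary locator breaks, which is the time-saving the proposal above relies on.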

7 Likes

AI in software testing presents numerous benefits and challenges. One significant benefit is improved accuracy and efficiency in the testing process, leading to faster bug detection and reduced testing costs (benefits-of-AI-in-software-testing).

AI can enhance test coverage by generating a high number of test cases covering various scenarios, prioritizing critical testing scenarios, and optimizing the testing process (benefits-of-AI-in-software-testing).

However, challenges include the need for specialized expertise to implement AI testing systems, integration complexities, data challenges, incomplete test coverage, and the risk of over-reliance on technology (benefits-challenges-using-artificial-intelligence; challenges-in-ai-testing).

To mitigate these challenges, teams can upskill members, collaborate with experts, leverage cloud-based AI platforms, and define clear goals for AI-based testing (benefits-challenges-using-artificial-intelligence; challenges-in-ai-testing).

Overall, integrating AI into software testing can lead to higher efficiency, accuracy, and quality in the development process.

4 Likes

Hey all,

I feel like I would get the most help from AI in the test automation area. When starting the setup of the regression suites, I can check with AI for the best way to design the framework. Along with that, when I hit a major blocker during implementation, I can have a conversation with AI and resolve it on the go. But the risks we chatted about yesterday would still be there; I would try to mitigate them by not giving any confidential/private user data as prompt input.

First of all, I would propose that my QA team spend some quality time upskilling by getting proper training on AI tools, usage and risks. Then, once everyone is on the same page, we can run a trial program to see how much efficiency it generates in the workflow, and based on its results we would propose the next steps for implementing AI in the workflow.

8 Likes

I drafted the daily workflow of our QA team on Figma, and added some sticky notes on which parts could be assisted with AI tools.

Most of us in the team are non-technical testers, so our focus will be more on generative writing and formatting, which have more applications in our current workflow.

  • Purple boxes are our tasks in routine (mainly exploratory test, functional test and regression test)
  • Yellow notes are what tasks could be improved by AI tools
  • Blue notes are what else could be assisted by automation
  • Green notes are what must be done by human testers (and, conversely, what should not be left to AI)

14 Likes

Hi All

Please find the below proposal idea.

AI in Testing Champion Proposal

1. Introduction

Currently, our mobile app testing for a transportation application primarily relies on manual processes, with Swift and XCUITest for UI test automation, Azure Wiki for test case design, and Azure Pipelines for CI/CD and reporting. However, with a two-week release cycle, our testing process faces challenges in efficiency and coverage.

2. Challenges

  • Test Data Management: Generating diverse test data is time-consuming, with approximately 30% of testing time dedicated to data management activities [1].
  • Test Design: Creating comprehensive test cases requires extensive effort, with approximately 14% of the total project effort spent on designing test cases [2].
  • Test Execution: Manual execution of UI tests is labor-intensive and can take up to 75% more time than automated testing [3].
  • Managing Defects: Identifying and resolving defects within tight release cycles is challenging, with costs ranging from $25 to $1,500 per defect [3].
  • Test Reporting: Compiling meaningful test reports is resource-intensive and can take up to 55% of total testing time [3].

3. AI Integration

Incorporating AI at every stage of the testing process can revolutionise our approach:

  • Generative AI for Efficient Test Data Generation and Management: Generative AI can create synthetic data resembling real-world scenarios, reducing storage requirements and ensuring fresh data for continuous testing [4].
  • AI-driven Test Execution Optimisation: AI algorithms can optimise test case selection and execution order based on past executions and user interactions, leading to reduced manual effort, improved coverage, and faster time-to-market [5].

4. Area of Focus: Test Execution Optimisation

How AI Will Be Used:

  • Implement AI-driven test automation using Swift and XCUITest with machine learning capabilities.
  • AI algorithms will optimise test case selection and execution order based on past test executions and user interactions.
  • Predictive analysis will identify high-risk areas and prioritise test execution, enabling proactive testing efforts.
  • Continuous monitoring will detect anomalies in test results, triggering alerts for immediate investigation and resolution.
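The selection-and-ordering idea in the bullets above can be illustrated with a toy risk score: rank tests by their historical failure rate, boosted when they cover recently changed code. This is a hedged sketch only; the record fields, weights and test names below are all hypothetical, not a description of any specific product.

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    runs: int
    failures: int
    touches_changed_code: bool  # e.g. derived from a commit/coverage mapping

def risk_score(t: TestRecord) -> float:
    # Historical failure rate, weighted up when the test covers code
    # changed in the current release. The weights here are arbitrary.
    rate = t.failures / t.runs if t.runs else 1.0  # unseen tests run first
    return rate * (2.0 if t.touches_changed_code else 1.0)

def prioritise(tests):
    # Run the riskiest tests first so defects surface early in the cycle.
    return sorted(tests, key=risk_score, reverse=True)

history = [
    TestRecord("login_flow", runs=50, failures=1, touches_changed_code=False),
    TestRecord("ticket_purchase", runs=50, failures=8, touches_changed_code=True),
    TestRecord("route_search", runs=50, failures=3, touches_changed_code=False),
]
ordered = [t.name for t in prioritise(history)]
```

A real implementation would learn these weights from past executions rather than hard-coding them, but the ordering principle is the same.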

Impact:

  • Manual effort reduction for test execution by up to 50%.
  • Improved test coverage and reliability, leading to enhanced software quality and reduced defect leakage.
  • Faster time-to-market due to expedited test execution and early defect detection.

5. Mitigating AI Risks

  • Rigorous validation of AI models to ensure accuracy and reliability.
  • Regular monitoring and refinement of AI algorithms to adapt to changing testing needs.
  • Training and up-skilling testers to understand AI-driven testing methodologies and address potential challenges.

6. Conclusion

AI integration in our testing process presents a significant opportunity to enhance efficiency, effectiveness, and software quality. By championing AI in testing, we can unlock its full potential and drive innovation in software quality assurance.

References:

  1. Survey: [How to Manage Test Data in Your Test Automation Project]
  2. World Quality Report 2020: [https://vates.com/test-driven-development-tdd-building-quality-software-through-testing/]
  3. Maximising Testing Efficiency: [https://www.nousinfosystems.com/insights/blog/how-test-data-managem]
  4. Generative AI for Efficient Test Data Generation and Management: [Generative AI for Efficient Test Data Generation and Management | LambdaTest]
  5. Ultimate Guide to Test Data Management: [https://magedata.ai/ultimate-guide-to-test-data-management/]

Other References:
i. https://theqalead.com/tools/best-test-data-management-tools/
ii. https://testautomationforum.com/test-data-management-using-ai-powered-synthetic-data-generators/

Thank you

8 Likes

Happy Wednesday everyone, the weekend is in sight and we all have smiles!

We do not follow a test management structure or use test management tools. Rather, we rely heavily on regression tests developed in .NET.
So I would expect my answers to differ from most, but maybe there are others out there enjoying the world of legacy systems of over 20 years’ vintage.

That is not to say that we will ignore newer concepts and tools like ML.
I already use GitHub Copilot and enjoy the benefit of it taking mundane code off my hands.

I would say working on this 30-day project has come at a good time, as we look to introduce new REST APIs, probably with more modern technology than we currently use.
Over the next few weeks I will look at where ML can assist me with this project; it has been over 10 years since I worked with REST, so I am happy to say I am starting from scratch with zero knowledge.

Another area where I am eager to see how ML could assist is upgrading some of my tests. The current tests are in .NET Framework 4.7 and .NET Core 3.1.
I will be working on getting the .NET Core 3.1 tests upgraded to .NET 8 (probably via .NET 6 first), so I am eager to see where Copilot or other tools can assist.

I might return and update this share when I get to work on both projects and report on how AI helped, hindered or broke everything :smiley:

Anyone sharing their thoughts and experiences would be greatly appreciated.

3 Likes

We’re a four-person team, all working part-time on our web-based app. For quite a while now, our developer has used the Tabnine AI assistant in his IDE. He finds it saves him time, and I think the quality of his code and his automated tests has even gotten higher. I’m occasionally using AI IDE assistants to explain code so I understand it better. I’m also occasionally using these tools to generate test ideas or test data; I’m not in the habit of that yet, so I need to build that habit. When I work solo, I need something to help me think more laterally. My little team welcomes any experiment ideas I propose, so I’m lucky! There will be no resistance to trying new things. (I live in magical unicorn land!)

5 Likes

I love your visual here! I’m curious if you think that AI tools are really helping to improve the writing? I’ve heard good things, but I haven’t really tried to have it help me. I like to think I’m already a good writer!

2 Likes

Day 13

The As-Is: Consider your team’s current testing practices, how work flows from feature to delivery, and the role of testing in that flow.

Test Design

Of the areas listed in the challenge, I think test design is our biggest challenge. I try and assist developers to do exploratory testing, but often the tests are limited to the acceptance criteria, rather than searching for hidden requirements and looking at how changes in one place can affect another.

Where does AI add value?

My ideal would be:

  • Create an internal model from open source, one from Hugging Face perhaps.
  • Train it on a few things:
    • Our internal model for how the app, backend and web systems work (we have diagrams, specs etc)
    • Jira tickets and associated comments
    • Test results from our CI
    • Commit history in Github for hotspots of change.
    • Books of test design techniques that can be used: boundary values, state transitions, equivalence partitioning. The good stuff.
  • Then we can ask questions of what to cover testing wise.
  • I would also add some conditions on top to filter the model’s answers to fit how and what we want to test, plus how much depth, with a few previous examples to guide it.
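One lightweight way to approximate the “conditions on top” idea, without training a model at all, is to assemble a grounded prompt from the internal docs and the chosen techniques. A minimal sketch, with a hypothetical `build_test_design_prompt` helper and made-up docs; real retrieval would use embeddings rather than keyword overlap:

```python
def build_test_design_prompt(feature, docs, techniques):
    # Keep only internal docs that mention a word from the feature name;
    # a real pipeline would rank docs by embedding similarity instead.
    words = feature.lower().split()
    relevant = [d for d in docs if any(w in d.lower() for w in words)]
    return (
        f"Feature under test: {feature}\n"
        "Internal context:\n"
        + "\n".join(f"- {d}" for d in relevant) + "\n"
        "Apply these techniques: " + ", ".join(techniques) + "\n"
        "List test ideas beyond the acceptance criteria."
    )

docs = [
    "Checkout service calls the payments backend over REST",
    "Search results are cached for 10 minutes",
]
prompt = build_test_design_prompt(
    "checkout payments", docs, ["boundary values", "state transitions"])
```

Filtering the context to what is relevant keeps the model grounded in the team’s own systems, which is the whole point of the internal-model idea above.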

Become an AI in Testing champion: Imagine you need to convince your peers, manager or company to invest in AI in Testing. Based on your ideas from the previous tasks, create a visual or short report that outlines your approach.

I would try and convince people with:

  • Time saved generating tests and designing them.
  • Better able to find more important problems first.
  • The whole team can test to a consistent set of scenarios.

And it’s a very cool thing to do, of course. :slight_smile:

3 Likes

I haven’t tried an AI assistant in my IDE for explaining code, I like that idea. Thank you. :slight_smile:

Sometimes it’s hard to follow code, especially for callbacks, event streams and state notifiers!

1 Like

Hello, @billmatthews and fellow participants,

I loved today’s challenge. It gave me a thought window to identify high-value tasks that I can do with AI.

I have also mind-mapped my strategy for building a case (with study and results) for advocating AI in Testing.

Here is the mind map summary of today’s task:

Also, I did a video blog explaining my thoughts and case study plan for today’s task:

Day 13: Building Case Study for AI in Testing | Become an AI in Testing Champion-Ministry of Testing (youtube.com)

Do share your feedback and thoughts!

Thanks,
Rahul

4 Likes

Just be careful, because sometimes I’ve seen an AI assistant start making things up about code that isn’t in the repo! As long as you keep it simple, it’s good.

6 Likes

Hello All, can’t believe I finally caught up to the right day of the challenge :grimacing:

I used to work for an agency that would create A/B tests for different clients. We didn’t follow a structure or use management tools, and because of insane time constraints we often had to cut down or divide test planning tasks to fit whatever time we had.

The problem:
This approach resulted in test execution based on incomplete or inaccurate plans.

How can AI help:

  1. AI would have been a great help in accelerating the test planning phase and saving much-needed time, especially in writing test cases: most clients were e-commerce clients, so the basic test suite would be much the same for most of them, and the tester could then focus only on writing tests for the elements changed in the variant against the original.
  2. Training AI to identify the severity and priority of issues.
  3. Getting help with analyzing and reporting test results.
  4. We always dismissed the option to automate, since A/B tests routinely change elements on the website and the cost was not worth it, but I believe that with AI assistance it might become an option on the table.
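For point 2, severity and priority standards are team-specific, so any model would need training or explicit rules. A toy keyword-rule sketch of the idea, with a hypothetical `classify_severity` helper and made-up rules; a real setup would learn from past triaged tickets rather than keyword lists:

```python
def classify_severity(report, company_rules):
    # First matching rule wins; the dict ordering encodes precedence.
    text = report.lower()
    for severity, keywords in company_rules.items():
        if any(k in text for k in keywords):
            return severity
    return "low"

rules = {
    "critical": ["payment fails", "checkout broken", "data loss"],
    # For this agency, spelling errors are high: they can cost the client.
    "high": ["spelling", "typo", "broken layout"],
}
label = classify_severity("Spelling mistake on the landing page banner", rules)
```

Encoding the team’s own standards explicitly (spelling errors as high severity, for instance) is exactly the kind of thing generic AI tools get wrong without supervision, which is why the training process needs close monitoring.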

Risks:

  • Although incorporating AI within the test process would have increased efficiency, the training process should be monitored closely, especially regarding point 2 above, since we had different standards for severity and priority (e.g. spelling errors might have cost us the client).
  • Automation prospect: the AI tool would need to handle UI changes well, or it would not be a valid option, I guess.

2 Likes

I am testing backend servers for mission-critical applications.
I cannot see how AI can help except in providing ideas (code snippets) for automation.
We must follow certain processes to pass audits and retain certifications. (These are time-consuming but require approval that AI cannot be trusted with.)
We need to access multiple servers which require dual authentication with single sign-on. (An AI will not get permission.)

I am short on time for today’s challenge, and this one takes some time to think about, so I am going to take more time and get back to it at a later point. And I love the idea behind the task; I think it deserves a good amount of thinking.

I know that is not the point of the daily challenge, but it is sometimes hard to make it work alongside work obligations and private life.

Not saying this justifies it, but at the moment I have no other choice :slight_smile:

1 Like

Hey there :wave: :wave:

  • Consider testing related activities such as:

    • Test Data Management - 5%. I automate most of the data that I use in testing, so I flood the test database with new records every time :sweat_smile:.
    • Test Design - 25%. After creating some templates, the design is pretty much done.
    • Test Planning and execution - 40%. Automating test cases in Cypress alone, I need to generate test cases dynamically so that I spend less time on maintenance and implementing new test cases. Running all the test cases (now ±1700 e2e TCs) takes a lot of time too :grimacing:.
    • Managing Defects - 20%. Bugs need to be reproduced, especially those reported by clients, so sometimes it takes a LOT of time.
    • Test Reporting - 10%. The report I have right now is in Cypress Cloud, but I know that I need to improve it.
  • Where does AI add value?: Based on your experiences in the challenge so far and using contributions from others, consider:

    • Where would AI add the most value in your workflow?

      • Helping me to create tickets based on my templates, improving my reports and helping me to automate test cases with better solutions.
    • Pick one area of improvement (or more if you want) that you want to focus on, how would you use AI in that area, and what would the impact be?

      • I have already started using ChatGPT to help me create tickets. I gave it my templates using some old tickets (title, the problem, and the ticket examples); now I just pass in the problem and the ticket is ready :raised_hands:.
    • What AI Risks does it introduce, and how would you mitigate them?

      • The risk would be writing tickets that don’t make sense if I don’t review the generated text before creating the ticket.

Become an AI in Testing champion

Here is an example of an implementation I want to use in my company: basically, I want to generate BDD documentation by passing in the Cypress spec.
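A rough sketch of the spec-to-BDD idea, using plain regex in Python to pull `describe`/`it` titles into a Gherkin-style outline. The `cypress_to_gherkin` helper and the sample spec are hypothetical; a real implementation would parse the spec’s AST and could use an LLM to phrase proper Given/When/Then steps:

```python
import re

def cypress_to_gherkin(spec_text):
    # Pull describe/it titles with plain regex; a robust tool would parse
    # the spec's AST instead of pattern-matching on source text.
    feature = re.search(r"describe\(['\"](.+?)['\"]", spec_text)
    scenarios = re.findall(r"\bit\(['\"](.+?)['\"]", spec_text)
    lines = [f"Feature: {feature.group(1) if feature else 'Unknown'}"]
    lines += [f"  Scenario: {s}" for s in scenarios]
    return "\n".join(lines)

spec = """
describe('Login page', () => {
  it('shows an error for a wrong password', () => {});
  it('redirects to the dashboard on success', () => {});
});
"""
print(cypress_to_gherkin(spec))
```

Even this crude mapping keeps the documentation in sync with the tests automatically, which is the value of generating BDD docs from the spec rather than writing them by hand.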

1 Like

As English is not the first language in our region (but the second), AI tools for improving writing may be more useful here. Sometimes, even after we have finished our own writing, we may still click the AI assist button to check if there are better phrasings.

It could also let us hire more student helpers, who may have the creativity to find bugs but may not be as strong at writing in English, which could save some hiring costs.

Great visuals Joyz. I might steal this :sweat_smile:

1 Like

@crmueller What if the AI understood the requirements of your audits and certifications?