🤖 Day 15: Gauge your short-term AI in testing plans

Well done! You’ve made it halfway through our 30 Days of AI in Testing challenge! :tada:

After covering so much content, the midpoint is a great time to take a breather and reflect on our individual readiness to adopt AI in our testing practices. As we’ve discovered in recent tasks, the path to integrating AI into our testing workflows is not a one-size-fits-all approach. Each individual tester may have unique circumstances, priorities, and constraints that shape their adoption readiness.

Today’s task aims to provide a snapshot of our community’s AI adoption readiness by asking a straightforward yet insightful poll.

Task Steps

  1. Answer this question:

How likely are you to use AI in Testing within the next 6 months?

  • Very unlikely
  • Unlikely
  • Likely
  • Very likely
  • I already use AI in my testing activities
  2. Bonus Step: If you’re comfortable doing so, share your answer to the poll by replying to this post. Explain the reasons behind your choice, such as organisational priorities or resource availability. What specific areas or use cases are you considering, if any?

Why Take Part

  • Share Your Perspective: By contributing your stance and rationale, you contribute to the collective understanding of the community’s inclination towards AI adoption, which can inspire, motivate, and perhaps even shift perspectives on readiness and the pace of change.
  • Learn from Others: Engage in the discussion to gain insights from others’ plans, experiences, and strategies, which can inform and refine your own approach to AI in testing adoption.



Hi there :raising_hand_woman:

Before entering the challenge, I made some attempts at using AI in my daily tasks at work, but I always felt lost, as if there should be a better way of accomplishing what I was trying to do.

With our daily tasks I found the path I was searching for. Now I use ChatGPT to create tickets for me, and I want to find a way for ChatGPT or another tool to help me document all my automated tests.

This challenge is helping me to improve my professional skills, thank you guys :wink:


Hello guys,

After participating in the challenge, I’ve realized the potential of integrating AI into my testing processes. Tasks such as test automation and test scenario creation stood out as perfect places to use AI. The outputs generated during the challenge demonstrated the tangible benefits AI can bring to these aspects of testing.

However, while embracing AI in testing, I am mindful of privacy concerns associated with the data supplied to AI systems. It’s essential to adhere to organizational regulations & ensure compliance with data protection standards.

In summary, the recent challenge :30_days_of_testing: has significantly influenced my decision to integrate AI into testing within the next six months.


Hello my fellow testers,

The main stumbling blocks for me are the data privacy issue and the fact that none of the AI testing tools currently on the market really know my context. Until both of these issues are resolved, AI as a whole will continue to be of limited use to me beyond using it like a search engine or asking it to explain a block of code I don’t understand.

However, if both of those issues are resolved in the near future, I can see trying to use AI in all areas of the testing process, from analysing the requirements to helping with the maintenance of automated tests.

I am choosing to be optimistic in my answer to the poll and hope that these issues get answered within the next six months or so.


It obviously reduces the effectiveness of using AI in many contexts, but sometimes I find I can come up with a simple “toy problem” analogous to the actual issue I have (where I care about data privacy) and still get some insights from, say, an LLM without sharing any data that matters.

Of course, if the fact that no AI tool knows your context is already enough to limit its usefulness, this workaround only makes that worse!


I voted likely. My reasons are - as usual - weird.

I expect to be searching for work after this month, and I intend to use some of that time to explore the possibilities of using AI in testing plans. If nothing else, it would give me a bit more familiarity with the tools while I look for work.

I don’t have any specific use cases I’m looking at - I’m looking to explore rather than to fill a specific need.


It’s smart to simplify problems for AI as a workaround to avoid data privacy concerns, but yeah, you’re right: if the AI doesn’t get our context right, it might not help much. Like @adrianjr said, let’s hope these issues get resolved within the next six months or so.


Day 15

Currently, I use AI in my testing in 3 ways:

  • Generating utility scripts for bespoke test data, usually throwaway.
  • Putting together tool recommendations, comparing options, surfacing ways of doing things I don’t know.
  • Coming up with further test design ideas, based on a list I have generated.
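To make the first bullet concrete, here is the sort of throwaway script a model typically hands back for bespoke test data. The record fields, names, and seed here are purely illustrative, not from any real project:

```python
import json
import random
import string

def make_test_users(n, seed=42):
    """Generate n fake user records as throwaway test data."""
    rng = random.Random(seed)  # seeded so reruns produce identical data
    users = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        users.append({
            "id": i + 1,
            "username": name,
            "email": f"{name}@example.com",  # example.com is reserved for testing
            "active": rng.choice([True, False]),
        })
    return users

if __name__ == "__main__":
    print(json.dumps(make_test_users(3), indent=2))
```

Seeding the generator keeps runs reproducible, which matters when a failing test needs the same data again.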

However, in all of these things, I have to sanitise prompts somewhat so as not to risk exposing sensitive information.
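As a rough sketch of what that prompt sanitisation can look like, assuming simple regex redaction of emails, Jira-style ticket IDs, and IP addresses. These patterns are illustrative, not exhaustive:

```python
import re

# Illustrative patterns; real sanitisation would need a broader, vetted set.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),   # email addresses
    (re.compile(r"\b[A-Z]{2,5}-\d+\b"), "<TICKET>"),       # Jira-style ticket IDs
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP>"),  # IPv4 addresses
]

def sanitise_prompt(text: str) -> str:
    """Replace obviously sensitive tokens before a prompt leaves the machine."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```

A redaction pass like this is cheap insurance, but it only catches what the patterns anticipate, so it complements rather than replaces reviewing a prompt before sending it.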

I have been looking at GPTs from OpenAI: Introducing GPTs

You can (apparently) opt out of your chat data being used for overall ChatGPT training.

Alternatively, one could deploy a model locally and train it using select internal data. There seem to be a few ways to do this, so I will start to experiment with them.
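One possible route is Ollama, an open-source runner for local models that serves a REST endpoint on localhost:11434 by default. A minimal sketch, assuming Ollama is installed and a model has been pulled (llama3 here is just an example name):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally hosted model; data never leaves the machine."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask_local_model` requires a running Ollama server and a pulled model; the appeal for the privacy concerns discussed above is that the prompt stays on your own hardware.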

I personally need more depth in this area, beyond naively using tools and maybe (or maybe not) giving them the right information. However, the prompt engineering part of 30 Days of AI in Testing has been really, really useful. I can already see the benefits in the output from the models I do use.


I voted likely.

As I mentioned in my introduction post 2 weeks ago, my company is currently going head-on into AI, and there’s an expectation that we will explore incorporating AI tools into our daily work. I can’t tell if that will happen in the next 6 months, but by the end of the year, maybe within the next 12 months, I expect to have a company-wide resource of tried, tested, and recommended AI tools; there might even be a requirement to introduce some of them in our work.

This challenge forced me to try AI tools for testing work, and I’ve been trying to pick tasks where I have previous work to benchmark against. So far, my experience is that the tools I tried mostly do OK work; occasionally they cover something I did not think about, but almost always there’s at least one thing they left out. So I don’t really see myself outsourcing a lot of work to these tools. When I do the work myself I might be a little slower, but I get better results and benefit from gaining experience and being able to pattern-match a similar case in the future.

This is not really testing related, but there are two areas where I will experiment with AI a bit:

  • Instead of writing blog posts, dictating them and then using AI to turn the recording into text. I used this for one of my presentations, and while the process was very slow (more than 1 minute of processing for 1 minute of audio), the results were pretty good. I’m still not very used to editing speech, but the process might become easier and faster as I gain more experience.
  • Translating things offline. I have a bunch of notes in my native language that I would like to translate into English and publish online. I could do it myself, but it’s thankless and not very engaging work, so I will try some of the translation models and see how far they can get me. So far I have only explored which tools could be used.

The main path towards solving both the data privacy (from the perspective of keeping your data private) and context understanding challenges is likely to be building and hosting your models internally. For LLMs, this will probably be using open-source pre-trained models (so nothing is shared with 3rd parties) and then some fine-tuning and domain adaptation approaches to improve contextual understanding. It’s doable but really needs people who can build and train models. I can point you in the direction of some resources if that’s of interest.

We may see some general-purpose “testing” LLMs and I think that is what the team at https://test.ai/ are doing which might solve the contextual understanding but not the privacy concerns…unless they adopt differential privacy techniques.


Well… if you want to dive into local LLMs, let me know. I can share some resources, and we might be doing something along these lines later in the challenge.


Hello @dianadromey and colleagues,

Thanks for today’s task. It prompted me to look over my AI in Testing plans for the coming year.

I also added a new prompt for drafting bug reports. I will be adding it to the AI Prompt Repository for Testers - Rahul’s Testing Titbits

I also put together a mindmap of my AI in Testing plans.

I did a video blog on today’s task where I explained how one can use AI in testing and make plans to use AI in day-to-day work.

Check it out here: My AI in Testing Plans | Drafting Bug Reports via AI - Day 15 of 30 Days of AI in Testing Challenge - YouTube

Do share your feedback.



I will be using AI in two key areas:

  1. Tools such as Grammarly (I already use this :smile:) and ChatGPT to give me more information, plus AI embedded in tools such as Jira.
  2. Code: Testing out how reliable the code generated by LLMs is to work out how much I can use it.
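For the second point, one lightweight way to measure how reliable generated code is: run each candidate against a small suite of known cases and record the pass ratio. A minimal sketch, where `llm_is_leap_year` is a hypothetical stand-in for whatever function the LLM produced:

```python
def score_candidate(func, cases):
    """Run a candidate function against (args, expected) cases; return pass ratio."""
    passed = 0
    for args, expected in cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash counts as a failure
    return passed / len(cases)

# Example: pretend an LLM wrote this leap-year checker.
def llm_is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

CASES = [((2000,), True), ((1900,), False), ((2024,), True), ((2023,), False)]
```

Tracking the ratio across several prompts gives a rough, repeatable sense of how much generated code can be trusted before editing it by hand.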

I have answered ‘I already use AI in my testing activities’ in view of using Copilot within Visual Studio.

As this is very much a small part of the overall technology, I was wondering if ‘Very likely’ was more accurate.

But I am an optimist :smiley:

Definitely, going forward, I see a great deal of scope here and have already had discussions with some colleagues.

I think my first step will be drafting a company-wide guideline on the use and control of ML in general, and in testing in particular.

This is a tool and it is up to us how we use it.


Hey Bill, I am keen to snag some resources on this :slightly_smiling_face:


Hey folks,

As we hit the midpoint of our 30 Days of AI in Testing challenge, I’m excited to share how AI has become a real game-changer in my day-to-day testing routine for iOS development using Swift and XCUITest.

Test Case Generation:
Generating test cases used to be a time-consuming task, but with AI it’s become much quicker. By analysing past test cases and code changes, I can surface potential areas of risk and automatically generate candidate test cases. This saves me a lot of time while helping to keep test coverage thorough.

UI Test Automation with XCUITest:
UI testing in iOS apps can be tricky, especially with dynamic elements. Thankfully, pairing XCUITest with AI-assisted tooling makes it much easier: AI-based element identification can help tests adapt to different screen sizes and handle localization nuances. This reduces manual effort and increases the reliability of UI tests.

With AI by my side, testing iOS apps has never been smoother. Looking forward to exploring more AI-powered solutions in the days ahead! :rocket:


We have a task later that touches on this, so I’ll create a resource list for that.


I’m part of an AI engineering team and passionate about using AI tools for testing and automation. I’ve already experimented with AI for model creation, training, and testing using Vertex AI, with designing and generating end-to-end testing datasets using ChatGPT, and with testing code and automation using Gemini. I’m eager to delve deeper and explore how AI can be applied to various aspects of testing, including different test design approaches, code testing, automation script creation, project quality tracking, reporting and metrics generation, and even monitoring.


Test Strategies and Test Plans are already designed (based on the roadmap) for projects in the next 6 months, so I cannot introduce new ways of testing. It is unlikely that I will use AI for testing in the short term. But I will start using AI tools (especially ChatGPT) to get ideas if I am stuck with automation.



Before this challenge, I used ChatGPT and the Postman AI assistant a little, mostly for searching for information, training, and generating realistic data. Now, thanks to this challenge, over these two weeks I have learned what the advantages of AI tools are, how they can be used in testing, the ways in which they are used, and the possible risks.

I would firstly like to dig deeper into the use of Postbot in the near future and apply it practically in my workplace for creating API documentation, designing test cases and test suites, and debugging, and also start to use other AI tools in my daily work tasks.