🤖 Day 30: Consider what your AI Test Buddy would do for you

We’ve reached Day 30 of our 30 Days of AI in Testing Challenge!! :partying_face: Big congrats for participating on any day throughout this challenge! :clap: All contributions make a difference and add to the value of this month-long initiative, so thank you for getting involved in whatever way you have. :pray:

Today, we invite you to dream big and envision the ultimate AI companion for your testing adventures. Imagine an AI assistant tailored perfectly to your needs, enhancing your testing processes and acting as your right-hand entity in navigating the complexities of software testing.

Today’s Task:

Design your ideal AI testing assistant. Think about the functionalities, attributes, and interactions that would make this AI companion invaluable to your daily testing activities.

Task Steps:

  1. Envision the Perfect Assistant: Reflect on your daily testing routines and identify areas where an AI could offer support. What features and capabilities would make an AI assistant truly effective for your needs?
  2. Design the Persona and Interface: Get creative with how your AI Test Buddy would present itself. What would its persona be? How would it communicate with you, and through what interface?
  3. Outline Key Functionalities and Limitations: Detail the tasks your AI assistant would excel at. Could it automate mundane tasks, generate test cases, or provide real-time insights? Equally important, acknowledge what it wouldn’t do.
  4. Share Your Vision: Bring your AI Test Buddy to life by sharing your concept by replying to this post. Feel free to include sketches or a detailed description. Paint a picture of how this assistant would integrate into your workflow, improve productivity, and enhance your approach to testing.

Why Take Part

  • Inspire Innovation: Set a vision for future tools and inspire potential tool developers with what’s truly desired in the field.
  • Anticipate the Future: This task encourages you to think ahead about the evolving role of AI in testing, potentially preparing you to embrace upcoming advancements.



Hello Simon & community,

Thank you again for the wonderful :30_days_of_testing: Challenge. I thoroughly enjoyed it and learnt a lot. It feels like the challenge started only yesterday. Time flies!

My ideal AI test buddy would offer:

  • Automation support (Script Generation/Maintenance, Self-healing tests etc.)
  • Test Results/Patterns Analysis, Insights
  • Test Case Generation
  • Defect Prediction

My AI companion is Sheldon (yes, you guessed it right: the famous fictional brilliant physicist from The Big Bang :earth_asia: Theory). Unlike the rigid Sheldon, my AI buddy would have a friendly, approachable personality.

The UI should have different widgets, each covering one of the features I have mentioned, along with a chat. A desktop application would be handy, although a web application also works for me. Plus, it should be able to work through voice commands.

The functionalities would be the same as the features listed above.

The limitations would be:

  • Lack of creativity/critical thinking
  • Potential bias
  • The need to constantly validate its suggestions and feed it the latest data
  • Integration challenges

Sheldon would make my job faster and a bit better. It would take care of repetitive work like writing basic test cases, freeing me up to focus on more important things, like figuring out the best ways to test new features. Sheldon would also constantly analyse the test results and let me know if there’s anything I need to worry about.


Hello, @simon_tomes and fellow learners,

It’s a little emotional for me to submit the last answer to this challenge. Over the last month, MoT and all of us learners have become like a family.

Loved this last challenge of writing up our vision of an AI Testing Buddy. Here is a draft of my AI Testing Buddy (Assistant), aka AI Testistant, in this mindmap. Check it out here:

I have also made a video (like always :D) about my thoughts on this AI Testing Assistant. Check it out here:

Do share your thoughts and feedback!

Signing off :orange_heart:


:rocket:Congratulations to everyone who participated, and thank you for your contributions and engagement throughout this month-long initiative.:100::tada:

Here’s to continued learning, growth and collaboration in the exciting field of AI.
Cheers to the community!!:sparkles::dart:

Signing off with gratitude and excitement for the future!:heart:
@parwalrahul @manojk @connmc @conrad.braam @msh @adrianjr @mirza @sara @sarahk @indres
++ Simon & Rosie

AI revolutionizes the testing process, making it faster, more efficient, and insightful.
By automating mundane tasks and providing real-time analysis, it frees testers to focus on high-value activities like test strategy design and exploratory testing.

With AI by their side, testers can navigate complex testing scenarios with confidence, ensuring the quality and reliability of software products.
Together, they form a dynamic partnership, driving continuous improvement and innovation in the field of software testing.

Limitations :slightly_smiling_face:
AI acknowledges limitations such as lack of creativity or critical thinking, potential bias in recommendations, and the need for constant validation and updating of data. Integration challenges with existing tools and systems may also arise.


Hi my fellow testers, well, what a bittersweet day this is. Thank you all for your contributions over the past month; it has truly been a daily highlight to share my thoughts with you all and read your thoughts in return.

Now onto today’s final challenge.

Envision the Perfect Assistant: Reflect on your daily testing routines and identify areas where an AI could offer support. What features and capabilities would make an AI assistant truly effective for your needs?

I would need my ideal AI testing assistant to be completely context-aware at all times. Maybe it would need to be baked into the operating system, so that it knows every app or program I have open and can be of most use for all the tasks I need it to do.

It would be great if it could analyse my entire day-to-day workflow, learn from it, and then start suggesting ways it could help, e.g. automating parts of it itself, or suggesting efficiencies that it could implement.

At a less broad level it would need to be my complete test data generator in every format I need the data in.

Also, as it is completely context-aware, I would be able to feed it the requirements and any documents for an upcoming piece of work. From those it could test the requirements and also generate a set of test cases for each piece of software they apply to. I would also be able to tell it the type of testing I want cases for, e.g. testing for performance.

I would not want it to do any actual testing for me, as that is the part of the job I love the most, and I would also hope a human will still be better at it, at least for as long as possible.

The AI assistant would be able to self heal any automated tests where they are failing due to locator changes and upon doing so produce a detailed report on what exactly it did. It would also be really easy to undo any of these automated actions.

Any bugs the automated tests find would have individual reports auto generated with screenshots and clear reproducible steps.
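The self-healing idea above can be sketched roughly like this: try the recorded locator, fall back to alternates, and log every substitution so the change is both reportable and reversible. Everything here (the class, the method names, the `page.find` interface) is hypothetical and for illustration only, not a real framework API:

```python
from dataclasses import dataclass, field


@dataclass
class HealingLocator:
    """A locator that heals itself when the primary selector stops matching."""
    primary: str
    fallbacks: list
    healing_log: list = field(default_factory=list)

    def resolve(self, page):
        """Return a selector that matches on `page` (anything with a
        find(selector) -> element-or-None method)."""
        if page.find(self.primary) is not None:
            return self.primary
        for candidate in self.fallbacks:
            if page.find(candidate) is not None:
                # Record the substitution so it can be reported and undone.
                self.healing_log.append({"was": self.primary, "now": candidate})
                self.primary = candidate
                return candidate
        raise LookupError(f"No locator matched; tried {self.primary!r} and fallbacks")

    def undo_last(self):
        """Revert the most recent healing step."""
        if self.healing_log:
            self.primary = self.healing_log.pop()["was"]
```

The `healing_log` is what would feed the detailed report the post asks for, and `undo_last` makes each automated change easy to roll back.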

Design the Persona and Interface: Get creative with how your AI Test Buddy would present itself. What would its persona be? How would it communicate with you, and through what interface?

I feel I am biased here towards an assistant with traditional text and speech input and output, as all the current LLMs seem to have this format, but I am open to other ways of interacting once I am aware of them.

It would have the persona of an expert software tester with expertise in requirements analysis, impact analysis & the occasional testing joke.

Outline Key Functionalities and Limitations: Detail the tasks your AI assistant would excel at. Could it automate mundane tasks, generate test cases, or provide real-time insights? Equally important, acknowledge what it wouldn’t do.

I think I’ve listed the main functionalities I would need it to perform above so I’ll focus here on the required limitations.

Reversible actions: any action upon the software the assistant performs needs to be easily undoable.

Data privacy: no data leaves the confines of our company, but it can be used as training input for the model as long as it’s only used locally.

Sleep mode: it’s possible to temporarily deactivate the assistant if you just want to go old school and do something yourself that the AI would now normally do.


Hi, everyone,

Congratulations to everyone who participated in this testing challenge. It was really great to deepen our knowledge, experiment with AI tools, and hear many different opinions and insights :tada: :partying_face:

I imagine that the ideal AI testing assistant would be a universal, widely applicable tool used inside our company by the entire IT team: developers, business analysts, project managers and, of course, software testers.

It would have different functionalities and integrations with other platforms, so it could be adapted to perform different tasks, such as test automation, visual testing, security testing, self-healing tests, repetitive tasks, data analysis, prevention and prediction, reporting, etc.

This would help our company avoid juggling different resources; employees would be able to work with one main tool, saving time, financial costs, etc.

Another advantage would be that the tool would use the latest information and work with real data, so employees would get only relevant data.

In addition to all this, it would be a green tool that uses only sustainable energy resources, and data for training and analysis would be collected responsibly.

Employees would be assured that internal information and data would be kept secure and private, with special protection in place.

Since this tool would help avoid routine tasks and reduce the potential risks of errors and inaccuracies, it would increase the team’s productivity and creative potential, and employees could devote more time to innovation.


I think it would need to have different interfaces - I can imagine at least a chat, code comments and virtual assistant. Chat and code comments are pretty self-explanatory, especially since tools with these are available already. By virtual assistant I mean something like Microsoft Office Clippy, but actually useful. It could watch what I am doing and at the exact right moment offer help - “looks like you are trying to refactor this function, do you want me to make changes in all calling sites?” If we are dreaming, it could even discover that I’ve been reading Jira ticket and say “do you want me to download container image with that change, start it up and set up screen recording and log analyzer for the testing session that you are about to start?”

Of course, I wouldn’t be able to use such an intrusive tool on my work computer unless my company had some kind of agreement with the vendor. Ideally it would work fully offline and locally, or only on the internal company network.

It should have holistic awareness of my team’s work - there are many code repositories we own or participate in, and sometimes you need to coordinate a change across a few of them. The ideal assistant would help with that coordination, or even teach those team members who are mostly comfortable in one specific corner of our work.

I know how to do my job. But sometimes I have to do things I do rarely or have never done before. This is where a testing buddy could be really helpful. I could ask it questions like “I have this problem; how have other teams in my company solved it?”, “I need access to this system; what is the process?” or “I have this HR/payroll-related problem; what is the proper way of solving it?”. If it could answer them truthfully, or even point me in the right direction, that would be a real time saver.

Finally, my company announced it wants to achieve net-zero emissions by 2030. So AI testing buddy would need to somehow fit in that plan.

I know it’s not a fully polished plan, but these are a few points that come to mind.


Hi all, the last post on the 30 Days, but hopefully not the last I hear from all involved.

Currently, my role is to maintain a current set of automated tests, find ways to improve them, ensure overall quality and delivery, and, probably the most enjoyable part, help teammates with minimal or no coding experience to write automated tests in .NET.

Based on this I would suggest my Assistant would:

Where I have improved the code with a refactor, search for and highlight other parts of the code where this can be applied. For refactoring I use ReSharper, so perhaps a ReSharper-type assistant that can drill through the code for the specific refactoring and suggest where to apply it, not automatically change it.

Be scheduled to find suggestions and posts from some of the Software Experts I follow and prompt me on where these can be applied.
This could also be applied to QA experts I follow and interactively suggest the latest updates and improvements I can apply.
To me, there are so many “experts” out there that it is difficult to cut through the noise to get to the best suggestions.

For the last of my main remits, I can see the likes of GitHub Copilot assisting my colleagues to write code, but I am not sure I would go further than that.
I think that in the case of coding, PRs are the best way to teach. Let people write what they think and then analyse and discuss what they have done.

I always say, if you are not enjoying what you are doing and can’t have a laugh in work, then you are in the wrong job.
The persona would be upbeat and relaxed, as I would be designing it for others too. So hopefully that would reflect my own persona.

I think my assistant would have to be trained up in the business-critical areas that I have little or no experience in, so Security, Performance and maybe some DevOps tasks.
For mundane coding, I have already seen GitHub Copilot save time in assisting with the likes of creating DTOs.
The Assistant would keep me current on the latest .Net and QA evolutions, and help in raising the standards already in place.
The Assistant would also understand the levels of knowledge and expertise and adjust accordingly to the person using it.

As for things it should not be used for: maybe simply saying it would only ever be a tool sums it up. It would not be used as a shortcut that circumvents any security standards, and it would not get access to production code. Any tools would have to be analysed by my IT Security colleagues.

Failing all the above, my assistant will make me a cup of tea and give me the next set of lottery numbers for a £10m jackpot, and I will enjoy early retirement :smiley:

Thanks to everyone for making this an interesting and enjoyable project.


I can’t think of an ideal assistant yet, but over the past two days I tried to build a mini bug reporting assistant with everything I learned in these 30 days :intellectual:

AI Bug Report Generator: https://bugreport.oursky.app/

This is a Retrieval-Augmented Generation (RAG) tool built with the following high-level procedures:

  1. Extract a long list of old bug reports from our GitHub repositories
  2. Process the issues into a CSV with different columns, e.g. “Title”, “Description”, “Steps to Reproduce”, “Expected Results”
  3. Vectorize the CSV data and turn it into a database for indexing, e.g. Pinecone
  4. Make a chatbot integrated with LLM, e.g. Streamlit x LangChain x Google Gemini
  5. Design questions to guide user to provide required context, e.g. steps to reproduce, expected result
  6. Find some closest samples from the vector DB
  7. Customize a prompt to include the samples for few-shot training
  8. Send the enhanced prompt to LLM
  9. Return the response to user with some more testing tips
  10. Deployment :rocket:
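As a rough illustration of steps 2 and 5–8 above, here is a toy retrieval-and-prompt loop. It uses a bag-of-words cosine similarity as a stand-in for a real embedding model and vector DB (the actual project uses Pinecone and Gemini via LangChain), stops short of the real LLM call, and all names and sample data are invented for the example:

```python
import math
from collections import Counter

# Steps 1-2: old bug reports, already processed into structured rows.
OLD_REPORTS = [
    {"title": "Login button unresponsive", "steps": "tap login button on iOS"},
    {"title": "Crash on image upload", "steps": "upload a png larger than 10 MB"},
]

def embed(text):
    """Toy 'embedding': a bag-of-words Counter (a real system would
    use a vector model and store the result in a vector DB)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def closest_samples(user_context, k=1):
    """Step 6: find the k most similar old reports."""
    q = embed(user_context)
    ranked = sorted(OLD_REPORTS,
                    key=lambda r: cosine(q, embed(r["steps"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(user_context):
    """Step 7: build a few-shot prompt containing the retrieved samples."""
    samples = closest_samples(user_context)
    shots = "\n".join(f"Example: {s['title']} -- {s['steps']}" for s in samples)
    return f"{shots}\nNow write a bug report for: {user_context}"

# Step 8 would send build_prompt(...) to the LLM.
```

The point of the retrieval step is that the few-shot examples in the prompt match the user's bug, which tends to keep the generated report in the team's own style.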

More than 70% of the scripts in this mini project were generated by GPT, including much of the debugging process and the README files.

You can also check out the source code on the GitHub repository:

Hope you enjoy it :yay:


This is really great work, @joyz. Awesome :clap: :clap:
Thanks for sharing the GitHub repo!


Hello everyone.

I envision the ultimate AI test assistant:

In the course of my testing routine, I have identified numerous areas where an AI could provide invaluable support. Equipped with advanced capabilities and features, the AI assistant would streamline testing processes and serve as an indispensable aid in navigating complex software.

Designing the persona and interface:

My AI Test Buddy would embody a friendly and knowledgeable persona, communicating seamlessly through a user-friendly interface. The interface would include intuitive design elements and interactive features to ensure effortless interaction and comprehension.

Showcasing key functionalities and limitations:

My AI Assistant would excel at automating everyday tasks, creating test cases and providing real-time insights. However, it would also acknowledge its limitations, knowing that certain tasks require human intervention and expertise.

Sharing my vision:

Bringing my AI test buddy to life, I envision its integration into my workflow through detailed descriptions and illustrations. By enhancing productivity and refining testing approaches, this assistant would revolutionise my testing experience, inspiring innovation in future tools and preparing me for the evolving role of AI in testing.

Thank you


Day 30

In my mind, this is tied to contemporary exploratory testing championed by Maaret Pyhäjärvi. We should use AI to create assistants that maximise what great testing looks like over squeezing efficiencies out of what we already do.

Concepts such as:

  • The Automation Gambit - creating an executable specification encouraging learning of how to test, be resourceful and document all at once.
  • Parameterising unit tests to maximise their value and find new information.
  • Unattended testing - generating data for a report for example, automating the data creation, changing report generation parameters and checking the output.
  • Attended testing - using logs/events/metrics to augment your exploratory testing.
  • Bug fixing - fixing bugs together with developers or fixing them yourself
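To make the "parameterising unit tests" bullet concrete, here is a minimal illustration using the standard library's `unittest.subTest` (pytest's `@pytest.mark.parametrize` expresses the same idea). The discount rule and its boundary cases are invented for the example; the value is that one test body now probes several inputs, including the boundary, where new information tends to hide:

```python
import unittest

def discount(order_total):
    """Toy rule under test: 10% off orders of 100 or more."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

# Parameterised cases: each tuple is (input, expected output).
CASES = [
    (99.99, 99.99),    # just below the boundary: no discount
    (100.00, 90.00),   # exactly on the boundary: discount applies
    (250.00, 225.00),  # well above the boundary
    (0.00, 0.00),      # degenerate input
]

class DiscountTest(unittest.TestCase):
    def test_discount_boundaries(self):
        for total, expected in CASES:
            # subTest reports each failing case individually instead of
            # stopping at the first one.
            with self.subTest(total=total):
                self.assertEqual(discount(total), expected)
```

Adding a new case is one line in `CASES`, which makes it cheap to encode each new thing you learn about the behaviour.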

Envision the Perfect Assistant

I would call it Zenko, after the benevolent and clever fox of Japanese folklore that grows wiser with experience. Also, you could have a cool logo of a fox with lots of tails; perhaps, as you learn together, more tails get added.

With the above in mind, I would like the assistant to:

  • Build the executable specification as one tests, as shown above, and access the current specification as it stands to suggest coverage improvements or warn about duplication.
  • Suggest where a lower-level test (unit, component, integration) covers what you are testing, or how one might expand that coverage.
  • Expose the configuration/environment variables of a system and be able to change on the fly, maybe best when testing against a locally running application.
  • Consume logs as part of the model and display snippets of them if you describe something as a bug in your notes or indicate a problem/question.
  • Suggest areas of the code where the fix for a problem might be so you can better point developers in the right location (or for you to try fix it yourself).

I think the main thing for such a tool is to look to how we can make testing better and more teachable, rather than trying to hide or automate the skilful part.

I think I would be one of those frustrating product owners: a cool idea but a very loose grasp on how to make it a reality!


I love what you’ve listed here, I think I’d like something quite similar!


+100! I love your perspective - having something everyone uses that encourages collaboration.
