🤖 Day 15: Gauge your short-term AI in testing plans

Well done! You've made it halfway through our 30 Days of AI in Testing challenge! :tada:

After covering so much content, the midpoint is a great time to take a breather and reflect on our individual readiness to adopt AI in our testing practices. As we've discovered in recent tasks, there is no one-size-fits-all path to integrating AI into our testing workflows. Each tester may have unique circumstances, priorities, and constraints that shape their adoption readiness.

Today's task aims to provide a snapshot of our community's AI adoption readiness through a straightforward yet insightful poll.

Task Steps

  1. Answer this question:

How likely are you to use AI in Testing within the next 6 months?

  • Very unlikely
  • Unlikely
  • Likely
  • Very likely
  • I already use AI in my testing activities
  2. Bonus Step: If you're open to it, share your answer to the poll by replying to this post. Explain the reasons behind your choice, such as organisational priorities or resource availability. What specific areas or use cases are you considering, if any?

Why Take Part

  • Share Your Perspective: By sharing your stance and rationale, you contribute to the collective understanding of the community's inclination towards AI adoption, which can inspire, motivate, and perhaps even shift perspectives on readiness and the pace of change.
  • Learn from Others: Engage in the discussion to gain insights from others' plans, experiences, and strategies, which can inform and refine your own approach to adopting AI in testing.


2 Likes

Hi there :raising_hand_woman:

Before entering the challenge, I made some attempts at using AI in my daily tasks at work, but I always felt lost, as if there should be a better way of doing what I was trying to accomplish.

So with our daily tasks I found the path I was searching for. Now I use ChatGPT to create tickets for me, and I want to set up a way for ChatGPT or another tool to help me document all my automated tests (a rough sketch of one possible approach is below).
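A minimal sketch of what that documentation step could look like, assuming the OpenAI Python client; the file path, model name, and prompt wording are only placeholders, not a recommendation:

```python
# Sketch: send an automated test's source to an LLM and ask for a plain-English
# description. The test file path and model name are illustrative assumptions.
from pathlib import Path
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

test_source = Path("tests/test_checkout.py").read_text()  # hypothetical test file

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a test documentation assistant."},
        {"role": "user", "content": "Summarise what this automated test covers, "
                                    "its preconditions and expected results:\n\n" + test_source},
    ],
)

print(response.choices[0].message.content)
```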

This challenge is helping me to improve my professional skills, thank you guys :wink:

7 Likes

Hello guys,

After participating in the challenge, I've realized the potential of integrating AI into my testing processes. Tasks such as test automation and test scenario creation were highlighted as ideal places to use AI. The outputs generated during the challenge demonstrated the tangible benefits AI can bring to these aspects of testing.

However, while embracing AI in testing, I am mindful of privacy concerns associated with the data supplied to AI systems. It's essential to adhere to organizational regulations & ensure compliance with data protection standards.

In summary, the recent challenge :30_days_of_testing: has significantly influenced my decision to integrate AI into testing within the next six months.

6 Likes

Hello my fellow testers,

The main stumbling blocks for me are the data privacy issue and the fact that none of the AI testing tools currently on the market really know my context. Until both of these issues are resolved, AI as a whole will continue to be of limited use to me beyond using it like a search engine or asking it to explain a block of code I don't understand.

However, if both of those issues are resolved in the near future, then I can see trying to use AI in all areas of the testing process, from analysing the requirements to helping with the maintenance of automated tests.

I am choosing to be optimistic in my answer to the poll and hope that these issues get answered within the next six months or so.

9 Likes

It obviously reduces the effectiveness of using AI in many contexts significantly, but sometimes I find I can come up with a simple "toy problem" analogous to the actual issue I have (where I care about data privacy) and still get some insights from e.g. LLMs without sharing any data that's significant.

Of course, if your issue is that no AI tool knows your context, and that is significant enough to stop it being useful, this workaround just makes that worse!

3 Likes

I voted likely. My reasons are - as usual - weird.

I expect to be searching for work after this month, and intend to use some of that time to explore the possibilities of using AI in testing plans. If nothing else, it would give me a bit more familiarity with the tools while I look for work.

I don't have any specific use cases I'm looking at - I'm looking to explore rather than to fill a specific need.

1 Like

It's smart to simplify problems for AI as a workaround for data privacy concerns, but yeah, you're right: if the AI doesn't get our context right, it might not help much. Like @adrianjr said, let's hope these issues get addressed within the next six months or so.

1 Like

Day 15

Currently, I use AI in my testing in 3 ways:

  • Generating utility scripts for bespoke test data, usually throwaway.
  • Putting together tool recommendations, comparing options, surfacing ways of doing things I don't know.
  • Coming up with further test design ideas, based on a list I have generated.

However, in all of these things, I have to sanitise prompts somewhat so as not to risk exposing sensitive information (a rough redaction sketch is below).
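For illustration, a minimal sketch of that kind of sanitisation in Python; the patterns and placeholder tokens are invented and would need rules specific to your own data:

```python
# Rough sketch of redacting sensitive values from a prompt before it leaves the machine.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),   # email addresses
    (re.compile(r"\b\d{16}\b"), "<CARD_NUMBER>"),          # 16-digit numbers
    (re.compile(r"\bACME-\d+\b"), "<TICKET_ID>"),          # hypothetical internal ticket IDs
]

def sanitise(prompt: str) -> str:
    """Replace obviously sensitive values with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Generate test data for the user jane.doe@example.com reported in ACME-4521"
print(sanitise(raw))
# -> "Generate test data for the user <EMAIL> reported in <TICKET_ID>"
```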

I have been looking at GPTs by ChatGPT: Introducing GPTs

You can opt out (apparently) of the data from your chats being used for overall ChatGPT training.

Alternatively, there is deploying a model locally and training it using select internal data. There seem to be a few ways to do this, so I will start to experiment with them; one possibility is sketched below.
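As one example of what a local option could look like (an assumption, not a recommendation), here is a minimal sketch that calls a locally hosted model through Ollama's REST API; it assumes Ollama is running and a model has already been pulled:

```python
# Sketch: prompts stay on your own machine by calling a local model via Ollama.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # assumed model name; use whatever you have pulled locally
        "prompt": "Suggest boundary-value test cases for a date-of-birth input field.",
        "stream": False,    # return one complete response instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```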

I personally need more depth in this area, beyond naively using tools and maybe giving them the right information, maybe not. However, the prompt engineering part of 30 Days of AI in Testing has been really, really useful. I can already see the benefits in the output from the models I do use.

5 Likes

I voted likely.

As I mentioned in my introduction post two weeks ago, my company is currently going head-on into AI, and there's an expectation that we will explore incorporating AI tools into our daily work. I can't tell if that will happen in the next 6 months, but by the end of the year, or maybe within the next 12 months, I expect to have a company-wide resource of tried, tested and recommended AI tools; perhaps there might even be a requirement to introduce some of them into our work.

This challenge forced me to try AI tools for testing work, and I've been trying to pick tasks where I have other work that I can benchmark against. So far my experience with the tools I tried is that they mostly do OK work; occasionally they cover something I did not think about, but almost always there's at least one thing they left out. So I don't really see myself outsourcing a lot of work to these tools. When I do the work myself I might be a little slower, but I get better results and benefit from gaining experience and being able to pattern-match similar cases in the future.

This is not really testing related, but there are two areas where I will experiment with AI a bit:

  • Instead of writing blog posts, dictating them and then using AI to turn them into text. I used that on one of my presentations, and while the process was very slow (more than 1 minute of processing for 1 minute of audio), the results were pretty good. I'm still not very used to editing transcribed speech, but that might get easier and faster as I gain more experience.
  • Translating things offline. I have a bunch of notes in my native language that I would like to translate to English and publish online. I could do it myself, but it's thankless and not very engaging work. So I will try some of the translation models and see how far they can get me. So far I have only explored which tools could be used (one candidate is sketched below).
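As an example, a minimal sketch of offline translation with an open-source model via the Hugging Face transformers pipeline; the opus-mt language pair below is only an assumption about the source language:

```python
# Sketch: offline translation with a small open-source model.
# Pick the Helsinki-NLP/opus-mt-<src>-<tgt> pair matching your own language.
from transformers import pipeline  # pip install transformers sentencepiece

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-pl-en")  # assumed Polish-to-English

notes = [
    "Notatki z testów eksploracyjnych modułu płatności.",
    "Błąd pojawia się tylko przy pierwszym logowaniu.",
]

for sentence in notes:
    result = translator(sentence, max_length=256)
    print(result[0]["translation_text"])
```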
4 Likes

The main path towards solving both the data privacy challenge (from the perspective of keeping your data private) and the context understanding challenge is likely to be building and hosting your models internally. For LLMs, this will probably mean using open-source pre-trained models (so nothing is shared with third parties) and then some fine-tuning and domain adaptation approaches to improve contextual understanding (a rough sketch of that setup is below). It's doable, but it really needs people who can build and train models. I can point you in the direction of some resources if that's of interest.
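To make that direction a little more concrete, here is a very rough sketch of attaching LoRA adapters to an open-source base model with the Hugging Face peft library; the model name and hyperparameters are illustrative, and real domain adaptation would still need a curated internal dataset and a training loop (e.g. the transformers Trainer):

```python
# Sketch: prepare an open-source base model for parameter-efficient fine-tuning (LoRA).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # assumed base model; any permissively licensed one would do
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,  # plain text generation
    r=8,                           # adapter rank: small and cheap to train
    lora_alpha=16,
    lora_dropout=0.05,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```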

We may see some general-purpose "testing" LLMs, and I think that is what the team at https://test.ai/ are doing, which might solve the contextual understanding but not the privacy concerns... unless they adopt differential privacy techniques.

3 Likes

Well... if you want to dive into local LLMs, let me know - I can share some resources... and we might be doing something later in the challenge along these lines.

3 Likes

Hello @dianadromey and colleagues,

Thanks for today's task. It prompted me to look over my AI in Testing Plans for the coming year.

I also added a new prompt for drafting bug reports. I will be adding it to the AI Prompt Repository for Testers - Rahul's Testing Titbits

Also, here is the mindmap with my AI in Testing Plans:

I did a video blog on today's task where I explained how one can use AI in Testing and make plans to use AI in day-to-day work.

Check it out here: My AI in Testing Plans | Drafting Bug Reports via AI - Day 15 of 30 Days of AI in Testing Challenge - YouTube

Do share your feedback.

Thanks!
Rahul

2 Likes

I will be using AI in two key areas:

  1. Tools such as Grammarly (I already use this :smile:) and ChatGPT to give me more information, plus embedded AI in tools such as Jira.
  2. Code: testing how reliable the code generated by LLMs is, to work out how much I can use it.
1 Like

I have answered 'I already use AI in my testing activities' in view of my use of Copilot within Visual Studio.

As this is only a small part of the overall technology, I was wondering if 'Very likely' was more accurate.

But I am an optimist :smiley:

Definitely, going forward, I see a great deal of scope here and have already had discussions with some colleagues.

I think my first step will be drafting a company-wide guideline on the use and control of ML in general, and in testing in particular.

This is a tool and it is up to us how we use it.

1 Like

Hey Bill, I am keen to snag some resources on this :slightly_smiling_face:

1 Like

Hey folks,

As we hit the midpoint of our 30 Days of AI in Testing challenge, I'm excited to share how AI has become a real game-changer in my day-to-day testing routine for iOS development using Swift and XCUITest.

Test Case Generation:
Generating test cases used to be a time-consuming task, but with AI, it's become a breeze. Using machine learning algorithms, I can analyze past test cases and code changes to predict potential areas of risk and automatically generate comprehensive test cases. This saves me loads of time while ensuring thorough test coverage.

UI Test Automation with XCUITest:
UI testing in iOS apps can be tricky, especially with dynamic elements. Thankfully, pairing a test automation framework like XCUITest with AI assistance makes it much easier. With AI helping to suggest element queries and handle variations, my XCUITest suites cope better with different screen sizes and localization nuances. This reduces manual effort and increases the reliability of UI tests.

With AI by my side, testing iOS apps has never been smoother. Looking forward to exploring more AI-powered solutions in the days ahead! :rocket:

3 Likes

We have a task later that touches on this, so I'll create a resource list for that.

3 Likes

I'm part of an AI engineering team and passionate about using AI tools for testing and automation. I've already experimented with AI for model creation, training, and testing using Vertex AI; designing and generating end-to-end testing datasets using ChatGPT; and testing code and automation using Gemini. I'm eager to delve deeper and explore how AI can be applied to various aspects of testing, including different test design approaches, code testing, automation script creation, project quality tracking, reporting and metrics generation, and even monitoring (a small sketch of the dataset-generation idea is below).
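For the dataset-generation part, a small sketch of how that could look through the OpenAI API; the schema, record count, and model name are illustrative assumptions:

```python
# Sketch: generate fictional test data with an LLM and parse it as JSON.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

prompt = (
    "Return a JSON object with a 'customers' key containing 5 realistic but "
    "fictional customer records. Fields: name, email, country, signup_date (ISO 8601)."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # ask for machine-readable output
)

# Inspect the generated records before feeding them into any tests.
records = json.loads(response.choices[0].message.content)
print(json.dumps(records, indent=2))
```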

3 Likes

Test Strategies and Test Plans are already designed (based on the roadmap) for projects in the next 6 months; I cannot introduce new ways of testing. It is unlikely that I will use AI for testing in the short term. But I will start using AI tools (especially ChatGPT) to get ideas if I am stuck with automation.

4 Likes

Hi,

Before this challenge I used ChatGPT and the Postman AI assistant a little, but mostly for information searching, training, and generating realistic data. Now, thanks to this challenge, over these two weeks I have learned what the advantages of AI tools are, how they can be used in testing, the ways in which they are applied, and the possible risks.

In the near future I would first like to dig deeper into the use of Postbot and apply it practically in my workspace for creating API documentation, designing test cases and test suites, and debugging, and also start to use other AI tools in my daily work tasks.

2 Likes