Well done! You've made it halfway through our 30 Days of AI in Testing challenge!
After covering so much content, the midpoint is a great time to take a breather and reflect on our individual readiness to adopt AI in our testing practices. As we've discovered in recent tasks, the path to integrating AI into our testing workflows is not a one-size-fits-all approach. Each individual tester may have unique circumstances, priorities, and constraints that shape their adoption readiness.
Today's task aims to provide a snapshot of our community's AI adoption readiness via a straightforward yet insightful poll.
Task Steps
Answer this question:
How likely are you to use AI in Testing within the next 6 months?
Very unlikely
Unlikely
Likely
Very likely
I already use AI in my testing activities
Bonus Step: If you're open to it, share your answer to the poll by replying to this post. Explain the reasons behind your choice, such as organisational priorities or resource availability. What specific areas or use cases are you considering, if any?
Why Take Part
Share Your Perspective: By sharing your stance and rationale, you contribute to the collective understanding of the community's inclination towards AI adoption, which can inspire, motivate, and perhaps even shift perspectives on readiness and the pace of change.
Learn from Others: Engage in the discussion to gain insights from others' plans, experiences, and strategies, which can inform and refine your own approach to AI in testing adoption.
Before entering the challenge, I made some attempts at using AI in my daily tasks at work, but I always felt lost, as if there should be a better way of doing what I was trying to accomplish.
Through our daily tasks I found the path I was searching for. Now I use ChatGPT to create tickets for me, and I want to find a way for ChatGPT or another tool to help me document all my automated tests.
This challenge is helping me improve my professional skills. Thank you, guys!
After participating in the challenge, I've realized the potential of integrating AI into my testing processes. Tasks such as test automation and test scenario creation were highlighted as perfect places to use AI. The outputs generated during the challenge demonstrated the tangible benefits AI can bring to these aspects of testing.
However, while embracing AI in testing, I am mindful of privacy concerns associated with the data supplied to AI systems. It's essential to adhere to organizational regulations and ensure compliance with data protection standards.
In summary, the recent challenge has significantly influenced my decision to integrate AI into testing within the next six months.
The main stumbling blocks for me are the data privacy issue and the fact that none of the AI testing tools currently on the market really know my context. Until both of these issues are resolved, AI as a whole will continue to be of limited use to me beyond using it like a search engine or asking it to explain a block of code I don't understand.
However, if both of those issues are resolved in the near future, then I can see trying AI in all areas of the testing process, from analysing the requirements to helping with the maintenance of automated tests.
I am choosing to be optimistic in my answer to the poll and hope that these issues get resolved within the next six months or so.
It obviously reduces the effectiveness of using AI in many contexts significantly, but sometimes I find I can come up with a simple "toy problem" analogous to my actual issue (where I care about data privacy) and still get some insights from, e.g., LLMs without sharing any data that's significant.
Of course, if the fact that no AI tools know your context is already significant enough to stop them being useful, this workaround just makes that worse!
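The toy-problem idea can be taken a step further by scrubbing identifying details before a prompt ever leaves your machine. A minimal Python sketch, assuming simple regex patterns; the `ACC-` ID format and the placeholder labels are made up for illustration, and real redaction would need rules matched to your own data:

```python
import re

# Illustrative patterns only -- real redaction needs rules for your own data.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "URL": re.compile(r"https?://\S+"),
    "ACCOUNT_ID": re.compile(r"\bACC-\d{6}\b"),  # hypothetical internal ID format
}

def scrub(text: str) -> str:
    """Replace sensitive values with placeholders before sending text to an LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

prompt = "Login fails for jane.doe@example.com on https://internal.example/app for ACC-123456"
print(scrub(prompt))
# -> Login fails for <EMAIL> on <URL> for <ACCOUNT_ID>
```

The model still sees the shape of the problem (a login failure for some user on some page) without seeing the specifics, which is often enough to get useful suggestions back.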
I voted likely. My reasons are - as usual - weird.
I expect to be searching for work after this month, and intend to use some of that time to explore the possibilities of using AI in test planning. If nothing else, it would give me a bit more familiarity with the tools while I look for work.
I don't have any specific use cases I'm looking at; I'm looking to explore rather than to fill a specific need.
It's smart to simplify problems for AI as a workaround for data privacy concerns, but you're right: if the AI doesn't get our context right, it might not help much. Like @adrianjr said, let's hope these issues get resolved within the next six months or so.
You can (apparently) opt out of the data from your chats being used for overall ChatGPT training.
Alternatively, you can deploy a model locally and train it using select internal data. There seem to be a few ways to do this, so I will start to experiment with them.
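One low-effort way to experiment with local deployment is to run an open model behind Ollama and talk to its local REST API, so nothing leaves your machine. A rough sketch, assuming an Ollama server is already running on its default port and a model such as `llama3` has been pulled (both are assumptions, not something from this thread):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for a single, non-streaming generation call."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally hosted model; no data leaves the machine."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
# print(ask_local_model("llama3", "Suggest edge cases for a login form."))
```

Smaller open models running locally will not match the big hosted ones, but for brainstorming test ideas over sensitive material the privacy trade-off may be worth it.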
I personally need more depth in this area, beyond naively using tools and maybe giving them the right information, maybe not. However, the prompt engineering part of 30 Days of AI in Testing has been really, really useful. I can already see the benefits in output from the models I do use.
As I mentioned in my introduction post 2 weeks ago, my company is currently going head-on into AI, and there's an expectation that we will explore incorporating AI tools into our daily work. I can't tell if that will happen in the next 6 months, but by the end of the year, or maybe within the next 12 months, I expect to have a company-wide resource of tried, tested, and recommended AI tools; perhaps there might even be a requirement to introduce some of them in our work.
This challenge forced me to try AI tools for testing work, and I've been trying to pick things where I have other work that I can benchmark against. So far my experience with the tools I tried is that they mostly do OK work; occasionally they cover something I did not think about, but almost always there's at least one thing they left out. So I don't really see myself outsourcing a lot of work to these tools. When I do the work myself I might be a little slower, but I obtain better results and benefit from gaining experience and being able to pattern-match a case in the future.
This is not really testing related, but there are two areas where I will experiment with AI a bit:
Dictating blog posts instead of writing them, then using AI to turn the recording into text. I used this for one of my presentations, and while the process was very slow (more than 1 minute of processing per 1 minute of audio), the results were pretty good. I'm still not very used to editing speech, but the process might become easier and faster as I gain more experience.
Translating things offline. I have a bunch of notes in my native language that I would like to translate into English and publish online. I could do it myself, but it's thankless and not very engaging work, so I will try some of the translation models and see how far they can get me. So far I have only explored which tools could be used.
The main path towards solving both the data privacy (from the perspective of keeping your data private) and context understanding challenges is likely to be building and hosting your models internally. For LLMs, this will probably be using open-source pre-trained models (so nothing is shared with third parties) and then some fine-tuning and domain adaptation approaches to improve contextual understanding. It's doable but really needs people who can build and train models. I can point you in the direction of some resources if that's of interest.
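Before any fine-tuning or domain adaptation can happen, the internal knowledge usually has to be reshaped into prompt/response records. A hedged sketch of that preparation step; the field names and the example pairs are illustrative, since different training frameworks expect different record layouts:

```python
import json

def to_training_record(question: str, answer: str) -> str:
    """Format one internal Q/A pair as a JSONL line in a common
    instruction-tuning layout (field names vary by training framework)."""
    return json.dumps({"instruction": question, "output": answer}, ensure_ascii=False)

# Hypothetical in-house examples; a real dataset would hold hundreds or more.
pairs = [
    ("Which test environments may hold customer-like data?",
     "Only the isolated staging environment."),
    ("What tag marks our regression suite?",
     "The 'regression' label in the test runner."),
]
jsonl = "\n".join(to_training_record(q, a) for q, a in pairs)
print(jsonl)
```

Because the records are built from internal documents and never sent to a third party, this step sidesteps the privacy concern while directly targeting the context problem.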
We may see some general-purpose "testing" LLMs, and I think that is what the team at https://test.ai/ are doing, which might solve the contextual understanding but not the privacy concerns, unless they adopt differential privacy techniques.
Well, if you want to dive into local LLMs, let me know; I can share some resources. And we might be doing something later in the challenge along these lines.
As we hit the midpoint of our 30 Days of AI in Testing challenge, I'm excited to share how AI has become a real game-changer in my day-to-day testing routine for iOS development using Swift and XCUITest.
Test Case Generation:
Generating test cases used to be a time-consuming task, but with AI it's become a breeze. Using machine learning algorithms, I can analyze past test cases and code changes to predict potential areas of risk and automatically generate comprehensive test cases. This saves me loads of time while ensuring thorough test coverage.
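The description above is high-level, so here is one concrete and purely illustrative way to wire it up: feed a code change and the names of existing tests into an LLM prompt so the model proposes only new cases. The function name and prompt wording are my own sketch, not any specific tool's API:

```python
def build_test_case_prompt(diff: str, existing_tests: list[str]) -> str:
    """Compose a prompt asking an LLM to propose test cases for a code change,
    listing existing test names so it avoids suggesting duplicates."""
    existing = "\n".join(f"- {name}" for name in existing_tests) or "- (none)"
    return (
        "You are helping test an iOS app.\n"
        f"Code change under test:\n{diff}\n\n"
        f"Existing test cases:\n{existing}\n\n"
        "Propose additional test cases covering risks this change introduces, "
        "as a numbered list of short titles."
    )

# Example usage with made-up inputs:
print(build_test_case_prompt("Add retry to login flow",
                             ["testLoginSuccess", "testLoginLockout"]))
```

The resulting prompt can then be sent to whichever model you use; the value is in grounding the request with the diff and the current coverage rather than asking for test cases in the abstract.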
UI Test Automation with XCUITest:
UI testing in iOS apps can be tricky, especially with dynamic elements. Thankfully, AI-assisted tooling layered on top of frameworks like XCUITest makes it much easier. By leveraging AI algorithms, such tooling can intelligently identify UI elements, adapt to different screen sizes, and handle localization nuances. This reduces manual effort and increases the reliability of UI tests.
With AI by my side, testing iOS apps has never been smoother. Looking forward to exploring more AI-powered solutions in the days ahead!
I'm part of an AI engineering team and passionate about using AI tools for testing and automation. I've already experimented with AI for model creation, training, and testing using Vertex AI, as well as designing and generating end-to-end testing datasets with ChatGPT, and testing code and automation with Gemini. I'm eager to delve deeper and explore how AI can be applied to other aspects of testing, including different test design approaches, code testing, automation script creation, project quality tracking, reporting and metrics generation, and even monitoring.
Test strategies and test plans are already designed (based on the roadmap) for projects in the next 6 months, so I cannot introduce new ways of testing. It is unlikely that I will use AI for testing short-term, but I will start using AI tools (especially ChatGPT) to get ideas if I get stuck with automation.
Before this challenge I used ChatGPT and the Postman AI assistant a little, but mostly for searching for information, training, and generating realistic data. Now, thanks to this challenge, over these two weeks I have learned the advantages of AI tools, how they can be used in testing, the ways in which they are applied, and the possible risks.
I would first like to dig deeper into the use of Postbot in the near future and apply it practically in my workspace for creating API documentation, designing test cases and test suites, and debugging, and also start to use other AI tools in my daily work tasks.