šŸ¤– Day 18: Share your greatest frustration with AI in Testing

It's Day 18! Throughout our 30 Days of AI in Testing journey, we've explored various applications of AI across different testing activities. While AI's potential is undoubtedly exciting, we cannot ignore the personal frustrations that may have arisen as you experimented with these new technologies.

Today's task provides an opportunity to share the personal frustrations or concerns you've encountered while working with AI during this challenge. By openly discussing these individual experiences, we can gain a deeper understanding of the potential pitfalls and identify areas for improvement in AI technologies.

Task Steps

  • Identify Your Frustration: Think back to your experiences throughout the challenge. What aspect of AI in testing caused you the most frustration or concern? Here are some prompts to get you started:

    • Limited Functionality: Did you find that the AI tools lacked the capabilities you were hoping for in specific testing areas (e.g., usability testing, security testing)?
    • The Black Box Conundrum: Were you frustrated by the lack of transparency in some AI tools? Did it make it difficult to trust their results or learn from them?
    • The Learning Curve Struggle: Did the complexity of some AI tools or the rapid pace of AI development leave you feeling overwhelmed?
    • Bias in the Machine: Did you have concerns about potential bias in AI algorithms impacting the testing process (e.g., missing bugs affecting certain user demographics)?
    • Data Privacy Worries: Are you uncomfortable with how AI tools might use or store your testing data? Do you have concerns about data security or anonymisation practices?
    • The Job Security Conundrum: Do you worry that AI might automate testing tasks and make your job redundant?

Feel free to add your own frustration if the above prompts don't resonate with you!

  • Explain Your Perspective: Once you've identified your frustration, elaborate on why it's a significant issue for you in a reply to this post. Does it relate to your experience working with AI in testing?
  • Bonus - Learn from Shared Experiences: Engaging with the personal experiences shared by others can provide valuable insights and potentially shed light on challenges or frustrations you may not have considered. Like or reply to those who have broadened your perspective.

Why Take Part

  • Identify Areas for Improvement: By openly discussing our frustrations with AI in testing, we can foster open communication and a more balanced approach to its implementation and development, as well as identify areas where AI tools, techniques, or practices need further refinement or improvement.


4 Likes

Hey there, Rosie,

My main frustrations from the points above are:

  1. Data Privacy Worries: I don't see myself anytime soon (or far) trusting AI with sensitive data.
  2. The Black Box Conundrum: They are just shady, so yes :grimacing:
  3. Limited Functionality: I hoped it would be better. No matter how I try to enhance my prompt for different tools, I still get underwhelming results. I keep gaslighting the chat (I ask stuff like, are those really end-to-end scenarios :joy: - ChatGPT is insecure and will adjust the answers, while Copilot just insists they are :face_with_hand_over_mouth:) until they give me something that remotely resembles what I am hoping for.

I'll probably add other frustration points when I remember them.

8 Likes

Hey there :raised_back_of_hand:

My main frustration with AI lies with Limited Functionality and The Learning Curve Struggle.
I have a lot of tasks where I think "AI could help me out with this", but getting the result I want with the tools I have access to would take more time than doing the task myself, or the solution is not as magic as I thought it would be. Some tools promise that magic, but all of them are paid, and I can't be sure they would solve my problems based only on their descriptions.

I think that's it.

12 Likes

Hello my fellow testers,

My two main frustrations with AI in testing are the same two I have previously talked about: data privacy and lack of context awareness. Both of these issues are major stumbling blocks that stop me from fully embracing AI in my job (or at least trying to). If the AI tool doesn't protect my data's confidentiality, then I cannot give it the detailed information that would allow it to help me most in designing test cases or assisting with test automation. And because it isn't aware of my context and isn't a domain expert, it has limited use and will keep getting things wrong.

8 Likes

Hi Everyone

Please find my analysis below of the shortcomings of AI throughout the process.

Limited Functionality: Throughout the 30 Days of AI in Testing challenge so far, I encountered frustrations with AI tools that lacked essential features, particularly in usability and security testing. Some tools struggled to grasp the nuances of user experience, while others fell short in identifying vulnerabilities, leaving potential gaps in our testing efforts. :hammer_and_wrench: #FunctionalityFrustration #UsabilityTesting #SecurityTesting

The Black Box Conundrum: Dealing with opaque AI tools made it challenging to trust their results and understand their decision-making process. This lack of transparency hindered my ability to learn from them effectively and raised doubts about the reliability of their outputs. :black_circle::question: #TransparencyTroubles #TrustIssues #AIConundrum

The Learning Curve Struggle: Coping with the complexity of AI tools and the rapid pace of development left me feeling overwhelmed. Adapting to evolving methodologies and intricate algorithms required constant learning and adjustment, leading to moments of frustration and information overload. :chart_with_upwards_trend::weary: #LearningCurve #AIComplexity #TechOverload

Bias in the Machine: I became increasingly concerned about biases in AI algorithms and their potential impact on testing outcomes, especially regarding user demographics. The risk of overlooking bugs or inaccuracies due to biases raised significant ethical and practical concerns in our testing processes. :robot::x: #AIbias #EthicalTesting #DiversityInTech

Data Privacy Worries: The handling and storage of testing data by AI tools raised privacy and security concerns. Uncertainty about data anonymization and the potential for unauthorized access left me uneasy about the privacy implications of AI-driven testing. :lock::bar_chart: #DataPrivacy #SecurityConcerns #AIandPrivacy

The Job Security Conundrum: The prospect of AI automating testing tasks and potentially rendering jobs redundant was a significant worry. As AI technology advances, there's a legitimate fear among testing professionals about the future of employment in the field, adding an extra layer of apprehension to our exploration of AI in testing. :robot::wrench: #JobSecurity #AutomationAnxiety #FutureOfWork

In conclusion, while the 30 Days of AI in Testing challenge has provided valuable insights, it has also highlighted several challenges and frustrations. Moving forward, open discussion and collaboration are crucial to addressing these issues and ensuring a balanced and effective implementation of AI in testing practices. Together, let's navigate these challenges and unlock the full potential of AI in enhancing software testing methodologies. :bulb::handshake: #AIinTesting #SoftwareTesting #ChallengesAndSolutions

Thank you

5 Likes

Hello people,

The most concerning aspect of AI in testing for me would be the Black Box Conundrum. Due to this lack of transparency, it is difficult to trust the results, which in turn makes it difficult to improve the testing process. The Learning Curve Struggle is also challenging, as day by day AI is evolving and becoming more complex. To be honest, all the frustrations seem equally concerning, but for now, in my case, these two areas feel the most pressing.

5 Likes

Wow, given how much I complained about various tools in previous tasks, today's is right up my alley! Where do I start?

Probably the biggest problem is that the hype around AI is red hot at the moment. Companies are incentivized to slap an "AI" label even on the most mundane projects and problems that could be solved faster and cheaper with other technologies; there's a lot of misinformation in the public space; people have expectations that the current breed of AI is unable to meet; and sooner or later this entire thing is going to be corrected (or collapse, depending on how you look at it). Some researchers worry that the fall will be so large, and will drain so many resources, that it will cause another so-called "AI winter" (a period without any new developments, when getting research funding is extremely hard).

Licensing problems are also very high on my frustrations list. It's probably a little less pronounced in testing (and it would be interesting to ponder why), but it's a huge topic in creative arts communities, like writing and painting. AI models are being trained on a vast number of works, they may output parts of these works, and this is all done without proper attribution and without the consent of the authors. This whole endeavor is highly unethical, and at this point it's too late for companies to change this on their own (because being ethical in this space puts you at a disadvantage compared to competitors who are not).

Finally, the environmental costs of training AI are enormous, and you need a lot of good faith to say they are worth it. At a time when climate change has a real effect on the majority of the population and, if unchecked, a real chance of changing civilization beyond our imagination, what do we do? We pump a lot of energy and water into a solution in search of a problem.

10 Likes

My Frustrations/Concerns:

  • Trustworthiness (and Manual Checking):
    • AI has limitations and is constrained by its training data. This makes it difficult to completely trust AI (what percentage can we trust?). Because of this, manual checking is necessary. But how much manual checking is enough?
  • Data Privacy Concerns:
    • I'm worried about data privacy because AI tools store all inputs and data. This is why we don't use real patient data in any AI tools. This limitation restricts how AI can assist us. Additionally, AI tools often lack sufficient data or the specific data we need.
3 Likes

  • Limited Functionality: Did you find that the AI tools lacked the capabilities you were hoping for in specific testing areas (e.g., usability testing, security testing)? - AI is like a helper: it can't do everything, and it's definitely not perfect. Without good, detailed context, the responses are too general to add real value.
  • The Black Box Conundrum: Were you frustrated by the lack of transparency in some AI tools? Did it make it difficult to trust their results or learn from them? - Yes, especially when working in a financial services environment.
  • The Learning Curve Struggle: Did the complexity of some AI tools or the rapid pace of AI development leave you feeling overwhelmed? - Some yes, whilst some are super easy to use; there's a mix, but luckily there are enough tools to help get you started. More than the learning curve, my struggle is the fact that most of the tools aren't free.
  • Bias in the Machine: Did you have concerns about potential bias in AI algorithms impacting the testing process (e.g., missing bugs affecting certain user demographics)? - For sure; coming back to my point on context, detailed and relevant data is crucial to making AI useful.
  • Data Privacy Worries: Are you uncomfortable with how AI tools might use or store your testing data? Do you have concerns about data security or anonymisation practices? - In banking and financial services, several risks stand out:
    1. Data Security and Privacy: AI systems in banking and financial services rely heavily on vast amounts of sensitive data. If this data is not properly secured, it could be vulnerable to breaches, leading to severe consequences such as identity theft, fraud, or financial loss.
    2. Algorithmic Bias: AI algorithms can inherit biases present in the data they are trained on. In financial services, biased algorithms could lead to unfair treatment of individuals based on factors like race, gender, or socioeconomic status. This could result in discriminatory practices and undermine trust in the financial system.
    3. Regulatory Compliance: The use of AI in banking and finance must adhere to strict regulatory standards, such as those outlined in the GDPR (General Data Protection Regulation) or the Dodd-Frank Act. Implementing AI systems that fail to comply with these regulations can lead to legal and financial penalties for the institutions involved.
    4. Systemic Risks: Overreliance on AI systems in financial decision-making could potentially amplify systemic risks within the industry. If multiple institutions use similar AI models and those models fail to anticipate or respond to certain market conditions, it could lead to widespread financial instability.
    5. Cybersecurity Threats: AI systems themselves could be vulnerable to attacks, including adversarial attacks where malicious actors manipulate input data to deceive the AI's decision-making process. Additionally, AI-powered cybersecurity systems must be robust enough to defend against evolving threats in the digital landscape.
    6. Lack of Human Oversight: While AI can automate many tasks and processes in banking and finance, it's crucial to maintain human oversight. Over-reliance on AI without human intervention can lead to errors or unforeseen consequences that may go unnoticed until significant damage has occurred.
  • The Job Security Conundrum: Do you worry that AI might automate testing tasks and make your job redundant? - No, I don't, but AI will definitely replace certain tasks and processes. My concern is more about not learning how to use AI to become more effective, faster, and more efficient.

6 Likes

I already had a number of concerns about AI in general coming into this, and they seem to be shared by my fellow contributors.
Privacy, security, safety, nefarious manipulation: these are all genuine concerns with AI.
Job security is another general concern when you see the advanced robotics employed at somewhere like Amazon.

But in trying to keep an open mind on this project, I feel the following:

We already blindly trust companies such as Microsoft, Postman, Google etc… not to do anything untoward with our personal, employer, and client data.
Why should an AI company building tools to help us be treated any differently?

We all use open source and third-party tools and packages.
We should be doing this with a caveat of mistrust: prove to us we can trust you.
Apply the same to AI tools.

I do see where AI can assist us, but as for replacing automation and manual testing, I doubt it.
I remember hearing automation will replace manual, but that was borne out of a misunderstanding of what automation should be.
I would see AI Tools helping build a basis to work off, and then I would architect/design the overall frameworks.

In terms of AI in testing, I don't have any frustration as such, because it is a tool I can choose to use, or not.
I have found Copilot to be very useful, but only if I design my classes etc… properly.

It could be a lot better. We have a quadruple rollover in the UK National Lottery on Wednesday.
ChatGPT, can you give me the winning lottery numbers for Wednesday night?
Ahhhh you can predict, but not see into the future :smiley:

4 Likes

Elaboration on the Black Box Conundrum in AI Testing

The lack of transparency in AI algorithms poses several challenges for testers:

  • Difficulty in evaluating reliability: Without understanding the inner workings of the algorithm, it is challenging to assess the accuracy and reliability of its results. Testers may be uncertain about whether the AI is making accurate predictions or simply providing biased or erroneous outputs.

  • Limited ability to debug and improve: If an AI tool produces unexpected or incorrect results, it can be difficult to identify the root cause and implement fixes. Testers may lack the necessary knowledge and tools to debug the AI algorithm and improve its performance.

  • Hindrance to learning and knowledge sharing: The black box nature of AI algorithms limits the ability of testers to learn from the AI's decision-making process. This hinders knowledge sharing and collaboration within the testing community, as it becomes difficult to communicate and replicate best practices.

  • Potential for bias and discrimination: AI algorithms can inherit biases from the data they are trained on. Without transparency, it is difficult for testers to identify and mitigate potential biases that could lead to unfair or discriminatory testing outcomes.

Addressing the Black Box Conundrum

To address the black box conundrum, AI tool developers need to prioritize transparency and provide users with greater insights into the inner workings of their algorithms. This can be achieved through:

  • Explainable AI: Developing AI algorithms that can provide explanations for their predictions and decisions. This allows testers to understand the rationale behind the AI's output and assess its reliability (a minimal sketch of one such technique appears at the end of this post).

  • Visualization and interactive tools: Providing testers with visualizations and interactive tools that allow them to explore the AI's decision-making process. This facilitates a deeper understanding of how the AI arrives at its conclusions.

  • Access to training data and model parameters: Giving testers access to the training data and model parameters used by the AI algorithm. This enables testers to evaluate the quality of the data and assess the potential for bias.

  • Documentation and support: Providing comprehensive documentation and support resources that explain the AI algorithm's functionality, limitations, and best practices for its use.

By addressing the black box conundrum, AI tool developers can empower testers to use AI effectively, make informed decisions, and mitigate potential risks.
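
As a concrete illustration of the Explainable AI point above, here is a minimal sketch of one basic explainability technique, permutation importance, assuming scikit-learn is available; the dataset and feature names are invented for the example.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a model under test; the feature names are hypothetical.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "region", "device", "session_length"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Even a crude signal like this gives testers something concrete to challenge, rather than a pure black box.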

5 Likes

+100 on all the frustrations expressed here already! For me personally, related to the 30 Days of AI in Testing challenges, it's lack of time, and trying to do this on my own. I'm not good at learning new things on my own. I don't even have a full-time job, so I'm impressed with all the people who've put so much time and energy into the challenges! It's a great bonus for me to get to read about what others have learned.

3 Likes

Hello @rosie and fellow colleagues.

Thanks for this wonderful task. There has been a lot going on in my mind when it comes to frustrations/concerns about AI in the testing world, and all the things I see going around on social media.

Here is a mindmap of my concerns around AI in Testing:

Also, I did a video blog where I explain my reasons behind these concerns.

Check it out here and do share your feedback:

Thanks!
Rahul

4 Likes

My main frustration is that I cannot find enough time to learn more about AI and try things out with it. :cry:

3 Likes

Totally get it, Claudia! AI's getting more advanced every day & keeping up with it can feel like trying to catch a speeding train. It's a challenge to find the time to learn & stay on top of all the new stuff.

Hi there,
Some remarks from my side.

Limited Functionality:
The capabilities of the AI-supporting tools are interesting, but mostly on paper or in demonstrations. AI tools like ChatGPT are not consistent in their output: if you put exactly the same prompt into a different chat conversation, you get different results (ChatGPT can even start hallucinating). A small mitigation sketch for API users follows below.
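
A minimal sketch, assuming the OpenAI Python SDK (v1+) and an API key in the environment; the model name is a placeholder, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, an assumption
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # always pick the most likely token
        seed=42,        # best-effort reproducibility across calls
    )
    return response.choices[0].message.content

# Two calls with the same prompt should now usually (not always) match.
print(ask("Generate three end-to-end test scenarios for a login form."))
```

Even with temperature 0 and a fixed seed, the vendor only promises best-effort determinism, so some inconsistency remains.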

The Black Box Conundrum:
AI is not transparent. We can't see how AI arrives at the response we get from it. How accurate is it? How reliable is it? We just assume that it is right, and that is exactly what we as testers don't like: assumptions.

AI helps us with the direction of investigation. We only need to challenge the response we get from AI, to see if it is what we are looking for.

The Learning Curve Struggle:
The learning curve is steep, but doable. Where to start is always a struggle, but I'm glad that MoT organized this event. Working on the daily tasks and reading the contributions of the other participants is gold.

Bias in the Machine:
You don't know what kind of data the AI was trained on. There will always be bias.

Data Privacy Worries:
Customer data privacy is a serious subject of concern, but AI needs this data to be trained on. Such data must be randomised, i.e. made untraceable to a person (a tiny sketch of the idea follows below).
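
A tiny sketch of that idea, using only Python's standard library; the field names and salt handling are simplified for illustration:

```python
import hashlib

SALT = "rotate-me-regularly"  # illustrative only; keep real salts secret

def pseudonymise(record: dict, pii_fields: set[str]) -> dict:
    """Replace direct identifiers with salted hashes, so records stay
    linkable for testing but are no longer traceable to a person."""
    cleaned = {}
    for key, value in record.items():
        if key in pii_fields:
            digest = hashlib.sha256((SALT + str(value)).encode()).hexdigest()
            cleaned[key] = digest[:12]  # short, stable pseudonym
        else:
            cleaned[key] = value
    return cleaned

record = {"name": "Jane Doe", "dob": "1980-02-29", "blood_type": "O+"}
print(pseudonymise(record, pii_fields={"name", "dob"}))
```

Proper anonymisation needs more than hashing (dates and rare values can still re-identify someone), but the principle stands: strip or transform identifiers before anything leaves your environment.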

The Job Security Conundrum:
I don't think AI will replace us as testers. As long as AI is producing information without context, testers are still needed to judge the produced information. AI is assisting us by taking away the cumbersome tasks we face in the automation of our tests. While AI is doing that, we as testers can focus on the modified parts of the application to do our exploratory testing.

3 Likes

Hi
My biggest frustration is having to double-check the information returned. Some of it is just way off.
I asked ChatGPT and Gemini the same question about calculations between 2 dates, and I got the same incorrect answer. Then, when I pointed out that the calculation was out, the second answer given was also incorrect.
So much for machines taking over, if they can't even do simple maths. Also, who tested this??? :rofl: (A one-line deterministic check is sketched below.)
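
The safest pattern for this class of question is to let the model suggest the approach but do the arithmetic deterministically yourself; a one-line check using Python's standard library (the dates here are made up):

```python
from datetime import date

# No model involved: the standard library gets date arithmetic right.
print((date(2024, 3, 20) - date(2024, 1, 15)).days)  # 65
```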

2 Likes

Day 18

Share your greatest frustration with AI in Testing

I'll use Rosie's prompts to guide me, like a human Generative AI:

Limited Functionality

I think the most advanced areas are test ideas and design, plus automation. Accessibility, usability, and many other ilities are still mostly scanners, with AI ingesting the data and interpreting it. Also, Generative AI presents ideas, solutions, and tool choices with total confidence, even though the depth of functionality in those areas is still pretty low.

The Black Box Conundrum

Totally. Using Bugasura yesterday, it generated an elaborate bug from just the word 'Website.' Claude generated test scenarios from minimal information, with a hierarchy of products, pages, and options for users. I still find myself thinking 'how did you get there?' I guess the worrying thing is that others might not think like that and will accept what's given.

The Learning Curve Struggle

It does a little, especially when the case for incorporating it into daily work is still fairly abstract. I think I want to be able to deploy and configure a model locally, and train it on some choice information that's relevant to my context. Perhaps once that happens, the possibilities will become more obvious to me and the overwhelm will lessen.

Bias in the Machine

Large corporations and for-profit organisations owning and training Generative AI pretty much bakes bias into the system. I'm not so much worried as intent on making sure bias is accounted for in any future work on AI that I might do.

Data Privacy Worries

Of course. As it is so opaque as to where the data comes from, there are plenty of occasions where data that shouldn't have been slurped has been slurped. Again, large corporations can use their size to ingest what they want, which then becomes hard to argue against. Although many people have already bundled up their private lives and handed them over to Meta/X/ByteDance, so is it really so different? Basically, we may worry about such things, but a lot of people don't.

The Job Security Conundrum

I don't worry about AI taking testing roles. I do worry about people in charge of budgets and hiring thinking that it can and does. However, we can be pioneers in its use: rather than having AI done to us, we can adapt it to our needs.

2 Likes

Yes! Here is a conversation I had with ChatGPT about what it feels like to have a stubbed toe. What it said isn't important. What it said about how it arrived at that conclusion, though, was very important:

What training data did you use to know about the feeling of a stubbed toe?

ChatGPT

I don't have access to my training data, but I was trained on a mixture of licensed data, data created by human trainers, and publicly available data. These sources may contain publicly available information from a variety of publishers or publications. However, the specific datasets used to train me have not been disclosed publicly by OpenAI, the organization behind Assistant. Therefore, I don't have detailed visibility into which publishers' information was included in my training data or whether stubbed toes were explicitly mentioned in the training data. My responses are generated based on patterns in the data I was trained on, but I don't have the ability to access or retrieve specific information about individual sources.

User

How do you know if your training data is accurate?

ChatGPT

I don't have the capability to independently verify the accuracy of my training data. My responses are generated based on patterns in the data I was trained on, but I don't have the ability to fact-check or validate information. It's important for users to critically evaluate the information I provide and consult reliable sources for important matters.

OpenAI, the organization behind Assistant, has implemented a rigorous training and validation process to ensure the quality of the models. This involves using a diverse range of data sources and carefully curating the training data. However, like any AI model, there may still be instances where the output is not accurate or up-to-date. Users should exercise their own judgment and verify information when making important decisions based on my responses.

User

I shouldn't use you to make important decisions then?

ChatGPT

While I can provide information and assistance, it's important to remember that I am not a replacement for professional advice or critical thinking. Here are some important considerations:

  1. Critical Thinking: Always engage your own critical thinking skills and verify information from reliable sources, especially for important decisions.
  2. Consult Professionals: For legal, medical, financial, or other critical matters, consult with qualified professionals who can provide specialized expertise and advice.
  3. Educational Purposes: I can be a helpful tool for general information, learning, brainstorming, and generating ideas. However, I'm not a substitute for specialized knowledge or professional judgment.
  4. Use Discretion: Be cautious with sensitive or personal information. Avoid sharing confidential or sensitive details with me.
  5. Fact-Checking: Always fact-check important information, especially if it has significant implications.

Remember, while I strive to provide accurate and reliable information, I'm not infallible. Always exercise your own judgment and seek advice from qualified professionals when needed.

2 Likes

Hi, everyone,

I don't have much experience using AI tools for testing, mostly just ChatGPT and Postbot, but it is interesting to explore their possibilities.

ChatGPT is a good tool for information processing and tasks like unit tests, especially if companies have their own internal AI tools; in that case, I can feel more assured about data security and privacy. But if you don't use your own AI tool, I think there could be issues with providing data and preserving its privacy.

Also, I can't trust it, due to concerns about how data is stored and used, and I am not :100: percent sure of the correctness of the results: sometimes they are incorrect or imprecise, requiring additional clarification.

I hope that AI doesn't completely replace human testers, but rather assists them. Human testers bring critical thinking, creativity, and domain expertise that AI tools currently cannot fully replicate.

1 Like