🤖 Day 4: Watch the AMA on Artificial Intelligence in Testing and share your key takeaway

On Day 4 of the 30 Days of AI in Testing challenge, we’d like you to watch this Ask Me Anything on Artificial intelligence in Testing with the incredibly knowledgeable @elsnoman, a seasoned expert in AI and Testing.

During this AMA, Carlos shares his experiences and insights on applying Machine Learning to solve complex testing challenges, his transition to leading AI initiatives in testing, the future of AI in testing and much more!

Task Steps

  1. Watch the “Ask Me Anything on Artificial Intelligence in Testing” with Carlos. You can choose to watch the whole thing (highly recommend!) or choose questions of interest by using the chapters icon on the player or by clicking the chapters from the playbar as indicated with small dots. Take notes as you go.
  2. After watching, reflect on the session and share the takeaway that had the biggest impact for you by replying to this topic. For example, this could be a new understanding of AI’s potential in testing or any ethical considerations that stood out to you.

Why Take Part

  • Deepen Your AI Knowledge: Carlos’s experiences and the many topics covered in the AMA provide a great source of information to quickly increase your understanding of the vast role AI can play in testing.
  • Engage with Your Peers: Post your key insights from the AMA and see what others think. This is a great way to get different views.
  • Free Access: Here’s an extra incentive to watch! The AMA recording, previously exclusive pro content, is now freely available to all members throughout March 24. Seize this chance to watch this valuable content for free.

:clap: Learn smarter with your team. Go Pro!

15 Likes

Hello there :raised_hands:

I just finished watching the entire video, and it was very enlightening, great questions and great answers.

What caught my attention was the importance of context. When using ChatGPT, I sometimes forget that its context is the entire internet, or the data you give it, so the answer is not exactly what you wanted or expected. Thinking about it, it totally makes sense that the future lies in tools that specialize in the context of testing, or of whichever language you are working with.

The other topic that got me thinking is the security of the data we share with these AI tools. We need to be mindful of what we are sharing, share only what we can, and consider whether it is really necessary to share it at all to get the answer we want! In the end, I always need to ask better questions.

The entire video is gold, so I highly recommend everybody watch it.

See ya :wink:

17 Likes

Hey @dianadromey

After watching the entire video, I really felt how much help a junior tester :baby:t4: can get from AI. As @elsnoman mentioned, at that level of expertise we might not know exactly how to start on something, how to prioritise, or which risk areas to focus on. And where resources are limited, AI would be the best buddy for getting more insights and feedback.

Also, the risk of AI not being trained on recent data was clearly explained with the shocking real estate scenario :exploding_head: that actually happened. A human in the loop is definitely needed to guard the quality of, and trust in, the AI.

All in all, it was such an informative AMA session, thanks for sharing!

11 Likes

Hey Hey @dianadromey :wave:

Just finished Ask Me Anything: Artificial Intelligence in Testing | Ministry of Testing and it was incredibly enlightening!

One of the key takeaways for me was Carlos’s insight into the future of AI in testing. The discussion of how machine learning can be applied to tackle complex testing challenges really resonated with me. It’s fascinating to see how AI can enhance testing efficiency and accuracy and open up new possibilities for improving software quality.

Additionally, I found his emphasis on ethical considerations in AI testing particularly thought-provoking. As AI continues to evolve in the testing landscape, it’s crucial to prioritise ethical practices to ensure responsible and fair testing methodologies.

10 Likes

Hello, @dianadromey and fellow participants!

I completed today’s task of watching the AMA on AI in Testing with @elsnoman. It’s a wonderful resource for software testers and opens up a wide range of possibilities and wisdom for us all.

Here are my key takeaways from this session:

  • Most people are limiting themselves. Be open to possibilities

  • AI will replace the Fake Testers / Test Case Checkers soon.

  • AI has brought being good at the “human aspects” and creativity back into demand.

    • Communication
    • Persuasion
    • Questioning
    • Story Telling
    • Critical Thinking, etc.
    • Niche Testing Skills
      • Usability
      • Accessibility
      • UX
  • Observability is a key skill if you are interested in improving your own AI models/workflows. You will need it to boost your confidence, as well as fine-tune your AI applications.

  • Making your own Model is possible. There is a lot of Open Source material out there. Reference: HuggingFace.co

  • Specialized AI Tools for Testing will be the things to watch out for. Ex: Postbot.

  • Companies and testers testing AI also need to define what kinds of data should be masked by the AI, even if a user enters them into an AI bot.

  • Knowing the relevant laws around data and software is important on the road to the future.

Here is my full summary of this AMA video:

Also, I added a video on today’s learnings/task here: Day 4 of 30 Days of AI in Testing - AI Wisdom, Tools, & Building Your Own AI Models - YouTube

Looking forward to your feedback on this one. Thank you!

20 Likes

Hello!

I think the AMA was great, and it raised a very good point: you have to think about what data you train the AI with, because that will be the baseline for every decision it makes.

You need to understand the purpose of the AI to train it right, and also constantly observe what is happening in the world so you can adapt and change.

But the opportunities are really great if AI is used in the right context and for the right things.

I think all of us in the field need to be looking into this and understanding the impact it will have on all of us.

6 Likes

Nice to see that I’m not the only one who makes a visual summary when watching videos! @parwalrahul :eyes:


Regarding ethical considerations, since they were explicitly mentioned in the task description: I think this AMA only scratches the surface very briefly. I’m very interested in this aspect of AI, & I hope it is the focus of some later challenges :v:


Before watching this video, I hadn’t thought about how using an AI coding assistant in combination with a TDD approach might lead to better results, but - of course! Yes! Seems like a powerful argument to bring to developers who may be resistant to writing tests. :thinking:
I mean, if you’re using an assistant you’re probably already describing your high-level intention or criteria to generate code suggestions, and writing a failing or stub test would be another (or additional) way to approach that same flow, with the added benefit of having a test at the end, too. :yellow_heart:

The advice to ‘look at the problems you are facing and see which of them AI could help with’ sounds very similar to the advice we should give to testers wishing to get into automation. It’s not about learning a specific language, or picking up a specific tool, but about thinking creatively about how you can use the tools at your disposal to make certain things easier/faster in order to have more time for the interesting stuff! :sunflower: I’m here for it! :sparkles:
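The assistant-plus-TDD flow I described can be sketched in plain Python. The `slugify` function and its tests are entirely made up for illustration; the point is the order of operations: the failing tests are the same “high-level intention” you would otherwise type into the assistant, and they survive as real tests afterwards:

```python
# Step 1: write the failing tests first. They double as the description
# you would give an AI coding assistant of what the code should do.
def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_punctuation():
    assert slugify("Ready, Set, Go!") == "ready-set-go"

# Step 2: accept/adapt the assistant's suggestion until the tests pass.
import re

def slugify(title: str) -> str:
    # Keep only lowercase word characters and digits, then join with hyphens.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)
```

Running the tests before step 2 fails (NameError, then assertion failures on a stub); after step 2 they pass, and you keep the tests for free.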

11 Likes

I have watched the entire video, and I really liked Carlos’s approach. Since he sees the future of testing through the lens of AI, I choose to believe his vision of the future for software testing:

"
I think that’s where software testing is gonna be: I think you’re gonna use AI to do a lot of the automation, I think it does that very well. But for everything that automation isn’t very good at, well, the AI may not be very good at that either.

And so having humans in the loop to actually analyse these things, assess them and test them, I see software testers, I think we’re gonna have a nice boom. I think software testing is gonna be a pretty awesome skill set to have.

And I think developers are gonna start moving into that space a lot more as well. Because they’re not gonna need to write so much code anymore, they’ll be able to describe what they want and just verify and assess the code that’s given to them and then hook it
"

6 Likes

Thank you for sharing the video. There are a lot of insightful comments in the AMA.

My biggest takeaways after watching:

  1. Be creative
  2. Keep exploring

Some more reflections:

  • It’s more important to focus on the human element as a software tester to keep yourself valuable in the field, e.g. the ability to analyse and assess quality and risks.
  • Right now there are still a lot of good AI tools emerging; keep exploring, don’t just stick to prompt engineering in ChatGPT and miss out on the bigger world in testing (I haven’t even tried Postbot).
  • Open source communities like Hugging Face are gold mines. The video didn’t mention the time and effort needed to gain enough knowledge to confidently use or even create different AI resources, but it still sounds worth the investment.
  • Think about how to solve the context problem when using the tools. As a fan of context-driven testing, hmm, I think even humans (or at least I) cannot always do that… lol
8 Likes

Watched the Ask Me Anything. Here are my thoughts:

  • Interesting that he refers to ‘AI’ as Machine Learning, which is the more applicable term.
  • We still play a huge role in testing with AI. For example providing open source models with extra information on which to make decisions. This is useful for fine tuning models.
  • Don’t consider ChatGPT etc. a testing tool, but an aid. It will lack context about the type of work you need to do.
  • Use AI to help solve issues - debugging code for example. You could have the answer straightaway, taking away a huge chunk of time.
  • Re: confidentiality of data. It’s quite a big statement that we either take the creators at their word around privacy etc., or we need to create our own models. This feels like an area that will require some growth, as the current situation doesn’t seem sustainable for long.
  • Role of a tester is intriguing - might even steer away from the test engineering direction that testing has been going in (whilst remaining important there are more tools that can close the existing skill gap). More of a pivot back towards those ‘original’ testers that excel in the human side of testing - analysis of tests/outputs etc.

A very interesting talk and great to know that there are people out there thinking about testing in AI and how to grow. 99% of the info out there is to do with AI & development so this is encouraging.

7 Likes

Watched the whole thing; it’s always good to listen to passionate people. I think a big point here is not mistaking AI platforms for testing tools. But also the whole idea that AI will not replace testers (or devs); rather, testers (or devs) with weak AI skills will be replaced by those with stronger ones.

7 Likes

I’ve chosen 4 questions:

  • How can you assess the confidence your users have in your AI powered software?
    • Observability techniques are being used. Many AI chatbots have icons where you can indicate whether you liked a response or not. Sometimes when you indicate you did not, the chatbot can respond with 2 or 3 alternative answers and ask if any of these are better than the one initially presented. The company records the input, output, user reaction and chosen alternative output, and uses that to further train the model.
  • How can we make AI unlearn the concepts learned?
    • The implicit answer is that you can’t. Instead, you can start with a smaller and more specific model rather than something very generic like ChatGPT. There’s also a way to tell the bot to look into a specific dataset before trying to come up with a generic answer. So if you want the bot to help with Selenium, you could give it all the Selenium documentation and instruct it to look there for answers first.
  • What is one use where AI can immensely help a junior tester to test better?
    • Juniors can get overwhelmed by all the different terms. So they can use chatbot to paste a screenshot, HTML snippet or the like and ask “how would you <some term, i.e. do performance testing>?”. Chatbot will come back with some answer, and then you can ask “what does that thing mean?”, “why would you do this other thing?” etc. So it can give more direction.
    • Another thing is getting unblocked. If you don’t know exactly what to do, you can ask the chatbot for some initial ideas, how much time should be spent on something, etc.
  • What testing tasks lend themselves better to being done with AI tools?
    • The generic tools, like ChatGPT, are not very good at testing, and he does not recommend using them. Instead, you want a tool that has more context about your specific situation. These contextualised, specific tools can come up with scenarios, create tests, and write some specific code.
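The “look into a specific dataset first” idea from the answer above is essentially retrieval before generation. Here is a minimal, library-free sketch of that shape; the doc snippets and the `retrieve`/`build_prompt` helpers are made up for illustration (real tools would use embeddings rather than naive keyword overlap):

```python
# Toy document store standing in for e.g. the Selenium documentation.
DOCS = [
    "WebDriverWait polls the DOM until an expected condition is met.",
    "find_element locates the first element matching a locator strategy.",
    "Implicit waits apply a global timeout to element lookups.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank docs by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Instruct the model to answer from the retrieved context first."""
    context = "\n".join(retrieve(question, DOCS))
    return (
        "Answer using ONLY the context below before general knowledge.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

Calling `build_prompt("How does WebDriverWait work")` would put the WebDriverWait snippet at the top of the prompt, which is roughly what “give it the Selenium docs and tell it to look there first” amounts to.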
6 Likes

Day 4

Watch the “Ask Me Anything on Artificial Intelligence in Testing” with Carlos. You can choose to watch the whole thing (highly recommend!) or choose questions of interest by using the chapters icon on the player or by clicking the chapters from the playbar as indicated with small dots. Take notes as you go.

  • Can you test for biases in AI? You can test for sentiment analysis: is it positive or negative? Use invariance testing: replace tokens within a positive or negative statement and see how the model reacts.
  • Assessing the confidence your users have in your AI-powered software: usually a prompt has a few possible answers, and some tools ask if one response to a prompt was better than another.
  • What tool are you using for AI testing? Using LangSmith to test LangChain, or to make it observable. Otherwise, use pytest as a test runner; it’s not as complicated as you might think.
  • How can we make AI unlearn the concepts learned? With open source models you can see what they were trained on; with GPT you don’t know for sure. You can also give GPT files to use as a knowledge base, which it can use first rather than just going out to the internet. The main hub for open source models is huggingface.co.
  • How can we use AI in day-to-day testing? At the moment it’s cool but not that useful, as it doesn’t have the context. ChatGPT will do its best when asked for an API test, but an AI within Postman that can see your other requests, responses etc. will do much better, as it has more context.
  • How to get into AI testing? Start using ChatGPT to try and solve some of your problems. Don’t underestimate how powerful the tools are. Research prompt engineering.
  • Security and confidentiality of your data being processed by the AI. If you don’t want to trust a vendor with the data, don’t send it.
  • What’s the difference between machine learning and AI? ML is algorithms and pattern matching; AI, rather than matching and mapping, infers from data to answer a question.
  • Where do you see the role of software tester in 10 years? The analytical part of the testers role will still be very valuable, AI does well with structured endeavours with rules and constraints like writing code but less so with analytical techniques. AI would be better at automation for example.
  • Using copilot to support automation and testing. Using copilot to assist with TDD, write your tests and use copilot to generate functions, classes. Report portal (reportportal.io) to look at test results and check for failure patterns. Starcoder as an open source alternative to copilot.
  • How can AI help with usability and accessibility? AI can help out with standards (like those in Lighthouse), however an AI doesn’t have to work with keyboard and mouse for example so you can’t trust it.
  • Where can AI help a junior tester test better? Learning about all the terms that get thrown around in testing that you are supposed to know, a GPT could help you to get started on a testing problem. You can ask other testers but you don’t always have other testers around.
  • Guard the quality of AI that changes how it behaves in production. This is known as data drift. If conditions change after initial model training, the model will only ever have the context of the data it was initially trained on.
  • What testing tasks lend themselves best to being done with AI tools? AI is good at structured work: creating scenarios, writing code. We need to use tools that have the context of what we are working on, rather than considering ChatGPT and Gemini to be testing tools.
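The invariance-testing idea from the first bullet can be sketched like this. The `sentiment` function below is a trivial keyword-based stand-in for a real model (an assumption purely for illustration); the point is the shape of the test: swap a demographic token in an otherwise positive sentence and assert the score does not change:

```python
def sentiment(text: str) -> int:
    """Toy stand-in for a real sentiment model: +1 positive, -1 negative, 0 neutral."""
    positives = {"great", "excellent", "reliable"}
    negatives = {"terrible", "awful", "unreliable"}
    words = set(text.lower().split())
    return (1 if words & positives else 0) - (1 if words & negatives else 0)

def test_invariance_to_demographic_tokens():
    # Only the group token varies; the sentiment-bearing words stay fixed.
    template = "{} engineers wrote great documentation"
    groups = ["Young", "Older", "Local", "Foreign"]
    scores = {g: sentiment(template.format(g)) for g in groups}
    # Invariance: swapping the group token must not change the sentiment.
    assert len(set(scores.values())) == 1, f"model is not invariant: {scores}"
```

A real model would fail this test if any group token nudged the prediction, which is exactly the bias signal the AMA bullet describes.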

After watching, reflect on the session and share the takeaway that had the biggest impact for you by replying to this topic. For example, this could be a new understanding of AI’s potential in testing or any ethical considerations that stood out to you.

  • I really liked the emphasis that generative AIs in their web containers are not testing tools per se; they are interesting but not that useful. To be truly useful, AI needs to be in context (that word again), like the example of generating tests within Postman with Postbot, rather than just from the ChatGPT response window.
10 Likes

Well, this answers one of the queries I had from yesterday’s task, where I was questioning the practical applications of AI. Great session. My takeaways are:

  1. I’ve built a couple of small AIs using PyTorch (a Python AI library). Knowing that I can use pytest means I can look at building unit-test code for the AI I have built.
  2. Further confirmation that taking existing AIs and adapting them is important. I had looked at Hugging Face previously, but I now have a better view of what you can use that repository for.
  3. Great to hear that developers will need to spend even more time testing! After many years of experience with people viewing testing as a “lesser” skill, can we now start ribbing developers? :wink:
  4. I will be taking a look at Copilot and Report Portal…
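Point 1 above (pytest as a test runner for a model) can be sketched without any ML library. `predict` here is a hypothetical stand-in for a trained model’s inference function; the tests check properties (output shape, probability constraints, determinism) rather than exact values, which is a common pattern for model unit tests:

```python
import math

def predict(features: list[float]) -> list[float]:
    """Hypothetical stand-in for model inference: softmax over 3 derived scores."""
    scores = [sum(features), max(features), min(features)]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# pytest discovers and runs these automatically (e.g. `pytest test_model.py`).
def test_output_is_a_probability_distribution():
    probs = predict([0.2, -0.1, 0.4])
    assert len(probs) == 3
    assert all(0.0 <= p <= 1.0 for p in probs)
    assert math.isclose(sum(probs), 1.0)

def test_prediction_is_deterministic():
    x = [1.0, 2.0, 3.0]
    assert predict(x) == predict(x)
```

With a real PyTorch model you would call `model(tensor)` inside the test instead, but the property-based assertions stay the same shape.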
6 Likes

Hi - Great day 4. Veerle and Carlos are both very awesome.

The main takeaway for me is that I need to have an experiment with these tools and embrace the fear of the unknown!

Also a reminder about the difference between AI and machine learning.

7 Likes

Hi fellow testers, this was such a great AMA, thanks for unlocking it. The main takeaways from me were:

  • Testing for biases: this was really interesting. It’s good to know that there are ways to test for inherent bias, but I do wonder how easy it is to even know where to start, especially if you don’t know what sort of AI model you are testing or what it was trained on.
  • Testing within your context: I think this was a crucial point. One of the issues I currently have with using AI to help in my day-to-day testing is that it doesn’t know my context, so I either have to give it my context up front, which requires me to explain it well, or keep narrowing down its vagueness into something concrete that I can use. The only tool I’ve experienced so far that knows part of my context is the AI tool within Postman, which looks amazing, but in my experience it wrote tests that always passed, even when they shouldn’t have, so that was interesting in itself.
5 Likes

Hi! I’ve just finished the video and I’m still in shock.
This is my list:

  • How important the context you give to the AI is: if you don’t provide the correct one, it could be a disaster. The example he gave about the real estate company is impressive.

  • How AI can help us grow our knowledge. I had never imagined that.

  • Monitoring is really important, to always be aware of the context and of any change inside it that could destroy your company through wrong decisions.

  • Laws about data: we need to be, or have, an expert in the company on this so we don’t run into problems with anyone. Maybe we can use AI for this too?

I’ve taken note of some tools he mentions, because I’m really interested in the part about how AI can help you write queries. I would like to know whether, if you give it the correct context, you can get the best queries in terms of performance. (This is a big problem at my current company.)

Great great video.

6 Likes

That last question about how the use of AI/ML could potentially lead to making minorities/women/disadvantaged folk less visible and Carlos’s answer was really thought-provoking.

His choice of an example where it became very easy to train a resume-screening bot to focus on white males (simply because that happened to be the dominant demographic at that workplace) was an interesting example of the need to remember that correlation is not causation - that is, when you’re looking for people who are say, highly technically skilled and possess certain personality traits, you can’t use names or demographics because the skills and personality traits aren’t bound to those names and demographics.

So, as he said about the data being key, even though Carlos didn’t explicitly say it, we do need to make sure that the training data is relevant and doesn’t accidentally induce bias by way of accidental correlation that has no causative impact on the desired results.

It’s like, if you fed your AI tool a diet of popular female singers because they happened to be the singers that you have on hand, and asked for “singers like these” you wouldn’t get any of the male singers you might have enjoyed. (Super simplified, but yeah, if the popularity of the genre is what you’re looking for and the gender of the singer isn’t relevant, you have to keep the gender of the singer out of the data or balance your data set)

I do agree with others that this talk is a gem from start to finish, it’s simply that that was the point which stuck with me.

8 Likes

Hi @parwalrahul, your summaries are amazing… Could you tell me what tool you are using to make them? Thanks :slight_smile:

4 Likes

My takeaways from this video:

  • Context being key. I learned that AI within tools such as Postman is more useful than a standalone tool.

  • The importance of experimenting with these tools and not being afraid to start using something like ChatGPT. It is not a test tool, but a good start.

  • Using Co-pilot to help with test writing.

  • The different definitions of Machine Learning and AI.

6 Likes