🤖 Day 4: Watch the AMA on Artificial Intelligence in Testing and share your key takeaway

I thoroughly enjoyed this presentation! (I remember chatting with Carlos a couple of times on MOT chat, although at the time, I had no idea who he was!)

Many testers, including myself, often fret about the potential of AI replacing our roles. However, Carlos made a compelling point: AI excels at following predefined lists and checks, tasks that many testers already perform routinely. The real value lies in the testing that requires critical thinking and, most challenging of all, creativity in our approach. Carlos emphasized that for testers who excel in these areas, the future looks promising. It's the kind of tester I aspire to become!

Invariance Testing is a concept that's entirely new to me, but it sounds intriguing. I'm eager to delve into it further, especially since it can help identify any biases that may exist in AI algorithms.

6 Likes

Excellent video. Thanks for sharing the insights.
It is especially interesting to think about the future of SW testing and how we have to build the skills to analyze the output of AI and ask critical questions. Critical thinking skills will remain a true asset for anyone in testing :slight_smile:
Thanks for pointing to some tools that can be useful to start with.

6 Likes

A very nice lecture, which has only confirmed my previously formed opinion that AI is a good assistant but must ultimately be controlled by humans (from learning through to observing). We'll see what happens when AGI comes to life.

6 Likes

Thank you for sharing this great AMA video!
It is full of details around AI. My favourite question/answer was about the difference between ML and AI.
Extra thanks to Carlos Kidman for such an enlightening explanation!

5 Likes

Watched the whole thing and here are the takeaways I picked up:

  • Before using any AI tool, read through its terms, because confidentiality and security can be at risk.
  • AI can help you with accessibility testing but can't fully replace human interaction (this one is debatable; I need to think about it more).
  • Train your models using high-quality data (try to avoid bias).
  • The different definitions of Machine Learning and AI.
  • Using Copilot to help with test writing.
5 Likes

I found the point about "data drift" very interesting.
Even if an AI system is well trained and has context attached, circumstances or unexpected changes can mean the training data no longer reflects reality, which can lead to unexpected results. Analyzing and evaluating AI systems is a very important task for identifying vulnerabilities and assessing risks.
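A minimal sketch of what a data-drift check might look like in practice, assuming you have kept a sample of the training data and can collect recent production inputs; the feature (house prices), numbers, and threshold below are invented for illustration. It compares the two distributions with a two-sample Kolmogorov-Smirnov test from SciPy.

```python
# Hypothetical drift check: compare a feature's training distribution
# against what the model is seeing in production.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(train_values, live_values, alpha=0.05):
    """Return True if the two samples look like different distributions.

    A p-value below `alpha` suggests the live data has drifted away
    from the data the model was trained on.
    """
    result = ks_2samp(train_values, live_values)
    return result.pvalue < alpha

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    training_prices = rng.normal(loc=300_000, scale=40_000, size=5_000)    # prices the model was trained on
    production_prices = rng.normal(loc=350_000, scale=60_000, size=1_000)  # what the market looks like now
    print("Drift detected:", drift_detected(training_prices, production_prices))
```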

7 Likes

The best part of the video was hearing about tools and websites that can provide help and additional information, like LangSmith, Postbot, and MLCopilot or huggingface.co and reportportal.io.

4 Likes

Really enjoyed Day 4. A great session with @elsnoman. He is a very engaging speaker.

I came away with three takeaways that I would like to share:

  1. In answering the question "How can we use AI with day-to-day testing?", I was intrigued by the mention of using natural language with Playwright. So I Googled that and came up with a few articles that cover the topic, such as Lahiru Madhawa's Playwright Test Scripting with Natural Language - Part 01 and Luc Gagan's Introducing Auto Playwright: Transforming Playwright Tests with AI.
  2. In answering the question "What are your thoughts on using ML to support our own testing?", I wasn't quite sure I understood the workflow Carlos was describing when he talked about using Copilot to support TDD, so I decided to look for more articles on that and found Gio Lodi's Accelerate test-driven development with AI and Paul Sobocinski's TDD with GitHub Copilot on Martin Fowler's site.
  3. Finally, whilst watching the AMA video my eyes were drawn to the sidebar that provided other resources, and I bookmarked Paul Maxwell-Walters's article Testing The Quality Of ChatGPT Responses: A Report From The Field because it looked interesting.

So that is Day 4. I might try to go back and do Day 2 and Day 3 tasks. They were over a weekend which included my birthday and also my son's birthday so we were a bit busy. :birthday: :clinking_glasses: :smile:

4 Likes

It seems to believe 100% that your response is authentically human-written, which suggests there are probably some patterns or features it keys on to make those determinations.

In the end, though, the best way to approach crafting responses like these is probably to write a compelling cover letter that puts your skills on display, rather than worrying about whether the writing was done by you or an AI.

Basically, I use https://www.perplexity.ai/ to condense my thoughts and lengthy scripts into shorter, more meaningful content.

2 Likes

I am so thrilled and excited by this session.
I watched it at midnight thinking, oh, I'm going to watch it real quickly and that's it, but then the magic happened: Carlos happened.
It really was like the best concert I've ever been to, a mind-blowing session with two encores.
He is sooo good at explaining things in simple terms and with vivid examples.
Kudos to you Carlos, I would listen to you at any time (day or night :sunglasses:)

Every question was very interesting to me.
I got a ton of useful information, but I am going to try to highlight the one that had the biggest impact for me in terms of giving me the light-bulb or aha moment on AI as such.

How can we use AI with day-to-day testing?

One word: Context, with a capital C.
ChatGPT or Bard are not testing tools, but they can assist you with testing if they have the right context. The future of using AI tools in this way will be taking Large Language Models (LLMs) and then "specializing" them to your needs, i.e. giving them your context to work with.
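As a minimal sketch of what "giving the model your context" could look like for test-idea generation: the feature text, acceptance criteria, and helper name below are invented for illustration, and the actual call to whichever LLM you use is left out.

```python
# Hypothetical example: wrap your own context around a generic request so a
# tool like ChatGPT or Bard can give testing help that fits *your* application.

def build_test_idea_prompt(feature_description: str, acceptance_criteria: list[str]) -> str:
    """Assemble a context-rich prompt asking an LLM for test ideas."""
    criteria = "\n".join(f"- {c}" for c in acceptance_criteria)
    return (
        "You are helping a software tester.\n\n"
        f"Feature under test:\n{feature_description}\n\n"
        f"Acceptance criteria:\n{criteria}\n\n"
        "Suggest risks, edge cases, and exploratory testing charters for this feature."
    )

prompt = build_test_idea_prompt(
    feature_description="Users can reset their password via an emailed one-time link.",
    acceptance_criteria=[
        "The link expires after 30 minutes",
        "Previously used passwords cannot be reused",
    ],
)
print(prompt)  # Send this to the LLM of your choice; the response now has your context.
```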
New tools that I learned about, had not heard of before, and that will be useful in my day-to-day work:

  • PlaywrightGPT - use natural language to generate/execute Playwright code in real time.
  • Postbot - helps you speed up your most common API development workflows with natural-language input, conversational interactions, and contextual suggestions.
  • ReportPortal - https://reportportal.io/

And last but not least, a phrase that I really, really like: ChatGPT = a great Duck :smiling_face_with_three_hearts:
I love that analogy very much, and I love the Duck concept in the IT world.
I just got a cute little one for my birthday from people who are very special to me (they will know who they are :heart:)

Cheers!
MJ

8 Likes

Watching through the whole video, topics such as how to test for AI biases, how to ensure user confidence in AI-powered software, how to use AI to help with day-to-day testing, how to use machine learning for testing, how to ensure data security and confidentiality, the role of AI in usability and UX testing, and the role of the software tester in the next decade were discussed.

Carlos also shared his thoughts on AI's role in the future of software development and testing, suggesting that AI will play an important role in automated testing and that the role of the software tester will focus more on analyzing and evaluating AI-generated test results. He also touched on ethical and compliance issues when using AI and emphasized the importance of monitoring AI performance and data drift.

Finally, Carlos mentioned the potential of AI to help junior testers improve their testing capabilities. The entire interview touched on the use of AI and machine learning in software testing, the biases and limitations of testing AI, and how AI can help improve testing efficiency and quality.

The following topics were of most interest to me:

  • Can you test for biases in AI?

  • How can you assess the confidence your users have in your AI-powered software?

  • What tools are you using for AI testing?

  • How can we use AI in day-to-day testing?

  • How to get into AI testing?

  • How do you guard the quality of AI that changes how it behaves in production?

Regarding testing for AI biases, Carlos Kidman mentioned that it is possible to test AI bias using the invariance testing technique. This technique involves replacing words to see how the AI reacts. For example, he mentioned replacing "Chicago" with "Dallas" in a sentence and observing the change in the AI's sentiment analysis. In this way, biases in AI models can be identified and corrected.
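A minimal sketch of that invariance check written as an automated test, roughly in the pytest style mentioned elsewhere in the session; analyze_sentiment here is a made-up stand-in for whatever model or API you are actually testing.

```python
# Hypothetical invariance test: swapping one city name for another
# should not change the model's sentiment prediction.
import pytest

def analyze_sentiment(text: str) -> str:
    """Stand-in for the real model/API call; replace with your system under test."""
    return "positive" if "wonderful" in text else "negative"

@pytest.mark.parametrize("city_a, city_b", [("Chicago", "Dallas"), ("Paris", "Lagos")])
def test_sentiment_is_invariant_to_city_name(city_a, city_b):
    template = "I had a wonderful weekend visiting {city}."
    sentiment_a = analyze_sentiment(template.format(city=city_a))
    sentiment_b = analyze_sentiment(template.format(city=city_b))
    assert sentiment_a == sentiment_b  # the city name alone should not flip the sentiment
```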

Regarding assessing user confidence in AI software, Carlos mentioned the use of observability techniques. He gave an example of how data can be collected through user feedback (e.g., likes or taps) and analyzed to assess user confidence and satisfaction with AI output.
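A minimal sketch of the kind of feedback aggregation being described, using made-up event data; in a real system these records would come from your product analytics.

```python
# Hypothetical feedback log: one record per AI response a user reacted to.
feedback_events = [
    {"response_id": "r1", "reaction": "like"},
    {"response_id": "r2", "reaction": "dislike"},
    {"response_id": "r3", "reaction": "like"},
    {"response_id": "r4", "reaction": "like"},
]

likes = sum(1 for event in feedback_events if event["reaction"] == "like")
approval_rate = likes / len(feedback_events)

# Track this number over time; a falling approval rate is an early signal
# that users' confidence in the AI's output is dropping.
print(f"Approval rate: {approval_rate:.0%}")
```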

In terms of AI testing tools, Carlos mentioned that they use a tool called LangSmith, which is part of the LangChain ecosystem, to observe the performance of AI systems. He also mentioned using pytest to automate some test cases.

Regarding the use of AI in day-to-day testing, Carlos suggested trying to use tools like ChatGPT and Bard to inspire creativity and solve testing problems. He emphasized the need for tools to have enough context to be effectively applied to testing.

For how to get into AI testing, Carlos suggested that beginners use tools like ChatGPT and Bard to start exploring, which will help them discover the potential uses of AI in testing.

Finally, on how to safeguard the quality of AI performance in production environments as data changes, Carlos emphasized the importance of monitoring AI performance, referring to the concept of "data drift" and sharing a story about a real estate company that lost money by failing to monitor AI performance. He cautioned that as the environment changes, AI needs to be updated and adapted to maintain its performance and effectiveness.

The most impactful point for me is how to better utilize the capabilities of AI rather than simply using it.

Using AI in our testing work is as much about improving efficiency as it is about improving quality.

How to make greater use of AI's abilities, by providing good prompts and context, so that it helps us complete our work more efficiently and with higher quality may be the direction we need to think about in the future.

5 Likes

Understood a lot of things from the video. It gave me an insight into AI in software testing, and I also learnt that there are so many tools to try out in AI. Got to know about tools like LangSmith, Postbot, etc.

Also, the video made me think that AI mostly gives back the analysis it already has. So we need to form our queries in such a way that it gives the correct results without any issues.

5 Likes

I think the AMA format was great, considering Carlos' expertise as well as the questions that a lot of people who are starting to delve into AI have. I learned a few more concepts and discovered new tools that make use of AI in testing.
There's always a push for using AI in testing and automating everything, which has a lot of testers worried about losing their jobs. It was my first time hearing Carlos' perspective that more people might actually want to get into testing because of the nuances and analysis that a professional tester can provide and AI can't.

5 Likes

Just finished Ask Me Anything: Artificial Intelligence in Testing | Ministry of Testing and thought it was very useful.

It reassured me that what I am already doing is right: using tools like ChatGPT as a base for writing certain tests and for help with coding issues, etc.

I thought there was really good information on how it will affect testing in the future, and it aligns with my thinking on what a modern tester should be. I also thought the information about the open source tools was very useful and I hope to look into that in the future :slight_smile:

4 Likes

Very interesting talk. My big takeaway is that AI is "still" a tool: it requires human input to get the most out of it, and, to use a very old term, it can be GIGO, so we still need to quantify the results it is generating.
I have not used AI for any test/code generation, but I would be interested in how much of the output needs tweaking before it can be used.

4 Likes

Hello Fellow Participants,

It's indeed an interesting AMA session.

Here are my major takeaways:

  1. AI does not always give the answers we are expecting, hence context and prompts play an important role.
  2. AI will replace those who are not upskilling and leveraging AI to work efficiently.
  3. AI is not perfect or fully trained yet; human verification is still required.
  4. There are a lot of specialized AI tools coming up, and they will be a game changer in the software industry in the near future.
  5. The major focus should be on the human aspects of software testing, e.g. critical thinking, questioning skills, a break-the-code attitude, etc.

Thanks,
Akanksha

5 Likes

Thanks for unlocking such good content for this event. Carlos shows expertise in the market and in software engineering. As Carlos mentioned, AI is a learning model that depends on its algorithm and, for example, can identify natural disasters. I liked the part where Carlos mentions the pandemic: people working remotely or stopping work due to constraints on their jobs, and prices dropping because the AI couldn't identify the problem and just followed the patterns.
There were also questions about security and how to keep machine learning ethical: be cautious about the data used in ML/AI, and if the way the data is used exposes personal data, don't use it. For me, that's the biggest concern in using ML/AI models: security. Unfortunately, I see people posting personal and company data into ChatGPT. Don't post it; you are sharing information on the internet, so think about where this information goes. It's fun to show, but it's not fun when it is reused in the wrong way and used for cybercrime.

5 Likes

Is getting the answer the most important thing or is it the learning along the way?

Carlos' example of watching a chat between a group of developers trying to figure out a SQL query was really interesting to me. His point was that people underestimate the power of the tools they have available and the time they could have saved. I suppose in this example, the time saved might have been worth a whole day's work. But let's not lose the value that we get from talking to each other, learning from each other, and how good problem solving is for our brains. Yes, AI can give you the answer, but it can't improve how you solve problems, and perhaps worse still, you might not understand the answer and use it blindly. There is a balance to be struck: have a think first, and if you're really stuck you have options to unblock you.

A boom in software testing

Understandably as a quality-obsessed person, I loved Carlos' prediction that testing might actually see a boom. He explains that AI is good at things with rules, patterns and structures. So it's actually doing a pretty good job of writing code. But you can't just trust that. You have to watch it, you still have to question it and test it. We still need that analysis of where things could go wrong, that's the human element that more developers may turn to as the use of AI grows.

Testers love context!

It's been mentioned a lot already but the idea of AI within a context was new to me. It makes complete sense now that I've seen it. I previously considered AI to be catching all of the data ever in existence with a giant net and making it look pretty. But if it can do this within the context of your problem/situation/technology, I can see how powerful this would be. How good is AI? It depends on the context!

How could AI help a junior tester?

I absolutely loved the point that AI could help a junior tester with direction in the early stages of their career. You are so overwhelmed by all these different testing types, terminology, concepts: where do you even start? Asking ChatGPT a question like "how long should I test this for?" could give more targeted resources.

The quality of the model is only as good as the data you train it on

Observability will be key for AI. It will change its behaviour in production as the data changes. Make sure it's still doing what you expect.

ChatGPT is a good rubber duck

If there's no one else around, it can be a good tool to ask!

Ethical and law abiding AI

How can we ensure this? Get familiar with data, what is personal data, what are the rules around gathering data, storing data, deleting data, GDPR etc. Look at models with a good track record, see what biases have already been identified and assess whether you can live with this in your application.
Be aware of dangerous biases.

Awesome AMA, thanks for making it available, I learned so much!

5 Likes

This post alone made my whole day! Thank you so much :smile: :pray:

4 Likes
  1. Yes I did watch the whole thing, though I am not very comfortable with the podcast format of "cool, awesome, cool, alright, cool, edge-of-seat..." BUT ENOUGH, I AM NOT A PODCASTER. Thanks for all that you people do, anyway!

  2. My main takeaway was this issue of data drift, e.g.:
    (a) the distinction between what the model was trained on versus what users feed back into their usage of the model (rather like the old standing vs running data thing);
    (b) the better-known issue of cut-off dates. For example, I think (may be wrong) that ChatGPT 3 was trained on the "complete internet" (notwithstanding issues of copyright and other intellectual property matters?) up to a particular date, and would not look beyond that, whereas v4 (and maybe other vendors' newer models) have live internet access (again, the copyright issues are relevant).

  3. (I know there were only 2 task steps): my second takeaway (maybe this came from the discussion more than the original?) was the correlation-causation distinction. For anything which relies primarily on analysis of DATA rather than processes, this is always an issue? Obviously connected with the whole biases thing.

  4. Finally, I have a different view of the distinction between AI and ML: obviously everyone could read Wikipedia and loads of other sources. But I think I agree with @elsnoman about "Artificial General Intelligence" - though again there are loads of books on the subject. The most intriguing part for me is whether AIs could ever become conscious. I think not, but I can't prove it!

2 Likes