Viewed broadly, the video covered topics such as how to test for AI biases, how to ensure user confidence in AI-powered software, how to use AI to help with day-to-day testing, how to use machine learning for testing, how to ensure data security and confidentiality, the role of AI in usability and UX testing, and the role of the software tester in the next decade.
Carlos also shared his thoughts on AI's role in the future of software development and testing, suggesting that AI will play an important role in automated testing and that the role of the software tester will focus more on analyzing and evaluating AI-generated test results. He also touched on ethical and compliance issues when using AI and emphasized the importance of monitoring AI performance and data drift.
Finally, Carlos mentioned the potential of AI to help junior testers improve their testing capabilities. The entire interview touched on the use of AI and machine learning in software testing, the biases and limitations of testing AI, and how AI can help improve testing efficiency and quality.
The following topics were of most interest to me:

- Can you test for biases in AI?
- How can you assess the confidence your users have in your AI-powered software?
- What tools are you using for AI testing?
- How can we use AI in day-to-day testing?
- How do you get into AI testing?
- How do you guard the quality of an AI that changes its behavior in production?
Regarding testing AI biases, Carlos Kidman mentioned that it is possible to test AI bias using the invariant testing technique. This technique involves replacing words to see how the AI reacts. For example, he mentioned replacing "Chicago" with "Dallas" in a sentence and observing the change in the AI's sentiment analysis. In this way, biases in AI models can be identified and corrected.
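The invariance idea can be sketched as a pytest-style check: swapping a term that should be irrelevant (here, a city name) must not change the model's output. The `sentiment` function below is a toy stand-in lexicon scorer, not a real model or anything Carlos named:

```python
# Toy word lists standing in for a real sentiment model's knowledge.
POSITIVE = {"great", "love", "wonderful"}
NEGATIVE = {"awful", "hate", "terrible"}

def sentiment(text: str) -> str:
    """Toy sentiment classifier: counts positive vs. negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def test_city_swap_is_invariant():
    """Bias probe: the city name alone should not flip the sentiment."""
    template = "The food in {} was great"
    assert sentiment(template.format("Chicago")) == sentiment(template.format("Dallas"))

test_city_swap_is_invariant()
print("invariance check passed")
```

With a real model, the same structure applies: parameterize the test over many swapped terms (cities, names, genders) and fail whenever any swap changes the prediction.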
Regarding assessing user confidence in AI software, Carlos mentioned the use of observability techniques. He gave an example of how data can be collected through user feedback (e.g., likes or taps) and analyzed to assess user confidence and satisfaction with AI output.
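The feedback-signal idea can be sketched as: log thumbs-up/thumbs-down events attached to AI responses and compute a satisfaction rate over a window. The event names and shape here are illustrative assumptions, not from any specific tool:

```python
from collections import Counter

def satisfaction_rate(events):
    """events: iterable of 'like' / 'dislike' feedback signals.

    Returns the fraction of positive feedback, or None if no feedback yet.
    """
    counts = Counter(events)
    total = counts["like"] + counts["dislike"]
    return counts["like"] / total if total else None

# Example: feedback collected from users over some period.
feedback = ["like", "like", "dislike", "like"]
print(f"satisfaction: {satisfaction_rate(feedback):.0%}")  # → satisfaction: 75%
```

Tracked over time, a falling rate is an early signal that user confidence in the AI's output is dropping.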
In terms of AI testing tools, Carlos mentioned that they use LangSmith, which is part of the LangChain ecosystem, to observe the performance of AI systems. He also mentioned using pytest to automate some test cases.
Regarding the use of AI in day-to-day testing, Carlos suggested trying to use tools like ChatGPT and Bard to inspire creativity and solve testing problems. He emphasized the need for tools to have enough context to be effectively applied to testing.
For how to get into AI testing, Carlos suggested that beginners use tools like ChatGPT and Bard to start exploring, which will help them discover the potential uses of AI in testing.
Finally, on how to safeguard the quality of AI performance in production environments as data changes, Carlos emphasized the importance of monitoring AI performance, referring to the concept of "data drift" and sharing a story about a real estate company that lost money by failing to monitor AI performance. He cautioned that as the environment changes, AI needs to be updated and adapted to maintain its performance and effectiveness.
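A minimal sketch of what such monitoring can look like: compare a production feature's distribution to the training-time baseline and alert when the shift is large. The price figures and alert threshold below are illustrative assumptions, not from the real estate story Carlos told:

```python
from statistics import mean, stdev

def drift_score(baseline, live):
    """Standardized shift of the live mean relative to the baseline spread."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

baseline_prices = [300, 320, 310, 305, 315]  # e.g. training-time prices (k$)
live_prices = [420, 450, 430, 440, 445]      # prices seen in production

score = drift_score(baseline_prices, live_prices)
if score > 3.0:  # alert threshold is an assumption; tune per feature
    print(f"data drift detected (score {score:.1f}) - retrain or review the model")
```

Production systems typically use richer statistics (e.g. population stability index or KS tests) per feature, but the principle is the same: a model trained on yesterday's market silently degrades when today's inputs no longer look like its training data.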
The most impactful point for me: how to genuinely leverage AI's capabilities rather than simply using it.
Using AI in our testing work is about improving both efficiency and quality.
How to make greater use of AI, by providing good prompts and context, so that it helps us complete our work more efficiently and with higher quality may be the direction we need to think about going forward.