Tonight's TestChat about testing in AI finished up our month of AI.
Q1. What crossovers are there with current testing and testing AI? What existing testing methods can be used to test AI?
A brief summary of the answers:
@friendlytester said "exploratory testing will be key. As I believe it will be very rare there will be a single repeatable answer. So we'll have to continuously be asking questions."
Noemi rightly pointed out "I think really depends on the AI application itself. Analytics could give us a lot of information about the system being well implemented and achieving its purpose." and, interestingly, "Some existing methods would still be valid (for testing the output of the app) But what about using AI to test an AI application (output)?"
Q2. Part of testing AI involves validating the output; how would you prepare your test data for an AI application? For example, @billmatthews discussed using the Monte Carlo method to build test data.
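As a rough illustration of the Monte Carlo idea mentioned above (not @billmatthews' actual approach, just a minimal sketch with made-up fields and ranges): draw many random samples from the expected input distribution so the test data statistically covers the input space.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical feature ranges; in practice these would be derived
# from real production data or a domain model.
def sample_record():
    return {
        "age": random.randint(18, 90),
        "income": random.uniform(10_000, 150_000),
        "region": random.choice(["north", "south", "east", "west"]),
    }

# Monte Carlo: many independent random draws instead of hand-picked cases.
test_data = [sample_record() for _ in range(1000)]
```

Each record is one random draw; the volume of draws, rather than careful case selection, is what gives coverage.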
"if given the same data, would it learn exactly the same? How long does the 'learning' take?"
I wondered how much the order of the data matters, e.g. if I sorted it by first name as input, would I get different outputs to sorting by surname.
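A toy example of why input order can matter (an assumption for illustration, not a claim about any particular AI system): many incremental learners weight recent examples more heavily, so feeding the same records sorted by a different key changes the result.

```python
# Toy online learner: an exponentially weighted running estimate.
# Later values carry more weight, so this learner is order-sensitive.
def online_fit(values, lr=0.1):
    estimate = 0.0
    for v in values:
        estimate += lr * (v - estimate)  # incremental update step
    return estimate

# Hypothetical records keyed by name, with a numeric target value.
data = [("alice", 10.0), ("bob", 7.0), ("zoe", 2.0)]

vals_by_name = [v for _, v in sorted(data)]                # [10.0, 7.0, 2.0]
vals_reversed = [v for _, v in sorted(data, reverse=True)] # [2.0, 7.0, 10.0]

# Same data, different order, different learned estimate.
print(online_fit(vals_by_name), online_fit(vals_reversed))
```

A batch learner that averages over the whole dataset would be order-invariant; the question in the chat is essentially which kind of learner the AI under test is.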
As @punkmik points out, what biases are you adding into your data? Bias brings with it questions of cultural and social responsibility.
Q3. As AI is a continuously learning system, one concern is how to confirm that the test results are reached in the "right" way. For example, are multiple tests needed as proof? Also, how do we deal with negative cases? Will this impact the AI?
"Is there actually a version control that can reasonably be applied to a neural network?"
@ayaa.akl "maybe multiple of tests on same set of data…"
"One characteristic that AI possibly has is that the same thing can be 'asked' in different ways. So perhaps we could test that the same result (whatever that result is) is obtained for the different ways of expressing it. In this way we could provide some way of"
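The "same question, different phrasings" idea above can be sketched as a consistency check. The `answer` function here is a hypothetical stand-in for the AI under test (a real test would call the actual application):

```python
# Hypothetical stand-in for the AI system under test.
def answer(question: str) -> str:
    q = question.lower()
    return "paris" if "capital" in q and "france" in q else "unknown"

# Several phrasings of the same underlying question.
paraphrases = [
    "What is the capital of France?",
    "France's capital is which city?",
    "Name the capital city of France.",
]

# Consistency check: every phrasing should yield the same answer,
# whatever that answer happens to be.
answers = {answer(q) for q in paraphrases}
assert len(answers) == 1, f"Inconsistent answers: {answers}"
```

Note the test does not assert *which* answer is correct, only that the system is self-consistent, which matches the quote's "whatever that result is".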
"TBH teaching an AI to react to negative input without breaking sounds a lot like wading in the kind of cesspool that social media mods have to."
"the trickiest part is to ensure the app learns and the users don't 'break it' by teaching the wrong things (I think this was the problem for some AI trials in which inappropriate outputs were showing after a while of being 'live')"
"is testing even applicable to true AI? Imagine trying to test humans as we test code. Maybe testing won't be possible; instead it will be the role of guide and teacher in the early stages…before our intelligence is surpassed and we become the students."
Q4. What skills do you believe are necessary to get a job as an AI tester? Is it a case of building on existing knowledge or exploring new emerging technology? Or both?
"I guess for some areas the skills will be hard computer science - on the training side (after all, the training inputs need to be validated/organized as well). On the other hand I see a lot of room for crossover from psychology or humanities. Even if I don't really expect a strong AI, ever, if we want the little ones to emulate/enhance the human mind, the same ways of working could very well apply."
"I think it will be new area of knowledge which will get experience from some communication and cultural studies."
@alex "Some amount of Education as a discipline as well. How to formally assess knowledge, how knowledge is generated, etc, etc."