Tonight's TestChat about testing in AI finished up our month of AI.
Q1. What crossovers are there between current testing and testing AI? What existing testing methods can be used to test AI?
A brief summary of the answers:
@friendlytester said “exploratory testing will be key. As I believe it will be very rare there will be a single repeatable answer. So we’ll have to continuously be asking questions.”
Noemi rightly pointed out “I think really depends on the AI application itself. Analytics could give us a lot of information about the system being well implemented and achieving its purpose.” and, interestingly, “Some existing methods would still be valid (for testing the output of the app) But what about using AI to test an AI application (output)?”
Q2. Part of testing AI involves validating the output: how would you prepare your test data for an AI application? For example, @billmatthews discussed using the Monte Carlo method to build test data.
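Bill's Monte Carlo suggestion might look something like the minimal sketch below: draw random inputs across the ranges the application accepts, rather than hand-picking cases. The feature names and ranges here are invented for illustration, not taken from the chat.

```python
import random

# Hypothetical feature ranges for the AI application under test
# (assumed for this sketch, not from the chat).
FEATURE_RANGES = {
    "age": (18, 90),
    "income": (0.0, 250_000.0),
}

def monte_carlo_samples(n, seed=42):
    """Draw n random test inputs by sampling each feature uniformly."""
    rng = random.Random(seed)  # fixed seed so the generated data is reproducible
    samples = []
    for _ in range(n):
        samples.append({
            "age": rng.randint(*FEATURE_RANGES["age"]),
            "income": rng.uniform(*FEATURE_RANGES["income"]),
        })
    return samples

data = monte_carlo_samples(1000)
print(len(data))  # 1000 generated test records
```

Seeding the generator matters: the data is random in distribution, but any failing input can be regenerated exactly for debugging.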
“if given the same data, would it learn exactly the same? How long does the ‘learning’ take?”
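One way to probe the "would it learn exactly the same?" question is a repeatability check: run the same training twice and compare results. The sketch below uses an invented, seeded stand-in for a real training loop; real training often hides similar randomness (weight initialisation, shuffling).

```python
import random

def train_toy_model(data, seed):
    """Stand-in 'training': a seeded random projection of the data.
    Purely illustrative -- real training loops hide similar randomness."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in data]
    return sum(w * x for w, x in zip(weights, data))

data = [0.5, 1.5, 2.5]

# Repeating the run with the same seed reproduces the result exactly...
runs_same_seed = {train_toy_model(data, seed=7) for _ in range(5)}
print(len(runs_same_seed))  # 1 -- identical every time

# ...while different seeds generally will not.
runs_diff_seed = {train_toy_model(data, seed=s) for s in range(5)}
print(len(runs_diff_seed) > 1)
```

So "same data, same result" only holds if every source of randomness is pinned, which is exactly what a tester would need to establish before treating a rerun as proof.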
I wondered how much the order of the data matters, e.g. if I sorted it by first name as input, would I get different outputs to sorting by surname.
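The order question is easy to demonstrate with any model trained incrementally: the sketch below fits a one-weight linear model by stochastic gradient descent on the same toy records in two different orders and gets two different weights. The records are invented stand-ins for rows sorted by first name versus surname; only the order differs, not the content.

```python
def sgd_fit(data, lr=0.1, epochs=1):
    """Tiny one-weight linear model trained by SGD; order-sensitive
    because each update depends on the weight left by the previous one."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w += lr * (y - w * x) * x
    return w

# Toy (input, target) records -- hypothetical stand-ins for the same
# dataset sorted two different ways.
records = [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1), (0.5, 1.1)]

w_forward = sgd_fit(records)
w_reversed = sgd_fit(list(reversed(records)))
print(w_forward, w_reversed)  # different weights from the same data
```

With enough epochs the two runs would converge toward the same answer, but after limited training the ordering visibly leaks into the model, which is worth a test of its own.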
Q3. As AI is a continuously learning system, one concern is how to confirm that the test results are reached in the “right” way. For example, are multiple tests needed as proof? Also, how do we deal with negative cases? Will this impact the AI?
“Is there actually a version control that can reasonably be applied to a neural network?”
@ayaa.akl said “maybe multiple of tests on same set of data…”
“One characteristic that AI possibly has is that the same thing can be ‘asked’ in different ways. So perhaps we could test that the same result (Whatever that result is) is obtained for the different ways of expressing it. In this way we could provide some way of…”
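That suggestion is essentially a metamorphic-style check: no single "expected output" is known, but paraphrases of one question should all map to the same answer. A minimal sketch, with an invented keyword-matching classifier standing in for the AI under test so the example is self-contained:

```python
# Hypothetical intent classifier standing in for the AI under test --
# here just keyword matching, so the sketch runs on its own.
def classify_intent(utterance):
    text = utterance.lower()
    if "refund" in text or "money back" in text:
        return "refund_request"
    return "unknown"

# Metamorphic-style check: different phrasings of one request should
# all land on the same intent, whatever that intent turns out to be.
paraphrases = [
    "I want a refund",
    "Can I get my money back?",
    "Please issue a refund for my order",
]

results = {classify_intent(p) for p in paraphrases}
print(results)  # all phrasings agree on a single intent
```

The assertion is on the *consistency* of the outputs, not their value, which is what makes this style of test usable when the "right" answer is hard to pin down.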
“TBH teaching an AI to react to negative input without breaking sounds a lot like wading in the kind of cesspool that social media mods have to.”
“the trickiest part is to ensure the app learns and the users don’t ‘break it’ by teaching the wrong things (I think this was the problem for some AI trials in which inappropriate outputs were showing after a while of being ‘live’)”
“is testing even applicable to true AI? Imagine trying to test humans as we test code. Maybe testing won’t be possible; instead it will be the role of guide and teacher in the early stages…before our intelligence is surpassed and we become the students.”
Q4. What skills do you believe are necessary to get a job as an AI tester? Is it a case of building on existing knowledge or exploring new emerging technology? Or both?
“I guess for some areas the skills will be hard computer science - on the training side (after all, the training inputs need to be validated/organized as well). On the other hand I see a lot of room for crossover from psychology or humanities. Even if I don’t really expect a strong AI, ever, if we want the little ones to emulate/enhance the human mind, the same ways of working could very well apply.”
“I think it will be a new area of knowledge which will get experience from some communication and cultural studies.”
@alex “Some amount of Education as a discipline as well. How to formally assess knowledge, how knowledge is generated, etc, etc.”