How many of you wish there were something like ‘Alexa! Get this software tested!’?
My unpopular answer is yes.
I imagine Alexa as a very fast tester. Things which take me hours to test will be tested within minutes.
Here are some reasons:
- It leaves me time for more in-depth or complex testing. In one company the developers had a lot of automated tests on several layers, yet it still took me a while to find some serious bugs with exploratory testing.
- A system consists of hardware, software, and manual procedures. Alexa would only focus on the software.
- I really have to focus on testability. If the system under test interacts with other systems, then I need to have those systems in place or mock them.
- For testing there is a need for focus. As a tester I can help with test strategy and planning. I can use information from users to determine the biggest product risks.
- I still need my testing skills. In security testing there are a lot of tools available. If I had enough money, I could make a standard assessment using a tool. But a real expert can find more bugs than a tool.
- I would not be surprised if a limited set of personas were available. But what about a disabled user, or a secondary school pupil struggling with privacy terms?
- The basic question is whether needs are met. This depends on the context of the stakeholders.
NB: Test software like this also has some drawbacks, such as bias and energy consumption.
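The point above about mocking dependent systems can be sketched in Python with the standard-library `unittest.mock`. The `checkout` function and the payment gateway are hypothetical stand-ins for a system under test and an external system it talks to:

```python
from unittest.mock import Mock

def checkout(order_total, gateway):
    """Hypothetical system under test: charges an order via an
    external payment gateway and reports whether it succeeded."""
    response = gateway.charge(amount=order_total)
    return response["status"] == "ok"

# The real gateway may be unavailable in a test environment, so we
# replace it with a mock that returns a canned response. No network,
# fully controllable behaviour.
mock_gateway = Mock()
mock_gateway.charge.return_value = {"status": "ok"}

assert checkout(19.99, mock_gateway)
mock_gateway.charge.assert_called_once_with(amount=19.99)
```

This is the usual trade-off the bullet describes: mocking keeps tests fast and independent, but the mocked system must be put in place for real at some point to verify the integration.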
A good, comprehensive thought process!
About 20 years ago a customer came up to me and asked a question that made me think for a long while.
Back then, I was coding up plugins: someone would come with a spec, I would check it against specs for the identical thing, and then implement the behaviours. This customer asked what would happen if a machine could do this. So I spent a few weekends trying to generalize the work and see whether it could: whether a machine could learn a protocol and implement it with some help. I got as far as the hard parts, but I learned one thing: machines cannot think. Twenty years on, I recognize that an AI/ML routine could handle the easy parts that I generalized too, but not the hard parts. So I don’t hold out any hopes for a “Testing Alexa” box, although that won’t stop me trying to build one.
Both Apple and Google already run such automated checks on the mobile applications pushed to their stores, which shows that these tools do have a place.
Yes. Machines cannot think, and that’s why there is no ‘learning’ based on context, which is a limitation that many researchers are trying to overcome.
Apps could try to automate the hard parts too, but the hard part itself is so vast and context-driven that it’s tough. For starters, though, an automated chess-playing machine is an inspiration.