I’ve heard Jennifer Bonine speak at a variety of conferences over the past several years, and this time she wants us to consider the “post-modern era of software testing.” Testing is changing, and many of the tools we think of as mainstream, everyday staples are actually getting to be older, mature tools. JMeter was introduced in 1998. That’s as old as my daughter, who is about to turn 21. Wow! Do we still look at testing through the same paradigm? How can we get into some of the more modern aspects of testing, specifically AI and ML?
Jennifer showed us a couple of real-time polls to help us see how the audience interacts and reacts to new information. This example references a lot of what Tariq just talked about (and if you followed along, you know this means agents getting scores and, based on those scores, work being allocated to the agents that make the most sense).
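To make the score-based allocation idea concrete, here is a minimal sketch of what it might look like in code. This is my own illustration, not anything from the talk: the agent names, the starting score, and the update rule are all hypothetical assumptions.

```python
# Hypothetical sketch of score-based agent allocation: each agent carries a
# score from past outcomes, and the next task is routed to the agent whose
# score suggests the best fit. All names and numbers here are illustrative.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    score: float = 0.5  # start neutral; updated after each task outcome

def pick_agent(agents):
    """Route the next task to the highest-scoring agent."""
    return max(agents, key=lambda a: a.score)

def record_outcome(agent, success, rate=0.2):
    """Nudge the agent's score toward 1.0 on success, toward 0.0 on failure."""
    target = 1.0 if success else 0.0
    agent.score += rate * (target - agent.score)

agents = [Agent("ui-checker"), Agent("api-prober")]
record_outcome(agents[0], success=True)   # ui-checker did well last time
record_outcome(agents[1], success=False)  # api-prober failed last time
best = pick_agent(agents)
print(best.name)  # ui-checker, since its score was nudged upward
```

The point of the sketch is the feedback loop: scores accumulate from previous interactions, and allocation is just a sort over those scores, which matches the “agents get scores, then work goes where it makes the most sense” description.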
AI and ML can seem really difficult, but on the surface this is an elegant way to look at how systems interact. My guess is that as the number of individual interactions grows, this process will start to slow down. If it’s a relatively small but monotonous set of tasks, AI will make short work of it and master it quickly.
One of the most interesting paradoxes of AI is that the simple things are the hardest for it to accomplish. It also helps to think about how AI looks at dependencies and relationships. It doesn’t really have an overall view or a qualitative mental map; it has a lot of scores, and it can sort them based on previous interactions. The mountain of things we humans keep spinning at the same time and consider automatic, or just “meh, whatever,” are actually incredibly complex sequences of events that remain genuinely challenging for a computer. Complicated but linear activities are much easier for AI to “get its head around.” AI struggles with the everyday and mundane because that level of autonomy is really complex. For reals!
Even if we get all of the details down with regard to software learning and replicating what humans do, there is still a lot of AI/ML that is just super expensive to implement. We can calculate numbers, rank options, and budget movements, but a computer at this stage cannot appreciate art or think critically about nuance and perception. For now, that’s still the domain of human brains. That’s not to say it will always be that way, but it is still very expensive to get computers to go that deep.