This topic came up on This Week in Quality today with @scottkenyon , some good discussions came up, I’ll link to the recorded podcast once it is live.
How do you feel about curiosity and creativity with respect to AI?
Some of the comments in the side chat on TWiQ. (Sorry, struggling to tag everyone)
AI cannot replace curiosity, or critical thought, empathy, innovation, emotional intelligence, ethical judgement, human intuition or calculated risk taking — @AdyStokes
It’s a personal thing. Like the discussion on using AI in education to either do your homework or use as a tool to provoke knowledge and investigation. It depends on the ethics and drivers of the individual. — Stephen Newton
I’m quite comfortable at the end of the day that AI won’t be responsible for your work; you will be. So when you work back from that, you’ll be using AI as the assistant. — @ghawkes
AI can be a great tool, and help speed up what we do, but it can’t give you that feeling of how something feels to use — @AdyStokes
If you think AI is responsible for your work…you’re in trouble. — @ghawkes
Remember that AI is like any tool: if you don’t understand what a reasonable answer looks like, then you can’t use it to get the answer, as it might be wrong and you would never know. — Michael Close
I think there is a lack of critical thinking skills in the world. Laziness, or a lack of critical thinking lessons in education. — Stephen Newton
It’s so important we don’t forget about the enjoyment of learning new skills. Perhaps AI can support us though. — Neil Taylor
Honestly, it might boil down to preferences - some people would probably give up curiosity and creativity just to get faster results by overutilizing AI in their work (and that, of course, can lead to quite the mess afterwards).
On the contrary, it has definitely increased my creativity and curiosity. Massively. Two years ago I did not automate anything; now I automate loads of stuff. And one request to ChatGPT et al most often leads to another.
At its best: no.
At our worst: yes.
AI at its best (mostly looking at LLMs here) is set up to be in a coaching/guiding mode: asking questions, leading the user to reason and find the answer. Sadly it’s rarely configured to be at its best, but usually set to prioritise pronouncing confident answers whether right or wrong.
When we’re at our worst we take those easy (confident) answers no questions asked, no analysis of whether that’s the risk we want to take and how much the output matters. Conversely at our best it could make us even more curious: why does the AI “think” this - what patterns in the data it’s absorbed cause these associations?
Ultimately, working well with AI (or even just in a world where AI is as prominent as it is now) is a new skill-set we have to develop. And I tend to think those of us who manage to remain most curious will do best at developing it.
I suspect Selenium did that for about 80 percent of testers a few decades ago.
AI and test agents seem to have a strong bias towards testing as an almost purely verification model. That plays to machine strengths, which is what is expected, and with practice they will likely get fairly good at it despite the hallucinations and non-determinism.
It’s quick to produce a lot of stuff within that verification model, which risks the same potential harm to curiosity and creativity that automation brought, on the basis that we know it’s not as good as our critical-thinking testers, but it’s good enough and fast enough for the effort spent. The downside is that the quality bar could drop, alongside those innovative products that stand out from the crowd.
Now, for the testing-to-discover, learn and experiment model, this will likely remain in the realm of curiosity and creativity, with AI being another tool in our box. I like the idea of testing being the asking of awesome risk-related questions that can potentially accelerate the team and product forward; talking with LLMs fits that skill very closely, an almost ideal tool for testers in many ways, provided we use it well. We will likely find wonderful ways to use it, which is just as well, as end users are going to be doing the same, building agents to use the products our teams build.
It may even force change for those in a testing-to-verify model, particularly those dependent on a UI, due to that non-determinism element. And at some point end users using agents might not need that UI or browser at all, so rather than removing curiosity and creativity from testing, it could force an entire shift towards them.