I thought it was really interesting and resonated a lot.
In my (recently former) job we (mostly) only have the devs to do testing, and a lack of wider domain knowledge and user consideration from teams focused on their own features has led to challenges. Recently, though, we spent an entire sprint focused on user journeys. We used personas, and I'd be asking "why is the user using this feature?", "what are they trying to achieve and why?", "what is their typical workflow?" and "what are they doing with the output of your feature?".
We worked together to craft journeys with different personas, each with their own logins and permissions, then looked to test as a customer. We found a few fascinating bugs… and discovered how terrible some of the workflows we were contributing towards were.
Automated tests are fantastic for verifying that what you expect to be true is true, but to achieve quality we need a hands-on look at our solutions to see whether they actually solve the problem and are a good experience.
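To make that concrete, here's a minimal sketch (assuming a hypothetical calculate_total helper and a pytest-style test; none of this is from the original discussion): the assertion confirms the behaviour we expected, and nothing more.

```python
# Minimal sketch: an automated check verifies that what we expect
# to be true is true, but it cannot judge the user's experience.

def calculate_total(items: list[float], discount: float = 0.0) -> float:
    """Hypothetical checkout helper: sums item prices and applies a discount."""
    return round(sum(items) * (1 - discount), 2)

def test_total_with_discount():
    # Passes as long as the arithmetic matches our expectation...
    assert calculate_total([10.00, 5.50], discount=0.1) == 13.95
    # ...yet a green suite says nothing about whether users can find
    # the discount field or whether the checkout workflow makes sense.
```

A passing suite like this only answers the questions we thought to ask; the journey-based exploration described above is what surfaces the rest.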
Testing extends beyond executing scripts; it encompasses perception, intuition, and authentic user interactions. That makes testing with a human focus far more crucial than relying solely on automated systems.
I always think about the Jeff Goldblum quote from the original Jurassic Park: "Your scientists were so preoccupied with the fact that they could, they didn't stop to think if they should." AI doesn't have a moral compass… it only responds based on the data it has access to. AI doesn't know whether it should do something, or the context behind what you're prompting… only humans can do that.
There is a feeding frenzy right now in the industry, with software vendors wanting to get one step ahead and stamp AI all over their products, hoping customers will jump in with blind faith that they're increasing their capability while needing fewer specialised resources. But they're marketing their tools as replacing the very people responsible for building the software. They won't replace people, but that's how the software is being marketed, i.e. "tools that can read all your stories and generate all the test cases", or "no-code automation tools", or "agents that generate all the code devs need".
There is one overriding principle that will see us through: who is responsible for the software, AI or people? Vendors are behaving as if it's AI; the reality is that it's people, who will always remain responsible. To remain responsible, don't you have to understand what's been built? To iterate on the software, don't you have to understand what's there already and what it impacts? To sell more of your software, don't you have to understand not only what your user/market wants, but how they will use it and what business problems they'll solve using it?
Don't get me wrong, I see AI as a useful tool added to our arsenal, but one we choose to use for tasks we trust it will help with. Despite all the, dare I say, irresponsible marketing of some of these AI tools and agents, I have faith that the humans who remain responsible for the quality of their software will push back until vendors adapt their tools' AI capabilities so that people can benefit and remain responsible for the outcomes.
Thanks for sharing your experience. Exactly: we will always need humans. How else would we understand the flows and the pain users experience!
100%. Automation only lets us know what we ask it to show. There will be unknown unknowns until we find them out! Great insights!
This is some awesome analysis, Gary. These are exactly my thoughts too. I am confident that most quality specialists would agree 100% with what you have said. AI is, for sure, a nice tool in our bucket, but obviously it can't be used 100% instead of a human. I can think of one simple analogy: an advanced calculator. We don't use a calculator for simple arithmetic. For complex maths, though, we take the help of a scientific calculator, not because we can't do it manually, but just to save ourselves some time, and yet we still verify our solution, don't we? I guess it's the same with AI.