I asked someone their opinion on what should be included in test coverage.
Their reply was “Ask Copilot”.
Will this become the norm, where people no longer have opinions of their own and depend on AI instead?
Asking AI is not a bad thing to suggest in theory. But whoever is asking needs to know what to ask, what to accept and reject, how to assign resources like time to those suggestions, the value of each one in context based on their understanding of the needs of their test clients. So “ask AI” needs more words around it to be useful.
So it’s a dangerous thing to direct someone to an AI without knowing more about the situation. If I said you can solve your problems with a baseball bat… well, to some people that means take up baseball…
I think that replies like this one specifically are the equivalent of Let Me Google That For You, which is banned on StackExchange for good reason and should be considered, as a general reply without addition or context, worse than useless.
AI is like any other tool in the sense that it empowers people. For good or otherwise. It can empower people to pretend that they understand things that they do not. That can be dangerous. But I suppose the real question is: Is the danger and limitation of AI worth the financial savings to companies? Because that is the question that will drive the industry. And as long as companies value craft and capability enough to pay for it then AI will remain a tool in the hands of good testers. Because testing becomes what people pay for. If they want certifications, they get certified applicants. If they want tool users, they get those too. And AI will be the same story. Of course people will use AI to try to leverage jobs they cannot really do for the money, and that’s another challenge that either companies will overcome, or choose not to overcome.
Haha, that “Ask Copilot” response pretty much sums up where we are going 😅.
The problem isn’t dependency on AI as such, but how one uses the technology. It becomes a crutch if people stop forming opinions or thinking critically, and that is the real risk. However, if you look at AI as a second brain that validates, speeds up, or gives you an alternative perspective, then AI arguably makes our own thinking better.
AI is great for suggesting coverage areas during testing, but it can’t completely replace the context, intuitive sense, and domain knowledge that a human tester brings. So maybe the new norm shouldn’t be “Ask Copilot” but rather “I’ve got an idea, let’s check if Copilot agrees.”
I have no doubt that at some point we will be.
Consider the example of low code automation tools. Take the tool away and many of those creating scripts with those tools will likely have no chance of building automation from scratch.
AI amplifies that model exponentially and applies it to a broader range of things; take it away and a lot of people will offer very little value.
The question, then, is whether this is okay. Is it the natural progression? Would society collapse if the internet collapsed, and are we prepping for this?
AI is more likely to be switched off, though, whether for security reasons, cost, morale, etc. I know projects and clients that, due to current risks, do not allow it. Those teams still do very well, and the AI productivity hype is still not all that evident. But for those coming into the industry now, perhaps only ever knowing AI, it’s likely that yes, they will be over-dependent.
I shall refrain from asking AI about AI preppers as I quietly plan my escape.
AI isn’t going to turn intelligent people into idiots. Intelligent people will use AI where it’s useful (which it may or may not be in their context) and will never give an answer like “Ask Copilot”. Sadly, these people are the tiny minority.
The testing “profession” is plagued with lazy people who just want to paste a URL into a tool and click a button to do their security or accessibility or API testing or whatever. They ask stupid questions on forums such as “What are the test cases for an e-commerce website?”. Total reliance on AI is just the next predictable step for these people.
You can’t fix this for other people, and eventually there will be an awful lot of them. You just need to make sure you are not that person and you do not employ that person. A good tester won’t have any problem doing that.
On a tangent, there are more and more articles in which AI is portrayed as human-like: a friend, with empathy, and so on. Adding human form and tone significantly increases the risk of misconception, with potentially tragic results.
Forgetting even for a second that it’s a toaster will create a dependency.
Even the LLMs, despite the “AI” branding, can tell you they are not actually AI at all; they are trained models.
The sales pitches want you to ignore this and perhaps even want you to become dependent. It’s a bit of a wild west out there encouraging dependency, no Westworld pun intended.
Haha, high-five to Urdu/Hindi speakers!
Currently I am testing a new feature and, to be honest, I have not asked an AI agent yet. Not because I won’t, but because I did not feel a need to. I derived all the scenarios from my experience with the product, as I believe testing is first of all context dependent. Only once the AI agent knows the context can it guide what should be tested further; usually, the agent will give you generalised ideas about testing. There is no harm in asking, I guess. Why not? However, depending solely on what the agent tells you to test might miss the innate nature of end users’ experience, scenarios, and context.
I still use AI as an aid rather than relying on it completely (like a real resource). In my context the workload doesn’t allow spending the time to teach the AI all about our context so that it could actively help us.
It has allowed me to speed up development-related work, though.
I think it’s perfectly reasonable to regard AI as a friend. Like AI, my friends are unreliable, prone to unnecessary repetition and hallucination, wrong most of the time, and are incapable of original thought. At least AI is happy to work 24/7, isn’t drunk most of the time and doesn’t want to borrow money I’ll never get back.