What are the risks of over-relying on tools?

No doubt tools help us. We use them day in and day out. Yet what happens if we rely on them too much?

I’d like to hear your stories. Can you share an example of when you’ve relied too much on a tool? What was the impact? What did you learn from the experience? How did you get back to human thinking and human doing?


3 Likes

I have a clear case of over-relying on tools.

Once, I had a task to do performance testing of a set of backend services. I found a tool that allowed me to write scenarios in plain code: Gatling (with scenarios written in Scala).

I implemented a few scenarios, prepared an environment, and executed a simple performance test. Then I opened the generated report with the test results, and I saw that the graphs were green and everything worked very fast.

I increased the load a bit and ran it again, and got pretty much the same results.
But it was a bit suspicious, so I dug into it.

It turned out that the services were implemented in such a way that they returned a 200 OK response for every request. If an error occurred, it was added to the body of the response.

That was the reason why the generated test reports were “too good”. Gatling only saw 200 OK status codes, so it seemed that performance was great.
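For anyone who runs into the same trap, here is a minimal sketch of the kind of check that would have caught it. The base URL, the /orders endpoint, the payload, and the “error” field in the JSON body are all hypothetical placeholders; the point is that a Gatling check can assert on the response body rather than on the status code alone:

```scala
import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Minimal sketch: make Gatling validate the response body,
// not just the status code. Endpoint, payload, and the "error"
// field are hypothetical placeholders.
class BackendSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://backend.example.com")

  val scn = scenario("Create order")
    .exec(
      http("create order")
        .post("/orders")
        .body(StringBody("""{"item": 42}""")).asJson
        // A 200 OK is not enough for services that report errors in the body...
        .check(status.is(200))
        // ...so also fail the request when the body carries an error field.
        .check(jsonPath("$.error").notExists)
    )

  setUp(
    scn.inject(constantUsersPerSec(20).during(1.minute))
  ).protocols(httpProtocol)
}
```

With the body check in place, requests that come back 200 OK but carry an error count as failures, so the report goes red instead of deceptively green.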

So… it is always good to know your system and your tool well before relying on them.

3 Likes

I had an example recently where we were offered a free trial of an AI automation tool. Its key benefit was meant to be the usual “no code, design automated tests from scratch, etc.” My old sceptical head was firing alarm bells… but I thought, no, maybe it’s time to take a fresh look at these.
So a few of us, from the tech testers to the exploratory/manual testers, trialled it for a few weeks. The main conclusions were:

  • it didn’t cope with everything, e.g. custom on-screen objects,
  • it wasn’t really “AI”. I mean it didn’t learn anything by itself: if it couldn’t do something, it couldn’t do it. You had to do the work to train it.

Now, the danger with this tool was that testers could start focusing on the tool and getting it to work for them, and focus less on what they’re meant to be testing. Get your processes efficient and understood first, and then use tools to solve specific challenges or to find opportunities to get faster, better, or cheaper.

1 Like

ChatGPT… Need I say more? :stuck_out_tongue:

AI can be relied on if it is used properly, with proper prompting. A tool cannot function without a human’s interaction.

I had the exact same issue when I had to build a POC for a client that used “RESTful APIs”, and I quote, “state-of-the-art APIs” :stuck_out_tongue:

2 Likes

Risks include:

  • Testers take the tool’s results at face value. With some tools and some types of testing, there are ways for testers to validate the results, but the risk is that they don’t. In fact, I suspect they rarely do. With some tools, it isn’t even possible to validate the results. Pretty much every tool I have evaluated gives incorrect results under certain conditions.
  • Testers stop thinking of tests they could do outside of what the tool can do.
  • Testers don’t learn how to do the test without the tool.
  • Testers stop thinking about how the tests could be done better, faster, more accurately or whether they need to be done at all. They just drop into a groove and do the same thing every time.
  • Testers don’t learn the technologies that underlie the tool and the application.
  • If the tool is doing everything and the tester isn’t adding any significant value, there’s a good case for replacing them with someone cheaper or automating everything they do. For instance, I see accessibility testers (at other companies, not mine) test websites by putting the URL into an automated testing tool, clicking Start, then copying and pasting the tool’s HTML output into a Word document and delivering it to the client with no additional analysis or insight whatsoever. Literally anyone could do that, including the client.

We use a lot of tools when doing both manual and automated accessibility testing, but I constantly hammer home the fact that the tool is just there to support us and that we ALWAYS make the pass/fail decision based on inspection of the source code and user interface. When we use an automated testing tool to test a website, we manually verify every single issue it reports (and delete or correct most of them because they are incomplete or inaccurate).

Before using a new tool, we evaluate it to find out what it does and doesn’t do, and what it gets wrong. Then we know whether we need to use additional tools or use manual testing to fill the gaps. Increasingly, we are building our own small tools because it’s easier than evaluating other people’s.

I do agree with the points about testers no longer seeing what is outside the tool, not being able to test without the tool, and not asking whether things could be done better. I see this mostly with performance testing: if it can’t be done with LoadRunner or JMeter, then it shouldn’t be part of the test (according to them). Take monitoring, for example: LoadRunner’s monitoring is incredibly unreliable, and JMeter’s monitoring is non-existent without a huge wad of plugins. In both cases, the liberal use of a dedicated monitoring tool can do a better job in pretty much every scenario.

LoadRunner’s sales pitch unfortunately doesn’t help matters, as it blinds users into thinking it can do everything as a one-stop shop. Possibly in a lab environment, yes, but in the real world, putting all those testing eggs into the LoadRunner basket is going to fail.

1 Like

I would say that management at my company relies too much on tools, or the idea of tools, for testing, especially automation tools.
No matter whether the problem is too few testers, progress that is too slow, or issues in production, the solution is always a tool. First QTP was the solution, then Playwright, and next it’s AI. There’s no thought that maybe we actually need more people, or that the cheap contractors were a bad idea.
Of course accessibility testing can be 100% automated with the right tools - right?

1 Like

I would say that the topic should be framed a bit differently :slight_smile: I don’t think that over-relying on tools is, in general, an issue per se :slight_smile: I would say that we shouldn’t rely on tools that are unreliable or that we know little about. It’s more about the proper usage of tools and understanding the risks of using them. As with everything, if you think that a particular tool may introduce risks or failures, then you need to implement some redundancy-check mechanisms, just to be sure that you can rely on those tools and that, if something goes wrong, you can follow your “plan B”.

1 Like

Instead of answering this question as a tester, I would prefer to answer it as a human being, highlighting the over-dependency on smartphones, which has impacted real human connection to the point that people are obsessed with the virtual world.

In the last few years, we have become so dependent on smartphones that we now care for that physical device more than for the real human beings around us.
For every aspect of life, it feels like our life is incomplete without a smartphone.
And the dependency on and craze for smartphones are only going to increase day by day.

PS: MoT doesn’t have a mobile application; otherwise, it could have increased my screen time by a further 30-40 minutes. :grin:

1 Like