What does boosting growth with AI mean for software testers?

In the UK, AI is being used to help boost growth in the economy.

  • What do others think about this?
  • Is this happening in other countries?
  • Is your organisation doing anything internally to support AI education and projects?
  • What should software testing and quality assurance professionals be thinking and planning for as part of this?

I think IT has been talking of nothing else for at least the last year! Perhaps now, though, we’re getting more balance - certainly the flow of one-sided articles has slowed a bit, in my view.
It’s inevitable that boosting productivity without extra demand will mean job losses, and perhaps we’ll see more ‘civilians’ doing IT work (in the same way that MS Access and Excel have democratised work automation and app delivery in the past).
This might be good news for testers but not necessarily for developers, especially juniors.
Imagine testing the app that Doris or Steve from HR has written to onboard new starters though.
As ever, Government loves anything that supposedly saves money, but, also knowing Government, I am skeptical that they really know anything about big IT projects!


Much the same way Wikipedia, Google, and Stack Overflow boosted growth. One might say they got there first, but


The difference for me is that ChatGPT et al. are happy to explain the whys. So far, ChatGPT has been no magic wand for me results-wise, just better than the earlier alternatives. What’s interestingly difficult is still difficult; what’s boringly difficult, ChatGPT et al. will make simple.

AI would definitely have boosted my growth in tech school. I was not a guy happy to solve difficult exam questions, which I did anyway, being the kind of guy that later loved solving sudokus, but there were never any explanations of the whys that suited me where I was. That applies a lot to the programming world too. The problem is that I, like most, brag that I have an IQ of 150 when it’s really something like 125, while the guys that came up with the calculus methods, and the difficult programming constructs, really had 150 or higher. And I, at least, need the “how did these guys come up with this??”. Going from free-flow programming to object-oriented programming made me skip programming altogether back in the early 90s. I felt I was just too stupid.

But that’s maybe just me. Stuff like “proxies” or “factories” didn’t come easy for me. Now, ChatGPT DOES understand those concepts, on a level maybe above their creators, actually knowing how to explain the stuff at the level of “making object-oriented programming the next logical step to take after knowing basic programming”. Not being Einstein-smart, I asked ChatGPT to explain the two relativity theories, and it made a heck of a job of it. I understand those theories much better now. For you younger guys, take the opportunity to make AI explain all the automation stuff and other testing stuff, at the level where you are! Acting much smarter than you are is a must in business and all over the place, but you are where you are, and ChatGPT et al. can make you smarter, if you start at the level where you really are now. I’m old enough to say such things.
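(As an aside, for anyone who hasn’t met the pattern being name-dropped above: here’s a minimal, hypothetical sketch of a factory in Python. The class names are made up for illustration; it’s the kind of construct that’s much less scary once someone explains the why.)

```python
# A factory: one function decides which concrete class to build,
# so the caller never has to name (or even know about) the classes.

class PdfReport:
    def render(self) -> str:
        return "rendering a PDF report"

class HtmlReport:
    def render(self) -> str:
        return "rendering an HTML report"

def report_factory(kind: str):
    """Return a report object for the requested kind."""
    kinds = {"pdf": PdfReport, "html": HtmlReport}
    if kind not in kinds:
        raise ValueError(f"unknown report kind: {kind!r}")
    return kinds[kind]()  # construct the chosen class

print(report_factory("pdf").render())  # -> rendering a PDF report
```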

The intelligence of AI is NOT unlimited. If you take areas like Philosophy or Politics, which really rock my boat more than tech (but did not put milk on the table for my kids), it’s VERY obviously trained on the internet. It is what it is, and what it is is pretty apparent. And well, marrying a woman smarter than me, and thus having a son that knows AI better than most and explains it well, has kind of brought AI down to earth a little for me. It is what it is.

But what I think AI will make you grow with is having a guy you can ask stupid questions, one who does not roll his eyes but gives you good answers. Producing good results will make the organization you work for grow.

There are both positive and negative aspects to this question:

Positive aspects:

  1. As the economy grows, cybercrime also grows, and with the help of AI people are finding new ways to commit crimes, so security testers are now required more than ever before.
  2. Many AI-based startups are entering the market, which brings new opportunities for testers.
  3. As AI grows, there are a lot of new things in the market for testers to learn and upskill themselves with.
  4. Since there are many AI tools on the market, testers’ productivity has increased, as many manual and repetitive tasks, e.g. QA documentation and automation code, can easily be done with the help of these tools (see the sketch after this list).
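To make item 4 concrete, here is a minimal, hypothetical sketch of asking an LLM to draft QA documentation, assuming the OpenAI Python SDK and an API key in the environment; the prompt and model name are illustrative, and a tester would still review everything it produces:

```python
# Hypothetical sketch: asking an LLM to draft test cases for review.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

prompt = (
    "Draft a short test charter and five test cases for a web login "
    "form with email and password fields and a 'remember me' checkbox."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

# The draft is a starting point, not a finished artifact:
# a tester still reviews and edits it before it enters the suite.
print(response.choices[0].message.content)
```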

Negative aspects:

  1. People are being replaced on the grounds that AI is now smart enough to do their tasks. Company revenue may not go down in the short term, but in the long term this will have serious impacts. In my country, people in many organizations are being laid off outright, because the organizations claim that their positions are now redundant given the AI tools on the market, and these include testing positions.

So AI has brought some positive changes for testers in the market; however, it comes with its own costs.

I would love to know!

The current AI hype cycle started when, 3 years ago? In that time I have heard that almost every business expects its niche to be disrupted, that this is a paradigm shift, that this is like an industrial revolution, that you need to jump on the train or risk being left behind. Since AI research is a field with a rich history of overpromising and underdelivering, I feel 3 years is enough time to start asking the question: what are the actual results?

I guess a couple of guys got much richer. And a couple of other guys got praise and recognition they otherwise wouldn’t have (like the latest Nobel prize laureates). Good for them.

But otherwise? Everyone is shoving “AI” and “LLM” into their product like there is no tomorrow. What ROI do they achieve? How does that translate into actual new users? Into new transactions? What actual problems were we able to solve in the last 3 years? How did AI contribute to solving them? Somehow, these questions are rarely asked. And I think that is quite telling.

A few days ago I listened to this podcast about using AI in a system that accelerates obtaining expertise. The idea is: the only way to obtain expertise is going out there and doing the thing, making mistakes, and trying not to make them again. The podcast guests are developing a system that models real-life situations closely, so you can try things in an environment where the stakes are low. They’ve been doing that for 20 years. The current breed of AI allows them to introduce even more randomness into the system, making it easier, cheaper, and faster to generate novel situations that don’t follow a very strict pattern. And while this is a case of AI usage that I can support, it also strikes me how AI delivers only marginal utility: the system they had worked pretty well even before they added the AI.

Here’s a link if you are interested: