Is “I don’t care.” a reasonable answer? I’m not trying to be rude or frustrating to anyone, but AI is something I feel I can’t control in any respect, so I have to focus on what I can control. Apologies if this comes off badly.
I’ll vent to myself for a few minutes and try to come back with a more helpful reply.
In every field, a new competitor arrives with breakthrough technology, and the rest of the companies that can’t afford such growth either go bankrupt or merge.
The same thing may happen in AI. LLM-based companies are entering the market at a stunning rate, and the cost of running these companies is stunningly high too. Every LLM is trying to grab market share, but only a few will succeed and the rest will fail. When that moment comes, maybe then we can say the AI bubble has burst.
As of now it’s hard to comment because, as I say, a new LLM-based startup appears every day with a few new changes.
Also, DeepSeek only gained recognition when it beat OpenAI on the App Store, and just because many people downloaded it doesn’t guarantee it’s the best in the market. Let’s wait and watch with popcorn.
I’m not sure what AI bubble burst means.
If I can get something useful to speed up my work from an AI tool, I might use it, and analyze/review/test what I got.
Competition among AI tools is good; it means we may get better results from them at some point.
The only thing we’ve used AI for is checking that people aren’t submitting content written by AI.
Ok, that’s not strictly true, but it’s not far off the truth. I’d love to explore it more, but I also feel like it’s a huge distraction from our current day-to-day needs.
Stepping into the crosshairs, I believe that as Testers, we should be concerned about AI and its latest revelation.
I am not referring to how we utilise AI for testing or whether AI will replace testers.
I want to understand how we can robustly test these systems. The world is increasingly embracing the use of AI, and as models become more integrated into people’s lives on various levels, it becomes crucial to be able to test them effectively.
So, why is this revelation so important?
Well, if we are to believe the reported development costs, it eliminates a significant barrier to entry for many businesses. If a model can be developed with a fraction of the budget, that greatly increases the potential scale of deployment.
What is the risk?
The lack of expertise and oversight in robustly testing these models inevitably leaves room for poorly designed models with inadequate control mechanisms and safeguards to be deployed. In my opinion, this poses a substantial economic and reputational risk for companies and individuals.
That is why testing is more important than ever. Although it may require some adaptation and painful changes, important things are rarely easy.
It hasn’t burst yet, because there are still large numbers of AI evangelists out there insisting that entire swathes of industry can be replaced by LLMs right now (spoiler alert: they cannot).
The DotCom bubble burst after people stopped trying to cram every possible type of commerce onto a website, and started realising the limits of the tech. The actual practical uses remained, and grew into the internet we take for granted now.
The AI bubble isn’t going to burst until there’s a broader understanding that these things don’t think, reason, or analyse, and they never can - the fundamental architecture that makes up an LLM just doesn’t work that way. Once that understanding comes, along with an understanding of what the real strengths and abilities of these models are, they’ll slot naturally into appropriate niches and fade into the background, just like the modern internet.
The only danger to our profession, and many others, is management who buy the hype and implement policy changes without any deeper insights, before the bubble pops…
From my point of view, it’s dangerous to not care about this. We learn, we adapt, we grow.
As with every new breakthrough, there’s a lot of skepticism and fear involved; those are normal human emotions. You know the famous quote that there’s maybe room for five computers in the whole world? Whoever said it couldn’t imagine today’s world. Same with AI today.
Developers must stay in the loop with new technologies because the programming world keeps reinventing itself. For example, if you stayed too long in vanilla JavaScript when NodeJS was invented, you would have been left out and dumped on the outskirts. The same goes for AI; it is simply unavoidable. AI’s here, it’s a powerful tool, so learn to use it to your advantage.
Don’t be like the taxi drivers who got run over by Uber.
All that said, it’s also partly a thing of our own mortal age: as we get older, we become more reluctant to try new things when the old ones have worked for us for so long.
I appreciate this. I think where I struggle is that in my current stage of life, AI is a bit low on the list of priorities. And, I know it will affect everything. It already has. I feel overwhelmed when I try to consider what part I have to play when it comes to AI.
You are right when you say it’s dangerous not to care. I do care, far too much in some ways, and so my first reaction was more thoughtless than anything. And, I want to be aware of being reluctant to try new things so thank you for that reminder also.
I’m with you, Judy. Computers are complicated, and even though computers represent the 3rd industrial revolution in a way, AI is not a 4th revolution. Mechanisation caught many people by surprise, as did mass production, and today’s digitization of production is merely augmented by AI. AI also suffers from arriving at a bad time, probably like other painful revolutions did.
AI is stealing jobs in similar ways, so we have to take it seriously, but the amount of studying it takes to master the monster still makes it unusable to the masses. That shouldn’t stop us from dabbling, though. For now I find AI a very useful sounding board for ideas. It helps me get over the writer’s block of starting a document or blog post by providing a decent pre-filled starting template. That’s as deep a water as I’m happy to go.