What are the current biggest HOT topics in the QA world?

Hi fellow testers,

I’ve been wondering what the hottest news is in our testing world.
I see mostly stuff connected to ChatGPT and how it will affect our jobs or our lives.
Is there something similar going on with other technologies, news, etc.?

1 Like

Hello and welcome!

I’m not sure how to answer your question, because I don’t know what value you’re looking for in an answer.

If you want an accurate description, that’s a problem. We’d need to define what the testing world is, break the discussions in it down into codable topics, decide how far back “current” reaches, add them all up and pick the top few.

Now you might be thinking “wow, what a pedantic squirrel”, which is unlikely to be the whole truth because I doubt we both thought of squirrels, but we can mine the pedantry for utility.

Perhaps the testing world is really testing worlds, each with its own values and definitions and culture. Perhaps current discussions are really old discussions stated again - so not really new, just perennial.

I find that what constitutes a hot topic depends on where you go and who you talk to. AI is a popular topic in some areas because it’s good at capturing our imaginations. It’s a new powerful tool that will eventually shape the way we work, maybe take our jobs, and nobody wants to be left behind. It also looks like it’s smart, which is what it’s designed to do. The people in testing I tend to hang out with and talk to (my “testing world”) don’t talk much about AI, because they generally dismiss it as not particularly useful in a general, testing sense.

I always see a lot of churn in testing topics. Discussion of the use of test cases picked up a while ago, and then died down. Many topics come back again and again, like the “what should testers call themselves” question. Testing is a huge social and philosophical tornado of science, epistemology, psychology and communication, from which all manner of topics are thrown out to land across the testing discussion landscape.

If you’re looking for a constant, looming shadow of a topic that seeps into the discourse of software testing like oil into a pelican then it’d be the replacement of testers. New tools, new methodologies, new processes, testing shapes, the AI from The Matrix, all designed to reduce cost by eliminating something people don’t fully understand. That’s always hot, although not publicly criticised to the degree I’d prefer. They truly are after our lucky charms.

So, in short, no.

6 Likes

At a local level, we talk a lot about “how do we test this?” Often we talk about tools, but usually it’s about techniques and people.

What about AI building full, actual end-to-end tests now? https://youtu.be/rZEmxEmwFvs

At every meeting I go to, people tend to ask the same question: “What’s the evolution of the QA role?” Is QA evolving? And often the answer is yes! QA went from a tester pushing buttons to something more like support engineering: finding issues, investigating why an issue is happening, automating processes (not only tests) and even creating their own test environments and tools.

So for me, one of the HOT topics right now is the evolution of the role.

1 Like

Well, what this tool seems to do is generate checks and test case documents from sentences by finding one particular process flow.

It’s a nice way to generate a process flow for a stated goal. What concerns me is that if the AI makes the decisions then less thought is put into why those decisions are made. I need to be able to frame my testing to show its value to my test mission, and here I’m being told what we’re doing without an understanding of any logical thread between what we’re doing and why we’re doing it. If we try to make the strategy fit the testing then we are bending our purpose to serve the tool instead of the tool serving our purpose. The checks become more arbitrary, and the assumption of deeper testing is hidden in the decisions the AI has made. We reduce the time it takes to write the checks but give up decisions about design and important exploratory feedback and learning from that process.

I think another concern is that it doesn’t fix a particularly important problem, and encourages other problems. End-to-end automated UI checks are costly, slow, unwieldy, brittle, flaky and shallow. Perhaps the fact that they take time to write is not only a benefit in learning, strategy and test design, but a blessing of pain that keeps people from writing so many. If they’re painful to make, perhaps that encourages us to be careful with the cost-benefit of their creation, and makes us think about repeatability, change risk and opportunity cost before building an expensive safety net for simple capability fact checks.

This is without getting into the hidden details. The interpretation layer, the reliability with change, how it deals with software other than e-commerce, the blame and liability if they fail, how I can get it to make the right flow decisions based on what I want, what the tool claims vs what it can actually do, and so on.

If I could have reliable technology that found flows through my product I’d use it to generate as many flows as it could to see if it found any interesting ones. Flows I didn’t think of or ways to achieve a goal without using particular, stated parts of the product. I could use it to suggest possibilities to help me improve my strategy. That’d be neat.

2 Likes

I would say the Playwright hype

@kinofrost testRigor has that feature as well.
In the video I shared, the AI just helps you build the test case; there is no flakiness there. Moreover, testRigor as a system is designed to fix the industry’s problem with flaky tests. Think about it: how do you make your tests less flaky? I would argue that you want to make sure that the only thing you provide is a specification in English, with absolutely zero implementation details. That is exactly what testRigor does for you.
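
For illustration, a case reads roughly like this (a hypothetical example I’ve made up for this post; the exact phrasing may differ from the real product):

```
click "Sign In"
enter "user@example.com" into "Email"
enter "secret" into "Password"
click "Log In"
check that page contains "Welcome"
```

No selectors, no waits, no implementation details: the system resolves the English to the elements on the page.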

End-to-end UI tests are inherently flaky no matter how they are written, and this tool does nothing to change the existing problems of speed and cost after they are written. Concurrency, test data control, performance issues, third-party code, infrastructure issues, change risk: all forms of unexpected non-determinism make these shallow multi-factor UI tests flaky. And brittle, which makes them frustrating and expensive to maintain. Given that all they do is make simple UI checks for known behaviours, they had better be important.
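
To make that concrete, here is a minimal sketch in Playwright/TypeScript (the shop URL, element names and test id are all hypothetical) of a check written about as carefully as the tooling allows, annotated with the non-determinism it still inherits:

```typescript
import { test, expect } from '@playwright/test';

// A deliberately "well-written" check: role-based selectors,
// auto-waiting assertions, no hard-coded sleeps.
test('order total appears after checkout', async ({ page }) => {
  // Infrastructure risk: DNS, CDN, a deploy in progress.
  await page.goto('https://shop.example.com');

  // Test data risk: the product may be out of stock in this environment.
  await page.getByRole('button', { name: 'Add to cart' }).click();

  // Third-party risk: a payment widget may load slowly or not at all.
  await page.getByRole('link', { name: 'Checkout' }).click();

  // Concurrency risk: a parallel run may have changed prices or emptied the cart.
  await expect(page.getByTestId('order-total')).toHaveText('$10.00');
});
```

None of those risks live in the wording of the check, so no authoring layer, English or otherwise, can remove them.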

In the video you linked, changes were made that reflected changes in the implementation, so the checks still have to be examined and edited each time. Moreover, because the edits have to be made at a high level, it lacks the control and customisation of existing tools.

Also, confirmation checks on written specifications are a very limited part of testing. I could go into detail about why trusting specifications never ends up being a good idea, but here I’ll just say that (where the specifications are good, and that quality is reflected all the way down the human-language implicature chain) written, explicit specifications are where we are least likely to find bugs. What is more interesting is the unwritten specifications, which are far deeper, more nuanced and more numerous.

But finding the flows is a neat thing to have. That could help find new and important problems or develop a better and more varied strategy.

1 Like

@kinofrost Think about it: how come automated tests are so brittle while we as humans can go through the flow no problem? This is why testRigor is not an automation tool, but rather a human emulator going through a website based on a specification. Just try it out 🙂

Think about it

Have I not?!

how come automated tests are so brittle while we as humans can go through the flow no problem?

A few reasons. Computers and humans are different in the way we process information. Humans are capable of interpreting information based on existing knowledge in a way that computers and current AI systems cannot. They are cognisant of contextual information that affects the value of the testing they perform. They can use many varied oracles, each assigned an amount of trust and authority, overlapping in a way that reduces their overall fallibility. Humans understand implicature and survive non-determinism because their internal heuristic software allows them to navigate the world in that way. Computers do what they are told because they are far more deterministic machines.

A human cannot be as obtuse as a computer, even if they tried. They are fundamentally different in the way they work.

This is why testRigor is not an automation tool

It generates shallow fact checks, which puts it on par with an automation tool. The only difference I can see between this tool and any other similar UI tool is in how we get from formalising the functional behaviour to having the checks written. Which, to repeat myself, robs us of insights into our test strategy, and that is a lot of responsibility to hand over. I wonder, if a case went to court over a software bug, whether the lack of strategic insight behind this tool’s checks would be considered a liability. Perhaps blaming AI is the new industry scapegoat: we can push the sins of our failures onto it and drive it into the desert.

1 Like

@kinofrost Chris, did you try testRigor?

Did you engage with the concerns I raised?

1 Like