Product knowledge

Good morning -

I have a question for everyone: in your particular setting, and thinking of the application(s) you test, how much knowledge of the product do you have, and/or would you expect the tester(s) to have? For example, do you have a deep understanding of particular functionality, a deep understanding of the whole thing, or a surface-level awareness of everything?

Yesterday I was having a discussion with someone who had the feeling that if we have automated tests then this takes away the overhead of testers having to learn the product in its entirety, i.e. they can focus on the areas that are specifically in scope of the interactive (“manual”) regression testing. I was on the side of disagreeing with this, but wondering what others thought.

We were discussing the pros and cons of automation on our products (to be blunt, we have some automation on some of the products but not all of them, and where I am we develop and test >10 applications).

So the summary is: can automated tests act as a substitute for product knowledge?


In my opinion, no.

The reason is that without product knowledge, you don’t know if the automated tests are covering the right parts of the product.

Any product is going to have features that are widely used by pretty much everyone, features that are essential to the product (which may not be the widely used features, for example in point of sale software it’s essential to set up products, but once that is done, the product setup module isn’t going to get much use compared to the sale module), and features which are used only by a very few. There can also be features that are better tested by humans, such as checking that a printout has the correct layout.

Someone who isn’t familiar with the software may spend more time focusing on the rarely used features, which could well be safely left until everything else that is reasonable to automate has been automated.

Familiarity with the product also means that the automators have a reasonable idea what constitutes steel thread or happy path, what constitutes error conditions, and what constitutes edge cases. For instance, in a multi-tenant application, a feature used by one customer can be considered a lower priority than one used by all customers (unless that one customer is a huge driver of the business, in which case keeping said customer happy becomes a major part of the company’s process - there’s a reason that pretty much everyone who does business with the amusement park industry knows that you Don’t Mess With The Mouse).

In short, knowing the product and how it’s used means that automation can be targeted to critical and frequently used functionality first, for maximum ROI. After that, if you have the time and budget you can fill in the gaps and expand what you have.


I see it almost the other way round. In my opinion, automation frees up space for people to focus more on exploratory testing, root cause analysis AND deepening product knowledge. Then they can use their findings to build the next automated tests.


Most companies are fine with shallow testing. That’s one of the main reasons they start testing by creating detailed scripted test cases and focusing only on those. Automating those makes sense. The company might not be financially affected if some users encounter problems or bugs. Some companies like to launch a product and then spend the next 3-5 years adding features and fixing the bugs that clients or users complain about. I call this checking the product, or amateur testing.

On the other hand, you meet people in product development who care more about the product, its risks, and potential issues, and they easily find bugs by playing with it for a while. They will appreciate professional testers with deeper knowledge of how to assess a product and find interesting or difficult bugs. They will give you space and respect for having the deep and broad product knowledge or technical knowledge needed to do professional testing work. I’d call this professional testing.
Sometimes they are pressured by higher management (e.g. the CTO) and are required to push test cases and automation onto the tester. I’ve had dev leads and PMs telling me to at least fake it so that IT management is happy.

Unfortunately, in my journey this year I have not found professional testing being required, based on job descriptions and interviews… but who knows, maybe some smart people who aren’t the ones hiring in some companies will appreciate it.

I lean towards no.

What if the automated tests produce a false positive? This has a non-zero chance of happening, and I think you need someone with sufficient product knowledge to call out that the automated tests are wrong and need to be updated.
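To sketch that point in code (everything here is invented for illustration): a check can stay green while the product is broken, because its assertion is too weak, and only someone who knows the product would notice what the check *should* be asserting.

```python
# Invented example: an automated check that "passes" while the product is
# wrong. The discount function has a bug (the discount is never applied),
# but the assertion is too weak to catch it.

def apply_discount(price, percent):
    # Bug: the discount is never actually subtracted.
    return price

def check_discount():
    result = apply_discount(100.0, 10)
    # Weak assertion: passes despite the bug. Someone with product
    # knowledge would know to assert result == 90.0 instead.
    assert isinstance(result, float)
    return "passed"

print(check_discount())
```

The check reports "passed", so a dashboard shows green; it takes product knowledge to recognise that the check itself is wrong and needs updating.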


I would say no. Automated Tests are not a substitute for product knowledge.

As an automator, I need to know the product; how else can I possibly write test cases to be automated? If you’re automating from the UI, you need to know the workflow and the users’ expected (and sometimes unexpected) routes through the app. If you’re automating the backend, you need to know the application and how it interacts with other downstream (or upstream) services to trigger the right events.

Automation is really, really good at the boring stuff. Regression. Click this button 100 times. Go through this workflow 1000 times. Send this payload and check its returned value. Automation is terrible at exploratory testing. If any of my regressions break, I’ll need to figure out why and how to fix them, which can require a good bit of product knowledge, since step 1 could have broken step 100 if it’s not set up properly.
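A minimal sketch of that “send this payload and check its returned value” style of regression check, with an invented in-process handler standing in for a real service (the names here are made up, not any real API):

```python
# Hypothetical payload round-trip check; handle_order() is an invented
# stand-in for a real backend endpoint under test.

def handle_order(payload):
    # Pretend service: confirms any order it receives.
    return {"order_id": payload["order_id"], "status": "confirmed"}

def check_order_confirmation():
    response = handle_order({"order_id": 42, "item": "widget"})
    # The boring-but-vital regression assertions.
    assert response["order_id"] == 42
    assert response["status"] == "confirmed"

# Automation happily repeats this a thousand times.
for _ in range(1000):
    check_order_confirmation()
print("1000 runs passed")
```

Deciding *which* payloads and *which* assertions matter is exactly where the product knowledge comes in.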

My QA (“manual” tester) probably has better in-depth knowledge, since she can do the more creative and exploratory testing. So she can help me understand things better or lend her deeper knowledge. But that doesn’t mean I can just ignore the product and not learn it.

At the end of the day, I say let me automate the boring stuff so my manual testers can do the fun, creative exploratory testing.


Interesting.

Are you saying then that from your experience the companies are lessening the expectations of what testers do?

It’s not lessening; a different path is chosen, due to many companies’ different perception of what testing is.

Some good examples are visible here:

“responsible for the planning and implementation of software tests and support our development-related quality assurance teams in test case specification and implementation. The aim is to ensure the best possible quality of our software products”
“You are responsible for defining, developing, and executing automated integration and acceptance tests with JAVA in a cross-functional team.”

If they were serious about professional testing, I’d see several keywords like: finding gaps, product risks/risk management, heuristics, learning, product and business knowledge, investigation, experiments, experiencing, exploring, going deep, hard-to-find issues, working closely with clients and/or the business, participating in discussions and meetings, being involved in product design early in the process, etc.


I wouldn’t say lessening; it would be different. We are transitioning to a lot of APIs and micro-services in our architecture, which allows for much easier automation of backend services.

So if we split a given product into 3 lanes:

UI
APIs/EndPoints
Lines of Code.

I start in the ‘lines of code’ layer and work up through the APIs and finally into the UI layer, while my QA (‘manual tester’) starts in the UI layer and works her way down, since it’s much easier to automate the lower lanes and much more challenging to automate the higher ones. This way QA can focus on investigation/exploratory testing, user workflows, and the business requirements, while my automation proves that our code works as designed. Two very different but necessary approaches: I write code to prove that 2 + 2 = 4, while my QA uses the actual calculator unit.
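To caricature the bottom lane in code (a toy stand-in, not any real product): the automation proves 2 + 2 = 4 at the unit level, while the human tester works the actual calculator from the top.

```python
# Toy stand-in for the "lines of code" lane of a calculator product.
def add(a, b):
    return a + b

# Bottom lane: fast, cheap, automated proof that 2 + 2 = 4.
assert add(2, 2) == 4

# A middle lane might wrap the same logic behind an API; the top lane is
# the UI, where the human tester explores the real calculator.
print("unit lane: 2 + 2 =", add(2, 2))
```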

Definitely not.

Product knowledge means you can make sensible decisions on how to automate the testing of that product, and automation knowledge means you understand the flow of the journeys offered by the product. In my mind, if you know one, you know the other.

Aiming for your team to ‘know less’ about a product just because it’s large, far-reaching, and complex, and automation covers a lot of it, seems like a bad idea.


Agree with the ‘no’ sentiment.
Feels to me that this sentiment or desire comes up when the product is actually more than one product, and might benefit from some kind of splitting up along lines of concern to reduce the knowledge burden.

When you say splitting the product, do you mean splitting up the responsibility for testing? Or splitting the application out into smaller apps/services etc?

I agree here.

I think the person I was speaking to had the feeling that when something is automated, it might remove some of the panic/urgency around handing over full product knowledge when it’s needed (e.g. for more manual, exploratory regression testing), such as when someone is leaving and you need to hand over product knowledge to someone new.

Yes, automation will remove some of those feelings, and automation is like documentation.

Although you’re likely only covering ‘expected’ outcomes with the automation, it’s the product playbook: how the product flows. Anyone reading through the code should be able to understand, ‘ah, for this journey, we go from the login screen to this screen, then this screen’, etc.
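A sketch of that “automation as the product playbook” idea: a journey test written so that reading it top to bottom reveals the flow. All the screen and step names here are invented for illustration.

```python
# Hypothetical journey test that doubles as documentation of the flow:
# login screen -> dashboard -> reports.

class Screen:
    """Minimal stand-in for a UI screen driver."""
    def __init__(self, name):
        self.name = name

def login(username):
    # Journey step 1: from the login screen, a successful login
    # lands on the dashboard.
    return Screen("dashboard")

def open_reports(screen):
    # Journey step 2: from the dashboard, navigate to reports.
    assert screen.name == "dashboard"
    return Screen("reports")

# Reading the steps in order documents the journey.
screen = login("test.user")
screen = open_reports(screen)
assert screen.name == "reports"
print("journey covered: login -> dashboard -> reports")
```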

Just running the automation can give people who are not familiar with the product confidence that stuff is working as expected, provided they can initiate a run of the tests.

What automation shouldn’t do is make you complacent or create blind spots; i.e. ‘oh, it’s covered by the automation, don’t worry’ - red flag.

It sounds like your product is a big and complex one, and no one knows it all - which is not uncommon - all that’s needed is for everyone to respect that, set realistic expectations on what can be tested and in what time frame, and what automation exists that might help speed things up.

Openness and transparency are the best.


I was thinking smaller apps, but I think there is a way to work that out. If there genuinely are customer journeys that cover the entire breadth of the product, then having testers focus on specific areas rather than the entire product suite would be a disservice. If a tester feels incapable of testing the entire customer experience, then maybe the product is too complex in terms of the test depth we are expecting. Are we asking testers to test non-functional and functional aspects side by side, or behaviours and non-business logic in the same testing activity, maybe?
