Share a testing wisdom

Please share a piece of testing wisdom that you think is great.

I will start

“Don’t stop testing when you’ve seen the expected result, but keep an eye out for the unexpected.”

6 Likes

A lot of the time, requirements are ambiguous. That is normal: people are different, work differently, and points of view are subjective.

Well, the thing is that whenever I find something that may or may not be a bug because the requirement is unclear, I go ahead and report it.

A lot of the time the report gets rejected, which is bad for me in the short term but good for the test process, because valuable information is gained (i.e. now we know what the requirement means).

I do that because in my experience the problem gets solved faster and the issue stays well documented. The other approach is to write emails to the person responsible for the requirement, perhaps followed by discussions with the leads and so on, which in my experience slows the process down and creates unwanted debate.

How do you guys deal with this? Do you think this approach is correct?

3 Likes

In my environment, the best way is often to talk directly to the author of the requirements and clarify any ambiguities. Ideally, they then update the requirements in the documents immediately.

2 Likes

Our teams work in a squad environment using a Jira workflow. If a Jira story is being tested in a non-live environment shared by other teams, a ‘story bug’ is raised so that the issue and its impacts can be tracked clearly at this later stage of the SDLC, when it costs more to change solutions and other teams’ deliveries could be affected. (The Jira story-bug issue type can be updated to reflect a missed requirement, etc.)
If the story’s development hasn’t moved into a shared environment, the team generally comments on the story ticket and uses a ‘flag’ if the story becomes blocked waiting for a response from outside the team.

Having a potential issue clearly logged helps with tracking and team visibility, and mitigates the risk of a dependency falling through the cracks. That can happen, for example, if one person in the team is managing an email conversation that nobody else can pick up in their absence, or if it simply isn’t clear that a dependency exists. Multiple negative impacts could play out: delays, a delivery that fails to meet requirements, customer impact, costs, and so on. Logging the analysis, issues, and decisions supports the quality of a delivery by providing the rationale and information that informs the delivery scope, along with any risks considered and managed or treated.

1 Like

Whenever anyone says “should”, “normally”, “maybe”, or anything similarly doubtful, there is almost guaranteed to be a bug there! :smiley:

3 Likes

It’s good to test everything that is possible.
But sometimes you only discover surprising behaviors when you try out seemingly impossible things.

1 Like

Quality is never an accident, it is always the result of a team effort.

2 Likes

The best test plans are not rigid documents but adaptable strategies that evolve with the project.

A test that finds no defects is not a wasted test - it provides confidence.
A test that finds defects provides insight.

3 Likes

I think this might be a really good approach on a large team where it might be difficult to have a conversation every time requirements aren’t clear. It forces the team to deal with the ambiguity, because they have to deal with the bug report.

At my company, QA doesn’t submit bug tickets, but we do report bugs in Slack so that a ticket can be created. When requirements are unclear, I write a bug report based on what the requirements seem to say. I tag the dev lead and the coder, and sometimes the product owner, to let them know we need clarification. Usually they give the needed clarification fairly quickly. Other times we need to have a discussion and the requirements need to be edited.

1 Like

Yeah, in my case we’ve hit this situation a lot of times. We start by discussing among the members of the team, then go to the team leader, then write to the development leader, then maybe to the person responsible for the requirements, and a lot of the time they conclude that it was discussed with the client and the requirement is what it is. So writing a ticket at least saves all that hassle, and even when it’s rejected I can
write the ticket ID next to the test case in the result reports, or even in the test case specification as additional information. That way, if someone new to that requirement has the same dilemma, they can get some clarification from the rejected ticket without asking half the world again.

1 Like
  1. Learn to code so you can speak to devs in their language
  2. Learn the business so you can speak to users in their language
  3. Learn to lead so you can speak to stakeholders in their language
4 Likes

“Think like an End User”

2 Likes

While we can agree on measurable things, any claim of “it is X” beyond that depends on a certain context and perspective. It depends on humans (and maybe also on animals interacting with technology).

While we sometimes see that “society / a group agreed on X”, this can be a brittle consensus promoted by the loudest, while some disagree in silence.
When it comes to humans, nothing is set in stone, and even if it may take a while, more or less everything can change.

2 Likes

I read that as “Think like an Uber User” and both totally resonate. End users are often time limited, make sure your app is responsive. Nice one.

1 Like

Demo early, demo often, demo every feature

1 Like

A bug in production is often a question that was never asked during reviews and requirement discussions.

The earlier you ask thoughtful, critical questions about assumptions, data, flows, or edge cases, the fewer surprises you will face later. Good testing starts long before the first line of code is written.

2 Likes

Don’t celebrate too soon when developers say they’ve fixed all the bugs

1 Like

Yes. It’s more of an attempt at a fix. Testing is about finding out whether it really fixes the problem.

“fix in readiness” or so.