Ambiguous phrases in Testing

I observe that many people either use the same phrase and mean different concepts, or use different phrases for the same concept.

What are your experiences and examples with that?

Mine are listed here.


Oh boy, can I…

I started on a project many years ago as a consultant…
I believe it was day 4 that:

  • Developer A said to developer B: ‘I’m done with deving this part of the app, can you unit test this for me?’
  • Developer B replied: But of course

Here I am thinking: damn, developers who actually write unit tests, what an amazing work environment.
A few moments later (like 5 minutes) this happened:

  • Developer B said: ‘all done’

I replied: That’s pretty quick.

  • Developer B replied: yea I just scrolled a bit and clicked a bit, all good.

I replied: Uhm and the unit tests?

  • Developer B: Yea all done

So, me being nosy, I said: Can you show me how you did the unit testing?
Developer B: Yea, come over to my desk.

Me walking to his desk

  • Developer B: scrolls and clicks a few buttons in the app, fills in the form and says: ‘There, all done, unit tested.’

At this point I’ll let you imagine how red my forehead was from the omega facepalm I did.
First thing I did was work on the terminology :stuck_out_tongue:


Oh! That is going deep. When even developers do not get their concepts right …
(Aside: it would be better to call them Unit Checks or something similar, just not Tests.)


I’ve seen people use smoke, sanity, and regression testing for the same thing. Using smoke and sanity interchangeably doesn’t bother me, but regression testing is more detailed and takes longer than the other two. Also, I wish it was called anti-regression testing instead! :smiley:


Or Anti-Aggression testing :smiley:


Testing to detect regression and also to prevent the customer demanding regress :smiley:


I think you may have opened Pandora’s box here :stuck_out_tongue:

In my first role regression testing was about verifying bugs. Most of the time we were doing destructive testing and occasionally it was scripted. All manual. In my next role those terms had different meanings.

Now if someone asks me if there’s any test scripts for testing X feature, I’d be digging out python files that I use to help my testing.

Having moved team, I’ve found more differences in meanings. There’s a huge push for smoke tests running almost non-stop, but different understandings of what that means. One other that annoys me is “escapes”. I always understood that as bugs found in released software, but my work uses it to describe anything that has been pushed.


The box was opened decades ago and I try to close it!! :smiley:
I like to read such examples!
Because they confirm my observation that we have problems here.

Can I quote you in my article? Do you prefer to stay anonymous, or shall I write your name, Richard?


Does anyone have experiences where this was not a thing? I think the most common, highly varied interpretation is Regression Tests / Testing. Another very ambiguous concept is Test Lead.

As a slight side track, another problematic term is Risk; not necessarily ambiguous, but very commonly misunderstood.


Definition of Done - it’s probably the first thing every team needs to get good solid agreement on. Some things are never truly “done”, but everyone needs to understand that done really means “push it out the door”. Just one example, but you will be surprised how many teams can argue over it for years.


Can you elaborate on that? That is something I have not experienced.
I understand that the concrete risks depend on the context. Different people have different priorities and needs, and judge risks differently by that.
But at least it should be clear what a risk in general, as an abstract concept, is? No?


I know this sight very well …
Sometimes I think a DoD gets written first, and is also the first thing in the bin.
I see many outdated DoD documents where the teams were already discussing new entries, but seldom write them down.


I suspect @ola.sundin is pondering how there are multiple risk sources, and so one word is a bad choice. Risks can be internal or external, and we tactically deal with those 2 source types so differently, that they don’t fit into one basket really. Risks can be to the business as well as to the product health or lifecycle, and we once again fail to disambiguate here.


Wait. If a DoD does not get respected by the team, then the team, and especially the Scrum Master, have to make sure the team agrees on and iterates the DoD. It is not the holy grail, but a team should agree on a DoD, and within the Sprint Retro throw away the things that did not work and add things that will, but not throw away the DoD itself.
Otherwise the lack of quality keeps growing…

Just my 2ct on this. But this could also be put under Agile Testing understanding and phrasing.

In my earlier jobs I had a very bad view of Agile and Scrum methods, because the understanding was different and it was never really shown how to use them properly…


In general I see a DoD as an agreement between people and a document is just an artifact of that.
Sometimes people agree on something but they get incentives to work against that.

Whatever we note about testing in a DoD document, it is often ignored and not understood. I assume most agreed only nominally, out of politeness rather than out of understanding the urgency.

I agree with your statement. I find it hard to enforce a DoD if people do not get the point.


The terms “integration” and “system” test are used totally indiscriminately. To me, integration testing is testing part of a system fitted into a framework that enables you to test the parts (not the system functioning as a whole). And a system test tests the system as far as possible, but not as the final production system. I hear these terms being thrown around for all kinds of tests.
Same goes for the term “agile”. I heard it being used by a team just because they had dailies…


This one regularly takes the cake; so often that I suspect we just gave up trying. :moon_cake:


Happy to be quoted :slight_smile:


Honestly, I try to get a clear definition of what the folks at my place of work think the terms mean, and work within that. It’s much easier than trying to change the existing mindset, and usually said mindset grew that way because it works for them.

It’s like older software. It often starts as clean, well-designed code. Then it grows. And grows. And starts to spawn tentacles and become self aware, and before you know it you’ve got a nightmare behind that shiny UI…


So let’s start with some ISO-standard way of talking and thinking about risk. You have hazards, things that are “dangerous”, for instance a razor. A risk is then the product of the probability of an outcome happening and the severity if it does. For instance, I can make a small cut while shaving, which is somewhat likely but not that severe. Or I can cut my throat while shaving, which is very unlikely, but very severe. These are two risks stemming from the same hazard.
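
The razor example above can be sketched as a tiny calculation. The probabilities and severity scores here are made-up illustrations, not real data; only the probability × severity structure comes from the post:

```python
# Two risks stemming from the same hazard (the razor), as described above.
# Risk score = probability of the outcome * severity if it occurs.
# All numbers are invented for illustration.
razor_risks = {
    "small cut while shaving": {"probability": 0.33, "severity": 1},     # likely, mild
    "cut throat while shaving": {"probability": 1e-6, "severity": 100},  # rare, severe
}

for name, r in razor_risks.items():
    score = r["probability"] * r["severity"]
    print(f"{name}: risk score = {score}")
```

Note that the rare-but-severe outcome and the likely-but-mild outcome get very different scores even though they share one hazard, which is exactly why lumping them together as "the razor risk" loses information.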

So a common interpretation is that we either refer to the hazard as the risk, or we ignore either the severity or the probability. Severity is easier to estimate correctly, so a good strategy is to favour high severity over high probability. Say someone has changed something in a part of your application that is not core, but is commonly used by a large portion of your customers. This is a hazard. From it you have at least four very different risks, with different severities and different probabilities:

  • this functionality breaking for a portion of your customers
  • this functionality breaking for all customers
  • core functionality failing for a portion of your customers
  • core functionality failing for all customers

But since we confuse the hazard with the risk, we forget to manage them separately.

You can spot when this is occurring when people say “There is no risk with this change”. What they typically mean is that the probability of any risk is low. More rarely they mean that even if there is a problem, it does not matter much, i.e. the severity even on failure is low. And sometimes it means “I cannot think of why this change would cause anything to fail”, which a lot of testers I have worked with are good at spotting. These developers tend to get a bit of a reputation for being sloppy.

Another problem is that we have a hard time understanding the scale when it comes to probability. According to Thinking, Fast and Slow, going from a 0% chance to a 1% chance feels like a far bigger leap than going from 50% to 51%. So commonly we stay with the absolutes: this is either impossible, or almost guaranteed. Living with the risk of something mildly severe and mildly probable is difficult. Also, the probabilities of a small cut, versus a cut that will cause bleeding, versus cutting my throat sit at very different places on the scale. As in, one happens 1 in 3 shaves, the next 1 in 20 shaves, and the last 1 in … shaves. Now, a 1-2-3 category might not be good enough for you to use. If possible, using math is good here. Say a total outage for all customers for 1 hour costs $1,000,000, where a partial outage for 1% of customers costs $1,000. Then your 33% * $1,000 = $330 can be compared to 0.000001% * $1,000,000 = $0.01, to help you understand how you should react to these things.
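
The dollar comparison above can be written out as a small expected-loss sketch. The probabilities and dollar figures are the illustrative numbers from the post, not real data:

```python
# Expected loss of a risk = probability of the outcome * cost if it occurs.
# Figures are the illustrative examples from the discussion above.

def expected_loss(probability: float, cost: float) -> float:
    """Return the expected loss of a single risk."""
    return probability * cost

# Partial outage for 1% of customers: fairly likely, cheap.
partial = expected_loss(0.33, 1_000)

# Total outage for all customers: extremely unlikely, expensive.
total = expected_loss(0.00000001, 1_000_000)  # 0.000001% as a fraction

print(f"partial outage expected loss: ${partial:.2f}")
print(f"total outage expected loss:   ${total:.2f}")
```

Putting both risks in the same currency makes the comparison mechanical: the mundane, probable risk dominates the dramatic, improbable one here, which is the reaction the numbers should drive.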