Does anyone have a good "go to" reference on testing terminology?

I’m sure most (or all) of us have been here. We’ve been having a few “misunderstandings” where we’ve said “teams will go do their own sanity testing” and that has been interpreted differently by each team. I’ve worked with so many different teams with different definitions of things like “smoke testing”, “sanity testing”, “regression testing” and “system testing”. This year I’ve heard the term “escapes” used in a completely different way.

Does anyone know of a good resource with simple, concise definitions of testing terminology?

There are plenty of great (albeit inconsistent) resources for “smoke vs sanity testing”, but I’d love to know if there’s a good glossary that I should bookmark.

5 Likes

This is a great question. Off the bat, I feel like I should have an instant answer to this. But I don’t. There’s nothing that I can think of that I’d feel comfortable recommending. I’m sure there is though. Will think more about this. :thinking:

EDIT: Perhaps the work of the various Ministry of Testing curriculums may naturally form its own version of an available glossary that could be published on the MoT site. Cc @mwinteringham @sarah1.

1 Like

Every faction has an equal and opposite detraction. I think it’s healthy that there’s no official glossary of terms, and it helps to remind us to know what we’re talking about independent of what we’re saying.

RST deals with this by having its own namespace, where terms like test and check have meanings localised to that context. I have to translate what people say into the way I think, like a second language. And not just between test schools: between companies, industries, even departments and teams, everyone has their own flavour of terminology and what it means to them and the people they work with. I recommend playing or watching something like Keep Talking and Nobody Explodes to see how domain language evolves.

If there is a need for standardisation then the glossaries have to be limited to those domains. Outside of those domains they hold no authority.

I also have bad definitions.

3 Likes

Brilliant points.

In the past I’ve worked with a smaller number of people so I’ve been able to do the “translate” as you say. We could chat about what we’re meaning as well. However, now I’m working with a much wider group (made up of 3 companies merged together, across 4 countries) with people that I don’t know, and it is clear things are getting lost in translation. We also have this bizarre thing where people are clearly talking about different things but everyone goes along with it. My hope is that we can gain a common understanding of what we’re saying, rather than being “correct”, if that makes sense.

Also, the bad definitions - LOVE IT!

1 Like

ISTQB has a glossary, but the definitions are sometimes overly complicated. If no one else has a recommendation, you could always check there and confirm your understanding of a term with your team. I’ve found that there’s no true industry standard for many of these terms.
https://glossary.istqb.org/

1 Like

I am working on a set of articles like this one.

Maybe I will make a full list at the end, like a glossary.

1 Like

From my perspective, there is no standard for these naming conventions. I can only understand what a term means based on the description given for it.

1 Like

I suspect that if Richard were to present some of those bad definitions as an icebreaker in conversations, it would quickly push the 3 disconnected companies to take him more seriously and shift to better communication for all the teams. Definitely agree, Chris: keeping things domain-specific and limited in breadth to the important stuff is the way to go. Trying to fix the whole world is rarely the goal.

2 Likes

But why do we focus so much on the terminology? Because I feel that when we focus on it, we ignore the other important things that need to be looked into.

2 Likes

Every time I read through someone else’s definitions I find ones I just don’t like / don’t agree with. I think that’s the challenge and why this hasn’t been adequately done before.

For example, in your definition of Ad Hoc testing it says “where test scenarios are created spontaneously”. What does Ad Hoc testing have to do with test scenarios? Nothing. Ad Hoc is more of an approach than a technique.

Now that’s fine, if that’s what you understand. It’s also why we need to have conversations to understand people and why there is no true namespace everyone will agree on.

I think the focus on terminology is because such a large part of our role is communicating our findings. If others don’t have the same definitions of terms as we do, we aren’t communicating.

If we’re using terms differently than other people in our teams, things will go wrong. If we discuss certain topics with others in our communities and we use terms differently than others, at best there’ll be confusion.

This is why testing terminology keeps coming up as a topic here: every company (and sometimes every team) has their own usage of some of the more common testing terms, so we have to maintain mental maps of different terminology all the time.

2 Likes

When you compile the definitions for every letter, have you considered not ordering them alphabetically? (Controversial suggestion, I know.)
When I was studying and came across an alphabetised list, I found it confusing and meaningless because all the connections between terms were missing, and related/contrasting terms were flung about into different sections. An example that struck me just from your Part 1 was that nobody should be reading about alpha testing without beta testing right next to it. Also, it would be helpful to have some sort of categorisation that puts availability testing as a type of non-functional testing, whereas automated testing is a way of doing certain kinds of testing. Otherwise any beginner will find it very hard to get an understanding of how testing works as a practice.

The easiest way to make me really hate a team is to suggest doing an “alpha release”, because that just “loads” 2 already “loaded” words together and is ripe ground for misunderstandings. Most places have no concept of “beta software”, and their definition of “alpha” becomes less like a closed alpha and more like a public beta, to give just one example. But being clear on how your SDLC functions is key to even starting on a glossary of terms and language.

Each team/organization likely will have some variations of the same term. Heck, I think even within the same team, there might be variations as well (something as simple as a new member joining).

This is why I think documenting the scope of work and the actual work done is important. And then communicating the scope and results to the other parties.

2 Likes

Ah yeah, that’s an interesting question and observation. Nice idea to experiment with not doing it alphabetically and instead grouping where grouping is appropriate/helpful.

Wouldn’t this also be part of domain driven design, in a way? Having a clear understanding of the set naming convention and adhering to it. I get that domain driven design can be ultra-specific down to development, but couldn’t the same principles apply? We could have some overarching terms in a broad sense, but wouldn’t each person’s domain modify and adapt the language to fit the needs and understanding of the area?
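To make the parallel concrete, here’s a minimal sketch of domain driven design’s “ubiquitous language” idea applied to test terminology (all names are hypothetical, not from any real framework): the team’s agreed meanings live in one bounded context, and anything outside that vocabulary simply isn’t expressible.

```python
# A minimal sketch, assuming a team that has agreed on its own local
# meanings for a few test scopes. All names here are hypothetical.
from dataclasses import dataclass
from enum import Enum


class TestScope(Enum):
    """The scopes THIS team has agreed on; another team's "smoke" may differ."""
    SMOKE = "smoke"            # build boots and core endpoints respond
    SANITY = "sanity"          # quick unscripted pass over the changed area
    REGRESSION = "regression"  # re-check of previously working behaviour


@dataclass
class TestRun:
    name: str
    scope: TestScope  # only terms from the agreed vocabulary are accepted


run = TestRun(name="checkout happy path", scope=TestScope.SANITY)
print(f"{run.name} is a {run.scope.value} run")
```

As in domain driven design, the vocabulary only has authority inside its bounded context: a neighbouring team could define a TestScope with entirely different members and neither would be “wrong”.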

I did also find this site that has a big list of testing terms and definitions. I haven’t really looked into this site to know if it’s worth anything, but maybe this can help guide someone in the right direction? A glossary of testing terms

domain driven design

We could put on those glasses to look at language domains, but there are a lot of domains. The way any two human beings communicate could be a domain, as two friends have almost their own language and expectations from years of shared conversations and interests and tacit understanding of value and consent. Even an interaction of two strangers becomes a confluence of two particular personalities and worldviews. Two people talking at home might be one domain, and those two people talking at work might be another domain, and again for down the pub or in a meeting or based on who else is listening.

If we want to we could say that there is an enormous set of domain-dependent heuristics that guide test term communications within any specified domain or subdomain. But I think we’re then saying “people say what they need to, when they need to, how they need to, to whom they need to”, which seems to fight against the affordances of naming conventions.

So let’s get down to cases. What concrete problem is solved, in the context of testing terms used within a business, by naming conventions that is not solved by clarifying statements in communication?

I haven’t really looked into this site to know if it’s worth anything, but maybe this can help guide someone in the right direction?

When I’ve been searching these glossaries I tend to look up the same things, because I inevitably find that I have problems with certain terms.

Automated testing: Automated testing describes any form of testing where a computer runs the tests rather than a human. Typically, this means automated UI testing. The aim is to get the computer to replicate the test steps that a human tester would perform.

So this seems to gloss over that testing cannot be automated, and a computer “running a test” is fundamentally different from a human “running a test”. It conflates a test in the sense of the act of testing, and a “test” as an artefact. It’s asking us to try to do the impossible.

Test: A test is the specific set of steps designed to verify a particular feature. Tests exist in both manual and automated testing. Some tests are extremely simple, consisting of just a few steps. Others are more complex and may even include branches. At Functionize, tests are produced by our ALP™ engine. See also Test plan and Test step.

This isn’t what test means to me at all. Testing is something done by a thinking human, and a specific set of steps is an incredibly limited example of a subset of testing. A set of instructions on how to look for a particular problem isn’t a test, but a person attempting to follow those instructions is testing.

The rest of the thinking is much aligned with this particular view of testing, where testing is a series of robotic actions codified into test cases and performed with tools as a replacement for human thinking, whereas my view of testing puts the human tester at the controls and all of the codification and tooling as part of their responsibilities.

Okay, here’s my attempt to move things towards a reference for understanding testing.

Heuristics for Linguistic Formalisation of Testing Terminology Made Simple

If we create a standard of language, we create a standard of thinking, which creates an ideology at that level of the specificity of its standardisation.

The terminology-ideology heuristic
The degree to which we enforce linguistic formalisation should match the degree to which we wish to impose shared ideology.

Such shared ideology may be found within one company, rather than across testing communities, so it would accept a more standardised language. The attempt to impose shared ideology across schools of testing is clearly not going to work, and so the linguistic formalisation between them fails more easily.


Also the use of language is a matter of consent. I agree to use the words you use within a particular context (a particular term might be used informally or need clarification based on the situation and how important we think it is), so a standard terminology is only possible where we agree on its use.

The terminology-consent heuristic
The degree to which we can enforce linguistic formalisation matches the degree to which we agree to the pragmatic use of that language.

This might be affected by other factors, such as authority. If I work for a company they have some authority over me, because I am paid by them to work as they see fit. If they mandate a particular language then I might consent to its use at some level because of that agreement. Between test communities, however, I might choose to challenge the use of language because they hold no authority over me.

Agreement on what the words mean came up for me today. We were reviewing a requirement; some work had already started to SPIKE it, mock it up and experiment, so the requirement was something everyone was happy with. BUT the name of the requirement was completely misleading to anyone not involved in the project. The name implied it did more than what we agreed upon. And that is where words get most dangerous: when a word is either too broad or too narrow. I guess that’s why I love coding so much, because when you tell the compiler to do a thing… and also why I have only once ever written up a wiki of company terminology. Writing it down helps less than listening carefully does.

1 Like

I have this problem on lots of projects. I think it’s a mistake to try and find authorities on a subject, though. Ultimately teams develop their own culture and language and that language often includes using terms differently to the rest of the industry.

It’s much better to try and crystallize what the team means by a certain term (e.g. sanity testing) into a written definition and get everybody to agree on it.

I often write definitions that include a reference to the industry standard term and explicitly write down any differences.
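As a sketch of what that can look like (the field names here are just illustrative, not from any real tool), I keep each entry as structured data so the team definition, the nearest industry term, and the explicit differences all travel together:

```python
# Illustrative only: one shape a written, agreed-upon team glossary
# entry could take. All field names and content are made up.
from dataclasses import dataclass, field


@dataclass
class GlossaryEntry:
    term: str
    team_definition: str         # what WE mean when we say it
    industry_reference: str      # closest common usage, for orientation
    differences: list[str] = field(default_factory=list)  # where we diverge


sanity = GlossaryEntry(
    term="sanity testing",
    team_definition="A short, unscripted pass over the changed area "
                    "before handing a build to another team.",
    industry_reference="Often described as a narrow regression check "
                       "run after a small change.",
    differences=[
        "We time-box it to 15 minutes; many definitions don't.",
        "We don't require it to be scripted or repeatable.",
    ],
)
print(f"{sanity.term}: {sanity.team_definition}")
```

The format matters less than the agreement: a wiki table works just as well, as long as the differences from the industry term are written down explicitly.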

With testing terminology in particular, a lot of the industry terms are unconscionably vague so there’s little point trying to appeal to authority on, for example, what “is” or “isn’t” a unit test or integration test or whatever.