Please explain testability to me

So, I’ve seen a few Tweets flying around about testability and I’d love if the community could chip in to explain what it is and ideas (good or bad) that exist around it.

It’s not an area that I’ve personally spent much time thinking about, but have definitely seen people talking about it more.

Anyone care to chip in with ideas on what testability is and what it means for software testers?


So for me Testability in practical terms means how easy something is to test.

When there is a lack of testability, it slows down feedback, reduces the quality and depth of our testing efforts, increases cost, destroys motivation and morale, and ultimately results in poor quality.

Conversely, focusing on testability can unleash your development team to do their best work, allowing them to focus on the work that matters. It facilitates:

  1. Faster development
  2. Earlier testing
  3. Broader and deeper testing
  4. Better, more robust automation

This focus allows teams to get changes into production safely and quickly, in a sustainable way.

I use the 10 P’s of Testability model to help teams identify all the factors that influence the team’s testing experience:


The people in our team possess the mindset, skillset & knowledge set to do great testing and are aligned in their pursuit of quality.


The philosophy of our team encourages whole team responsibility for quality and collaboration across team roles, the business and with the customer.


The product is designed to facilitate great exploratory testing and automation at every level of the product.


The process helps the team decompose work into small testable chunks and discourages the accumulation of testing debt.


The team has a deep understanding of the problem the product solves for their customer and actively identifies and mitigates risk.


The team is provided the time, resources, space and autonomy to focus & do great testing.


The team’s pipeline provides fast, reliable, accessible and comprehensive feedback on every change as it moves towards production.


The team considers and applies the appropriate blend of testing to facilitate continuous feedback and unearth important problems as quickly as possible.

Production Issues

The team has very few customer impacting production issues but when they do occur the team can very quickly detect, debug and remediate the issue.


The team proactively seeks to continuously improve their test approach, learn from their mistakes and experiment with new tools and techniques.


For me it’s about making a product more understandable. There are potentially a lot of complex processes that go on underneath the hood of a product and we should be able to expose or control those inner workings in order to help us gain knowledge. The more we understand, the more testable it becomes.

I recently spoke to my development team about configuring log levels on an application and having our dev environment log all the requests that come into it: header, body, and response, with any corresponding errors. At the time this was just a tech task that I thought might be useful, but going forward it has been such a powerful tool for my testing of other user stories, as I’m now able to fully watch the inner workings of the system and debug any problems.
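As a rough illustration of the kind of request logging described above, here is a minimal sketch using Python’s standard library. The function name, log format, and example request are my own inventions, not the poster’s actual setup; the point is that one structured line per request makes the system’s inner workings observable:

```python
import json
import logging

# Verbose logging for the dev environment only; production would
# typically stay at WARNING to keep noise (and storage cost) down.
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("request-audit")

def log_request(method, path, headers, body, status):
    """Build and emit one structured log line covering a request/response pair."""
    entry = {
        "method": method,
        "path": path,
        "headers": headers,
        "body": body,
        "status": status,
    }
    line = json.dumps(entry)
    logger.debug(line)
    return line

# Example: a request that failed validation is now fully visible to a tester.
log_request("POST", "/orders", {"Content-Type": "application/json"},
            {"item": "book"}, 422)
```

Because the line is JSON, it can be grepped or parsed later when debugging a user story, which is exactly the "watch the inner workings" benefit described above.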

We have to be careful, though. I’ve fallen down the rabbit hole of making something more testable when it adds little value, or is perhaps only a short-term fix. When we talk about testability, we need to understand whether there will be any long-term benefit to the effort.


I love your point about testability making a system more understandable.


The biggest aspect of “testability” for me is having clear acceptance criteria for the story. If I know what the system should do then I can test whether it actually does. Having these conversations early can also uncover any potential issues with the actual doing of the testing, e.g. no access to logs, database etc.


Great topic @rosie!

I encourage testers and test leads to advocate for testability in their projects. For me, testability is about improving control and observability of products. Where I have control of how and what I test and can observe behaviors easily, I spend less time setting up and executing while being able to watch what happens when a product executes its tasks (or have evidence of what happened).

We also introduced an assessment of testability along the dimensions of observability, dependencies, test data, environments, and testing capabilities. The assessment helps the testing team understand testability opportunities and advocate for improved testability.



Testing: The act of learning about a product through experimentation and exploration.
Testability: The capacity of the product to ease the process of learning, both on the “input” side (acting on the product) and on the “output” side (understanding how the product responded) - in the context of the environment and the testers themselves*.

* Silly example: If all logs are in Portuguese, the logs only improve testability to Portuguese-speaking people.

This is a question I ask myself from time to time.

“The capability of the software product to enable modified software to be tested [ISO 9126]. See also maintainability” - this is how the glossary of the ISTQB Certified Tester defines testability. Not really helpful for answering @rosie’s question.

To me, the product is not only the application itself but also its requirements and its software architecture (did I miss something?). So if we talk about testability, we should talk about the testability of requirements and any kind of product documentation, architecture, software design, and so on. And the testability of source code too, of course.

I think the 10 P’s are very helpful.

@devtotest “introduced an assessment of testability along dimensions of observability, dependencies, test data, environments, and testing capabilities.” sounds very good! Would you mind explaining how it works and/or sharing your checklist with us? Thank you in advance.

Have a nice weekend everybody!



Hello @janet!

Each of the dimensions has four or five categories that are scored on a scale of 0 to 3. A higher score in a category indicates some maturity in testability for that category. The scores from the dimensions are placed into Excel to create a radar chart. The chart helps testers and test leads identify testability opportunities.
For example, under observability there is an instrumentation category. Instrumentation is the availability and use of logs or monitors within a product. Logs and monitors give a product some introspection (an introspective product is a testable product!) that allows project team members to understand what a product did (e.g., a product might write “Checked credentials against the database” into a log). A score of 0 indicates little or no logs or monitors, and 3 indicates some maturity in using logs and monitors.
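A sketch of how such an assessment might be tabulated before charting. The dimension and category names loosely follow the post, but the exact categories and all of the scores here are invented for illustration; the averaging step simply gives each dimension one value for the radar chart:

```python
# Hypothetical testability assessment: each dimension has categories
# scored 0-3, where a higher score indicates more maturity.
assessment = {
    "observability": {"instrumentation": 1, "log_access": 2, "monitoring": 0},
    "dependencies": {"stubs_available": 2, "third_party_isolation": 1},
    "test_data": {"creation": 1, "diversity": 2},
}

def dimension_scores(assessment):
    """Average the category scores so each dimension gets one radar-chart value."""
    return {
        dim: round(sum(cats.values()) / len(cats), 2)
        for dim, cats in assessment.items()
    }

print(dimension_scores(assessment))
```

The resulting per-dimension averages are what would be plotted on the radar chart; repeating the assessment later and overlaying the two charts shows improvement, as described below.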

In practice, the assessment has proven valuable in facilitating the conversation around testability. While the scores provide a snapshot of the product at a point in time, project teams come away with some appreciation for testability in general and a raised awareness of what is possible. As the project progresses and testability improvements are added, another assessment may be made to identify more opportunities and demonstrate improvement over the original assessment.


Hello @devtotest,

thank you for explaining your testability assessment. Seems to be a helpful tool.

It seems to me that you and the team have a very clear idea about what testability means and how you can measure it for your product.

I will try to create a checklist or assessment like yours for my team/product.

I wonder if every member of your project team agrees on the need for the product’s testability - and therefore accepts and supports requirements/features to improve it?

I remember - long ago - a developer saying “I don’t write the code for you, to make it easier for you to test, but for the customer to use the application.”


I just compare testable applications with untestable ones. How do you feel about an application where you have no visibility into anything until it runs? What about when errors are happening and your users are complaining? How do you feel when every issue comes down to “we’re not sure what’s happening or why”? I helped deconstruct a monolithic service into microservices, and because of the lack of testability we had the “luxury” of production fires that went from daily, to weekly, and finally abated after a couple of months. That time did not go towards new features for our users.


Hi @rosie,

please find my input below:

Testability is:
(1) the degree that characteristics that provide for testing exist, and
(2) the degree to which feasible tests can be devised for determining whether the developed software will satisfy the requirements. (IEEE Std 610.12)

Testability covers the process of verifying, through inspections, tests, demonstrations, and analysis, that the designed and constructed product can meet the requirements.

Verification work is accomplished through comparison. That is, the characteristics of an element under inspection are compared to a predetermined standard. In making this comparison, Jeffrey Grady suggests applying four commonly accepted verification methods: test, analysis, demonstration, and examination.

Moreover, we should consider the following factors when determining testability:

  • Root cause of bugs
  • Environment and OS dependencies
  • Business objectives and processes


Very nice, Joe! I use a mnemonic I call CODS to help teams identify those testability attributes.


Hello @janet!

I wish you well on your journey! I believe testability is a worthwhile concept that testers/test leads should advocate for. I encourage you to collaborate with team members to understand their thoughts around it as well.

Initially, not every team member agreed with or even understood a testability assessment. This included both developers and testers. Under the definition I use, the benefits were obvious and provided value to the project team - I wasn’t pursuing this just for testing or testers. So, some education was required.

  • Improvements in observability reduce the time required to locate evidence of product behaviors and provide transparency into that behavior
  • Understanding dependencies helps to design tests that isolate specific behaviors while exposing the components and systems required for a product to operate correctly
  • Exploring the creation of, or search for, test data helps us understand that effort, can justify automated methods of construction, and clarifies the diversity of test data required to evaluate products
  • Investigating the availability, stability, and accessibility of environments helps keep the project moving forward
  • Understanding the capabilities of the testing team may identify opportunities for training, automation, or more people

While testability provides improvements in a product and its evaluation for the project team, I believe the dimensions above facilitate and encourage conversations both inside and outside of the team that give an introspective view not often considered. Testers and test leads have an opportunity to lead a project team from within through the advocacy of testability.

Go forth and Lead, @janet!



Most comprehensive but concise thing I’ve ever read on testability:

Thank you very much, @devtotest Joe!