Worried this is going to be a dumb question, but… Is there ever an argument for a feature being ‘untestable’, or can I, with all confidence, assume that there is always a way (even if it means more code)? Thank you in advance.
Hi there! I’d like to ask something about testability team management. At the moment I have my ‘testabilitiers’ distributed across multiple squads. They have their own roadmap (as a testability team) and a squad roadmap (as squad members), and I’m seeing a lot of misinterpretation of priorities.
What testability team format would you recommend?
Is there a difference between Testability for test automation and Testability for manual testing?
Other than developers, who else could we speak to about improving the Testability of an application?
What should we be asking for?
I once had to test a change that could not be controlled or observed via the UI.
My first thought was, ‘How am I supposed to test this?’ In the end, I spoke to the developer directly, discussed the issue, and we agreed that adding more logging would help with testing. We even worked out together where the extra logging was required.
How would you have approached this situation?
What do you both consider “Testability” to be? As I imagine it can be subjective, since we can test functions, ideas, requirements, and so on.
Put simply: how easy it is to test something (whether that is functionality, requirements, or ideas).
A question that runs through my head is:
“How do I know that the behaviour/what I’m reading etc. is correct?” This is followed by “How confident am I in knowing this is the correct behaviour?”
If I find something feels “hard” to test (and I feel I have sufficient knowledge in the product area) then in general, I would say the testability is poor.
Testability, at its simplest, is how easy (or hard) a system is to test. This leaves a lot to the imagination though. Which is what makes it so much fun for me. Subjective is definitely the word, so you need to drill down a little more.
For me, there is a social and a technical element. The system can have testability (mostly technical), but also the team needs the ability (and the will) to test (mostly social). If there is a gap or imbalance between those two then toil and frustration can occur. That gap is where we need to focus our efforts.
When asking developers to make their code more testable, I have asked for better feedback in the logs (if there is any feedback in the logs at all) and have also given them some potential examples of what I think “better feedback” would be.
But also just raise the matter with them before they start coding: “How do I/would I test this?” If you don’t ask, they may not even think about the testability of their code and how you could address it.
Lastly, when it comes to automatibility (how easy it is to use test automation to test your application), I would give them some examples of what I would like e.g. selectors, types of selectors, or show them how I would like to write my automated test and how the current code state prevents me from achieving my goal.
TLDR: Examples of what I think would help make their code more testable, then from there they get an idea of what I want and they can make suggestions etc themselves.
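As a concrete (and entirely invented) illustration of the selector point: asking developers for a dedicated test hook such as a `data-testid` attribute gives automation something stable to target, where a styling-based selector breaks on the next redesign. A minimal sketch using only the Python standard library — the markup, attribute names, and selector strings are all made up for the example:

```python
# Hypothetical snippet of application markup, invented for illustration.
from html.parser import HTMLParser

SNIPPET = '<div class="col-3 x9f"><button data-testid="submit-order">Buy</button></div>'

# A selector tied to styling classes breaks whenever the layout changes:
brittle_selector = "div.col-3.x9f > button"
# A dedicated test hook survives refactors and redesigns:
stable_selector = '[data-testid="submit-order"]'

class TestIdFinder(HTMLParser):
    """Collects every data-testid in a document, i.e. the automation surface."""
    def __init__(self):
        super().__init__()
        self.test_ids = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        for name, value in attrs:
            if name == "data-testid":
                self.test_ids.append(value)

finder = TestIdFinder()
finder.feed(SNIPPET)
print(finder.test_ids)  # ['submit-order']
```

Showing developers a concrete list of hooks like this makes the request tangible: these are the handles automation will rely on, independent of styling.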
Why should developers care about testability?
- Test findings are more likely to match production - no nasty surprises. Less chance of rework later.
- They can better test their code themselves and get feedback that is accurate/makes sense
- Testability (or lack thereof) is a team issue not a tester’s issue.
- Not only should developers care about it, but the whole team, because if it’s poor then the whole team will suffer from these nasty surprises. When testing, you want test findings to match the real world.
I really like @poppulo_tester’s 10Ps.
To quote Rob Meaney from this link: Please explain testability to me
I use the 10 P’s of Testability model to help teams identify all the factors that influence the team testing experience:
The people in our team possess the mindset, skillset & knowledge set to do great testing and are aligned in their pursuit of quality.
The philosophy of our team encourages whole team responsibility for quality and collaboration across team roles, the business and with the customer.
The product is designed to facilitate great exploratory testing and automation at every level of the product.
The process helps the team decompose work into small testable chunks and discourages the accumulation of testing debt.
The team has a deep understanding of the problem the product solves for their customer and actively identifies and mitigates risk.
The team is provided the time, resources, space and autonomy to focus & do great testing.
The team’s pipeline provides fast, reliable, accessible and comprehensive feedback on every change as it moves towards production.
The team considers and applies the appropriate blend of testing to facilitate continuous feedback and unearth important problems as quickly as possible.
The team has very few customer impacting production issues but when they do occur the team can very quickly detect, debug and remediate the issue.
The team proactively seeks to continuously improve their test approach, learn from their mistakes and experiment with new tools and techniques.
For this, I try and keep in mind four practices to advocate for:
- To get feedback early you need to slice through the architecture, rather than build layer by layer. So build a small part of the whole application: the persistence layer, the API and the front end. Otherwise you defer all the feedback until the end, which is bad.
- Add logging and instrumentation for what matters, whether function or performance. This is where testers come in. Worried about performance problems? Add metrics. Need to know when a particular code path is triggered? Add an event. Ask for the information you need.
- Drive development with tests - consider minimal design to solve the problem. Less bloat, simpler to test. TDD can be a hard sell, but if you have a culture that drives design with tests and refactors often, you will have less (obvious) bugginess and you can explore for the really gnarly problems.
- Story Kick Off - Pairing - Show me what you’ve done - Demo - All of these are gold for more testable code. Lots of collaboration means less assumption and claims about what has been built, sharing that knowledge early is key. Also for testers, if a developer says ‘hey, have a look at this’ you SAY YES. Not, I’m too busy right now.
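To make the “drive development with tests” point concrete, here is a minimal sketch of that loop (the function and its behaviour are invented for illustration): the test states the required behaviour first, then the smallest implementation that satisfies it follows, with nothing speculative bolted on.

```python
# Test-first sketch: the test pins the behaviour before the code exists.
def test_discount():
    assert apply_discount(100.0, 0.1) == 90.0   # 10% off
    assert apply_discount(100.0, 0.0) == 100.0  # no discount is a no-op

# Minimal implementation: just enough to pass, nothing untested.
def apply_discount(price: float, rate: float) -> float:
    return round(price * (1 - rate), 2)

test_discount()
print("tests passed")
```

Because the design never grows beyond what the tests demand, there is less bloat to explore around, and the refactoring step stays cheap.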
I try and go for the following angles:
- You get feedback on your code quicker. Unit tests don’t have millions of dependencies, integration tests have stubs where they need them, acceptance tests are minimal but targeted. You can run all on each change to get feedback.
- You build it, you run it - A lot more devs are on call/support now. If you want that to go smoothly (no 2am wakeups) then a testable system is a must. If your system is observable (exposes its state), understandable (logs and metrics are meaningful) and decomposable (failure is managed and handled, rather than catastrophic) then it’s a whole lot better to support.
- Whole team testing - go for the selfish option too. If the organisation wants everyone to take part in the testing effort, then ask them to make it easier for themselves.
- Go beyond the devs - I can’t emphasise enough that your operations people (sysadmins, DBAs, application support) will benefit greatly from testability; we have a massive amount in common with our ops friends. Give them a hug. But ask first.
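As one possible shape for that “observable” feedback (the event names and fields here are assumptions, not any particular team’s convention), structured log lines give testers and on-call devs the same queryable signal, using only the Python standard library:

```python
# Sketch of structured logging: one JSON object per line, easy to
# aggregate, grep, and assert on, both in test and in production.
import io
import json
import logging

stream = io.StringIO()  # stands in for stdout/a log shipper
logger = logging.getLogger("orders")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(stream))

def log_event(event: str, **fields):
    """Emit a machine-readable event; field names are illustrative."""
    logger.info(json.dumps({"event": event, **fields}))

log_event("order_submitted", order_id="A-123", total=42.5)

# A tester (or a 2am on-call dev) can now query by field, not by regex:
line = json.loads(stream.getvalue())
print(line["event"], line["order_id"])
```

The same `log_event` call that confirms behaviour during exploration also feeds the dashboards and alerts that make support bearable.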
> Worried this is going to be a dumb question, but… Is there ever an argument for a feature being ‘untestable’, or can I, with all confidence, assume that there is always a way (even if it means more code)? Thank you in advance.
Not that I can think of. Given that requirements and ideas can also be tested, not just applications or the UI, I don’t think it’s possible for a feature to be “untestable” in every way.
The feature, however, may be difficult to test (the testability may be poor) and someone may mistake this for untestable.
In my current team, when I first started I would often be told that all of these features were “untestable”, therefore there was no point in me testing them and they should go straight to PROD. My first question to the developers was: what is your definition of “testable” and “untestable”? It turns out they thought only about front-end UI changes and not about logs, or the requirements being tested.
My advice to someone being told that a feature is untestable is to first learn what that person thinks “testable” and “untestable” mean. Once you know that, you can explain to them what actually is testable (and why), and then what can be done to improve the testability of that feature.
This is an exciting question! To have a team of ‘testabilitiers’ is a very progressive development; I would love to speak to you about it.
In the world of ‘Team Topologies’ this is known as an ‘enabling team.’ The problem here is that each person needs to sit in either a product development team or the testabilitiers team; trying to be in both will pretty much always create a conflict of interest.
Two things to consider here:
- An enabling team is supposed to be a change agent but if your testabilitiers are playing by the same rules as everyone else, what is really changing?
- How much are your product people (those with the budget) bought into the team’s existence? I would check that before continuing onwards.
Please check out https://teamtopologies.com/ and see if there is a pattern to make this happen. And I’ll DM you too.
> Is there a difference between Testability for test automation and Testability for manual testing?
Yes. And one thing to watch out for is that the testability for test automation can be poor but in terms of manual testing, the testability is good. And vice versa. Don’t just assume that both are on the same level.
When I learned about testability, I found automatibility was usually listed as a subset or one of the contributing factors of testability.
My understanding is testability for test automation tends to focus on “automatibility”: How easy it is to use test automation to run tests against this application.
Whereas I find testability for manual testing tends to focus on how easy it is for a human to test the application; to get feedback; to know the behaviour they see is correct etc.
Recommended reading: https://www.eviltester.com/2018/01/testability-vs-automatability-in-theory.html#testability-is-not-automatizability (I think Alan Richardson answers this question very well in this page).
I think you approached this situation really well - especially since you spoke to the developer and asked: “How am I supposed to test this?” It seems to me, you worked on the assumption that the change was testable, not that it might or might not be testable.
How I would’ve approached it: (to be honest, it’s pretty similar to the way you outlined it)
- If the changes cannot be controlled or observed via the UI, I would ask myself, and then the developer: where can I see the changes? How can I control them? How do I know the changes are correct?
- If there is any lack of feedback, I would ask for the logging to be improved and give them some examples of what I would expect and why. I prefer that developers feel ownership over their code, so I ask questions and make suggestions, but they would make the decisions on how to implement it.
- Lastly, I ask them whether there are any other applications (current or upcoming) we may need to change or update now that we have learned about this together.
This is a lovely question, which speaks to the overall responsibility for testability. It’s part of the whole product, not only the function of testers requesting help from developers. Although they are key allies.
Other roles would be Business/Data Analysts (more product insight makes it easier to test), Ops Engineers (sharing system diagnostics and customer usage), Product people (money, desire for timely feedback), and management (treating testability as a first-class strategic requirement).
We should be asking for:
- Controllability - feature flags, test data generation, disposable environments
- Observability - structured, aggregated logs on all environments to complement exploration, and dashboards that are readable and meaningful
- Decomposability - loosely coupled systems, so we can test early and isolate components to find problems
- Simplicity - simple architectures that aren’t a multitude of technologies and testers being in the room when these decisions are made.
These are big changes, but with massive benefits for all.
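As a minimal sketch of the controllability point (the flag name and checkout function are hypothetical), a simple feature flag lets a tester drive both code paths without a redeploy:

```python
# Feature-flag sketch: the flag state is the tester's control surface.
import os

# Read from the environment by default, but overridable at runtime,
# so a test run can exercise either path on demand.
FLAGS = {"new_checkout": os.environ.get("FLAG_NEW_CHECKOUT", "off") == "on"}

def checkout(total: float) -> str:
    """Route to old or new behaviour based on the flag."""
    if FLAGS["new_checkout"]:
        return f"new flow: {total:.2f}"
    return f"legacy flow: {total:.2f}"

FLAGS["new_checkout"] = False
print(checkout(19.99))  # legacy flow: 19.99
FLAGS["new_checkout"] = True
print(checkout(19.99))  # new flow: 19.99
```

The same mechanism supports safe rollout in production: the path under test can be switched on for one user, one environment, or one session at a time.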
I often feel “untestable” really means “with all the forms of testing we currently do and know about, we can’t imagine how to test this.” Dig deeper.
A final thought from me here.
Your system is hard to test. My experience over many years in testing lets me make that sweeping generalisation with confidence. Poor testability warps what we (and the rest of the software development world) think testing is and how it adds value. I think it’s one of the defining challenges of being a tester.
We can rise to this challenge though. Adding a focus on testability to your work will help you, your team and your organisation. Great testing comes from combining all those sharpened tester skills with enhanced testability. It is within our gift as testers to be catalysts for change. Tomorrow, ask the question: “How can we make this more testable?”
Thank you very much for your responses. It kind of confirms what I believed - fortunately it’s not something I’ve come across often.
Thank you Ash and Nicola for your replies and very useful, practical advice. Looking forward to applying it and bringing a better focus on testability to my own work and the team. I love your call for action, simple but powerful: asking the question “How can we make this more testable?”