Ask Me Anything: Testability


(Heather) #1

Tonight we had the wonderful @ash_winter, co-author of 30 Days of Testability and Team Guide to Software Testability, for an Ask Me Anything on the topic of Testability. This is a topic that Ash is incredibly passionate and knowledgeable about, and it is of huge importance not just to testers but to the whole development team.

If we didn’t get to your question, or you’re catching up on the Dojo and have thought of some questions you’d like to ask, please share them here.


(Heather) #2

Ash referenced the 10 Ps of Testability from @robertmeaney

The unanswered questions:

  1. How to approach testing cloud technologies?
    1.1 Somewhat related: I’d like to hear your comment on any special considerations for testability in microservices?
  2. What are some ways to bring awareness of testability to a team? i.e are there any workshops you could recommend?
  3. Can you have testability without observability or vice versa?
  4. What are your good or bad experiences with testability? How did they affect the overall quality of the product/system/solution? What can we do to improve and influence this to be better?
  5. Would you say testability is a highly subjective term? What might be highly testable to one person might not be to another?
  6. How would you approach the business to convince them they need to focus on testability from the start of a project?
  7. Testability seems really important to you. Were there particular projects/experiences that led you to focus on testability?
  8. What was first, the tester or the testability?
  9. Is the following scenario effectively untestable: Say, you need to test page 5 in a sign-up process without effectively testing pages 1-4 each time you want to test page 5? (Let’s assume page 1 is tested). There are dependencies and API dependent responses required on each previous page. Does that mean that pages 2-5 are effectively untestable?
  10. In a high functioning development team automated testing is often viewed as not required or “secondary”, how do you overcome this bias?
  11. What does ensure testability?
  12. How do you respond to “testability is the tester’s problem”? Especially combined with a reluctance to insert test-hooks to a product because “It’s not the product, and we don’t want to have test-only features that will increase our overhead”.
  13. Can you give examples of how you could improve the testability with the architecture?
  14. How does traceability relate to the testability of a system?
  15. Who is the most handsome, charasmatic tester you have written a book with?
  16. In the agile world generally, we should reduce dependencies so that each released piece of work (story) is independent, testable and of value …
  17. What do you recommend for those stories that do have dependencies, and can bring some value to the customer when released, yet the true value comes when all of the related pieces of the MVP of that given feature are released? Should we take a more waterfall approach in such cases, to respect the main dependencies and to avoid customer dissatisfaction with what could come across as buggy initial releases (i.e., until the rest of the MVP pieces are released)?
  18. A bit off-topic - how do you know what journeys/features to automate?
  19. What steps do you take to align expectations on what all the “-ilities” you’re talking about, are?
  20. How would you prioritize testability?
  21. What would be the best team setup which enforces testability during development?

(Ash) #3

I’ll chuck some answers in here as I go, I’ll go from the top:

How to approach testing cloud technologies?

The cloud provides some interesting new challenges. At a previous company, we used AWS to autoscale for a very high load scenario in a short period of time, but AWS couldn’t scale fast enough. So those services had to be pre-scaled, defeating the point a little.

Just goes to show all the cloud in the world still has risk attached to it. Principles to use, from a testability POV:

  • Think about state and persistence. How can you set your app in the cloud into the right state (load balancer, nodes, auth) to begin testing?
  • Queues and events are hard to test, often needing high levels of control and observability. They are prone to race conditions and long conversations about eventual consistency.
  • Use something like localstack to have a local cloud environment to test on. Alternatives can be expensive, eroding the value of your testing.
  • Learn the AWS CLI and web interfaces. And the terminology too: buckets contain objects, where an object can be something like a CSS file.
  • Environments - YOU CAN HAVE A LOAD BALANCER in your test environments and test that too!
  • Waste - lots of cloud implementations are really wasteful, with large instances left running. Make the accountants love you too.
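The eventual-consistency point above can be made concrete: instead of asserting immediately on the result of an asynchronous flow, poll with a bounded timeout. A minimal sketch in plain Python, with an in-memory queue and worker thread standing in for a real queue service such as SQS (the names here are illustrative):

```python
import queue
import threading
import time

def poll_until(predicate, timeout=2.0, interval=0.05):
    """Poll until predicate() is truthy or the timeout expires.
    Returns the last predicate result (truthy on success)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    return predicate()

# A toy consumer that processes events asynchronously, standing in
# for a real queue worker.
events = queue.Queue()
processed = []

def worker():
    while True:
        item = events.get()
        if item is None:
            break
        time.sleep(0.1)          # simulate processing latency
        processed.append(item)

threading.Thread(target=worker, daemon=True).start()

events.put({"id": "evt-1", "type": "signup"})

# A naive `assert len(processed) == 1` here would race the worker;
# polling makes the eventual consistency explicit and bounded.
assert poll_until(lambda: len(processed) == 1, timeout=2.0)
events.put(None)  # stop the worker
```

The same pattern works against a real queue: the predicate becomes a read of whatever observable state the consumer produces, and the timeout bounds how long "eventually" is allowed to be.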

Microservices

Microservices speak to testability in that smallness and isolatability are desirable. The entirety is a different matter. There are three levels here:

  • Services
  • Integration of services
  • Aggregate of services

You need to have a strategy for the three levels:

Testing a single service in isolation is great, but services are often not used in isolation. Still, you can use this to get great early feedback.

Integration of services is where you find out about relationships, contracts between services and between teams. This is where your resilience and fault tolerance testing comes in. How decomposable is your system? Mock where appropriate but don’t rely on them too deeply; start them simple and don’t rebuild the services: a complex mock of a microservice is not a microservice.
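The "contracts between services" idea can be sketched even without dedicated tooling like Pact: the consumer records the minimal response shape it relies on, and the provider runs that contract against its own responses. A hand-rolled illustration (names are made up, and this is not the Pact API):

```python
# Consumer side: record the minimal contract this service relies on.
CONSUMER_CONTRACT = {
    "endpoint": "/users/{id}",
    "required_fields": {"id": int, "email": str},
}

def check_contract(contract: dict, sample_response: dict) -> list:
    """Return a list of violations: fields missing or of the wrong type.
    Extra fields in the response are fine - consumers only pin what they use."""
    violations = []
    for field, expected_type in contract["required_fields"].items():
        if field not in sample_response:
            violations.append(f"missing field: {field}")
        elif not isinstance(sample_response[field], expected_type):
            violations.append(f"wrong type for {field}")
    return violations

# Provider side: run the consumer's contract against a sample response.
good = {"id": 42, "email": "a@b.com", "name": "extra fields are fine"}
print(check_contract(CONSUMER_CONTRACT, good))        # → []
print(check_contract(CONSUMER_CONTRACT, {"id": "42"}))
```

The real tools add versioning, broker publishing and provider verification, but the testability win is the same: when a provider changes, the team finds out from a failing contract check, not from a broken integration environment.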

Finally, the aggregate, where the customer journeys often occur. Mapping (knowing) which services connect to form a journey will make you a legend. Sharing understanding is key to testability. Plus using a time series database to store aggregated events from all your services with a common id is pretty cool too.
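The common-id idea above is simple to sketch: every service stamps its events with a shared correlation id, and grouping plus time-ordering those events reconstructs the customer journey. A toy illustration (in practice the events would come out of a time series database, and the field names here are assumptions):

```python
from collections import defaultdict

# Toy event log: each service stamps events with a shared correlation id.
events = [
    {"corr_id": "abc", "ts": 3, "service": "payments", "event": "charged"},
    {"corr_id": "abc", "ts": 1, "service": "web",      "event": "checkout_started"},
    {"corr_id": "xyz", "ts": 1, "service": "web",      "event": "checkout_started"},
    {"corr_id": "abc", "ts": 2, "service": "basket",   "event": "basket_confirmed"},
]

def journeys(events):
    """Group events by correlation id and order each journey by timestamp."""
    grouped = defaultdict(list)
    for e in events:
        grouped[e["corr_id"]].append(e)
    return {
        cid: [e["service"] for e in sorted(evts, key=lambda e: e["ts"])]
        for cid, evts in grouped.items()
    }

print(journeys(events)["abc"])  # → ['web', 'basket', 'payments']
```

Once you can answer "which services did this journey touch, in what order?" from data rather than tribal knowledge, both debugging and test design across the aggregate get much easier.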


(Joe) #4

Number 9 caught my eye:

  1. Is the following scenario effectively untestable: Say, you need to test page 5 in a sign-up process without effectively testing pages 1-4 each time you want to test page 5? (Let’s assume page 1 is tested). There are dependencies and API dependent responses required on each previous page. Does that mean that pages 2-5 are effectively untestable?

The short answer, in my opinion, is no. But it also depends how proactive the team was in advocating for testability.
Based on the description, there are dependencies between pages and possibly from one page to the next. Once I’ve identified a dependency, I’ve also identified a testability improvement opportunity.
The testability question for me is: how might I make each page independent of the previous page? One answer is to provide the page the information it needs to operate. I submit that the page is not dependent on the API. Rather, it is dependent on having the data provided by the API. The method of information delivery to the page should not matter.
When I move to providing the page the information it needs to operate, multiple scenarios open up to me as a tester. I can manipulate the data for both positive and negative behaviors and exercise a lot of code on the page.
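Joe's point - that the page depends on data, not on the API that normally supplies it - can be shown in a few lines. In this sketch (illustrative names, not from any real codebase) the final sign-up step takes its state as a plain argument, so a test can construct that state directly instead of driving pages 1-4 first:

```python
def render_page5(signup_state: dict) -> str:
    """Render the final sign-up step from whatever state pages 1-4 produced.
    The page never fetches; it is handed the data the API would supply."""
    missing = [f for f in ("email", "plan") if f not in signup_state]
    if missing:
        return f"error: missing {', '.join(missing)}"
    return f"Confirm {signup_state['plan']} plan for {signup_state['email']}"

# Positive path: supply the state the API would have produced.
print(render_page5({"email": "a@b.com", "plan": "pro"}))
# Negative path: malformed state is now just another input to exercise.
print(render_page5({"email": "a@b.com"}))
```

With this shape, the API integration is tested once at the boundary, while the page's own behaviours, positive and negative, are tested cheaply and independently.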

Joe


(Ash) #6

Out of a sample size of 1, I would have to say it’s Robert Meaney. His only known flaw is that he can’t spell “charismatic.”


(Ash) #7

Ha ha! Nice. Testability doesn’t necessarily need testers and vice versa.

Testability without testers manifests itself in lots of ways, monitoring, tracing, debugging, beta groups and many more. Testers without testability, you can still test, but with limited effectiveness.

Pragmatically speaking, I think the tester often turns up in a team and then what is known as testability becomes more explicit, transferring from an ethereal concept to something more tangible.

There are a few techniques you can try that are not necessarily directly testability related per se, but can give you observability, controllability, decomposability and understanding gains:

  • Blameless Post Mortems (made famous by Etsy) where you discuss incidents and outages and stick to the facts, establishing what happened without what-ifs.
  • Draw the Architecture - this one is simple, give everyone on your team a piece of paper and ask them to draw the solution architecture on it. Compare and contrast. You get some amazing answers.
  • Adjacent Teams - have a think about which teams you depend upon and your relationship with them. How do you communicate? With a ticket system? How do you resolve bugs/issues/problems? Do they have environments you can use? Lots of downtime? Do they constrain your testability?
  • Try answering these testability questions - https://github.com/ConfluxDigital/testability-questions - rinse and repeat every few months and compare your score. Like the Spotify Health Check type model.

This is fairly wide ranging, hard to answer without a discussion. In the AMA I talked about some hard to test products I’d worked on. For me, build the relationships and understanding with the teams and products around you that directly impact how you test. Networks, customer support internal dev teams, external services, whatever your context is. After that, think about observability, controllability and decomposability. A handy guide I did is here: https://github.com/northern-tester/transmission/blob/master/03%20Exercise%203%20-%20Canvas/Handouts/Testability_Remedies_Improvements.pdf

I think observability is inherent to testing, and testability is about ease and effectiveness of testing, for testers at least. Think of the differences between monitoring and observability. Or to put it another way, things which you think might happen and investigating things which are UNKNOWN. Being able to investigate the unknown is the trait of a testable system and a big part of testing!

I mean, you can perform testing without observability, but it will likely be ineffective testing, which is annoying for stakeholders: you can’t describe bugs well for developers, or behaviours and their side effects well for product people.

Yes, it really is. That’s what makes it so much fun in my opinion. It requires you to “grow” a paradigm of it from various disciplines and sources. The world would be a dull place without such concepts.

Testability (like testing) is linked to value. You can have a shining oracle of a system which emanates testability, but if it goes way beyond what the value of the system is, then why do it? You try not to perform testing that provides no value; the same goes for injecting testability.

We have dependencies. We work within complexity, we should accept this and engage with it.

But you can make your life better:

  • Release behind toggles if you cannot split effectively. Test with a limited subset of sympathetic users, value and reward their feedback.
  • Make sure your contract with your dependencies is explicit for services - PACT type tooling to notify of changes for example.
  • Have breakers between your system and your dependencies. If they respond with errors, break connections and poll until you get a positive response. Fail in favour of the user.
  • Get to know the teams that provide your dependencies, certainly the internal ones. Find out how and what they test, it will give you real insight to their cadence of delivery, bugs, and all manner of things.
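The "breakers" bullet above is the circuit breaker pattern. A minimal sketch, assuming consecutive failures open the circuit and calls then fail fast to a fallback (favouring the user) until a reset window passes; real implementations add half-open probing, metrics and thread safety:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive errors
    the circuit opens, and calls return the fallback without touching
    the dependency until reset_after seconds have passed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()          # open: fail fast, favour the user
            self.opened_at = None          # window passed: try the dependency again
            self.failures = 0
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()
        self.failures = 0
        return result

def flaky():
    raise RuntimeError("dependency down")

cb = CircuitBreaker(max_failures=2, reset_after=60)
print(cb.call(flaky, lambda: "cached"))              # failure 1 → fallback
print(cb.call(flaky, lambda: "cached"))              # failure 2 → circuit opens
print(cb.call(lambda: "live", lambda: "cached"))     # open → fallback, dependency untouched
```

From a testability angle the breaker is also a great control point: force it open in a test and you can observe exactly how your system behaves when the dependency is gone.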

Taking a waterfall approach is a false flag here. Dependency mapping still needs to be done in agile ways of working. Think about risk, do some analysis, and build the smallest thing that gives you feedback.