Ask Alan a Question About: Quality Without QA

Our 3rd talk during the TestBash Home conference is “Quality Without QA” with @angryweasel.

Alan first spoke about this topic at TestBash Philly in 2016

And went on to create The Modern Testing Principles.

I’m always fascinated to hear experiences of companies that test without testers, as it often raises the question of “will testers become obsolete?” So I’m really interested to hear about Alan’s latest experiences.

We’ll be adding all unanswered questions from the talk here so if we didn’t get to your question, don’t worry, we will find you an answer :grin:

Alan, we need your book recommendations. Some we caught:
Radical Candor by Kim Scott
Explore It! by Elisabeth Hendrickson
The Pragmatic Programmer by Andrew Hunt and David Thomas

Unanswered Questions

  • Irene: You were saying you were pairing up with devs to improve their testing skills. At which stage of the development process would you mostly do that? And how do you do it in an agile team?

  • Dan Billing: Why do you think the ‘test/qa is a bottleneck’ idea persists? And beyond the Quality Coach route, how can we break down those misconceptions that still persist?

  • Amber: How do you advocate for quality in an extremely time-sensitive environment? My company does agricultural tech, and if we miss planting season, the feature has to wait until the next year, so there is tons of time pressure.

  • Dan Billing: How do test/quality specialisms fit into your models of quality without qa? e.g. security

  • Marllos Prado: What are the main personal skills and tools you think that people starting today in the QA area should look for if they aim to work at Microsoft or Unity, based on your experience with both companies’ cultures?

  • Antonella: We currently have a bunch of swimlanes in our sprint in Jira, “Testing” being one of them. In the “have the whole team test” approach, would we have to get rid of that swimlane or just assign the task for any team member to test?

  • Shey Crompton: Can you recommend any resources to broaden oneself on Quality Coaching skills?

  • Laura Kopp: Have any advice for remote quality coaching?

  • Ben Dowen: How do you ensure UX gets the right attention without Testers?

  • José Lisbona: How do you motivate developers to be better testers (build better vs build more)?

  • Rachel Gilbert: Do you have advice for testers/QAs on teams that heavily support external contributors? How can they pursue a similar level of testing from a wide variety of devs?

  • Lilla: Would this work on every kind of domain? Is there a product type limitation?

  • Christian Dabnor: Do you think the modern tester should be growing closer to the UI/UX team and injecting quality at that stage of design?

  • Nathan Owen: Any recommendations on pitching the Modern Testing principles to a traditional QA org?

  • Pradeep Soundararajan: Where does the culture shift begin from?

  • Roman Segador: Any advice for when you have almost completed the transition (all roles totally on board with the Modern Testing principles), but a few QAs are still resisting the change, generating noise rather than evolving in their roles to meet the new needs?

  • Nathaniel Lai: Any tips on how to stop being a bottleneck? Maybe some concrete examples of culture changing in a team

  • Santiago: In a fast evolving and changing industry, where people move around so often, we end up losing a lot of domain knowledge for the software we work on. In that situation, how do you champion quality, when neither the QA nor the Dev nor the Product person knows how things work?

  • Toby: How would you see the reduction in testers working in a regulated environment (FDA medical, etc.), where you must show independent testing and developers are not allowed to test their own code?

  • Ben: I’m trying to get my next role in testing, but I’ve only got 7 months’ experience from my last role. How do I prove/show my experience in testing to get my next role?

  • Ludmilla Chellemben: Is it not important to have another pair of eyes on the feature? Don’t you have deadlines? How about security, performance, cross-browser testing, and automation? Are the developers doing all that?

  • Ale Rodriguez: What was your biggest challenge as Quality Coach?

  • Masha S.: What role does domain knowledge play in the Quality Coach role? How important is it?

  • Natalia Shulyaeva: We just heard from Angie how it is not good to make the tester sit on two chairs and also be a test programmer. Isn’t it the same when we make developers and DevOps do tests? Isn’t it making them work for two?

  • ShobhaJayashankar: How do you get developers who are not willing to do testing to test?

  • PD: Does that mean the developer is doing the coding, unit testing, integration testing, and system testing?

  • Deb Collins: Do you try to get your developers to do behavior driven development?

  • Richard Forjoe: Curious to know after your testers dissolved into other teams, did they still learn about testing or did they focus more on other new skills? I’m assuming they’d focus less on improving their testing skills

  • Ralf Scherer: What would you suggest for a highly regulated domain like finance, where you need a lot of test documentation and still need test cases?

  • Richard Forjoe: What problems do you think testing/testers introduce to developers and the delivery process?

  • Emna Ayadi: Out of curiosity, I wanted to link Jenny’s talk in part 6 (“The only good quality metric is morale”) with this one: what are the most important metrics to consider in Modern Testing?

If you’ve got a question you’d like to ask in advance, why not ask it now?


Here are my questions:

I) How do you balance quality vs. velocity?
    a) Shippable quality means tolerable risk... how do you define that?
          Alan said: can’t break anything - which implies a very decent automated regression test suite - correct?
    b) Is this only feasible with monitoring and/or metrics?
II) On monitoring
      a) What are the main challenges in implementing monitoring?
      b) In which sectors/contexts does it apply well, and not so well?
      c) Any special resource worth singling out?

III) How does one manage when the dev team hasn’t yet acquired the tester mindset? Alan said “over time”, but... until then, how do you manage that?
IV) p50 estimates?
    a) Delivery dates set only by engineering - and only a 50% chance of meeting the ship date? Any notion how RARE this is? :-D

    b) I guess this only works for product teams; it’s clearly not feasible in many sectors/markets/contexts - or am I wrong?

Short answer is “all of the time”. We pair during planning to make sure we’re designing something that’s testable. We pair during implementation to help think of tests and testability early in the development cycle, and we pair throughout the project so we can think of areas where risk can be mitigated through better testing.

I’ve only worked in agile(ish) teams for the last six or seven years, and think MT builds on agile testing and lean software, so I haven’t really thought about what I’d do differently on a non-agile team.


History? I see some testers who are happy being the gatekeeper - or “making developers cry”. Fortunately (even outside of my bubble), a lot of the industry has moved past this. My LinkedIn inbox, however, is full of tool vendors and test companies who basically want me to use their tools/services to test quality into my products.

Even if you’re not a “quality coach”, looking for ways to improve the team - either through suggestions - or better, through leading frequent retros on the team may be one way to accelerate the team.

A story I didn’t tell today is that a tester on my team ~2 years ago was helping to lead a retro, and the team was reflecting on how they were blocked on testing a lot of the last few sprints. 5-10 years ago, the outcome may have been to ask for more testers - instead, this team came up with a plan on how to do more of the testing themselves.

I think there’s a lot to be said about being a collaborator with the team rather than an adversary.


Cut scope.

In my business, we really don’t have deadlines (other than stuff around CCPA or GDPR). However, if one of the teams proposes a feature with a far-off ship date, I work with them on scoping the project down into something smaller - with less impact - that can be shipped sooner. My goal (from my velocity hat) is that we ship 75% of the projects in my org in 6 weeks or less (from hypothesis to customer delivery).

In your context, this could mean that the first thing you complete is a small cross-section of the feature that has some value for the customer and is high quality. Then, you can build on top of that slowly, knowing that you could ship at any time. Worst case is that your customers get a partial feature that they can use, vs. an unimplemented feature that gives them zero value.


I. Another way to put this is “accelerate the delivery of customer value” - so tolerable risk means that value outweighs risk. In my case, our business relies a lot on our services working consistently - so if things break, we and our customers both lose revenue. A traditional way of mitigating risk is testing - but yes, we rely a lot on monitoring and alerting, as well as slow rollouts (e.g. deploying to 1% of our users, then 5%, then 10%, etc.) in order to minimize risk.
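The staged rollout mentioned above (1% of users, then 5%, then 10%, and so on) is often implemented with deterministic user bucketing. Here's a minimal sketch in Python - all names and the hashing scheme are illustrative assumptions, not a description of Alan's actual stack:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically assign a user to a bucket 0-99 and compare
    against the current rollout percentage."""
    # Hash user + feature together so each feature rolls out to a
    # different slice of the user base.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Widening the rollout (1% -> 5% -> 10%) keeps earlier users enabled:
# any bucket below 1 is also below 5, so nobody flips back off.
```

The key property is that the same user always lands in the same bucket, so raising the percentage only ever adds users to the rollout.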

But yeah - monitoring and alerting are critical for services - although I’ve had similar experience with mobile apps (albeit, quite a bit slower)


II. On Monitoring.

Main challenge is knowing what to monitor - some teams monitor too much, others not enough (sounds like testing, right?). Start by monitoring (and alerting on) the path that makes your company money. Expand that into monitoring customer behavior so you can tell if they’re seeing more errors, or if unexpected things are happening.

There’s a book called Release It! by Michael Nygard that covers a lot of good ideas - as does the Google Site Reliability Engineering book.
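As a toy illustration of “alert on the path that makes your company money”, the simplest form is an error-rate threshold check. This sketch is an assumption about shape, not a real alerting system - the function name and 1% threshold are invented:

```python
def should_alert(requests: int, errors: int, threshold: float = 0.01) -> bool:
    """Fire an alert when the error rate on the revenue path
    crosses the threshold (default: 1% of requests failing)."""
    if requests == 0:
        # Zero traffic on the money path is alarming in itself -
        # it usually means something upstream is broken.
        return True
    return errors / requests > threshold
```

Real systems (Prometheus, Datadog, etc.) add windowing and burn rates on top, but the core decision is this comparison.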

III. Same as my comment during Q&A - help the team care about quality. If I were the only tester on a team that didn’t know anything about testing, I would probably start by saying, “I’m not planning on doing very much - or any testing. But I will definitely help all of you test when needed”

If the whole team decided they needed my help and couldn’t do it on their own, I guess I’d set up office hours where they could book me for an hour to help them test. Eventually, I expect that they’d get tired of waiting their turn and try doing more testing on their own.

Regarding p50 estimates, I think this is from Mike Cohn (also famous for the “automated testing pyramid”), but since estimates are a guess, estimating with a confidence level of 50% works pretty well.

In fact, for us (looking at stuff we’ve shipped in 2020…) we ship 48% of our projects on or before the p50 date. I’m working with teams on cleaning that up, as well as reducing how late a few things are when they ship significantly late. As I mentioned, we take a lot of time to reflect, and we do some form of retrospective for anything that misses a p50 date by more than 30%, so we count on those opportunities to learn in order to get better at those estimates.
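The p50 bookkeeping above can be sketched with a few lines of Python - the history numbers here are entirely made up for illustration, not Alan's 2020 data:

```python
from statistics import median

# Each entry: (estimated_days, actual_days) for a shipped project.
history = [(10, 9), (10, 12), (20, 19), (15, 22), (30, 28), (12, 12)]

# How far each project slipped relative to its estimate.
slip_ratios = [actual / estimate for estimate, actual in history]

# With well-calibrated p50 estimates, the median slip ratio
# should hover around 1.0 (half ship earlier, half later).
p50_slip = median(slip_ratios)

# Fraction shipped on or before the estimated date - should be ~50%.
on_time = sum(actual <= estimate for estimate, actual in history) / len(history)
```

If `on_time` drifts well above or below 50%, the estimates are padded or optimistic, which is exactly the kind of thing a retrospective on a >30% miss can surface.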

Going to see if I can group a few here so I don’t have too many answers

Nice one Dan - I think Security especially is somewhere where a specialist is not only helpful, but necessary. Performance analysis may also fit into this category depending on the perf goals of the team.

That said, every tester (and developer) should know something about security testing and performance testing.

I don’t know if I have a relevant answer for Microsoft anymore - in general, they look for deep technical expertise, but it can vary from group to group.

For Unity, we have a pretty autonomous culture so we (in general) look for a breadth of knowledge, but also a lot of ability to work independently, to influence other people, and to be good communicators and collaborators. I don’t think there’s a particular set of skills and tools that would make you more qualified for a particular company - just learn the stuff that lets you find a job where you enjoy the work.

I think you know the answer. I don’t like “testing” swimlanes or columns. If testing tasks are included in the definition of done, then everyone knows that collaboration (and testing) are needed in order to complete the item, and good things usually happen as a result. If a developer knows that they can’t pick up their next ticket / item until the current item is “done”, then they’re going to be pretty invested in helping to get the testing tasks completed.

For the quality coaching aspect, Jerry Weinberg’s Secrets of Consulting and More Secrets of Consulting - as well as Getting Naked by Patrick Lencioni are all helpful. For more coaching advice (where I’m trying to change a behavior vs. address a skill gap), The Coaching Habit by Michael Stanier has been pretty helpful.

A great place to start would be the Quality Coaching Roadshow from Anne-Marie Charrett and Margaret Dineen

I mentioned in the talk (and just above) that sometimes I’m a consultant and sometimes I’m a coach. The biggest challenge in coaching is for me to avoid going into solution-mode too quickly. In coaching, I end up asking more questions and helping people learn to solve their own problems. It always goes better when I delay problem solving, but it’s a tough thing for me to remember sometimes.

I do all of my coaching remotely, so my advice is around engagement over conferencing apps. Keep eye contact, practice active listening, take extra time to try and connect at a personal level, and check in often.


How do you ensure UX gets the right attention with testers? In fact, if developers are responsible for reviewing UX, they’re (in my experience), much more likely to keep logic out of the UI and increase testability.

For full disclosure, I often use our enterprise support team to help when I feel like I need more eyes on a new user interface or workflow.

As I said in the live Q&A, I’ve found the goal is to get them to care more about quality - the “better at testing” part comes along quickly once that’s in place. As far as how I get them to care, it can be a gradual process, but it starts by building trust with them and building on that trust while asking questions that get them to think more and more about why quality is important to them.

I haven’t dealt with this situation personally before, but I think I’d start by establishing something like an SLA (service level agreement) that says when code is delivered, it should have some set of criteria (tests, analysis, documentation, etc.).

A few people in the talk mentioned my ping-pong metaphor from TestBash Philadelphia in 2016. The story there was that the back and forth from dev to test to dev to test (…) is inefficient, and that’s why I prefer a much more collaborative approach.

In the case where maybe you can’t collaborate, I’d try to find a way to do something up front that reduced back and forth communication between remote teams.

Would love to discuss more if I’m off the mark here - I’ll think about it in the meantime.


Yes, but with some possible limitations. If you’re building a medical device that keeps someone alive, your definition of “shippable” is quite different than an app for your phone that tells you where the nearest burrito is. MT is based heavily on lean principles - which have been applied to a variety of products.

Just focus on improving the business and adapt as needed for your context.

Remember Principle 1 - Our priority is improving the business. I don’t know your context, but if UI/UX is a big part of your business - or if they’re slowing you down, then yes - work with them as closely as you can.

One way that I know teams have done this is to have a team viewing of my TestBash Brighton talk in 2018 - and then have a discussion about what it could mean for the team. The best starting point I’ve seen is to just begin with trying to improve the quality culture. That generally leads to more/better retrospectives - which often end up highlighting bottlenecks in the system - which then usually leads to improvement in a lot of different places.

Change is hard, so just ask your traditional QA org if they think there’s a better way - or if they can think of some things that would improve the software and make the delivery process more efficient.


It depends, but I think testers (or those with a background in testing and quality) are well suited to lead the quality culture improvement. Of course, it also requires some leadership skills to do that.

In some cases, product leadership may need to lead the culture change - the Leading Quality book by Ronald Cummings-John and Owais Peer is a great place to get ideas on where to start.

I don’t recall ever being part of a culture change where there wasn’t resistance. My advice is always the same - talk to the people who are feeling uncomfortable - acknowledge the discomfort and then ask them why they’re uncomfortable. Then, ask them what would make them feel better and see where it goes. Maybe they’re worried about losing their job - or what the future entails for them - if that’s the case, explore ideas with them and see if you can find a path forward they feel better about.

Short answer is, use the Theory of Constraints.

Longer answer - first, identify what the bottleneck is - or what the biggest bottleneck is - e.g. spending too much time verifying pull requests. Remember, the output of a system is limited by the biggest bottleneck, so if you start with the biggest bottleneck, any improvements to it improve the entire system.

Then, brainstorm on ideas to address or mitigate that bottleneck. Find small, quick things that make it (you) less of a bottleneck - e.g. pairing, sharing test ownership, or whatever fits.

Then, look at the system again, see if the bottleneck has moved, and continue.

All of this is based on a part of the Theory of Constraints called The Five Focusing Steps - some internet searching on that may provide some more insights and ideas.
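The “identify the biggest bottleneck, improve it, re-measure” loop above can be made concrete by measuring how long work items sit in each pipeline stage. A minimal sketch, with entirely invented stage names and numbers:

```python
# Average hours a work item spends in each stage of the delivery
# pipeline (hypothetical measurements for illustration).
stage_hours = {
    "design": 4,
    "implementation": 16,
    "pr_review": 30,  # waiting for review dominates in this example
    "testing": 12,
    "deploy": 1,
}

# Step 1 of the Five Focusing Steps: identify the constraint.
bottleneck = max(stage_hours, key=stage_hours.get)

# Steps 2-5: exploit/elevate that stage (e.g. pairing, shared test
# ownership), then re-measure - the bottleneck usually moves.
```

Improving a non-bottleneck stage (say, shaving the 1-hour deploy) changes nothing about system throughput, which is why the biggest stage gets attention first.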

This is a big reason why principle 5 says, “We believe that the customer is the only one capable to judge and evaluate the quality of our product”

Focus groups, case studies, and customer feedback are the best ways to make sure you are building the right thing for the customer. Domain knowledge is important, but as you said, people move around a lot. That said, I (and others in similar positions) have hired domain knowledge experts (typically on a short-term basis) from time to time in order to get additional insight. Having them onsite with the team shortens the feedback loop - especially when building products where it’s difficult or slow to get early feedback from customers.


Even though regulated industries have rules you need to follow, you can still focus on improving the business, improving the quality culture, and all of the other MT principles - including - and maybe especially the “everyone can test” principle.

There may be rules that test cases need to be documented - and even that testing needs to be done by someone other than the implementer. That’s OK. You can work with developers to write test cases, and then pair with other developers to perform the tests if needed. I see plenty of opportunities for collaboration, and for focusing on eliminating bottlenecks and improving the business.

Also important to remember - the goal of what we talk about with MT is NOT to eliminate or reduce testers - we’re just saying that in a whole lot of contexts, a team following the MT principles may get to a point where they realize they need fewer and fewer dedicated testing specialists. That could easily happen in a regulated industry - but certainly doesn’t have to.


The biggest challenge is simply (?) that coaching is really hard and is different in just about every situation. More specifically (and this addresses Masha’s question a bit) is for me to learn that sometimes, my job is to help people learn and discover new ideas - or to change behavior (coaching) and that sometimes my job is to offer advice or collaborate (consulting). The biggest challenge is switching seamlessly between these two hats.

Domain knowledge would generally play a bigger role when I’m wearing the consulting hat. Domain knowledge is, of course, valuable - but if I didn’t have it on my team, that’s something I can usually find in a contract position if needed.

I think Angie is awesome - but I disagree with this point. I’ve never liked the idea of separating “testers” from “test automators” - which may be why I have found it makes so much sense to test your code as part of developing it. It takes some practice and some coaching, but I’ve found time and time again that developers can write excellent test automation, and that they can do pretty good exploratory testing as well.

Well, in my org, they are expected to do testing in order to remain employed, but typically, if we look at bottlenecks, one that comes up often is “waiting for testing” - if a team is serious about improving and optimizing, finding a way to share testing - often starting with pairing is usually an obvious first step.

Of course a second pair of eyes helps - but in my orgs for the last decade or so, those extra pair(s!) of eyes have come from other developers during the pull request review. In addition to reviewing the implementation code, teammates involved in the review run the automated tests, step through the code, and try to find bugs or missing tests.

The extra eyes don’t always have to come from a dedicated testing specialist.

To your other points, we do involve specialists for some security testing (e.g. penetration testing). I’ve found that cross-browser testing and cross device testing owned by the development team ensures that the development team writes highly testable and portable code. It saves a massive amount of time when they own their automation, as they have to write both the code and the tests in a way that is highly maintainable. If someone else is responsible for that stuff, they tend to (IME) be lazy.