Ask Alan a Question About: Quality Without QA

Not always, but I’ve done it a bunch with great results (but keep reading). 90% of the time when I’ve worked with a team on BDD, we never wrote the automation. IME, the bulk of the value from BDD comes from just brainstorming and documenting the behaviors of the software before it’s written. The automation is cute, but the alignment and design pluses that come out of the initial discussions are where almost all of the value is.

Yes. Absolutely, yes.

I don’t fully understand the question. I guess in traditional siloed testing, there’s a problem introduced (a delay) when developers depend on testers to find their bugs. The back and forth (and back and forth) is a huge drag on delivery.

Now - some developers will say they are slower if they have to write their own tests. I have worked with hundreds (possibly thousands by now) of developers, and in every single case, the delivery process has (eventually) improved once developers fully took on testing tasks - even/especially if a tester paired with them or coached them.

It depends - I think the folks who moved into DevOps roles probably haven’t invested much in learning more about testing. The people who have moved into full-time test coach roles, however, have made huge strides. If your job is mostly about teaching testing, growth and learning about testing remain a huge priority.

Be willing and open to try anything. And then focus on trying new things and learning every day. Volunteer to do anything, and then after you’ve learned it, think of ways to optimize and improve. Keep yourself on a steep learning curve and find ways to add value.

A lot of buzzwordiness there, so in short: just be eager to learn and say yes a lot at first to keep yourself learning.

I’m really aligned with Jenny on this. There’s a book called The Advantage by Patrick Lencioni that has a whole bunch of data backing this up.

For Modern Testing, I suggest metrics that let you know how well you are Accelerating the Achievement of Shippable Quality. That breaks down to things like the lists below (with a rough sketch afterwards of how you might measure them):

Accelerate Metrics:

  • Lead time for changes
  • Deployment frequency
  • Time to restore service
  • Change failure rate

And maybe things like:

  • Lead time for features (what’s the time from hypothesis to customer feedback)
  • Some measure of customer satisfaction. You can measure NSAT, but it’s entirely a lagging metric. Have some data that lets you know how successful customers are with your feature. For a big part of my business, revenue lost is a reasonable proxy for product quality. That doesn’t work everywhere, but coming up with a One Metric That Matters (from the Lean Startup) can be a good place to start.
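
The first four are the Accelerate (DORA) metrics. As a minimal sketch of how a team might compute them, assuming you can pull commit/deploy timestamps and incident records out of your pipeline (the record shapes and field names below are illustrative, not from Alan's post):

```python
# Rough sketch: the four Accelerate/DORA metrics from hypothetical pipeline records.
# Field names (committed_at, deployed_at, failed, etc.) are assumptions; adapt
# them to whatever your CI/CD and incident tooling actually records.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median
from typing import List, Optional

@dataclass
class Deployment:
    committed_at: datetime   # when the change was committed
    deployed_at: datetime    # when it reached production
    failed: bool             # did it cause a failure needing remediation?

@dataclass
class Incident:
    started_at: datetime
    restored_at: datetime

def accelerate_metrics(deploys: List[Deployment],
                       incidents: List[Incident],
                       window_days: int) -> dict:
    """Summarize the four Accelerate metrics over a reporting window."""
    lead_times = [d.deployed_at - d.committed_at for d in deploys]
    restore_times = [i.restored_at - i.started_at for i in incidents]
    return {
        "lead_time_for_changes": median(lead_times),
        "deployment_frequency_per_day": len(deploys) / window_days,
        "time_to_restore_service": median(restore_times) if restore_times else None,
        "change_failure_rate": sum(d.failed for d in deploys) / len(deploys),
    }

if __name__ == "__main__":
    now = datetime(2020, 6, 1)
    deploys = [
        Deployment(now - timedelta(hours=30), now - timedelta(hours=26), False),
        Deployment(now - timedelta(hours=10), now - timedelta(hours=4), True),
    ]
    incidents = [Incident(now - timedelta(hours=4), now - timedelta(hours=3))]
    print(accelerate_metrics(deploys, incidents, window_days=7))
```

The point isn’t the code, it’s that all four are cheap to compute once your pipeline records when changes land and when failures are restored.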

Thank you so much, Alan!