Should We Just... Delete It?

Ian Sparks: Sorry, what is the definition of dogfooding again? I missed that.

Dogfooding is when a company uses its own products or services internally as a way of exercising them and learning about them faster.

Shailin Sheth: If we aren’t confident enough to delete the test, can we start by lowering its priority first and then delete it later?

As I mentioned in the talk, sometimes you face the perfect manifestation of the Endowment Effect: “what if this test comes in handy in a few months?!” You can definitely try disabling the test (or lowering its priority, as you say), and after a while you may find that people are more confident about deleting it, if after a few releases they didn’t really miss it in the first place.
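
A minimal sketch of the “disable first, delete later” idea, assuming Python’s standard `unittest` (the test and feature names are hypothetical): instead of deleting outright, park the test with a reason and a review point so the team can revisit the decision with more confidence.

```python
import unittest

def legacy_export():
    # Stand-in for the feature under test (hypothetical name).
    return "csv"

# Park the test instead of deleting it: the skip reason records why
# and when to revisit, so a later cleanup pass can decide with data.
@unittest.skip("deletion candidate; revisit after two releases without it")
class TestLegacyExport(unittest.TestCase):
    def test_format(self):
        self.assertEqual(legacy_export(), "csv")
```

If nobody misses the skipped test after a few releases, deleting it becomes a much easier conversation.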

Patrick van Enkhuijzen: Shouldn’t your tests always be small and focused, so they deliver value and are easy to maintain?

I’m always trying to drive my teams to follow that principle: small and focused! However, that’s easier said than done, and sometimes it’s not possible. Furthermore, I also believe that a complete automated testing strategy often requires the right balance between small, fine-grained tests and higher-level e2e or system tests.

What has been your experience in this regard?

Dapo Awoola: Do you still have redundant tests now that you have moved to CI/CD?

It happens, yes. I believe that redundancy has more to do with how well your teams are able to grasp the test coverage they have, regardless of whether CI/CD is in place. Also, when your test strategy merits a healthy combination of fine-grained tests and higher-level e2e tests throughout the different stages of your CI/CD pipeline, some redundancy may be hard to avoid.

Paul Marlow: Isn’t there a possibility to refactor a bad test so that you understand what it’s doing and can keep it maintainable?

Yes! Especially for tests that sit in the top-right decision quadrant I presented in the talk: lower the cost of a High Risk / High Cost test to move it to the High Risk / Low Cost quadrant.

Kalaai: Isn’t ranking tests on risk and cost (TCO) an expensive exercise in itself?

In some situations it may be, but you don’t have to start by ranking all of your tests! Maybe start by focusing on the tests that get people wondering why they exist in the first place.

Furthermore, even if the exercise is expensive, you only pay that “cost” a few times (or maybe even once). A low value / high cost test, on the other hand, has an ever-growing cumulative cost over time…
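
A hypothetical back-of-the-envelope version of that ranking (test names and the 1–5 estimates are made up for illustration): score each test by the risk it mitigates against what it costs to run and maintain, and only look at the worst ratios.

```python
# Rough team estimates on a 1-5 scale, not an exact TCO calculation:
# (test name, risk mitigated, run-and-maintain cost)
tests = [
    ("test_checkout_happy_path", 5, 2),
    ("test_legacy_report_layout", 1, 4),
    ("test_login_lockout", 4, 1),
]

# Lowest risk/cost ratio first: these are the deletion candidates
# worth discussing, so you never have to rank the whole suite.
ranked = sorted(tests, key=lambda t: t[1] / t[2])
for name, risk, cost in ranked:
    print(f"{name}: risk/cost = {risk / cost:.2f}")
```

The point is not the exact numbers but surfacing the handful of tests at the bottom of the list, which keeps the ranking exercise itself cheap.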

Stacey: How important would you say pass rate is when deciding whether to delete a test?

It’s an important factor, I would say.

If the test has almost never failed throughout history, what does that tell you? Has the feature been really stable (no other tests around it failing, no bugs detected in production)? Is the test really testing anything?

If the test tends to fail a lot, is it because it’s finding bugs or is it because it requires lots of maintenance work?
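
A tiny sketch of what looking at that history could mean in practice, assuming you can export per-run pass/fail results (the history here is invented): the pass rate alone doesn’t answer the question, but it tells you which of the two questions above to ask.

```python
from collections import Counter

# Hypothetical run history for one test: True = pass, False = fail.
history = [True] * 198 + [False] * 2

counts = Counter(history)
pass_rate = counts[True] / len(history)

# A near-100% pass rate may mean a genuinely stable feature, or a
# test that asserts nothing meaningful; a low one may mean real bugs
# found, or constant maintenance. The number only picks the question.
print(f"pass rate: {pass_rate:.1%}, failures: {counts[False]}")
```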

Maik Nogens: Would you say good monitoring can make the “delete or not” decision easier, due to faster/earlier reaction time in production?

It can definitely have that effect! Good monitoring combined with fast reaction ends up lowering the impact of something bad happening, thus lowering the risk that’s being covered by the test. And in this new world where testing in production and observability are gaining traction in our industry, I believe this idea will become even more relevant.

Jamie: How do you feel about deleting an entire test-running infrastructure when it is old and slow?

It’s definitely not unknown to me, but we should always evaluate all the risks being covered by such infrastructure and make sure we keep mitigating the important ones through other, better means.

Varsandan Csaba: Have you ever deleted a test just to improve the success of the passed tests?

Not sure I understood your question correctly, but I would say you shouldn’t delete a test just to reach a 100% pass rate for a set of tests. I have, however, been in situations where I addressed a failing test, figured out that it hadn’t found a bug, and ended up deleting it after evaluating its purpose (and value / cost ratio).

Carolyn Newham: Can you ‘Future Proof’ your tests by good design?

I believe good design can make your tests “top notch” when it comes to their Cost aspect. It can also influence how effective you are at mitigating the Risk (the goal of the test), but it won’t necessarily make them “Future Proof”, because there’s more to say about the Risk part. One example: as I mentioned in my talk, as time goes by, features may become obsolete or even unavailable to your users - this is a situation where we may delete even the most well-crafted test.

Bucyeyeneza Isabelle: Who has the right to delete a test?

I think this is very much contextual and is highly influenced by the way your organization functions. I like the idea of delivery teams owning the tests that pertain to the features they own and being accountable for them. This accountability means that they are ultimately also responsible for evaluating risk and its mitigations. That includes making decisions on deleting tests and making sure that they are being clear on the reasons for doing that.

Lonneke: I noticed your role is “Quality Owner” - this sounds really interesting! What does this mean?

Well, I have another talk just about that! :grin: Check it out!

Dave B: What if all tests ran instantly with zero cost? Would you still remove them?

Probably not, but I can’t think of an example of a real-world test that is instantaneous and has zero cost! :slight_smile:

Thank you @jrosaproenca! It was me, I like homework. :raising_hand_woman:

@maiknog here is the full list!

We have been doing cross-browser testing with BrowserStack for a couple of projects, using the full automation mobile edition. Whilst it’s not enough data to call it “definitive proof” of anything, we did come up with a couple of points from our experience:

  • We found very few bugs using automated testing (i.e. the UI elements behaved very consistently overall).
  • The automation was quite high maintenance for the BrowserStack tests as we investigated failures; we spent more time keeping the tests reliable through BrowserStack than finding real bugs.
  • We found more bugs through exploratory testing on each browser, those bugs being primarily layout and style issues.
  • Mobile (iOS and Android) was the same as above, although we did find more usability issues on real devices. Whilst emulators are good for look and feel and for automation, they don’t represent how a user actually uses the system on a mobile device.

So for us, the answer is “yes, cross-browser testing is still valuable”, but the question that remains is what the right balance is between effort, tooling and return.