Should We Just... Delete It?

The first talk for the TestBash Home conference is the highly anticipated “Should We Just… Delete It?” with @jrosaproenca.

Deleting automated checks can seem daunting, especially if you’re new to a team!

Check out the Power Hour that João did for some more insights: Power Hour - Should We Just... Delete It?

We’ll be adding all unanswered questions from the talk here so if we didn’t get to your question, don’t worry, we will find you an answer :grin:

Unanswered Questions:

  • Shaikh M. Rahman: We create tests that are mapped to Business Requirements / User Stories. Most of the time they are part of a Regression or Smoke test suite. If the test cases are well defined and the requirements have not changed, then why would we delete a test?

  • Chitra: If the test covers an area of the business which is no longer in use, why would you still retain the test? Would you not delete it and reduce the running time of the suite?

  • Mike Mc: What is the name of the book João mentioned in his talk on test deletion that had “continuous testing” in it?

  • Ana Milutinovic: What is the argument for creating automated test for a bug in production? In the context of automation strategy, how is this beneficial?

  • Honey Chawla: What should be the frequency of the exercise to figure out the worthiness of tests? (Let’s say there are 5k+ UI tests.)

  • Louise Gibbs: Is it worth keeping tests that cover little used but essential features? For example, warnings for dangerous products.

  • Louise Gibbs: How do you gain support from business for deleting tests?

  • Connor Munro: When Dogfooding, how does everyone know what is or isn’t a bug?

  • Dan Billing: How do you think your principles can be applied to what might be referred to as ‘non-functional’ tests that might not be included in an automation framework?

  • Anita: Java applications backend API test automation for beginners: where to start? Your experience, learnings, DOs and DON’Ts?

  • Christian Dabnor: Do you think test coverage has too much emphasis put on it, maybe because it is easy to metricise? How do you go about changing this?

  • Ian Sparks: Sorry, what is the definition of dogfooding again? I missed that.

  • Shailin Sheth: If we aren’t confident to delete the test can we start with lowering the priority first and then delete it?

  • Patrick van Enkhuijzen: Shouldn’t your tests always be small and focussed? So they are delivering value and are easy to maintain?

  • Dapo Awoola: Do you still have redundant tests now that you have moved to CI/CD?

  • Paul Marlow: Isn’t there a possibility to refactor a bad test so that you understand what it’s doing and can keep it maintainable?

  • Kalaai: Isn’t ranking tests on risk and cost (TCO) an expensive exercise by itself?

  • Stacey: How important would you say pass rate is when deciding whether to delete a test?

  • Maik Nogens: Would you say good monitoring can make the decision to “delete or not” easier, due to faster/earlier reaction time in production?

  • Craig: You talked about different reasons for tests to be created. (E.g. testing if a bug has been fixed) Do you have a standard to label them somehow so in the future you know why they were created?

  • Jamie: How do you feel about deleting an entire infrastructure for running tests when it is old and slow?

  • Varsandan Csaba: Have you ever deleted a test just to improve the success of the passed tests?

  • Shuja: What do you have to say to smaller teams who are still growing, in a startup environment which is chaotic 9 times out of 10?

  • Carolyn Newham: Can you ‘Future Proof’ your tests by good design?

  • Karlo Smid: What is the main purpose of test automation in your organization? What is the main risk that you try to mitigate with test automation?

  • Bucyeyeneza Isabelle: Who has the right to delete a test?

  • Ashton joseph: Do you block your pipeline when your main automation tests fail?

  • Jay: My strategy when starting to automate a new application is to first create a few end-to-end tests in order to get maximum coverage quickly. Then, with time, I create smaller focused tests, and when that job is complete, I delete the end-to-end tests. Is this a good strategy?

  • Varsandan Csaba: Deleting a test is about improving the quality and not about reducing the quantity. Can you give another example where it could be a benefit for someone?

  • Lonneke: I noticed your role is ‘Quality Owner’; this sounds really interesting! What does this mean?

  • Dave B: What if all tests ran instantly with zero cost? Would you still remove them?


It seems that the premise for questioning a test assumes there’s no documentation about WHAT the test is testing, and no consideration of requirement COVERAGE: whether that coverage is already provided by some other, more stable test, or whether, despite the test being flaky, its coverage would be lost and require manual testing. Is that so?

A strategy to minimize this: why not have a sort of standard/required procedure of writing a Javadoc comment (or anything equivalent), with a required peer review, explaining what the goal of the test is, so we avoid having to go over the code?
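Taking the “anything equivalent” route, here is a minimal sketch in Python of what that convention could look like: the test’s docstring states the risk it covers and the criteria under which it could be deleted, so a reviewer never has to reverse-engineer the code. The feature, function names, and defect reference are all hypothetical.

```python
def is_checkout_allowed(item_count: int) -> bool:
    """Toy stand-in for the system under test."""
    return item_count > 0


def test_checkout_rejects_empty_cart():
    """Checkout must reject an order with an empty cart.

    Risk covered: charging a customer for nothing (hypothetical
    escaped defect BUG-1234, found in production).
    Deletion criteria: safe to remove once this risk is covered by
    a smaller, cheaper API-level check.
    """
    assert not is_checkout_allowed(0)
```

With this in place, the peer review only needs to check that the docstring still matches what the assertion actually verifies.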


I’m curious if others have found that cross-browser testing isn’t so necessary anymore? I see a lot of people still emphasizing it.


I would love to hear an answer to this cross-browser testing question. I’ve been pushing against using Cypress on some new projects due to its weak cross-browser capabilities.


From experience, I feel that you find more visual bugs than functional bugs on different browsers… We are not doing cross-browser automated tests, but cross-browser visual tests instead.

We removed them from our automated checks. We’ve been running on Chrome only for a couple of years already without any issue. A lot of hours to invest in more valuable things!

Aha, yes! Or an associated Gherkin syntax, like João said.

I still do cross-browser UI automated testing (with TestCafe).
Why?
Because of browser-specific behavior (desktop versus mobile, the dinosaur IE11, some specificities with Safari and Mac shortcuts, the Magic Mouse, etc.).
Am I right? Truly, I don’t know. It’s all about context, risk, and also the development team’s maturity about quality.

Sure, it’s all about context. Our first step to remove cross-browser testing was to enforce that production code strictly follows good practices for modern browsers. Then we presented to product the numbers for the cost of developing/testing for Internet Explorer vs the number of users that still use it. So, all together, we decided to stop supporting it.

Our context allowed us to do it but, for sure, there are other contexts where it won’t be possible.


Shaikh M. Rahman: We create tests that are mapped to Business Requirements / User Stories. Most of the time they are part of a Regression or Smoke test suite. If the test cases are well defined and the requirements have not changed, then why would we delete a test?
I like to think about the risk we’re actually covering with each test vs the total cost of ownership of that test.
When we consider the risks for a feature that’s been around for a long time, the probability of something bad happening may have changed since we created the tests, even if they are still pretty much valid in their intent. Features can become really stable; they can become less used or even slowly replaced by other features; or maybe technology evolved to make those risks lower.
So we can ask ourselves: does it still make sense to run all the tests in the set for this feature? How much are they costing us to run and maintain? What would be the smallest set that still makes us confident about this feature when releasing a new version of our software?
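The risk-vs-cost trade-off described above can be sketched as a toy scoring heuristic. The fields, weights, and thresholds below are illustrative assumptions, not anything prescribed in the talk; the point is only that both sides of the trade-off can be made explicit and comparable.

```python
from dataclasses import dataclass


@dataclass
class TestRecord:
    name: str
    failure_probability: float      # chance the covered risk materialises (0..1)
    impact: float                   # business impact if it does (0..10)
    run_minutes: float              # time added to every pipeline run
    maintenance_hours_per_year: float


def keep_score(t: TestRecord, runs_per_year: int = 2000) -> float:
    """Positive: risk covered outweighs cost. Negative: deletion candidate."""
    risk_covered = t.failure_probability * t.impact
    # Rough total cost of ownership, in arbitrary "hours" units.
    tco = t.run_minutes * runs_per_year / 60 + t.maintenance_hours_per_year
    # The factor 100 is an arbitrary exchange rate between risk and hours.
    return risk_covered * 100 - tco


# A slow, stable legacy end-to-end test scores deeply negative.
legacy_test = TestRecord("legacy_export_e2e", 0.01, 2, 6.0, 40)
print(keep_score(legacy_test))  # negative, so worth discussing deletion
```

A score like this should only start the conversation; the team still applies judgment before actually deleting anything.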


Someone during the talk asked for book references relevant to my talk.

Here are a few resources that I either directly mention in my talk or that are somewhat related:


Chitra: If the test covers an area of the business which is no longer in use, why would you still retain the test? Would you not delete it and reduce the running time of the suite?
Sometimes features are not used at all, or maybe not even available anymore to our users. This is usually where you’ll find good candidates for deletion.
However, always take into account what sort of bugs a test has found in the past (it’s great if you have a test management solution in place that allows this sort of analysis). You may find that a test covers a feature that has been deprecated but is still finding bugs in other areas of your software (especially a “big” end-to-end test). In this case, consider covering those bugs with new tests before deleting the original one!


Mike Mc: What is the name of the book João mentioned in his talk on test deletion that had “continuous testing” in it?
It’s Continuous Testing for DevOps Professionals by Eran Kinsbruner.

Ana Milutinovic: What is the argument for creating automated test for a bug in production? In the context of automation strategy, how is this beneficial?
My experience tells me that an escaped defect is usually a good place to put an automated test, for a couple of reasons.
First, a TDD approach to bug fixing (create a test reproducing the bug, see it fail because of the bug, fix the bug, then see the test pass) is a great, structured way to be confident the issue really is fixed.
Second, if the bug was there in the first place, it is significantly likely to reappear in the future. For instance, if you deal with multiple code branches and/or versions in your software development, a mistake may be made and the fix not merged properly to another branch, or not ported to a specific version.

Nevertheless, as I said in my talk, remember that if, after a long time, we conclude the bug has never come back, then we can really re-evaluate whether the test should stick around.
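As a minimal sketch of that TDD bug-fix loop (the defect, the function, and the names are all invented for illustration): the regression test reproduces the escaped bug, fails against the buggy code, and passes once the fix is in, then stays around to guard against the bug returning via a bad merge.

```python
def apply_discount(price: float, percent: float) -> float:
    """Fixed implementation: guard against non-positive prices.

    The hypothetical original code crashed on a zero-priced item.
    """
    if price <= 0:
        return 0.0
    return price * (1 - percent / 100)


def test_discount_on_free_item_regression():
    # Written FIRST, while the bug was still present, so it failed then;
    # it passes now and pins the fixed behaviour for future branches.
    assert apply_discount(0.0, 10) == 0.0


test_discount_on_free_item_regression()
```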


Honey Chawla: What should be the frequency of the exercise to figure out the worthiness of tests? (Let’s say there are 5k+ UI tests.)

That’s a great question I hadn’t considered yet. Teams in my organization usually do this when specific “events” happen: if a test fails and they’re not really sure why it should exist in the first place, or when revamping a feature, when they may go through the tests that cover it and do some housecleaning.

However, I can totally see this as a practice that we should remind ourselves to do frequently or, better yet, come up with automated heuristics that alert us when a test should be considered for deletion!


Louise Gibbs: Is it worth keeping tests that cover little used but essential features? For example, warnings for dangerous products.
I believe it boils down to the risk being covered. In your example, it would be a potential problem with a low probability (someone not understanding that a product is dangerous in the first place) but a very high impact (the consequences of not understanding that). That amounts to a significant risk!

Does the test have a low cost? Then maybe we should keep it.

Does the test have a high cost? Then let’s consider the alternatives to automated testing, or even think how we can lower the cost of the test.


Louise Gibbs: How do you gain support from business for deleting tests?
Usually the business will be very receptive to a cost vs value analysis. So if you can demonstrate that with hard facts (what proves the test is not providing value? what proves it has a high cost?), you can make a much stronger case.


Connor Munro: When Dogfooding, how does everyone know what is or isn’t a bug?

I don’t know if I understand your question 100%, but my interpretation is that sometimes our users may not acknowledge that a specific behavior is a bug, right?

I believe this is about having “rich” feedback loops in place. If you’re dogfooding, you can conduct interviews with your internal users to surface these “misinterpreted behaviors”. I’m also a really big fan of observability: make sure your software provides rich data that you can observe to pinpoint patterns you weren’t expecting, and then go after them!


Dan Billing: How do you think your principles can be applied to what might be referred to ‘non-functional’ tests that might not be included in an automation framework?

I’ve had to think about this for automated performance testing, for instance, and I believe a lot of the principles apply there too (what are the risks we are covering? what are the impact and probability? how much do these tests cost us?).

For other types of non-functional tests (security, accessibility, etc.), I believe the same sort of analysis makes sense when they’re automated and/or regression-style tests.

But a lot of the non-functional manual testing we conduct is exploratory in nature (dealing with the “unknown unknowns”), and I believe that is a whole different ball game: most of the ideas I expressed in my talk are not directly applicable there.


Anita: Java applications backend API test automation for beginners: where to start? Your experience, learnings, DOs and DON’Ts?

I definitely recommend what Angie Jones has put together with Test Automation University. Check it out! You’ll find free courses there on those subjects.
