We'll be adding all unanswered questions from the talk here, so if we didn't get to your question, don't worry, we will find you an answer.
Unanswered Questions:
Shaikh M. Rahman: We create tests that are mapped to Business Requirements / User Stories. Most of the time they are part of a Regression or Smoke test suite. If the test cases are well defined and the requirements have not changed, then why would we delete a test?
Chitra: If the test covers an area of business which is no longer in use, why would you still retain the test? Would you not delete it and reduce the running time of the suite?
Mike Mc: What is the name of the book Joao mentioned in his talk on test deletion that had "continuous testing" in it?
Ana Milutinovic: What is the argument for creating an automated test for a bug in production? In the context of automation strategy, how is this beneficial?
Honey Chawla: How frequently should we do the exercise of figuring out the worthiness of tests? (Let's say there are 5k+ UI tests.)
Louise Gibbs: Is it worth keeping tests that cover little used but essential features? For example, warnings for dangerous products.
Louise Gibbs: How do you gain support from business for deleting tests?
Connor Munro: When dogfooding, how does everyone know what is or isn't a bug?
Dan Billing: How do you think your principles can be applied to what might be referred to as "non-functional" tests that might not be included in an automation framework?
Anita: Backend API test automation for Java applications, for beginners: where to start? Your experience, learnings, DO's and DON'Ts?
Christian Dabnor: Do you think test coverage has too much emphasis put on it, maybe because it is easy to metricise? How do you go about changing this?
Ian Sparks: Sorry, what is the definition of dogfooding again? I missed that.
Shailin Sheth: If we aren't confident enough to delete the test, can we start with lowering its priority first and then delete it?
Patrick van Enkhuijzen: Shouldn't your tests always be small and focused, so they are delivering value and are easy to maintain?
Dapo Awoola: Do you still have redundant tests now that you have moved to CI/CD?
Paul Marlow: Isn't there a possibility to refactor a bad test so that you understand what it's doing and can keep it maintainable?
Kalaai: Isn't ranking tests on risk and cost (TCO) an expensive exercise by itself?
Stacey: How important would you say pass rate is when deciding whether to delete a test?
Maik Nogens: Would you say that good monitoring can make the decision to "delete or not" easier, due to faster/earlier reaction time in production?
Craig: You talked about different reasons for tests to be created (e.g. testing whether a bug has been fixed). Do you have a standard to label them somehow, so that in the future you know why they were created?
Jamie: How do you feel about deleting an entire infrastructure for running tests when it is old and slow?
Varsandan Csaba: Have you ever deleted a test just to improve the success of the passed tests?
Shuja: What do you have to say to those smaller teams who are still growing, in a startup environment which is chaotic 9 out of 10 times?
Carolyn Newham: Can you "future proof" your tests by good design?
Karlo Smid: What is the main purpose of test automation in your organization? What is the main risk that you try to mitigate with test automation?
Bucyeyeneza Isabelle: Who has the right to delete a test?
Ashton joseph: Do you block your pipeline when your main automation tests fail?
Jay: My strategy when starting to automate a new application is to first create a few end-to-end tests, in order to get maximum coverage quickly. Then, over time, I create smaller, focused tests, and when that job is complete, I delete the end-to-end tests. Is this a good strategy?
Varsandan Csaba: Deleting a test is about improving the quality, not about reducing the quantity. Can you give another example where it could be a benefit for someone?
Lonneke: I noticed your role is "Quality Owner", this sounds really interesting! What does this mean?
Dave B: What if all tests ran instantly with zero cost? Would you still remove them?
It seems that the premise for questioning a test assumes there's no documentation about WHAT the test is testing… and, even considering the requirement COVERAGE, whether that coverage is already provided by some other, more stable test, or whether, despite the test being flaky, its coverage would be lost and require manual testing… is that so?
A strategy to minimize this: why not consider having a sort of standard/required procedure of writing a @javadoc (or anything equivalent) explaining (with a required peer review) what the goal of the test is, to avoid having to go over the code?
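As a rough illustration of that suggestion, using a Python docstring in place of a @javadoc (the test name, bug ID, and requirement ID here are all invented):

```python
# Hypothetical convention: every test carries a docstring stating its
# goal, what it covers, and when it would become safe to delete it.

def test_checkout_applies_discount_once():
    """Goal: guard against BUG-987, where a promo code was applied
    twice when the cart was refreshed mid-checkout.

    Covers: checkout discount calculation (requirement CHK-12).
    Safe to delete when: the legacy promo engine is retired.
    """
    cart_total, discount = 100.0, 10.0
    assert cart_total - discount == 90.0

test_checkout_applies_discount_once()
```

With a peer review gate on that docstring, a future reader deciding whether to delete the test never has to reverse-engineer its intent from the code.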
I would love to hear an answer to this cross-browser testing concept. I've been pushing against using Cypress on some new projects due to its weak cross-browser capabilities.
From experience, I feel that you find more visual bugs than functional bugs on different browsers… We are not doing cross-browser automated tests but do cross-browser visual tests instead.
We removed them from our automated checks. We've been running on Chrome only for a couple of years already without any issue. That left a lot of hours to invest in the most valuable things!
I still do cross-browser UI automated testing (with TestCafé).
Why?
Because of browser-specific behavior (desktop versus mobile, the dinosaur IE11, some specifics of Safari, Mac shortcuts, the Magic Mouse, etc.).
Am I right? Truly, I don't know. It's all about context, risk, and also the development team's maturity around quality.
Sure, it's all about context. Our first step to remove cross-browser testing was to enforce that production code strictly followed good practices for modern browsers. Then we presented to product the numbers for the cost of developing/testing for Internet Explorer vs the number of users that still use it. So all together we decided to stop supporting it.
Our context allowed us to do it but, for sure, there are other contexts where it won't be possible.
Shaikh M. Rahman: We create tests that are mapped to Business Requirements / User Stories. Most of the time they are part of a Regression or Smoke test suite. If the test cases are well defined and the requirements have not changed, then why would we delete a test?
I like to think about the risk we're actually covering with each test vs the total cost of ownership of the test.
When we consider the risks for a feature that's been around for a long time, the probability of something bad happening may have changed since we created the tests, even if they are still pretty much valid in their intent. Features can become really stable; they can become less used or even slowly replaced by other features; or maybe technology has even evolved to make those risks lower.
So we can ask ourselves: does it still make sense to run all the tests in the set for this feature? How much are they costing us to run and maintain? What would be the smallest set that now makes us confident about this feature when releasing a new version of our software?
Accelerate, because it's one of the best books about DevOps, and some key ideas and research in it have influenced the way we look at quality and risk mitigation in my organization.
Chitra: If the test covers an area of business which is no longer in use, why would you still retain the test? Would you not delete it and reduce the running time of the suite?
Sometimes features are not used at all, or maybe not even available anymore to our users. This is where you'll usually find good candidate tests for deletion.
However, always take into account what sort of bugs a test has found in the past (it's great if you have a test management solution in place that allows you to do this sort of analysis). You may find that a test covers a feature that has been deprecated but is still finding bugs in other areas of your software (especially a "big" end-to-end test). In this case, consider covering those bugs with other new tests before deleting the original one!
Mike Mc: What is the name of the book Joao mentioned in his talk on test deletion that had "continuous testing" in it?
It's Continuous Testing for DevOps Professionals by Eran Kinsbruner.
Ana Milutinovic: What is the argument for creating an automated test for a bug in production? In the context of automation strategy, how is this beneficial?
My experience tells me that an escaped defect is usually a good place to put an automated test in place, for a couple of reasons.
First, a TDD approach to bug fixing (where you create a test reproducing the bug, see it failing because of the bug, fix the bug, and then see the test passing) is a great structured way to be confident about having fixed the issue.
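As a minimal sketch of that loop (the function, its bug, and the bug ID are invented for illustration):

```python
# Hypothetical regression test written while fixing an escaped defect.
# Step 1: write the test reproducing the bug and watch it fail.
# Step 2: fix the code. Step 3: watch the test pass, and keep it.

def parse_price(text):
    # Fixed implementation: the original (buggy) version did a bare
    # float(text) and raised ValueError on a leading currency symbol.
    return float(text.strip().lstrip("$"))

def test_parse_price_handles_currency_symbol():
    # Regression test for the escaped defect (say, BUG-1234).
    assert parse_price("$19.99") == 19.99
    assert parse_price("7.50") == 7.5

test_parse_price_handles_currency_symbol()
```

Seeing the test fail against the unfixed code first is the important part: it proves the test actually reproduces the reported problem.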
Second, if the bug was there in the first place, it's significantly likely that it may reappear in the future. For instance: if you deal with multiple code branches and/or versions in your software development, a mistake may be made and the fix not merged properly to another branch, or not ported to a specific version.
Nevertheless, as I said in my talk, remember that if after a long time we come to the conclusion that the bug has never come back, then we can really re-evaluate whether the test should stick around.
Honey Chawla: How frequently should we do the exercise of figuring out the worthiness of tests? (Let's say there are 5k+ UI tests.)
That's a great question I hadn't considered yet. Teams in my organization usually do this when specific "events" happen: if a test fails and they're not really sure why it should exist in the first place, or, when revamping a feature, they may go through the tests that cover it and do some house cleaning.
However, I can totally see this as a practice that we should remind ourselves to do frequently, or better yet, come up with automated heuristics that alert us when a test should be considered for deletion!
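Such a heuristic might look something like the sketch below. This assumes you can export per-test statistics (runtime, failure counts, real bugs found) from your CI or test management tooling; the field names and thresholds are invented, not from the talk:

```python
# Hypothetical "deletion candidate" heuristic over exported test stats.
from dataclasses import dataclass

@dataclass
class TestStats:
    name: str
    runs: int             # total executions in the period
    failures: int         # failures in the period
    real_bugs_found: int  # failures that turned out to be product bugs
    avg_runtime_s: float

def deletion_candidates(stats, min_runs=100):
    """Flag tests that fail without ever pointing at a product bug
    (likely flaky), or that are slow yet have found no real bugs."""
    candidates = []
    for t in stats:
        if t.runs < min_runs:
            continue  # not enough data to judge this test yet
        flaky = t.failures > 0 and t.real_bugs_found == 0
        costly_and_silent = t.avg_runtime_s > 60 and t.real_bugs_found == 0
        if flaky or costly_and_silent:
            candidates.append(t.name)
    return candidates
```

For example, a test with 40 failures and zero real bugs found, or a two-minute test that has never caught anything, would be flagged for a human to review, not auto-deleted.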
Louise Gibbs: Is it worth keeping tests that cover little used but essential features? For example, warnings for dangerous products.
I believe it boils down to the risk being covered. In your example it would be a problem with a low probability (someone not understanding a product is dangerous in the first place) but a very high impact (the consequences of not understanding that). This amounts to a significant risk!
Does the test have a low cost? Then maybe we should keep it.
Does the test have a high cost? Then let's consider the alternatives to automated testing, or even think about how we can lower the cost of the test.
Louise Gibbs: How do you gain support from business for deleting tests?
Usually the business will be very sensitive to the cost vs value relationship. So if you are able to demonstrate that with hard facts (what proves the test is not providing value? what proves it has a high cost?), then you can make a much stronger case.
Connor Munro: When dogfooding, how does everyone know what is or isn't a bug?
I don't know if I understand your question 100%, but what I interpret is that sometimes our users may not acknowledge that a specific behavior is a bug, right?
I believe this is about having "rich" feedback loops in place. If you're dogfooding, you can conduct interviews with your internal users to make these "misinterpreted behaviors" surface. I'm also a really big fan of Observability: make sure your software provides you with rich data that you can then observe to pinpoint patterns you weren't expecting, and go after them!
Dan Billing: How do you think your principles can be applied to what might be referred to as "non-functional" tests that might not be included in an automation framework?
I've had to think about this for automated performance testing too, for instance, and I believe a lot of the principles can be applied there as well (what are the risks we are covering? what are the impact and probability? how much do these tests cost us?).
For other types of non-functional tests (security, accessibility, etc.), I believe that when they're automated and/or "regression" in nature, the same sort of analysis makes sense as well.
But a lot of the non-functional manual testing we conduct is exploratory in nature (so, dealing with the "unknown unknowns"), and I believe that is a whole different ball game; most of the ideas I expressed in my talk are not directly applicable there.