TestBash World 2022 - Ask Me Anything About Regression Testing with Deborah Sherwood

During this one-hour-long session, our host @jamesespie will be joined by the fantastic @dsherwood for a Testing Ask Me Anything about Regression Testing.

We’ll use this Club thread to share resources mentioned during the session and answer any questions we don’t get to during the live session.


Questions Not Answered Live

  • Anonymous - I’m new to testing & work in a team that’s never had a tester. How do I begin talking about regression testing to a team that has never had this kind of testing?
  • @friendlytester - One of your regression tests has just failed, what steps do you take next?
  • @benmue - Hey, when a feature or detail changes, do you proactively change the tests or ‘wait’ for failure? How would you identify the tests that need changing?
  • @testingchef - Are there any cases where a manual test is preferred over automating?
  • @ifs - When I did regression tests last time, I encountered tests that weren’t worth executing (manually). How do you remove a test, and how do you explain that to the PM?
  • @lsevern - Do you use data to drive your regression test?
  • @bruno_lopes - You mentioned that sometimes you don’t automate cases that are faster to run manually. Can you share a couple of examples?
  • @friendlytester - How often are you deleting automated regression tests?
  • @lizbeth.es - In the dev/build process, when will you be ready to do regression testing? What needs to happen before we can start regression testing, and are there any checklist items?
  • @gourav - Do you sequence your regression tests to run the critical ones first, then the others? Does it help?
  • Anonymous - Is there any way other than regression testing to be sure that the quality of our app is stable/constant during the sprints?
  • @bruno_lopes - Are the people executing manual regression tests the same ones who build the automated regression packs?
  • @bruno_lopes - Have you considered codeless/low-code automation tools (over Cypress, for example)? If not tried, or tried but not using, why?
  • Anonymous - What are the automation tools that you are using for end-to-end testing?

Hi to whoever submitted this question! Thank you for watching my session at TestBash World! :smiley:

I will assume you are only doing manual testing and don’t have any automated tests like unit tests, UI tests, etc.

I would start by tracking how long it takes to run all of the manual tests on the product. Show how this time increases as you add more manual test cases for the new features being built. In today’s world, we want to get new features into the hands of our customers as quickly as possible. If the team can see that the release process is slowing down, then they might start to ask how we can make it faster.
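To make that concrete, here is a toy back-of-the-envelope model (all the numbers are invented for illustration) showing how the full manual regression run grows as cases are added each sprint:

```javascript
// Toy model: total manual regression time as the suite grows.
// Every number here is made up purely for illustration.
const minutesPerCase = 10;     // average time to run one manual test case
const casesAddedPerSprint = 5; // new cases added for each sprint's features

function totalRegressionMinutes(initialCases, sprints) {
  const cases = initialCases + casesAddedPerSprint * sprints;
  return cases * minutesPerCase;
}

console.log(totalRegressionMinutes(50, 0));  // 500 minutes (about one working day)
console.log(totalRegressionMinutes(50, 10)); // 1000 minutes (about two working days)
```

Numbers like these, tracked release after release, are often easier for a team to act on than a general feeling that "testing takes a while".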

Another angle you can take is with the engineers themselves. Ask them what gives them confidence that they haven’t broken something in another part of the system. If they say it’s because you are there testing it, ask them what would happen if you weren’t. What would they do? Then start explaining the benefits of automated regression testing: how they could run a set of unit tests and get feedback within a few minutes on whether or not their changes worked. Explain that, with these automated unit tests in place, they might feel more comfortable changing parts of the system they hardly ever touch, because the tests are there giving them quick feedback.

I hope this helps! :smiley:

Well, it depends on what type of test failed and where it failed.

If I were a software engineer running automated regression tests locally (e.g. unit tests), I would look at where the test broke and whether there is any chance the changes I made could have broken it. If there isn’t anything obvious, I would reach out to other members of my team to see if they are seeing the same failure. If they are, then it probably isn’t related to my changes, and at that point I would get everyone involved to figure out what broke the test, as the issue could also exist in the production environment.

If an automated regression test failed in a pipeline, I would investigate the latest code changes and see if any of them could have broken it. If there is an obvious cause and it wasn’t a change I made, I would talk to the software engineer who made that change to see if they are aware of the failure and are fixing it.

If it was a manual regression test that failed, I would note the steps to reproduce it and talk to the team about it, including the Product Manager. At this stage, we would use our bug classification matrix to decide whether it is a show-stopper (i.e. a critical issue that means we need to stop the release to fix it) or something minor that could be fixed later. If it’s minor, I would submit a bug report to the backlog and prioritise it accordingly.

Hi! Thank you for watching my session at TestBash World! :smiley:

In my role as Quality Coach, I encourage engineers to re-evaluate any tests that they know will be affected by the change and either update them or remove them. I also encourage them to add new tests as well, if needed.

Sometimes tests slip through and fail, either when they are run locally or in a pipeline, but that is ok. We can only change what we know about. The tests are there to identify the areas we didn’t know we had affected so that we can fix them.

Apart from the E2E tests, the software engineers themselves are responsible for updating the tests. When planning a story, I will ask them whether tests need to be updated or added, and make sure the estimates include time to spend on those tests.

I hope that helps!

Hi! Thank you for watching my session at TestBash World! :smiley:

Yes, for sure.

There can be times when something is very difficult to automate. The time it takes to set up and run the test in an automated environment may outweigh the value you get from doing so. Instead, there may be more value in just having a manual checklist that is run, for example, once a month to make sure that part of the system works.

I will talk to the team (including the software engineers) about how valuable an automated test is compared to a manual test. Together we will decide how critical that part of the system is, whether it is a major blocker for a customer if it fails, and what confidence level we need in that area in order to release.

Just this week we had a critical bug in one part of our product, where a package it uses released an update that caused our product to break. This part, however, is set to be retired soon and is no longer being worked on. I know the quality will drop if a customer tries to use this part and it doesn’t work, but there are no QA Engineers in our team to automate a test for it (I will be hiring soon if anyone in Australia is interested :wink: ). So instead of asking a software engineer to do this, I have written a manual checklist that I run once a week to make sure it still works. It takes less than 10 minutes, and the team knows it’s still fine.

Hi! Thank you for watching my session at TestBash World! :smiley:

If a test is not providing any value or confidence to the team, then why run it? It is only wasting time and slowing down the release process. If a PM says no, you can’t delete it, ask why: they might know of a use case it is covering that you don’t.

If you have a few tests that you want to delete, start recording how long it takes to execute them, from set-up to clean-up. You could then show the PM how much quicker the release process would be without those tests, especially if they are long, complicated tests. You could also track what regressions, if any, those tests find. If they don’t find any, then maybe that is enough evidence to delete them.

Hi! Thank you for watching my session at TestBash World! :smiley:

Thanks for your question. I am not sure exactly what you mean. Do you mean data to decide what to test, or data within the tests?

Hi! Thank you for watching my session at TestBash World! :smiley:

Thanks for the question! I wrote a response to this in a question further up the page :slight_smile:

Another example I have is testing a live chat app. We don’t want our releases to stop because that third-party tool is broken. We can’t control that tool and we are not responsible for fixing it, so it shouldn’t stop us from releasing our updates.

Another example is testing on less popular browsers and devices. All of our products work on all supported browsers. However, as we are using Cypress for our UI tests, we can’t run them on Safari, but we are ok with that. Looking at our analytics, we currently don’t have many users on Safari. Looking at the bugs that have been reported, we noticed that when a bug is found in Safari it is usually a cosmetic issue rather than a functional one. Combining this information, as a team we decided we are ok with not automating our tests on Safari and instead running a manual check when we are ready to release. However, if the bugs being reported start to increase and cause critical issues for customers, then I will re-evaluate this decision with the team.

Thanks for everything that you do for MoT and for running TestBash World! :smiley:

The answer is: whenever we need to.

If a software engineer is changing a part of the system, they should be re-evaluating the automated tests and deciding what is still relevant and what isn’t.

If we are noticing the run times increasing in pipelines to a point where it is taking way too long to release, we would re-evaluate the tests and either move them out of the pipeline or delete them.

That being said, for one of our newer products we just rebuilt the UI, so I took the opportunity to rebuild our testing framework too. The previous framework was bad: the tests were slow to run, it was badly built, and the engineers found it difficult to use. We only just completed the new framework, so our automated regression tests are all still fairly new and relevant.

Hi! Thank you for watching my session at TestBash World! :smiley:

We run our automated tests all the time. When software engineers are building things, they write and run unit and UI tests locally to make sure they are working before they commit their code. We expect all tests to pass before the commit. Then we run our visual regression tests and E2E tests to make sure they pass. If anything fails in the pipelines, engineers are expected to investigate why it failed.

Manual tests, however, are performed when we are releasing the product. At the moment, the engineers push the product to the staging environment, where we all run manual tests. Once we are happy with those, we release into production.

Hi! Thank you for watching my session at TestBash World! :smiley:

At the moment we don’t; we run them in the sequence they are in, but our current test suite is an ok size and doesn’t take too long to run.

If we start running into issues where the tests take too long, we could look at tagging the most critical tests to run within the pipeline and running all other tests in a separate process.
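As a toy sketch of that tagging idea (this is not a real Cypress configuration — in practice you would typically use a plugin such as @cypress/grep and run with a tag filter), each test carries tags and the pipeline selects only the ones it cares about:

```javascript
// Toy sketch of tag-based test selection; test names and tags are invented.
const suite = [
  { name: 'login works',            tags: ['critical'] },
  { name: 'checkout total correct', tags: ['critical'] },
  { name: 'footer links resolve',   tags: ['extended'] },
];

// Select only the tests carrying a given tag, e.g. for the release pipeline.
function selectByTag(tests, tag) {
  return tests.filter((t) => t.tags.includes(tag));
}

// The pipeline runs the critical subset; everything else runs in a
// separate, less frequent job.
const pipelineRun = selectByTag(suite, 'critical').map((t) => t.name);
console.log(pipelineRun); // ['login works', 'checkout total correct']
```

The same split works for any runner: fast, critical checks gate the release, while the slower extended suite runs on a schedule.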

Hi to whoever submitted this question! Thank you for watching my session at TestBash World! :smiley:

This is a difficult one to answer. The only thing that pops into my mind right now is exploratory testing, but even manual exploratory testing is a form of regression testing, so it doesn’t feel quite right to answer your question that way. :slightly_smiling_face:

Does anyone else have ideas?

In my team, in most cases, yes, but that is because I currently have a QA team of one: me! :grin: Ideally, when I have more than one member in the team, the manual test cases will be performed by the QA team, the PM, or anyone else who might be interested in learning about testing, and the automated tests will be written by software or QA engineers.

Hi to whoever submitted this question! Thank you for watching my session at TestBash World! :smiley:

Our end-to-end tests use Cucumber with Cypress. As our UI tests already use Cypress, we are leveraging that set-up to run E2E as well.

No, not really. Cypress ticked most, if not all, of the boxes for the tool we wanted, so we went with that.

But, as we know, the testing world is always changing and we are constantly re-evaluating our tool choices, so who knows, maybe one day we will introduce a codeless tool.