Redundant Testing

Just curious how others feel about redundancy in testing. An example of this is documenting a test plan for a feature that contains a large number of steps. When documenting the tests we create test cases that each cover one thing, one aspect of the overall feature. However, inevitably, to test other aspects of the feature you end up repeating the same steps over and over.

For example:

Confirmation emails are always at the tail end of some end-to-end test. To trigger the confirmation email, a tester would typically be required to work through a whole flow.

In your experience, how do you document these types of manual test cases? I don’t want test cases to depend on each other being run, if possible, but I just don’t see a way around this. Perhaps it’s an API test to trigger the email? Just not sure where to go with this one. Any assistance is appreciated.

Hi @mirthfulman

I can think of these options:

  1. A single test with grid-based actions and expected results. Works when the application has a similar set of screens with different input combinations on each of them for every test.

  2. Domain-based concise tests (could be a single line) written in business language (instead of end-user language). Works if a rule-based application has different flows and screens but the outcome is the same for the tests. E.g. for a flight application:
    T1: Change meal-plan for group of customers and verify confirmation email is sent
    T2: Change seating plan for a single customer and verify that confirmation email is sent
    Note: Any tester (even a new one) should understand the business/domain in sufficient detail to do some exploratory testing that complements the manual test plans.

  3. Some test management tools support adding (reusable) test cases to test sets. Here tests can be reused like functions in programming, which reduces rework when a certain flow in the application changes. Works for any type of application that has repeated steps across different test scenarios. This is not exactly a dependent test but a reusable set of test steps (mainly action steps rather than assertions).

Not sure what you exactly meant by an API test (in the context of manual testing), but if appropriate APIs are exposed throughout the application then there may be various points at which you could enter the application flow (instead of right from the beginning). Can’t comment much without knowing more about the application.
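To make the API idea concrete, here is a minimal sketch (in Python, assuming the `requests` library) of what entering the flow partway through could look like if the developers expose an internal endpoint for the email step. The host, endpoint, path and token are all hypothetical placeholders, not your application's real API.

```python
# Hypothetical sketch: trigger the confirmation email directly via an internal
# API instead of driving the whole UI flow first. Endpoint, host and token are
# assumptions for illustration only.
import requests

BASE_URL = "https://staging.example.com/internal-api"   # hypothetical test host
HEADERS = {"Authorization": "Bearer <test-token>"}       # placeholder credential

def trigger_confirmation_email(order_id: str) -> None:
    """Ask the application to (re)send the confirmation email for an existing order."""
    resp = requests.post(
        f"{BASE_URL}/orders/{order_id}/confirmation-email",
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()  # fail fast if the hook itself is broken

if __name__ == "__main__":
    # Seed an order via a fixture or reuse a known test record, then only the
    # email itself needs checking (manually or automatically).
    trigger_confirmation_email("TEST-ORDER-123")
```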

Ah, the workflow problem. Although I imagine other kinds of redundancy are possible to cover whenever this happens, let’s start with the real problem: wasted time.

  1. Every single test, aside from a few in the problem domain, might start with logging in; it gets redundant.
  • Option #1 is to get the developers to help you automate everything up to that point by adding some kind of internal hook that lets you get there with one click (a minimal sketch follows this list). If you are not spending 100% of your eyeball time on the critical area of the test, you could miss a small detail in the email check. You want to be checking only the email.
  • Option #2 is to get the developers to give you a hook that lets you test just the email on its own, without doing the rest of the workflow at all. Once again, this lets you spend your testing time on the one thing that matters. I like this one better, since it forces the developers to package things up for reusability and lets you test scenarios that might not be reachable because business rules in the workflow prevent them. For example: what happens if the email subject line happens to be blank? Does that break the support ticket tracking system, which relies on subject lines?
  2. The other redundancy case is when there are two or more paths you could exercise that get you the same result. One path might be to log in and do the workflow on one computer, but stop just before submitting and log out, then log in on a different computer and submit the workflow. It’s limited in the value it gives you; once again, you really want to automate as much of an end-to-end flow as you can.
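As a rough illustration of Option #1 above, here is a sketch of a one-click Selenium helper that drives the UI up to the point just before the confirmation email is sent, so the tester's attention stays on the email check. The URL, element locators and credentials are invented for the example; your application will differ.

```python
# Hypothetical Option #1 helper: automate login and the workflow up to the
# confirmation step, then hand the browser back to the tester.
# Requires the selenium package; URLs and locators are assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By

def get_to_email_step(order_ref: str) -> webdriver.Chrome:
    driver = webdriver.Chrome()
    driver.get("https://staging.example.com/login")  # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("tester@example.com")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()

    # Drive the workflow up to, but not including, the step that sends the email.
    driver.get(f"https://staging.example.com/orders/{order_ref}/review")
    driver.find_element(By.ID, "confirm-order").click()
    return driver  # the human tester takes over from here

if __name__ == "__main__":
    get_to_email_step("TEST-ORDER-123")
```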

I like to diagram out and draw up a test grid of possible environments and paths, find one path that covers as many valuable steps as possible and automate just that one path, and then never test that path manually. Reserve manual checks for paths that are hard to automate robustly and meaningfully.
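For what it’s worth, once that single path is automated it can be re-run with different data combinations from the grid rather than writing near-identical test cases. A minimal pytest sketch, where `run_booking_flow` is just a stand-in stub for whatever UI or API helpers you actually use:

```python
# Hypothetical data-driven sketch: one automated path, many data combinations.
import pytest

def run_booking_flow(customer_type: str, change: str) -> list[str]:
    """Stand-in for driving the real flow; returns the captured outbox.
    In a real suite this would call your UI/API helpers."""
    return [] if change == "no-change" else [f"confirmation for {customer_type}"]

@pytest.mark.parametrize(
    ("customer_type", "change", "expect_email"),
    [
        ("group", "meal-plan", True),
        ("single", "seating", True),
        ("single", "no-change", False),
    ],
)
def test_confirmation_email_along_one_path(customer_type, change, expect_email):
    outbox = run_booking_flow(customer_type, change)
    assert bool(outbox) == expect_email
```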

In my job I find that automating the login takes up the most test-running time, because I have to robustly install a new version and start a service, but I still have to log in between tests. This makes me push the dev team to speed up logins as much as possible, because that helps me. To extend my login analogy: maybe you have two different login mechanisms, so 95% of the time I will test the fastest one.

  • I start a feature testing with a session having a particular goal.
  • I document things as I go along (notes containing test ideas, possible problems, investigations to do, what I’ve covered, etc.); these are usually useful for a maximum of a day or so.
  • Each test idea that I experiment with or explore has a particular target, an information goal.
  • Even if some small steps are repeated, I do them anyway, since I vary things all the time, which gives me more chances of finding bugs in other places along the way. I might vary the browser setup, the step from which I go on to check the idea, maybe the branch I test on, some test data, some steps, some observations I make along the way, etc.
  • During a test I might wander away for a while, distracted by something strange/interesting or an idea that came to mind. Then I return to my initial idea, maybe with even more knowledge to identify a possible problem.

I guess using test-cases makes your experience in testing more bland, and you miss out on the mindfulness of the moment: the modeling of the space, the questioning, the observations.
As I see it you have waste in many places:

  • first, documenting the test-cases in detail in advance - test-cases are usually throwaway items immediately after you’ve written them; https://www.satisfice.com/download/test-cases-are-not-testing
  • by following test-cases you skip: learning about the product; touring the feature/product/code; questioning the feature, the devs’/BA’s understanding of the feature, or your own beliefs, which most of the time can be wrong; modeling the testing space after having already questioned and observed plenty of other things with curiosity; etc…
  • using detailed test-cases, you are forced to look only at specific things; it’s like a recipe for how not to find bugs: don’t watch left, don’t question right - as there might be bugs in there…
  • you waste some more time by not having enough variation in your testing, not feeding other test ideas, and instead repeating the same one expecting a different result: https://softwareengineering.stackexchange.com/questions/191531/why-does-cem-kaner-consider-a-test-not-revealing-a-bug-a-waste-of-time

As for testing confirmation e-mails, it depends on the state of the product, which you have to uncover yourself:

  • has anything changed in the product related to that feature? was something added to or removed from it? what exactly? can you review the code, or ask a dev or the person who requested the change?
  • has there been major refactoring to the e-mail confirmation?
  • has an external service used proven to be unstable?
  • has there been an update of the server database or e-mail provider?
  • did the content of the confirmation e-mail change?
  • has the API call to the e-mail server to trigger notification changed?
  • has a flag in the db that holds the dependency for the e-mail server changed?
  • etc…
    Knowing the context of a feature change helps you test the appropriate thing, helps you understand and see the risks, and helps you target the bugs better.

Well I am glad I am going down the right path. Shortly after posting this question I had a conversation with one of the devs about this same problem. Sometime in the next couple of weeks we are going to sit down and see if we can call that email function directly in order to get around having to test whole flows.

Hopefully this is just the beginning and we can consider the same approach for other areas of the application.

@conrad.braam When you automate a flow, do you use data-driven tests to exercise the same flow with different data? Or do you test individual pages in a flow manually? Just trying to get an idea of how much gets automated and how much you exercise manually.

@ipstefan Thank you so much for your comments. That’s the testing I perform personally, but the test cases are for regression, which we hand off to testers to run for every new release. Do you document test cases for regression purposes?

Thanks everyone for your replies


Andrew, the best I can suggest is to be talking to the developers. I am by trade a programmer, and I have been testing for over 10 years now, but my bias is towards creating tools that help me do the checks. With my tester hat on, I don’t look at product code much, but an understanding of code helps you with some of the easier automation. It gives you a better idea of how the code can be changed to make test automation easier. Automating in general is simple to start doing, but much harder to do to any measure of completeness without a lot of developer interaction.

So what am I saying? Let’s unpack this more practically, which is hard. Every single application I have tested has had differing automation mechanics used by testers. Up until now I have always worked with native apps; lately I’m moving into web apps, and the techniques vary widely, but some principles are common. I’m keen to talk about your email test, Andrew, but I want to start with some guidance.

  1. The things I have tried to automate and failed at have always involved a false assumption that is only obvious to someone who has used the application for a long, long time. And by a long, long time, I don’t mean hours; I mean using the application a year ago when the Windows printer drivers changed and we had to make a hack to stop printing coming out upside down. Just because the print spooler did not error did not mean your document printed out in a way the customer can actually be delighted with. My latest complete screw-up was an assumption that Windows graphics drivers under embedded Windows/Redstone would keep returning error codes in the same way going forwards; sometimes you don’t get an error at the point you expect one. You only know you have no graphics bugs when the customer can actually see the pretty picture. Creating a test to check that pages come out of a printer in portrait, or that pixels are correct on a screen, means building a piece of hardware to actually check. Checking these things in software is really impossible. I think there is a good video where James Bach warns against this. Just because you find a way that the OS behaves that hints at your print job or graphics being wrong, that’s often not a good test.
  2. Don’t over-engineer or over-think if you ever go to the extreme - and I do this often, because some things are really impossible to test well without an elaborate test jig. I mean, cars get tested in wind tunnels with a rolling road, and sometimes there is a good reason for building a rolling road, but most of the time you don’t need a wind tunnel. 99% of the bugs can be found using a computer model of the car; it’s going to require you to do more maths homework, but a simulation is going to tell you a lot. I don’t know cars, but a good friend who does loves to tell the story of how they took two Land Rovers up to the highest place in England they could find, only to discover a problem in the manifold pressure sensor input code that lets the engine know how much air it needs (cars have to modify their fuel/air mix at altitude, you see, due to less oxygen to combust with). The code uses a lookup table of sorts, which was not sensitive enough to know that at altitude the engine needs much more air. It’s more complicated than that, because cars actually detect in code what kind of fuel grade you put into the tank by using sensors that check how much air the engine uses. But suffice to say, sometimes a field trip where a farmer has to come with a tractor and pull two brand new Land Rovers back down off a small mountain is a good way to test. So try to reserve some discoveries for manual testing sessions. Automate the others.
  3. Talk about the test success criteria. Quite often you want to check that an email gets sent, and simple things like sending an email to yourself and seeing if it comes back are a good way to check. But decide whether it’s actually good enough to check that the email just gets to the outbox without actually going out. Sometimes the simpler you can make the success criteria, the easier it gets to create things like a mock mailserver (a sketch of a capturing mailserver follows this list). This has the advantage of being a test that will still run even if your mailservers are offline, but has the downside that it has no server authentication. But I find that the most robust automated checks are the ones that are clear about the fact that they are a simulation of the universe. As long as they don’t yield false positives too often, a hacked environment whose limitations you understand well can save you a lot of time.
  4. Learn about APIs. This is probably the beginning of the automation journey for many. Learn one scripting language. Almost any scripting language can call APIs, even ones created for a different language. You will need help from your dev team. This step will also level you up so you can speak to the developers in their language more often. You don’t have to learn COBOL, C/C++, C#, Forth, Fortran, Objective Camel, Smalltalk or many others. Bash scripts, DOS batch, Python, Java and a few other interpreted languages are great for creating automated tests. Get advice from your dev team on the choice of scripting language, because they can actually be your free in-house trainers. I have worked on projects where, for example, PowerShell scripting had hooks into every single part of the application. We could manipulate all of the internal data, with the exception of licensing and authentication, using a standalone script! So writing tests as PowerShell scripts was dead easy. But be aware: you will have to master whichever scripting language you do choose.
  5. Automatability using callbacks. Most scripting languages are not good at APIs that raise events or that call hooks. For these cases you want to get the developers to change all callbacks or events so that they also write the callback data into a log. A script helper function can then be written to scan or poll that log looking for the event or trace specific to the hook or callback that you want (a log-polling sketch follows this list). Code to do this for you on Linux and on Windows is all over the web. You want to think about security when doing this, but the benefit of a log or trace in the right place in the code can be huge if the event text contains data that lets the tester know that the thing you want to track down in a workflow gave the intended outcome as a result code. Or if it includes, for example, a “pending” account balance in the log, you can then check that balance against your test transactions.
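Picking up point 3, here is one minimal way a capturing mock mailserver could look in Python, using the third-party aiosmtpd package. The host, port and message content are arbitrary; in a real test the application under test would be pointed at the fake server's address instead of the real mail relay.

```python
# Sketch of point 3: a capturing mock mailserver, so the check becomes
# "did a message arrive, and does it look right?" even with real mail offline.
# Requires the aiosmtpd package (pip install aiosmtpd); host/port are arbitrary.
import smtplib
from email.message import EmailMessage
from aiosmtpd.controller import Controller

class CapturingHandler:
    """Collects every message delivered to the fake server."""
    def __init__(self):
        self.messages = []

    async def handle_DATA(self, server, session, envelope):
        self.messages.append(envelope)
        return "250 Message accepted for delivery"

if __name__ == "__main__":
    handler = CapturingHandler()
    controller = Controller(handler, hostname="127.0.0.1", port=8025)
    controller.start()
    try:
        # In a real test the application would send this; here we send one
        # ourselves just to demonstrate the capture and the assertion.
        msg = EmailMessage()
        msg["From"] = "noreply@example.com"
        msg["To"] = "customer@example.com"
        msg["Subject"] = "Your booking is confirmed"
        msg.set_content("Thanks for your order.")
        with smtplib.SMTP("127.0.0.1", 8025) as smtp:
            smtp.send_message(msg)

        assert handler.messages, "expected at least one captured email"
        assert b"Subject: Your booking is confirmed" in handler.messages[0].content
    finally:
        controller.stop()
```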
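And for point 5, a small sketch of the log-polling helper idea, using only the standard library. The log path and the marker string are made up; they stand in for whatever event text the developers agree to write from the callback.

```python
# Sketch of point 5: poll an application log for the event text a callback wrote.
# The log path and marker line are hypothetical placeholders.
import time
from pathlib import Path

def wait_for_log_line(log_path: Path, marker: str, timeout_s: float = 30.0) -> str:
    """Poll the log until a line containing `marker` appears, or time out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if log_path.exists():
            for line in log_path.read_text(errors="replace").splitlines():
                if marker in line:
                    return line
        time.sleep(0.5)
    raise TimeoutError(f"never saw {marker!r} in {log_path}")

if __name__ == "__main__":
    # e.g. the devs log "CONFIRMATION_EMAIL_SENT result=OK balance=pending:42.00"
    line = wait_for_log_line(Path("/var/log/myapp/events.log"), "CONFIRMATION_EMAIL_SENT")
    print("found:", line)
```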

I want to leave you with this quote from Brenan Keller: "A QA engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 99999999999 beers. Orders a lizard. Orders -1 beers. Orders a ueicbksjdhd.

First real customer walks in and asks where the bathroom is. The bar bursts into flames, killing everyone." https://twitter.com/brenankeller/status/1068615953989087232?lang=en
