How do *you* write test cases?

I love this comment. :ok_hand:

2 Likes

Great question, and one I have mulled over many times myself - often with inconsistency and flip-flopping.

My thoughts are as follows:

New functionality - I tend not to write scripts (how can you until you have seen the functionality in full), but more the “scenarios” - single line statements noting what you want to test (and not necessarily the detailed steps of how to do it).

Once the functionality is tested, I try to boil this down into a few tests that are useful as regression tests for the future.

Now the decision here is the detail and how much of it. Who is the “customer” of the test - is it meant for someone who is new (e.g. who wouldn’t know the product)? Or is it more of a “checklist” to say something has been done? It’s about balance. I prefer to keep tests higher level, but then have supplementary documents/guides to cover the functionality if it is needed. If people are following scripts then they might miss things going on around them (i.e. a juicy bug in plain sight, or not far off, but not on the path the script is taking them).

4 Likes

Thanks for putting so much time and effort into your comment - not that other posters haven’t - it’s been a really insightful and informative thread.

It’s interesting that whilst it takes some thought to design some test cases, for the most part it does seem very clerical indeed. I’ve also experienced a kind of shoehorning of Exploratory testing into test cases, which then inevitably leads to long and confusing test steps (e.g. click here to here to here…).

For the most part, my team follows a fairly low-level process which feels a bit unnatural, but on the flipside it’s good to learn and experience these things which will inform my testing going forward.

4 Likes

That sounds like a decent process. In our team we tend to write test cases for new features and bug reports. My team lead asked me to write test cases as if a total beginner were executing the test; his reasoning is that it’ll help me understand the system, but it has certainly been a test-case-heavy introduction to testing for me.

1 Like

My project is quite heavy on test cases, but I rotate onto another project in a few months’ time and it’ll be interesting to see the differences there. I don’t mind writing some test cases, but I’ve not touched upon any exploratory testing yet, unfortunately.

2 Likes

No worries, I have a lot of time on my hands.

Writing test cases is difficult to do, and trying to formalise mechanisms into cases is a way to slow down and think about the product, although I think it’s quite a boring and slow way to approach product learning.

There’s an idea in testing that exploratory is for a high-level meander around a product and cases are for hardcore detail, but that’s not true either. Exploration can be as detailed as you need it to be. A big part of exploratory skill from an RST point of view is focus/defocus. Defocusing is more general and moves faster; it finds more problems, new problems, unexpected problems. To defocus you deviate away from your usual patterns, go off script, do things that are challenging, like messing with every setting on a page and clicking submit. Focusing is more specific, logical and precise. Focusing means that the things you see are more reliable, because there are fewer unknowns.

So if you change everything on a page and click submit, and that causes an error you don’t want, then you could go through every setting one at a time to see which one triggers the error. Defocusing finds the problem, focusing identifies the problem more accurately. Defocus can find more things that might go wrong, while focus can be used to be more certain that a path can work. You can be exploratory and also be specific, mathematical, logical - you just choose as much of each as you need. So low-level is great, but you don’t have to sacrifice high-level, and the two can complement each other: defocus, think of a new idea while exploring, focus to test it, find no problems, defocus to make the tests harder to pass, find a problem, focus to figure out what the cause is, and so on. Defocusing is awesome, but not very reproducible, so it doesn’t work with formal written cases; you need an informed strategy to guide the value of your testing.

Getting buy-in for more responsibility can be a pain, although there’s nothing stopping you from practicing exploratory skills or note taking in your spare time or learning time. Hopefully you can find a good mix and have some fun with it without too much interference.

1 Like

In my view, exploratory testing is not about “click here to here to here”. It is about performing an action, observing what happened, learning from that observation and using that learning to guide my next action. It can’t be written down as a test case because when I “click here”, I can’t predict what I will do next before I see what that click resulted in.

To clarify, I meant to say that exploratory testing allows for navigating in different ways and testing different routes to, for example, a dialog box - by clicking on an icon, going back, selecting a dropdown, going back, going through another menu etc… if this is where the testing took you.

But if you were to write the same navigation in a low-level test case it would become way too verbose. My point was that our team could’ve used Exploratory instead of low-level test cases.

1 Like

I agree with you, and to me it comes down to education and experience.
A verbose low-level test case is really something for untrained people. It would be better (and less wasteful) to address the problem of untrained people earlier.

1 Like

I suppose my next question would be: how do you conduct good exploratory testing, and how do you train for it? Did you get it just from experience, or was there any training that allowed you to perform better testing?

I’ve read quite a few of those articles on Satisfice now, and it’s definitely opened my eyes to another way of viewing testing, which certainly sounds more appealing than a marathon of test cases.

2 Likes

That’s a good point, trusting the Testers to be skilled, responsible and competent. Technically anyone could follow some low-level steps…

1 Like

I started when I went to a talk by James Bach, found him online, read the RST course materials and began practicing. I was incredibly relieved, because I had found that much of testing didn’t make sense to me; until then I had assumed everyone knew what they were talking about. My company paid for my request to go on the RST course itself, which was very valuable to me. That was some time ago, and now I believe you can get the RST notes by requesting them, and some are just generally available: [RST appendices], [HTSM], [other]. The course is interactive and experiential so it’s worth taking, but I have no idea how much it costs now, or which one to take, because it used to be all one big multi-day course.

I did get quite far just by reading the notes and practicing the ideas. I had a long history of interest in the philosophy of science, which helped me, but certainly isn’t required. Critical thinking and other scientific skills become important in good testing. Any training in magic is useful in a similar way. Some computer games too.

The key for any skill is purposeful practice, I suppose, so I’d read a thing, try the thing with the understanding that I would not necessarily succeed, and review it. There’s a heuristic called “plunge in and quit”, where you approach something complex or scary by just jumping in, then if you feel stuck you quit. That way you’re not committing to more than you can achieve.

One way you could give this a go is by running some test sessions. Negotiate work time, or use spare time to experiment.

Each session should have a simple form - choose a note-taking system. I use OneNote a lot, but there are other options, and you can use Notepad (et al) if you have to. The header of the session notes has a title and/or description that says what you’re trying to do. It sets the scope of your testing, but also the purpose - what you’re testing and what you’re looking for is a good start, but they can take any form that’s useful to guide and assist you, and can be as specific as you like. I generally include a start date/time, and I’d also consider setting a time limit, or a minimum and maximum time you spend on a session. Consider including references to setups, environments, test data - anything that makes your session unique and would be helpful to you in the future.

The body is for notes. Things you did, things you saw, things you thought of, if and when they’re worth writing down. I have a separate area for bugs (problems I found with the product, e.g. unwanted error messages, crashes, typos, ugly layouts) and another for issues (problems with the project, e.g. needing test data, needing access to an environment, not understanding scope). I use OneNote, so I use tags to mark questions I have, to make sure I go back and decide if I need answers. At the end you raise the bugs, solve the issues, ask the questions, and should end up with an outline of your testing.
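
If it helps to picture it, here’s a rough sketch of how one of those skeletons might be laid out. It’s generated with a bit of throwaway Python purely for illustration - every field name and value here is an invented suggestion, not a standard.

```python
from datetime import datetime

# Rough sketch of a session-note skeleton as plain text you could paste into
# OneNote, Notepad or whatever you use. All field names/values are invented.
def session_note_skeleton(title, purpose, time_box_minutes, environment):
    return "\n".join([
        f"TITLE:       {title}",
        f"PURPOSE:     {purpose}",
        f"STARTED:     {datetime.now():%Y-%m-%d %H:%M}",
        f"TIME BOX:    {time_box_minutes} minutes",
        f"ENVIRONMENT: {environment}",
        "",
        "NOTES:",      # things you did, saw, thought of
        "BUGS:",       # problems with the product
        "ISSUES:",     # problems with the project
        "QUESTIONS:",  # things to chase up or get answered
    ])

print(session_note_skeleton(
    title="Recon: submission form",
    purpose="Map the form, spot risks, note test data and access I will need",
    time_box_minutes=60,
    environment="Invented example build, shared test database, Firefox",
))
```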

Remember that anything you do that’s not testing takes up time and attention you could be using to do testing. That includes writing notes, investigating and reporting bugs, and setting up environments. Also remember that sessions are not limited by their scope - if you find something worth following up on you can do so. If you find yourself getting too off-track you could make a note that you need a session to look into it later. Sessions are also for you to refer to during testing, but if you need to refer to them much later, or you need to show them to someone else (like for a debrief), then you may write them slightly differently, so it’s worth thinking about later on - for now I’d write them just for you, to make it easier.

The first session I run is often a recon session. I’m not looking to find or follow up on problems (although if I do I will write that down), just thinking about what I’m seeing, what testing I might want to do, what risks I notice, and things I might need (like test data or access). You’re building mental models of the stuff you’re looking at and thinking about what kinds of coverage you want and what you’ll need to do and have to achieve it.

Then you can run sessions based on your findings. Perhaps you’ll do a user scenario where you use the product as a particular user, or maybe a new more focused recon session about the user permissions system. You could run a more focused capability session, where you see if the product can do what it’s supposed to, then a more defocused reliability one where you see if it’s resistant to failure and has good error handling.

You’ll begin to see that what you do is based on what you’ve learned about the product and project in general, and the value of your choices is determined by the context in which you apply them. You can use stuff like the HTSM to help you spot problems or consider new risk, or even make your own lists of concerns that are tied to your situation, context, product area, whatever’s useful. You can take those skills into design meetings and consider problems before the product is designed, or into pair sessions and consider problems before it’s written, if you want to.

Please take any of these ideas, rework and rename them to work for you. If you want to do scouting timeboxes instead of recon sessions or whatever nobody’s going to complain. You are free to research, use and change your terms, methods and tools as you see fit. Try not to overthink the details of the process - it’s better to spend time considering testing than what colour highlighter to use, and the details will work themselves out over time anyway.

One aspect that gets missed a lot is the mind-numbing boredom some people feel when executing a step-by-step manual script.

It’s like brain pathways shutting down for some people - enthusiasm, morale and the whole buzz that testing can offer all get constrained.

Some people can get a buzz out of manual scripted tests, but bear in mind that many testers cannot, and it could really harm how they view their job and role if it’s forced on them, particularly if they are unable to see the same value in it that you do.

3 Likes

Interesting you mention this, as I’m quite sure this is the reason that one of the testing apprentices is on the verge of moving on already. He’s feeling very uninspired, for sure.

Lots to learn - it’s great though, and it has increased my motivation knowing there’s more out there than the ‘Factory Style’ - thank you. I’ve found plenty of PDFs on Satisfice.com, so am downloading them and will take my time reading through them. I think my main priority right now is learning the product. It’s a very complex (and fairly archaic) system that even half the team don’t fully understand top to bottom - great for a first project though.

I appreciate that I’m probably looking at this through the prism of test case writing here, but it seems that perhaps a benefit of test cases is that they preserve the exact steps needed to reproduce a bug that only appears if you follow those steps (assuming that the steps are executed correctly to begin with). How would this be done in exploratory testing? Isn’t it possible that this ‘correct path’ is missed, or maybe exploratory session notes are linked to bug reports?

2 Likes

it seems that perhaps a benefit of test cases is that they preserve the exact steps needed to reproduce a bug that only appears if you follow those steps (assuming that the steps are executed correctly to begin with)

I suppose there’s nothing preventing you from executing the same steps without the case. It’s important to remember that testing is in a de facto infinite space of possibilities. There are too many factors and inputs to exercise all of them, so all testing becomes sampling. The path chosen to be written down into test cases is one such sample.
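
To put a rough (completely invented) number on how quickly that space blows up, here’s a back-of-envelope sketch in Python:

```python
# Back-of-envelope illustration with invented numbers: even a small form
# explodes combinatorially, which is why all testing ends up being sampling.
fields = 8        # say, 8 inputs on one form
partitions = 5    # and only 5 "interesting" values per input
print(partitions ** fields)  # 390625 combinations, before you vary timing,
                             # environment, data state, sequence of actions...
```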

There’s nothing particularly magical about test cases. Imagine writing down every step of a test you perform on your own, including something you’re looking for and if you found it or not. You’ve just designed and written a test case and executed it simultaneously. You can usually skip a lot of the writing, and the structure of testing is then in your head instead of made explicit. Then use the time you saved to do testing, and use what you found to influence what tests you design next. What’s more interesting than the case is why you chose to design it that way - the strategy you’re serving or the risk you’re mitigating.

Isn’t it possible that this ‘correct path’ is missed

Only if you miss it. A responsible tester will choose the best use of their time, and if there’s an important path in the product then you should probably exercise it. Lower-formality testing like exploratory testing doesn’t mean random or unstructured; you still have to be rigorous and consider coverage and risk. It means that you have to consider the things that whoever wrote the cases considered, and much more. Otherwise you’re just messing about.

or maybe exploratory session notes are linked to bug reports?

I’m not 100% sure what you’re aiming at here, but I’ll make some broad statements and hopefully they will be helpful. If not, please clarify here or send me a message.

Firstly I should note that testing does not require note-taking or writing of any kind, and notes can sometimes distract from the process, but in general they’re useful for reference and for noting down ideas you don’t want to have to carry around while you’re thinking about your testing. I introduced sessions and session notes as one way to begin to try testing, but if you are being overly particular or thorough with your note-taking I’d recommend trying to test with no notes at all. What you choose to write down will depend on your circumstances, how you test, your memory, etc. You want to note down stuff to help you remember, think and communicate without interrupting your actual testing more than necessary.

You can also summarise, so if you want to exercise every combination of a select few inputs using some standardised data in limited equivalence partitions you can do that. You can even make a little table and put the results in. You just need to decide that it’s worth the time and effort doing things that way, which it sometimes is.
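
If you’d ever rather generate that little table than type it out, a throwaway sketch like the one below is one way to do it - the field names and values are all invented for the sake of the example.

```python
import itertools

# Invented inputs, each reduced to a couple of representative values
# from its equivalence partitions.
names = ["a", "A" * 50]                    # shortest valid, suspiciously long
emails = ["a@a.com", "name+tag@b.co.uk"]   # plain, slightly unusual but valid
newsletter = [True, False]

print(f"{'name':<6} {'email':<18} {'newsletter':<10} result")
for n, e, opt in itertools.product(names, emails, newsletter):
    # In practice you'd exercise the product here and record what you observed;
    # this placeholder just lists every combination ready to be filled in.
    print(f"{n[:6]:<6} {e:<18} {str(opt):<10} (not run yet)")
```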

When you find a problem you’re going to want to investigate it. Very generally speaking you need to confirm that it is actually a problem, find a way to reliably reproduce it, and record the state things were in at the time (environment, version, test data, etc. as appropriate). You’ll probably want to simplify your steps down to the minimum number needed to reproduce the issue.

You could report via a conversation with someone, at your desk, with your session notes open. You could also report via a written report that includes those reproduction steps. I assume those reproduction steps are what you’re looking to replace the information in a test case with.

An Example
So let’s say you’re poodling around in a submission form. You enter a name, email, check a couple of other things and hit “Submit”, and the form just closes on you. Disappears like you never opened it.

That’s a problem. You feel annoyed by its behaviour, and frustrated that you don’t know what happened, and you suppose that a user would feel that way too. The user isn’t told if it succeeded or failed. In fact, you don’t know that either.

So you go back and try again, using a different name and email and settings, click “Submit” and it happens again, the form disappears. This is now a pattern.

You could, at this point, report the problem as “when I try to do a bunch of stuff and submit this form it just disappears”, or you could do some investigation (which I would recommend).

You pull up the database and you find that the information you put into the form never made it into the relevant database tables. The submission form isn’t just rude, it’s not even submitting the data as far as the database! That’s a bigger problem, with more impact on users.

You could, at this point, report the problem as “submission form does not add records to database” with everything you did, in order, and get on with your day…

But you bring the form back up and just hit “Submit”. The idea, you’re thinking, is maybe the submit button just closes the form. But no, a nice error appears on the screen demanding that the Name and Email fields are “required”. So to navigate around this with minimum input you put in a one-letter name and “a@a.com” and hit “Submit” and it brings up a message saying the submission was successful. You check the database, and there it is, the record just as it should be.

So you know that the form can work and there is at least one scenario where it fails. So you repeat the process using a heuristic called progressive mismatch, where you’re changing what you do each time and accounting for the changes each time. You want to look at what the scenarios that fail have in common, and what those that succeed have in common, to look for patterns that might indicate a cause. You try “a” and “a@a.com” as the Name and Email each time, because you know they can work - you don’t want to introduce a new problem right now, you’re focusing down to investigate if any other settings cause the problem.

Nothing - they are all submitting just fine. In your frustration you try a new route and vary your initial inputs. You fill the form with a long name. The email is still the same, because you want to see if the name is the problem. You leave everything else blank, to improve the strength of your inference based on your input - no other factors to cause problems. Aha! The submission form shuts down without a warning. You check the database, and the information did not submit there! Now you know it’s the name field causing the problem. It could be the length of the name… you try fewer characters each time, until the form finally works!

Now you know the length of the Name field is the problem! Or, at least, a problem - other fields could be causing issues too. You could test some other fields, or you could even open the source code and see if you (or someone near you) can identify what the issue is, and use that understanding to guide your testing.
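
As an aside, if the form happened to be drivable from code, that same narrowing-down could be sketched out like this - submit_form here is a made-up stand-in that simulates the broken form for illustration, not any real API:

```python
# Hypothetical sketch of focusing in on the failing name length.
# submit_form() is a made-up stand-in that simulates the broken form so the
# sketch runs on its own: it "fails silently" when the name exceeds 15 chars.
def submit_form(name: str, email: str) -> bool:
    return len(name) <= 15 and "@" in email

def longest_working_name_length(upper_bound: int = 100) -> int:
    longest_ok = 0
    for length in range(1, upper_bound + 1):   # focus: vary only the name length
        if submit_form("a" * length, "a@a.com"):
            longest_ok = length
        else:
            break                              # first failure marks the boundary
    return longest_ok

print(longest_working_name_length())  # -> 15 with the simulated form above
```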

Anyway, there’s at least one problem: when the name is over 15 characters something goes screwy, the form closes, and the database doesn’t get populated. You can now create a much tighter bug report, with more knowledge of the impact.

Maybe you check with another tester. You say “hey, can you bring up that form and submit it with a 16 character name”, and it breaks on their machine as well. That reduces the chance of other factors being involved. You think that you have enough to write a report now, and create steps to reliably reproduce the problem in the simplest way, with information on how it impacts the system, and perhaps risks involved. What you write depends on the audience and what you think they already know and understand.

If it’s hard to describe, you could include a short video of it happening (this shows the person reading the report what to look for). You could include a copy of your database, if it’s easy to do and you think that’ll help.

  1. Open the bumbleboo submission form
  2. Enter a name of 16 or more characters
  3. Enter any valid email, such as a@a.com
  4. Click Submit
  • The form closes without any user feedback, and no relevant entry appears in the bumbleboo_users database table

Environment: Version 6.9.420, OS Puppy Linux, Browser IE6

Done. Your test client has everything they need to know about what the problem is and how bad it is, so they can make informed decisions and start a deeper investigation and fix from a more reliable starting point.

You also know that this can happen. You know it’s a risk. You might consider looking at other forms, or investigating other fields in this form. You might ask how this got through your automation checks, if you have those - perhaps this is a browser problem that doesn’t get checked. You certainly know to look at this sort of thing in the future to make sure it doesn’t happen again.

Now, that’s an example, but how you work may be very different. How much you investigate depends on a lot of things, like your familiarity with and understanding of the technology and the product, your working relationship with the code-writing developers, what the scope of your responsibility should be, how you’ve been asked to work, what the devs say they want from you, how buggy the product is, how good the team are, and so on. Also each problem you find will be a little different. “Product doesn’t even start” doesn’t need a long, detailed report much of the time. Same with “You misspelled ‘karaoke’”. Just consider your bug, your environment and your audience.

Hope that is of some help!

1 Like

Generally speaking I try my best not to write test cases.

  • Testing a user story? I’ll write either an exploratory test charter or just make a few bullet points on areas that I want to check, with a view of exploring around those areas. Sometimes I’ll do a mix of both.
  • Need to do smoke testing? I’ve moved away from writing test cases as most of our “failures” were a result of incorrect (i.e. out of date) test cases and instead keep a list of what needs checking.
  • Writing a test case because I’ve been told to write a test case? Keep it short and simple.

What irritates me most with test cases is where there are half a dozen steps that are effectively pre-conditions, or that guide you in how to use the application to reach the point of the test. I like to stick as much of that as possible within the pre-conditions or some other suitable place. If how to perform something isn’t common knowledge, I’ll add a “suggested technique” to the test step. I try to ensure that the actual steps of the test case are focused on the behaviour. In fact sometimes I’ve just written a single step in Gherkin.

For example, our old test management system had a lot of test cases like the following fictional “rename camera” test case, checking that renaming a camera in our config tool “VSM Administrator” is reflected in the main “Control Center” application:

  1. Run VSM Administrator. | VSM Administrator opens
  2. Click ONVIF Cameras. | ONVIF Cameras tab is shown
  3. Click “Add”. | Add camera dialog is shown.
  • … More steps on how to add a camera …
  9. Add the camera in Control Center. | Camera is added
  10. Run VSM Administrator again. | VSM Administrator opens
  11. Click ONVIF Cameras. | ONVIF Cameras tab is shown
  12. Double-click the camera from steps 3-8. | Camera edit dialog is shown
  13. Edit the name and click OK. | Name is updated in VSM Administrator
  14. Click Save and then OK to apply the changes. | VSM Administrator closes
  15. View the device in Control Center. | Device name has been updated.

Vs my test case of:

  • Pre-conditions: Camera added to VSM then Control Center.
  • Steps:
  1. Using VSM Administrator, update the name of the camera under test and save the changes. | The name change is reflected when viewed in Control Center.

If people gripe at the lack of technique for my regression test case, I usually update it to: “Following the steps outlined in our Help docs, use VSM Administrator to update the name of the camera under test and save the changes.”

My point here is that when writing a test case it is important to me that you focus on the behaviour as opposed to the technique/steps leading up to it. If we changed the UI for configuration but the behaviour was fundamentally the same, I shouldn’t need to go updating my test cases unless absolutely necessary.

Further to that, don’t even write the test case unless you expect it to be genuinely required for regression/smoke/sanity testing (i.e. a form of testing where you repeat previous testing). The one that I outlined above, when you boil it down to what we’re actually testing, is effectively one line: “Renaming a camera in VSM Administrator is reflected in Control Center”. That surely doesn’t need the overhead of a test case. It is also definitely not worth the overhead of writing if we aren’t going to run it again. Before I got brave enough to push back, I’d spend a good 20-30 minutes writing test cases that took less than that to perform. In fact sometimes I’d be using the software to help me write the test cases, with the correct button names etc. By the time I’d written the test case, I’d effectively tested it. I then had to go through review etc., wasting someone else’s time. Gah, the memories haunt me to this day.

3 Likes

For that very reason, I also started separating the more detailed test manuals (which change less often) from short checklists for the execution itself.
Especially to give the testers space for detailed notes.
Trained testers need the manuals less often.

2 Likes

Really appreciate the time you’ve put into making these posts so full of information.
Just recently I went outside my usual routine and did some exploratory testing as suggested, and I found quite a severe bug that caused the whole application to crash. I’m quite certain I wouldn’t have found it before.

4 Likes

Usually I’d rather not write test cases - it takes up too much precious time - unless it’s a highly regulated industry and it’s a legal requirement to document your testing in that old-fashioned way.

That being said, I did blog about this a while ago, maybe some of the tips could be of use to you: