I love this comment.
Great question, and one I have mulled over many times myself - often with inconsistency and flip-flopping.
My thoughts are as follows:
New functionality - I tend not to write scripts (how can you, until you have seen the functionality in full?), but rather 'scenarios' - single-line statements noting what you want to test (and not necessarily the detailed steps of how to do it).
Once the functionality is tested, I try to boil this down into a few tests that are useful as regression tests for the future.
Now the decision here is the detail, and how much. Who is the 'customer' of the test - is it meant for someone who is new (e.g. who wouldn't know the product)? Or is it more of a 'checklist' to say something has been done? It's about balance. I prefer to keep tests higher level, but then have supplementary documents/guides to cover the functionality if needed. If people are following scripts then they might miss things going on around them (i.e. a juicy bug in sight, or not far off, but not on the path the script is taking them).
Thanks for putting so much time and effort into your comment - not that other posters haven't - it's been a really insightful and informative thread.
It's interesting that whilst it takes some thought to design test cases, for the most part it does seem very clerical indeed. I've also experienced a kind of shoehorning of exploratory testing into test cases, which then inevitably leads to long and confusing test steps (e.g. click here to here to here…).
For the most part, my team follows a fairly low-level process which feels a bit unnatural, but on the flip side it's good to learn and experience these things, which will inform my testing going forward.
That sounds like a decent process. In our team we tend to write test cases for new features and bug reports. My team lead asked me to write test cases as if a total beginner were executing the test; his reasoning is that it'll help me understand the system, but it has certainly been a test-case-heavy introduction to testing for me.
My project is quite heavy on test cases, but I rotate onto another project in a few months' time and it'll be interesting to see the differences there. I don't mind writing some test cases, but I've not touched upon any exploratory testing yet, unfortunately.
No worries, I have a lot of time on my hands.
Writing test cases is difficult to do, and trying to formalise mechanisms into cases is a way to slow down and think about the product, although I think it's quite a boring and slow way to approach product learning.
There's an idea in testing that exploratory is for a high-level meander around a product and cases are for hardcore detail, but that's not true either. Exploration can be as detailed as you need it to be. A big part of exploratory skill, from an RST point of view, is focus/defocus.

Defocus is more general: it moves faster, finds more problems, finds new problems, finds unexpected problems. To defocus you deviate from your usual patterns, go off script, do things that are challenging, like messing with every setting on a page and clicking submit. Focusing is more specific, logical and precise. Focusing means that the things you see are more reliable, because there are fewer unknowns. So if you change everything on a page and click submit, and that causes an error you don't want, then you could go through every setting one at a time to see which one triggers the error. Defocusing finds the problem; focusing identifies the problem more accurately. Defocus can find more things that might go wrong, while focus can be used to be more certain that a path can work.

You can be exploratory and also be specific, mathematical, logical - you just choose as much as you need. So low-level is great, but you don't have to sacrifice high level, and the two can complement each other. Defocus, think of a new idea while exploring, focus to test it, find no problems, defocus to make the tests harder to pass, find a problem, focus to figure out what the cause is, etc. Defocusing is awesome, but not very reproducible, so it doesn't work with formal written cases - you need an informed strategy to guide the value of your testing.
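The defocus-then-focus loop can be sketched in code. This is purely a hypothetical illustration: `submit_form` stands in for whatever action is under test, and the setting names are invented.

```python
def submit_form(settings: dict) -> bool:
    """Hypothetical system under test: fails whenever 'notifications' is on."""
    return not settings.get("notifications", False)

defaults = {"dark_mode": False, "notifications": False, "autosave": False}

# Defocus: flip everything at once and see if anything breaks.
everything_on = {key: True for key in defaults}
assert not submit_form(everything_on)  # something in there triggers the failure

# Focus: vary one setting at a time from the defaults to isolate the trigger.
culprits = []
for key in defaults:
    trial = dict(defaults)
    trial[key] = True
    if not submit_form(trial):
        culprits.append(key)

print(culprits)  # -> ['notifications']
```

The defocused pass finds *that* there is a problem cheaply; the focused pass identifies *which* factor causes it, at the cost of more, smaller steps.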
Getting buy-in for more responsibility can be a pain, although thereâs nothing stopping you from practicing exploratory skills or note taking in your spare time or learning time. Hopefully you can find a good mix and have some fun with it without too much interference.
In my view, exploratory testing is not about 'click here to here to here'. It is about performing an action, observing what happened, learning from that observation and using that learning to guide my next action. It can't be written down as a test case, because when I 'click here', I can't predict what I will do next before I see what that click resulted in.
To clarify, I meant that exploratory testing allows for navigating different ways and testing different routes to, for example, a dialog box - by clicking on an icon, going back, selecting a dropdown, going back, going through another menu, etc. - if this is where the testing took you.
But if you were to write the same navigation in a low-level test case, it would become far too verbose. My point was that our team could have used exploratory testing instead of low-level test cases.
I agree with you, and to me it comes down to education and experience.
A verbose, low-level test case is something for untrained people. It would be better, and less wasteful, to address the problem of untrained people earlier.
I suppose my next question would be how you conduct good exploratory testing or how do you train for it? Did you get it just from experience or was there any training that allowed you to perform better testing?
I've read quite a few of those articles on Satisfice now, and it's definitely opened my eyes to another way of viewing testing, which certainly sounds more appealing than a marathon of test cases.
That's a good point - trusting the testers to be skilled, responsible and competent. Technically anyone could follow some low-level steps…
I started when I went to a talk by James Bach, found him online, read the RST course materials and began practicing. I was incredibly relieved, because I had found that much of testing didn't make sense to me, and until then I had assumed everyone knew what they were talking about. My company paid for my request to go on the RST course itself, which was very valuable to me. That was some time ago, and now I believe you can get the RST notes by requesting them, and some are just generally available: [rste appendicies], [HTSM], [other]. The course is interactive and experiential, so it is worth taking, but I have no idea how much it costs now, or which one to take, because it used to be all one big multi-day course.
I did get quite far just by reading the notes and practicing the ideas. I had a long history of interest in the philosophy of science, which helped me, but that certainly isn't required. Critical thinking and other scientific skills become important in good testing. Any training in magic is useful in a similar way. Some computer games, too.
The key for any skill is purposeful practice, I suppose, so I'd read a thing, try the thing with the understanding that I would not necessarily succeed, and review it. There's a heuristic called 'plunge in and quit', where you approach something complex or scary by just jumping in, and then if you feel stuck you quit. That way you're not committing to more than you can achieve.
One way you could give this a go is by running some test sessions. Negotiate work time, or use spare time to experiment.
Each session should have a simple form - choose a note-taking system. I use OneNote a lot, but there are other options, and you can use Notepad (et al.) if you have to. The header of the session notes has a title and/or description that says what you're trying to do. It sets the scope of your testing, but also the purpose - what you're testing and what you're looking for is a good start, but they can take any form that's useful to guide and assist you, and can be as specific as you like. I generally include a start date/time, and I'd also consider setting a time limit, or a minimum and maximum time to spend on a session. Consider including references to setups, environments, test data - anything that makes your session unique and would be helpful to you in the future.
The body is for notes: things you did, things you saw, things you thought of, if and when they're worth writing down. I have a separate area for bugs (problems I found with the product, e.g. unwanted error messages, crashes, typos, ugly layouts) and another for issues (problems with the project, e.g. needing test data, needing access to an environment, not understanding scope). Since I use OneNote, I use tags to mark questions I have, to make sure I go back and decide if I need answers. At the end you raise the bugs, solve the issues, ask the questions, and should end up with an outline of your testing.
Remember that anything you do that's not testing takes up time and attention you could be spending on testing. That includes writing notes, investigating and reporting bugs, and setting up environments. Also remember that sessions are not limited by their scope - if you find something worth following up on, you can do so. If you find yourself getting too off-track, you could make a note that you need a session to look into it later. Sessions are also for you to refer to during testing, but if you need to refer to them much later, or you need to show them to someone else (like for a debrief), then you may write them slightly differently, so it's worth thinking about later on - for now I'd write them just for you, to make it easier.
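As a concrete illustration, a minimal session-note skeleton along these lines might look like the following. The layout, dates and details are invented for the example, not a prescribed format:

```text
Session: Recon - bumbleboo submission form
Purpose: Build a model of the form; note risks and test data needs
Start: 2024-03-01 10:00   Timebox: 60-90 min
Environment: staging, version 6.9.420

Notes:
- Form has Name and Email fields plus a few optional settings
- Submitting with all fields blank shows validation messages

Bugs:
- Form closes silently on a long name (investigate in a later session)

Issues:
- Need a test account with admin permissions

Questions:
- Is there a documented maximum length for the Name field?
```

The point is only that the header sets scope and purpose, the body captures observations, and bugs, issues and questions each have a home so nothing gets lost.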
The first session I run is often a recon session. I'm not looking to find or follow up on problems (although if I do, I will write that down), just to think about what I'm seeing, what testing I might want to do, what risks I notice, and things I might need (like test data or access). You're building mental models of the stuff you're looking at, and thinking about what kinds of coverage you want and what you'll need to do and have in order to achieve it.
Then you can run sessions based on your findings. Perhaps you'll do a user scenario where you use the product as a particular user, or maybe a new, more focused recon session about the user permissions system. You could run a more focused capability session, where you see if the product can do what it's supposed to, then a more defocused reliability one, where you see if it's resistant to failure and has good error handling.
You'll begin to see that what you do is based on what you've learned about the product and project in general, and that the value of your choices is determined by the context in which you apply them. You can use tools like the HTSM to help you spot problems or consider new risks, or even make your own lists of concerns tied to your situation, context, product area - whatever's useful. You can take those skills into design meetings and consider problems before the product is designed, or into pair sessions and consider problems before it's written, if you want to.
Please take any of these ideas and rework and rename them to work for you. If you want to do scouting timeboxes instead of recon sessions, or whatever, nobody's going to complain. You are free to research, use and change your terms, methods and tools as you see fit. Try not to overthink the details of the process - it's better to spend time considering testing than what colour highlighter to use, and the details will work themselves out over time anyway.
One aspect that gets missed a lot is the mind-numbing boredom some people feel executing a step-by-step manual script.
It's like brain pathways shutting down for some - enthusiasm, morale and the whole buzz testing can offer all being constrained.
Some people can get a buzz out of manual scripted tests, but bear in mind that many testers cannot, and it could really harm how they view their job and role if it's forced on them, particularly if they are unable to see the same value that you do.
Interesting you mention this, as I'm quite sure this is the reason that one of the testing apprentices is on the verge of moving on already. He's feeling very uninspired, for sure.
Lots to learn - it's great, though, and it has increased my motivation knowing there's more out there than the 'Factory Style' - thank you. I've found plenty of PDFs on Satisfice.com, so am downloading them and will take my time reading through them. I think my main priority right now is learning the product. It's a very complex (and fairly archaic) system that even half the team don't fully understand top to bottom - great for a first project, though.
I appreciate that I'm probably looking through the prism of test-case writing here, but it seems that perhaps a benefit of test cases is that they reproduce the steps for a bug that only appears if you follow those exact steps (assuming that the steps are executed correctly to begin with). How would this be done in exploratory testing? Isn't it possible that this 'correct path' is missed? Or maybe exploratory session notes are linked to bug reports?
it seems that perhaps a benefit of test cases is that they reproduce the steps for a bug that only appears if you follow those exact steps (assuming that the steps are executed correctly to begin with)
I suppose there's nothing preventing you from executing the same steps without the case. It's important to remember that testing happens in a de facto infinite space of possibilities. There are too many factors and inputs to exercise all of them, so all testing becomes sampling. The path chosen to be written down into test cases is one such sample.
There's nothing particularly magical about test cases. Imagine writing down every step of a test you perform on your own, including something you're looking for and whether you found it or not. You've just designed, written and executed a test case simultaneously. You can usually skip a lot of the writing - the structure of the testing is then in your head instead of made explicit. Then use the time you saved to do testing, and use what you found to influence what tests you design next. What's more interesting than the case is why you chose to design it that way - the strategy you're serving or the risk you're mitigating.
Isn't it possible that this 'correct path' is missed?
Only if you miss it. A responsible tester will choose the best use of their time, and if that means there's an important path in the product, you should probably exercise it. Lower-formality testing like exploratory testing doesn't mean random or unstructured; you still have to be rigorous and consider coverage and risk. It means that you have to consider the things that whoever wrote the cases considered, and much more. Otherwise you're just messing about.
Or maybe exploratory session notes are linked to bug reports?
I'm not 100% sure what you're aiming at here, but I'll make some broad statements, and hopefully that will be helpful. If not, please clarify here or send me a message.
Firstly, I should note that testing does not require note-taking or writing of any kind, and notes sometimes distract from the process, but in general they're useful for reference and for capturing ideas you don't want to have to carry around while you're thinking about your testing. I introduced sessions and session notes as one way to begin to try testing, but if you are being overly particular or thorough with your note-taking, I'd recommend trying to test with no notes at all. What you choose to write down will depend on your circumstances, how you test, your memory, etc. You want to note down enough to help you remember, think and communicate without interrupting your actual testing more than necessary. You can also summarise, so if you want to exercise every combination of a select few inputs, using some standardised data in limited equivalence partitions, you can do that. You can even make a little table and put the results in. You just need to decide that it's worth the time and effort doing things that way, which it sometimes is.
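As a minimal sketch of the "every combination of a select few inputs" idea: the field names, partitions and the stand-in `submit_form` below are all hypothetical, invented just to show the shape of the technique.

```python
from itertools import product

# Hypothetical equivalence partitions for two form fields.
names = ["a", "x" * 30]               # short name, very long name
emails = ["a@a.com", "not-an-email"]  # valid email, invalid email

def submit_form(name: str, email: str) -> bool:
    """Stand-in for the system under test: rejects bad emails and long names."""
    return "@" in email and len(name) <= 15

# Exercise every combination and tabulate the results.
results = {(n, e): submit_form(n, e) for n, e in product(names, emails)}
for (name, email), ok in results.items():
    print(f"name={name[:10]:<10} email={email:<13} -> {'pass' if ok else 'fail'}")
```

Four cells cover every pairing of the chosen partitions, and the printed table is exactly the kind of compact summary that can stand in for four written-out test cases.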
When you find a problem, you're going to want to investigate it. Very generally speaking, you need to confirm that it is actually a problem, find a way to reliably reproduce it, and record the state things were in at the time (environment, version, test data, etc., as appropriate). You'll probably want to simplify your steps down to the minimum number needed to reproduce the issue.
You could report via a conversation with someone at your desk, with your session notes open. You could also report via a written report that includes those reproduction steps. I assume those reproduction steps are what you're looking to replace the information in a test case with.
An Example
So let's say you're poodling around in a submission form. You enter a name and an email, check a couple of other things and hit 'Submit', and the form just closes on you. Disappears like you never opened it.
That's a problem. You feel annoyed by its behaviour and frustrated that you don't know what happened, and you suppose that a user would feel that way too. The user isn't told if the submission succeeded or failed. In fact, you don't know that either.
So you go back and try again, using a different name, email and settings, click 'Submit' and it happens again: the form disappears. This is now a pattern.
You could, at this point, report the problem as 'when I try to do a bunch of stuff and submit this form it just disappears', or you could do some investigation (which I would recommend).
You pull up the database and find that the information you put into the form never made it into the relevant tables. The submission form isn't just rude - it's not even submitting the data as far as the database! That's a bigger problem, with more impact on users.
You could, at this point, report the problem as 'submission form does not add records to database' with everything you did, in order, and get on with your day…
But instead you bring the form back up and just hit 'Submit'. The idea, you're thinking, is that maybe the submit button just closes the form. But no - a nice error appears on the screen saying that the Name and Email fields are 'required'. So, to navigate around this with minimum input, you put in a one-letter name and 'a@a.com', hit 'Submit', and a message appears saying the submission was successful. You check the database, and there it is - the record, just as it should be.
So you know that the form can work, and that there is at least one scenario where it fails. So you repeat the process using a heuristic called progressive mismatch, where you change what you do each time and account for the changes each time. You want to look at what the scenarios that fail have in common, and what those that succeed have in common, to look for patterns that might indicate a cause. You try 'a' and 'a@a.com' as the Name and Email each time, because you know they can work - you don't want to introduce a new problem right now; you're focusing down to investigate whether any other settings cause the problem.
Nothing - they all submit just fine. In your frustration you try a new route and vary your initial inputs. You fill the form with a long name. The email is still the same, because you want to see if the name is the problem. You leave everything else blank, to improve the strength of your inference - no other factors to cause problems. Aha! The submission form shuts down without a warning. You check the database, and the information was not submitted there! Now you know it's the Name field causing the problem. It could be the length of the name… you try fewer characters each time, until the form finally works!
Now you know the length of the Name field is the problem! Or, at least, a problem - other fields could be causing issues too. You could test some other fields, or you could even open the source code and see if you (or someone near you) can identify what the issue is, and use that understanding to guide your testing.
Anyway, there's at least one problem: when the name is over 15 characters, something goes screwy - the form closes and the database doesn't get populated. You can now create a much tighter bug report, with more knowledge of the impact.
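The narrowing-down step in this story can also be sketched as code. Everything here is hypothetical: `submit_form` stands in for the buggy form, with the over-15-characters failure from the example baked in so the search has something to find.

```python
def submit_form(name: str, email: str) -> bool:
    """Stand-in for the buggy form: silently fails when the name is too long."""
    return len(name) <= 15 and "@" in email

# Start from a known-failing long name and shorten it one character at a time
# until the form works, pinning down the failing/passing boundary.
length = 30  # known to fail
while length > 0 and not submit_form("x" * length, "a@a.com"):
    length -= 1

print(f"longest working name length: {length}")       # 15
print(f"shortest failing name length: {length + 1}")  # 16
```

Shortening one character at a time mirrors the story above; with a wider search space, a binary search over the length would find the same boundary in fewer steps.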
Maybe you check with another tester. You say 'hey, can you bring up that form and submit it with a 16-character name?', and it breaks on their machine as well. That reduces the chance of other factors being involved. You decide you have enough to write a report now, and create steps to reliably reproduce the problem in the simplest way, with information on how it impacts the system, and perhaps the risks involved. What you write depends on the audience and what you think they already know and understand.
If it's confusing, you could include a short video of it happening (this shows the person reading it what they're looking for). You could include a copy of your database, if that's easy to do and you think it'll help.
- Open the bumbleboo submission form
- Enter a name of 16 or more characters
- Enter any valid email, such as a@a.com
- Click Submit
- The form closes without any user feedback, and no relevant entry appears in the bumbleboo_users database table
Environment: Version 6.9.420, OS Puppy Linux, Browser IE6
Done. Your test client has everything they need to know about what the problem is and how bad it is, so they can make informed decisions and start a deeper investigation and fix from a more reliable starting point.
You also know that this can happen. You know it's a risk. You might consider looking at other forms, or investigating other fields in this form. You might ask how this got through your automated checks, if you have those - perhaps this is a browser problem that doesn't get checked. You certainly know to look at this sort of thing in the future to make sure it doesn't happen again.
Now, that's an example, but how you work may be very different. How much you investigate depends on a lot of things: your familiarity with and understanding of the technology and the product, your working relationship with the code-writing developers, what the scope of your responsibility should be, how you've been asked to work, what the devs say they want from you, how buggy the product is, how good the team is, and so on. Also, each problem you find will be a little different. 'Product doesn't even start' doesn't need a long, detailed report much of the time. Same with 'You misspelled "karaoke"'. Just consider your bug, your environment and your audience.
Hope that is of some help!
Generally speaking I try my best not to write test cases.
- Testing a user story? I'll write either an exploratory test charter or just make a few bullet points on areas that I want to check, with a view to exploring around those areas. Sometimes I'll do a mix of both.
- Need to do smoke testing? I've moved away from writing test cases, as most of our 'failures' were a result of incorrect (i.e. out-of-date) test cases, and instead keep a list of what needs checking.
- Writing a test case because I've been told to write a test case? Keep it short and simple.
What irritates me most with test cases is where there's half a dozen steps that are effectively pre-conditions, or are guiding you in how to use the application to reach the point of the test. I like to stick as much of that as possible within the pre-conditions or some other suitable place. If how to perform something isn't common knowledge, I'll add a 'suggested technique' to the test step. I try to ensure that the actual steps of the test case are focused on the behaviour. In fact, sometimes I've just written a single step in Gherkin.
For example, our old test management system had a lot of test cases like the following fictional 'rename camera' test case, which checks that renaming a camera in our config tool 'VSM Administrator' is reflected in the main 'Control Center' application:
- Run VSM Administrator. | VSM Administrator opens
- Click ONVIF Cameras. | ONVIF Cameras tab is shown
- Click 'Add'. | Add camera dialog is shown.
- … more steps on how to add a camera …
- Add the camera in Control Center | Camera is added
- Run VSM Administrator again | VSM Administrator opens
- Click ONVIF Cameras. | ONVIF Cameras tab is shown
- Double-click the camera from steps 3-8. | Camera edit dialog is shown.
- Edit the name and click OK | Name is updated in VSM Administrator
- Click Save and then OK to apply the changes. | VSM Administrator closes
- View the device in Control Center | Device name has been updated.
Vs my test case of:
- Pre-conditions: Camera added to VSM then Control Center.
- Steps:
- Using VSM Administrator, update the name of the camera under test and save the changes. | The name change is reflected when viewed in Control Center.
If people gripe at the lack of technique for my regression test case, I usually update it to: 'Following the steps outlined in our Help docs, use VSM Administrator to update the name of the camera under test and save the changes.'
My point here is that when writing a test case, it is important to me that you focus on the behaviour as opposed to the technique/steps leading up to it. If we changed the UI for configuration but the behaviour was fundamentally the same, I shouldn't need to go updating my test cases unless absolutely necessary.
Further to that, don't even write the test case unless you expect it to be genuinely required for regression/smoke/sanity testing (i.e. a form of testing where you repeat previous testing). The one that I outlined above, when you boil it down to what we're actually testing, is effectively one line: 'Renaming a camera in VSM Administrator is reflected in Control Center'. That surely doesn't need the overhead of a test case. It is also definitely not worth the overhead of writing if we aren't going to run it again. Before I got brave enough to push back, I'd spend a good 20-30 minutes writing test cases that took less time than that to perform. In fact, sometimes I'd be using the software to help me write the test cases, with the correct button names etc. By the time I'd written the test case, I'd effectively tested it. I then had to go through review etc., wasting someone else's time. Gah, the memories haunt me to this day.
For that very reason, I also started separating the more detailed test manuals (which change less often) from a short checklist for the execution itself.
Especially to give the testers space for detailed notes.
Trained testers need the manuals less often.
I really appreciate the time you put into making these posts so full of information.
Just recently I went outside my usual routine and did some exploratory testing as suggested, and I found quite a severe bug that caused the whole application to crash. I'm quite certain I wouldn't have found it before.
Usually I'd rather not write test cases - they take up too much precious time - unless it's a highly regulated industry and it's a legal requirement to document your testing in that old-fashioned way.
That being said, I did blog about this a while ago, maybe some of the tips could be of use to you: