It’s bad naming. Customers don’t care about functional or non-functional; those are just requirements. See them as quality attributes. You can only check automatically; you design ideas using exploratory testing.
I’d say there are two reasons I write/use test cases:
- To organise my thoughts on a complicated area, to make sure I’ve covered all the important situations and get insight into the scope of what is going wrong
- In order to ‘speedrun’ a retest, whether that’s me having to regression test the area in the far future or as a smoke/sanity test when it comes back after a failure
Further to this - if I log a defect during exploratory testing, I’ll need to log some method to reproduce, in order to validate a fix. Do these steps then become a test case? I would say yes, for exactly your reasons here.
Wheee! It sounds like you have had a fun career so far!
This brings to mind a couple things I have done to gain the benefits of both exploratory/manual and automated testing. As well as to bring rigor to the manual cases.
I began a practice of pairing manual/exploratory testers with automation testers. The manual cases would become a backlog for the automation engineers.
Because multiple people would interact with the manual cases (whether they were created through exploratory testing or designed from whole cloth), an authored test would be given a status of “ready” in the test management tool. Before it could become “active” it had to be reviewed by another tester.
It sounds like a lot of busy work, but it became a much smoother operation as the “muscles” got used to it, and we had well-formed cases that anyone could run and automation engineers could pick off and automate.
I like it.
I tend to work in the other direction. I log my exploratory activity in a test case as I go. Then I don’t have to try to remember what I did. I’ve got repro steps, and I have a rough draft of a supporting manual test case for later use or as backlog for automation.
I do adjust a bit because of the type of activity. For example, I will describe the exploratory scenario in a user statement: “As a customer I want to view my current statement, so that I can see how poor I am”, etc.
Oh I need to take some time to digest that.
I was griping at Product that their user stories had very little in the way of acceptance criteria. I finally put my foot down and said “The acceptance criteria are what we are going to test. If you want something tested, get it in the AC.” It was an alley fight for a while but eventually we got things in a better place.
I must have missed this one originally.
It depends on what you mean by test cases.
If you’re talking about structured tests including steps in detail etc (i.e. the classic textbook TC), then I don’t. But if you mean a list of scenarios or specific test ideas then absolutely I do.
A lot of the testing I do is not really testing via a UI. It’s data-based (data being processed), so while you can still apply some exploratory approach, a lot of it needs specific test conditions.
Hear hear.
For me it starts with “What is the problem that the application/feature is solving?” then “How do we test that this implementation is solving that problem?” From there you design scenarios intended to test (or “explore”) the answers to those questions. Approaching the application “as a user” can be extremely variable because you can define user as another service, a data store, infrastructure, a bad actor, etc. I might be misunderstanding a thing or three here, but I find I have to make sure I’m not only thinking of the user interface as an entry point to exploratory testing.
I was working on a product in which a service was a pod in a Kubernetes node; there could be many running at a time. A developer whom I admire wanted to do some of his own testing of the service he was working on, so he created a program that would execute from the IDE to do a variety of things with the service. When I found out about it I made grabby hands and demanded it be put into the source repository. I used it to perform a variety of more expanded exploratory tests. We found some good defects and understood a lot more about a remote (other company) service we had to interact with that was (intentionally) obfuscated by lacking and poor documentation (cough Google cough). We documented the findings and activities just like any other testing and made them available to anyone who might later have to refine that service. IMO, it is every bit as much “testing” as any other activity.
Johan, I went back and read this. Very valuable!
We did very similar things at my last job, but it wasn’t QA who was responsible for AC, it was the whole team. Product would come in with what they wanted as AC; during story refinement QA, Dev, and Product would review and expand the AC as we examined the story. The AC would then become test cases (manual/exploratory for the dev cycle and feature testing, then automated for regression).
thank you for sharing that!
Hahaha it definitely hasn’t been boring!
I do agree that while it takes some getting used to, and some people will see it as added work, this sort of practice definitely helps everyone. The manual testing quality improves when there is a second set of eyes, and so does the quality of the automation testing.
Cause tbh I think one of the biggest issues auto tests face (tech stacks aside) is actually testing the right thing. Just because it runs and returns a green tick doesn’t mean it’s providing value in WHAT or HOW it is tested.
Also, the other added benefit of the system you mentioned is that when, inevitably, some manager asks “how much coverage of automation tests do we have?”, we have something to answer with.
Managers are used to unit tests being explained as coverage, and they don’t understand that auto tests don’t work the same way. Unless you give us a definition of what we are covering, we can’t tell you what % of coverage we have. So if we have a process where manual tests become the backlog for automation engineers, this resolves that issue.
A further added benefit of this approach is that when someone down the track asks “why the hell do we have 10x auto tests for this thing I don’t think needs to be tested?”, you have traceability back to your manual tests for the reasoning behind those tests. Which helps when deciding whether to keep or axe certain auto tests (especially if team members change frequently).
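The coverage argument above can be sketched as a tiny calculation, on the assumption that the manual cases are the backlog (and therefore the denominator). The case IDs and statuses below are invented examples, not any real test management export:

```python
# Hypothetical sketch: once manual cases are the automation backlog,
# "coverage %" finally has a concrete definition. All data is invented.
manual_cases = {
    "TC-101": "automated",
    "TC-102": "automated",
    "TC-103": "manual-only",  # e.g. needs human/eyeball judgement
    "TC-104": "in-backlog",   # picked up by an automation engineer later
}

automated = sum(1 for status in manual_cases.values() if status == "automated")
coverage_pct = 100 * automated / len(manual_cases)
print(f"Automation coverage: {automated}/{len(manual_cases)} cases ({coverage_pct:.0f}%)")
# → Automation coverage: 2/4 cases (50%)
```

The point is only that the denominator comes from the reviewed manual cases, not from lines of code, so the number actually answers the manager’s question.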
EXACTLY!
A thousand automated tests that don’t have any direction are essentially wasted effort.
From experience, test cases are still used in many organisations, and they often complement other forms of testing.
I do agree that it’s not seen as cool to be discussing the subject, and the responses you can often get back when asking a related question about test cases, or even manual testing in general, can be negative.
I regularly write test cases for manual testing. The level of detail can vary dramatically, from a rough note to detailed steps and expected outcomes. Mostly I’ll try to just write out a list of scenarios so I can copy and paste it into a Jira comment to show what’s good (or not). I’ll get more detailed in my test cases if there are multiple variations that can be run in the same or very similar scenarios.
Thankfully, we don’t have much in the way of anyone reviewing test cases or them being used for reviews. They’re mostly just an aid for the tester, so they can be customised to that tester. (We have regression test cases which follow a detailed format, but they don’t need updating every day.)
I’m a fan of using whatever method is the most appropriate, considering factors such as frequency, investment, and practicality. Where I’ve found test case-like testing to be most appropriate is when dealing with embedded software, or companion software for IoT devices. In these cases, you need to physically do something - insert coins, create physical distance, or even home-make your own Faraday cage from an old jar and a great deal of tin foil.
I say test case-like only because the term “test case” to me suggests very prescriptive, step-by-step, written scripts. After eight years as a quality engineer, and numerous projects with different needs, I’m still not a fan of these. But I understand they have a place, and that’s okay. What I do instead when doing regression testing or checking a fix that isn’t full-blown exploratory testing, for example, is usually to use a mental model of HISToW to guide me. When it comes to formally documenting how to do this type of activity, I think it really depends on the context and the target audience.
Joining this late but I feel that all I ever do is talk about non-automated testing. Granted it’s outside of this forum so the tags wouldn’t show it heh.
There’s a lot of value in knowing that testing encompasses more than just the topic du jour of the social media bubble. Yes, AI and automation and engineering are sexy and people want to talk about them… but the basics are important to acknowledge too.
I’ll mostly use automation for repetitive tasks. The rest of my time would be spent exploring and discussing things with the developers.
A bit late to this convo as well. I did not vote. After a bad experience or two I read Explore It! by Elisabeth Hendrickson and it fully settled in: do not automate anything until you have manually tested it almost to death and understand the difference between your observations of the system and the actual changes in the system.
What do I mean by that? Well, it’s often easy to test that a product accomplishes X by validating that, for example, after tapping the save button in the product, a disc file is later present. Then going and writing a test case to validate that c:/users//Documents/productname/save/Document001 exists, and writing an automated test that checks and validates the file size and creation time. And then, a week later, discovering that the file should have been saved in the Office 365 OneDrive, which was not logged in when you tested it all manually. My way around this knowledge gap is to write all manual tests down and get them reviewed by both the coder and one other person on the team before I automate. Write down your manual check in a brief but clear fashion on one page.
Usually one manual test will spawn off three or four automated tests, but if the logic behind the manual test was never reviewed, you can end up with a lot of incorrect or simply unimplemented automated tests a few weeks later. I often end up with one manual test covering human/eyeball checks, or something I don’t trust the automation to do well, for every 3-4 automated tests.
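The failure mode in that save-button story can be sketched in a few lines. Everything here is invented for illustration: the paths, the `save_location` stand-in for the product’s real save logic, and the signed-in/signed-out flag.

```python
# Hypothetical sketch of the brittle check described above: the automated
# test encodes the one path the tester happened to observe manually.
from pathlib import Path


def save_location(onedrive_signed_in: bool) -> Path:
    # Stand-in for the product's actual behaviour: it saves to OneDrive
    # when signed in, and only falls back to local Documents when not.
    if onedrive_signed_in:
        return Path("OneDrive/productname/save/Document001")
    return Path("Documents/productname/save/Document001")


# The manual session was only ever run signed out, so the automated
# check hardcoded the local path...
assert save_location(onedrive_signed_in=False) == Path(
    "Documents/productname/save/Document001"
)

# ...and the signed-in case shows the check was validating the wrong
# location all along: the file is elsewhere and the old assertion would fail.
assert save_location(onedrive_signed_in=True) != Path(
    "Documents/productname/save/Document001"
)
print("the hardcoded-path check only holds for the signed-out case")
```

A review of the manual test by the coder would have surfaced the signed-in branch before it was baked into an automated assertion.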
No, we are not afraid
it is completely okay to have non-automated test cases in 2024
and some cases might be too difficult to automate. Trying to find a compromise is not always possible: it’s not always possible to use test cases that are easy to automate, and attempts to automate everything are not always reasonable, cost-effective, or worth it.
I’ve noticed that many teams today are increasingly adopting a shift-left approach, emphasizing sophisticated practices like test automation, CI/CD, and TestOps. While these practices offer significant benefits, they also raise numerous questions and challenges. On the other hand, manual testing, which often includes exploratory testing and the use of simple checklists, tends to be more straightforward and easier to understand. Even though manual testing teams may encounter questions, these are usually related to business logic rather than the complexities of writing test cases.
I think with automation getting a bigger push some people do feel like “if we speak of manual testing it’s taboo”… However, I think that’s where we need to get better with our language as well. People are great at testing but suck at regression. Computers are great at regression but suck at testing.
I can tell a computer to do the same thing a million times and it will do it that way and only that way. You tell me to do the same thing 5 times, and I’m finding a way to make it faster because I’ll get bored. However, you tell me to dig into something and figure it out, and I can be creative and think outside the box. Computers only do what I tell them to do, so it’s not like they can really go testing for me.
People = Testing
Computer = Regression
