Are we afraid to talk about non-automated test cases in 2024?

@rosie spotted that there aren't many posts tagged with test-cases, and we both wondered why.

Is it just not cool to talk about them anymore? Have we all moved away from test cases because of the many ways they can be flawed in certain contexts? Are we scared to talk about them because the "test case" police will rock up and tell you "You're not testing 'cos you're using cases!" :roll_eyes:

I have a hunch that there are plenty of testers out there who still design and run non-automated test cases with much success. Perhaps as part of a regression checklist or a set of post-release sanity tests.

How often do you design and run test cases that are not part of an automation suite?

  • I design and run non-automated test cases for the majority of my testing efforts.
  • I use an even mix of non-automated test cases and structured exploratory testing sessions.
  • All my non-automated testing efforts are via exploratory testing sessions. Test cases are automated.
0 voters

I'd love to hear from folks who derive value from test cases that aren't automated. How do you use them, and how do they help?

5 Likes

How about the possibility that the active audience on this forum has changed?
Or did the overall number of posts decrease?
Or do people simply no longer find test cases interesting to discuss (there are other, more interesting topics)?

If I had to respond for my last two companies, none of the poll choices would apply.
One:

  • I was automating scripts, or using various tools (that would automate some part of a task) to help with my testing;
  • I was creating UI automation without test cases (but using user flows agreed with the business);
  • I was developing API automation, or preparing API request templates, without test cases, but scoped to regular data-structure and processing checks, or to explorations when external/internal services were updated;
  • I was doing risk-based testing with a session-based approach, at varying exploratory levels, depending on the amount of available knowledge.

Two:

  • The test cases are written by others who aren't in testing roles - analysts;
  • I do the testing (to discover risks and problems);
  • I automate UI checks based on the given test cases (though not exactly those);

From what I've seen, a "test case" can mean different things; any of the following might be called one, either literally or generically:

  • a checklist of items;
  • a list of use cases;
  • ideas or charter titles;
  • detailed scripts;
  • long sequence scenarios.
3 Likes

And for those who are still doing test cases, how do they approach them?

How much detail do you have to provide?

3 Likes

The company I am at has several different types of products, and some are easier to create automated tests for than others.

1 Like

Well… there are a couple of things here that kinda made me squirm :slight_smile:

Anyone writing a test case is doing testing. Even just teasing functional details out of a designer, architect, product owner, etc., is doing testing.

A test case, as you noted, can be a variety of things: from simple statements, to exhausting manual steps, to the commentary in automated tests.

IMO test cases are extremely valuable things to describe. You are planning your work. You are clarifying others' work. You are including other disciplines in understanding the scope of your work. You are then able to ask questions like: What is the risk value of this case? Should this case be automated? Should this case NOT be automated?

Even when I was running a Kanban-organized team, I insisted that the testing of a given change had to be described in positive/negative/null cases.

This served multiple purposes. I could read the card and understand what the work was (and thus speak intelligently about it when asked - keeping my team from being interrupted for a simple question). It also served as further information when we conducted RCA and Five Whys for production defects, which could happen some time after the event had passed.

I don't think there is "One True Way" of organising QA work, so I might sound a bit "on the soapbox". But I'm really not on it :slight_smile:

3 Likes

I plan, design, note, gather, summarise, describe, excise, and elaborate a lot in my testing.
I would not call much of it a "test case" in the classic test-case-management sense, with predefined steps of actions and results. I, or more precisely my coworkers, have a few such leftovers from the past.
I'm very glad about this freedom.

I guess, as I see testing as always being an exploration (it's a spectrum of how much), I technically do mostly exploratory testing.
I test!!!

The best use I have for test cases is keeping them headline-only.
I also use test cases in my automation.
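
To illustrate what headline-only can look like when it carries over into automation, here is a made-up sketch, assuming pytest (the headlines themselves are invented): the headline becomes the test name, and that is the whole written case.

# Each headline-only case is just a descriptive test name; the bodies are
# placeholders for wherever the automated checks (or session notes) end up.

def test_new_user_can_register_with_a_valid_email():
    ...  # the automation for this headline would live here

def test_registration_is_rejected_for_a_duplicate_email():
    ...

def test_password_reset_link_expires_after_one_use():
    ...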

I think it is an interesting topic which has a lot of nuances.

My view can be broken down into this (I think :D):

  • Test cases are just one way of codifying explicit knowledge
  • There are a lot of reasons to codify explicit knowledge, one of the main ones being to transfer or align knowledge in some shape or form
  • How we codify our explicit knowledge depends on what we are trying to achieve, who the recipient is, and what the cost of doing so is

I think there is definitely room for discussions on how to best codify knowledge for different purposes.

Examples of codifying knowledge aside from test cases could be acceptance criteria, design documents, test missions, test scope mind maps, exploratory testing reports, etc.

If you have a third-party test team that runs acceptance testing for you, then you probably want a lot of detail (because the scope is static and their knowledge is limited), so you could codify your knowledge in the form of test cases. If a tester is part of a development team, and the team does all of its own testing, then codifying a lot of detail may just be a waste of time because everyone already has a lot of knowledge - maybe high-level test missions and design documents are a better option.

I have written some articles which are somewhat related.

Codifying knowledge in tests:

Moving from scripted to exploratory testing: "Moving from scripted regression testing to exploratory testing" (PDF)

Best regards,

Johan

1 Like

My reply is probably not going to be well received, but here's the reality in many large companies:

I've moved on from being a tester and am now a functional consultant, configuring large ERP systems. My FUT is exploratory and follows a "checklist" approach, with a final run that is formalised into a test results document. We then deploy from Dev to Test, where our more formalised SIT/UAT takes over, run by a large external system integrator with inexperienced test managers and executed by inexperienced testers.

They expect detailed step-by-step test cases to be provided to them by a business analyst, and when executing the tests they do not deviate from the steps. The focus of testing is on getting a pass, not on looking for issues. Testing is then formally signed off and the developments are promoted to Production.

I believe that test cases can be useful to the person writing them (the old adage that you only truly understand something when you teach it to someone else), but in my experience there are too many testers who use them as something to sign off, not as a starting point for finding issues and deeper testing.

3 Likes

Iā€™m thinking a useful thread/question/discussion might be:

What formats do you use, or have you adopted, to create or design your test cases?

What do you think?

You get no disagreement from me.
Human Automation is incredibly inefficient and provides the least information about the quality of the current state of a project. I use that term deliberately, as distinct from "Manual" or "Exploratory" testing.

I find this "just do what the case told me to do" attitude most frequently in QA folks new to the discipline or in need of mentorship; less frequently in people who are "punching a clock" or have been trained by courses or certs to act in that fashion. (This will be my controversial opinion…)

It's varied depending on the needs of the project.
Most recently I was using the Azure DevOps tool. It's… decent. It's got all the necessary buttons and dials. It does have its quirks, though.

But in times of need I have resorted to spreadsheets or even bulleted lists.

For me the driving factors are: Organizing my approach, communicating the activities, providing a means of review.

Over my testing career I've moved through different roles/team structures.

Initially I was in a 100% manual tester role, in a team with about 5x other manual testers and 1x automation tester.

Then, in that same team, I migrated to 20% automation / 80% manual. The remaining team members stayed 100% manual or automation.

I then moved to a company where they had no internal manual testers; they had a company contracted to do it whenever there was a release. Also no automation testing of any kind.

This company hired me to be an automation tester but to 'help as needed for manual testing'. Which translated to spending 6 months fighting to get the time to even set up the most basic automation suite between chaotic release schedules.

Eventually it got to the point where all my time was 100% on automation and a second internal tester would focus 90% on the manual testing and 10% on helping with automation (this in addition to the external manual testers being used as needed).

I'm now in a company where I was hired as a 100% automation tester with no expectation of manual testing. They have internal Subject Matter Experts (SMEs) who do the manual testing of new features prior to automation, plus the smoke tests that haven't been automated for business reasons. The SMEs sit in a team within the QA department, separate to the Technology department. However, 1x SME is assigned to our product team, and they cycle them every 3 months.

In every single one of these roles/teams there has been a mix of automation testing, exploratory testing, manual test cases and sanity/smoke/deployment checklists.

However, the manual test cases have had different levels of detail between the different teams, based on the needs and the people using them.

In the first company we had a lot of manual testers who were long-term staff (10+ years), so they knew the business well. The test cases helped to plan the expected testing workload. They also allowed us to have them reviewed by another manual tester prior to execution, which helped catch edge cases or prevent testing things that didn't need to be tested.

These test cases didn't have detailed test steps. They were a headline that summarised the scenario, plus any information a tester might need about preconditions, and links to requirements/business rules/process documents that could be referenced if someone didn't know the feature/functionality.

These test cases were also used as the guide for the automation testers to know what scenarios might need to be automated. Often the automation tester was new to the business, so this meant they weren't starting from scratch deciding what scenarios needed to be covered. We would go through our manual scenarios with them, and together we'd decide which ones were worth the ROI to automate now vs which were merely candidates for automation later.

Meanwhile, at the second company, because we were working with an external contracting agency for our manual testers and had very tight deadlines, our manual test cases were very different.

They were more detailed, step-by-step instructions, with a lot more information included. We would often have to include screenshots of the things they had to interact with in each test step, along with the expected results.

This was made worse by the fact that it was a medtech company dealing with receiving CT scans, generating a 3D model of the heart's vessels, and then reporting a lot of information across a lot of screens. It was a diagnostic tool whose users would be radiologists and cardiologists. So it was also complex trying to explain to an external tester with no background that they had to find and scroll down the Left Main artery in one of the 10x views until they reached a slice of the scan with a section of stenosis and plaque, then check that about 20 fields display or don't display certain things.

My current company does have manual test cases, but they are used more for auditing purposes by the QA team, to validate that something has been tested. I've had a look at them occasionally when looking for info on historic features I'm trying to automate, and often they don't include any real information. This does seem to depend on who creates them.

Often it will be something like…
Test Case name = User login
Test step = Tests that a user can log in
Expected results = user logged in

If you get a good one, it might have a test step with a list of different user types/scenarios.
And that will be the 1x test case for the entire login/logout/auth feature :rofl:

I don't think anything drastic happened. I view test cases (TCs) as a stepping stone, the building blocks of any testing effort (except exploratory, of course). It's kind of like how you have to know HTML in order to use JavaScript in it; likewise, you have to write TCs in order to do most testing.

What bothers me more, though, is the quality of those tests, and boy oh boy, how bad those are sometimes!! Most testers I've known or interviewed write in an imperative style, and there's usually no one to teach them better practices - namely, the declarative style of writing TCs.

I think declarative writing is superior in most cases… does anyone strongly disagree? One exception I can think of is what people above mentioned: when you have an external company/team doing the testing, all they care about is having the test steps as detailed as possible.

2 Likes

Hi @ivoqa. Thanks for sharing.

Would you mind sharing a couple of examples of the same test case using imperative and declarative style?

Sure. Cucumber's docs, for example, are full of good material; here's one page:
https://cucumber.io/docs/bdd/better-gherkin/
Imperative:

Scenario: Free subscribers see only the free articles
  Given users with a free subscription can access "FreeArticle1" but not "PaidArticle1" 
  When I type "freeFrieda@example.com" in the email field
  And I type "validPassword123" in the password field
  And I press the "Submit" button
  Then I see "FreeArticle1" on the home page
  And I do not see "PaidArticle1" on the home page

Declarative:

Scenario: Free subscribers see only the free articles
  Given Free Frieda has a free subscription
  When Free Frieda logs in with her valid credentials
  Then she sees a Free article
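
For what it's worth, the declarative style doesn't throw the detail away; it just pushes it down into the step definitions. Here is a rough sketch of how those steps might be implemented, assuming Python's behave library (create_free_subscriber, login_as and visible_articles are made-up helpers, not real APIs):

from behave import given, when, then

# NOTE: create_free_subscriber(), login_as() and visible_articles() are
# hypothetical application/page-object helpers, shown only to illustrate
# where the imperative detail ends up.

@given("Free Frieda has a free subscription")
def step_free_subscription(context):
    # The concrete test data from the imperative scenario now lives here,
    # out of the scenario text.
    context.user = create_free_subscriber(email="freeFrieda@example.com",
                                          password="validPassword123")

@when("Free Frieda logs in with her valid credentials")
def step_login(context):
    context.home_page = login_as(context.user)

@then("she sees a Free article")
def step_sees_free_article(context):
    articles = context.home_page.visible_articles()
    assert "FreeArticle1" in articles
    assert "PaidArticle1" not in articles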

The first question most people ask is: but what if a tester is new and doesn't know the steps? The answer is simple but often hard to do: have good docs and tutorials. Having some tests that cover the basic functionality in an imperative style can also be beneficial. But most tests can be broken down into more basic tests. To use the fictional example from above, it might be something like "User A should be able to log in", "User A should be able to get a free subscription" and "An article should have a checkbox to make it free for all" => and only when you've done all three of those are you equipped with enough knowledge to progress to more complex scenarios like the one above (even if it looks easy, it implies a lot of domain knowledge, and the basic scenarios are there to help gain that knowledge - especially if tutorials and/or documentation are lacking :smiley:).

The key point is the clarity and understandability of the written text. Even an imperative style can be okay if it has a good Test Title (a.k.a. Scenario in the case above), accompanied by an optional Test Description, or at least a link to some further reading that explains things more in depth, like a Feature Description or internal docs/wiki, etc.

In the real world, tests are written by multiple people with multiple writing styles, and very often have a confusing Test Title followed by a myriad of Test Steps - sometimes over 20 or even 40+ steps. In such cases, one easily gets confused and overwhelmed, and has no clue what the test is all about in the first place!

Similar thinking goes for programming - not just the production code but, of course, the automation code too :smiley:

3 Likes

Thank you for the links! Very helpful.

Apparently I had some thoughts 9 years ago (which I had since forgotten) that are very similar to this:

I didn't know the right nomenclature (declarative/imperative) back then, so it's really nice to have that now.

/Johan

3 Likes

Very good article, Johan! Just glancing through it, I caught myself nodding a lot :smile: I need to re-read it in full once I have more time.

And I also totally know how you feel, as I've felt it before - namely when I joined a team full of seniors for the first time in my career. They were using these fancy words like closures, stubs, mocks, thunks… I was thinking to myself "OMG, what IS all that?!", only to realise after some time that I had been using all those coding patterns for years; I just didn't know that's what they were called :smile:

2 Likes

Recently I experimented with the EXPLORE-WITH-TO DISCOVER format from the book "Explore It!", as described here.

It was for a group of security specialists doing some application testing on a user access management system. They ended up writing data checks in Excel, as the charters were not detailed enough.
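
For anyone unfamiliar with the format: the charter template from the book reads "Explore <target> With <resources> To discover <information>". A made-up charter for a user access management system might look like:

Explore the role-assignment screens
With a user who holds two conflicting roles
To discover how permission conflicts are resolved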

1 Like

I often have test cases that are run by people other than professional testers; sometimes they need explicit technical details, other times they just need a one-liner. Having to grasp a full ET charter format is a stretch, as I mentioned above.

Plenty of public service contracts here require explicit test cases, even though they often seem to be busy work - they only confirm that things work once. That being said, some of the public case-management scenarios are rather tricky and require know-how that cannot easily be automated.

It's definitely a case of test police for me.

I mostly work with non-automated test cases and I like it. They are very helpful for planning your testing and organizing your thoughts. In the end I have some documentation that I can come back to or share with others. We just needed to find the right amount of detail that works for us: it needs to be enough for you to remember what you did a year later and to answer questions about whether certain scenarios were covered.
Depending on the scenario I'll write brief steps or use a table. Currently I'm testing a lot of permissions, and a table with actions and different roles is very helpful:

        Create action   Delete action
Admin   Yes             Yes
Guest   Yes             No
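
If a table like this ever needs automating, it maps almost one-to-one onto a parametrised check. Here is a minimal sketch assuming pytest, where can_perform() is a made-up stand-in for the real permission call:

import pytest

# Made-up stand-in for the real permission check; in practice this would
# drive the API or UI of the system under test.
ALLOWED = {("Admin", "create"), ("Admin", "delete"), ("Guest", "create")}

def can_perform(role, action):
    return (role, action) in ALLOWED

# One parameter tuple per table row: (role, action, expected result).
@pytest.mark.parametrize("role, action, expected", [
    ("Admin", "create", True),
    ("Admin", "delete", True),
    ("Guest", "create", True),
    ("Guest", "delete", False),
])
def test_role_permission(role, action, expected):
    assert can_perform(role, action) == expected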
1 Like