Expected results in a Test Case

Hello everyone.
We have begun revising our test cases. Some of them will be removed, some will be updated, and of course we will add new ones.

During this revision, some questions came up: should there be an expected result for each step in a test case, or not? Or maybe we should add expected results only to the important steps?

Here are some of the reasons why we have these questions:

  1. Time consumption: projects have become complex. Even if we take only one project and try to cover it, we are not sure that the data (expected results/steps) will still be relevant in the end. We have already had such a situation.

  2. New QAs. We have specs, mock-ups, guides, epics and old test cases. If we simplify the test cases, I am not sure that a new QA will understand how everything works, even with all the necessary info available.

  3. The simple way is not always a bad option. :slight_smile:

Thank you for your advice.


Hello! I expect you will get a multitude of answers on this one as there is no right answer.
I'd say it depends heavily on how you write your test cases, who the intended audience is, and why you write them in the first place.
We aim to write test cases that are short and concise, with each step designed to have an expected result. They're written and executed by individuals who have at least a basic understanding of the system.
In the event that the expected result of a step is not met, the step, and therefore the whole test case, fails. Every action required to get there that is not actively being tested as part of this test case is a precondition in our world.
You do end up with more test cases this way, but you get a good level of granularity when it comes to reporting, and it can help with creating automated regression suites for an older system, as each test case maps to one automated test, rather than having to break apart big test cases.
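
To sketch what that mapping can look like, here is a made-up pytest example (the login feature and all the names are invented, not our real tests): each small test case becomes one automated test with a single verdict, and the actions needed to get there live in a fixture as preconditions.

```python
# Minimal sketch: one small test case = one automated test with one verdict.
# The LoginPage class is an invented stand-in for a real page object or client.
import pytest


class LoginPage:
    """Invented stand-in for the system under test."""

    def __init__(self):
        self.fields_visible = True

    def login(self, user, password):
        return user == "alice" and password == "s3cret"


@pytest.fixture
def login_page():
    # Precondition: the app is running and the login page is open.
    return LoginPage()


def test_login_fields_are_visible(login_page):
    assert login_page.fields_visible


def test_valid_credentials_log_in(login_page):
    assert login_page.login("alice", "s3cret")


def test_invalid_credentials_are_rejected(login_page):
    assert not login_page.login("alice", "wrong")
```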

Whatever works for you at the end of the day, there is no right or wrong :slight_smile:

Hi @veronica_s. Welcome to probably the biggest, and definitely the best, software testing community in the world.

Can I start by trying to make sure I understand some of your context, and some more of your question? I know it's hard, when you join a new club, to know how much detail to give. So you want to clean up your test suite; for now I'm assuming you mean an automated test suite, or are we talking about manual testing using a management tool?

  1. I'm not really answering here, but giving some of my experience that I hope is relevant. Automated tests always need to have one outcome or verdict only. A fun thing we do when converting a manual test to an automated test is to strictly break it down into steps; it makes our lives easier. But now when a test fails, the report starts trying to tell you which phase the test failed in, the tester uses that to triage the report, and using steps that way is a false strategy, because either the product feature works or it does not. Knowing that the phone contacts app crashes in most tests during number validation is not helpful to stakeholders; they only care whether the app works. Your test report should show that, so when you are adding or removing tests, try not to add tests that exist to help you triage product assembly problems. If you do, be sure to tag them as "integration" tests intended for dev and QA, with nothing to do with functionality (see the tagging sketch after this list). I may be going off on a tangent here, but test project complexity happens when, as you rightly point out, people just keep adding without taking away. Another big source of tests that are not needed later are "feature" tests: tests added to give the devs quick feedback in an area of code churn are valuable right now, but will be 90% pointless once the epic/project ends. Even if they make your stats and coverage look good, kill them. All your tests are in version control, so just go and delete all those "feature", "integration" or "non-functional" tests when you find they are no longer needed. By organizing your suite, it gets easier to kill time-waster tests.

  2. I find it's always a good play to keep a few basic or simple-to-execute tests, especially for new people to use as a starting point. Keep some of your history, those old basic tests, but be sure to rename tests when the names are wrong, to avoid confusing people with old language. Never name a test after the release name or epic if you can help it. A small suite of easy tests is often what I call a smoke test set; they need to be quick ones too.

  3. I think you are talking about manual testing here, and how the steps to follow may have checks for each step. This ties back to my first point. A test should test a full workflow or journey. If anything on that journey fails, but the end result still works, your product or your testing approach is flawed in some way. It starts to hint at there being workarounds or other ways to achieve a test pass verdict. It should be impossible to reach the last step of a test and have that final step pass when an earlier step has failed. If you can, then you probably have steps in your test that are redundant or can be worked around/skipped, or your final step is not the actual point of the test and is merely wasting the tester's time. Is your final step maybe always something like logout? Unless logging out is an important thing that is a product selling point, or is the point at which data gets saved, don't test it repeatedly and "implicitly"; be explicit and test it once. Tests should be explicit, and that's my tip for how to find tests to remove: tests that are not explicit about their value to the customer are probably not worth running every single day.
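
To make the tagging idea from points 1 and 2 concrete, here is a rough pytest sketch; the marker names and tests are invented for illustration, not something your suite must use:

```python
# Sketch: tagging tests so the smoke set and the dev-facing "integration"
# tests can be selected or excluded per run. Marker names are made up.
#
# Register the markers in pytest.ini so --strict-markers stays happy:
#   [pytest]
#   markers =
#       smoke: quick, basic tests that new joiners can run first
#       integration: assembly/triage tests for dev and QA, not functionality
import pytest


@pytest.mark.smoke
def test_app_starts():
    assert True  # placeholder for a real quick check


@pytest.mark.integration
def test_contacts_service_wiring():
    assert True  # placeholder for a dev-facing assembly check
```

Run only the smoke set with `pytest -m smoke`, or keep triage tests out of the functional report with `pytest -m "not integration"`; when the epic ends, deleting everything behind a marker is one search away.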

Yeah, sorry, a wall of text, but it's a very good question to be asking.


Hello geoffd and conrad.connected!
Thank you for the welcome and for your opinions.
You left me with food for thought. :slightly_smiling_face:

Yeah, sorry, I didn't mention that we use TestRail, and that is where our manual test cases are kept. They were created for QAs and by QAs, no matter what experience they have.

I will skip the subject of automation; here we have, let's say, a gap. I do not think that in complex projects manual testing and automated testing can live without each other. So to fill this gap, we need to prepare at least a small set of new and updated test cases which can also be used by QA automation engineers.

Much work remains to be done, but we will take small steps and see how it goes. :slightly_smiling_face:


Do share as you progress; all of us testers are on a journey. You have to be, and now you are one of our tribe. I do a lot more manual testing these days than I did in the past, and I would love to hear more of the practical frustrations (and victories) of manual testers, because trust me, automation testers have their own share too. Manual testers have superpowers, so do show them off.

This is always a great question with regard to writing manual test cases. I would say a lot depends on what you are trying to achieve.

Test cases, ideally, should be checking one specific thing. However, if it is a full end-to-end experience you want, then maybe add checkpoints along the way.

The system under test will also influence how much checking is important.

We tend to look at 'acceptance criteria' for the business and write our tests as such (Cucumber style). The main reason: they are easy to understand, while at the same time ensuring our products make it intuitive to achieve the goal.
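
To give a flavour of that style, here is an invented example (not one of our real acceptance tests), written as a Gherkin scenario with step definitions using Python's behave library:

```python
# features/steps/transfer_steps.py -- sketch only; the scenario is invented.
#
# features/transfer.feature:
#   Feature: Transfer money
#     Scenario: Successful transfer between own accounts
#       Given a customer with 100 in their current account
#       When they transfer 40 to their savings account
#       Then the current account balance is 60
from behave import given, when, then


@given("a customer with {amount:d} in their current account")
def step_given_balance(context, amount):
    context.current = amount
    context.savings = 0


@when("they transfer {amount:d} to their savings account")
def step_when_transfer(context, amount):
    context.current -= amount
    context.savings += amount


@then("the current account balance is {amount:d}")
def step_then_balance(context, amount):
    assert context.current == amount
```

The scenario text is the acceptance criterion the business can read; the step definitions carry the checks.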

Having step-by-step tests (click, check, click, check) removes the focus, I find, from what is actually going on, and is too 'directed' towards a single path for everyone to follow. Customers NEVER do what you expect.

Risk and importance have to be considered alongside coverage. As daft as it sounds, it normally comes down to money or mis-selling (and regulation), but again it depends on your industry.

At the end of the day, it comes down to what YOU and your team define as required, important and meaningful.

The big thing is: QAs learn by 'exploring', not by following a script!

Hi!

If test cases are time-consuming and not always relevant, you could replace them with more flexible alternatives like session charters, risk catalogues, checklists, and the like.

That way you don't have to try to make your wishes explicit in a communication document that may or may not be relevant, now or in the future, to another tester, in the hope that their interpretation of the document is useful or accurate.

If you replace test steps with things like desired coverage, and give purpose and reasons in the documentation instead of instructions, then QAs can learn the product through exploration and experimentation, and their feedback can help adapt and improve your charters. They can formulate and ask questions to fill in the gaps in their individual knowledge, and the experience they gain can be used to better evaluate and test the product in the future.
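
For a concrete flavour, here is an invented example of what such a session charter might look like; the headings and wording are just one way to structure it:

```text
CHARTER: Explore profile editing with invalid and boundary inputs
WHY:     Recent churn in the validation code; past escapes around long names
COVER:   Display name, avatar upload, email change, edits in two tabs at once
RISKS:   Data loss on failed save; misleading error messages
NOT:     Login/logout (the smoke set covers it); performance
TIMEBOX: 60-90 minutes, then debrief and update the charter with what you learnt
```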

That should deal with both questions: better allocation of time, and adaptive product learning. I'm happy to answer any questions you may have about my perspective on this.

More Reading
I've written about this before, in case you have a dull afternoon to fill.
