How do most teams manage their test cases, in spreadsheets or using a tool?

Hi! I’m in the process of building out a test case suite and I’m not sure whether the best approach is to do it in spreadsheets or to use a test case management tool like TestRail.

How do your teams manage their test cases?

4 Likes

Our team found success with a test case management tool. It offers better organization, traceability, and collaboration features.
However, the choice depends on your project’s needs, requirements, and preferences. If simplicity suffices, a spreadsheet works well; for more robust management, a dedicated tool is the better fit.

5 Likes

Keep in mind, there is no such thing as a perfect tool. So consider your workflows, the expected size (number) of tests, what information people (QA and non-QA) want from the tool, and how it integrates with your work processes. Then understand that there will always be aspects that don’t work well and you will just need to work around them.

4 Likes

Hi @drowen_qa,

Thanks for asking.

If you do end up going down the test management tool route, perhaps have a look at the MoT Software Testing Tools page.

Search for “management” and you’ll see 20 results (at the time of writing).

1 Like

I started off as the first QA at my company and used Google Sheets, but it did nothing beyond holding the test cases. I convinced my company to switch to an actual test management tool within Jira called Xray. It’s definitely a lot better to have a test management tool: you can compare results, link to user stories/epics, and generate a traceability matrix, and it’s easier to organize and to show what you’re actively working on.

2 Likes

Whenever I had control over it, I was not writing test cases ahead of testing.
I ran testing sessions in an SBTM manner and logged the notes wherever the project management system was hosted.
If people didn’t want to use the ticketing system but post-its, I’d have a face-to-face debrief with the dev and/or PO/PM.

Otherwise, I’ve seen them in Confluence, Jira, Excel, Google Sheets, TestLink, HP ALM, IBM something, MS Azure Test Plan, Visual Studio Teams Manager, and a couple more tools.

2 Likes

My experience is that Excel works fine up to around a hundred-odd test cases. After that, or when you have more than a dozen releases per year, Excel starts to drag you down big-time. It’s also easier to import Excel into any other tool than to move between tools, and Excel actually lets you see your tests in ways that fancy tools just cannot.

It’s a great tool to use for a few months though, because it might even prove that traceability matrices, test history, and fancy features are not really as important as people say they are.

2 Likes

I still use spreadsheets as a “scratchpad” if no other tool is available. But I don’t like it. As you note, it’s not scalable, it forces information into tiny fragments by the nature of cells, and it’s always a pain in the nethers to get that stuff into a proper test case management tool.

2 Likes

I make heavy use of Confluence and use some parts of this:

I like a business wiki because it gives me a rich tool set for varying my test notes and reports as I need them. I found the freedom to adapt individually to every issue more helpful than enforced formality. You can agree on a certain formality, but codifying it limits testing in my experience. There is always an exception (more likely many) to whatever rule you assume as the default.
Sometimes we make bug lists on these pages and the developers mark which ones they have fixed so I can retest. Not for all issues, but for some.

Started out with Excel, then moved to Xray within Jira.

Spreadsheets were brilliant for starting out, and then they weren’t: we wanted to capture more detail around the test cases and coverage, which is when we moved.

I wouldn’t suggest blindly following the most-used tool. Set your practices, understand what you want, and find a tool that fits that need.

The tool needs to fit within your practices, not the other way around.

2 Likes

I have used Redmine, Jira, DevOps, and also good ol’ Excel.
Like people have already said, there is no option that I feel is better than the rest, and what works for you and your team can depend on environmental factors.

What I have found is that regardless of which tool is used, I have been able to adjust my workflow to accommodate whatever is needed. So if you have the chance, try a few different options and see if any agree with you.

1 Like

Going to share simply because it’s a relatively rare thing to do.

At one project I’ve been on, we stored all test cases in a git repository, along with the implementation of the automation of those tests. The team was using Python, and Python has the concept of a docstring: documentation that you attach to your functions, classes, etc. There was a tool that read through all these docstrings to create reports, and also a tool that validated the docstrings (so you couldn’t put in things like “Importance: Monkey”).
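To make that concrete, here’s a minimal sketch of the pattern. The field names and allowed values (“Importance”, “CaseAutomation”) are made up for illustration; the real robottelo schema is different and richer. Each test carries its metadata in the docstring, and a small AST-based checker rejects values like “Importance: Monkey”:

```python
# Sketch only: docstring-as-test-case metadata plus a validator.
# Field names and allowed values are illustrative assumptions,
# not the real robottelo schema.
import ast
import sys

ALLOWED_VALUES = {
    "Importance": {"Critical", "High", "Medium", "Low"},
    "CaseAutomation": {"Automated", "ManualOnly"},
}


def test_locked_user_cannot_login():
    """Locked user is rejected at the login screen.

    Importance: High
    CaseAutomation: Automated
    """


def docstring_problems(doc):
    """Return a list of metadata fields with disallowed values."""
    problems = []
    for line in doc.splitlines():
        key, sep, value = line.strip().partition(":")
        if sep and key in ALLOWED_VALUES and value.strip() not in ALLOWED_VALUES[key]:
            problems.append(f"{key}: {value.strip()!r}")
    return problems


def validate_file(path):
    """Parse a test module and check every test function's docstring."""
    tree = ast.parse(open(path).read(), filename=path)
    failures = 0
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
            for problem in docstring_problems(ast.get_docstring(node) or ""):
                print(f"{path}:{node.lineno} {node.name}: bad field {problem}")
                failures += 1
    return failures


if __name__ == "__main__":
    # Fail the CI job if any file contains an invalid metadata field.
    sys.exit(1 if sum(validate_file(p) for p in sys.argv[1:]) else 0)
```

Run something like this over the test tree in CI and the build fails on any nonsense value.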

Here’s a random example of how this looks today: robottelo/tests/foreman/longrun/test_oscap.py at master · SatelliteQE/robottelo · GitHub

The team was so sold on this idea that they even maintained non-automated (and sometimes non-automatable) cases in the same way. Here’s one example I was able to find: robottelo/tests/foreman/maintain/test_upgrade.py at master · SatelliteQE/robottelo · GitHub

There was also a software lifecycle management system used by all teams in the company, called “Polarion” (now provided by Siemens). It included a module for test cases, executions, releases, reports, etc.

3 Likes

So how did the mix of tests in a CI pipeline work? Did it fail manual tests until someone performed them and overrode the gate?

For repeatable test cases I will always want to use a management tool like TestRail or TestLink. They are built for exactly that purpose.

However, when testing user stories, unless the test cases need to be repeatable, I’m keen for people to use whatever they like most for that test. I like spreadsheets when I’m juggling combinations of variables/elements/setups, etc. Most commonly I’ll use a test task in Jira. Sometimes I’ll use the Xray Exploratory App.

For automated tests, I don’t really care so much. Ideally they should be run automatically in pipelines/build jobs, and if one fails then I should get a big red NO sent to Slack.
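As a rough illustration of that last step (the webhook URL is a placeholder you’d create in your own workspace; Slack incoming webhooks accept a simple JSON “text” payload):

```python
# Rough sketch: post a build failure to Slack via an incoming webhook.
# SLACK_WEBHOOK_URL is a placeholder environment variable.
import os

import requests


def notify_failure(job_name: str, build_url: str) -> None:
    """Send the 'big red NO' to the team channel."""
    payload = {"text": f":red_circle: {job_name} FAILED - {build_url}"}
    response = requests.post(
        os.environ["SLACK_WEBHOOK_URL"], json=payload, timeout=10
    )
    response.raise_for_status()
```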

1 Like

@msh The CI pipeline would filter out manual tests and consider only the automated ones.
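For the curious, here’s a hedged sketch of how such filtering could look with pytest; the project’s actual mechanism may well have been different, and the “CaseAutomation: ManualOnly” docstring field is the same illustrative assumption as in my earlier sketch:

```python
# Hypothetical conftest.py: deselect tests whose docstring marks them as
# manual, so a CI run executes only the automated ones. The field name
# "CaseAutomation: ManualOnly" is an illustrative assumption.
def pytest_collection_modifyitems(config, items):
    """Standard pytest hook, called after test collection."""
    selected, deselected = [], []
    for item in items:
        func = getattr(item, "function", None)
        doc = (func.__doc__ or "") if func else ""
        (deselected if "CaseAutomation: ManualOnly" in doc else selected).append(item)
    if deselected:
        # Report deselection so pytest's summary counts stay accurate.
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected
```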

There was separate automation that would ensure the tests in the git repository were in sync with the tests known to Polarion. At the end, CI would upload the automation results to Polarion, which created a “test run” instance, or however it was called. Technically, if you had any manual tests to your name, you were supposed to go to Polarion, find the run, and change the test execution statuses manually. It went as well as you can imagine: many people skipped that step, and the manual tests were executed only once in a while.

Satellite (the product this suite is testing) is software that customers install on their own machines, often in environments with limited Internet connectivity. A version with new features is released twice a year. So there were no pipeline gates in the same sense as you would find in a SaaS product.

Excellent summary and explanation, thank you. I’ve been working with SaaS for so long it’s hard for me to quickly shift to the longer cycles that other solutions require. :smiley: