Test Case Management in Excel/Google Sheets?

I hear a lot of people mention tracking their test cases and testing efforts in spreadsheets as an easy alternative to more complex tooling…

But with the infinite customizability of spreadsheets, my mind is boggled by imagining all of the possible ways one could do that in a streamlined manner!

I’d love to see some examples of how people manage testing across a whole team in a spreadsheet.

If there are any spreadsheet testers out there, I’m curious -

Does anyone have a template they use?
How do you track results over time while still maintaining an easy-to-read dashboard?
How do you assign work and keep track of what’s left to do?


Pretty sure there was a thread either here or on the bird platform on this topic very recently.

In my 1st 2 jobs as an SDET I worked for a large company, so they had fancy tools (which did and did not work). The tools often get provisioned in a way that suits the testing culture and process, and they make it more rigid over time (which can be good and bad). Excel completely removes those barriers and is probably the only “agile” test-tracking tool in existence. Test management tools are often terrible at helping the tester identify testing gaps, boundaries, duplication and interfaces, so I’ve yet to encounter one that I find really helpful in serious use, like finding test coverage gaps or dry spots.

I changed jobs 3 years ago, and I recently asked my test manager to get us a new tool, because our old tool stopped being used across all teams. So I’m using Excel for some projects. I was going to just share some screenshots, but since I keep changing the format drastically, these shots are just to illustrate.

  1. What’s missing in this screenie is the final verdict, since this is a snapshot taken when we started the testing. Each time we get a new release candidate build, I add its Jenkins link to the list at the top there.
  2. For each RC, you typically have to copy-paste this entire spreadsheet and start a fresh one, or just copy-paste the tab that has all the tests (I blacked it out here), so you can end up with multiple tabs.
  3. The blacked out tab is the list of regression tests, but there is a tab called “Exploring”
  4. In current iterations of this I have a special “Security” tests tab separated out. Basically the same as the “Exploring” tab.
  5. You can even add a tab for smoke tests
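The copy-a-tab-per-RC step in points 1 and 2 is easy to automate. Here is a minimal sketch in plain Python, modelling the workbook as a dict of tabs; the structure, `start_rc_tab`, and the Jenkins URLs are my own illustration, not Conrad’s actual sheet or any spreadsheet API:

```python
import copy

# A "workbook" modelled as a dict of tab name -> rows; each row is
# [test name, result], with the result left blank (todo) for a new RC.
TEMPLATE = [
    ["Login with valid credentials", ""],
    ["Login with expired password", ""],
    ["Checkout with empty basket", ""],
]

def start_rc_tab(workbook, rc_name, jenkins_link):
    """Clone the blank regression template into a fresh tab for a new
    release candidate, and record its Jenkins link on a summary tab."""
    workbook[rc_name] = copy.deepcopy(TEMPLATE)
    workbook.setdefault("RC builds", []).append([rc_name, jenkins_link])
    return workbook

workbook = {}
start_rc_tab(workbook, "RC1", "https://jenkins.example/job/42")
start_rc_tab(workbook, "RC2", "https://jenkins.example/job/57")
print(sorted(workbook))  # ['RC builds', 'RC1', 'RC2']
```

The `deepcopy` matters: each RC gets its own fresh copy of the regression list, so marking results in one tab never touches another, which is exactly the multiple-tabs-per-release pattern described above.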

How you manage “generations” of the sheet for every release or every test iteration is pretty much free-form, but if you push the sheet into a shared location, Excel will keep it in sync for you.

You should be able to happily add as many columns as you like to the test sheet, per test platform or per test scenario, off to the right. I use only 3 colours here:

  • an amber box = fail, and must have a Jira defect
  • a green box = pass
  • a gray box = skip (do not test unless time allows)
  • a blank box = todo
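That colour legend maps straight onto a dashboard tally. A hedged sketch, assuming results are also stored as single letters (“P”, “F”, “S”, blank) alongside the colours; the encoding and `summarise` are my assumptions, not from the sheet above:

```python
from collections import Counter

def summarise(cells):
    """Tally pass/fail/skip/todo counts from a flat list of result
    cells, mirroring the green/amber/gray/blank colour legend."""
    labels = {"P": "pass", "F": "fail", "S": "skip", "": "todo"}
    return Counter(labels[c] for c in cells)

results = ["P", "P", "F", "", "S", "", "P"]
print(summarise(results))  # e.g. pass=3, fail=1, skip=1, todo=2
```

Blank-means-todo is what makes the remaining-work question answerable at a glance: the todo count is just the number of empty cells.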

As for assigning work to various people, I find that splitting the sheet into 2 half-sheets makes it easier to divide the work, but I have never found that 2 people running through the same list causes any friction, since Excel updates live. My current test iteration has the tests split out into 5 sheets now. Basically, the freedom to change the format at will, and the fact that you can even give this sheet to someone in the Marketing team and ask them to help test, is a huge bonus. (That only works if you use a Google Drive or OneDrive spreadsheet.)
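One simple way to divide a test list between people is to deal the rows out round-robin. A toy sketch; the scheme and `split_tests` are my illustration, not the exact split used here:

```python
def split_tests(tests, testers):
    """Deal tests out round-robin so each tester gets a
    near-equal, interleaved share of the list."""
    shares = {name: [] for name in testers}
    for i, test in enumerate(tests):
        shares[testers[i % len(testers)]].append(test)
    return shares

tests = [f"TC-{n}" for n in range(1, 8)]
print(split_tests(tests, ["Ana", "Ben"]))
```

Interleaving (rather than top half / bottom half) has a side benefit: if one person falls behind, the untested rows are spread across the whole sheet instead of clustered at the end.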

This is all just my personal opinion; I’m not an expert on the process side. I’d prefer not to do it in Excel, but this is just a 2-week-long exercise for major releases to force the use of eyeballs in our testing, so I’ve come to like it for its freedom.

  1. Only for release testing.
  2. We don’t track results over time. We used to have a test case management tool that did all that but no one cared.
  3. We typically test per user story, so it’s rare that there are multiple people testing at once. When that is the case, we just use a simple sheet.

I like using Google Sheets for times when my testing is better represented in a table than a list. Usually that’s because there are a few variables/permutations to wrap my head around, or I need to keep track of what I have/haven’t tested.

Sometimes I’ll draft it in advance, like below, or it will be something I put together as I test. Doing that can often highlight an edge case I hadn’t thought about.

(The colour scheme was me being a bit of an arse after my first sheet was rejected for being too complicated)

When we have a bunch of things to test split across people, Google Sheets can be useful:

(Yes. AC 7.1… and that is only a glimpse of the sheet)

Also our release testing follows a template, with the below example at the top:


I’m so glad I’m not the only person who had the guts to show actual, real but anonymized examples graphically, Richard. And that I’m not the only person who loves the freedom this gives: because it’s fully open, whenever someone adds a new test case to the sheet, everyone sees the new focus area open up and can feel encouraged to add more test cases below the new row.


Thank you for the detailed response Conrad!

This is kind of how I imagined things too. It seems there are always trade-offs between proliferating files vs. sheets vs. row count, so it’s good to know I’m not the only one who took this straightforward approach. Now I’m just considering how best to automate it. :slightly_smiling_face:

You say that you’d prefer not to do it in Excel though - what would your ideal alternative be?

Agreed! The simplicity of this approach definitely lends itself to the more human side of testing - people take notice of things that could otherwise get lost in the noise!

Personally, I love the color scheme! :smiley:

The template seems pretty intuitive - thank you for sharing!

Why do you limit spreadsheet usage to only release testing? Do lesser testing efforts just not warrant the overhead that comes with this level of reporting/tracking?


We just keep all of our day-to-day test results/notes on the Jira tickets. It may be worth calling out that our scrum team is the only team on our (rather large) solution, so the target audience for our test results is pretty much ourselves. Testing is per user story, so the board shows how much is in flight. Our release testing is treated differently because it is the only time we have specific manual regression tests to run.

Our test case management software went offline a couple of months ago and we’re quite content to leave it that way. I hope that I never have to deal with a massive suite of manual test cases ever again.