At one place I worked, we had a simple regression pack on a spreadsheet that we’d just keep adding to. It took discipline for the whole team to know when to add to it, remove items, and edit existing test cases. Our team was small enough that it was manageable, yet even so it would become a bit messy from time to time – duplicates worded slightly differently come to mind!
Yet I imagine that for large distributed teams that rely heavily on test cases – often held in a test case management tool – things get out of hand quite quickly.
I’m curious to know, what steps do you take to reorganise a messy test case library?
I have introduced the updating of regression/smoke tests to the various project teams that I work with, and gradually it is becoming the norm. The Project Managers and developers buy in to the little time it takes (if done regularly and consistently), as we are reflecting more accurately what the product is trying to achieve.
I foresee a few issues if you have multiple members updating the tests, so some careful planning and communication would be required. For example:
You all agree on what constitutes a change to the test cases so that you are all on the same page.
The test cases are kept clear so that they make sense to someone who has not worked on them.
It is planned into the project from the outset and challenged if not.
It happens after the completion of the project whilst you have the greatest understanding.
The regression pack is broken down into sections so it stays manageable as it inevitably grows, as you say @simon_tomes.
I ran into a similar problem: when working on a product that was in its early stages, there would be issues like:
A new set of test cases looking almost identical to ones written before, leading to questions like: do I write new ones? Do I update the old ones?
An old set of test cases being rendered redundant by major functional changes: sigh… do I have to write all of them again?
To overcome this, I adopted an approach where I documented key areas of the software along with a generic approach to testing them. Meanwhile, the test cases for all new work got converted into test activity charters, which are basically a free-form way of writing down your brainstorming before a test session.
This flexible way of maintaining documentation reduces the headache of a gazillion-test-case pool.
We still use spreadsheets for our regression packs. Our process is to always create a copy of the previous sheet and mark the heading “what’s included in the release” with the ticket numbers of all the bugs, improvements, and new features. Using that as a reference, we go straight to the areas that need updating. Once done, we start our regression run confident that we have the most up-to-date version of our pack.
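For anyone who wants to script that copy-and-tag step, here’s a minimal sketch using openpyxl – the workbook name, sheet title, and ticket numbers are all invented placeholders, not anything from the post above:

```python
# Sketch: copy the previous regression sheet and stamp it with the
# release's ticket numbers. File name, sheet title and tickets are
# hypothetical placeholders for whatever your pack actually uses.
from openpyxl import load_workbook

wb = load_workbook("regression_pack.xlsx")

# Copy the most recent sheet as the starting point for this release.
previous = wb.worksheets[-1]
current = wb.copy_worksheet(previous)
current.title = "release-2.4.0"

# Record what's included in the release so the team knows which
# areas of the pack need updating before the regression run starts.
tickets = ["BUG-101", "IMP-207", "FEAT-314"]
current["A1"] = "What's included in the release: " + ", ".join(tickets)

wb.save("regression_pack.xlsx")
```

Saving under a new file name instead of overwriting would keep the previous release’s pack intact, if that suits your setup better.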
I’m of the mindset that if it’s worth writing down, it’s worth automating.
That means: no spreadsheets full of test cases that need updating and organising.
If you must keep something written down, a test charter guiding human-executed exploratory testing – probing a bit deeper for smoke or regression coverage – works, IMHO, better than test cases.
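To make the “worth automating” point concrete, here’s a minimal pytest sketch of what a spreadsheet row can become – apply_discount and the example rows are hypothetical stand-ins, not anyone’s real product:

```python
# Sketch: each former spreadsheet row becomes one parametrised case.
# apply_discount and the rows below are invented for illustration.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Toy function under test, standing in for real product behaviour."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


@pytest.mark.parametrize(
    "price, percent, expected",
    [
        (100.0, 0, 100.0),   # "no discount" row from the old sheet
        (100.0, 25, 75.0),   # "standard discount" row
        (80.0, 10, 72.0),    # "rounding" row
    ],
)
def test_discount_rows(price, percent, expected):
    assert apply_discount(price, percent) == expected


def test_invalid_percent_rejected():
    # The "negative test" row: out-of-range input should be refused.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Instead of editing a sheet, keeping the pack current becomes a matter of editing the parametrise table alongside the code it tests, and the checks run every time the suite does.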
As conrad mentions, start with small ones first, or you could create a parallel project, start from scratch, and copy-paste the test cases in.
Eventually you deprecate the old version or overwrite it with the new content, depending on your setup.
To “small ones at first” I would also add “highest priority first”.
But it depends on what you are going to do and how you are refactoring it.
It’s really hard to manage test cases like this with big teams – it comes down to regular processes that are kept in check by peers.
You should definitely move from a spreadsheet to a test management tool.
However, a test management tool is not a silver bullet here either: today’s TMS products only solve this problem partially.
AI may solve this very well one day, but there are no tools on the market for this problem right now.