Managing your tests - do you do that? To what end?

Recently I’ve been asked “how do you manage your tests?”, and my answer was “we don’t”.
This led to a discussion that left me pondering:
I can see some benefit in having an amazing coverage report, and in knowing exactly which requirements we cover and which we don’t.
The problem is that maintaining such a thing is so heavy that I assume it’s not cost-effective in all but the most heavily regulated areas. And quite frankly, even when we maintained a very detailed suite in our test management tool, no one really looked at it, so it seems like a lot of wasted time.

So, how are you approaching this? Whatever method you use to manage your testing, what are you gaining from it that is worth the time invested?

7 Likes

I don’t.

I’m the sole tester, and my “test management” consists of an extensive set of regression notes kept in Wiki format mostly so I remember what needs to be cross-checked and have a reminder of the configurations I need to go over, plus whatever notes I add to the User Story or Defect while I’m testing it.

I decide what needs more detail or documentation based on the likelihood that I or someone else will need to reference it, rather than on any mandate for test documentation.

5 Likes

It depends on what you mean by “test management”…

For instance, at my current employer, I communicate test coverage with a mind map and update it on demand - the latest picture of it is on a wiki page for everybody to access. Management has liked that format a lot as a way of showing how much we cover.

To communicate which environment each test runs in, and why, I found a table better suited: tests crossed against execution areas, with a comment showing any issue affecting them, and so on.

To sync priorities, a weekly meeting and a kanban board with the tasks to be done next week have been sufficient so far.

As for the effort it takes to keep all this up to date, again it depends - if someone asks for that information every week, that person clearly benefits from it. So there’s real value in spending an hour each week updating your coverage mind map or planning next week’s tasks.

If no one other than you cares about reporting status, showing progress and things like that, they may simply not know what you can offer - so feel free to surprise them and show how organized you can be :slight_smile:

3 Likes

Currently we’re in a position I don’t like on this front. We have two test case management tools, and some tests exist only in our code, so we have no easy way of reviewing coverage without going into the code.

I want to move us to a position where tests live in one repository, organised by application and sub-functionality. This will help us when planning to test around risk, because we can see what coverage is in place for sub-sets of functionality. We can also review gaps in test coverage more easily.

In general, I think it depends on the nature of the organisation. If you work in a regulated environment, you’ll need something like QC to demonstrate coverage against requirements, with the associated results and so on.

1 Like

I feel context plays a major role in deciding where to follow a lightweight approach to tracking changes, test progress, test coverage, and so on. A few points to consider when deciding the test approach, which I believe also covers managing tests:

  • Size of project
  • Duration of project
  • Complexity of project
  • Is it a client project or an in-house project?
  • What are the expectations around test deliverables?
  • Risks: are there any communication gaps between Dev and Test due to time zones or geography? If so, how do we communicate, and how do we give devs access to our tests? Are mind maps self-explanatory enough for devs or managers to determine coverage? Session notes with lightweight graphs might work better.
  • Tool considerations, including reporting plugins

4 Likes

My industry is not regulated, but since we release embedded software that requires a service engineer visit or RTB (return to base) to update, I consider management of our tests vital. I cannot allow untested code to slip into the end product, and I need tight control of regression on updates.

Our product life spans are typically >10 years, so when I joined my current employer 6 years ago I had a bit of a hard start with our baseline tests and identifying gaps. I’m planning to make my life easier this time around, as our developers are currently working in TFS and the requirements are stored there. I will be putting tests, in whatever form (charters, session records, scripts, automated regression suites, etc.), into TFS and linking them up to requirements as we go. This will give me an indication on my dashboard of where I am up to.
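
For what it’s worth, here’s a rough sketch of what that test-to-requirement linking can look like through the TFS/Azure DevOps work item REST API. The collection URL, work item IDs, API version, and link type below are illustrative assumptions, not a description of my actual setup:

```python
# Hypothetical sketch: link a test work item to a requirement in TFS
# via the work item REST API. The collection URL, IDs, API version and
# credentials are all made up - check your own server's documentation.
import requests

BASE = "https://tfs.example.com/DefaultCollection"  # assumed collection URL
TEST_ID, REQUIREMENT_ID = 123, 456                  # assumed work item IDs

patch = [{
    "op": "add",
    "path": "/relations/-",
    "value": {
        # Assumed "Tested By" link type from requirement to test
        "rel": "Microsoft.VSTS.Common.TestedBy-Forward",
        "url": f"{BASE}/_apis/wit/workItems/{TEST_ID}",
    },
}]

resp = requests.patch(
    f"{BASE}/_apis/wit/workitems/{REQUIREMENT_ID}?api-version=4.1",
    json=patch,
    headers={"Content-Type": "application/json-patch+json"},
    auth=("user", "personal-access-token"),          # assumed auth scheme
)
resp.raise_for_status()
```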

What am I gaining? A quick picture of where I do and don’t have tests. Bugs are also tracked in TFS for the project, so I can look at return rates on tests, etc. I still need to invest my time as a lead in assessing the quality of what we do - test management tools can be used really badly to spread misinformation in this way (especially by people who don’t understand testing; the number of times I have to have the “high coverage != high quality” discussion is ridiculous). But overall I think proper management of the tests aids my understanding of where we are, so we can be more effective headlights for the project.

Hi

At my company, where we develop real-time market data feed handlers, we use TestLink as the test manager.
We have around 1,000 test cases, of which 80% are generic and 20% specific. The specific tests adapt the expected results to the product under test.
About 800 are automated.
Specific tests are often reworked because our software changes, mostly due to the incoming feeds we process from the exchanges. We don’t change the generic tests very much, as our automation tool overrides a generic test with the specific one when it applies.
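
As a minimal sketch of that override idea (the names and data here are hypothetical, not our actual tool):

```python
# Hypothetical sketch: resolve the effective suite by letting
# product-specific test cases override generic ones with the same ID.
def resolve_suite(generic, specific):
    suite = dict(generic)   # start from the generic baseline
    suite.update(specific)  # specific cases win where IDs collide
    return suite

generic = {"TC-001": "default feed parsing", "TC-002": "default failover"}
specific = {"TC-002": "exchange-specific failover behaviour"}

print(resolve_suite(generic, specific))
# {'TC-001': 'default feed parsing',
#  'TC-002': 'exchange-specific failover behaviour'}
```
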
But generally speaking, in my opinion, piling up useless test cases should be avoided.

I generally manage my test cases in Google Docs. I create test cases for each feature set, each component set, each user flow, and regression tests for deployments. The regression test cases are the ones most people tend to be interested in, and I link those to the release document when we are looking to deploy to production. This gives a clear picture of any bugs found, and of the decisions and timelines for waiting on those to be finished, for release management purposes. I create a new smoke test document each week and archive the previous one.

As for unit and component tests, most CI interfaces let you pull code-coverage metrics from the build. So you have a developer’s view of stability, a QA engineer’s view of stability, and a release manager’s view of stability.
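
As an illustration, here’s a minimal sketch of pulling a headline number out of a Cobertura-style coverage.xml, a format many CI tools produce and consume (the file name and exact format your CI emits are assumptions):

```python
# Hypothetical sketch: read the overall line coverage from a
# Cobertura-style coverage.xml produced by a CI build.
import xml.etree.ElementTree as ET

def line_coverage(path="coverage.xml"):
    root = ET.parse(path).getroot()
    # Cobertura reports line coverage as a 0..1 "line-rate"
    # attribute on the root <coverage> element.
    return float(root.get("line-rate")) * 100

print(f"Line coverage: {line_coverage():.1f}%")
```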

My automation tests are organized by category and run in parallel, which outputs a JSON file that I run through a templating engine to make it look nice, so we have an account of which tests are running and passing on each build for UI automation.
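
Roughly, that JSON-to-report step looks like the sketch below, assuming Jinja2 as the templating engine and a made-up results schema (mine differs):

```python
# Hypothetical sketch: render parallel-run JSON results into an HTML
# summary with Jinja2. The results schema here is invented.
import json
from jinja2 import Template

TEMPLATE = Template("""
<h1>UI automation results</h1>
<ul>
{% for t in results %}
  <li>{{ t.category }} / {{ t.name }}: {{ "PASS" if t.passed else "FAIL" }}</li>
{% endfor %}
</ul>
""")

with open("results.json") as f:       # output of the parallel run
    results = json.load(f)

with open("report.html", "w") as f:
    f.write(TEMPLATE.render(results=results))
```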

So I think the answer is: unless you have thousands of tests, most test management software is about communication, and that can be handled in various ways, as seen from all the responses above.

2 Likes

Me, no; higher-up, yes.
Manual testing - definitely.
I’m working (not for much longer) with loads of tests (>5,000) and loads of people running them independently, spread across teams; at that scale you need a management system that is probably one step up from Excel. The flaw in any tool bigger than a Google doc is that it will break every time someone updates or upgrades or integrates “one more new feature”…
So I prefer individual teams not to have any automation test “management”, because it would cloud the conversation around flaky tests, removing stale tests that never find bugs, tests that take too long to set up, or tests that take too long to run for their value - and it invites using testing as a stick. Something key for me, however, is having a smaller number of good tests that give us confidence today, plus some older automated tests that we can try to dust off once in a while. All this relies on not changing the test framework, of course, hence my preference for going lightweight. When teams have to justify their component coverage with a test-case run/not-run metric, that metric becomes a lie. Code-coverage tools, however, don’t.

But at a high level, you have to have a really long checklist showing what was run, and a copy of that list for every release - regardless of how often you release, and regardless of whether you are regulated or not.
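
That per-release copy can be as simple as snapshotting the checklist file at release time - a trivial sketch, with hypothetical file names:

```python
# Hypothetical sketch: keep a frozen copy of the "what was run"
# checklist for every release shipped.
import shutil
from pathlib import Path

def snapshot_checklist(checklist="checklist.md", release="1.4.2"):
    archive = Path("release-checklists")
    archive.mkdir(exist_ok=True)
    shutil.copy(checklist, archive / f"checklist-{release}.md")

snapshot_checklist()
```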

1 Like

Lots of talk here about managing test cases, but how do you guys manage your test charters and exploratory testing? That’s an important part of testing. I’d love to hear how you all manage that in your orgs too.

4 Likes

I write my own in OneNote. I have a template I’ve written that lets me indicate what kind of exploring I’m doing (recon/survey, general exploration, deep coverage, bug fix verification), the start date and time, a description (the charter/mission), and any linked information (the case details in our case tracking software, tools I used, model artifacts I employed, the name of a screen recording video file - whatever I think will be useful because it’s unusual or unique to this session). Then I have a notes section that includes my freeform test notes, and two areas for bugs and issues in case I don’t want to raise them straight away - so that I don’t forget to. Then a section for environmental information such as the version number, database file, and particular screens/windows I was testing. Then the end date/time, a completion checklist, and a space for the name of the person who debriefed the session (if any).

These are stored as pages in a OneNote section, the completed ones going into a “Complete” subgroup so I know which ones are left to do. I export them as PDFs and attach them to the case. I have one section for sessions and one for threads, and yet another for project-related information. These live in a section group named after that particular project.

I only write these if I think the value of writing stuff down outweighs the cost. If I want to do more freeform, less formalised work (heavily defocused stuff, time-based stuff that doesn’t lend itself to note taking) then I’ll add a screen recording and write little.

OneNote has a great screen clipper, and it’s great for pasting screenshots, creating tables, pasting Excel ranges, and whatnot. I use the tag shortcuts for “To Do” to create a list of tasks that keep me on point for the charter (it keeps my test framing in check), and I have custom tags for “important information”, “question I don’t have the answer to yet” and “I think I found a bug”.

If I ever have written test cases, they’ll be in these notes or in an Excel file attached to them.

7 Likes

I absolutely agree. How would you get along in a project with 50 testers, lasting more than a month, without any tool to support the organization and tracking of tests? I would like to see that happen, as my clients keep saying that they do not want to spend money on tools.

Obviously there are different project settings. But sometimes the management tools offer capabilities that simply save you as the person overseeing a large project. Some of the tools can also give you a lot of information that helps organize your work. I guess everybody would welcome knowing which tests to execute after a code change or a change request; especially in project settings with very many interdependencies, this really helps. And you cannot always just reorganize the code to have it organized in small chunks…

3 Likes

I was going through my Trello board and found this blog post from a while ago

I thought it might be a good one to link to this thread :smiley:

2 Likes

We developed a system for our own use at Gera-IT and then made a product of it.
Previously we used a Google spreadsheet, but that’s not a suitable way to manage everything in one place with multiple people.
So then we developed TestCaseLab.
It allows us to organize test cases, sort them into different categories, gather them into plans, and build clear-cut test runs for QA engineers to follow. So, it’s a kind of diary for QAs. The system also pushes problematic cases directly to bug trackers (e.g. JIRA), where developers can view an indicated issue and proceed to resolving it.

1 Like

Roman, were you on the Joe Colantonio podcast last year? The app sounded quite impressive; was that you on TestTalks?

“Managing your tests” is a big term in itself. We have to be precise first about what we are going to manage in our tests or testing. Is this about complete projects? Is this about test cases?

A test project, whether in-house or outsourced to a third party, should have a defined duration. Within that duration, management should decide the team size, working hours, planning meetings (per sprint or agile methodology), progress reports, issue discussions, and daily or weekly meetings. Go/no-go decisions and final deployment are high-level test management tasks that most software testing companies follow for the success of their projects.

For test cases, there are test case management tools like TestRail and Zephyr. These tools integrate easily with other management tools like Jira for reporting, which makes analysis easy. Test plans are created for what we need to achieve in testing. Run defect testing, defect regression, integration testing, and smoke testing in sequence, and update the results with the total hours spent on each test case; this makes future test estimation easier. After every successful test plan execution, mark it as complete.

In any testing task, coordination between Dev and QA plays a major role. It should be crisp and to the point; discussions between the two teams should head off any release-related issues later on.

These are just a few management-related things that we should follow strictly for the overall success of the project. I do not claim this is complete, but if it serves the purpose, I’d be delighted.