I would like to start an open discussion: what do you think is the best test case management tool you have used, and why?
I've used TestRail & Tuskr for many projects recently, but as a QA Manager they didn't meet my requirements, nor the project's as a whole, for the reasons below, and I was forced to come up with my own solutions by exporting raw data into Excel:
No test case numbers or reporting of test results per feature, only per suite (a folder of many tests)
No reporting on the type of testing/coverage conducted, or reporting using any custom fields
No test case stage pipeline or filtering: Draft > Sign-off > Testable > Automated
With regression testing (daily BVT or smoke regression packs), there are a lot of repeated tests. There was no report to tell you how many test cases were executed at least once (i.e. actual coverage), excluding repeats.
Does a test case management tool exist with the functionality to meet these requirements?
You can manage requirements and test cases, both can be organised in folders or test suites.
TestLink automatically assigns a unique internal number to test cases, which can be used for cross-references and for requirement coverage. It has great options for reporting, but I'm not sure about generating reports based on custom fields.
Test cases can have a status, like draft or final.
You can have a separate test plan for the regression testing and then create reports from this test plan.
The reports didn't include the fact that some test cases were executed more than once. That was great for the overall coverage, but sometimes we wanted an overview of how many test case executions were done altogether for a project, and it was really difficult to get that number.
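Since a few of us have ended up exporting raw run data to get that number, here is a minimal sketch of the kind of tally I mean. It assumes a hypothetical CSV export with one row per execution and a `test_case_id` column; your tool's actual export format will differ, so adjust accordingly:

```python
import csv
from collections import Counter

def execution_summary(csv_path):
    """Count total executions vs. unique test cases from a raw run export.

    Assumes a hypothetical export with one row per execution and a
    'test_case_id' column; adapt the column name to your tool's export.
    """
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["test_case_id"]] += 1
    total = sum(counts.values())   # every execution, repeats included
    unique = len(counts)           # coverage: cases run at least once
    repeats = total - unique       # executions beyond each case's first run
    return {"total": total, "unique": unique, "repeats": repeats}
```

That gives both views in one pass: `total` for overall execution effort, `unique` for coverage excluding repeats.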
TestLink is open source, but it seems that it's no longer developed and maintained.
We've been looking for alternatives, and among others we came across Klaros and TestMonitor, which seemed quite useful. But apart from a few experiments with their free trials, I don't have any practical experience with them.
We are about to trial a Jira plugin by a popular vendor, once the system team can get it installed. And this question is worrying me a lot too, @captainjonesy2. I suspect these tools suffer from the same problem all software suffers from: they solve only one perspective of the problem. Often making managers happy becomes the goal, and things like trending the pass/fail rate of a single "feature" become impossible unless you organise your test suites accordingly. Basically, you have to have good control over how features map, or how functionality/architecture maps, into your test suites; you can never have both, it's one or the other in my experience. I personally prefer a component mapping, but a "feature" mapping appeals to me more and more, since it makes deprecating a feature easier: you can just kill it off later. But it does mean I will need to plan a lot better.
Being able to run one test many times in different environments is a test-reporting vector I am keen to get working, to help me find those flaky environments. So that's a requirement for me too, @monija. I was also disappointed that TestLink eventually went out of support.
I'm dreading the trial start day because I'll need to temper my expectations. The demos they showed us looked great, but until you try things yourself, it's really hard to know if it will help you as much as you want. Perhaps the solution is to have lower expectations. My end goal is to be able to find flaky tests quickly and easily by running historical queries. So sometimes just having a decent graphing front-end is what some of us really want.
@conrad.connected - Yes, you're right, you can't have everything. Most TC tools I've seen are built for an audience of testers, for TC creation & execution, not for QA Managers. I'm surprised that no one out there has designed a tool for both types of users/use cases, covering both execution & reporting.
@monija - Thanks for the suggestions, very helpful. I will certainly check out TestLink, Klaros & TestMonitor.
Another requirement I have, which has just occurred to me: the ability to design tests without creating duplicates. On the projects I've been on, I've had to create a process to ensure testers are not creating duplicates, which leads to wasted effort. Is there a tool with a feature that would help with this?
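I haven't seen a tool that flags these automatically, but as a stopgap outside any tool, here is a rough sketch of the kind of check I mean, using Python's difflib to flag near-identical test case titles. The 0.85 threshold is an arbitrary assumption you would tune against your own repository:

```python
from difflib import SequenceMatcher

def find_possible_dupes(titles, threshold=0.85):
    """Flag pairs of test case titles that look like duplicates.

    threshold is a similarity ratio (0..1); 0.85 is an arbitrary
    starting point - tune it against your own test repository.
    """
    dupes = []
    for i, a in enumerate(titles):
        for b in titles[i + 1:]:
            ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if ratio >= threshold:
                dupes.append((a, b, round(ratio, 2)))
    return dupes
```

Run over an exported list of titles before a review session, it at least surfaces the obvious near-duplicates for a human to judge.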
I'm currently looking at Xray & Zephyr, which so far seem to meet my earlier requirements.
@conrad.connected @monija - How do you go about deciding on the right TC tool for your project? Do you have a requirements checklist, and see which tool meets the most of your requirements?
I've been prompted to write up my requirements on our internal wiki. I liked some things that TestLink did give us, but overall it was a very "expensive" tool maintenance-wise, and the usability was just terrible. I will update when I have a moment on what requirement goals I came up with. Busy pushing a release to the Apple App Store today, so a tad busy.
In my old job, TestLink was already there when I joined, so I wasn't involved in the decision process.
In my new job, there is no tool at all. Requirements are not explicitly documented, which means I search through dozens of tickets to find the customer's extension requests, bugs and issues. Like a hunter-gatherer.
But I'd like to have a place where I can store and organise my harvest and haul, and I need to make sure that I don't miss anything.
So I made a list of must-haves and nice-to-haves based on the things I knew. But in the meantime I realised that my list was too oriented towards the workflows of my previous job. Of course, everything is handled differently in my new team, so I gave myself some more time to learn and find out what will be needed here.
This was my original list of minimum must-haves:
Manage product requirements
Manage test cases (with steps)
Assign test cases to requirements
Organise test cases in test suites
Document test executions
Have a field for an issue-tracker-ID
Maybe later there would be more features needed:
Assign test cases to test environments
Assign test cases (or even test case versions) to product versions
Have a list of automated test cases and somehow assign them to the product requirements to make the coverage visible.
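That last coverage point can be prototyped before any tool is chosen. A minimal sketch, assuming you can dump requirements and automated tests into two plain dictionaries (all IDs and structures here are hypothetical):

```python
def requirement_coverage(requirements, automated_tests):
    """Show which requirements are covered by at least one automated test.

    requirements: dict of requirement_id -> description
    automated_tests: dict of test_id -> set of requirement_ids it covers
    (both shapes are assumptions - model them after your own data)
    """
    covered = set()
    for req_ids in automated_tests.values():
        covered.update(req_ids)
    report = {req_id: req_id in covered for req_id in requirements}
    percent = 100 * sum(report.values()) / len(report) if report else 0.0
    return report, percent
```

Even as a throwaway script over a spreadsheet export, it makes the automated-coverage gap visible per requirement.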
My current view on this is: maybe I don't even need very elaborate test cases with all those test steps and expectations, but should instead focus on proper documentation of requirements in the first place.
Feel free to give our test management tool Testmo a try. We have built-in support for things like workflow states (e.g. Draft → Active → Retired), automation linking (and filtering), coverage status columns (so you can see the latest status of all your test cases across all runs) and similar features that might fit what you are looking for.
Hey Chris.
If you're looking at Xray & Zephyr, I hope they meet your needs. The one reason we are moving away from Xray is their licensing (Zephyr is the same): they license on the number of Atlassian/Jira users as opposed to the number of QA/Test users. We have a team of 4 QA, yet over 300 Jira users, so the price for Xray is extortionate! If they could address this, we would stick with it and its peculiarities.
We were looking at "Test-Gear.io", as seen at TestBash recently, but they are no longer trading!
So "Testmo" & "Qase" are currently being considered for suitability.
We are moving away from TestRail ourselves, but I can add two tools that we have been looking at that have not been mentioned yet. One was PractiTest, which seemed quite robust to me, and the other was Kualitee, which is quite affordable. Unfortunately, I can't tell you if they meet all your requirements because I don't know them that well.
Having been a user of various tools in the past (one being TestRail, although I've not come across Tuskr), I'm currently using TestLodge, which works well for my current contract.
It doesn't have as many features as TestRail, but it is very stable, and what I've found most helpful is the issue-tracking integrations, which work with many different bug trackers (and will automate entering the issue-tracker IDs).
It has the ability to create and associate requirements with test cases; test steps can be numbered using text formatting within a test case, and test cases are then stored in test suites. Test runs will then allow you to document and record your test executions.
I'm not sure about your additional feature list, but your must-haves are similar to what I use, so it would certainly be a contender if you are looking for a less complex tool.
We're seeing a lot of movement toward PractiTest, to the point that we have partnered with them and also created an integration to PractiTest from our test automation tool, Alchemy Testing.
I personally have only used PractiTest a little, but from what I have seen, I think it is great and offers many features. Certainly worth checking out.
In my last company, we used Zephyr Squad in Jira for test case management. Overall, the experience was good.
The user interface was simple, and for every sprint running in Jira we could run parallel concurrent sprints in Zephyr; we could also add a regression suite for a particular sprint, or a master regression suite.
I'm not sure how much it cost the organisation, but it was really helpful for managing test cases.
One of the key benefits of Zephyr Squad was that the dashboard was integrated into Jira; once anyone navigated to the dashboard, they could see all the numbers, like the number of test cases executed, the number of test cases not run, etc. Apart from that, we could also easily visualise the regression bugs, reopened bugs, the total number of bugs, etc. in graphs, which were easy to understand as well as to share with stakeholders.
So overall, Zephyr Squad met the requirements of our QA team, and we were okay with its features and functionality.
The first point is overcome by keeping the suites feature-focused.
I use Azure DevOps, and for the lifecycle we can add custom values in a "State" field.
I like ADO, with the exception of exploratory testing - it's clearly designed more around "scripts".
There are some things lacking there, for example the history of test case results (e.g. if you wanted to use data to drive some priorities, such as tests that have a history of failing).