Test Management Tools


(Gary) #1

Hi,

I’m looking at implementing a test management tool in my workplace.

The problem is that I’m currently the only tester and we have a very small dev team (which will probably grow over the next 12 months). I’m not sure what budget I have for testing, and there are so many tools on the market.

I want a tool that can manage my test cases but can also easily produce reports on testing progress and defects raised. Also, the potential to link into other applications for future automation testing would be advantageous.

Any advice anyone could offer would be great.


(Olaf) #2

Before answering this, there are a few more things we’d ideally need to know, because these answers will dictate the direction of the suggestions you’re likely to get.

What tools are your developers using? What other software is your company using?


(nikhila) #3

Hey, can you check this one: https://anyaut.com/


(Chris) #4

One option would be to dump your test cases; then you don’t have to spend time, money and effort managing them. Writing things down is a significant cost that you have to justify. Written test cases often become expensive, unwieldy and difficult to use as a communication tool (say you get another tester: have you written all of the cases so that anyone could follow them?).

An alternative approach is charters: decide what needs to be covered, write a mission, then explore the product with that mission in mind. You could keep risk catalogues and areas of coverage in lists on a wiki page or in OneNote, then use them to guide more freeform exploration. This is cheaper, more fun and more powerful, and it’s just as structured as test cases; the structure simply has to live in your methodology as a tester rather than in test artefacts.

You also pseudo-repeat yourself less frequently, which makes you more likely to find important problems. You can word the charters around the purpose of the testing, rather than follow steps that aren’t associated with any purpose, and catalogue them for regression testing. That way you know your testing is consistently valuable, and it becomes a lot easier to review any charters you may repeat five years later, as low-value testing is easier to spot.

Charters are also less tightly coupled to the program, so you get flexibility and lower maintenance costs, because your charters won’t expire the way test cases can. All the management of charters can be pretty low-tech, and won’t tie you into a tool that goes on to shape the way you work (so that your process fits the way the tool needs you to work). And because it’s more fun and engaging, you’ll find yourself doing more work, if you’re anything like me.
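To make that concrete, here is an illustrative session charter in the SBTM style. The product areas, timing and risk-catalogue entries are invented for the example; shape yours around your own product:

```text
CHARTER:  Explore the checkout flow with invalid and expired payment
          details to discover how failures are handled and reported.
AREAS:    payment service, order confirmation, error logging
DURATION: short (60 minutes)
NOTES:    see risk catalogue entries for timeouts and duplicate submission
```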

Reports can come from the notes you create while testing, and the bugs you raise in your bug tracking tool. Progress can be reported using something like this: http://www.satisfice.com/presentations/dashboard.pdf.

If you’re worried about auditing, then something like SBTM (session-based test management) to create a record of testing is an approach I’ve used.

Just some ideas that might solve the problem without the headaches of finding and implementing a TCM tool. I moved away from formal prescription to charters written in Notepad and OneNote when I was the only tester at a company, and I never looked back.


(Lee) #5

It depends on what you need. If you’re looking for a test management/reporting tool that plays well with other software, I suggest you try the demos for these:

  • Plutora Test (the RTM is great for visibility)
  • Selenium (very popular automated testing tool)
  • qTest
  • Zephyr

(Andrew) #6

We switched from really detailed test cases in Excel that defined HOW we were going to test something, to really high-level WHAT test cases that we punch into a TCM (TestRail). It took a lot of the repetition out of writing test cases, solved the problem of having to maintain a monolithic suite of cases, and kept some of our less experienced testers from going into the weeds writing endless documentation with little value.

We also took the approach of looking at testing as a team and having the dev and test team meet to define what the expectation is for testing, and used that time to write everything out.


(gordon) #7

Sorry to go a bit off topic, but in that transition did you ever have to deal with a situation where an issue went live that would have been caught by the old “how” test scripts, but ended up not being tested because the tester didn’t think to test it from the “what” test case?


(Chris) #8

While I haven’t done that myself, I’ve had to mitigate those risks for companies before. The conduit between formalism and informalism is purpose.

Translate your test cases into purpose - what risks are these test cases trying to mitigate? In some cases you’ll find they have no discernible purpose, in some cases you’ll find they go into automation very easily, in some cases you’ll find that a whole raft of tests is trying to test for one thing.

When you have the purpose of the test cases you can craft a less-formal solution that covers that purpose. This could be a list of risks, a list of product areas or some other coverage map, a checklist of important things to look for, and so on.

Then you have the purpose of the test PLUS the power of exploration.

If you have to argue the case for deformalisation, there are many advantages. Firstly, instead of thinking of issues you might no longer catch, think of all the issues you could be catching that weren’t in the test cases! You’re not catching anything outside of the test cases (if the test cases are strictly adhered to), which means you’re doing a similar thing over and over, so you won’t explore new risks or find new problems.

When you provide a purpose-driven interface for testers, they tend to (in my experience) work better, faster, with more self-confidence, and with a sense of direction, because they understand the purpose of their work. You also have lower maintenance costs, because formally describing a human process is so difficult that whenever your product or project changes you have to re-describe that process, or you end up with test cases of very little value.

If you have a purpose-driven process, then you simply adjust to whatever purpose you need, and when that purpose seems silly, that is an indicator that you need to review your checklists and charters. Try looking through a test case suite and deciding on the value of your cases, and you’ll soon see the advantage!

So yes, you could possibly miss something that was in a test case. But you could do that already. Test cases are boring and are poor tools for communication, often written by non-experts (and test cases are really hard to write. It’s hard enough to instruct someone on how to do something in person!). This means that 10 different testers will interpret and process that test case in 10 different ways, which means 10 different kinds of coverage at 10 different levels. Some will skip steps, some will include extra steps, some will avoid what they don’t understand, and so on.

Think of test cases as searching a cave inch-by-inch with a high powered short-range flashlight, and deformalised testing as searching a cave room-by-room with a large long-range lantern. Now imagine you gave the people with lanterns a map showing key areas to search (risk catalogues, checklists, etc).


(Andrew) #9

Chris alluded to this, but in transitioning away from heavy documentation and test cases you will find you are testing more valuable scenarios, giving the tester more flexibility, and greatly increasing test coverage overall.

We are still recording what will be tested, just not how to test it. Ultimately we are creating a checklist of things to verify, occasionally adding assistive notes for harder-to-run workflows.

Again, we are cutting the heavy documentation, replacing it with a casual conversation and a risk-based checklist. We have seen a drastic increase in the overall quality, the speed we are able to get things turned around, and greater team ownership of any defects found. We often find issues during the conversation and get them fixed before testing even begins; these are issues that would have been found much later in our cycle.

Of course there is no guarantee that all defects will be found or nothing will get past testing, but nothing can provide that guarantee. What we can guarantee is having a process that produces great quality but can also work in a reactive nature. When there is an issue found we can work quickly to get a fix coded and tested with a high amount of confidence that what we are releasing is quality.


(Luka) #10

I am using TestRail and must say that I like it!
http://www.gurock.com/testrail/

You can have multiple projects on one account; create multiple test cases that you can easily and nicely organize into folders; write test cases as steps; create milestones for each release; and create test plans and test runs for each release.
The things I like most: you can have multiple test suites for each project, and TestRail has many integration possibilities.
You can integrate it with JIRA to link or create tasks in JIRA directly, and there is a TestRail API so you can integrate it with automated tests to send test results directly to TestRail.
Finally, it can generate many different reports, which you can either embed elsewhere as an iframe or download as a PDF to send to your client or dev team!

You do need to pay a monthly fee, though there is also a 30-day free trial so you can try it out.
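Sending automated test results through the TestRail API can be sketched in a few lines. This is a hypothetical example: the server URL, run and case IDs, user and API key are placeholders for your own instance’s values, and the status IDs assume TestRail’s defaults (1 = Passed, 5 = Failed).

```python
# Hypothetical sketch: pushing an automated test result to TestRail's
# add_result_for_case endpoint. All IDs, URLs and credentials are placeholders.
import base64
import json
import urllib.request

# TestRail's default status IDs (configurable per installation)
PASSED, FAILED = 1, 5

def build_result_request(base_url, run_id, case_id, passed, comment=""):
    """Construct the URL and JSON body for POST add_result_for_case."""
    url = (f"{base_url}/index.php?/api/v2/"
           f"add_result_for_case/{run_id}/{case_id}")
    body = json.dumps({
        "status_id": PASSED if passed else FAILED,
        "comment": comment,
    }).encode("utf-8")
    return url, body

def send_result(base_url, user, api_key, run_id, case_id, passed, comment=""):
    """Send the result (network call; TestRail uses basic auth with an API key)."""
    url, body = build_result_request(base_url, run_id, case_id, passed, comment)
    token = base64.b64encode(f"{user}:{api_key}".encode()).decode()
    req = urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

You would call `send_result` from your automation framework’s teardown or reporting hook, once per executed case.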


(Nichole) #11

Does TestRail integrate with TFS? The company I’m at uses a very old version of TFS and won’t be moving any time soon.
I need a test tool to manage the business testing. Those of us in the project team use a wiki to track how we test and the results, but that won’t work with the end users.


(Luke) #12

TestRail is capable of some integration with TFS, but not full integration.
TestRail allows you to configure reference URLs so you can easily link failed test cases to bugs in TFS, or link test cases to stories and tasks.
But it’s the most basic, somewhat manual kind of linking, not true integration.
You are better off using Test Manager with TFS.
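For illustration, those reference and defect links in TestRail are just URL templates with an `%id%` placeholder that TestRail substitutes with whatever ID you enter against a test or result. The TFS server path below is invented; point yours at your own work-item URL:

```text
Defect View Url: https://tfs.example.com/DefaultCollection/MyProject/_workitems/edit/%id%
```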


(Nichole) #13

Is Test Manager not very clunky?
We currently use MTM and I find it a difficult tool to use, as it’s very step-by-step.


(Luke) #14

MTM is very clunky, especially the older versions, but the TFS integration is the best around.
If you are lucky enough to be using the latest VSTS, the Test Hub on the site is actually a pleasure to use. It has almost all the MTM functionality but works much more smoothly. MTM is being deprecated in favour of the Test Hub on the TFS/VSTS dashboard.
I hated MTM when I arrived at my current organisation. In my previous two roles I used TestRail; I even did the initial setup and integration when introducing it to my previous company.
I wanted to do the same when I arrived at my current job, but in time I realised the power, functionality and integration of Test Manager (the Test Hub; I don’t use the MTM Windows client) was much better, because the team works on a full Microsoft VSTS stack. If you take the time to learn how to use it right, it’s a pleasure.
That said, I do know the older on-premises TFS and the Windows MTM client can be a NIGHTMARE. I am lucky to be using a newer version.


(Magnus) #15

It depends on what type of testing you want to do: development-centric or validation-centric? For validation, TestCenter is a good choice. It’s easy, flexible and efficient.


testcenter.one

(Rosie) #16

We have put together a list of Test Management Type tools here - https://dojo.ministryoftesting.com/dojo/lessons/project-and-test-management-tools

It might help you in your search.