Documenting Test Cases in JIRA

I am currently building a test team in a fairly young company. We are using JIRA, but at this time we have no test management system. One is in the pipeline, but it is not likely to be online until the end of the year (3-5 months away).
To document our test cases and test plans, we have been attaching them in comments. This was fine when we had few, less experienced testers, but now that we are bringing more experienced testers in, they are adding a large number of test cases to the tickets (which is good). This is making the comments area very busy, and we are finding that we miss useful information buried in the comments from the PO or devs. After a bit of thinking, we have a couple of options:

  • Create a sub-task on the ticket and add the test cases there. I am against this, as it is another step in the process and things can get lost or messy very quickly
  • Add the test cases to a spreadsheet and attach the spreadsheet to the ticket
  • Add a separate activity tab in JIRA for testing. This would require DevOps work and could take just as long as the test management system, as they are busy

Through gritted teeth, the spreadsheet seems like the best idea at the moment. Has anyone else got any good stop-gap ideas for the next couple of months while we get our test management system up and running?


Hi Samuel,

I used a separate JIRA issue type for that, and created a dedicated workflow for it with states such as Passed and Failed.
If a test fails, you can link that test to the issue item, so a developer can quickly check which tests have been run and which step failed.
You can also set a priority for each test case and create a dashboard with status information: which tests failed, how much is left to finish the test cycle, and so on.
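To make the dashboard idea above concrete, saved JQL filters can feed dashboard gadgets. A rough sketch, assuming a custom issue type named "Test Case" and the Passed/Failed statuses described above (names are illustrative and would need to match your own workflow):

```
# Failed test cases in the current sprint
issuetype = "Test Case" AND status = Failed AND sprint in openSprints()

# Remaining work in the test cycle, highest priority first
issuetype = "Test Case" AND status NOT IN (Passed, Failed) ORDER BY priority DESC
```

Saving each query as a filter and pointing a Filter Results or Pie Chart gadget at it gives the at-a-glance cycle status described above.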

Hi Samuel,
I worked for a few years within a very similar process, and I have some observations:

  • Jira is a powerful tool for tracking both issues and test cases, but we found that the Zephyr plugin for Jira saved a lot of time, made test cycle progress visible, and made status reporting straightforward.
  • Every “development feature” item got a testing counterpart in Jira. These testing tasks had subtasks for investigation, documentation, manual testing, design of automated tests, and updating the project test pipeline.

Have you considered documenting your test cases in Confluence and linking them to tasks in JIRA?

Hi Sam,

On one project, we started with JIRA plus the Zephyr plugin. However, that didn't work so well (this was 7 years ago), so we switched to JIRA with an ALM sync.

More recently, I have used JIRA with the XRAY plugin, which worked well. I have also heard that the improved Zephyr plugin works well too, so it may be worth exploring the plugin options. Have you considered how you might transfer those spreadsheet comments and artefacts into the new tool in 3-5 months' time for traceability?

We are using the XRAY plugin as well, and we then generate a matrix of stories/tasks and their linked test cases via the REST API. This gives us a good overview of what is missing and where we already have good coverage.
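As a rough illustration of the matrix idea above: in practice the data would come from Jira's REST search endpoint (`/rest/api/2/search`) and the issue links the plugin maintains, but the shaping step can be sketched on its own. The field names and issue keys below are made up for the example.

```python
# Sketch: build a story-to-test-case coverage matrix from issue-link data.
# In a real setup, `issues` would be assembled from the Jira REST API
# (/rest/api/2/search); here it is a simplified, hypothetical structure.

def coverage_matrix(issues):
    """Map each story key to the sorted list of test-case keys linked to it.

    Stories with no linked tests end up with an empty list, which is the
    "missing coverage" signal the matrix is meant to surface.
    """
    return {issue["key"]: sorted(issue.get("tests", [])) for issue in issues}

# Example with made-up issue keys:
issues = [
    {"key": "PROJ-101", "tests": ["TEST-7", "TEST-3"]},
    {"key": "PROJ-102", "tests": []},  # no coverage yet
    {"key": "PROJ-103", "tests": ["TEST-9"]},
]

matrix = coverage_matrix(issues)
uncovered = [key for key, tests in matrix.items() if not tests]
print(uncovered)  # stories that still need test cases
```

The uncovered list is what makes the matrix actionable: it is the set of stories to chase before the end of a test cycle.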

Hi Sam,

To organize this, create test cases as separate stories in JIRA (with test steps as subtasks). Then link the ticket ID to each respective test case as 'Requirements'. This gives you a structure where, once you open the requirement story, you see all its related test scripts as links.

Some good suggestions here @samwebb.

I’ve used Confluence with links to Jira and trialled Zephyr; I would recommend that approach. Since you’re looking to implement a test management system in a few months, it might be worth deciding now which system you are going to use and biting the bullet a little earlier than planned…

Just don’t go down the spreadsheet route; you will regret it. Spreadsheets soon get out of date no matter how good your intentions, and version controlling them just adds overhead. Only attach immutable objects, and use links to a “living document” for everything else.


Has anyone else got any good stop-gap ideas for the next couple of months while we get our test management system up and running?

What is the purpose, in your context, of adding test cases to the Jira system?
In my case, as a lone tester, test reporting happens in whatever way works best with the developer or the product manager. They don't need documentation to read, and they avoid reading anything longer than 10 lines.
The general purposes of test reporting, as I see them, are to:

  • inform the direct manager of the feature's progress and give an evaluation of its quality;
  • provide grounds for evaluating the decision to release the feature in its current state;
  • inform the developers about the bugs they introduced so that they can fix them as quickly as possible, avoiding process overload, annoying situations, and arguments;
  • communicate as efficiently as possible, so that the part of your message that matters to other people is understood.

Examples of what I might be doing:

  • Get the product manager in a room or go to her desk. Use the time to describe the status of the product: the problems I've found, how I know that (what general test ideas I went through), and what I couldn't test and why. Usually I get around 10 minutes, which includes discussion and brainstorming on how to fix the problems, impact analysis, and the priority for fixing them.
  • With developers it’s different. I go to their desk and ask for their time. I present the issues that I feel are big ones and try to convince them to fix them. The smaller issues go into bug tickets or stories, with or without a discussion with the PM. Sometimes the discussion happens after the ticket is created.

If I were in a team and, instead of reporting just to the PM or devs, had to report to a test manager, only a small part would change.
I’d take the test manager aside and do a debriefing of about 5 minutes after each day, feature tested, or testing session, depending on the duration of the testing. The debriefing would contain the same information I’d give the PM: how good the product is, how I know, and what's missing, what I couldn't test, and which areas are blocking or slowing me.


Thanks for all the replies!

We are going with the approach of creating a custom sub-task type called QA Test Plan, which has its own states, icon, and workflow, independent of the sprint tickets. Something along the lines of:
Open -> QA Groom -> Testing In Progress -> Passed, Failed or Blocked
with Passed being the finished state.
The test cases are then added to the description of the QA Test Plan issue, and emojis are used to clearly show which test cases passed and failed.

I had a trial set up on a sandbox account and everyone seemed to like it, so we will look to put it into action in the upcoming sprint.