How do you document your functional test cases?

This seems like a vibrant community I stumbled across, so I figured I could repost a thread I posted in another QA forum a long time back for additional input:

How do you structure and document your test cases for functional testing? Just wanted to get to know how QA people in the field generally do this.

Let me know if this is better posted in the lobby/general discussion area.

Here’s how we do it:

We start with a high-level feature area to test. We're in the telecom field, so take SIP/VoIP phones as an example. We would then document a set of test cases for that area, effectively a test plan, or more correctly, a set of test cases for SIP phones. We document our tests in Excel spreadsheets. Personally, I prefer Word documents with tables over Excel, but company practice is to use Excel. I have a colleague who used to do his tests, along with the test plan documentation, all in Adobe FrameMaker and publish the output to PDF (what a nightmare for a newbie unfamiliar with FrameMaker to maintain).

Sometimes the feature area breaks down into further categories, so we use multiple worksheets within the Excel spreadsheet. An example could be different SIP phone types, like the Aastra 9133i and Aastra 480i. Some tests are shared across phone types and some differ, and we group them all in a single Excel spreadsheet, separated by worksheet.

Finally, we break down the test case definition into the following tabular spreadsheet format (presented as CSV here):

Test Category (or keyword/grouping),Test Case ID,Test Name/Description,Test Procedure,Expected Result,Status,Defect ID,Comments,Test Scope,Automation flag

Test category represents a feature group to test, like call transfers or putting calls on hold.

Status is for pass, fail, blocked, skipped, etc.

Comments is for additional info not captured in the other columns.

Test scope defines whether the test case is for basic acceptance testing, regression testing, etc.

Automation flag indicates whether the test is automated (yes/no).

The test procedure includes parameters indicating whether there is a matching automated test script, the preconditions and postconditions for executing the test case, and detailed steps to execute the test.
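For illustration, a made-up row in that format might look like this (the phone names, test ID, and build number are invented for the example):

Call Handling,TC-042,Transfer an active call to a third phone,"1. Establish a call between phone A and phone B. 2. On phone A, press Transfer and dial phone C. 3. Complete the transfer and hang up phone A.",Phone B and phone C have two-way audio and phone A is released,Pass,,Verified on build 2.1.3,Regression,Yes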

In hindsight, in terms of test management and usability (in execution and maintenance) across testers, it seems best to keep test procedures generic with respect to test data values and let the tester define the test data as needed. Testers could use the same data/configuration or different ones. But for automation and reproducibility, it is best to use the exact same configuration (except for random data testing).

So I figure it is better to modify the template format and add a new column for the automated test procedure, which would spell out the test script and matching test data to use, and describe what the automated test script actually does with that data (so the tester doesn't have to open the test script to see what it does). This column would also mention the automation's preconditions and postconditions, which may differ from the manual test version. The original test procedure column would then spell out a more generic version for manual test execution.

Let me know your thoughts on our approach (the good, bad, and ugly). And the approach your company takes.

Hello @daluu!

Our company has testers focus on risk-based tests rather than requirements or functional tests. With that change, developers are responsible for demonstrating requirements through many types of testing (unit, requirement, functional). Testers review the results and occasionally consult on creating tests.
With this change in focus, I coach testers to write test cases using open-ended questions. These questions are motivated by an information objective - usually the risk under evaluation. I don't recommend long descriptions of HOW to execute the test, and I don't recommend having an expected result. The result of a risk-based test is information rather than a binary pass/fail.

If descriptions are needed - and they often are - they become part of an application guide. In this manner, the tester can learn about the part of the application under test and let their open-ended questions guide them towards exercising a risk.

When I read your description of the documentation, it felt like a test case may carry more documentation than needed. In my opinion, I want testers testing, exercising, inspecting, experimenting, and reviewing products rather than documenting. I believe plans are valuable but should be a small part of what a tester does.

Joe

Joe, that sounds like an interesting approach. Do you have an example of a risk-based test? Do you manage it in some sort of test management tool?

Hey @daluu. Are you, as a current tester, or are new testers, as overwhelmed by your processes as I am by your explanation of them? Using an Excel spreadsheet to track your test cases is not a bad thing. Let's face it, some companies rely on testing but either don't want to or can't afford to invest in testing tools that could make testing much easier.

Having said that, I've been in similar situations. As I have progressed through my testing career, I have been introduced to different methodologies for test planning that have made the process much quicker and more efficient.

My favorite process so far has been using the Agile Scrum methodology to create user stories. We created epics that covered chunks of functionality, with the basic parameters laid out in the epic. This included the definition ("As a user with the correct permissions I can do stuff"), a functionality mock-up (in this case a screen), field specs (field value definitions and location information, e.g., DB table), relevant permissions, and permission definitions.

We then created user stories using the Given -> When -> Then format. This format allowed us to be generic where specificity wasn't needed. If there was something technical that was needed, we would add a technical note for the developers. They coded from our user stories, the mock-up, and the specifications. I cannot say that I was ever on a better-run project.
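If it helps, here's a rough sketch of what one of those stories might look like in that format (the feature and permission names are made up, not from the actual project):

Feature: Record editing permissions

  Scenario: User with edit permission updates a record
    Given I am logged in as a user with the "edit records" permission
    And a record exists that I can open
    When I change the record's description and save it
    Then the updated description is displayed
    And the change is written to the audit log

  Scenario: User without edit permission cannot update a record
    Given I am logged in as a user without the "edit records" permission
    When I open the record
    Then the edit controls are not shown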

Long story short, if you think there is a better way to do what you are doing, see if you can spearhead a campaign for change. If you can’t, do a little research on your own time and send the information you found to your manager.

Testing is often a thankless job that non-believers don't see added value in. If you can show how you can increase efficiency and save money, you just might be the hero of the day/week/month/year. In turn, this could lead to recognition, or even a promotion, for your efforts. At the very least, it could make your life way easier if your suggested changes are implemented.

Hello @elysialock!

We used Hewlett Packard ALM for a long time to manage projects; test cases are a part of a project. The UI is, in my opinion, very clumsy and unintuitive. When considering test cases in ALM, they follow a hierarchy of Test Set (e.g., Authentication), Test Case (e.g., Log In or Navigation to a Secure Page), and Test Steps. In my opinion, Test Steps is very poorly named but it is where the details of a test plan reside – both descriptions and results. Further, the UI “helps” a tester by naming a “step” Step 1, Step 2, and so forth. I believe this gives testers a myopic view of their testing and leads to perceptions that anyone can execute a test.
We recently moved to Microsoft Team Foundation Server (TFS). I’m still new to it but it appears to suffer from the same view of testing as HP ALM.

As an example of risk-based testing, let’s look at Authentication. This functionality would be defined through requirements. Functionally, the log in page appears, credentials are provided, and the user navigates to the next page.
In testing, we might explore both valid and invalid credentials for entering the application. Valid credentials allow the user to enter the application; invalid credentials navigate the user to an error page or a “forgot my password” workflow.
In a risk-based approach, I might ask what happens if the user enters the application without credentials. While I’m not too concerned with HOW this might occur, there is a risk (either a technical risk or a business risk) that it could occur. For example, if I entered a bank application web site without authenticating, I could transfer money to other accounts. This is the kind of risk I want to explore.
My test plan is a set of open-ended questions to guide me in testing for this risk. Here are some examples.

  • What happens when I navigate a bookmarked page within the application?
  • What happens when I use the bookmark’s URL in a different browser?
  • After landing on a page inside the application without authentication, what is the behavior of the application when navigating to a succeeding page?

Note that these questions provide feedback about scenarios. For example, if I successfully navigated to a bookmarked page without authenticating, I could provide that information back to the project team in the form of a defect or some other report.

I’d be happy to entertain other functionalities from a risk-based point of view.

Joe

Cool, that’s very similar to what I’ve been doing. I didn’t know if you had a template or format for cases that you’re using. Thanks!

@m.pawloski, I haven't been doing much testing lately in my current role. This was something I thought about in a previous job, where we used what I described. The other testers there didn't seem to have trouble using what we had, nor did I; it just didn't seem optimal (to me).

I did propose some automation (and automation test case) tooling changes/migrations, one of which involved using Robot Framework, which supports the BDD Given/When/Then Gherkin format. Too bad that got shot down due to the amount of work involved (even if it would have been done over time).
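For anyone curious, here's a minimal sketch of how a Gherkin-style test reads in Robot Framework (the keywords and the call-hold scenario below are hypothetical, not from our actual suite; the keyword bodies just log what a real implementation would do):

*** Test Cases ***
Hold And Resume An Active Call
    Given a call is established between two SIP phones
    When the first phone places the call on hold
    Then the second phone should receive hold treatment
    When the first phone resumes the call
    Then two-way audio should be restored

*** Keywords ***
A Call Is Established Between Two SIP Phones
    Log    Set up and answer a call between phone A and phone B

The First Phone Places The Call On Hold
    Log    Press Hold on phone A

The Second Phone Should Receive Hold Treatment
    Log    Verify hold music or silence is played to phone B

The First Phone Resumes The Call
    Log    Press Resume on phone A

Two-Way Audio Should Be Restored
    Log    Verify audio flows in both directions

Robot Framework drops the leading Given/When/Then when matching a step to a keyword, so the same keywords can be reused across scenarios.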

In my company, we've been using the BDD format for a while to write scenarios and improve collaboration.

We develop software using Scrum, so we generally use Sprint Planning to discuss and create most of the critical scenarios during the meeting, with the participation of all team members (devs, SMs, and QAs). After that, the document becomes accessible to the whole team, and the test team usually adds more complex scenarios to raise the overall quality. It is used as a guide for development and testing, and as documentation too.

These scenarios help the testers check the most important parts of the recently developed feature. I usually add an extra layer of exploratory testing to make sure I covered everything that needs to be tested for that feature.

So I work for Kualitatem, and we have our own test management tool that's also sold commercially. It's a cloud-based tool. We write and execute our functional test cases in it for both manual and automated testing. We just assign a test case ID and write a description with expected and actual results, but if we are following requirements, we associate our test scenarios and test cases with those requirements.

For a test case that is marked as failed, we associate a defect with all the relevant information. The custom reporting allows us to create custom test and bug reports and export them in a format of our choice.

The functional test cases should include all the scenarios related to the workflow and any additional jobs that could be part of a process. To make the test cases easy for the end user to understand, we always divide them into multiple scenarios, grouped by module-level functionality. Further sub-division is also considered a good approach.

For example, take an e-invoicing application that only creates and validates invoices. The test cases cover the following standard workflow:

  • Invoice creation
  • Invoice submission to client by vendor
  • Invoice payment to vendor

There are other steps added to the above workflow that need to be documented as additional scenarios. To name a few:

  • Invoice review
  • Additional approval if required
  • Invoice approval
  • Additional approval
  • Notification to requester
  • Reason for rejection if given or not
  • Any discount or adjustment to the invoice

The above example is just an overview of how to cover the entire functionality of the application using different scenarios. There are many test case management tools for documenting test scenarios. We use TestRail for this purpose; it is easy to use and it also integrates with Jira. TestRail or any other test case management tool can be useful for documenting test cases. Top software testing companies tend not to emphasize MS Excel for documenting functionality, but there's no point debating that, as the choice is entirely up to the user.

When we document the test cases, we capture the pre-requisites as the very first step, followed by the steps to verify the functionality.

Discrepancies in the test cases are always recorded in the form of expected/actual results. Expected is the desired behavior; actual is the issue that has been observed and needs to be corrected. The application version, with the build/patch number if one was applied, should also be mentioned. Attachments, if required (e.g., class/XML or XLS/CSV files), should not be left out, and should come with proper instructions for using them.
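To make that concrete, here's an invented example of how a single documented case might look (the IDs, build number, and results are purely illustrative):

Test Case ID: INV-023
Pre-requisites: A vendor account exists; an invoice has been submitted and is awaiting approval.
Steps: 1. Log in as an approver. 2. Open the pending invoice. 3. Reject the invoice without entering a reason.
Expected Result: The application requires a rejection reason before the rejection is accepted, and the requester is notified.
Actual Result: The invoice was rejected with no reason recorded; defect DEF-101 raised.
Build: 3.4.2 (patch 1)
Attachments: rejected-invoice-export.csv, with instructions to load it in the reporting module for verification.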

Hope this information is helpful for you.

Something different I got to know today about functional testing!