Organising test cases

Hello everyone!

Recently I got a new job as a QA Engineer at a small company. I have experience with testing (just testing) from working at a big testing company.

I was asked to create test cases for all the projects we have (mostly e-commerce websites). While I know what a test case is and how to write one, I'm having trouble organising them. For example, should I create test cases for each functionality? For each page of the website? Nest the former inside the latter? We don't have product requirements or any sort of documentation, so I need to write them purely based on what I see on the website.

Do you guys have any advice?


If you have no requirements documentation, then how can you be sure that any test case matches the required functionality? Indeed, how was the e-commerce website written without a spec?


Well, I suppose they have some requirements in some old conversations with the clients, but these are not new projects and I feel kinda bad pressuring them about documentation. I've written some test cases by assuming how the functionality would work, and I guess I'll ask the devs to review them.


Congrats on the new job! First, I’ll say that this isn’t something that you can be wrong about.
I work in EdTech, so I’ve never organized tests for a commerce website. Some folks organize by function, user role, or by page. You may try it one way and then decide that you’d prefer something different and reorganize them again. When I organized my tests, I went by functionality and I’m happy with the results.

I like to organize by function or user story. I also use tags so that I can run subgroups for testing individual features when necessary. I also have tags for automation, manual testing, mobile-only, critical, etc. I don’t know what type of test management software you’re using, but this is a pretty common feature among all brands.
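That tag idea can be sketched in a few lines, tool-agnostically. This is a minimal sketch assuming a simple in-memory model of test cases; the case names and tags below are made up for illustration, and real test management tools expose this filtering as a built-in feature:

```python
# Minimal sketch of tag-based subgroup selection.
# The case names and tags are made-up examples, not from a real suite.
CASES = [
    {"name": "login_valid",        "tags": {"smoke", "automation"}},
    {"name": "checkout_guest",     "tags": {"critical", "manual"}},
    {"name": "cart_mobile_layout", "tags": {"mobile-only", "manual"}},
]

def select(cases, *wanted):
    """Return the names of cases carrying every requested tag."""
    return [c["name"] for c in cases if set(wanted) <= c["tags"]]
```

Calling `select(CASES, "manual")` picks out the two manual cases, and combining tags (`select(CASES, "smoke", "automation")`) narrows further, which is how tag filters behave in most test management tools.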

When I reorganized my tests, I made a quick diagram to map out my file structure before I got started. I showed it to my team and asked for feedback. Hopefully, they’re willing to give feedback.


I've written some test cases by assuming how the functionality would work and I guess I'll ask the devs to review them

This is the right way to do it (except not just devs - ask product too). Ward Cunningham once said that the best way to get the right answer on the Internet is not to ask a question; it’s to post the wrong answer. It’s also the best way to get specs.

Often when you do this you'll find corner cases where nobody is really sure how the app should work, or even how it currently works. By dragging people into those conversations you can provide a huge amount of value and even change the direction of the product.


I’d go with a session-based approach & a checklist for release/smoke.

How to get started learning your product and doing session-based testing:

For the release/smoke checklists:

  • using your gained knowledge of the business, risks, and product, create a list of interesting test ideas;
  • go through them with the team (lead/PM);
  • automate the verification for as many of them as you can (code them at the level agreed with the team);
  • build a CI/CD pipeline that runs on code changes (main branch pushes, deployments to a pre-prod environment, code commits)…
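As a deliberately tiny sketch of the "automate the verification" and pipeline steps, assuming the checklist boils down to "the key pages still load": the paths below are placeholders for whatever your team agrees matters.

```python
# Minimal smoke-check sketch a CI/CD pipeline could run after each deployment.
# SMOKE_PAGES is a placeholder list; swap in the pages your checklist names.
from urllib.request import urlopen

SMOKE_PAGES = ["/", "/cart", "/checkout"]

def smoke_check(base_url):
    """Return the paths that failed to load; an empty list means the smoke run passed."""
    failures = []
    for path in SMOKE_PAGES:
        try:
            with urlopen(base_url + path, timeout=10) as resp:
                if resp.status != 200:
                    failures.append(path)
        except OSError:  # covers HTTPError, URLError, timeouts
            failures.append(path)
    return failures
```

The pipeline step then simply fails the build whenever `smoke_check(...)` returns a non-empty list.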

I think we should first map out the function points of the whole project, then get familiar with its actual usage scenarios (or how the industry will use it), and then write test cases according to priority.


This is a ninja question; it's been on my mind recently as we're looking to use a Jira plugin for test case management, @carolinel. So welcome to the community, and I hope you get some clues from the responses. I'm going to share a very broad generalisation I've built up over 15 years as a coder and nearly 15 as a tester.

As an automation person, I'm typically making intersections of tests across collections or groups, where the tests in one source file are called a suite, while all the test source files in a folder form a higher-level suite too. My biggest dilemma is that the organisation of tests organically occurs at the feature (web page) level, and rarely at the internal component level. We tend to think in components every day because teams own components and defects also live in components, so it's always going to be more comfortable to build collections along component boundaries in the longer term. I'm thinking about ways to add all our testing into Jira next month, and whether a fresh organisation is going to help me at all.

To date I have never worked in a company where we managed to agree on one or the other, and maybe that's OK. I just prefer component organisation because, as we all know, most defects are injected by humans, and so defects in the product are often injected at the boundaries between teams of humans. Hence my desire to arrange suites by component rather than around functionality or workflows, even if functionality groupings are what we deliver at the end of the day. Maybe the answer is to just live with both.

A common defence for grouping by feature is that we need to write tests to verify/satisfy requirements for one feature at a time as they get added. Taking the long-term view, that's a false economy, because testing requirements alone becomes a box-ticking exercise that creates requirements coverage to the detriment of end-to-end flow testing, which is what finds the interactions and side effects that appear when all of the pieces come into play. Testing purely to cover requirements feels a bit like the TDD mantra: it's not evil, but it's also not the entire story. It's also too easy to get hung up on architecture. Maybe the winners are the people who build an organic philosophy that allows tests to move about as time goes by. Being able to delete old tests HAS to be part of your philosophy anyway, so maybe moving them around every so often needs to be added to the maintenance programme too.

I like your idea of arranging tests based on conversations you have with customers, @carolinel. Mapping what you test onto real-world scenarios needs to be more natural in our everyday test case management thinking, because customer scenarios are journeys that cross the trouble spots your customers actually care about.

Congratulations on the new job!

I’ve only ever worked in smaller companies (and for some reason, when my smaller company got bought out and became part of a bigger company, my group stayed in its little silo), so I’ve got some ideas that might help.

  1. You can use the software itself as your requirements - by this I mean that if you’re looking at a quantity field, you should only be able to enter whole numbers, most likely positive whole numbers, although sometimes it’s possible to enter a negative to clear the order. Other data entry fields will have similar limits based on the type of data they take. If you have access to the data behind the websites, then the maximum size of a field in the data should match the maximum allowed in data entry.
  2. Ask older team members, whether developers, testers, business analysts. Don’t forget the customer service folks - they’re the ones that field questions and problems from the end users, so they tend to know a lot about how the software should work.
  3. Common sense. Look for spelling errors (bad spelling in a commercial site tends to make the site look less than professional), bad grammar, awkward layout or anything that makes the site harder to use. Also cases where images impinge on the text or the forms, excessive page load times and so forth.
  4. For organization, is each project independent, or are they created from a base template and then customized and branded? If they’re independent, it doesn’t really matter how you organize them as long as the method you use works for you and those you’re working with. If there’s a template, you can use base test cases for functionality, and have a set of test cases covering the branding and customization for each individual project.
  5. As you might have guessed, I’m a big fan of minimizing duplicated effort in any and all forms. My preference is to document functionality exactly once and link to it from anything else using that functionality. As an example, if you can log in at any point from browsing the products to finalizing your order, your log in function should still be a single function point that gets pulled into your test scenario when needed.
  6. If there’s a common structure to the projects, use that structure in your test organization, but separate out common modules - so if there’s a store browsing page that opens a product detail page, that’s a base test that’s common across the different ecommerce sites. You still have to test all of them, but you only need to document the test once. The different sites and different product types are parameters to the test.
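Point 6's "sites and product types are parameters" idea can be sketched as a test-case matrix: document the base test once and expand it per site. The site names and product types below are made-up placeholders (a runner such as pytest, via `@pytest.mark.parametrize`, performs the same expansion for you):

```python
# Expand one documented base test into concrete runs per site and product type.
# SITES and PRODUCT_TYPES are made-up placeholders for illustration.
from itertools import product

SITES = ["shop-a", "shop-b", "shop-c"]
PRODUCT_TYPES = ["physical", "digital"]

def expand_base_test(name, sites, product_types):
    """One documented test, many concrete runs."""
    return [
        {"test": name, "site": s, "product_type": p}
        for s, p in product(sites, product_types)
    ]
```

Three sites times two product types yields six concrete runs of a single documented test, which is the "document once, parameterize per project" shape described above.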

You don’t have to use any of these suggestions: they’re things I’ve found helpful in my working life. If they work for you, that’s great. If they don’t, that’s also great: you’ve learned something that doesn’t work.
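The quantity-field rules from point 1 can likewise be written down as a tiny validator, which doubles as documentation of the inferred requirement. This is a sketch under the assumptions stated above: whole numbers only, with an optional negative used to clear a line.

```python
# Validator for the inferred quantity-field rules (an assumption drawn from
# observing the UI, not from a spec): whole numbers, optionally a negative
# value used to clear an order line.
def validate_quantity(raw, allow_negative_clear=True):
    """Accept whole numbers; optionally accept negatives used to clear the line."""
    try:
        value = int(raw)
    except ValueError:
        return False  # rejects "1.5", "abc", and empty input
    return value >= 0 or allow_negative_clear
```

Writing the rule down this way makes it easy to hand to the devs for review, per the earlier suggestion: if the assumption is wrong, the code is a concrete "wrong answer" for them to correct.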


This is probably the biggest reason why I don’t like test suites that are organised around requirements: repetition can get out of hand. Well said.