How can you map multiple automated tests to a single manual test?

I encountered a problem with the slot game project. I actually have a set of 10 test cases that manual testers use on 20-30 different slots (different designs, etc.).

The question arises: how can I map all 20 tests to one slot? Currently I repeat the test case for each slot (basically Ctrl+C / Ctrl+V), but some tests have unique features that require writing specific handlers.

I thought about putting everything into one loop, but then tracing becomes a problem (it's not clear what failed, and there are no statistics per slot/case). Running the tests has the same problem: I don't quite see how to conveniently pass the set of cases to run via CI/CD.

At the moment, I pass two variables when running tests in GitLab: version (selects the slot game version) and grep (selects a tag, for example smoke). This lets me flexibly run different tests with different tags. But I can already see a scalability problem: the project has more than 200 cases (total run time is 33 minutes on two machines), which is acceptable for now, but in the long run I foresee a huge number of tests that will be difficult to maintain.

I use Playwright and standard trcli (if there is a better and more convenient tool, please recommend something).


After some mental gymnastics, I came to the following conclusion: I don't like the first (many-to-one) option at all. Plus, there's the problem that each of these six tests has to be multiplied by the number of slots (say, 10), so to check one feature I'd need 60 tests in separate files. That's not so bad, though, since parameterized tests don't really fit here anyway.


Quite an interesting problem.

Question: what does the box represent? 6 tests of a slot?

I’m curious to know why parametrisation wouldn’t have helped here.

So if you had a structured JSON of the slots and another of what tests each slot needs, would that not help then to only run the required tests for each slot?


The black box around them is just a logical grouping for convenient visual display.

There are several problems:

There are a large number of slots, and mapping test cases one-to-one is practically impossible. The current logic is one manual test case for every 10 different slots (some slots have different configurations: for example, a welcome screen, a no-win spin, a no-loss spin, and so on).

Therefore, I decided to divide everything into folders to make it easier to run tests (for example, only a folder with 1 slot or only slots with the @ tag for one slot).

Parameterized tests are cool, but (in my opinion) they don't allow that kind of flexibility, and they make debugging and reporting much harder (if the test fails first and then the rest pass, the results get overwritten).

And then there’s the problem with Testrail. Their official reporter isn’t very convenient, but I’m forced to work only with it (company policy). If anyone has a solution with a more convenient reporter, I’d be grateful!

I had an idea to use projects in Playwright to separate slots, but I’m still thinking about how to implement it.

Which is probably why this is really 6 test cases, just with different contexts or environments.

In the same way the manual tests have a reporting problem, the automated tests would too. Many frameworks (almost every one I have used) struggle to separate parameterised tests without duplicating some shared steps that don't really need to be repeated. And that's where automation might start to save execution time: what if you could run all 6 tests simultaneously on different CI agents? Is it possible to run the test on 6 different agents, one per variation? Testrail would handle parallelised and parameterised tests, would it not?

Admission: I've never used Playwright, and only ever evaluated Testrail but found it too complicated and went with Zephyr by SmartBear.

The problem lies in reporting. To be honest, I don’t really care that a lot of the code is just ctrl+c/ctrl+v with minor edits for individual slots. For now, we’ve agreed to simply create separate tests for each slot and do the same in Testrail. This will give us a good understanding of the test history (when and what failed) and the stability of the slot as a whole.

  1. The copy-paste method tells me that your code is not structured efficiently.
    That is part of what makes debugging hard.
    If there are slightly different values, you can extract the common parts into a function (FUNC1) that receives different parameters each time it is called.
    We split into functions by the expected results.
    Different expected result = different test.
  2. In testing we usually have component tests, which may be the ones you refer to as test1, test2 … test6. But when we run those in manual testing we tend to combine them into a longer E2E test.
    (saves time)
    So you can use tagging/suites for your E2E test.
    Test6 on the right in your illustration is your E2E; this test would include the component tests [test1…test6].
    So if test6 creates a report, it would still report whether a component test failed and in which one; that way you keep the logic separated while keeping traceability to the bigger E2E view.
    I hope that helps.

Aha, yes, that sounds like something I would do too. Not that my experience counts for more, but having the reporting tool give you a pass/fail history for each copy is super useful.

I often have a load of tests that share a lot of copy-pasted setup and teardown, or differ only in one small area. But for all the phobia I have around copy-paste and DRY, I have to tell myself not to refactor sometimes. Sometimes it's better to have almost identical copies of a test, which let you reason about the product behaviour more explicitly by just having different targeted assertions and steps. I feel it gives me the freedom to break one of the tests without harming the other 5 at all.


In principle, I have the same approach; some tests differ only in terms of numbers or additional assertions. Although this may appear to violate the DRY principle, it allows for great flexibility in compiling tests for launch, adding tags (skip, fixme), and so on. Thank you for your feedback!
