Automation - Tracking included tests

Morning!
For those of you who use automated tests as part of your test approach, do you keep a record anywhere of which tests are actually covered by the automation (e.g. so a team member could review and see what was covered)? If so, how do you do this? If you don't, I'd be interested in understanding the reasoning for that.
Thanks

1 Like

Currently, the only record we have is the code repo for the automation itself.

In most cases, we go directly to automation without any formal scripted test. Obviously I am doing some exploring as I formulate my automated tests.

What I do feel I am missing is:

  1. A central place to record and review test coverage

  2. A single place to aggregate test results across builds (Jenkins logs don't stick around for long) - see the sketch below this list

  3. Anywhere to go for guidance should a manual regression test actually be desirable
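
On point 2, one lightweight option is a small script that each build runs at the end to roll its results into a history file that outlives the Jenkins logs. This is just a minimal sketch, assuming JUnit-style XML reports; the `reports/` directory, `results-history.csv` file and build-id argument are illustrative names, not anything we actually use:

```python
# aggregate_results.py - minimal sketch, assuming JUnit-style XML reports.
# The reports/ directory, results-history.csv file and the build-id argument
# are illustrative names only.
import csv
import os
import sys
import xml.etree.ElementTree as ET

def summarise(report_dir):
    """Roll up test/failure/error/skip counts across all XML reports in a directory."""
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}
    for name in os.listdir(report_dir):
        if not name.endswith(".xml"):
            continue
        root = ET.parse(os.path.join(report_dir, name)).getroot()
        # Works whether the root element is <testsuites> or a single <testsuite>.
        for suite in root.iter("testsuite"):
            for key in totals:
                totals[key] += int(suite.get(key, 0))
    return totals

if __name__ == "__main__":
    build_id = sys.argv[1] if len(sys.argv) > 1 else "local"
    totals = summarise("reports")
    # Append one row per build so the history outlives Jenkins' log retention.
    first_write = not os.path.exists("results-history.csv")
    with open("results-history.csv", "a", newline="") as history:
        writer = csv.writer(history)
        if first_write:
            writer.writerow(["build", "tests", "failures", "errors", "skipped"])
        writer.writerow([build_id] + [totals[k] for k in ("tests", "failures", "errors", "skipped")])
```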

3 Likes

I think the automation should document what it covers. You'd aim for a good understanding of the goal at each level. Some love put into the readme, or a high-level summary of what's in there, can be helpful (in my opinion).

It can be hard to understand your overall coverage if you have tests in different places, e.g. unit tests, other automation, manual tests, etc.
I know there are test management tools that can help with that, but they can be pretty heavyweight.
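
One lighter-weight alternative to a full test management tool, if the one-line descriptions live in the tests themselves, is to generate that readme-style summary from the code. A minimal sketch (Python, assuming pytest-style test files under a tests/ directory; the paths and the COVERAGE.md name are just examples):

```python
# coverage_summary.py - sketch: collect test names and first docstring lines
# into a single reviewable summary. Paths and the output name are illustrative.
import ast
import pathlib

def collect(test_dir="tests"):
    entries = []
    for path in sorted(pathlib.Path(test_dir).rglob("test_*.py")):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
                doc = ast.get_docstring(node) or "(no description)"
                entries.append((path, node.name, doc.splitlines()[0]))
    return entries

if __name__ == "__main__":
    with open("COVERAGE.md", "w") as out:
        out.write("# What the automation currently covers\n\n")
        for path, name, doc in collect():
            out.write(f"- `{path}::{name}` - {doc}\n")
```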

So, like everything, it totally depends on what information needs to be shared. There's no harm in a Word doc (or similar) if it helps you or your team.

2 Likes

Wherever I worked, it used to happen like this (note that I've not managed this process in any way, but was 'forced' into doing some parts of it):

  • let's automate some checks;
  • document beforehand what is going to be automated, if the developer (automation engineer) has no idea what to implement or check, or doesn't understand the product at all;
  • document afterwards what was automated, if some manager wanted a green/red table of automated scripts with a naming tag in front;
  • have extra tools, write extra code, and maintain extra data sets to store and show that documentation;
  • stop maintaining the documentation, as it takes too much time and effort to do;
  • stop documenting and delete all the documentation that was made, as it's of no use to anyone;
  • delete the automation project, as the product has been finished/closed or completely rebuilt and the automation code is useless now.

My general advice is: whenever you do something for the 'benefit of the product', weigh the costs against how it will help the business increase sales or reduce revenue impact.

3 Likes

This is a very interesting topic, and it ties into ROI as well.
These days most teams in the industry use a test management tool (free or paid) with fields against each test case, where you can easily mark or create categories like automatable, not automatable, automated, and not automated. That helps everyone on the team and gives clear visibility of which test cases should be picked up by functional vs. automation QA.

1 Like

Great thoughts. At the moment I encourage the team to capture each test in our TCM tool (TFS). I don't use fields, though; I organise by suites (so suites of automated checks, and suites of exploratory/manual checks).

1 Like

Yeah, just next to the test scenarios we write down what type of test covers them (unit, API, UI, or to execute manually).

This way manual testers don't do "double work".
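
If the automated checks happen to be pytest-based, markers are one lightweight way to put that tag right next to each scenario in the code. Just a sketch; the marker and test names here are examples, and the markers would need registering in pytest.ini:

```python
import pytest

@pytest.mark.api
def test_login_rejects_invalid_password():
    """Covered at the API level - no need to repeat it manually."""
    ...

@pytest.mark.ui
def test_login_error_message_is_displayed():
    """Covered by the UI suite."""
    ...

# Run a single layer with, for example:  pytest -m api
```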

1 Like

Naming suites well starts becoming very important. TMSs will always come and go, but the scripts tend to stick around far longer. I've always tried to put comments above each test case, because a lot of context is lost when you only have the code of a script to read: the environments it was intended to verify get lost, and the naming convention for tests, which is arguably the largest part of being able to human-parse a test report, needs to be designed up front.
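
For what it's worth, a tiny sketch of that comment-plus-naming-convention idea (the test_<feature>_<condition>_<expected_result> pattern and the details in the comment are examples, not a standard):

```python
# Intended environment: staging, real downstream services, no mocks.
# Context: the name in the test report alone won't tell you this guards the voucher expiry rule.
def test_checkout_expired_voucher_shows_error_banner():
    ...
```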

Sadly, I've never found a good way of delineating system tests from component tests without knowing each suite and reading the code to discover whether it uses mocks, and for which parts. Every automated or manual test really needs a "best-before" date, so I find tracking tests less useful than continuously creating new ones, as tracking makes it harder to delete useless old ones that are either flaky or not telling you anything.

1 Like