Automation - reporting and showing value

Morning all - Question mainly to those who do automation.

Do you find yourself having to demonstrate the value you’re bringing, continued improvements, or other metrics?

How are you reporting on this?

No, I don’t, and I am wondering why you should have to demonstrate that. Especially the “you” part. Test automation is part of development. You are developing in a team. Who is asking for that demonstration?

the main goal of test automation is giving developers confidence they haven’t messed up, so they can make big changes. the cost is development and maintenance. the only people who can make a quality judgement in this case are the devs.

there are other benefits with other metrics. who is asking and why?


Yes, when we first started implementing AT it seemed to be a concern with middle management. We were taking a Sr Developer away from writing features and moving them to the test team. Our small AT team was asked to hold weekly meetings showing the current test count, number of bugs found by AT, time saved during regression, etc. After about a year the meeting was canceled. We now have 2 teams (out of ~8) that use automation as the value of having AT is catching on, albeit slowly. Coming back to your question, we would add a tag ‘AT’ to our work items and bugs so they could be analyzed by whoever was interested.
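For anyone wanting to do something similar, here is a minimal sketch of that kind of tag-based analysis. The data shape and the `type`/`tags` field names are assumptions standing in for whatever export your work-item tracker provides:

```python
# Hypothetical export of work items from a tracker; field names are assumptions.
work_items = [
    {"id": 101, "type": "bug", "tags": ["AT", "regression"]},
    {"id": 102, "type": "bug", "tags": ["ui"]},
    {"id": 103, "type": "task", "tags": ["AT"]},
    {"id": 104, "type": "bug", "tags": ["AT"]},
]

# Count bugs found by automation vs. overall, using the 'AT' tag.
bugs = [w for w in work_items if w["type"] == "bug"]
at_bugs = [w for w in bugs if "AT" in w["tags"]]
print(f"Bugs found by automation: {len(at_bugs)} of {len(bugs)}")
```

The point of the tag is exactly this: once work items carry it, anyone interested can slice the numbers themselves without the automation team preparing a report.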


Thanks @charlie_from_cny this response resonates with me so appreciate it.

It’s exactly the thing I am considering here - e.g. showing we’re developing and adding new tests as the functionality grows, showing we’re perhaps making them run more efficiently, or how much time we’re spending on rework, etc.



You’re reading too much into the “you” part there.

In the context I’m working in, the automation is running as part of the product team yes, but more at the UI end (so using the Selenium framework).

From the perspective of management, I want to be able to demonstrate that the automation is improving continuously, either in terms of the number of tests (whilst I know that this alone isn’t useful, the assumption being that new tests are added as new functionality is added) or efficiency (running tests faster).

If you don’t think this is a useful thing to do then I would be interested in why? And in your example, is it just a case that you do it - i.e. it’s seen as part of the development process, so you don’t necessarily feel you have to “justify” it?


If you don’t think this is a useful thing to do then I would be interested in why?

because you are doing that for management? improving the quality is important for the team, but management shouldn’t care about these things. the team should be able to pick their own processes and tools (to a point) and if the team is confident and happy with the test automation that’s happening, why would management step in there?
Unless the dev team is quite junior and you can’t trust them?

you don’t necessarily feel you have to “justify” it?

no need to justify. as a dev team we have to deliver, and when we come to the conclusion that certain things are stopping us from delivering faster and more safely, we might decide to invest more in test automation. without sufficient test automation there is no good CI/CD. delivery is something management should care more about, and high-quality test automation is often accompanied by good DORA metrics :slight_smile:
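To make one of those DORA metrics concrete, here is a sketch computing lead time for changes (commit to production) from timestamp pairs. The `(commit_time, deploy_time)` tuples are invented sample data; in practice they would come from your VCS and CI history:

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs pulled from CI history.
changes = [
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 17, 0)),
    (datetime(2023, 5, 2, 10, 0), datetime(2023, 5, 3, 10, 0)),
    (datetime(2023, 5, 4, 8, 0), datetime(2023, 5, 4, 12, 0)),
]

# Lead time for changes: elapsed time from commit until it is running in production.
lead_times_hours = [(deploy - commit).total_seconds() / 3600
                    for commit, deploy in changes]
print(f"Median lead time: {median(lead_times_hours):.1f} hours")
```

A shrinking median here is the kind of delivery-focused number management tends to care about, rather than a raw test count.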


We needed to justify our test automation for a long time, as a tester had wasted a lot of time on such a project in the past. Devs and the PO were originally against it, but the test director required us to do it.
This meant we tried to demonstrate the usefulness of our automation work to reduce the backlash and make them understand why we thought it was so useful.

Helpful metrics (without too much effort) were number of bugs uncovered through automation, especially highlighting severe ones found as early as development that could otherwise have been missed. Our devs now really appreciate the quick feedback we can give them on their work through automation.
For highlighting overall effort and progress with a nice picture for reports we have a graph measuring covered components, number of test cases and level of coverage. Comparing snapshots of this graph from different points in time shows our progress.


Whilst I understand where you are coming from, isn’t this just one form of automation?

In our case, we don’t have CI/CD, and automated tests are an aid to regression testing (which might otherwise be more manual), mostly once the build is in the test environment. It’s perhaps not as mature as in your scenario, and maybe that’s why the justification isn’t required there? In your case it’s baked into the process now, and without it the process just wouldn’t work.


o.O i have trouble following you

we are talking about test automation, right? and you are running regression test suites in a test environment? and based on some criteria (all green?) this allows changes to go from test to production probably?
And the goal of management and dev is to bring changes to production faster, in a manner that doesn’t introduce new bugs or regressions. That’s what we care about; whether getting better at that is accompanied by more machine-runnable test cases is not that important for management, I’d argue.

isn’t this just one form of automation?

or am I misunderstanding the question? :slight_smile: what other kinds of automation are we talking about here?

OK, maybe my use of the words “kind of automation” here was wrong; what I meant is the approach to the automation. I need to think a little more before I reply to this one again.
But the question was more about whether those in automation have any requirements to report to “someone” on their progress and continuous improvement.

hmm… I guess it would make a difference whether test automation is embedded in a team or done by some third party. when “test automation” is the work itself, then there are people who want to check on that
