Collecting trends across multiple automation runs

Anyone out there collecting data from test automation runs and storing it to compare trends over time?

:man_shrugging: How are you doing it?

:partying_face: Has it proven useful?

:compass: Point me in the right direction!

At a high level, I would say the checks you run with automation should be context-driven, based on the current state of the software. So I would first ask why you want to collect this data.

Most of my integration tests wouldn’t stop code being merged to main.

And there is no long-lasting record of which tests passed against which builds.

So if a test does fail, I’d like to know whether this is the first time we’ve seen it fail, and what version it last passed against.

That helps pinpoint the version in which the breaking change was introduced.
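For example, even a single SQLite table of outcomes would answer both questions. Here’s a rough sketch; the table layout and function names are just illustrative, not from any particular tool:

```python
# Rough sketch: one row per test outcome, keyed by suite, test, and build.
# Table and function names are illustrative only.
import sqlite3

conn = sqlite3.connect("test_history.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS results (
        suite     TEXT,
        test_name TEXT,
        build     TEXT,                           -- version the suite ran against
        passed    INTEGER,                        -- 1 = pass, 0 = fail
        run_at    TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def record_result(suite, test_name, build, passed):
    """Append one outcome after each automation run."""
    conn.execute(
        "INSERT INTO results (suite, test_name, build, passed) VALUES (?, ?, ?, ?)",
        (suite, test_name, build, int(passed)),
    )
    conn.commit()

def last_passing_build(suite, test_name):
    """Most recent build this test passed against, or None if it never has."""
    row = conn.execute(
        "SELECT build FROM results"
        " WHERE suite = ? AND test_name = ? AND passed = 1"
        " ORDER BY run_at DESC LIMIT 1",
        (suite, test_name),
    ).fetchone()
    return row[0] if row else None
```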

So it looks like you would like to build a matrix of tests against versions. It’s great that you are trying to pinpoint when the code broke. Many existing tools can help with that; if not, it wouldn’t be difficult to build a simple spreadsheet from your recorded results.
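For example, with outcomes recorded as rows, a quick pandas pivot gives exactly that matrix (the column names and sample data here are made up):

```python
# Rough sketch: turn recorded outcomes into a test-vs-version matrix.
# Column names and sample rows are made up for illustration.
import pandas as pd

results = pd.DataFrame([
    {"test": "login_flow",   "build": "1.4.0", "passed": True},
    {"test": "login_flow",   "build": "1.5.0", "passed": False},
    {"test": "checkout_api", "build": "1.4.0", "passed": True},
    {"test": "checkout_api", "build": "1.5.0", "passed": True},
])

# Rows = tests, columns = builds, cells = pass/fail.
matrix = results.pivot(index="test", columns="build", values="passed")
print(matrix)
```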

I don’t want a spreadsheet; I want a tool, a dashboard.

It isn’t just one automation suite and one version. I’m talking about 15+ test suites, each covering between 2 and 8 of the microservices that make up the SUT, each with its own version.
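To make that sliceable, each run record would need to carry the full set of service versions it ran against, roughly like this (field names are illustrative; the store behind the dashboard could be a search index, a time-series database, or a reporting tool):

```python
# Rough sketch: one run record that captures every microservice version,
# so a dashboard can slice failures by any service's version.
# Field names are illustrative only.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class RunRecord:
    suite: str
    test_name: str
    passed: bool
    # Map of microservice name -> deployed version at run time.
    service_versions: dict[str, str] = field(default_factory=dict)

record = RunRecord(
    suite="payments-integration",
    test_name="refund_roundtrip",
    passed=False,
    service_versions={"orders": "2.3.1", "payments": "5.0.0", "ledger": "1.9.4"},
)

# Ship this JSON to whatever store backs the dashboard.
print(json.dumps(asdict(record), indent=2))
```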
