I’ve been working with Testomat for almost 3 years. I transitioned from TestRail, and it was the best decision I’ve made. If you’re interested, I can share more of my experience.
I haven’t used a test management tool for almost 10 years now. I’m not really sure what their value is, and my biggest issue is that hardly any developer ever looked into it. It was always seen as a testers’ tool.
I’m a fan of keeping all test documentation in the same place as the whole dev team, connecting it to the code, and feeding back into the same ecosystem through CI/CD.
I know Testomat integrates with CI/CD and more, but to me it feels like an additional layer. I’d like to use the same tools as the devs.
I wonder: do developers work with Testomat on your team? How does it work for the whole team, and not only for testers?
I actually really like your point about keeping everything in the same ecosystem as developers. If a test management tool becomes a QA-only island, it usually fails.
In our case, it works a bit differently. Our E2E automated tests are synced with Testomat via CI/CD, so execution results are automatically reflected there. Developers don’t need to work inside the tool daily, but they can clearly see:
what is automated within their feature scope
what isn’t covered yet
and how stable that coverage is
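The sync described above can be sketched roughly as a CI step like the following. This is a hypothetical GitHub Actions fragment, not our actual pipeline: the test runner (Playwright here), the `TESTOMATIO_API_KEY` secret name, and how the reporter picks up the token are all assumptions — check your Testomat.io project’s CI integration settings for the exact reporter command and environment variables.

```yaml
# Hypothetical CI job: run E2E tests and report results to Testomat.io.
# The runner (Playwright) and secret name are assumptions for illustration.
name: e2e
on: [push]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # The Testomat.io reporter reads the project API token from the
      # environment and syncs this run's results, so automated coverage
      # and stability stay visible to the whole team, not just QA.
      - run: npx playwright test
        env:
          TESTOMATIO: ${{ secrets.TESTOMATIO_API_KEY }}
```

The point of wiring it this way is that nobody has to enter results by hand: every pipeline run updates the tool, and developers and PMs just read the dashboard.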
Our PMs also use it to understand feature readiness and regression scope before releases.
So yes, I’d say it’s primarily a QA-driven tool, but it acts as a bridge between multiple processes rather than a separate layer.
For us, it became a kind of source of truth because QA consistently maintains documentation, links it to automation, and keeps coverage transparent. I completely agree that tools only make sense if they add visibility instead of friction. In our case, it helped align QA, product, and engineering rather than isolate testing.
Thanks for sharing the details. Every environment is different, and every team works differently.
In our case, we didn’t have a need for a test management system, or even Excel. We never had a PM asking for feature readiness; the team had a process and system in place to know if a feature was ready to go live. There was never any sign-off from the PM, nor a check on coverage, etc. We never aimed to test everything, and we were aware of it. We had a very good monitoring and observability system and were able to spot issues and fix forward very soon after a problem arose. With each production incident, we improved coverage and monitoring, expanded test data, etc. We were OK with failures and learned a lot from them. We possibly focused too much on speed at some point, and later on we had to slow down deliberately.
Our E2E suite was not growing, and that was an intentional approach. We automated only core user journeys; the rest of the automation happened at the unit, integration, and API levels.
I know it may sound a bit unclear, maybe even chaotic, how the whole picture fits together or how it could work successfully. I think the high-trust, high-performing teams we had and the very good infrastructure made it work. Being OK with failures and having a no-blame culture also helped a lot.