Long post ahead; maybe I should have broken this up by question.
So far I have not run into this; however, a candidate who has experience with a tool we already use would look more appealing (assuming they are strong in the other areas we are looking at).
When it comes to choosing tools, I try to steer away from tools which need extensive training and focus on tools where understanding the underlying concept is enough (e.g., if someone can clearly explain performance testing and how to use it, I feel confident we can onboard them onto our performance testing tools).
That does change slightly if we are talking about something like programming languages. I have been at companies which struggled to recruit because of the primary language in use, and trying to keep things within that ecosystem for ease of cognition was a real trial.
I have an entire rant (I mean talk) about the Swiss Cheese model related to this.
Basically if the purpose of the quality layer is being met and the tool isn't introducing enough friction to be a problem (and the tool is within budget), it's the right tool!
I usually don't have hard numbers on this. You can do an ROI comparison between an old tool and a new tool using things like the number of hours saved on maintenance, test creation, or test run time.
If a tool is solving a specific problem where the problem itself has a cost associated with it (e.g., preventing a data leak or protecting against an issue which has happened before), then you can show ROI by comparing how much you spend on the tool (price plus hourly cost) against the cost if the issue happened again.
If a tool is speeding things up for development, you can do an ROI using that, but you need to keep the initial onboarding cost in mind for this one.
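To make those comparisons concrete, here is a rough sketch in Python. Every number, rate, and function name below is hypothetical and only meant to show the shape of the calculation, not a real evaluation.

```python
# Rough sketch of the two ROI comparisons above; all figures are made up.

def roi_hours_saved(hours_saved_per_month, hourly_cost, tool_cost_per_month):
    """ROI from time saved on maintenance, test creation, or test runs."""
    value = hours_saved_per_month * hourly_cost
    return (value - tool_cost_per_month) / tool_cost_per_month

def roi_incident_prevention(incident_cost, tool_cost_per_year, setup_hours, hourly_cost):
    """ROI from preventing an issue that has a known cost (e.g. a past data leak)."""
    spend = tool_cost_per_year + setup_hours * hourly_cost
    return (incident_cost - spend) / spend

# Hypothetical: 20 hours/month saved at $80/hour against a $600/month tool.
print(f"Time-saved ROI: {roi_hours_saved(20, 80, 600):.2f}")                       # 1.67

# Hypothetical: a $250k incident vs $20k/year tooling plus 100 setup hours at $80.
print(f"Incident ROI:   {roi_incident_prevention(250_000, 20_000, 100, 80):.2f}")  # 7.93
```

The development-speed case fits the same shape, as long as the initial onboarding hours are counted on the spend side.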
A tool is not for me if: it does not resolve a problem I am trying to solve, is a pain to work with, or is just too expensive to fit into whatever budget the group is working with.
I usually make a matrix in a spreadsheet with the dimensions I care about. For all evaluations, there are dimensions like "experience working with", "difficulty to introduce", and "learning curve".
There are also purpose-specific dimensions; for example, if I am looking at a static code analysis tool, I will have a dimension for whether the tool supports the programming languages currently used within the team/group/company.
A tool makes it past the evaluation stage if, on paper, it meets all our needs without a lot of known issues introducing it into our environment. If no tool meets the baseline needs, I'll look at the tools which are partial fits or might have some introduction difficulties and see in a trial whether they are worth it... or build a proof of concept for rolling our own tool.
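If it helps to picture the matrix, here is a rough sketch of how the weighted scoring could work. Normally this lives in a spreadsheet; the tools, dimensions, weights, and 1-5 scores below are all hypothetical.

```python
# Toy version of the evaluation matrix; tools, dimensions, weights, and scores are made up.

DIMENSIONS = {
    "experience working with": 2,   # weight: how much this dimension matters to us
    "difficulty to introduce": 3,
    "learning curve": 2,
    "supports our languages": 5,    # purpose-specific (static analysis example)
}

CANDIDATES = {
    "Tool A": {"experience working with": 4, "difficulty to introduce": 3,
               "learning curve": 4, "supports our languages": 5},
    "Tool B": {"experience working with": 2, "difficulty to introduce": 4,
               "learning curve": 3, "supports our languages": 2},
}

def weighted_score(scores):
    total_weight = sum(DIMENSIONS.values())
    return sum(scores[dim] * weight for dim, weight in DIMENSIONS.items()) / total_weight

for name, scores in sorted(CANDIDATES.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f} / 5")
```

The total is just a comparison aid; a tool that misses a baseline need is out (or relegated to the "partial fit" pile) regardless of its score.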
I try to treat whichever documentation tool we use (e.g., Confluence, Notion) as my second brain. For every situation we are running evaluations for, there will be a space dedicated to it with the matrix for the tools visible, notes on the tools, and results. When tools move to trial or adoption, someone will make an announcement in something like an Engineering All Hands, in addition to things like Slack messages and word of mouth.
Successes with tools, e.g. the first time a junior developer used SonarQube and discovered the feedback on potential code smells, are featured whenever possible to generate excitement.
I usually prefer the idea that consistency is the default, but if there's a good reason to be different, then do that instead. Ultimately a team's day-to-day has to take priority when it comes to tooling.
Some exceptions to this would be tools like Pact, which only applies in the first place if your project meets specific requirements (integrating with another project within the company) and which is flexible enough (e.g., Pact's API) that an exception to using it would be a very extreme case.
For instance, if I am someplace which uses SonarQube by default, exceptions would be rare. However, you could have a circumstance where a team is working with an unsupported language and would have to write their own adapters to integrate with SonarQube, while their other choice is a great static analysis tool specific to that language and test framework. Rather than force them to write an adapter (unless they wanted to and had the time to!), I'd advise them to use the new tool until SonarQube has something for their language.
I don't.
Just kidding. As mentioned in a previous answer, I try to make whatever documentation tool we use (Confluence, Notion) my second brain for projects.
The current state of the quality strategy, which tools are being used for what purpose, etc., should all be in there and maintained as living documentation.
I will literally use this when interacting with folks around testing, and I strongly encourage others to raise issues/ideas/suggestions in this area (or to me directly and I'll record it) so we can also keep track of where the friction points are.
Even if I am the only one using this, so long as I am using it, it is useful.
The more critical the vendor is, the more you need things like support for when things go wrong, although ideally, after the initial setup and onboarding, you never have to speak to the vendor again beyond billing.
If a vendor becomes a blocker and does not give you a way forward, find a different vendor. Make sure whatever contract you have in place means you will not be giving money to a vendor who is not meeting your needs.
Most likely the stable but slow tool. However, if it is too slow for the purpose, I would choose neither tool; I would look at alternate ways to get what is needed, or see if there is a way to mitigate the need entirely. Ideally I act as though I have a time budget for tools (e.g., a commit-to-final-environment pipeline should not take more than 15 minutes total, so the time budget is based on that).
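As a rough sketch of what I mean by a time budget, here is a toy check against that 15-minute limit; the stage names and durations are hypothetical.

```python
# Toy time-budget check; stage names and durations are made up.

BUDGET_MINUTES = 15  # commit-to-final-environment budget from above

stage_minutes = {
    "build": 4.0,
    "unit tests": 3.5,
    "static analysis": 2.0,
    "throughput check": 6.5,   # the tool being considered
}

total = sum(stage_minutes.values())
if total > BUDGET_MINUTES:
    print(f"Pipeline takes {total:.1f} min, {total - BUDGET_MINUTES:.1f} min over budget; "
          "the new tool does not fit as-is.")
else:
    print(f"Pipeline takes {total:.1f} min, within the {BUDGET_MINUTES} min budget.")
```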
If the tool is flaky, it won't be trusted, so it would not be a useful tool outside of potentially exploratory testing. Introducing a flaky tool is rarely worth it.
If the tool provides slow feedback, the question is whether it is still useful. For example, if I'm looking at a performance tool for weekly load testing, it is usually okay for that tool to take up to 12 hours. If I'm looking at adding throughput testing to all commits around a performance-critical piece, however, it needs to be fast enough that it does not slow the commit pipeline down to "too slow" levels.