This is my overview of the relevant questions, most of them from this thread and some added by me. First, a compact version. Note that I use the PUPPET model as a backbone to provide the structure, even though not all of its letters seem relevant for this issue. There is more to automation in testing than the tool, after all…
A topic that is very important IMHO is the contribution to value. Since what counts as the value of automation differs from one context to the next, that value should also shape how we evaluate tools.
A good start.
I’d add to costs: execution (you might pay per number of requests), hosting (an environment, server, database), reporting/dashboards, complementary products (devices, screen compare, requirements traceability), and helpers’ cost (part of the automation might be handled by others, like BAs). A rough yearly tally sketch follows after these lists.
In the ‘product - future proofing’ item, how about the reverse: short-term use/efficiency?
Note: I was checking some online statistics on software development, and it seems the average time to deliver a project is 3 months. What tools can one make use of in that time, or shorter?
But then these came to mind:
when do we decide whether it’s good at something, and at what point in time?
how do the business domain and regulations impact the choice?
how about the software/device type,
or the technological domain?
what type of risk does it try to handle? I was reading some software statistics saying that 80-90% of an application is rarely or never seen by a user
adaptability and monitoring (rewriting/shifting focus)
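To make the cost side more concrete, here is a minimal sketch of a yearly cost-of-ownership tally for a candidate tool. It is a made-up example; every category and figure below is a hypothetical placeholder, not a number from this thread.

```python
# Minimal sketch of a yearly cost-of-ownership tally for a candidate tool.
# All figures are hypothetical placeholders; replace them with numbers
# from your own context.

yearly_costs = {
    "licences": 12 * 300,            # monthly seat fees
    "execution": 12 * 150,           # pay-per-request / per-run charges
    "hosting": 12 * 200,             # environment, server, database
    "reporting_dashboards": 500,     # add-ons or a separate reporting product
    "complementary_products": 1500,  # devices, screen compare, traceability
    "helpers": 40 * 50,              # hours spent by others (e.g. BAs) * rate
}

total = sum(yearly_costs.values())
for category, cost in sorted(yearly_costs.items(), key=lambda kv: -kv[1]):
    print(f"{category:24s} {cost:8.0f}")
print(f"{'total':24s} {total:8.0f}")
```

A tally like this also puts the 3-month figure above into perspective: a tool whose setup and running costs only pay off after a year may never pay off on a short project.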
My background is less in tooling and more in coding automated tests, but some additional considerations could be the following (a rough scoring sketch follows after the list):
Company-wide Scope.
Is this tool used by other teams?
Is it of interest to other teams?
This can be a major factor for learning, support and, very importantly, for reducing costs.
Future-Proofing.
Will this tool still be useful in 1-2 years in a rapidly evolving industry?
Will there be any hidden costs?
Paid-for Tool Contract Type.
Is the contract flexible? Are there penalties?
Vendor Track Record.
Does the Vendor have a proven track record with other tools and products?
Are they looking to build up a high volume of users and then sell on to a bigger company, which would mean contract and pricing changes?
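Pulling these criteria together with the earlier points about value and cost, here is a minimal, illustrative sketch of a weighted scoring matrix for comparing candidate tools. The criteria names, weights, tools and scores are all made-up examples, not recommendations.

```python
# Illustrative weighted scoring of candidate tools against criteria like the
# ones discussed in this thread. Weights and 0-5 scores are made-up examples.

weights = {
    "contribution_to_value": 5,
    "company_wide_scope": 3,
    "future_proofing": 3,
    "contract_flexibility": 2,
    "vendor_track_record": 2,
    "cost_of_ownership": 4,
}

# Hypothetical candidate tools scored per criterion (0 = poor, 5 = excellent).
candidates = {
    "Tool A": {"contribution_to_value": 4, "company_wide_scope": 2,
               "future_proofing": 3, "contract_flexibility": 4,
               "vendor_track_record": 3, "cost_of_ownership": 2},
    "Tool B": {"contribution_to_value": 3, "company_wide_scope": 4,
               "future_proofing": 4, "contract_flexibility": 2,
               "vendor_track_record": 4, "cost_of_ownership": 4},
}

def weighted_score(scores: dict) -> float:
    """Weighted average of per-criterion scores."""
    total_weight = sum(weights.values())
    return sum(weights[c] * scores[c] for c in weights) / total_weight

for name, scores in candidates.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```

The exact numbers matter less than the discussion they force: the weights are where the context-dependent ‘contribution to value’ point above becomes explicit.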
Great points and discussion. As someone who’s working at a company that develops tools, I want to add one more point: a good tool will help you not use it too much… not spend more time on it than on your actual work. Take a test management tool as an example. If the tester is working on/with the tool more than they work on the application under test, then something’s wrong.
Just one more to consider here.