Test.bash(); 2022 - Discussion: Managing Your Testing Toolbox

In this Panel Discussion, @callum is joined by three tool experts to discuss ways to better manage your toolbox. Some of the questions they’ll be answering:

  • How can you introduce new tools to your teams?

  • How do you decide on the best tools for your company?

  • How do you know you’re using the right tool?

  • Sarah Hutchins (@sarahproof) is passionate about doing things correctly for the right reasons and believes technology should never act as a band-aid for issues that lie elsewhere. She’s also passionate about robotics and automation, with the goal of using technology to make life easier.

  • Ibironke Yekinni (@ibironke) has over 5 years of diverse QA and testing experience. In addition, she has trained more than 512 individuals from zero knowledge of technology to becoming professional test engineers.

  • Trisha Chetani (@agarwalatrisha1212) is a software tester and automation enthusiast. Trisha has been helping teams follow testing processes that enable them to deliver high-quality software in a DevOps environment. She’s always enthusiastic about attending conferences and meet-ups for professional development, and she’s an active community member. Trisha loves the work but has begun to think more about the big picture, so she’s looking for opportunities to thrive in a management role.

We’ll use this Club thread to share resources mentioned during the session and answer any questions we don’t get to during the live session.


Questions Asked

@callum - How do you stay aware of tools?

@callum - How can you introduce new tools to your teams?

@tomhudson - When do you know it’s time to switch tools? What indicators do you look for to decide to make a change?

@simon_tomes - For tools that are paid-for, how do you convince the person with the budget to pay for it?

@callum - How do you balance free/open-source and commercial tools?

@jaswanth - What’s your strategy to trial a specific tool?

@tobiasm - What do you think is currently missing in your testing toolbox?

Jenna Maybury - Do you have issues recruiting depending on the tools you are using? Does that have any impact on changing / choosing tools?

@callum - How do you know you’re using the right tools?

@gurukiran - How do you find the ROI on tools?

@callum - How do you decide when a tool is not for you?

Hanna Johansson - You spoke about evaluation before introduction: what would be the best way to do that?

@jenbauer - How do you make your tools - and possibly the results - accessible across your team? Do you also try to make results accessible outside of the team?

Gary Hawkes - Under what circumstances do you need to maintain tool consistency across project teams, versus allowing teams to use different tools for the same purpose?

@callum - How do you manage your testing toolbox?

@gurukiran - For paid tools, what key takeaways about vendor management would you share?

@suriya - Question to the panel: would you choose a tool which is prone to flaky tests but faster in execution, or one which is non-flaky but slower in execution?


Long post ahead, maybe I should have broken this up by question :sweat_smile:

So far I have not run into this; however, a candidate who has experience with a tool we already use would look more appealing (assuming they are strong in the other things we are looking for).

When it comes to choosing tools, I try to steer away from tools which need extensive training and focus on tools where understanding the concept is enough (e.g. if someone can clearly explain performance testing and how to use it, I feel confident we can onboard them onto performance testing tools).

That does change slightly if we are talking about something like programming languages. I have been at companies which struggled to recruit due to the primary language in use there, and trying to keep things within that ecosystem for ease of cognition was a challenge.

I have an entire rant (I mean talk) about the Swiss Cheese model related to this :smiley:
Basically, if the purpose of the quality layer is being met, the tool isn’t introducing enough friction to be a problem, and the tool is within budget, it’s the right tool!

I usually don’t have hard numbers on this. You can do an ROI comparison between an old tool and a new tool using things like the number of hours saved in maintenance, test creation, or test run time.

If a tool is solving a specific problem where the problem itself has a cost associated with it (e.g. preventing a data leak, or protecting against an issue which has happened before), then you can show ROI by comparing how much you spend on the tool (its price plus the hourly cost of running it) against the cost if the issue happened again.

If a tool is speeding things up for development, you can build an ROI case from that, but you need to keep the initial onboarding cost in mind for this one.
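
As a rough illustration of what that calculation can look like (the figures below are invented for illustration, not anything discussed in the session), it really is just hours saved times hourly cost versus what the tool costs to buy and onboard:

```python
# Rough ROI sketch for a tool that speeds up development.
# All figures are illustrative assumptions, not real data.

hourly_rate = 60              # fully loaded cost per engineer hour
hours_saved_per_month = 40    # e.g. less test maintenance and faster runs

tool_cost_per_month = 500     # licence cost
onboarding_hours = 80         # one-off cost to learn and integrate the tool

monthly_saving = hours_saved_per_month * hourly_rate              # 2400
monthly_net = monthly_saving - tool_cost_per_month                # 1900
payback_months = (onboarding_hours * hourly_rate) / monthly_net   # ~2.5

print(f"Net saving per month: {monthly_net}")
print(f"Months to pay back onboarding: {payback_months:.1f}")
```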

A tool is not for me if: it does not resolve a problem I am trying to solve, is a pain to work with, or is just too expensive to fit into whatever budget the group is working with.

I usually make a matrix in a spreadsheet with the dimensions I care about. For all evaluations, there are dimensions like “experience working with”, “difficulty to introduce”, and “learning curve”.
There are also purpose-specific dimensions; for example, if I am looking at a static code analysis tool, I will have a dimension for whether the tool supports the programming languages currently used within the team/group/company. (There’s a small sketch of such a matrix after the next paragraph.)

A tool makes it past the evaluation stage if, on paper, it meets all our needs without many known issues in introducing it into our environment. If no tool meets the baseline needs, I’ll look at the tools which are partial fits or might have some introduction difficulties and see in a trial whether they are worth it, or build a proof of concept for rolling our own tool.
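
As a minimal sketch of what that matrix can look like (the tool names, dimensions, weights, and scores here are all made up for illustration, not recommendations):

```python
# Minimal sketch of a tool evaluation matrix, done in code instead of a spreadsheet.
# Tool names, dimensions, weights, and scores are all illustrative assumptions.

weights = {
    "experience working with": 2,
    "difficulty to introduce": 3,   # higher score = easier to introduce
    "learning curve": 2,            # higher score = gentler curve
    "supports our languages": 3,    # purpose-specific dimension
}

scores = {
    "Tool A": {"experience working with": 4, "difficulty to introduce": 2,
               "learning curve": 3, "supports our languages": 5},
    "Tool B": {"experience working with": 2, "difficulty to introduce": 4,
               "learning curve": 4, "supports our languages": 3},
}

for tool, dims in scores.items():
    total = sum(weights[d] * dims[d] for d in weights)
    print(f"{tool}: {total}")
```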

I try to treat whichever documentation tool we use (e.g. Confluence, Notion) as my second brain. For every situation we are running evaluations for, there will be a space dedicated to it, with the matrix for the tools visible, notes on the tools, and results. When tools move to trial or adoption, someone will make an announcement in something like an Engineering All Hands, in addition to things like Slack messages and word of mouth.

Successes with tools, e.g. the first time a junior developer used SonarQube and discovered the feedback it gives on potential code smells, are featured whenever possible to generate excitement.

I usually prefer the idea that consistency is the default, but if there’s a good reason to be different then do that instead. Ultimately a team’s day-to-day has to take priority when it comes to tooling.

Some exceptions to this would be tools like Pact, which only apply in the first place if your project meets specific requirements (integrating with another project within the company) and which are flexible enough (e.g. Pact’s API) that an exception to using them would be a very extreme case.

For instance, if I am someplace which uses SonarQube by default, exceptions would be rare. However, you could have a circumstance where a team is working with an unsupported language and would have to write their own adapters to integrate with SonarQube. Their other choice is a great static analysis tool which is specific to their language and test framework. Rather than force them to write an adapter (unless they wanted to and had the time to!), I’d advise them to use the other tool until SonarQube has something.

I don’t :upside_down_face:
Just kidding. As mentioned in a previous answer, I try to make whichever documentation tool we use (Confluence, Notion) my second brain for projects.
The current state of the quality strategy, which tools are being used for what purpose, etc. should all be in there and maintained as living documentation.
I will literally use this when interacting with folks about testing, and I strongly encourage others to raise issues/ideas/suggestions in this area (or to me directly and I’ll record them) so we can also keep track of where the friction points are.
Even if I am the only one using it, so long as I am using it, it is useful.

The more critical the vendor is, the more you need things like support for when things go wrong, although ideally, after the initial setup and onboarding, you never have to speak to the vendor again beyond billing.

If a vendor becomes a blocker and does not give you a way forward, find a different vendor. Make sure whatever contract you have in place means you will not be giving money to a vendor who is not meeting your needs.

Most likely the stable but slow tool. However, if it is too slow for the purpose, I would choose neither tool and would look at alternative ways to get what is needed, or at whether there is a way to mitigate the need entirely. Ideally I act as though I have a time budget for tools (e.g. a commit-to-final-environment pipeline should not take more than 15 minutes total, so the time budget is based on that). There’s a rough sketch of that check at the end of this answer.

If the tool is flaky, it won’t be trusted, so it would not be a useful tool outside of potentially exploratory testing. Introducing a flaky tool is rarely worth it.

If the tool provides slow feedback, the question is whether it is still useful. For example, if I’m looking at a performance tool for doing weekly load testing, it is usually okay for that tool to take up to 12 hours. If I’m looking at adding throughput testing to all commits around a performance-critical piece, however, that needs to be fast enough that it does not slow the commit pipeline down to “too slow” levels.
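
To show what I mean by a time budget (the stage names and durations below are invented assumptions, not anything from the session), the check is just simple arithmetic against the pipeline target:

```python
# Sketch of the "time budget" idea: does a candidate tool fit in the pipeline?
# Stage names and durations are illustrative assumptions.

pipeline_budget_minutes = 15                                  # commit-to-final-environment target
current_stages = {"build": 4, "unit tests": 5, "deploy": 2}   # existing spend

candidate_tool_minutes = 6                                    # measured run time of the new tool

remaining = pipeline_budget_minutes - sum(current_stages.values())
if candidate_tool_minutes <= remaining:
    print(f"Fits, with {remaining - candidate_tool_minutes} minutes of budget to spare")
else:
    print(f"Over budget by {candidate_tool_minutes - remaining} minutes; "
          "mitigate, run it out of band, or look at alternatives")
```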
