What sort of questions would you be interested in having automation help you answer?

Some of my questions:

  • Can I get easier access to data - for reading, manipulation, analysis, experiments, finding gaps?
  • Can I get some logging, please? And can I get a system on top of that logging, with filtering capabilities, that I could use to filter logs and aggregate data (e.g. Grafana/Graylog)?
  • How many inaccessible URLs does the site/app have? (e.g. Xenu's Link Sleuth)
  • Can it help me investigate faster and pinpoint a bug?
    A recent bug investigation, for example, took me through Burp proxy, browser dev tools, git repository search, code inspection, SOAP queries, XML response investigation, and a custom database tool. It also involved sense-making, critical thinking, lateral thinking, systems thinking, observation, experimentation, questioning, inference, etc., for which I don't expect a tool to exist.
  • Can it help me simulate some particular product state without having access to servers or needing to modify and deploy packages with custom changes?
  • Can it help me check the availability of all the products, and their readiness for purchase, on a web app, going through millions of variations of just the basic variables?
  • What build version of each app/subsystem is deployed in each environment? Can it spin up or refresh test environments quickly after each repository commit? I'm doing this myself now: about 20-30 builds/deployments each week.
  • An important one: can the automation itself stay out of my way, and other people's way, while we do some great, fast, efficient testing?
  • Could it scan for possible accessibility issues? How about UI inconsistencies?
    Some automation might take too much time to build, maintain, and use, while the benefits are too low.
    I've met several people in the position of 'automation tester' or 'software tester' who enjoyed writing code and automating simple checks, building automation frameworks, and maintaining and improving them. So much so that their testing was biased towards validation.
    Left alone, they were 'helping' to release products that passed their checks but introduced dozens of unknown problems to the users: long-undetected unavailability of platform features, an increase in customer calls, a spike in crashes, and product errors.
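To make the broken-URL question above concrete, here is a minimal sketch of what a tool like Xenu's Link Sleuth does under the hood, using only the Python standard library. The URL-extraction and status-checking split is my own structuring, not any particular tool's design:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkCollector(HTMLParser):
    """Collects href/src attributes from a page, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(urljoin(self.base_url, value))


def extract_links(html, base_url):
    """Returns every link found in the HTML, as absolute URLs."""
    collector = LinkCollector(base_url)
    collector.feed(html)
    return collector.links


def find_broken(urls, timeout=5):
    """Returns (url, status-or-error) for every URL that fails to answer cleanly."""
    import urllib.error
    import urllib.request

    broken = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status >= 400:
                    broken.append((url, resp.status))
        except (urllib.error.URLError, ValueError) as exc:
            broken.append((url, str(exc)))
    return broken
```

Feeding each discovered page's links back through `extract_links` turns this into a crawler; a real tool would also respect robots.txt and rate-limit itself.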
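For the millions-of-variations question, the usual trick is to enumerate the combinations lazily instead of materialising them. A sketch, where the variable names (`size`, `colour`) are hypothetical stand-ins for whatever the product's basic variables are:

```python
from itertools import product


def iter_variations(variables):
    """Lazily enumerates every combination of the basic variables,
    so that millions of variations never sit in memory at once.

    `variables` maps a variable name to its possible values, e.g.
    {"size": ["S", "M"], "colour": ["red", "blue"]}.
    """
    names = list(variables)
    for combo in product(*(variables[name] for name in names)):
        yield dict(zip(names, combo))
```

Each yielded dict describes one product variation to check; `itertools.islice` over the generator gives a cheap sample when running all of them is too slow.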
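The which-build-is-where question can often be answered by polling a version endpoint per environment and diffing the answers. A sketch, assuming each service exposes a JSON `/version` endpoint; the URLs and the response shape are invented for illustration:

```python
import json
import urllib.request

# Hypothetical version endpoints -- substitute whatever your services expose.
ENVIRONMENTS = {
    "dev": "https://dev.example.test/version",
    "staging": "https://staging.example.test/version",
    "prod": "https://prod.example.test/version",
}


def fetch_versions(environments=ENVIRONMENTS, timeout=5):
    """Asks each environment which builds it is running (JSON response assumed)."""
    versions = {}
    for env, url in environments.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                # Assumed shape: {"app": "1.4.2", "search": "0.9.1", ...}
                versions[env] = json.load(resp)
        except OSError as exc:
            versions[env] = {"error": str(exc)}
    return versions


def version_drift(versions):
    """Returns {subsystem: {env: version}} for subsystems that differ across environments."""
    subsystems = {}
    for env, apps in versions.items():
        for app, ver in apps.items():
            subsystems.setdefault(app, {})[env] = ver
    return {app: envs for app, envs in subsystems.items()
            if len(set(envs.values())) > 1}
```

Running `version_drift(fetch_versions())` on a schedule would replace the manual check, leaving only the genuinely interesting part: deciding what a mismatch means.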
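As for scanning for accessibility issues: a scanner can only catch the mechanical subset, but even that subset is cheap to automate. A minimal sketch flagging images without alt text, one of the easiest checks to script:

```python
from html.parser import HTMLParser


class AltTextAuditor(HTMLParser):
    """Flags <img> tags with a missing or empty alt attribute -- one cheap,
    automatable slice of an accessibility review, not a substitute for one."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self._img_count = 0

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        self._img_count += 1
        attr_map = dict(attrs)
        alt = attr_map.get("alt")
        if not alt or not alt.strip():
            src = attr_map.get("src", "(no src)")
            self.issues.append(f"img #{self._img_count} ({src}): missing or empty alt text")


def audit_alt_text(html):
    """Returns a list of human-readable findings for the given HTML."""
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.issues
```

The same parser skeleton extends to other mechanical checks (form inputs without labels, heading levels that skip); contrast ratios and keyboard traps need a rendered page and a heavier tool.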