My testing doesn’t start when the coding starts, and it doesn’t end when the product is released.
What I do:
- I help with the release by testing the release process: I review code, branches, the package, the environment deployment, dependencies and configurations, timing, coordination across departments and teams, and what’s supposed to be released and what isn’t; at any point I might step in and inform the release manager or product manager about a possible release problem;
- I am connected to all the user feedback (internal and external); I go through it all and investigate how users use the application and the problems they might have understanding it, controlling it, or getting unblocked. I test users’ assumptions and behaviors. I give users feedback about the current state, known problems, solutions that may be coming soon, etc. I might inform stakeholders of possible problems users could face, and file a bug which I then advocate for fixing;
- I am connected to analytics, so I dig through data to find user types, platforms, behaviors, and drop-off points, and to check performance or availability; sometimes I need a tool expert to help, and together we drive the ideas through the data and the tool until we find possible problems or improvements;
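As a rough illustration of the drop-off analysis I mean, here is a minimal Python sketch. The event shape and the step names are hypothetical, not any actual analytics schema:

```python
def funnel_dropoff(events, steps):
    """Count distinct users reaching each funnel step, to locate drop-off points.

    events: iterable of (user_id, step_name) pairs (hypothetical shape)
    steps:  ordered list of funnel step names
    """
    reached = {step: set() for step in steps}
    for user, step in events:
        if step in reached:
            reached[step].add(user)
    return [(step, len(reached[step])) for step in steps]

# Tiny made-up event stream: 3 users visit, 2 add to cart, 1 checks out
events = [
    ("u1", "visit"), ("u2", "visit"), ("u3", "visit"),
    ("u1", "cart"), ("u2", "cart"),
    ("u1", "checkout"),
]
result = funnel_dropoff(events, ["visit", "cart", "checkout"])
print(result)  # → [('visit', 3), ('cart', 2), ('checkout', 1)]
```

In a real analytics tool this counting is built in; the point is only that the raw events are enough to see where users drop, once you ask the question.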
- I am connected to production logs, so I build statistical data and do analysis: finding possible problems and their causes, recreating scenarios, and digging deep to pinpoint issues. I report problems to various departments within the company.
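The statistical side of log analysis can be as simple as counting recurring errors per service. A minimal sketch, assuming a hypothetical `timestamp LEVEL Service: message` log format (real formats vary):

```python
import re
from collections import Counter

# Hypothetical log line: "2024-05-01 12:00:00 ERROR OrderService: timeout calling payments"
LINE = re.compile(r"^\S+ \S+ (?P<level>\w+) (?P<service>\w+): (?P<msg>.*)$")

def error_stats(lines):
    """Count identical ERROR messages per service, most frequent first."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[(m.group("service"), m.group("msg"))] += 1
    return counts.most_common()

sample = [
    "2024-05-01 12:00:00 ERROR OrderService: timeout calling payments",
    "2024-05-01 12:00:05 INFO OrderService: order created",
    "2024-05-01 12:01:00 ERROR OrderService: timeout calling payments",
]
print(error_stats(sample))  # → [(('OrderService', 'timeout calling payments'), 2)]
```

A ranked list like this is usually the starting point: the top entries tell you which scenarios are worth recreating and digging into.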
- I test order problems: going through various log streams, I piece the scenarios together and we optimize the system. We’ve been fixing things that were migrated over from an older system, across multiple departments and various processes with gaps.
- I am connected to the company’s account with the external payment system provider. I’ve been analyzing its logs, financial statements, data transfers between that system and our app, data encoding, e-mails, configurations, and template setup, and I’ve noticed a handful of problems here. From time to time I also track the payment system’s updates and the changes they require in our platform.
- I also test the external systems and APIs integrated into our platform, so from time to time I do some risk-based testing there and report the problems to the provider so they can be fixed.
- I test systems, and integrations between internal systems, that people don’t care much about or that have no responsible person handling them. If our system is plugged into another system and inherits a problem from that external system, it’s a problem for our system.
- I do data analysis, sometimes using scripting and crawling data through APIs. E.g. geolocations of certain points that turn out to be set more than 50 km away from their cities/regions. That data is handled by two or three other departments; in theory they are in charge of it, but problems occur in the product because of setup issues they’re not aware of.
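For the geolocation example, the check can be sketched in a few lines of Python. The haversine formula is standard; the data shape, the sample coordinates, and the 50 km threshold (taken from the example above) are illustrative assumptions:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ≈ 6371 km

def suspicious_points(points, city_center, max_km=50):
    """Flag points assigned to a city but lying more than max_km away from it."""
    return [p for p in points if haversine_km(p["lat"], p["lon"], *city_center) > max_km]

# Hypothetical data: two points both assigned to Paris; one is actually in Lyon
paris = (48.8566, 2.3522)
points = [
    {"lat": 45.7640, "lon": 4.8357},  # Lyon, ~390 km away → flagged
    {"lat": 48.8600, "lon": 2.3500},  # central Paris → fine
]
print(suspicious_points(points, paris))
```

Run against a crawl of the real data, a list like this is exactly the kind of concrete evidence that makes the owning departments aware of their setup issues.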
- I investigate failed web and API calls tracked in the monitoring tool across several products: aggregating results, finding patterns, rebuilding scenarios and calls, and generalizing the impact of the repeated failures on a particular feature - so that I can get the attention of stakeholders, build cases, and contact managers with whom I might work on a small improvement/testing phase.
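The aggregation step above can be sketched simply: group failures by endpoint and status, and surface the repeat offenders worth escalating. The record shape and the threshold are hypothetical, not any particular monitoring tool’s export format:

```python
from collections import defaultdict

def failure_patterns(calls, threshold=3):
    """Group failed calls (HTTP status >= 400) by (endpoint, status) and
    return only the buckets that recur at least `threshold` times."""
    buckets = defaultdict(int)
    for c in calls:
        if c["status"] >= 400:
            buckets[(c["endpoint"], c["status"])] += 1
    return {k: n for k, n in buckets.items() if n >= threshold}

calls = [
    {"endpoint": "/api/orders", "status": 500},
    {"endpoint": "/api/orders", "status": 500},
    {"endpoint": "/api/orders", "status": 500},
    {"endpoint": "/api/login", "status": 401},   # below threshold, ignored
    {"endpoint": "/api/orders", "status": 200},  # success, ignored
]
print(failure_patterns(calls))  # → {('/api/orders', 500): 3}
```

A bucket that keeps recurring is the pattern: it points at the feature to rebuild the scenario against, and gives a number to put in front of stakeholders.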
- I go into various storage tools, query or set up data, and review how things could go wrong.
- I sync with Claims/Sales/Marketing/Support/Call-Center on users’ behaviors, the problems they encounter, and their repetitive work. I constantly find ways to improve the product by fixing some non-prioritized issue, or by adding a small feature that benefits others.
- I dig into known and unsolved/“unsolvable” problems. It’s strange, but sometimes a problem deemed unsolvable has an easy solution if you actually understand the business, the product, the intent, and what’s really happening. I review those problems, rebuild or present them in a different way, and find an easier problem whose solution fixes the initial issue.
- I review static content set up for internationalization, localization, and translation. At some point each release had 1-2 problems caused by this process, so I found the gaps and tested both the process and the product over the next 1-2 months: 2-3 Angular bugs, renaming of i18n tags by the devs, confusion about what is or isn’t i18n, misconstructed i18n, pluralization, non-fitting content, etc. Once the issues were in front of us, we knew how to deal with each of them. I keep overseeing and testing that area, as people (IT & Business) still make mistakes.
- And more… depending on the products, the team, availability, risks, business/domain knowledge, and technical access.
Note that all of this starts as exploration or risk analysis, then moves into investigations, learning systems, and connecting with people and tools - which leads to finding possible problems and/or improvements to product and experience quality.
It is all testing, or testing/quality related, but on very different levels than most testers work at: process-related, code, business, systems, gaps between departments, unreported user trouble, unknown/undetected production problems, rare cases, unknown product behaviors, missing features, misused features, tacit and accepted problems, etc.