What are the ad-hoc tasks that a tester usually performs?

Hello QA Professionals,

I wanted to start a discussion to learn more about the additional tasks that a QA individual is supposed to perform.

As a QA individual, apart from our regular tasks (test case design, execution, reporting, automation, etc.), I also write feature descriptions and use cases, and collaborate with BAs and product owners to outline the feature's purpose and acceptance criteria.

Beyond testing, what additional tasks do you do in your organization? Your input would be highly appreciated.


Welcome!

Interestingly, or perhaps otherwise, I don’t consider test case design, execution, automation and so on to be my regular tasks as a tester. Reporting, definitely.

My job is to fulfil my mission: to put relevant, important, well-formatted and well-communicated information into the possession of my test clients. To communicate the state of the software, as I see it, while empathising with the perspectives of my test clients, often particularly end users. And I do whatever I need to do to achieve that mission. That will change based on a myriad of factors concerning, among many other things, the people I work with/for, the nature and intent of the software, the marketplace, how we sell our software, competing products, where I sit in the building, my relationship to other departments and so much more. That’s what I feel makes me valuable. That I can be put into a situation, learn about it, and find the best way to achieve my mission.

So I need to know what the business wants from me. What do the developers want? How do they want me to work with them? How should I format my reports? What does the designer intend? What does operations need to know? What would make support more affordable? What kinds of users are they and what do they prioritise? How do people pay for our software? Why do they stop paying for it?

For example, if people pay for the software via a periodic contract, they will probably stop paying when the contract needs to be renewed. That’s a great time for the software to not go wrong. If they pay with their data, then we need to provide engagement that supports them providing that data. If they pay with their attention, we need to keep it. Each will determine what is problematic, wrong or missing in our software in different ways.

The usual way I structure actual testing is in sessions, as I find them very flexible and open-ended, and I can adjust my notes and records to the needs of the session and how formalised a company wants to be about that process.

I will attend design meetings and offer early feedback on testing problems and challenges, or on considerations the designers may not have taken into account, based on my own experience and knowledge. This way I can let them know about costs, issues, faults and needs very early in the process, which may affect the design.

I collaborate with developers to build a good working relationship. I will try to support them and make them look good.

I will talk to other departments and hear their problems and learn how I might change processes to ease their burdens and reduce friction and complexity where bugs may occur.

I make myself an invaluable resource at kick-off meetings, giving feedback, asking for things I need, getting clarification, pointing out testing difficulties, thinking up approaches and techniques I might employ, offering assistance and help to others.

I bring people drinks and bring in snacks, for the morale of my team and to establish a better relationship with those people, and to show that I am in service of their success, not a barrier or obstacle to overcome to get the code into production.

I read our website and promotional materials to see what users are being told our product can do and what it is for, to see if our product can do those things and achieve those aims. I read competitors’ websites, figure out where our USP is, and figure out what is appealing or useful in their software, so I can see where we are placed in the market and better understand our user base.

I examine our support tickets to see where problems are common and what user complaints actually are.

I run workshops and book clubs to help train other testers, and learn from them and from the experience of teaching.

I write about testing to see if I can formalise what I know and see if it can withstand self-critique, or critique from others.

I do debriefs with others, to check my testing for missing concepts and ideas and fill in gaps in my knowledge, or do that for others.

And as part of all that, if I need to formalise part of my testing, or use tools like automation, I’m free to make that decision, informed by the situation and the needs of my testing, keeping it light, flexible, easily and cheaply maintainable, fast and adaptable.
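To give a sense of what I mean by light and cheap, here is a minimal sketch of the kind of check I might automate, using Python with pytest and requests; the URL, endpoint and assertions are purely illustrative placeholders, not anything from a real product:

```python
# A minimal sketch, assuming a hypothetical staging environment reachable
# over HTTPS; the URL and expected page content are illustrative only.
import requests

BASE_URL = "https://staging.example.com"  # placeholder environment


def test_login_page_is_reachable():
    """A fast, cheap smoke check: the login page answers and looks like itself."""
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
    assert "password" in response.text.lower()
```

Something that small costs almost nothing to maintain, and I can throw it away or grow it as the situation demands.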

So that’s a few of the things I’ll do, beyond the actual performance of testing in my sessions, all with the aim to improve that testing and beyond. Hopefully some of that is helpful!


Hello All,

Apart from the usual QA tasks like test case design and execution, I also document test strategies, join grooming sessions, and help define story points. I like getting involved early in requirements to spot gaps, do some risk analysis, and suggest ways to mitigate the risks.

I also assist in go/no-go decisions, collaborate with other teams to prepare datasets (for their own tests or for integration tests), suggest process improvements, and share tips or tools with BAs (to help validate the acceptance criteria) and with support teams (to help with issue analysis) so everyone’s work runs smoothly. In return, I get tips from developers for quick testing or for automating some tests.
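For the dataset part, it often only takes a small script. Here is a rough sketch in Python of the sort of thing I mean; the field names, values and file name are placeholders rather than a real schema:

```python
# A rough sketch, not an actual schema: generate a small synthetic customer
# dataset that another team could load into their integration test environment.
import csv
import random

FIELDS = ["customer_id", "plan", "monthly_spend"]


def generate_rows(count=50):
    # Purely synthetic values, so no production data ever leaves its home.
    for i in range(1, count + 1):
        yield {
            "customer_id": f"CUST-{i:04d}",
            "plan": random.choice(["basic", "pro", "enterprise"]),
            "monthly_spend": round(random.uniform(10, 500), 2),
        }


if __name__ == "__main__":
    with open("integration_test_customers.csv", "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(generate_rows())
```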


@rumana,

Good question!

If I reflect upon my own experience, I would say a tester’s ad hoc activities tend to go well beyond regular test design and execution. Some of the typical areas in which I usually find myself active are:

Exploratory testing: unscripted sessions that identify edge cases formal test cases may have missed.

Getting involved in requirements discussions very early: identifying acceptance criteria, spotting gaps in them, and suggesting improvements to requirements even before development starts.

Checking from the users’ viewpoint: walking through the product through the user’s lens and giving feedback on usability, accessibility, and the user experience.

Process improvements: suggesting improvements to QA processes, test coverage strategies, and alternative ways for the team to work together.

Knowledge sharing: helping junior testers, organizing internal knowledge-sharing sessions, or documenting best practices for the team’s reference.

Cross-functional collaboration: working with developers to debug critical issues, and with BAs/product owners to refine use cases.

Hence, a tester’s additional value does not lie solely in test case execution. Quite often, we serve as an interface between the business, development, and users.


Chris,

Thank you for writing up all these responsibilities in such detail; you’re truly doing a lot! These are insightful points, and I’m sure I’ll read them again, as they highlight the diverse roles a tester can take on. It does make me wonder: if a team has six members and one person is already contributing so broadly, how might the other team members participate or share responsibilities?


Thank you, Asmae,

Great to know the details of what you’re doing; it brings another question to mind. When it comes to defining story points and doing risk analysis, are those activities handled as part of sprint planning, or do you usually conduct them separately in between?

Thank you!

It’s great to know the specifics of what you’re doing; I can relate, as I often perform those tasks myself. I especially find exploratory testing valuable for understanding user flows and identifying the best course of action. I appreciate you sharing your thoughts and insights.