Collecting feedback on key QA problems to build useful QA tools for today

I work at a startup called Product Triangle, with the aim of making effective QA tools that solve modern QA problems. I’m currently conducting product research to collect feedback from QA people, to identify the key problems you are (or were) facing, so we can build a product that is genuinely useful for QA.

Just wondering if anyone here could spare 5 minutes to fill in this QA product survey? QA Product Survey

This community’s input and experience would be invaluable and much appreciated.

The challenge I’ve been facing is collecting valid feedback and data from the QA community, so that as a product owner I can align the solution with what QA actually needs.

What would be a good way to collect this kind of feedback and get QA people to take part in my survey? I’m not sure this is the right way of approaching it.

Thanks,

Chris


I’m curious how you established your questions; is there a potential bias in them?

One of the things I noticed was that the survey brought a lot of very old practices to mind, rather than matching the “modern” term used in the intro. Do you perhaps already have a target market in mind, say enterprise-level companies with older practices, for example?

@andrewkelly2555 Thanks for your feedback.

The challenge I’ve had is getting precise feedback. I might ask, “What aspect of QA causes frustration with your QA team/function?”
Then the answer would be “test scripts”, but that leaves me wondering which aspect of test scripts frustrates that user: designing (coming up with a good test), writing (the code), execution (running against the build), reporting (providing data useful to stakeholders), etc.

Another factor is that QA people come with different roles, levels of experience, and sectors. I want all QA to be able to understand and take part in this survey (common QA language), as I don’t want the tool to be just for managers (decision makers) but also for users (the people who use the tool). I also want to keep it simple and quick to fill in, so it doesn’t take too much of anyone’s time; this is a common pitfall for surveys, as people won’t fill in anything too long or complex, they will give up.

Yes, you’re right, there is bias here: precision was traded for making the survey quick to fill in and easy to understand for any QA. It’s a tricky balance; perhaps the answer options are too direct and focused, not broad enough?

Yes, this tool’s audience is big enterprise companies; it won’t be suited to startups (Lean methodology). I love working at startups, but as QA there you need to experiment and come up with your own ways of testing with limited resources. Every startup has its own problems and processes, and you need to build around those, which is part of the fun to figure out. Bigger companies, by contrast, are more repeatable and consistent with one another, with the common uses and downfalls of a typical STLC.

When I say “modern”, I mean using these practices to solve the problems of scope in today’s products and expectations. I’m curious: when you say “old practices”, what newer practices do you use? Do you not use these practices at your company?

Over the past 10 years, I’ve implemented and used BDD, which is still test management; there has also been trunk-based development and CI/CD for faster iteration, which fall under test management or quality control.

Sorry for the very long message of additional context. Again, thanks for your feedback, and I hope that answers your questions.


Thanks, Chris,

The big-enterprise target audience was the bit I was half expecting based on the questions, and that is a fairly big clarifier. I’ve worked in those models in the past, but not so much these days.

For a tool provider that is often a good thing. I don’t know whether it’s the size or the product complexity, but I tend to see these companies take a lot longer to evolve their practices, and at times it can seem like they are only just discovering “new” practices from the early 2000s.

For example, there is a World Quality Report published every year, generally with input only from these types of companies, so to others it often reads like a 20-year-old state of the industry apart from the buzzword of the year. That buzzword was automation for a decade or so, a decade after others were already using it, but this year it’s AI.

Now, with your survey targeting similar companies: in question five, for example, your first three options mention test cases. I’ve not created a test case in over a decade unless it’s being coded straight into a script, and many other companies are the same, so for some the solution is to stop doing test cases completely and do something more modern.
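To illustrate what “coded straight into a script” looks like in practice, here’s a minimal sketch in Python; the function and values are made up for illustration:

```python
# Hypothetical example: the "test case" lives only in code, not in a
# separately managed document. Run with `pytest`.
def apply_discount(price: float, percent: float) -> float:
    """Toy stand-in for real product code."""
    return round(price * (1 - percent / 100), 2)


def test_quarter_discount():
    # Intent, steps, and expected result are all in the script itself.
    assert apply_discount(100.0, 25) == 75.0


def test_no_discount_leaves_price_unchanged():
    assert apply_discount(19.99, 0) == 19.99
```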

The risk is that, rather than stopping a potentially poor and wasteful practice and improving the way they work, perhaps with something entirely different and better, someone sells them a tool that takes away some of the pain, time, cost, and effort of doing that poor practice instead, so they continue doing it, albeit more efficiently.

I’m not sure, though, how to get that balanced view. Some companies will not look beyond “takes away some of the pain, time, cost and effort”, and I can see the appeal of that from a tool seller’s perspective; perhaps the sellers then have the bias and vested interest I mentioned in customers not changing significantly.


@andrewkelly2555 Do you work on a small product with regular quick releases (SaaS, agile)? What is your QA/dev ratio? What is the size of your dev team? I’m assuming you use trunk-based development as well, right?

The reason I ask is that I’ve typically worked in gaming, with teams of 400 people making a computer game over 3 to 5 years (costing millions). So keeping tests only within code is very risky because of the sheer scope and complexity of the functionality you have to cover.

Perhaps it is a terminology thing. When I say “test case”, I mean any test that is measured as a test, which could be a test script; in your case, it is a test script within the code. Perhaps I should change the wording to say “test”?

It is also related to the risk and scope of the product and the scale of the dev/QA team. I don’t think test cases are waste; they add a lot of value because you want to ensure the intended behaviour is correct, and if you use BDD, having that behaviour agreed with all parties in the dev team makes things go more smoothly. The value is alignment on what is expected while addressing complexity. With my QA manager hat on, I just want to know what coverage has been executed and how much, the test quality (whether tests are valid or not), and the test results, so I can manage stakeholder confidence and manage risk.
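To make the BDD point concrete, here’s a minimal sketch in plain pytest with the Given/When/Then steps spelled out as comments (the feature and names are made up; a full toolchain such as behave or pytest-bdd would capture the same agreement in plain-language feature files):

```python
# Hypothetical BDD-style test: the Given/When/Then structure mirrors
# behaviour agreed with the whole dev team, so the script documents intent.
class Cart:
    """Toy stand-in for the system under test."""

    def __init__(self):
        self.items = []

    def add(self, item: str, price: float) -> None:
        self.items.append((item, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)


def test_buying_two_items_sums_the_total():
    # Given an empty cart
    cart = Cart()
    # When the player buys a sword and a shield
    cart.add("sword", 10.0)
    cart.add("shield", 5.0)
    # Then the total reflects both purchases
    assert cart.total() == 15.0
```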

Having test management is very useful for these use cases, especially when you have thousands of tests across a device matrix. But such tools are very limited and narrow in focus, forcing QA to work a certain way to get any use from them, which is the gap I would build a solution around.
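To give a rough sense of that scale, here’s a hypothetical sketch of how even a tiny device matrix multiplies test executions, using pytest parametrisation (all names are invented):

```python
# Hypothetical device matrix: 3 devices x 3 OS versions means every new
# scenario adds 9 executions to schedule, run, and report on.
import pytest

DEVICES = ["phone_a", "phone_b", "tablet_c"]
OS_VERSIONS = ["12", "13", "14"]


def launch_app(device: str, os_version: str) -> str:
    """Toy stand-in for launching the app on a device/OS pair."""
    return "ok"


@pytest.mark.parametrize("os_version", OS_VERSIONS)
@pytest.mark.parametrize("device", DEVICES)
def test_app_launches(device: str, os_version: str):
    # Placeholder check; a real suite would drive the app on each target.
    assert launch_app(device, os_version) == "ok"
```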

Yes, big orgs do tend to buy tools like TestRail, for example, which force a way of working; what I would like to attempt is to provide high-level solutions that a company or team can use in any way that fits their context for improvement. The tool provider’s role is to sell and encourage the intended change, to make things better for software development.

As for AI, I don’t think you will get AI that can write tests well without any human input within the next 10 to 20 years. No one wants to build Skynet or RoboCop, nor would I argue there is a business case yet, unless the AI could function as a CEO for a company. Imagine a Steve Jobs AI.

What I think AI would be useful for is the boring stuff, like correcting or updating tests, or collating data on testing so stakeholders can make better decisions about their product’s health. AI should be used to make QA jobs more effective rather than to replace them.

Sorry, I’m drifting off the original point and having a rant, but I’m sure you get what I’m trying to say.


Chris, it’s good to exchange views with people from other contexts.

I used to work on much larger teams and products; I was managing teams of around 30 testers at one point. It was quite test-case orientated. I did not know any better at the time, but I know from that experience how much waste we had, and I still see it often.

I had the horrors of WinRunner and QTP at the time, sold on the basis of “ease of use” (wahahaha) and of cost and effort savings derived from test case savings.

In time, though, I recognised we should just have burned most of those scripts. Some were of value and necessary, but most should have been burned; there are much better approaches to testing.

Now, QTP was expensive, but what it actually did was allow us to do the original wrong practices quicker. Lots of tools offer the same model now, and that’s the risk I’m flagging: building a tool to meet the market’s poor practices and let them do those poor practices quicker can, in my view, contribute to holding testing back.

My context is very different these days: one tester to, say, ten developers, and I am often on multiple products at the same time. I have five to test today, for example, ideally before the developers across the globe get into the office.

One big shift is that my testing has moved from the known risks, which can be scripted and are very well suited to developers, towards the unknowns, with discovery, investigation, and exploration of risk as the key elements of testing. Tools that help with those are more interesting.

A lot of tooling still tends to focus on the basics, the scripted side and the known things, which is where I see this older view of testing centred, rather than a more modern focus on the unknowns.

Target market is important; I’m just wary that when the old practices are the target, maybe, just maybe, it contributes to holding the industry back as a whole.