Does anyone out there only use "manual" testing?

Even though I can code, I also enjoy the more exploratory phases of testing.
For me, automation is only one tool in my box, because it works only for repetitive tasks.
Exploratory testing can be very demanding, e.g. “Is this a good UX?”


Yes, some people really don’t like coding, and such individuals either stay as “manual” testers, eventually becoming some kind of domain testing specialists, or go in a more managerial direction, such as lead or test manager. Sometimes they become Scrum Masters, Product Owners, Business Analysts, etc.

So in short, if someone doesn’t want to work on automation, there are other alternative directions in which a tester can grow.


I’ve observed a fair amount of “manual-tester-only shaming”, where if a software tester is not skilled at, or actively learning, how to stand up and maintain an automation framework, they are viewed as professionally “less than”. I don’t think this is cool, and honestly it’s not something that our testing community should ever do or condone.


Thanks for the comments so far, I suppose I should have given some examples myself when first posting.

By “manual testing” (and without getting into the debate about the soundness of that term) I meant the post-programming system testing/regression testing performed by someone typically in a tester role/job.

In my example, I work in a team where there are 9 different applications, including enterprise-level ERP systems that have been in place for 20+ years with huge, sprawling functionality.

In cases where the product might not be updated very frequently, we have opted to stay manual only, largely because we felt there wouldn’t be the same ROI in investing time in automation. If it is a smallish application with limited functionality that is updated once every 3 months or so (and on some occasions less than that), then we’re happy staying with manual testing/regression testing - i.e. the manual regression of 1-1.5 days every few months is more cost effective than developing an automated suite for it.

Now, prior to my joining the company a few years back there was zero automation, and I have introduced a certain degree of it for a number of the applications that are updated more frequently (and even then I mean every 4-6 weeks) - e.g. 90% of the regression testing performed by automated checks, and the final 10% as manual checklist/exploratory sessions.

In some applications the ROI of automation development may be considered low, but we’re still pursuing it as a side project (a percentage of time each week), more as a development tool for the testers themselves, so that if larger projects arise where it would be beneficial, we’re in a better position to support them.

For the larger systems that have had zero automation in 20 years, it’s actually quite difficult to make a case for why it is now suddenly necessary, and it’s quite a daunting task as well. Though we’ve made some headway into specific areas that change more frequently - around integration rather than the UI level (e.g. the integration of JSON messages into the system from other systems, which can drive a lot of the processes, but it is still a drop in the ocean against the full capability of the system).
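A minimal sketch of the kind of integration-level check described above - validating an inbound JSON message before it drives downstream processing, rather than testing through the UI. The field names and schema here are hypothetical, not from the actual ERP systems:

```python
# Hypothetical sketch: validate an inbound JSON message at the integration
# level before it is fed into downstream processes.
import json

# Assumed schema - field names and types are illustrative only.
REQUIRED_FIELDS = {"order_id": str, "customer_id": str, "quantity": int}

def validate_message(raw: str) -> list[str]:
    """Return a list of problems found in an inbound JSON message."""
    problems = []
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in msg:
            problems.append(f"missing field: {field}")
        elif not isinstance(msg[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

# A regression suite can then feed known-good and known-bad messages:
assert validate_message('{"order_id": "A1", "customer_id": "C9", "quantity": 2}') == []
assert validate_message('{"order_id": "A1"}') != []
```

Checks like this sit well below the UI, so they tend to be cheaper to build and maintain than UI automation against a 20-year-old system.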


I’ve been getting stuck in many conversations where people mention the ’80s/’90s term “manual”, because of the difference in mindset about what their focus is.

I’ve got to the point where, for the discussion to get on the same wavelength, we need to understand what the testing focus is.

If your testing is focused on verification, you will think about this completely differently than if your testing focus is discovery. Of course there may be a mix of the two, but it’s often worth considering what takes precedence in your team.

Where I am our testing focus is usually discovery and in most cases our developers own and do most of the automation coverage, perhaps with support from testers.

Here are some condensed examples of how that plays out, from a project I was on last year.

At the design stage we test ideas through collaborative sessions, discovering risks, solutions and ways to improve things before they are built.

Similarly with user stories we test perhaps by bouncing ideas off each other to discover better ways of doing things.

A/B testing, both of initial hypotheses and of real user data, logs and feedback, again to discover improvement opportunities. Here we can also leverage the discovered data to drive change and make informed decisions.

Testing sessions have a focus on discovering new risks or discovering useful information about known risks; this also includes real device coverage, personas and a whole plethora of general mobile and web risks.

I should also flag that this discovery bias really benefits from the use of technical tools.

So in this model our developers pretty much have the verification view covered and provide reasonable automation coverage. It’s very important, but the reason we choose a discovery model for testers is that we want to go beyond this: both to accelerate the team as a whole and to leverage testing to make better products and guide ongoing improvement in the product.

On some other projects, perhaps with an older product where we are late to the party and there is now a major regression risk, we sometimes switch to a verification focus, and there we do go heavy on automation.

Personally I prefer the former and often see the latter as a compensation model, but that’s a longer discussion.


I believe that I fall into this category.

One of the key reasons why we’ve primarily used manual testing is that automation was “too hard”. I think part of this stemmed from the fact that we’re using services and a monolith client built upon older technology and proprietary protocols. A lot of popular tooling and strategies don’t easily map. Whilst I’ve learnt about things that would make it easier to do more automated testing, honestly after 15 years or whatever, it ain’t happening now as we wind down development on the software.

We’ve had software test engineers embedded within the development teams running manual tests for a while now, although as we move to a new project I’m the last software tester left.

It may be worth calling out that I actually spent 5 years in the development team. I can code. However one of my pet peeves at the time was writing the unit and component tests (plus the documentation & doc reviews). It was a chore. Meanwhile there was our manual tester going out and finding the interesting bugs. So yeah, I switched back to test and no regrets!

My concern going forward is that I will have to end up doing mainly automated testing, which to me is a less appealing role than when I was a dev.


Interesting post - as a sub-branch to the discussion I am interested in what makes a programmer decide to become a “manual” tester?


Ignoring the old-school term “manual” again: I went from coding to testing. I had stopped enjoying coding; it had become a bit formulaic, and whilst my code logic is good, it took a while for me to think things through. There was also a lot of repetition, and the whole copy-and-paste-without-understanding thing bothered me. I did like the product creation part though - the “I made this and it’s awesome” sort of thing.

None of the above is why I moved to testing; it just flags that I was fairly average at coding and was not really enjoying it much. The exact same things apply to automation for me, except that it also lacks the “I made this and it’s awesome” factor - though I appreciate some automators still get that.

Now with testing I get a buzz similar to what I get when I go fishing somewhere new: I’ve got my tools, my knowledge, my research, my experience and a whole lot of judgement, but I just do not know what I am going to find/catch, if anything - and that jumping into the unknown is what I enjoy about testing.

I was in a lucky position to be able to choose what I enjoyed doing which is why I stuck with the transition from dev to testing.

Note that if it were the old-school, verification-focused ‘manual’ testing, I would not enjoy that at all, and I’d still be coding if that were what testing was all about.


Great reply, appreciated.


It’s brilliant to see someone recognising this. I’ve been a victim of this myself and it’s pretty awful.
It used to come only from devs (the “You’re just a tester, I’m a coding god” mantra), but I’m increasingly seeing it from other areas, including other testers now. The unceasing march to automate everything means that those of us who got into testing because we were bad at coding (me!) and were consistently told we didn’t need to code/denied training in those areas now increasingly find ourselves slipping further and further down the stack to cries of ‘Only manual’.
It’s pretty flipping disheartening to be honest.


In my lengthy experience, all those automation developers have to do some testing too.
The best time to fully automate checks and actions for running on a CI server is AFTER the current development cycle, when most bugs are found and fixed.

When you develop full automation during the current cycle, you always stumble over bugs. Some are hindrances which need to be removed, aka fixed. Some don’t bother you much, but you investigate and report some of them anyway.
This is all “manual” work.
A tester’s work, the same as you do.

Judging by their actions, I doubt there are many people doing pure automation in testing.

Finally, automation is just a tool for testers, not a replacement.
(The next rant would be that automation is only one way to use development skills in testing.)


Too hard? It feels like a number of people get walled into manual testing purely because the app that was built was never designed to be “testable”, and ergo not designed to be “automatable”.
Every app is “automatable”, even if all it has to do is blink an LED. But that’s easy for me to say, because to date I’ve never worked in a place where we rely on off-the-shelf test tools at all; we create our own. Yes, I have automation-tested on 2 non-trivial embedded systems.


Oh, I don’t disagree. It wasn’t viewed as technically impossible, but as the code wasn’t built for testability/automatability, it was hard. Not just that “we can’t use Selenium”, but also that when we tried hooking into the decoder pipeline to monitor data, it affected the end result. Similar, I guess, to when certain issues don’t show in a debugger because you’re slowing everything down. It would take a significant time investment to solve these problems. The UI-level testing was definitely more solvable but less of a concern.

Perhaps I’d have been better saying “too much effort”.

It’s worth noting that we did use automation, but that was more test automation than automated tests.


Interesting post - as a sub-branch to the discussion I am interested in what makes a programmer decide to become a “manual” tester?

I went to Uni to study programming and started my career in test as a foot in the door. Probably not an unusual thing to do. After a few years of bouncing between roles (test, games designer, test, engineering support), I finally landed in development. However I found myself more interested in talks on testing than coding. When covering for colleagues on holiday I found that buzz of trying to find the reproduction for a bug, of trying to explore the software and challenging myself to break it. Eventually I realised that I enjoyed this more than development where it felt like a bit of satisfaction writing code, wrapped in layers of chore.


Since we (at Testuff) have many testing groups as customers, I can say as an observation that most groups still do not use any real test automation, no matter the group size, industry or type of software being tested. From the data we have, most testing groups still have a long way to go before automation is part of their testing process.
From conversations with a few, I believe that the main reasons for that are a lack of knowledge of how to do it (code), a lack of budget to bring on testers who may be able to do it, and a kind of “fear” of “getting into it”, as one put it to me.
I think with time we’ll see more, and better, tools for performing automated testing which will help those who are still not doing it to get started. Tools that will “automate the automation” :slight_smile:


There are a lot of “manual only” testers out there - people like me, for instance, who “drifted” into testing not from coding but from other areas like support. Up to now you were fine: manual testers were needed, you didn’t need to learn to code (let’s be honest, you don’t learn that in a one-week course), you maybe got a certificate in testing and gathered experience and the correct mindset (which imho is what testing is about). These days every client is asking for “testers” who can code, mostly meaning that those testers are used as backup coders when pandemics hit or during vacation time. And the most important tests are tests at the unit level, meaning that every coder has to test his own code when a feature changes.
To me this could be (though it doesn’t have to be) the elimination of testing done by dedicated testers.


You are describing my testing career, @larsthomsen ! And you are so right.

And as I’ve pointed out elsewhere, unit testing and automated testing only confirm that the code as written is correct. They take no account of issues with UI design, implementation, what happens when you stand the app up in a live environment, and what happens when it interacts with other systems, live data or - the ultimate test - users, who will do things to the system that no one would ever anticipate, or even believe. Under those conditions, you need to know that the system will either cope with the unexpected or, if it fails, do so gracefully, without corrupting data or requiring complex and high-level (read: expensive) intervention.

It’s no good being able to say “We applied all the best unit and automated tests” if the system caused people to lose money, go to prison, or die.


… automated testing only confirms that the code as written is correct

And even that is not guaranteed, and it often becomes a maintenance hell.

The automation code is not always adapted to changes in the product; the failures it shows then reflect outdated, “hard-wired” expectations.

At worst you have a false-positive automation case: outdated, unreviewed automation code hides a real bug in the application.
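As a toy illustration of how a hard-wired expectation can hide a real bug - the function and values here are entirely hypothetical, not from any real product:

```python
# Toy example (hypothetical): a "hard-wired" expectation re-recorded from
# buggy output, so the automated check passes while the bug survives.

def format_price(amount: float) -> str:
    # Buggy implementation: truncates instead of rounding,
    # so 2.999 becomes "2.99" rather than "3.00".
    return f"{int(amount * 100) / 100:.2f}"

def check_format_price() -> None:
    # The expected value was blindly copied from the current (buggy) output,
    # so this check stays green and the rounding bug remains hidden.
    assert format_price(2.999) == "2.99"

check_format_price()  # passes - a "false positive" in the sense used above
```

The check only encodes what the code did at the moment it was written; without a review against the actual requirement, it keeps agreeing with the bug.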


Ha ha, how true! Before I became a tester I worked for the same company in tech support. I didn’t know it at the time but that was an ideal introduction to the wild and wacky world of what users can do. I’ve lost count of the times that I’ve found undesirable behavior when using the software in entirely unintended ways, and when the devs say “good grief, but why?” in a tone of voice suggesting that perhaps I should be in a straitjacket, I can only say that I’ve seen users do even worse.

Want to get a fresh perspective on crazy things to test? Help out the support team and work with a few end users. It will either open your eyes or drive you to drink. :rofl:


I started out the same way, transitioning from a support role into testing; dealing with customers directly gave me good insights into how real users behave out there in the wild. :smiley: