Morning,
I'm interested in how many in the community work for companies that don't use much or any automated testing and instead rely primarily on manual/exploratory activity.
If so, what are your reasons?
I suspect that when we say "rely on manual testing" we are talking only about the final artifact-checking activities, and not about any build-time and provisioning-time checks. Everyone uses some kind of tool, and thus some kind of automation, to compose/compile/build/deploy a product, but I'm keen to know how many people stop there too.
I did this temporarily while an office move meant that the testing equipment could not be turned on and made to work, and I found it exhausting but also a lot of fun for a few months. So I currently appreciate both sides of the art, because manual testing can and did flush out customer UX and workflow defects. I wonder who would plan to do this all of the time, though?
From what I've seen, it's usually due to a lack of testers who know how to code, lack of time, lack of budget to invest in training people so they can learn automation, and lack of flexibility on the management side to get developers contributing to automation. Lots of lacks, basically.
I would also suggest that some testers fall into the category of being satisfied with only performing manual testing and not desiring to learn to code, especially if they are on a team where manual testing is their niche and there are other, more technical teammates who enjoy and thrive in test automation. I mean, if such a person is adding value as a "manual tester" and there is no push from management to "grow" / move them out of this valuable position, then all is well. Would you agree, @mirza and @conrad.braam?
Even though I can code, I also enjoy the more exploratory phases of testing.
Automation is only one tool in my box, because it only works for repetitive tasks.
Exploratory testing can be very demanding, e.g. "Is this a good UX?"
Yes, some people really don't like coding, and such individuals either stay as "manual" testers, eventually becoming some kind of domain testing specialists, or move in a managerial direction, becoming leads or test managers, and sometimes Scrum Masters, Product Owners, Business Analysts, etc.
So, in short, if someone doesn't want to work on automation, there are other directions in which a tester can grow.
I've observed a fair amount of "manual-tester-only shaming", where if a software tester is not skilled at, or actively learning, how to stand up and maintain an automation framework, they are valued as professionally "less than". I don't think this is cool, and honestly it's not something that our testing community should ever do or condone.
Thanks for the comments so far, I suppose I should have given some examples myself when first posting.
By "manual testing" (and without getting into the debate about the soundness of this term), I meant the post-programming system testing/regression testing performed by someone who would typically be in a tester role/job.
In my example, I work in a team covering 9 different applications, including enterprise-level ERP systems that have been in place for 20+ years with huge, sprawling functionality.
In some cases where the product is not updated very frequently, we have opted to stay manual-only, largely because we felt there wouldn't be the same ROI from investing time in automation. If it is a small-ish application with limited functionality that is updated once every 3 months or so (and on some occasions less than that), then we're happy staying with manual testing/regression testing - i.e. the manual regression of 1-1.5 days every few months is more cost-effective than developing an automated suite for it.
Now, prior to my joining the company a few years back there was zero automation, and I have introduced a certain degree of it for a number of the applications that are updated more frequently (and even then I mean every 4-6 weeks) - e.g. 90% of the regression testing performed by automated checks, and the final 10% as manual checklist/exploratory sessions.
For some applications, the ROI of developing automation may be considered low, but we're still doing it as a side project (a percentage of time each week), more as a development exercise for the testers themselves, so that if larger projects arise where automation would pay off, we're in a better position to support it.
For the larger systems that have had zero automation in 20 years, it's actually quite difficult to make a case for why it is now suddenly necessary, and actually quite a daunting task as well. Though we've made some headway in specific areas which change more frequently - around integration, rather than the UI level (e.g. the integration of JSON messages into the system from other systems, which can drive a lot of the processes, but this is still a drop in the ocean against the full capability of the system).
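To give a flavour of the kind of integration-level check I mean, here is a minimal sketch in Python. The message shape, field names, and rules are entirely invented for illustration; in reality they would come from the integration contract between the systems:

```python
import json

# Hypothetical message contract: these field names and types are invented
# for illustration, not taken from any real system.
REQUIRED_FIELDS = {"order_id": str, "customer_id": str, "lines": list}

def validate_order_message(raw: str) -> list:
    """Return a list of problems found in one inbound JSON message."""
    problems = []
    try:
        message = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in message:
            problems.append(f"missing field: {field}")
        elif not isinstance(message[field], expected_type):
            problems.append(f"wrong type for {field}: "
                            f"{type(message[field]).__name__}")
    return problems

# pytest-style checks that can run against a library of sample messages
def test_known_good_message_passes():
    raw = json.dumps({"order_id": "A-1", "customer_id": "C-9", "lines": []})
    assert validate_order_message(raw) == []

def test_malformed_message_is_reported():
    assert validate_order_message("{not json") != []
```

Checks like this sit well below the UI, so they tend to be cheap to build and stable to run, which is why they were an easier sell than UI automation on a 20-year-old system.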
I've been getting stuck in many conversations where people use the 80s/90s term "manual", because of the difference in mindset about what their focus is.
I've got to the point where, for the discussion to get on the same wavelength, we need to understand what the testing focus is.
If your testing is focused on verification, you will think about this completely differently than if your testing focus is discovery. Of course there may be a mix of the two, but it's often worth considering what takes precedence in your team.
Where I am our testing focus is usually discovery and in most cases our developers own and do most of the automation coverage, perhaps with support from testers.
Here are some condensed examples of how that plays out, from a project I was on last year.
At the design stage we test ideas through collaborative sessions, discovering risks, solutions, and ways to improve things before they are built.
Similarly, with user stories we test by bouncing ideas off each other to discover better ways of doing things.
A/B testing of initial hypotheses, plus real user data, logs and feedback, again to discover improvement opportunities. Here we can also leverage the discovered data to drive change and make informed decisions.
Testing sessions have a focus on discovering new risks or discovering useful information about known risks; this also includes real device coverage, personas, and a whole plethora of general mobile and web risks.
I should also flag that this discovery bias really benefits from the use of technical tools.
So in this model our developers pretty much have the verification view covered, with reasonable automation coverage. That is very important, but the reason we choose a discovery model for testers is that we want to go beyond it: to accelerate the team as a whole and to leverage testing to make better products and guide ongoing improvement in the product.
On some other projects, perhaps it's an older product and we are late to the party, so there is now a major regression risk. On these projects we sometimes switch to a verification focus, and here we do go heavy on automation.
Personally I prefer the former, and I often see the latter as a compensation model, but that's a longer discussion.
I believe that I fall into this category.
One of the key reasons why we've primarily used manual testing is that automation was "too hard". I think part of this stemmed from the fact that we're using services and a monolithic client built upon older technology and proprietary protocols; a lot of popular tooling and strategies don't easily map onto it. Whilst I've learnt about things that would make it easier to do more automated testing, honestly, after 15 years or whatever, it ain't happening now as we wind down development on the software.
We've had software test engineers embedded within the development teams running manual tests for a while now, although as we move to a new project I'm the last software tester left.
It may be worth calling out that I actually spent 5 years in the development team. I can code. However one of my pet peeves at the time was writing the unit and component tests (plus the documentation & doc reviews). It was a chore. Meanwhile there was our manual tester going out and finding the interesting bugs. So yeah, I switched back to test and no regrets!
My concern going forward is that I will have to end up doing mainly automated testing, which to me is a less appealing role than when I was a dev.
Interesting post - as a sub-branch to the discussion I am interested in what makes a programmer decide to become a "manual" tester?
Ignoring the old-school term "manual" again, I went from coding to testing. For me, I had stopped enjoying coding; it became a bit formulaic, and whilst my code logic is good, it took me a while to think things through. Then there was also a lot of repetition, and the whole copy-and-paste-without-understanding thing bothered me. I did like the product creation part though - the "I made this and it's awesome" sort of thing.
None of the above is why I moved to testing; it just flags that I was fairly average at coding and not really enjoying it much. The exact same things apply to automation for me, except that it also lacks the "I made this and it's awesome" factor, though I appreciate some automators still get that.
Now, testing gives me a buzz similar to what I get when I am going fishing somewhere new: I've got my tools, my knowledge, my research, my experience and a whole lot of judgement, but I just do not know what I am going to find/catch, if anything, and that jumping into the unknown is what I enjoy about testing.
I was in a lucky position to be able to choose what I enjoyed doing which is why I stuck with the transition from dev to testing.
Note that if it were the old-school, verification-focused "manual" testing, I would not enjoy that at all, and I'd still be coding if that's what testing was all about.
Great reply, appreciated.
It's brilliant to see someone recognising this. I've been a victim of this myself and it's pretty awful.
It used to come only from devs (the "You're just a tester, I'm a coding god" mantra), but I'm increasingly seeing it from other areas, including other testers now. The increasing and unceasing march to automate everything means that those of us who got into testing because we were bad at coding (me!), and who were consistently told we didn't need to code or were denied training in those areas, now increasingly find ourselves slipping further and further down the stack to cries of "Only manual".
It's pretty flipping disheartening, to be honest.
In my lengthy experience, all those automation developers have to do some testing too.
The best time to fully automate checks and actions to run on a CI server is AFTER the current development cycle, when most bugs have been found and fixed.
When you develop full automation during the current cycle, you always stumble over bugs. Some are hindrances which need to be removed, a.k.a. fixed. Others don't bother you much, but you investigate and report some anyway.
This is all that "manual" work.
A tester's work, the same as you do.
Judging by their actions, I doubt there are many people doing pure automation in testing.
In the end, automation is just a tool for testers, not a replacement.
(The next rant would be about how automation is only one way to use development skills in testing.)
Too hard? It feels like a number of people get walled into manual testing purely because the app we built was never designed to be "testable", and ergo not designed to be "automatable".
Every app is "automatable", even if all it has to do is blink an LED. But that's easy for me to say because, to date, I've never worked in a place where we rely on off-the-shelf test tools at all; we create our own. Yes, I have automation-tested two non-trivial embedded systems.
Oh, I don't disagree. It wasn't viewed as technically impossible, but as the code wasn't built for testability/automatability, it was hard. It wasn't just that "we can't use Selenium"; when we tried hooking into the decoder pipeline to monitor data, it affected the end result. Similar, I guess, to when certain issues don't show up in a debugger because you're slowing everything down. It would take a significant time investment to solve these problems. The UI-level testing was definitely more solvable, but less of a concern.
Perhaps I'd have been better off saying "too much effort".
It's worth noting that we did use automation, but it was more test automation than automated tests.
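To illustrate that distinction, here is a minimal sketch (the names and scenario are invented, not from the project above). The first function is test automation: tooling that supports a human tester but makes no judgement itself. The second is an automated test: an unattended check that passes or fails.

```python
# "Test automation": a helper that sets up a known state so a human can
# go and explore; it reports what it did but asserts nothing.
def seed_test_accounts(count: int) -> list:
    accounts = [f"tester-{i}@example.test" for i in range(count)]
    for account in accounts:
        # A real tool would call the system under test here.
        print(f"created {account}")
    return accounts

# "Automated test": runs unattended in a suite and delivers a verdict.
def test_seed_creates_requested_number_of_accounts():
    assert len(seed_test_accounts(3)) == 3
```

The first kind was what paid off for us: it sped the humans up without pretending to replace their judgement.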
> Interesting post - as a sub-branch to the discussion I am interested in what makes a programmer decide to become a "manual" tester?
I went to Uni to study programming and started my career in test as a foot in the door. Probably not an unusual thing to do. After a few years of bouncing between roles (test, games designer, test, engineering support), I finally landed in development. However, I found myself more interested in talks on testing than on coding. When covering for colleagues on holiday, I found that buzz of trying to find the reproduction steps for a bug, of exploring the software and challenging myself to break it. Eventually I realised that I enjoyed this more than development, where it felt like a bit of satisfaction from writing code, wrapped in layers of chore.
Since we (at Testuff) have many testing groups as customers, I can say as an observation that most groups still do not use any real test automation, no matter the group size, industry, or type of software being tested. As far as we can see from the data we have, there's still a long way to go before most testing groups include automation in their testing process.
From conversations with a few of them, I believe the main reasons are a lack of knowledge of how to do it (code), a lack of budget to bring on testers who may be able to do it, and a kind of "fear" of "getting into it", as one put it to me.
I think with time we'll see more, and better, tools for performing automated testing, which will help those who are still not doing it to get started - tools that will "automate the automation".
There are a lot of "manual only" testers out there; people like me, for instance, who "drifted" into testing not from coding but from other areas, like support. Until now you were fine: manual testers were needed, you didn't need to learn to code (let's be honest, you don't learn that in a one-week course), and you maybe got a certificate in testing and gathered experience and the correct mindset (which, imho, is what testing is about). These days every client is asking for "testers" who can code, mostly meaning that those testers are used as backup coders when pandemics hit or during vacation time. And the most important tests are at the unit-test level, meaning that every coder has to test their own code when a feature changes.
To me, this could be (though it doesn't have to be) the elimination of testing done by dedicated testers.