One of the key reasons why we've primarily used manual testing is that automation was 'too hard'. I think part of this stemmed from the fact that we're using services and a monolith client built upon older technology and proprietary protocols. A lot of popular tooling and strategies don't easily map. Whilst I've learnt about things that would make it easier to do more automated testing, honestly after 15 years or whatever, it ain't happening now as we wind down development on the software.
We've had software test engineers embedded within the development teams running manual tests for a while now, although as we move to a new project I'm the last software tester left.
It may be worth calling out that I actually spent 5 years in the development team. I can code. However one of my pet peeves at the time was writing the unit and component tests (plus the documentation & doc reviews). It was a chore. Meanwhile there was our manual tester going out and finding the interesting bugs. So yeah, I switched back to test and no regrets!
My concern going forward is that I'll end up doing mainly automated testing, which to me is a less appealing role than the one I had as a dev.
Ignoring the old school term 'manual' again, I went from coding to testing. For me, I had stopped enjoying coding; it became a bit formulaic. Whilst my code logic is good, it took a while for me to think things through. Then there was also a lot of repetition, and the whole copy-and-paste-without-understanding thing bothered me. I did like the product creation part though, the 'I made this and it's awesome' sort of thing.
None of the above is why I moved to testing; it just flags that I was fairly average at coding and was not really enjoying it much. The exact same things apply to automation for me, except it also lacks the 'I made this and it's awesome' factor, though I appreciate some automators still get that.
Now with testing I get a buzz similar to when I go fishing somewhere new: I've got my tools, my knowledge, my research, my experience and a whole lot of judgement, but I just do not know what I am going to find/catch, if anything, and that jumping into the unknown is what I enjoy about testing.
I was in a lucky position to be able to choose what I enjoyed doing which is why I stuck with the transition from dev to testing.
Note: if it was the old school verification-focused 'manual', I do not enjoy that at all, and I'd still be coding if that's what testing was all about.
It's brilliant to see someone recognising this. I've been a victim of this myself and it's pretty awful.
It used to only come from Devs (the 'You're just a tester, I'm a coding god' mantra), but I'm increasingly seeing it from other areas, including other testers now. The increasing and unceasing march to automate everything means that those of us who got into testing because we were bad at coding (me!) and were consistently told we didn't need to code/denied training in those areas now increasingly find ourselves slipping further and further down the stack to cries of 'Only manual'.
It's pretty flipping disheartening, to be honest.
In my lengthy experience, all those automation developers have to do some testing too.
The best time to fully automate checks and actions for running on a CI server is AFTER the current development cycle, when most bugs are found and fixed.
When you develop a full automation during the current cycle you always stumble over bugs. Some are hindrances which need to be removed, aka fixed. Some don't bother you much, but you investigate and report some anyway.
This is all that 'manual' work.
A tester's work, the same as you do.
Judging by their actions, I doubt there are many people doing pure automation in testing.
Finally, automation is just a tool for testers, not a replacement.
(The next rant would be that automation is only one way to use development in testing.)
Too hard? Feels like a number of people get walled into manual testing purely because the app we built was never designed to be 'testable', and ergo not designed to be 'automatable'.
Every app is 'automatable', even if all it has to do is blink an LED. But that's easy for me to say, because to date I've never worked in a place where we rely on off-the-shelf test tools at all; we create our own. Yes, I have automation-tested on 2 non-trivial embedded systems.
Oh, I don't disagree. It wasn't viewed as technically impossible, but as the code wasn't built for testability/automatability, it was hard. Not just that 'we can't use Selenium', but also when we tried hooking into the decoder pipeline to monitor data, it affected the end result. Similar, I guess, to when certain issues don't show in a debugger because you're slowing everything down. It would take a significant time investment to solve these problems. The UI-level testing was definitely more solvable but less of a concern.
Perhaps I'd have been better off saying 'too much effort'.
It's worth noting that we did use automation, but that was more test automation than automated tests.
Interesting post - as a sub-branch to the discussion, I am interested in what makes a programmer decide to become a 'manual' tester?
I went to Uni to study programming and started my career in test as a foot in the door. Probably not an unusual thing to do. After a few years of bouncing between roles (test, games designer, test, engineering support), I finally landed in development. However, I found myself more interested in talks on testing than coding. When covering for colleagues on holiday I found that buzz of trying to find the reproduction for a bug, of trying to explore the software and challenging myself to break it. Eventually I realised that I enjoyed this more than development, where it felt like a bit of satisfaction writing code, wrapped in layers of chore.
Since we (at Testuff) have many testing groups as customers, I can say as an observation that most groups still do not use any real automation testing, no matter the group size, industry or type of software being tested. There's still a long way to go for most testing groups to include automation in their testing process, as far as we can see from the data we have.
From conversations with a few, I believe that the main reasons for that are the lack of knowledge of how to do it (code), the lack of budget to bring on testers who may be able to do it, and a kind of 'fear' I've found of 'getting into it', as one said to me.
I think with time we'll see more, and better, tools for performing automated testing, which will help those who are still not doing it to get started. Tools that will 'automate the automation'.
There are a lot of 'manual only' testers out there. People like me, for instance, who 'drifted' into testing not from coding but from other areas like support. Up to now you were fine: manual testers were needed, you didn't need to learn to code (let's be honest, you don't learn that in a one-week course), you maybe got a certificate in testing and gathered experience and the correct mindset (which imho is what testing is about). These days every client is asking for 'testers' who can code, meaning mostly that those testers are used as backup coders when pandemics hit or during vacation time. And the most important tests are tests at the unit test level, meaning that every coder has to test his own code when it's a change of features.
To me this could be (doesn't have to be, though) an elimination of the testing procedure done by dedicated testers.
You are describing my testing career, @larsthomsen ! And you are so right.
And as I've pointed out elsewhere, unit testing and automated testing only confirm that the code as written is correct. They take no account of issues with UI design, implementation, what happens when you stand the app up in a live environment, and what happens when it interacts with other systems, live data or - the ultimate test - users, who will do things to the system that no-one would ever anticipate, or even believe. Under those conditions, you need to know that the system will either cope with the unexpected, or, if it fails, do so gracefully, without corrupting data or requiring complex and high-level (read: expensive) intervention.
It's no good being able to say 'We applied all the best unit and automated tests' if the system caused people to lose money, go to prison, or die.
Ha ha, how true! Before I became a tester I worked for the same company in tech support. I didn't know it at the time, but that was an ideal introduction to the wild and wacky world of what users can do. I've lost count of the times that I've found undesirable behavior when using the software in entirely unintended ways, and when the devs say 'good grief, but why?' in a tone of voice suggesting that perhaps I should be in a straitjacket, I can only say that I've seen users do even worse.
Want to get a fresh perspective on crazy things to test? Help out the support team and work with a few end users. It will either open your eyes or drive you to drink.
I started out the same way, transitioning from a support role into testing; dealing with the customers directly gave me good insights into how real users behave out there in the wild.
I ran into this same problem at a previous company.
We tried writing UI automation and found that it actually provided very little coverage and was a PAIN to maintain. And there were a tonne of errors just from UI changes and flakiness.
We ended up writing a series of unit tests for specific business logic and just beefed up our non-technical support staff who tested with playbooks.
We probably could have done better if there was more willingness to invest in the reliability of the product. But that's what we did with a small team and a small budget.
I had similar experiences (not as an automation tester). There are a lot of clueless clients who think that getting lots of automated tests will solve any problem ('and it's cheap!'), but that is rarely the case. If you just produce lots of automated tests you will have long running times on machines, and you will have tests that cover the same features several times. And should changes come, even simple ones, you have a lot of upkeep to do. The solution to me is to have tests run through manual testers, let them work out 'good' test cases, and keep them all in a common repository where you have an overview of what is tested overall, automatically, and what needs manual testing. Just dumbly producing automated tests will kill you with costs for upkeep etc. As usual, communication, organization and a little thinking will save you money.
I actually worked for a place that had reasonably comprehensive UI automation - and worked on said automation myself. It was even reliable (at least until we ran into a nasty compiler bug that corrupted the debuginfo once the executable was large enough).
The way that automation was configured was terrifying for someone new to the team: the automation was massively data-driven, using CSV files to hold tests, test objects, and expected results. A test run was defined by a CSV file listing the tests, another CSV file containing the test details, and the test objects for each detail row (CSV again).
On the flip side, coping with a change in how part of the application behaved was typically modifying one method, along with adjusting the order of steps if needed. A renamed UI object was an adjustment to one file. And a new test for a feature was something like half a dozen CSV edits plus one new method and a couple of lines to the various control functions.
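A minimal sketch of that kind of data-driven runner, with hypothetical CSV columns and stub handlers standing in for the real UI-driving code (none of this is the original framework's schema):

```python
import csv

# Hypothetical CSV layout: each row names a test, the action to drive,
# the UI object to act on, and the expected result.
TESTS_CSV = """test_id,action,target,expected
open_settings,click,settings_button,settings_open
rename_user,type,name_field,name_saved
"""

def run_tests(rows, handlers):
    # Dispatch each CSV row to the handler for its action; a renamed UI
    # object is then a one-line CSV edit, and a behaviour change is
    # usually an edit to a single handler method.
    return {row["test_id"]: handlers[row["action"]](row["target"]) == row["expected"]
            for row in rows}

# Stub handlers standing in for real UI automation calls.
handlers = {
    "click": lambda target: "settings_open" if target == "settings_button" else "error",
    "type": lambda target: "name_saved" if target == "name_field" else "error",
}

rows = list(csv.DictReader(TESTS_CSV.splitlines()))
print(run_tests(rows, handlers))  # both rows pass against the stubs
```

The appeal of this style, as described above, is that the test corpus lives in data files that non-programmers can edit, while the code stays a small dispatch layer.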
The whole thing was immensely complex, and when I was working there over 10 years ago ran on multiple machines overnight, with a 5-6 hour configuration and test data setup run every weekend to generate the base test data.
While things were going well, it took less than an hour each day for someone to check the results. When they weren't, we could usually pinpoint where the problem was, and reproduce it manually.
The key thing here is that this was a codebase that had been in constant development for over ten years when I was working there. There were around half a million lines of code in the automation codebase, and about the same in the data files (which were effectively a relational database in CSV files).
And the reason this was all UI automation was that the application under test had been in constant development for over twenty years and still had a great deal of code where it wasn't possible to disentangle the UI from the back end.
On the plus side, having it meant that the test team did a lot of manual work in exploration and determining what would make good candidates for automation.
I've worked on teams that relied heavily upon it several times, for the following reasons:
There were a lot of (cheap) manual testers available but developer time was scarce.
The automation framework basically didnāt work or didnāt work very well and we needed to get a release out.
The product or area of the product was changing so fast that it wasn't really worth it. If you end up running an automated test < 20 times then the return on investment is probably going to be negative. This is the case in a lot of startups, where building the wrong thing is far more dangerous than building the thing wrong.
The product had a lot of bugs that automated tests weren't very good at catching - e.g. visual defects.
The type of test would have been unreasonably difficult to automate (e.g. in one company we needed to test the use of a receipt printer - that scenario we left to the manual testers).
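To make the "< 20 runs" ROI point concrete, here is a rough break-even sketch; all the hour figures are illustrative assumptions, not measured data:

```python
def break_even_runs(cost_to_automate, maintenance_per_run, manual_run_cost):
    # Automation pays off once the manual effort saved per run
    # (minus per-run maintenance) has repaid the up-front cost.
    saved_per_run = manual_run_cost - maintenance_per_run
    return cost_to_automate / saved_per_run

# Illustrative numbers, in hours: 4h to write and stabilise one automated
# test, 3 minutes per run triaging flakes, 15 minutes to run it by hand.
runs = break_even_runs(cost_to_automate=4.0,
                       maintenance_per_run=0.05,
                       manual_run_cost=0.25)
print(round(runs, 1))  # 20.0 runs before the investment pays back
```

Under these assumed numbers, a test retired before its twentieth run never repays the effort of writing it, which is the fast-changing-product situation described above.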
Not for a few years now, but when I worked at a small consultancy, and we had to log hours against projects for clients, very little work had any automation.
That is, a small handful had unit tests, almost nothing had automated system or integration tests, and there was little to no automation on the UI.
It was a combination of our setup in terms of how work was specified and billed, and lack of experience in building automation that would have made it cheaper.
I was pushing to get capacity for Testers to learn automation until the day we went bankrupt and collapsed. The two probably were not directly linked… Probably.