Anyone out there only use "manual" testing?

You are describing my testing career, @larsthomsen ! And you are so right.

And as I’ve pointed out elsewhere, unit testing and automated testing only confirms that the code as written is correct. It takes no account of issues with UI design or implementation, of what happens when you stand the app up in a live environment, or of what happens when it interacts with other systems, live data or - the ultimate test - users, who will do things to the system that no-one would ever anticipate, or even believe. Under those conditions, you need to know that the system will either cope with the unexpected or, if it fails, fail gracefully, without corrupting data or requiring complex and high-level (read: expensive) intervention.

It’s no good being able to say “We applied all the best unit and automated tests” if the system caused people to lose money, go to prison, or die.

6 Likes

… automated testing only confirms that the code as written is correct

And even that isn’t guaranteed, and it often becomes a maintenance hell.

The automation code isn’t always adapted to changes in the product, so the failures it reports are just outdated, “hard-wired” expectations.

At worst you get a false-positive automation case: outdated, unreviewed automation code that hides a real bug in the application.
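
To make that concrete, here’s a minimal, made-up sketch of how a stale expectation turns into a false positive (the function, values and rule change are all invented):

```python
# Invented example: the business rule changed the fee from 15% to 20%,
# and the application has a bug that still charges 15%. The check below
# was written before the rule change and never reviewed afterwards, so
# it keeps passing and quietly hides the real bug.

def calculate_fee(amount):
    return amount * 15 // 100  # bug: should be 20% since the rule change

def test_calculate_fee():
    # Outdated, hard-wired expectation from before the rule change.
    assert calculate_fee(100) == 15  # green, but the product is wrong

test_calculate_fee()
print("test passed - and the bug ships")
```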

4 Likes

Ha ha, how true! Before I became a tester I worked for the same company in tech support. I didn’t know it at the time but that was an ideal introduction to the wild and wacky world of what users can do. I’ve lost count of the times that I’ve found undesirable behavior when using the software in entirely unintended ways, and when the devs say “good grief, but why?” in a tone of voice suggesting that perhaps I should be in a straitjacket, I can only say that I’ve seen users do even worse.

Want to get a fresh perspective on crazy things to test? Help out the support team and work with a few end users. It will either open your eyes or drive you to drink. :rofl:

2 Likes

I started out the same way, transitioning from a support role into testing. Dealing with the customers directly gave me good insights into how real users behave out there in the wild. :smiley:

3 Likes

I ran into this same problem at a previous company.

We tried writing UI automation and found that it actually provided very little coverage and was a PAIN to maintain. And there were a tonne of errors just from UI changes and flakiness.

We ended up writing a series of unit tests for specific business logic and just beefed up our non-technical support staff who tested with playbooks.

We probably could have done better if there was more willingness to invest in the reliability of the product. But, that’s what we did with a small team and a small budget.

4 Likes

I had similar experiences (not as an automation tester). A lot of clueless clients think that getting lots of automated tests will solve every problem (“and it’s cheap!”), but that is rarely the case. If you just produce lots of automated tests, you get long machine run times and tests that cover the same features several times over. And when changes come, even simple ones, you have a lot of upkeep to do.

The solution, to me, is to have manual testers run the tests first and work out the “good” test cases, then keep them all in a common repository where you have an overview of what is covered overall, what is automated, and what still needs manual testing. Just blindly producing automated tests will kill you with upkeep costs. As usual, communication, organization and a little thinking will save you money.

1 Like

I actually worked for a place that had reasonably comprehensive UI automation - and worked on said automation myself. It was even reliable (at least until we ran into a nasty compiler bug that corrupted the debuginfo once the executable was large enough).

The way that automation was configured was terrifying for someone new to the team: the automation was massively data-driven, using CSV files to hold tests, test objects, and expected results. A test run was defined by one CSV file listing the tests, another containing the details of each test, and the test objects for each detail row (CSV again).
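
As a rough illustration, here’s a minimal sketch of how such a CSV-driven runner might be structured - every file name, column and action below is invented, not the actual framework:

```python
import csv, io

# Hypothetical reconstruction of the layout described above. Three
# "files": one lists the tests in a run, one holds the per-test steps
# and expected results, and one maps logical UI object names to
# locators (which is why a renamed control was a one-line edit).

RUN_LIST = "test_id\nlogin_ok\n"
OBJECTS = "name,locator\nuser_field,#username\nlogin_btn,#submit\n"
STEPS = ("test_id,step,object,action,input,expected\n"
         "login_ok,1,user_field,type,alice,\n"
         "login_ok,2,login_btn,click,,welcome\n")

def perform(action, locator, value):
    # Stand-in for the real UI driver. In the real framework each action
    # mapped onto one method, which is why a behaviour change was usually
    # a single-method fix.
    return "welcome" if action == "click" else ""

def run_suite():
    objects = {r["name"]: r["locator"]
               for r in csv.DictReader(io.StringIO(OBJECTS))}
    wanted = {r["test_id"] for r in csv.DictReader(io.StringIO(RUN_LIST))}
    results = {}
    for step in csv.DictReader(io.StringIO(STEPS)):
        if step["test_id"] not in wanted:
            continue
        actual = perform(step["action"], objects[step["object"]], step["input"])
        ok = actual == step["expected"] or not step["expected"]
        results[step["test_id"]] = results.get(step["test_id"], True) and ok
    return results

print(run_suite())  # {'login_ok': True}
```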

On the flip side, coping with a change in how part of the application behaved typically meant modifying one method, along with adjusting the order of steps if needed. A renamed UI object was an adjustment to one file. And a new test for a feature was something like half a dozen CSV edits, plus one new method and a couple of lines added to the various control functions.

The whole thing was immensely complex, and when I was working there over 10 years ago it ran on multiple machines overnight, with a 5-6 hour configuration and test data setup run every weekend to generate the base test data.

While things were going well, it took less than an hour each day for someone to check the results. When they weren’t, we could usually pinpoint where the problem was, and reproduce it manually.

The key thing here is that this was a codebase that had been in constant development for over ten years when I was working there. There were around half a million lines of code in the automation codebase, and about the same in the data files (which were effectively a relational database in CSV files).

And the reason this was all UI automation was that the application in test had been in constant development for over twenty years and still had a great deal of code where it wasn’t possible to disentangle the UI from the back end.

On the plus side, having it meant that the test team did a lot of manual exploratory work, determining what would make good candidates for automation.

3 Likes

I’ve worked on teams that relied heavily on manual testing several times, for the following reasons:

  • There were a lot of (cheap) manual testers available but developer time was scarce.
  • The automation framework basically didn’t work or didn’t work very well and we needed to get a release out.
  • The product or area of the product was changing so fast that automation wasn’t really worth it. If you end up running an automated test fewer than 20 times, the return on investment is probably going to be negative (see the break-even sketch after this list). This is the case in a lot of startups, where building the wrong thing is far more dangerous than building the thing wrong.
  • The product had a lot of bugs that automated tests weren’t very good at catching - e.g. visual defects.
  • The type of test would have been unreasonably difficult to automate (e.g. in one company we needed to test the use of a receipt printer - that scenario we left to the manual testers).
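
To put a number on that rule of thumb, here’s a tiny break-even model - every figure in it is an illustrative assumption, not a measurement:

```python
# A tiny break-even model for the "< 20 runs" rule of thumb above.
# Assumed costs: automating a check takes 4 hours up front plus 15
# minutes of maintenance per run, while running it manually takes
# 30 minutes each time.

def break_even_runs(build_hours=4.0, maint_hours_per_run=0.25,
                    manual_hours_per_run=0.5):
    """Runs needed before the automation starts saving time overall."""
    saved_per_run = manual_hours_per_run - maint_hours_per_run
    if saved_per_run <= 0:
        return None  # automation never pays off under these assumptions
    return build_hours / saved_per_run

print(break_even_runs())     # 16.0 - roughly in line with the ~20-run heuristic
print(break_even_runs(8.0))  # 32.0 - double the build cost, double the runs
```

Change the assumptions and the break-even moves, but on a product that churns weekly it’s easy to retire a test before it ever pays for itself.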
3 Likes

Not for a few years now, but when I worked at a small consultancy where we had to log hours against projects for clients, very little work had any automation.

That is, a small handful of projects had unit tests, almost nothing had automated system or integration tests, and there was little to no UI automation.

It was a combination of how our work was specified and billed, and a lack of experience in building the automation that would have made it cheaper.

I was pushing to get capacity for Testers to learn automation until the day we went bankrupt and collapsed. The two probably were not directly linked… Probably.

2 Likes