When do you give up on a test script?

For context, this is very similar to my mood right now: https://twitter.com/BruceOnlyBruce/status/1572508414596771842?s=20&t=ti0Jq6tFc_sAHBl5QRk_kA

The big difference is that just one app is causing all the environment pain when it comes to testing it. The problems range from making sure the app is configured to connect to the correct test cloud environment (we have three or more, really) all the way to the app’s security hurdles. Together they have turned a 3-month project into a 6-month one that still isn’t finished. My automation is just flaky.
The goal of the script was to let developers test from home instead of just waiting for nightly results. The tests take ages to complete anyway (all sorts of reasons for that too), so being able to debug or rerun locally matters. The build and test lab is not in the office anymore, everyone uses a datacenter these days, so asking devs to come to the office was never going to help us here.

A lot of the delay has been that we need a lot of manual testing that the test rig just cannot execute right now, because it would require automating things that are not possible. So some manual testing is unavoidable, but the flakiness of the automated tests has been a distraction, which we are solving by rewriting some parts of the automation. The shift away from a nice office with everything in one place has been a big knock, and tech debt clearly plays a part. But each thing we fix reveals another thing that needs attention.

I’m the kind of person who often uses pure enthusiasm to keep chipping away at a thing; I don’t give up easily. But right now I just want to stab it with a wooden stake and ask to switch to a different team, or at least rewrite the plan a bit. Why shouldn’t I?


Take off and nuke it from orbit?

Seriously, though, why are the automated scripts acting up?


I get like this often. In my experience, walk away from it for a few days and help out other team members, even if it’s not your primary job.

Come back with a clean mind and, more often than not (for me anyway), it all seems so clear when I return. Sometimes you dig so far down a hole with persistence that you become blind to the obvious, which you would have found without the frustration.

Ps. Hang in there!


The main blocker has been networking- and VPN-related. I’m not keen to go into technical detail, but network troubles in our satellite office have also played a part by just being another hurdle I did not need, @jon_thompson. Mostly caused by Covid remote working, one project overshadowing another and, as @medigoldhealth points out, not stepping back and taking stock.

Had a war room, and we made some hard calls: we will cut back the scope and the environments we want to support, which reduces the vendors we execute automated tests against to just the main ones. Trying to run tests against every vendor was probably a mistake with the small team we have. It’s easy to aim big and then fail to deliver anything close to what you thought you could. Aiming a bit lower is humbling, but it feels good. Basically, the devs all came back and said that the tests against some vendors appeared to be stable; keeping only those automated will lose us some coverage, but save a lot of lost sleep. And if we test the remaining, hard-to-automate vendors manually, that’s probably a good chance to win back time too.

I’m of a mind to do this kind of “step-back” exercise on a regular basis.


I think I know what you mean about networking problems, @conrad.connected. I’m developing automated tests remotely in the North West, connecting to the corporate network over a VPN, and then hopping through to a VM’s desktop in London running the actual web app I’m testing. Pretty much every other statement is there to make sure I’m syncing with the UI before moving on, because if I don’t, the tests will break at random whenever something isn’t fast enough.
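The sync pattern boils down to polling for a condition before acting. A minimal sketch in plain Python (standing in for Selenium’s `WebDriverWait`, which does essentially the same thing under the hood; names and timeouts here are illustrative):

```python
import time


def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    `condition` is any zero-argument callable, e.g. a lambda that checks
    whether an element is visible. Returns the truthy result, or raises
    TimeoutError so the test fails loudly instead of flaking silently.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(interval)
```

Usage would look like `wait_until(lambda: button.is_displayed(), timeout=15)` before each click, rather than sprinkling fixed sleeps around.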

Also, the devs rebuild several times per day. The app is built with Angular. Angular seems to randomly change static parts of the interface with each build just because it can. So, finding robust locators for each and every element that are stable regardless of Angular’s quirks is basically my life at the moment.
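One approach that has held up for me against regenerated classes and ids: key every locator on a dedicated test attribute that the devs add and promise not to touch, instead of anything Angular generates. A sketch (the `data-testid` attribute name and these helper functions are illustrative conventions, not from any particular framework):

```python
def by_testid(test_id):
    """CSS selector keyed on a dedicated data-testid attribute,
    which survives regenerated classes/ids across builds."""
    return f'[data-testid="{test_id}"]'


def by_partial_text(tag, text):
    """XPath fallback for elements with no stable attribute:
    match on visible text, whitespace-normalised."""
    return f'//{tag}[contains(normalize-space(.), "{text}")]'
```

Then a test does `driver.find_element(By.CSS_SELECTOR, by_testid("login-btn"))`, and a rebuild that reshuffles Angular’s generated attributes doesn’t break anything.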

Still, if we knew what we were doing, we’d only be bored, right?


I think that’s the big thing that keeps me moving forward. I suspect we all suffer from imposter syndrome, although I prefer to call it low self-esteem. And then I’m reminded of my father’s proverb:

If it was easy, they would get monkeys to do it.

Ah Angular, I did a bit of that a year ago. It needs a lot of test-code organization and structuring skill, since lots of controls are just plain tricky to drive well. Reminding devs to add accessibility IDs every time they make any changes is fun. Overall I suspect that when the problem is complex, that’s when we enjoy it most, but also when we need to give ourselves more room to improve our tooling and clean things up more regularly.

@conrad.connected - Each week I have that at least once, so you are not alone, my friend. If it helps in any way, know that in my eyes you are a good tester, and: you can do it.

When the network (latency and/or bandwidth?) is the main issue:
How about uploading the automation application either to a PC next to the mobile devices or onto the mobile devices themselves? I once had the latter (developed entirely in-house by a co-worker, because nothing like that existed for Windows Mobile).
Then you basically just get reports, logs, etc. back once a run is finished.
Make the execution actually run next to the devices, or on them.

Some other, more general, questions:
What are you trying to achieve with your (UI?) automation?
Are there maybe other ways?
E.g. you want to check that the app can be executed on the device. Why is it necessary to do this via UI automation? Why not use APIs on the mobile device?
Are you also checking, via UI automation, that the UI is correctly rendered? How about letting the automation “just” navigate through the app and take screenshots, which your team (at least the first time) checks themselves for correctness?
You can do image comparison, but don’t get started with thresholds for deviations - imo make image comparison 1:1 or not at all. And be sure your comparison screenshots don’t contain false positives.
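A sketch of what 1:1 comparison means in practice, over plain pixel buffers (with Pillow you would get the buffers from `Image.getdata()`; that step is assumed here, so the example stays library-free):

```python
def compare_pixels(pixels_a, pixels_b, size):
    """Exact 1:1 comparison of two equally sized pixel buffers.

    Each buffer is a flat sequence of pixel tuples, row-major.
    Returns (True, None) when identical, otherwise (False, (x, y))
    with the coordinates of the first mismatching pixel - no
    thresholds, no fuzzy matching.
    """
    width, _height = size
    for i, (a, b) in enumerate(zip(pixels_a, pixels_b)):
        if a != b:
            return False, (i % width, i // width)
    return True, None
```

Reporting the first mismatch coordinate helps you spot the false-positive sources (clocks, animations, anti-aliased text) that you then either mask out of the screenshots or stop comparing at all.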

Imo there will always be parts which humans need to do; they are better at improvisation and at seeing the whole screen (not just tiny parts, like automation code does).
Think about how automation, and coding in general, could help your team speed up the repetitive parts that computers are good at.
And leave the demanding things, like handling a dynamically changing structure, to the humans.

There is no such thing as “manual testing” :wink:
I partly know what you are trying to say.
There is always a certain part left for humans. Guess what reading automation reports and logs is 😉
Otherwise it wouldn’t be testing - looking for problems which matter to people.