Hey! I’m a junior automation tester, and I was curious to ask our experienced MoT members: what was the hardest or most interesting thing for you to automate, and how did you overcome it? Maybe someone can even provide an example. Thank you in advance!
As OTT is growing, automating OTT content checks, like audio, video, video quality, and sync of closed captions with voice (still haven’t found a good solution for that one), was the hardest. And it was hard because no budget was provided for paid tools, so we had to discover open-source options or buy customized hardware to support the automation.
I used metamorphic testing to automatically test a machine learning model. (Data scientists create the model and test it themselves, and I tested it as well.)
We created 100,000 test cases, and since in machine learning the answer is a black box (a prediction), you use metamorphic testing to compare those 100,000 results to each other, so you can check whether the outcome is acceptable with (for example) 80% accuracy.
It was one hell of a ride
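To make the idea concrete, here is a minimal sketch of metamorphic testing, not the poster’s actual setup. It assumes a hypothetical toy model (`predict`) and one hypothetical metamorphic relation (increasing every input feature should not decrease the score); the real project would use the data scientists’ model and domain-specific relations.

```python
import random

random.seed(42)

def predict(features):
    """Hypothetical stand-in for the ML model under test: a noisy
    linear scorer, so the metamorphic relation won't hold 100% of
    the time (just like a real black-box model)."""
    score = sum(features) / len(features)
    return score + random.gauss(0, 0.05)

def holds_monotonicity(features, delta=0.1):
    """Metamorphic relation: increasing every input feature should
    not decrease the predicted score. We never check an absolute
    'correct' answer, only how two predictions relate."""
    base = predict(features)
    bumped = predict([f + delta for f in features])
    return bumped >= base

# Generate many cases and check what fraction satisfy the relation.
cases = [[random.random() for _ in range(3)] for _ in range(100_000)]
passed = sum(holds_monotonicity(c) for c in cases)
rate = passed / len(cases)
print(f"metamorphic pass rate: {rate:.1%}")
assert rate >= 0.80, "model violates the monotonicity relation too often"
```

The point is that no single test case has a known expected output; the oracle is the relationship between pairs of predictions, aggregated over the whole case set against an acceptance threshold.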
The hardest thing was a page that had to provide a good overview of tickets. The requirement was very fuzzy as to what “good” meant, and we didn’t have access to good page hooks (the locators are random GUIDs). So we overcame it by not automating the overview page and having people evaluate it instead.
Automating almost anything retrospectively is impossible to do well. Good automation is written alongside code that is designed to be tested.
But some more classical answers… UI testing, like a web app with no IDs. Something generated, like Salesforce, is traditionally considered hard to do well. Some UI frameworks have no interface other than an image, so they are very flaky to automate.
Anything subjective like audio or video quality.
Anything non-deterministic like intermittent bugs, race conditions, real world performance.
Anything uncontrolled like which phones customers use. Supporting all those old browsers and Android phones.
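On the image-only UI point above: the usual workaround is a tolerance-based screenshot comparison rather than exact pixel equality, which absorbs anti-aliasing and font-rendering jitter. A minimal sketch in pure Python, with images modelled as lists of RGB tuples (a real setup would capture screenshots with a library like Pillow):

```python
def pixel_diff_ratio(img_a, img_b, channel_tolerance=10):
    """Fraction of pixels whose RGB channels differ by more than the
    tolerance. Images are same-sized flat lists of (r, g, b) tuples."""
    assert len(img_a) == len(img_b), "images must be the same size"
    mismatched = sum(
        1 for a, b in zip(img_a, img_b)
        if any(abs(ca - cb) > channel_tolerance for ca, cb in zip(a, b))
    )
    return mismatched / len(img_a)

def looks_same(img_a, img_b, max_diff_ratio=0.01):
    """Treat two screenshots as equal if under 1% of pixels disagree
    beyond the per-channel tolerance."""
    return pixel_diff_ratio(img_a, img_b) <= max_diff_ratio

# Rendering jitter shifts channel values slightly: an exact compare
# flags a failure, while the tolerant compare stays stable.
baseline = [(255, 255, 255)] * 100
rendered = [(250, 252, 255)] * 100   # subtle anti-aliasing drift
assert baseline != rendered          # exact equality is flaky
assert looks_same(baseline, rendered)
```

Even with tolerances, image comparison stays the most fragile option; it is a fallback for when no DOM, accessibility tree, or API hook exists.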
The hardest thing to automate would be the many things I failed to automate. Failed, primarily, because I was trying to automate things that should not be automated. A few people higher up have stated reasons why; for example, here is a similar story (not mine) which is typical of the things I’ve failed at (time code 40:23).
Instead of being vague, I’ll be very specific:
Goal: test that the graphics driver loads.
Step 1: install the baseline driver, try to detect the version.
Step 2: install the new driver, wait for the old driver to go away, wait for the new driver version.
Step 3: reboot and verify the driver does not roll back.
Each step is followed by a pixel check (a device aimed at the screen that checks the screen colour is a specified colour).
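The wait-and-verify shape of those steps can be sketched as a polling loop. Everything here is hypothetical: `read_driver_version` and `read_screen_colour` stand in for the poster’s toolstack, which isn’t named.

```python
import time

def wait_for(predicate, timeout_s=300, poll_s=5):
    """Poll until the predicate holds or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_s)
    return False

def colour_matches(actual, expected, tolerance=8):
    """Pixel check: every RGB channel within tolerance of the target,
    since camera readings of a screen are never exact."""
    return all(abs(a - e) <= tolerance for a, e in zip(actual, expected))

# Step 2 shape (hypothetical helpers, shown commented out):
# assert wait_for(lambda: read_driver_version() != OLD_VERSION)
# assert wait_for(lambda: read_driver_version() == NEW_VERSION)
# assert colour_matches(read_screen_colour(), EXPECTED_RGB)
```

Note that, as the failure below shows, passing both checks still wouldn’t prove the driver performs well; the loop only verifies version strings and screen colour.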
As anyone can guess, Step 3 went wrong, for two reasons:
- My testing toolstack could handle reboots, but not a double reboot when updates were required.
- The version-number check was misleading: it turned out a “bad” driver can still load and apparently run correctly, yet have poor performance.
Pretty sure I wasted a whole month on that failure. Some things should not be automated.