30 Days of Automation in Testing Day 22: Share your biggest frustration with automation testing

My biggest frustration with automation testing is that in my previous project, I could not optimize the test scripts and make them stable before rotating to another project.

I joined an automation project that already had many test scripts covering the database, the API, and files written from the database for comparison. I optimized the framework as I mentioned on Day 21. The framework is in good shape now, but the test scripts, which had been written by many different QAs, were not stable. My task was to run the regression weekly, analyze the failed scripts, and fix them. One week I fixed about 40 failed test cases, but the next week about 50 others failed. It drove me crazy; there were so many causes: the environment, out-of-date data, WinForms manipulation, image comparison… The scripts just would not stabilize. After refactoring the framework, some issues went away and performance improved significantly, but the test scripts were still not stable. I needed time to invest in analyzing the structure, the system, and the source code, and to define a way to make the test scripts more stable. But then I rotated to another project.

As it relates to just the task of automating tests, I can’t say I’ve had much in the way of frustrations. The one challenge that comes to mind is where we (the Senior Engineer and I) had to get “creative” with our testing because the framework in use was starting to reach its limitations.

I won’t get into devs not adding proper element identifiers (tags) to page objects, as I’ve been told this is not always possible with frameworks like React. Still, it makes our job harder to do.

I would add the test -> fix -> retest -> commit loop, but that’s our job. As is investigating tests that were working in “Env. 1” but fail in “Env. 2”.

My biggest frustration with automation also comes from the integration task: integrating test suites developed by many other members of the automation team. Some of them are junior automation engineers who don’t have much experience in scripting; others have just moved over from manual testing.
That challenge cost me a lot of time analyzing failed test scripts again and again, but it also gave me the chance to discover many kinds of holes in automation scripting. It has helped me very much in designing a better automation framework and in sharing lessons learned with other automation projects.

From the Twittersphere:

My challenge is finding out how to automate what can and should be automated in a way that respects those tasks that cannot and should not be automated. In particular, the systems I test are back ends to hardware that requires direct interaction (button pressing) in a way that cannot be automated (unless I want to get into building robots) and probably should not be automated (because there is a user experience aspect to what I do).

So while I have been gaining a lot of knowledge in automation, and have been doing some practice on toy systems, it has been hard to put it into REAL practice. Hard, but not impossible. I am finding ways to speed up tasks that free up some time and labor, but it doesn’t look anything like a classical ‘test automation’ setup.

-Dave K


Below are the pain points that I have experienced:

  1. Flaky tests - Tests failing because of changing web elements or loading times.
  2. Reducing dependencies - Automating tests that involve multiple dependent applications can be challenging, because when one of those dependent applications doesn’t work as expected, the automated tests fail.
  3. Test data - Tests whose test data can’t be data-driven and has to be entered uniquely and manually every time have been a challenge.
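On the test-data point, one common mitigation is a data-driven loop: the test logic is written once and each data record becomes a separate case. A minimal sketch in plain Python, where `validate_username` and the sample records are hypothetical stand-ins for a real system under test:

```python
# Data-driven test sketch: one piece of test logic, many data rows.
# `validate_username` and TEST_DATA are hypothetical examples.

def validate_username(name):
    """Toy system under test: usernames must be 3-12 alphanumeric chars."""
    return name.isalnum() and 3 <= len(name) <= 12

# Each tuple is (input, expected result). In a real suite this table
# would typically live in a CSV/JSON file rather than in the script.
TEST_DATA = [
    ("alice", True),
    ("ab", False),          # too short
    ("user name", False),   # contains a space
    ("x" * 13, False),      # too long
]

def run_suite():
    """Run every data row; return the inputs that failed."""
    failures = []
    for value, expected in TEST_DATA:
        if validate_username(value) != expected:
            failures.append(value)
    return failures

print(run_suite())  # an empty list means every data row passed
```

Adding a new case is then just adding a row, rather than copying a whole script.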

Source: https://qakumar.wordpress.com/2018/07/27/day-22-share-your-biggest-frustration-with-automation-testing/


For me it has to be “element not found” errors due to page loading issues, and making wait statements work.
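A library-agnostic way to make waits work is explicit polling: retry a condition until a timeout rather than sleeping for a fixed time. A minimal sketch of the idea (the `wait_until` helper and `find_element` lookup below are hypothetical, mirroring what explicit-wait utilities like Selenium's WebDriverWait do):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Instead of a fixed sleep, keep checking so the test proceeds as soon
    as the page or element is actually ready.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)

# Usage sketch: `find_element` is a hypothetical lookup that returns
# None until the element has loaded.
state = {"loaded": False}

def find_element():
    return "element" if state["loaded"] else None

state["loaded"] = True
print(wait_until(find_element, timeout=2.0))  # prints "element"
```

The same helper can wrap any flaky check (element present, file written, API responding), which keeps the retry logic out of the individual test scripts.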