How do we maintain automation tests through project changes so that CI/CD is not broken?

Hi, we have started automation for a new project using a low-code tool.

This project includes both basic and main functionalities. The basic functionalities, such as user login, user activation by management, and user registration, are prerequisites for main functionalities like transactions or other features.

We have built many automated tests for these basic and main functionalities, and they are integrated into our CI/CD pipeline.

However, the clients have suggested improvements that alter web elements and the workflow. For example, consider the login interface, which is already automated and used as a basis for other automation tests.

Now, with changes affecting both the login interface and its elements, we face a dilemma:

  1. If we update the login automation to match these improvements before the change is merged, the other automated tests will fail, since they rely on login (illustrated below), and our daily/weekly regression runs will fail too.

  2. If we don’t update the automation tests, then CI/CD will break when the improvement task is merged.
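
For reference, all of our dependent tests drive login through one shared step. Conceptually it looks like this page-object sketch (Selenium-style, with hypothetical names and locators; our actual tool is low-code, so this is only an illustration of the dependency):

```python
# Illustrative only: a Selenium-style page object standing in for the
# shared login component of our low-code tool. All names are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """The one place that knows the login UI. When the client's
    improvements change the login screen, only these locators and
    this flow need updating; dependent tests call login() unchanged."""
    USERNAME = (By.ID, "username")                       # hypothetical locator
    PASSWORD = (By.ID, "password")                       # hypothetical locator
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")  # hypothetical locator

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


# Every main-functionality test reuses the same entry point:
def test_transaction_flow():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")  # hypothetical URL
        LoginPage(driver).login("demo_user", "demo_pass")
        # ...transaction checks continue from here...
    finally:
        driver.quit()
```

So a single login change fans out into every flow, which is exactly the dilemma above.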

We have a hybrid testing team composed of both manual and automation testers.

Could you suggest how we should proceed with these changes?

One thing that comes to mind is that you effectively have different test environments, or product code repositories, that behave differently.
If you want to keep going with those in parallel, you might also want to split your testing effort between the product versions. In terms of automation:

  • branch the automation repo code, one branch per environment or product code branch;
  • to execute each of them, different CI/CD pipelines might be needed (or, if you have a highly configurable system, track the development branch changes and trigger the appropriate automation launch; see the sketch after this list);
  • if the product code is part of a subsystem, then some extra work is needed to adapt other places as well (other pipelines, environments, databases).
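
Here is a minimal sketch of that branch-tracking idea, assuming your CI exposes the product branch via an environment variable (the mapping, the PRODUCT_BRANCH variable, and the pytest invocation are hypothetical placeholders):

```python
# Sketch: pick the automation branch that matches the product branch
# being built, then run the suite against it.
import os
import subprocess
import sys

# Hypothetical mapping: product branch -> automation repo branch
BRANCH_MAP = {
    "main": "main",
    "feature/login-redesign": "feature/login-redesign",
}

product_branch = os.environ.get("PRODUCT_BRANCH", "main")
automation_branch = BRANCH_MAP.get(product_branch, "main")

# Check out the matching automation branch and run the tests,
# propagating the test result as the job's exit code.
subprocess.run(["git", "checkout", automation_branch], check=True)
sys.exit(subprocess.run(["pytest", "tests/"]).returncode)
```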

I usually prefer to test the changes first, and only start automating the checks once they are stable and merged to the main product code branch.
A change might be slow to develop, deprioritized for six months, canceled, rewritten a couple of times, or redesigned. So you might not want to spend time too soon coding this stuff into another product (the automation).

Hello ipstefan,

Thanks for the response.

I also prefer to test first and automate once things are stable.

Since we follow a hybrid approach, and in line with your preference,
I was thinking of working in phases.
First phase: manual testing of the tasks on a branch of the product code, including testing of all the bug fixes. So, from a manual point of view, the task branch should be stable.
Second phase: once manual testing is done, fix the automation on a corresponding branch.
Third phase: once the automation is fixed, merge the product task branch into the main branch.

Here, until manual testing is done, there would be no changes to the automation for this task; we would only verify whether the automation will be affected, and analyse the ETA to fix it.

If the ETA is acceptable, only then would we take up the automation fix, once manual testing is completed;
otherwise, we would fix it after the merge.

The only drawback here is that the ETA has to be acceptable to take it forward, considering priority, dependencies, and so on.
The fix would have to be done and merged before the daily/weekly regression run, and the same goes for a fix made after the merge.

What do you say?

You could also dive deeper into the context of your software.
E.g.
From a business perspective, is there a point at which certain features will have reached a stable state, such that there won't be anything major left to change in them? If yes, then you could consider holding back automation runs on those areas until that point.
Generally, it is recommended to take up UI automation only when there's stability in sight; otherwise it's not worth the effort.
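
For instance, if your suite were pytest-based (an assumption; a low-code tool may offer an equivalent tagging feature), "holding back" could be as simple as a quarantine marker that the CI run excludes:

```python
# Tag tests for areas still in flux, then exclude them in CI with:
#   pytest -m "not quarantine"
# (Register the marker in pytest.ini to avoid the unknown-marker warning.)
import pytest


@pytest.mark.quarantine  # login screen is being redesigned
def test_login_with_valid_credentials():
    ...


def test_user_registration():  # stable area, always runs
    ...
```

Once the redesign settles, removing the marker brings those tests straight back into the run.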

Another approach to consider: could your effort be made easier if you focused more on API automation first?

Hello,

This project is focused on the UI first, and hence we started with UI automation. And in this project, there are ongoing UI improvements. So I am not able to see how we can handle the continuous integration flow when there are improvements, without affecting other tasks and runs.

Yes, we will do the automation once it is stable. But say a new improvement is merged: the CI will be triggered, and it will fail.
I suppose fixing automation that has failed in CI should be treated as a high priority, to avoid more issues, shouldn't it?
That means reworking the affected automation scripts against the latest changes on the main branch.
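
If the runner allows it (again assuming something pytest-like, which our low-code tool may or may not expose), a stopgap while the rework is in progress might be an expected-failure marker, so CI keeps exercising everything else without going red:

```python
import pytest


# Known breakage: the merged login improvement changed the screen.
# xfail keeps the CI report informative instead of failing the build,
# and the test shows up as XPASS once the automation fix lands.
@pytest.mark.xfail(reason="login redesign merged; automation fix in progress")
def test_login_with_valid_credentials():
    ...
```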

Anyway, I feel CI will not help much if there are multiple UI improvements in flight.

I want to make use of CI to run regression rather than doing it manually again once the task is merged.

Basically that: divorce from everyone's minds the idea that CI/CD is viable here, and get them instead to look at manually testing the flows that change first, and to verify that those flows are in fact correct. When you repair a UI test you have to intervene manually anyway, and do a lot of "human eyeball" work, so who are we fooling there? Nobody. Just explain that changes have to be manually tested for a while, until they are fixed and stable.

The funny thing is that when you use a UI automation tool in CI/CD you will see two ugly things. The first is that the UI changes, and a day later, just as you fix the tests, it quickly changes again. The second ugly thing is that the product owner sees the new UI two days later and asks for another change. People need a reality check and to realise that the task of UI checking is to prevent "regressions escaping", not to test the bleeding-edge changes. It's the hard truth, but I have seen an expensive low-code tool binned after about two years of effort because people tried to jam it into their feature development pipelines. It's not a fit, even if the marketing material says it is; it just is not a fit.

I should focus on ensuring the manual testing flow is intact and concise for UI/UX-related tasks, rather than emphasizing CI/CD. Once the UI/UX is stable through manual testing, I will proceed with fixing the UI automation.

What do you guys think about backend functionality automation?

For example, consider a scenario where an existing UI form submits a number value via a textbox. The backend server performs a calculation, such as addition, and displays the result back on the UI.

Now, suppose there’s a task to change this calculation from addition to multiplication. In this case, the change occurs on the server side with no UI changes.
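
One option I can imagine is a request-level check that skips the UI entirely. A minimal sketch, assuming a hypothetical /calculate endpoint that accepts JSON and returns the result:

```python
import requests

BASE_URL = "https://example.test"  # hypothetical


def test_calculation_is_now_multiplication():
    resp = requests.post(f"{BASE_URL}/calculate", json={"a": 3, "b": 4})
    assert resp.status_code == 200
    # After the change, 3 and 4 should multiply to 12, not add to 7.
    assert resp.json()["result"] == 12
```

A check like this would survive UI improvements untouched.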

How would you approach automating the testing of such backend functionality?