Locator changes and corresponding updates in automated checks: What sequence/procedure do you follow?

OK, this might sound dumb, but I feel no shame in asking: how do you work in the following scenario?

There is a CI/CD pipeline set up on GitLab. We have a pipeline for E2E tests and a CDK deployment pipeline for deploying frontend changes.
The CDK pipeline triggers the E2E tests when new changes are deployed on the relevant stage (dev, staging, master).
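For context, a setup like this might look roughly as follows in the CDK project's `.gitlab-ci.yml` (the project path, job names, and the `TARGET_ENV` variable are invented for illustration):

```yaml
# Hypothetical .gitlab-ci.yml in the frontend/CDK project
stages:
  - deploy
  - e2e

deploy_frontend:
  stage: deploy
  script:
    - npx cdk deploy --require-approval never
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^(dev|staging|master)$/'

trigger_e2e:
  stage: e2e
  trigger:
    project: my-group/e2e-tests      # hypothetical E2E project path
    branch: main
    strategy: depend                 # deployment pipeline waits for the E2E result
  variables:
    TARGET_ENV: $CI_COMMIT_BRANCH    # tell the tests which environment to hit
```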

Scenario: the frontend dev has changed a locator on a button, but when he deploys his changes the E2E tests fail because the locator change has not yet been made in the E2E tests.
In such a scenario, how should the E2E dev and the frontend dev work together so that the updated tests are already available when the frontend changes are deployed?
Currently, we trigger the deployment pipeline and stop it as soon as it reaches the test stage. Then we deploy the E2E test changes and let the E2E pipeline complete so the updated tests can run.

I assume the best way would be to deploy the front-end changes on a test/feature branch, then run the updated E2E tests against that branch.
Once they pass, the E2E test changes are merged into dev or staging, but the E2E pipeline does not trigger any tests (in our structure the E2E pipeline runs tests against the relevant environment). The front-end dev then merges his changes into dev or staging and lets the pipeline run through the stage where the tests are executed.
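In GitLab terms, the "merging E2E changes without triggering tests" part can be expressed with `rules`, so the E2E job only runs when another (deployment) pipeline triggers it, never on plain pushes. A minimal sketch, with invented job and script names:

```yaml
# Hypothetical job in the E2E project's .gitlab-ci.yml
e2e_tests:
  stage: test
  script:
    - npm ci
    - npm run e2e -- --env "$TARGET_ENV"
  rules:
    # Run only for multi-project triggers; plain pushes to dev/staging
    # merge the updated test code without executing it.
    - if: '$CI_PIPELINE_SOURCE == "pipeline"'
```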


I’ve gone through multiple approaches, and it always ends up the same way: the automation project stays completely separate from the product deployment pipelines.
The main product deployment can notify or trigger the start of the automation run, but that’s it.

What do you want to achieve through your process?
“how should the E2E dev and frontend dev work so that when the frontend changes have been deployed the updated tests are already available.”
Does this mean that you only want to deploy the product when there’s no failing E2E check? Why?

What I’ve partially done before is get tagged in the peer review of the branch where the change was made, review the changes, and, if necessary, note the old and new locators. Then I use that info to adapt the automation (without blocking anything, though). Or, if you have the time, review all the frontend changes.


Counter question: it sounds to me like the two of you are somehow not talking to each other about this. Why?

I work as a tester (including automation) in a Scrum team, and as soon as I realize (ideally during refinement or planning) that there are changes to the UI, I add a sub-task to adapt the UI automation. I often do this on the same feature branch the UI developer works on.

It sounds to me mostly like a communication problem.

Still, I don’t fully understand your situation; e.g. I don’t know what your branching concept is.
I see you making many assumptions which others don’t know about.
I’m currently short on time; maybe I can go into detail later. I still wanted to share my advice above.
Maybe others can read your situation better than I can.

e.g.

dev or staging what? Branches?
What is your branching concept?


You might just be having a workflow branching problem; the gitflow discussion might help: Git - branching strategies for automation

  1. The developer makes the product code change and also updates the test branch to reflect the ID change.
  2. They request a pipeline build and feed their branch (changelist) and the test branch changelist into the CI/CD.
  3. All tests pass.
  4. The developer addresses any late code review comments, then merges the code and test branches.
  5. Done.
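In GitLab, step 2 can be approximated by passing the test branch into the downstream pipeline as a variable. A sketch, with invented project path and variable name (`strategy: depend` makes the product pipeline wait for the test result):

```yaml
# Hypothetical job in the product project: trigger the E2E project and
# tell it which test branch to exercise alongside this product change.
run_e2e:
  stage: test
  variables:
    E2E_REF: main        # override with the paired test branch when triggering
  trigger:
    project: my-group/e2e-tests   # hypothetical E2E project path
    branch: main
    strategy: depend
```

On the E2E side, the test job can then fetch and check out `$E2E_REF` before running, so the product change and the matching test change are exercised together before either branch is merged.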

I agree with Sebastian’s point. I believe this type of issue occurs in many places, but the real problem here is the lack of communication. The solution suggested by Conrad is good, but only if it’s feasible: not every company has the infrastructure to implement this kind of approach, and it might not be practical from a time-to-market perspective to set up continuous deployment/continuous integration pipelines that also support developer branches. So, as Sebastian said, your primary focus should be on creating a solid communication process.

In my company, we have agile testers embedded in product teams, and we’ve set up Confluence pages called “contract locators” where all the locators used in tests are described; if a developer aims to change one, he or she should update the page and, of course, share it. Even with this, we still occasionally have someone merge code that breaks the pipeline :joy:.


In my experience this is often a communication issue - the UI and automation devs haven’t discussed this dependency and how best to manage it.
It’s difficult to suggest how to solve that in your context but often just getting these parties together to discuss how they want to approach this works well.

At one company I worked at, this discussion led to the realisation that the UI devs rarely used the name attribute, so they agreed to always give interactable elements a unique name and to notify the automation team if one changed. This worked well for that context.

Often it can depend on how integrated the dev and automation changes are. If automation is often written after the dev team has delivered then it can be a challenge since there is often a disconnect in the value of that automation to the dev team (so they have little incentive to resolve this). On the other hand, if the automation is part of the feedback loop to the devs (for example, PRs can’t get merged until these tests run), they become incentivised to keep things in sync either by notifying the automation devs or making the change themselves.

Depending on the complexity of your locators and the UI, you might be able to create something that monitors the source for changes to element locators. This may work well depending on which attributes you use for locators and the underlying UI framework.
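As a sketch of that idea, a small script could scan a diff for changed locator attributes and flag them for the automation team. This assumes `data-testid` attributes and unified-diff input (e.g. the output of `git diff origin/main`):

```python
import re

# Matches data-testid="..." values; adjust for whatever attribute you use.
TESTID_RE = re.compile(r'data-testid="([^"]+)"')

def locator_changes(diff_text: str) -> dict:
    """Return {'removed': ..., 'added': ...} sets of data-testid values in a unified diff."""
    removed, added = set(), set()
    for line in diff_text.splitlines():
        if line.startswith("---") or line.startswith("+++"):
            continue  # diff file headers, not content lines
        if line.startswith("-"):
            removed.update(TESTID_RE.findall(line))
        elif line.startswith("+"):
            added.update(TESTID_RE.findall(line))
    # Values appearing on both sides merely moved; report only genuine renames.
    return {"removed": removed - added, "added": added - removed}
```

A CI job could run this over the merge request's diff and fail (or just post a comment) whenever `removed` is non-empty, so the E2E suite gets updated before the rename lands.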

You can also look at some of the “self-healing test” solutions (or roll your own) - these can work well in some contexts but can also introduce some risks.

Answering the mutual concern of @billmatthews and @sebastian_solidwork: there’s no communication gap between me and the two devs. It’s just that I wanted to get insights from the community so I can improve our process.
From what I’ve read, I guess there’s no “best practice” in this regard, and what’s best is what we’re already doing, i.e. working out an approach that suits both devs.


When you only have one feature in “flight” at a time, just picking up the phone works, so yes, my CI/CD gitflow suggestion only pays off once you hit at least three teams or at least three features in flight.
The term CI/CD is quite broad, but lately for me it means dedicated machine groups for builds, a private cloud with multiple test sandboxes, and a dedicated person who looks after it all. That’s not to say you cannot call a single build machine that runs all your tests CI/CD as well; it’s just less flexible until you scale up, and then it becomes inflexible again for a while. Yes, context is king, but the pain of a broken locator should never be solved with self-healing. That’s a wart on a wart.
