Does anyone have a way to link UI code changes to the tests in the automation pack?

Hi

Sorry for the long title, but it's an awkward one to phrase.
Hopefully this will explain properly.

I was having a chat with a tech lead who wants the developers to be more aligned with the automation tester. The issue is that the developers make front-end changes that break the automated tests, because they aren't really aware of what the automation pack does or which functions it covers.
There is an easy fix: we do a demo so they can see how it's put together, and then if a developer makes a code change, they will think to tell the tester.

But this doesn't really go as far as I think it should. How does a developer know whether a field is covered by an automated test (we have a backlog that we are working through)?
In my fantasy land, a developer would make a code change, the tester would be notified of the code that he or she needs to update in the automation pack, and a new task would be added with the details included.

The issue is: how can we create a mapping between a field and the automation tests relating to that field, in order to set something like that up? We are using Selenium with C# and have created a Page Object Model (POM) framework. I can't help thinking that there must be other people who have cracked this one, so any working suggestions would be much appreciated.

Thanks

4 Likes

Imagine for a moment that a dev makes a change to an internal module and that causes one of the CI tests to fail. Who fixes the test? Well, mostly a tester will raise a bug, then a dev will have to explain how to fix the test… or…
You could get the developers to be responsible for more of the automated test writing. If your code change causes a test to fail, it's not the tester's job to fix the test. I mean, it might be if you had that kind of structure, but it's not helping the dev if their code cannot pass CI, because now they are blocking their own progress if they are not allowed to merge. You are on the right route using POM, Steve, but it's not the entire picture. I would make it easy for devs to run any single UI test whenever they want to, and thus easy to fix it at the same time.

I would also have a conversation and see if it's a good time to move your test code to sit in the same repo as the product code. That will definitely force the issue, even if it does not get the developers writing the test code (that's a much harder ask).

I'm not really answering your question. I think you are after something like a "high-level" templating tool. Perhaps, if your app uses an app framework to generate the UI (say it's a web app), and that framework exports all of the controls somehow, or makes them available through internal UI-layer hooks, then a test can check for new controls at runtime and raise an alarm. One way to do this might be if the test framework can read all the string translations, or if it can query all the existing controls. Where I work we have an internal API which we use, so it is possible to count all the controls… but if every POM page asserted the number of controls, it might become nasty when a developer deletes a control that was pretty cosmetic anyway and it causes every single test case to barf.

2 Likes

Thanks Conrad.
We have dedicated testers working on the automation, so we are not looking at getting developers to write it, as the testers come at it with a testing mindset, which we need.
A templating tool sounds like it might be the answer, but I really don't know. I need to do some more research; someone may have found a good solution. My idea may not be the right thing anyway, it was just a random thought as a starting point.
Cheers

2 Likes

Well, on the tool question, my experience is that your first idea is often the one you should push for the most. In my experience, probably because of the niche jobs I have taken, nobody else has published the tool that just "sits nicely in your hand". So finding that tool is very often going to take about as long as building one yourself, a process which in itself will generate fresh requirements and understanding of the domain problem.

Making the call whether to spend the effort is hard, and it’s an art form to be able to make these “frugal” experiments. I do wish you luck.

3 Likes

Here’s some ideas:

  • the developers of the automation product monitor the changes in the actual business product; there should be a code repository that can be accessed and read;
  • the developers of the automation product work in the same team as the business product developers; everyone develops both product code and automation code; it shouldn't be any different from unit, integration, or other similar automated checks;
  • have no UI-level automated checks; find a lower level to automate instead, and do some manual testing of the UI changes;
  • the automation product developer does peer reviews and adapts the HTML for both the business product (for easier element finding) and the automation product;
  • let the automation product fail on a test environment; you'd want to know that there are UI changes in the business product and which ones those are, so go get some information/learning: who changed it, why, what's the business need, how did they interpret the change, what actually changed, is there something that shouldn't have changed, etc.;
  • have the business developers inform the automation developers that they plan to change X by Y date; keep in contact with them to be aware of the progress and the actual changes;
  • delete automated check code that gets flaky too often; if there are too many UI changes, maybe it doesn't make sense to automate any checks yet; keep it light;
  • add the code to the same repository, and/or link the deployment jobs of the business and automation products, so the developers can't deploy the business product until the automation product is adjusted;
  • disable the job that regularly runs the automation product in a production environment, and only execute it when changes to the business product and the automation product have been made and deployed in sync; something like dependency versioning - an old version of the automation will not run, as it's incompatible with a newer version of the business product.
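The last idea in the list can be sketched as a simple compatibility gate in the CI job: the automation pack declares which product versions it works against and refuses to run otherwise. A minimal sketch in Python for illustration; the function name and version numbers are invented, and in a C#/Selenium pack this would live wherever the CI entry point is.

```python
# Hypothetical "dependency versioning" gate: the automation pack declares
# the product versions it is compatible with, and the CI job checks the
# deployed version before running any UI checks.

# Updated by the testers each time the pack is adjusted to match the product.
COMPATIBLE_PRODUCT_VERSIONS = {"2.4.0", "2.4.1"}

def should_run_automation(deployed_version: str) -> bool:
    """Run the UI pack only against product versions it was written for."""
    return deployed_version in COMPATIBLE_PRODUCT_VERSIONS
```

The CI job would call this first and skip (rather than fail) when the versions are out of sync, which keeps out-of-date automation from producing noise.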
3 Likes

Could you add to your automation something that spits out all the URL / selector pairs that they cover, so that at least the developers can do a manual check against their code?

If you can't get the automation to do it directly, it might be possible to create something external that does it by reflection: i.e. some code that loads the tests' DLL, finds the page objects within it via some common base class, and within each page object looks for a property or constant holding the URL, plus all the properties derived from the Selenium page-element class (and pulls the selector from each page element).
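In the C# framework that would be reflection over the test assembly. Purely to illustrate the shape of the idea, here is a minimal Python sketch with invented class and attribute names: every page object inherits a common base, its locators are plain attributes, and a dump function discovers them all.

```python
class Locator:
    """A (strategy, value) pair, standing in for a Selenium By locator."""
    def __init__(self, by: str, value: str):
        self.by = by
        self.value = value

class PageObject:
    """Common base class; each page object declares its own URL."""
    url: str = ""

class LoginPage(PageObject):
    url = "/login"
    username = Locator("id", "login-username")
    submit = Locator("css", "button[type=submit]")

def dump_selector_pairs() -> list:
    """Walk every page object and list (url, strategy, selector) triples."""
    pairs = []
    for page in PageObject.__subclasses__():
        for attr in vars(page).values():
            if isinstance(attr, Locator):
                pairs.append((page.url, attr.by, attr.value))
    return sorted(pairs)
```

A developer could then diff or grep the dumped list against their front-end change before merging, which is exactly the manual check suggested above.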

4 Likes

Hey Steve,

Have you thought about using qa selectors? I wrote a short blog post a few years back which explains the idea. I’ve not used these in a while but might be something you could explore.
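For anyone unfamiliar with the idea: the front end adds a dedicated test hook attribute (commonly `data-qa` or `data-test-id`) to each element, and the tests build selectors from it instead of from fragile XPath or styling classes. A tiny sketch, with the attribute name as an assumption:

```python
def qa(name: str) -> str:
    """Build a CSS selector from a dedicated data-qa test hook attribute."""
    return f'[data-qa="{name}"]'

# In a Selenium test this would be used along the lines of:
#   driver.find_element(By.CSS_SELECTOR, qa("submit-button"))
```

Because the attribute exists only for testing, developers can refactor markup and styling freely as long as they keep the hook in place, and its presence in the HTML signals "a test depends on this element".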

Viv

3 Likes

Thanks Stefan, Bob & Viv - you have given me some food for thought. I will look at the selector pairs and qa selectors ideas and see if that could be a solution.

3 Likes

Having hooks/logs to tell you which QA selectors were hit during a test run would give you a basic "coverage statistic" - but that would require logging in the page object. Is that what you are proposing with "QA selectors"?

2 Likes

Hey Conrad, I was thinking QA selectors could be used simply as a way to indicate which elements are used in UI tests when people change the front end, and perhaps as an easier way to write selectors for UI tests. I wasn't thinking of coverage and/or logging within page objects.

To be honest, I don't think you can beat a good bit of communication and collaboration, and personally I don't think automating alerts/reports on what is covered or needs updating to alert other team members sounds like a good idea. Keen to follow this thread though and see what Steve comes up with :slight_smile:

3 Likes

There might be some insights from AutomationPanda on the .NET "Screenplay" tactics, which might be portable to non-.NET frameworks and could give us this too.

We've decided to use QA selectors on a couple of products to mitigate the issue of changing XPath or class names, so we'll see how that works. It may be that we just put in place a better system of communication: if a dev changes a field with an automation selector, he or she informs the testers. But I will ask the tech lead on one of the teams if he could find a way to auto-flag any changes.
If we get it working, I’ll add a reply here.
Thanks for your input :grinning:

4 Likes

Congrats on finding a way forward, @sjwatsonuk. Good luck with the experiment.

Excellent!

1 Like

Two things, Steve: dashboards and ownership. This may sound preachy, but shifting product-quality "ownership" around a bit in the org, or between teams, might be a longer-term direction to pull in anyway.

Not everyone likes dashboards - I know I don't - because dashboards are a cleanliness thing. Once again, not a technical tactic, but I find that whenever you "apply pressure" to start doing the right thing, it creates a chance for an engineer to solve the problem you created when you started applying that pressure.

1 Like

I went down the route of treating all locators as data that could be put into actionable structures - in my case, configuration files that describe web screens and how to get to them. That has the downside of more complex, harder-to-debug framework code, but the impact of changes to the system under test is limited to configuration changes on the framework side. When it comes to screen changes involving adding or removing fields, the greater impact is on the actual test cases.
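A minimal sketch of the locators-as-data approach, with invented screen names and locators: when the system under test changes, only this data changes, not the framework or the tests.

```python
# Screen descriptions kept as data; in practice this would live in
# JSON/YAML configuration files rather than in framework code.
SCREENS = {
    "login": {
        "url": "/login",
        "elements": {
            "username": ("id", "login-username"),
            "password": ("id", "login-password"),
        },
    },
}

def locator(screen: str, element: str) -> tuple:
    """Resolve a logical element name to its (strategy, value) locator."""
    return SCREENS[screen]["elements"][element]
```

Tests then refer to elements only by logical name (`locator("login", "username")`), so a renamed id in the product is a one-line data edit on the framework side.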

3 Likes

Well, that's what TDD is there for.
The developer implements an automated check for the behavior he/she wants to have,
and then drives the development using this check.

1 Like

Steve is after a non-process change here, @joaofarias, and as a technician I'm also tempted to push the "fuzzy" option or the people-change buttons. But when the problem hits a certain scale, both in time and volume, people become the weak link.

I use a page factory and object model, so technically every test goes through my framework. All I have to do is add a prologue to every internal place in the plumbing of my framework and I will get a log of every page and element we hit, with no need to touch a single line of test code. But it's a case of what I do with that log: will it merely tell me which pages get the best test coverage? That is probably what we are after, if I also had a list of all the hidden pages in the system. Likewise, I see a similar issue with a test that counts which controls in the UI get tested: on a web page, some controls are intentionally invisible, and that might require some effort. But I'm keen to know how this develops for you, Steve.
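The "prologue in the plumbing" idea can be sketched as a single choke point in the framework through which every element lookup passes; nothing in the test code changes. All names below are invented for illustration:

```python
# Every element lookup in the framework funnels through this one function,
# so logging here captures a (page, selector) record for every test run
# without touching any test code.
coverage_log = []

def find_element(page_name: str, selector: str):
    """Framework-internal lookup: record the hit, then do the real find."""
    coverage_log.append((page_name, selector))
    # ... the real framework would call driver.find_element here ...

def pages_touched() -> set:
    """Summarise the log into the set of pages the run exercised."""
    return {page for page, _ in coverage_log}
```

The open question Conrad raises still applies: the log only shows what the tests touched, not what exists in the product, so it needs a full inventory of pages/controls to become a real coverage figure.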

1 Like

Hiya, it's early days, but we have had one of our front-end devs work with the tech lead and tester to propose a set of unique IDs that will be assigned to each element, and these will not change. There is a naming convention, so any new elements will follow that pattern, which should make life easier.
If a button or field is changed in any way, the dev can inform the tester of that ID so they can allow for this in the automated tests and make whatever changes are needed.
The IDs are being implemented now, so we will know how it works in a few weeks' time once they are all done and the automated tests are updated.
Thanks for all the suggestions, much appreciated.
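If the convention is machine-checkable, a small validator can guard it in code review or CI so new elements can't drift from the pattern. The pattern below is a hypothetical example of such a convention (`qa-<page>-<element>`, lowercase and hyphen-separated), not the one Steve's team chose:

```python
import re

# Hypothetical convention: qa-<page>-<element>, lowercase, hyphen-separated,
# with at least a page part and an element part after the qa- prefix.
ID_PATTERN = re.compile(r"^qa-[a-z0-9]+(-[a-z0-9]+)+$")

def is_valid_test_id(element_id: str) -> bool:
    """Check an element ID against the agreed naming convention."""
    return ID_PATTERN.fullmatch(element_id) is not None
```

Run against the rendered HTML (or the source templates), this turns "please follow the convention" into an automated check rather than a reminder.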

3 Likes