Should the development of new features or bug fixes not break the e2e tests?

Hey all. First time posting here :slight_smile:
I want to vent a little bit about a situation I am going through.

disclaimer: I am not a native English speaker, so please ignore (or correct, if you like) my mistakes :slight_smile:
I am a QA Engineer whose whole career (5 years) has been focused on automation.
I have worked with lots of test suites, some made by other people and some created from scratch by me. I am mostly focused on e2e, although I can also handle API testing without any problem.

I have been on the same project for about 2 years. But lately things are changing… There’s this new guy who is the new CTO of my client’s company. The project is going through a lot of changes and will eventually end by the end of this year (the client doesn’t want to keep the contract anymore, for reasons that aren’t my concern).

My work, since the beginning, has been highly praised by the client and by everyone on my internal team. I am not perfect, far from it, but they really value the QA’s work and I am proud of that.

So, the project was going pretty well in my hands until this guy came along (he’s a pain in the ***).
I feel really sorry for anyone who is going to work with him… He likes to boss people around, and I can see that he fails as a leader.

This week, we had a discussion where he kind of bashed me because my e2e tests broke while a HUGE new feature was being implemented (not even merged yet). He said that no tests should fail and that the e2e suite should always be a reliable source for validating the application, regardless of the change. He said a failure would only be acceptable if there was a requirement change; anything besides that should not break any test.
(Just for context, the tests failed because we added a completely new product to the application, which caused some backend changes; the tests needed to be updated to include the new parameter added to the API calls, and without it the calls would fail.)
If it were about some bad selector choices, that would be fair. But I already use data-testids for every single element I want to interact with.
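For anyone curious what "updating the tests for the new parameter" can look like in practice, here is a minimal sketch (all names hypothetical, not the actual project code). If request payloads are built in a single helper instead of hand-written in every test, a newly required backend field becomes a one-line change:

```python
# Hypothetical sketch: centralize API request payloads so a newly
# required backend field only has to be added in one place.

def build_order_payload(product_id: str, quantity: int = 1,
                        product_line: str = "default") -> dict:
    """Single source of truth for the order API body.

    When the backend starts requiring a new parameter (imagined here
    as `product_line`), only this helper needs updating; every e2e
    test that calls it picks up the change automatically.
    """
    return {
        "productId": product_id,
        "quantity": quantity,
        "productLine": product_line,  # new required field, added once
    }

# Tests call the helper instead of hand-writing JSON bodies:
payload = build_order_payload("sku-123", quantity=2)
```

The suite still needs a human to notice the requirement changed, but the fix stays small and local instead of touching dozens of test files.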

We argued a lot, and he kept saying “that’s not questionable” (that’s how I know he’s an awful leader), and that if it happens again, the dev and I should contact him so he can help find a way that won’t break the e2e tests.

Honestly, I am really pissed off by this whole situation. It’s not the first time he has complained and tried to change the way we work.
If I didn’t know the project was about to end, I’d talk to HR about moving me to another project.

What do you guys think about this? How often do you have to do maintenance on your automation code? Does what he says make sense?


@lufis It’s very subjective, but the automation framework should be modular enough to allow quick fixes when bugs are delivered by the dev team; only then do we get ROI from automation.

New feature: if a new feature lands in the middle of an existing E2E test case and has a dependency on the next step, then it is definitely going to affect the E2E test case. BUT if the new feature is added on top of existing functionality, or isn’t covered by the existing E2E tests, then they should not be affected.
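One common way to get that "modular enough for a quick fix" property is to put a thin client layer between the tests and the application, so a backend change is fixed in one module rather than in every test. A minimal sketch, with entirely made-up names (in a real suite the method would issue an actual HTTP request):

```python
# Hypothetical sketch of a modular test layer: tests talk to a small
# client class instead of making raw HTTP calls, so a backend change
# (new endpoint shape, new field) is fixed in exactly one place.

class CheckoutApi:
    """Thin wrapper the e2e tests use for all checkout-related calls."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def create_order(self, product_id: str) -> dict:
        # Returns a description of the request to keep the sketch
        # self-contained; a real implementation would send it.
        return {
            "method": "POST",
            "url": f"{self.base_url}/orders",
            "body": {"productId": product_id},
        }

api = CheckoutApi("https://staging.example.test")
request = api.create_order("sku-123")
```

If the backend then moves orders to `/v2/orders` or adds a required field, only `CheckoutApi` changes and every test keeps passing (or fails in one obvious spot).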


Thanks for the answer. I agree with what you’ve said, and I came to the conclusion that he simply doesn’t understand what it took to add this new feature. Which, in the end, is also a HUGE requirement change, since the devs were never told that a new product would exist.
I couldn’t have predicted it either.


hi Daniel,

good question and one i’m not quite sure how to answer, but here’s my perspective:

personally i think that failing e2e tests are GREAT, they could mean (but are not limited to):

  • we’ve caught a bug
  • we need to challenge whether we still see value in the e2e test
  • we missed a requirement to UPDATE the e2e tests

in this particular scenario i feel like perhaps we missed the requirement of updating the e2e tests based on the changes, but we are all human and sometimes this happens.

A simple question your team could collectively ask for every ticket that goes through refinement/analysis: will we need to update the e2e tests? if so, GREAT, i can get my hands dirty with some code. if not, at least we asked the question.
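To make that question hard to forget, it could live in the team's ticket or PR template. Just a sketch to adapt to your own tracker, not a prescribed format:

```
## Refinement checklist
- [ ] Does this change add, remove, or rename any API fields, flows, or products?
- [ ] Will the e2e tests need updating? If yes, note it on the ticket and loop in QA.
```

Asking it per ticket turns "the e2e tests broke unexpectedly" into "we planned the e2e update alongside the feature".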


From what you described, it seems to me that your tests were doing their job well: they failed due to a change in the system, the usual regression-testing outcome. Oh, and that CTO is a dick!

I have to agree with the general consensus here.

a) Your automation tests failed due to a change - that’s what they are made to do.
b) @jamie6bly’s idea of adding that question to your tickets is great, and I would suggest you get it implemented.
c) Some CTOs just do not understand; try to walk them through the SDLC and explain at which point in that process E2E tests get fixed.