Regression bug because of something on develop, how to proceed?

Hi, I’m a new tester (1 month of experience) and today something happened that made me unsure about how to proceed.

I test our system on a shared online development environment. Today some pages broke because of a change a dev made to the database, and those pages will be down for 1-2 days until the dev creates a pull request and the new code is merged and deployed to the shared dev environment.

Should this be reported as a regression bug, with a card created on our board to fix it? I believe it is a regression bug, but the dev told me it is not; in his words, it is "a problem during development time".


Can you give us some more details about what went wrong? Why do you think that you have a regression bug?

Well, to be honest, without additional context (you probably signed an NDA and can't, and shouldn't, share specific details) I can't say with certainty. Someone would need to do a root-cause analysis to determine whether it is just a transient environmental issue or whether the system indeed regressed, i.e. an unforeseen consequence of recent changes to the software.

Ask around with the team and show proactivity, but be diplomatic and avoid blame and accusations; these things happen all the time. Emphasize that you are new, that you are eager to learn, and that you are willing to contribute.

I think that pointing out things like this (even if it is just an issue caused by the deployment, an improvement can be suggested) can be a plus for you at any decent company, as long as you communicate it properly.

Hope this helps to give you at least some general ideas on how to proceed. :nerd_face:


So I guess it is a transient environmental issue: the dev will send a PR with a new feature that implements new things in the system and also fixes the broken pages (the pages broke because the feature's database changes were applied before the corresponding code was in the system).

For now it is broken, but we know the root cause is that the feature was implemented in the database before its respective code was in the system.

So… no new card to fix this as a regression bug?

Yea, if that is the case, just verify that the issue is no longer present after the PR with the fix gets deployed. If it is still there, a defect should be formally reported.

If it had been live in production, it would have been a regression.
It's still a "defect", but any company that worries far too much about which bugs are regressions versus integration bugs versus unclassifiable needs to write a better rule book. Your job as a tester is to "talk" to the ship-ability of the product. In my opinion it's a waste of time to worry about classifications while there is a house fire.

It's hard to be diplomatic, as @mirza points out. But that's how you will get things fixed quicker, and as long as you can use it to learn about the software development lifecycle in more detail, you all win. You might also want to talk about "planned breakages" in future, because sometimes a break can prevent other kinds of testing, so getting a warning helps other teams plan their work. One of the best ways for devs to warn you that things might blow up soon is a daily scrum meeting where dev, test, and "ops" are all in the same room together.

It also helps to think of software creation as a very, very long conveyor belt (in my first job we did work with coal mining). If a developer breaks things, they either need to come under pressure to revert their change completely and prepare a better one, or they need to plan the breakage and plan the fix, but not on a Friday afternoon. Any downtime ends up costing a lot of other people and teams time. There is also a greater risk during downtime: any testing you could not do means other teams are unable to contribute their code either. It does not help when pressure rises from external teams who also get blocked, so diplomacy is key.

This might also mean you need to learn how to look at code changes and even how to revert a change, if you can get agreement to do so. In this case someone changed a DB, so none of this advice about code will help directly, but that possibly points to a systemic architecture flaw: all data in the DB (I'm assuming this is not production) should be test-only data, and the DB schema, procedures, and triggers should live in the same version-control repository as the code.
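To make the "schema in version control" idea concrete, here is a minimal sketch of a migration runner: schema changes are ordered, named steps committed alongside the code, and a tiny function applies any that have not yet been recorded. The file names, table names, and SQLite usage here are all assumptions for illustration, not your team's actual setup.

```python
import sqlite3

# Hypothetical migrations, one entry per versioned .sql file in the repo.
MIGRATIONS = {
    "001_create_pages.sql": "CREATE TABLE pages (id INTEGER PRIMARY KEY, title TEXT)",
    "002_add_author.sql": "ALTER TABLE pages ADD COLUMN author TEXT",
}

def migrate(conn):
    # Record which migrations have run, so re-running is safe.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for name in sorted(MIGRATIONS):  # apply in deterministic order
        if name not in applied:
            conn.execute(MIGRATIONS[name])
            conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # second run is a no-op: both steps are already recorded
cols = [row[1] for row in conn.execute("PRAGMA table_info(pages)")]
print(cols)  # columns from both migrations are present
```

The point is that the DB change ships in the same PR as the code that depends on it, so the shared environment only moves from one consistent state to another.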


Does it make a difference?
A bug is a “Request for Change”.
The team organizes its documentation (e.g. a task board) in whatever way is most convenient for efficiency. When it's time to apply the change, one makes the change and ships it to the users.

What's the value of having an additional classification?
Isn’t it overprocessing?


Backwards, the diagram reads for me as DOOWMIT.

Which is hilarious, DoowMit! Sorry, just had to point that out, it’s how I’m wired, it made me laugh on a Friday!


Should this be reported as a regression bug, with a card created on our board to fix it? I believe it is a regression bug, but the dev told me it is not; in his words, it is "a problem during development time".

What would be the consequences of reporting or recording this problem as a regression bug? Would someone or something be affected in an important way by classifying it one way or another?


I'd take this more as an opportunity to clarify your understanding of the "shared online development environment". It sounds like whatever you were testing had a dependency on a shared resource that got broken. Figure out whether this is normal, what the expectations are about dependent services in dev, and so on. At my current employer, dev is the wild west, and services can go up or down. Teams try to keep their applications working, since they realize dev provides the first real integration environment, but there are no guarantees, SLAs, etc.

It's also a great opportunity to learn some SQL and to understand what broke and how.
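As a self-contained illustration of the failure mode described in this thread, here is a tiny Python sketch (using sqlite3, with invented table and column names) of how a DB change applied ahead of the matching code change breaks the deployed code: the old query suddenly references a column that no longer exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO pages (title) VALUES ('Home')")

# The currently deployed code works against the old schema.
assert conn.execute("SELECT title FROM pages").fetchone()[0] == "Home"

# A dev applies the new feature's schema change directly to the shared
# environment, before the code that uses it has been merged.
conn.execute("ALTER TABLE pages RENAME COLUMN title TO page_title")

# The deployed (old) code now fails: the column it queries is gone.
try:
    conn.execute("SELECT title FROM pages")
    broke = False
except sqlite3.OperationalError:
    broke = True
print("old query broke after schema change:", broke)
```

Whatever you call the resulting bug, this is the mismatch a root-cause analysis would surface: schema and code moved out of step on a shared environment.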