I’m still doing a bit of hypothesising, so sorry if I make any incorrect assumptions below!
My recommendation would be to talk with your team about your Definition of Done or working agreement. If we look at it from the developers’ perspective, they don’t need to fix the tests before they release the software, so why should they bother?
A Definition of Done can be really powerful if the team respects it. That means that people in the team need to hold themselves, and each other, accountable when this standard isn’t being met.
On the other hand: are the tests failing often? Does the team see value in the tests that are failing? Do the failures highlight a real problem in the product, or are the tests flaky?
If people don’t see value in the tests they’re running then they’ll also be less likely to fix them. I’d suggest that any test that doesn’t provide value should be deleted from the test suite.
Hope this helps! Let me know if I’ve misunderstood anything, happy to keep this conversation going.