What's the longest time you've waited from raising a bug to it getting fixed?

I raised a bug, one of my very first, early in my career. Citrix's implementation of Remote PC Access would prevent you from regaining control and logging in locally in a specific scenario: when the machine being controlled went to sleep.

Instead of fixing the bug, whose root cause was an obscure virtual keyboard driver, the team responsible chose to release guidance to turn off all sleep settings and said they wouldn't support the feature on laptops.

This bug, which I found long before we went to production with that version, not only went to production but stayed there for well over 5 years! By that point I had been made redundant from Citrix and had moved on; I only found out it was fixed when a Product Manager for the feature celebrated the fix on Twitter. That made me smile, to say the least.

So, there you have it: I don't remember exactly how long, but over 5 years. Can you beat that?

What about the quickest fix? I’ve paired with developers, and together we have found and fixed issues during our session. At that point, did we fix a bug or prevent it? Either way, I highly recommend pairing over a bug report if you get the chance.


I reported a bug in 2012 and it still isn’t fixed :stuck_out_tongue:

The quickest fix would probably be from even before development, during the writing of the business idea. I was present at the meeting and noted 'please remember X or Y, and that won't work because of Z'. But I guess that would be more preventing a bug than fixing one.

So: pairing with developers. When they demo something I'll ask them to try a few things, and they fix the bugs right away.


I’m pretty sure there are active bugs that have been in our system for most of the 10 years I’ve been working here. They’ll probably never be fixed: many of them stem from disagreements over how the company, as it used to be, prioritized user experience. The assumption used to be that users would be rigorously trained and could accept being able to completely mess themselves up, as long as the core functions stayed more or less unchanged.

My description of the philosophy: “We’re so helpful we not only allow our users to shoot themselves in the foot if they want to, we hold their hands while they’re doing it, and will even pull the trigger for them.”

If you can’t tell, I much prefer a UI that doesn’t let things through which shouldn’t be allowed. If the back end logic says that only every second odd number is valid, then the front end only allows the user to enter valid data.
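To make that concrete, here is a minimal sketch of the idea: both layers call the same validation predicate, so the front end can never accept input the back end would reject. The "every second odd number" rule is interpreted here as 1, 5, 9, 13, … (`n % 4 == 1`); the exact rule and the function names are illustrative assumptions, not anyone's real code.

```python
# Hypothetical shared validation rule: the back end defines it once,
# and the front end reuses it instead of duplicating (and diverging from) it.

def is_valid(n: int) -> bool:
    """Back-end rule: accept every second odd number (1, 5, 9, 13, ...)."""
    return n % 4 == 1

def ui_accepts(raw: str) -> bool:
    """Front-end check: parse the user's input, then apply the same rule."""
    try:
        return is_valid(int(raw))
    except ValueError:
        # Non-numeric input never reaches the back end.
        return False
```

The design point is simply that validation lives in one place; whether that is a shared module, a generated schema, or an API call is a per-project choice.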


Not the longest, but still impressive:
I once told our product manager (just verbally, not via the issue tracker) that there was a gap in our workflow which caused the data in use to keep growing, because each day's data was connected to the previous day's. Avoiding it required an optional, organisationally demanding step to cut off the previous day's data.
I was told that the users would do that. Maybe he did not understand (which might be my fault).

Guess what the stressed users had not done …

Months later we had to deliver a quick fix, because the production system had become barely usable due to long loading times.

I still feel a bit sad about that. It was early in my career.
I guess it was the most expensive insight I've had.


If it was already in the code, you fixed it.:slightly_smiling_face:
Still quite early.:+1:

I do similar things from time to time.:raised_hands:
It’s just minutes from observing the bug until having a new version with a fix running.


Ah yes. “The users will handle that step”, “The users will never make that mistake”… I sympathize with you. I had one of those - and two days later I had to talk the users through fixing the problem that was caused when they did make that mistake. While not allowing a hint of “I told you so” into the call.

It’s an expensive insight, but a worthwhile one. Human nature will trump any and all engineering, so the engineering had better be bomb-proof if you don’t want to be spending large amounts of time fixing something you could have prevented.

Alas, we don’t usually have the ability to make that call.


These war stories are rather interesting! Thanks @fullsnacktester for kicking it off. I can’t say that I’ve beaten that, but I’m aware of bugs I’ve raised that just suddenly become “fix it in the redesign”, and by the way, 6 years on there’s been no redesign. I remember raising defects even in our external resources (Swagger docs), and that didn’t faze us. It raises the question of how much risk a business is comfortable taking and how much this affects customers. I remember an analogy from @ThePirateTester once on his YouTube channel about the dirty dishes piling up, and it feels like that’s sadly a normal, almost numbing feeling.


One of the products I’m working on is over 25 years old, and I’m sure there are issues I logged nearly 20 years ago that hit the “won’t fix” heap back then and still exist (some because customers actually liked the benefits they gained from the bug, even if, in theory, it wasn’t intended behaviour). It’s a case of being pragmatic. Our current open, not-in-development bug list on our largest product (millions of lines of code and 25 years old) stands at 15 bugs. None are causing functional issues, and in all likelihood no one would actually realise we’d fixed them unless we told them, so they’re staying on hold until the devs get that fictional “free Friday afternoon”. Over the last 6 months we’ve actually discovered more bugs that are many years old than from the latest production release. Just a reminder that we’ll never find all the bugs - that’s the nature of software.


I can’t remember exactly, but close to 5 years also. Any time we tested our app with more than one user on the same page, we’d see 500 errors in the console. Our app would retry 3 times after a 500 error, but frequently, as we were testing, the retries kept getting 500s. I showed the devs; often they were sitting WITH me doing the testing. They just shrugged and said nobody was seeing that in prod. I filed a bug (our product was a project-tracking tool, which included the bug reports). It never got prioritized.
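The retry behaviour described here can be sketched as follows. This is a hypothetical illustration, not the app's actual code: `send` stands in for whatever makes the HTTP request and returns a status code. The sketch also shows why retries didn't help in this story: against a persistent failure like a database deadlock, every retry sees the same 500.

```python
import time

def request_with_retries(send, max_retries=3, backoff=0.0):
    """Call send() once, then retry up to max_retries times while it returns 500.

    Retries only paper over *transient* errors; if the server is stuck
    (e.g. deadlocked), the caller still ends up with a 500.
    """
    status = send()
    for _ in range(max_retries):
        if status != 500:
            break
        if backoff:
            time.sleep(backoff)  # optional pause between attempts
        status = send()
    return status
```

With a transiently failing `send`, the caller eventually gets a success; with a server stuck in a deadlock, all four attempts return 500 and the error surfaces anyway.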

Eventually, the CEO contacted one of the dev managers and was furious about seeing 500 errors. The error had been there all along, but it took the CEO complaining about it to get it analyzed and fixed. It was kind of a hairy problem; as I recall, it was some kind of deadlock situation with the db. This was a great team, doing all the good practices, building quality in, but they just could not be persuaded to look at this problem until the CEO (who originally helped code the app) yelled about it.


Painfully familiar. I’ve also had a few of these where I’ve given up too soon and felt I should have dug deeper to understand the impact and sell it better to my team, so they could take action sooner.

Not saying you did that in this case; quite the opposite, you tried hard here and were ignored. In the past I’ve been part of the crew giving up too easily.