I’m looking for insights into how teams manage testing across multiple legacy environments, especially when customers are still using older versions of Windows, SQL Server, or Android. I’m curious:
- Are there industry-standard strategies for maintaining compatibility without spinning up every possible version?
- How do you decide which environments to keep active?
- Are there tools or workflows that help streamline this?
- Who typically owns this responsibility: QA, DevOps, or someone else?
I’d love to hear how others approach this practically, especially in long-term support scenarios. Any war stories, tips, or frameworks welcome!
@jonbag I want to share a story from one of my projects where I had to test an OTT application across different browser versions, mobile (Android + iOS), Roku and other devices.
The challenge was the same: making sure the latest version of the application still works on all the older device versions.
This was for the US region, so I got some quick figures from Google on how many users fall on Android vs iOS.
For our own end users, we capture which device, type and model they use to access the OTT app. That helps me prioritise which versions to focus on most, so fewer people are affected if I lower the priority of testing some older version due to time constraints.
Automation definitely helped us run smoke tests on the lower versions.
To keep the old device versions available, we created an internal device farm where all the devices we support are hooked up.
At some point we found the internal device farm too hard to manage, so we moved to an external vendor, Sauce Labs, which gives us all the versions we need to run our automated tests.
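For anyone who hasn’t used it: you point an ordinary remote WebDriver session at the Sauce Labs hub and describe the OS/browser/version you want in the capabilities. A rough sketch of one legacy smoke check (the hub region, platform names, version numbers and app URL are illustrative only; the real values come from the Sauce Labs platform configurator and your own app):

```ts
// Rough sketch only: a smoke check on a legacy browser/OS combo via Sauce Labs.
// The hub region, OS/browser versions and app URL below are illustrative.
import { Builder, By, until } from 'selenium-webdriver';

async function legacySmokeCheck(): Promise<void> {
  const capabilities = {
    browserName: 'chrome',
    browserVersion: '79',        // an older version still claimed as supported
    platformName: 'Windows 7',   // a legacy OS no longer kept in-house
    'sauce:options': {
      username: process.env.SAUCE_USERNAME,
      accessKey: process.env.SAUCE_ACCESS_KEY,
      name: 'smoke: playback page loads on legacy Chrome / Windows 7',
    },
  };

  const driver = await new Builder()
    .usingServer('https://ondemand.us-west-1.saucelabs.com/wd/hub')
    .withCapabilities(capabilities)
    .build();

  try {
    // Hypothetical app URL and element; replace with a real critical-path check.
    await driver.get('https://example.com/watch');
    await driver.wait(until.elementLocated(By.css('video')), 15000);
  } finally {
    await driver.quit();
  }
}

legacySmokeCheck().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The same test code can then be pointed at a whole matrix of capability sets, which is how we cover the older versions without keeping the physical devices around.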
DevOps don’t care about any of this, so it falls on QA.
Thank you for such an interesting reply; I really appreciate getting a glimpse into how people in different roles approach similar challenges with different solutions. I hadn’t considered capturing device, OS, and browser data via the User-Agent header before, but now I’m curious to see what insights that could reveal. I’ll definitely look into how we might implement something like ua-parser-js; I imagine the data is already part of some process, just waiting to be surfaced. Fascinating stuff.
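Just to make the idea concrete for myself, I’m picturing something roughly like this wherever we already have access to raw User-Agent strings (a sketch only: the sample strings and the summary shape are my own assumptions; ua-parser-js is the only real dependency here):

```ts
// Rough sketch: turn raw User-Agent strings (from server logs or analytics)
// into a browser/OS/device breakdown to help prioritise test platforms.
// The sample data is made up; only ua-parser-js is a real library.
// (The import style differs slightly between ua-parser-js v1 and v2.)
import { UAParser } from 'ua-parser-js';

function summariseUserAgents(userAgents: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const ua of userAgents) {
    const { browser, os, device } = new UAParser(ua).getResult();
    const key = [
      `${browser.name ?? 'unknown'} ${browser.version ?? ''}`.trim(),
      `${os.name ?? 'unknown'} ${os.version ?? ''}`.trim(),
      device.model ?? 'desktop/unknown',
    ].join(' | ');
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}

// Example usage with two made-up UA strings pulled from access logs.
const sample = [
  'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36',
  'Mozilla/5.0 (Linux; Android 9; SM-J730G) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Mobile Safari/537.36',
];
for (const [platform, count] of summariseUserAgents(sample)) {
  console.log(`${count}\t${platform}`);
}
```

Sorting that by count should give a first cut of which platforms to keep in the test matrix and which to drop.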
Also, thanks for pointing me toward external vendors like Sauce Labs. I didn’t realise that kind of setup was possible; it really opens up a whole new set of possibilities once I get some automation in place.
There is a problem with choosing test platforms based on User Agent analytics. If your website works badly with a platform, people won’t use it, so it will show up as a low percentage in your analytics. You may easily come to the conclusion that you can ignore that platform because not many people use it, whereas the correct conclusion might be that more people would use that platform if you fixed your website.
There’s no harm in looking at the analytics as long as you don’t draw the wrong conclusion. I usually recommend that clients base their decisions on global usage statistics unless they have a reason to believe that usage of their website is likely to be different.
Cyber Essentials
If you’re working in the UK, your organisation may have, or be working towards, Cyber Essentials certification. It’s mandatory for software development companies wanting to apply for Lot 1 of the government’s G-Cloud 15 framework, which opened for applications last week.
I only started looking at Cyber Essentials last week, but it seems that there would be difficulties maintaining legacy platforms. For instance, my understanding is that:
- All operating systems, browsers and other applications must be set to automatically update.
- If any machines are not running the latest version of all software, they must be on a segregated network such as a VLAN.
- It must not be possible to access those machines from the Internet (although I don’t know if this means they must not have Internet access).
There are also issues with the security of third-party SaaS services like Sauce Labs, and it’s possible that 2-factor authentication may be necessary.
I have only read the requirements once, so I may not be right about everything, but if your organisation has, or is working towards, Cyber Essentials or Cyber Essentials Plus (which is even more stringent), it’s something you ought to look into.
Thanks very much for the thoughtful reply; I really appreciate the heads-up about analytics. I think current usage data can help me with prioritisation, but it’s interesting that global usage stats would offer a broader view.
Cyber Essentials is new to me, so thanks for flagging it. The requirements around legacy platforms sound pretty unworkable in practice, but I’ll definitely look into how they might reshape our QA approach. This has been very helpful; I’m learning loads about the many angles of long-term support and QA.