What does your company do for testing Windows patches and gold image builds? The company I currently work for uses a third party to facilitate its patches, releasing onto test machines, then pilot groups, and finally into production. Is there an automation tool that could create a subset of sanity tests to be executed once a patch is released onto the test machines? Does anyone know of a different process we could follow, or have any suggestions on how to improve this one?
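To make the ask concrete, here's a rough sketch (in Python, with made-up service names) of the kind of post-patch sanity check I have in mind; in practice the input would be parsed from `sc query` or `Get-Service` on the freshly patched test machine:

```python
# Minimal sketch of a post-patch sanity check. The service list and the
# observed states below are hypothetical, purely for illustration.

CRITICAL_SERVICES = ["Spooler", "WinDefend", "wuauserv"]  # hypothetical list

def sanity_check(service_states: dict) -> list:
    """Return a list of failure messages; an empty list means the patch
    passed the smoke test and the machine can move on to the pilot group."""
    failures = []
    for svc in CRITICAL_SERVICES:
        state = service_states.get(svc, "MISSING")
        if state != "RUNNING":
            failures.append(f"{svc} is {state}, expected RUNNING")
    return failures

# Example: parsed service states from a patched test machine
observed = {"Spooler": "RUNNING", "WinDefend": "RUNNING", "wuauserv": "STOPPED"}
print(sanity_check(observed))  # -> ['wuauserv is STOPPED, expected RUNNING']
```

The idea would be to run something like this automatically against every test machine as soon as a patch lands, before any manual pilot testing starts.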
Unless you have your own virtualization environment and use Chef/Puppet, it’s a bit pointless going down this rabbit hole. If your product touches a lot of the OS, say it installs any drivers, it’s worth doing. But if not, it’s not worth sweating to do this on-premises. I was on a team doing this about four years ago. We used a mix of Xen, some third-party stacks, and our own in-house tools.
Microsoft releases images every so often, but these are not really perfect for automation: close, but they hide the problem of testing on Mac and on the other 'nixes. I've never even touched doing this on Mac, but Windows and at least the Debian distros let you build golden images totally on-demand using their installer engines, and finely control the patch levels through those engines too. As you already know, that’s how the vanilla AWS and Azure public cloud machines we might all be renting at any point get managed. But doing this yourself on-premises requires a dedicated person, a load of learning, and tonnes of network storage, and macOS remains a walled garden. Many small companies (<100 engineers) typically cannot afford the resource.
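To give a flavour of the patch-level control I mean: before blessing a build as the new golden image, you diff what's actually installed against an approved baseline. A rough Python sketch (the KB numbers are invented; real input would come from something like `Get-HotFix` output):

```python
# Hedged sketch: compare a machine's installed hotfix list against the
# approved patch baseline for a golden image build. KB numbers are made up.

APPROVED_BASELINE = {"KB5000001", "KB5000002", "KB5000003"}

def baseline_drift(installed: set) -> dict:
    """Report what's missing from, or installed beyond, the approved baseline."""
    return {
        "missing": sorted(APPROVED_BASELINE - installed),
        "unapproved": sorted(installed - APPROVED_BASELINE),
    }

print(baseline_drift({"KB5000001", "KB5000002", "KB5000099"}))
# -> {'missing': ['KB5000003'], 'unapproved': ['KB5000099']}
```

A build only gets promoted when both lists come back empty; anything else fails the pipeline and a human looks at it.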
Sorry to go outside Windows on you here, Lindsey. I’m an M$ fanboi, but I still don’t see the point of apps that aren’t runnable on the unixes. If you are virtualizing and patching Ubuntu et al. already, you will find that doing this for Windows is similar in broad approach, just with added reboots, 10x the storage, completely different tooling, and a different expert driving it.
I used to work for a company that sold hardware diagnostic software to OEMs (e.g. Dell, Lenovo, etc.), meant to make support easier for them, which meant we had to run on a wide variety of hardware and software versions. This company was more waterfall than agile, so we had the luxury of a test lab with maybe 15 testers and an extensive hardware library (each tester had dual 8-port KVMs, a dozen-plus machines, etc.). Care was taken to never update some machines and to keep others on certain service packs, and then folks would validate the various upgrade paths as builds were pulled from S3. We automated some of this, but the manual testers actually tended to be better value, as TestComplete licenses were more expensive than having people manually do upgrades on a bench full of systems...
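Even if you keep the execution manual like we did, the planning side is trivial to automate. A small Python sketch that enumerates every forward upgrade path to validate (the version labels here are hypothetical, not our real matrix):

```python
# Sketch: enumerate upgrade paths to validate, assuming machines only ever
# upgrade forward from an older baseline to a newer one.
from itertools import combinations

BASELINES = ["SP1", "SP2", "SP3", "21H2"]  # ordered oldest to newest

def upgrade_paths(baselines):
    """Every (from, to) pair where `to` is newer than `from`."""
    return list(combinations(baselines, 2))

for src, dst in upgrade_paths(BASELINES):
    print(f"validate upgrade {src} -> {dst}")
```

With four baselines that's six paths; the count grows quadratically, which is exactly why the lab kept dedicated never-updated machines around for each starting point.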