I’ve not fully experienced regression testing at my current place (I’ve only been here a month!), but this is what I did at my previous job around regression testing for a set of templates used for creating e-learning. The templates were designed so that they could be loaded onto the content server that delivered the e-learning built with them, where they would override the version embedded in the learning itself (thus allowing fixes and improvements to be pushed out automatically), so they needed regression testing whenever changes were made.
As I’d worked on these templates from the beginning and knew them pretty much inside out, I’d refined the test scripts down to a good lean path through - in an ideal world these would have been peer-reviewed, but I was working for a failing company and largely working alone. These scripts were then used to create an automated test suite; I used Selenium IDE for this as I had no previous automation experience, nor the time or support to learn a language to drive WebDriver, so it was the best option for me under the circumstances.
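Selenium IDE records and plays back tests rather than requiring hand-written code, but to give a flavour of what one of those happy-path checks covered, here’s a rough equivalent expressed as a Python WebDriver script. The URL, element IDs and expected text are invented for illustration - the real suite lived as recorded IDE tests, not code:

```python
# A minimal sketch of one happy-path check. The URL, element IDs and
# expected text are placeholders invented for this example.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    # Load a course page built from the template under test
    driver.get("https://test-env.example.com/courses/demo-module/index.html")

    # Wait for the template to initialise and render the first screen
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "page-title"))
    )

    # Step through to the next screen via the template's own navigation
    driver.find_element(By.ID, "next-button").click()

    # Confirm the expected content appears - the happy path for this screen
    body = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "content-body"))
    )
    assert "Learning objectives" in body.text
finally:
    driver.quit()
```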
The automation suite took care of the happy-path coverage, which allowed me to focus manual testing on the areas of change. This involved cross-browser and device testing (using a combination of VMs, actual devices and BrowserStack) as well as pushing the boundaries of the changed templates; again, experience informed what I tried out. I also did two rounds of testing: an initial smoke test on a dev environment, followed by a more thorough pass on a change-controlled test environment. Once through this, the prod deployment would take place first thing in the morning and I’d aim to get a quick smoke test done before 9.30 to confirm all was fine - although the change-controlled test environments were supposed to be the same as production, that was rarely a safe assumption!
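For the cross-browser side, the handy thing about BrowserStack is that it exposes a remote WebDriver endpoint, so the same kind of scripted check can be pointed at a cloud browser instead of a local one. A minimal sketch, assuming a BrowserStack account - the credentials, capabilities and page URL below are placeholders, not what I actually used:

```python
# A minimal sketch of running a smoke check against a BrowserStack browser.
# Credentials and the page URL are placeholders; the capability names
# follow BrowserStack's W3C "bstack:options" format.
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.set_capability("browserVersion", "latest")
options.set_capability("bstack:options", {
    "os": "Windows",
    "osVersion": "11",
    "sessionName": "Template smoke test",
})

driver = webdriver.Remote(
    command_executor=(
        "https://YOUR_USERNAME:YOUR_ACCESS_KEY@hub.browserstack.com/wd/hub"
    ),
    options=options,
)
try:
    driver.get("https://test-env.example.com/courses/demo-module/index.html")
    # A quick "is it alive" check - the kind of thing a morning smoke test does
    assert driver.find_element(By.ID, "page-title").is_displayed()
finally:
    driver.quit()
```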