I think a lot depends on the regulatory model. For example, when I worked at Ofwat (a UK Government regulator), our IT kit had to be approved by GCHQ because we were connected to the Government Secure Intranet, and GCHQ disapproved of applications that pushed executable code across our internal network. (Hence my failure to use test automation tools back in the 1990s: the IBM Rational suite did exactly that.)
On the other hand, I had a contract with a company that produced software for biochemical analysis machines. Although our formal test programme (script execution) was drawn up according to the then-current regulations set out by the Medicines and Healthcare products Regulatory Agency (MHRA), I never saw any evidence that the company kept strict lists of what was or was not allowed; it simply had a testing methodology that met the requirements. (But then again, I was only in on the last six months of completing a big project, so thinking up new ways of testing wasn't high on the list of priorities.)
From what I see on the Government website (https://www.gov.uk/government/publications/report-a-non-compliant-medical-device-enforcement-process/how-mhra-ensures-the-safety-and-quality-of-medical-devices), the MHRA aren't going around enforcing the regs except in response to an allegation of non-compliance. (Like most UK Government safety regulators, they don't have the resources for a proactive programme of inspections.) And software testing methodology is presumably just one of many areas they are expected to cover. So I suspect that many of the instances where IT managers throw up their hands in horror at a tester's suggestion for improving the process arise from self-imposed restrictions, and possibly from a misunderstanding of the purpose and role of regulation generally.