Testing in Regulated Industries

We recently had a webinar with TestRail’s @chris.faraglia to talk about testing in regulated industries - the problems faced and how they can be solved.

There were some great questions. I thought I’d share them here so Chris can elaborate - and to give you ideas for more questions!

Can you talk about coverage analysis for data coupling/control coupling testing strategies? - Anday Türkoğlu

How do you follow an appropriate test pyramid when regulated industry standards are not agile/updated enough to accommodate different layers of tests (unit/integration/API) as accepted coverage? - Divyang Raval

What are your experiences with agile ways of working in these contexts? - Jesper

How do you handle ambiguity in standards and regulations? - DanPanachyda

Ever have trouble with an audit? How did you handle it? - Rabi’a Brown

One of the key aspects of testing within a regulated environment is the establishment of baselines of all test artefacts against a well-defined product. Can the product manage traceability across versions of artefacts, and not just to the latest version of an artefact? - Ivor McCormack

Who in your team decides on the risk associated with any test case failures? - Mark


Thanks @mcgovernaine !

While we didn’t get into industry specifics in the webinar, does anyone have any specific industry use cases or examples that are particularly challenging them in terms of testing strategy?


Thank you again for this webinar, there are some good learnings for those new to the regulated industries. There are some quirks to be aware of if your experience is in teams building consumer software. Will the recording be available with club membership?


Yes, the recording will be available to all members, Club and Pro.


Overall, I think the determination of risk should be objective, decided using a mechanism similar to a “scorecard”. This objective scoring might include categories such as escaped defects, scope changes, and code freeze dates - factors that impact the quality and risk of a given release.
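To make the scorecard idea concrete, here is a minimal sketch in Python. The category names, weights, scoring scale, and thresholds are all illustrative assumptions - a real scorecard would be agreed with your compliance and quality stakeholders.

```python
# Hedged sketch of a release-risk "scorecard".
# Categories, weights, and thresholds below are illustrative assumptions.

RISK_WEIGHTS = {
    "escaped_defects": 0.4,    # defects discovered after release
    "scope_changes": 0.3,      # late requirement churn
    "code_freeze_slips": 0.3,  # changes landed after the freeze date
}

def risk_score(scores: dict) -> float:
    """Weighted average of per-category scores (each 0-10, higher = riskier)."""
    return sum(RISK_WEIGHTS[c] * scores[c] for c in RISK_WEIGHTS)

def risk_level(score: float) -> str:
    """Map a numeric score to a coarse risk band."""
    if score >= 7:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

release = {"escaped_defects": 8, "scope_changes": 5, "code_freeze_slips": 2}
score = risk_score(release)  # 0.4*8 + 0.3*5 + 0.3*2 = 5.3
print(risk_level(score))     # prints "medium"
```

The point is that the decision becomes reproducible: two people scoring the same release data reach the same risk band, rather than relying on gut feel.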

Great question Mark!

Yes. We should have version control of our automation and manual tests in the same manner as the application/system-under-test is being managed.

Automated test code should follow feature branching or a similar Git workflow as hotfixes, minor releases, etc. are completed, eventually making its way back into the latest version of the software (e.g. the head of develop).

See Git Feature Branch Workflow | Atlassian Git Tutorial

Manual testing should also follow the same guidelines. Having a test management tool that allows similar versioning of tests is critical to managing test-to-requirement linkage alongside these changes in the codebase.

For reference, TestRail Enterprise offers versioning functionality for any automated and manual tests that exist in the system: https://support.testrail.com/hc/en-us/articles/7768433966996-Test-case-versioning

Thanks Ivor McCormack!


How do you handle ambiguity in standards and regulations? - DanPanachyda

I think the best approach is NOT to make any assumptions. Resources exist within your organization to help when this occurs (legal, compliance officer, etc.). Don’t be afraid to flag items and get help!

Thanks DanPanachyda!


While I personally haven’t had trouble in audits, as mentioned in the webinar this was always due to good process and regular self-assessments. It’s better to find gaps internally and correct them than to have external oversight find them in audits.

Thanks Rabi’a Brown!

Thanks Jesper, glad you enjoyed the content!

In general, I think teams should try to distribute their tests in a model similar to the test pyramid. Moving away from that model, you may start to deal with more flaky, long-running, and hard-to-maintain tests that don’t give you the value you need.

This all applies to regulated industries regardless of whether teams are agile or not. Having a reliable regression/automated test suite is essential for maintaining quality.

Thanks Divyang Raval!

I’ll address this question in two parts, since it appears to relate to both test coverage analysis and test data management.

For coverage analysis, a variety of metrics and approaches exist. Rather than restate the information, I suggest you take a look at a recent post on the TestRail blog on this topic: How to Improve Automation Test Coverage - TestRail

As for test data management, a general rule for coupling tests to data is that your tests should be data-independent and not rely on static external data. Integrated mocking capabilities in frameworks like Cypress greatly help with these design approaches. Also, as mentioned in the webinar content, looking at property-based testing rather than static data can be very effective.
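To illustrate the property-based idea, here is a minimal sketch in Python using only the standard library (a real suite would more likely use a framework such as Hypothesis). The function under test and the properties checked are illustrative assumptions: instead of asserting against fixed input/output pairs, the test generates random inputs and checks properties that must hold for every input.

```python
# Hedged sketch of property-based testing: generated data instead of
# static fixtures. The function and properties below are illustrative.
import random

def normalize_whitespace(s: str) -> str:
    """Example function under test: collapse whitespace runs to single spaces."""
    return " ".join(s.split())

def random_string(rng: random.Random, max_len: int = 30) -> str:
    """Generate a random input; a tiny alphabet keeps whitespace cases frequent."""
    alphabet = "ab "
    return "".join(rng.choice(alphabet) for _ in range(rng.randrange(max_len)))

def test_properties(trials: int = 200, seed: int = 42) -> None:
    rng = random.Random(seed)  # seeded so failures are reproducible
    for _ in range(trials):
        s = random_string(rng)
        out = normalize_whitespace(s)
        # Property 1: idempotence - normalizing twice changes nothing.
        assert normalize_whitespace(out) == out
        # Property 2: no double spaces survive normalization.
        assert "  " not in out

test_properties()
print("all properties held")
```

Because the data is generated per run (from a fixed seed here, for reproducibility), the test cannot silently depend on a stale external dataset - which is exactly the coupling problem the general rule above is trying to avoid.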

This article may provide some additional insight: Drop a Little AI on It: Random Test Data Generation - TestRail