Based on my own experience so far in FDA-regulated companies, it is really difficult to have agile practices in those environments. Equally, waterfall-based approaches can end up costing a lot of money when they fail at the end of the process. I've worked on a waterfall project where the deadline was forever being pushed later and later due to complications that could not have been foreseen. More recently I worked on a project where we had sprints and testing throughout, but the release of the product was more of a waterfall type, with a deadline of X months to deliver to customers.
Are there ways to build agility into regulated testing projects?
Are there any things that you have encountered that you would forewarn people about?
I work in a heavily regulated space as well. I'd agree it's a challenge.
We have agile elements to our process, but loosely fixed delivery dates. I think the fixed delivery is as much an artifact of the domain as due to challenges in the development process. Regulated customers have to do a great deal of due diligence to validate a system for use, in addition to what is done in the SDLC.
That being said, I think there are things you can do to maintain agility in the regulated environment without being waterfall. Even if you have a fixed delivery date, like all agile projects, scope and priority may shift during the development cycle, so you need to be adaptable from the start.
Test throughout the lifecycle. (a given)
Build your validation in small enough components that if a piece gets de-scoped, it doesn't impact your entire test suite (that goes for both manual and automated checks)
Keep your stories small
Plan for different regulatory requirements. (All of our raw documentation is in XML, which means we can post process it into different formats via XSLT)
Most of those are pretty standard agile concepts, but I think they become even more important in the regulated space.
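I'm not reproducing their actual XSLT pipeline, but the single-source idea (raw documentation in XML, post-processed into different formats for different regulators or audiences) can be sketched with nothing but Python's standard library. The element names here are made up for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical single-source test documentation (element names are invented).
SOURCE = """<testcase id="TC-1">
  <title>Read-only user cannot edit data</title>
  <step>Log in as Read Only User</step>
  <step>Attempt to change data</step>
</testcase>"""

def to_plain_text(xml_text):
    """Render the XML source as a plain-text summary for one audience."""
    case = ET.fromstring(xml_text)
    lines = [f"{case.get('id')}: {case.findtext('title')}"]
    lines += [f"  {i}. {s.text}" for i, s in enumerate(case.iter('step'), 1)]
    return "\n".join(lines)

def to_html(xml_text):
    """Render the same source as an HTML fragment for another audience."""
    case = ET.fromstring(xml_text)
    steps = "".join(f"<li>{s.text}</li>" for s in case.iter('step'))
    return f"<h2>{case.findtext('title')}</h2><ol>{steps}</ol>"

print(to_plain_text(SOURCE))
print(to_html(SOURCE))
```

A real pipeline would use an XSLT processor against the same source file; the point is that each output format is just a different rendering, so a regulatory change means writing one new transform, not rewriting the documentation.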
The biggest challenge/mistake I've seen is orgs being afraid to build their documentation out until the end because "things may change". That leads to a long tail of not much productivity for the team.
I've seen the opposite of this, where a lot of documentation was done up front and a lot of the team's time was spent on change controls for that documentation.
I guess you have to find the balance with the right kind of documentation through trial and error. The person writing the documentation and its target audience also need to be taken into consideration, but often the task seems to be left to whoever appears to have the most time, who may not necessarily understand the reason for the document or its audience. Perhaps this is not always the case, but it has been my experience so far.
I find the different workflows people need to use really interesting. Historically, our test documentation has not gone through the formal approval process until near the end of a project lifecycle. That worked for us, because our official validation is done near the end too. Change controls were minimized, but it did have the end result that all execution for "validation" had to be done at the end as well.
We've started to mitigate that long tail by executing most of our low-level verification via automation, with extensive documentation of the execution.
High level validation is still done manually for the business needs, but the test cycle for that can be comparatively quick with low level verification done in an automated fashion.
That way, if we change scope (cut features) late in the dev cycle, it's just a matter of excluding those tests from validation. We still have the documentation for those features; they just don't get executed or submitted for approval. That's where it gets important to keep things compartmentalized.
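That compartmentalization can be sketched in miniature: tag every check with the feature it covers, and de-scoping a feature becomes a filter on the run rather than surgery on the suite. This is an illustrative sketch, not the poster's actual framework, and the feature names are hypothetical:

```python
# Minimal sketch of compartmentalized validation: each check is registered
# under a feature tag, so de-scoping a feature just filters it out of the
# run while its definition (the documentation) stays in the suite.
CHECKS = []

def check(feature):
    """Decorator: register a validation check under a feature tag."""
    def wrap(fn):
        CHECKS.append((feature, fn))
        return fn
    return wrap

@check("user-admin")
def read_only_user_cannot_write():
    return True  # stand-in for a real verification

@check("csv-export")
def export_matches_source_data():
    return True  # stand-in for a real verification

def run_validation(descoped=()):
    """Execute every check whose feature is still in scope."""
    return {fn.__name__: fn() for feature, fn in CHECKS
            if feature not in descoped}

# "csv-export" was cut late in the cycle: its checks are simply excluded.
results = run_validation(descoped={"csv-export"})
print(sorted(results))  # → ['read_only_user_cannot_write']
```

Real test runners offer the same idea natively (e.g. tags in Robot Framework or markers in pytest); the value is that scope changes never force edits to the checks themselves.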
Embracing the V-model has actually strangely allowed us to be more efficient.
A follow-on from that: I had a discussion with JeanAnn during the week about how code can be documented. Do you include code comments (including comments in automation code) under the umbrella of documentation, or is it something else?
Our automation code is pretty much self-documenting (more below), so the automation is definitely included under the umbrella of documentation.
All of our automation logic is abstracted into keywords that are readable by business users. Things like:
Element Text Should Be [locator] [expectedtext]
Click Link [LinkText]
Beyond that, we abstract the granular checks into higher-level keywords. So, a test at the highest level will read similar to Gherkin.
A made up example would be:
Log in as Administrator User
Change Read Only User to Write User
Log out
Log in as Read Only User
Make Change to Data
Log out
Every one of those keywords may include a bunch of steps (although they still use pretty readable language). For some of our business users, we provide the documentation at this very highest level.
For others, we might provide the documentation at a slightly lower level.
For our official validation documentation, we include full execution results, which include the detail down to the lowest level keyword (i.e. Element Text Should Be) along with the results.
That's where having our results in XML has been really valuable, because we generate all the different formats from a single source file.
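The layered-keyword approach above, with every low-level step captured as XML for the validation record, could be sketched like this. The keyword and function names are illustrative (the described framework sounds Robot-Framework-like, but this is not its actual implementation):

```python
import xml.etree.ElementTree as ET

LOG = ET.Element("execution")  # low-level results captured for validation docs

def step(keyword, *args):
    """Record a low-level keyword invocation, with its result, as XML."""
    entry = ET.SubElement(LOG, "step", keyword=keyword, args=" ".join(args))
    entry.set("result", "PASS")  # stand-in; a real runner would evaluate this
    return entry

# Low-level keywords: readable by business users, one check each.
def click_link(text):
    step("Click Link", text)

def element_text_should_be(locator, expected):
    step("Element Text Should Be", locator, expected)

# High-level keyword composed from low-level ones; this is the level
# some business users would see in the documentation.
def log_in_as(role):
    click_link("Log In")
    element_text_should_be("banner", f"Welcome, {role}")

log_in_as("Administrator User")
# The full execution log, down to the lowest-level keyword, is now one
# XML tree that can be rendered into whatever format an audience needs.
print(ET.tostring(LOG, encoding="unicode"))
```

Because the log is a single XML tree, the same transform-per-audience trick applies: high-level summaries for business users, full step-by-step detail for official validation.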
Interesting. By that description I've been accidentally building documentation into my automation code quite well. Obviously there are places I can improve, but it's great to know I'm not doing a terrible job. Thank you
I'm not sure it is just about externally regulated audit. In a recent project I was asked to submit a complete set of written test cases AND a requirements testability matrix to the BAs for approval before the start of every test cycle. In the project I am on now, I have to submit written test cases for approval by the senior finance manager and the BA for every use case I test, as a precursor to testing. It was a slow process, although not as slow as getting our UAT test plan approved, which required negotiation and formal signoff from tens of stakeholders in different departments.
Apparently (in the former case) this was because test cases, along with defects raised, are seen as the "deliverables" of the test team, and approval of requirements and test coverage is a business and internal audit process. It's the way testing goes in the financial services industry, apparently. For someone who prefers exploratory to pre-scripted tests, I found it unsettling and would struggle to fit it into a fast, agile testing approach; however, my manager and I couldn't change their minds.
This one is interesting, as it is a guidance document by an industry organisation. The regulated industries usually follow Good Industry Practice as described by interest organisations (aka industry best practices), so this is an interesting development.
This gets me riled. Internal audit processes should apply only to the work of internal audit. Auditors should never dictate the processes in other areas. Their job is to review the processes, controls and management. The reason auditors want something they can count and check is that it makes their job easier. It turns audit from a difficult and valuable job into an easy and worthless one.
In this blog I refer to the problem of auditors building a cosy consensus with auditees. The auditees produce something that the auditors can check easily, and in turn the auditors give a glowing report.
The motive for implementing an Agile process is to shorten the software development cycle and provide software product releases more frequently. There are various components which constitute this process. Let's look at them one by one:
The first is Test Automation. With test automation, the development and test cycle can be repeated more often.
Next is Continuous Integration, which allows developers to write code and test the software's functionality as a continuous process.
Third is Release Automation, which enables the software to be automatically packaged, deployed and tested in a staging environment that simulates the production environment.
And the final one is Continuous Delivery of the software product.
Agile not only benefits software testing companies but also brings a number of important business benefits to the client. Hope this information is helpful for you.
I realize this thread is a little older but I found it interesting.
When I reviewed the original question, I wondered why "working in a regulated industry" would have any influence on how a project executes. I read the Christie blog and cringed at @paulmaxwellwalters' experience.
I agree with @jameschristie - auditing should have no influence on HOW a product is constructed. They can request evidence, and I'd be happy to help an auditor understand how the evidence I provide fits into any one of their auditing buckets. At the end of the day, the products and by-products produced during construction are there for the benefit of the business and the customers it serves, as well as those providing support to those products.
I worked in an industry regulated by the FDA. I recall we were very conscious of the requirements for FDA approval while at the same time collecting bits of documentation that appeared to fit into the definition of documentation or testing required for FDA approval. I don't recall that influencing how we executed projects. I do recall that attempting to meet ISO9000 requirements was frustrating for many of my peers and seemed to duplicate much of what we were already doing to meet FDA approvals.
Joe's post has just made me revisit some of the earlier posts that he references. I've worked in regulated areas a fair amount (15 years in UK Government utility regulation and then a stint with a medical equipment software house), so I've had first-hand experience of regulatory regimes, as well as auditors - including some of the big four UK audit companies - and other reporting regimes.
I've realised that in a previous role I inadvertently developed an approach which certainly kept corporate management happy. I effectively did exploratory testing on a system, which I then documented as a test script. This could then be used to demonstrate the tests done, and could be reproduced should the need arise (which it rarely did). It allowed me to develop happy-path tests for demos and enabled regression testing.
Non-testing managers in large organisations need some sort of reassurance that they understand. I was able to provide this and at the same time was able to satisfy their need for metrics without sacrificing my desire to test for best user outcomes.