Seven Deadly Sins of Compliant Software Development


June 11, 2013


Companies often spend a lot of time trying to figure out why software compliance projects fail. After having worked on compliant software projects with dozens of medical device providers, seven culprits emerge as the conspicuous causes of project failings. Although committing these software transgressions is not as enjoyable as partaking in gluttony, lust, greed, etc., the consequences to an organization are comparably dire to reparations from the seven deadly sins. When unexpected tasks suddenly appear at crucial deadlines, requirements verification will need yet another pass, and cost overruns are excruciating, there’s a good chance at least one of these causes is to blame.

If your organization is guilty of any of these vices, there are redemptive corrections you can make to your development and verification processes. Spotting potential problems early offers the best chance for favorable results with the least negative impact, so start looking at your projects now.


1. Too many requirements.

It isn’t easy to quantify how many requirements are too many, but we know it when we see it. Of course, the number of requirements depends on the complexity of the system. But even taking that into account, we still see hundreds of requirements, or dozens of pages, for modest-sized projects. One main cause is excessive detail. Because every requirement needs to be verified, the cost of any change to that detail is compounded by the overhead of modifying the design, the code, and the test documentation.

Typically, the detail is better left for the functional or design specifications. The requirements should address what is needed, not how the need will be met. Take the example of the sleep or timeout feature, which is nearly universal in software applications. You might be able to use a single requirement: “The application shall stop operating after a specified inactive period.” In the functional or design specification, you can quantify the default value and range, as well as what happens when the timeout is reached and how to resume operation after a timeout. If all of that is spelled out as part of the requirements, you could have a page instead of a sentence. Those aspects of the timeout feature will still be tested, but the requirement verification does not need to explicitly address every detail.
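To make the distinction concrete, here is a minimal sketch in Python of how the single timeout requirement might be realized. Everything below the requirement sentence, the class name, the default value, the allowed range, and the resume mechanism, is a design-level decision invented for illustration, not drawn from any real device specification:

```python
import time

# Hypothetical sketch. The requirement says only: "The application shall
# stop operating after a specified inactive period." The default, range,
# and resume behavior below are design choices, not requirements.
class InactivityTimer:
    DEFAULT_TIMEOUT_S = 300                   # design choice: 5-minute default
    MIN_TIMEOUT_S, MAX_TIMEOUT_S = 60, 3600   # design choice: allowed range

    def __init__(self, timeout_s=DEFAULT_TIMEOUT_S):
        if not self.MIN_TIMEOUT_S <= timeout_s <= self.MAX_TIMEOUT_S:
            raise ValueError("timeout outside the specified range")
        self.timeout_s = timeout_s
        self.last_activity = time.monotonic()

    def record_activity(self):
        """Resuming operation: any user activity resets the timer."""
        self.last_activity = time.monotonic()

    def timed_out(self):
        """True once the inactive period has elapsed."""
        return time.monotonic() - self.last_activity >= self.timeout_s
```

A change to the default or the range touches only this design-level code and its tests; the one-sentence requirement, and its verification, stay put.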

Another frequent reason for too many requirements is designing the luxury model before the entry-level edition has been introduced. Cramming all the features that may do well in version 2 into an initial release can result in no version ever being released, due to prohibitive costs or a missed market window. Although it’s important to consider and capture potential future directions for architecture decisions, don’t include those ideas as requirements in version 1.0. Some projects end up striking more than half of the original requirements before a release, and all the time and effort spent on them can prove lethal.

2. Not maintaining a requirements traceability matrix throughout the process.

There are countless reasons for perpetrating this blunder. Some projects start with a proof of concept and move directly to coding, leaving the requirements documentation for later. It’s no wonder some important features, often safety-related, have to be tacked on at the end. Let’s face it, honing requirements isn’t nearly as interesting as making a flashy user interface or cutting execution time in half. In fact, maintaining requirements traceability can be completely anesthetizing. Waiting until the software development is winding down to update requirements or start protocol authoring can mean winding the team right back up again when traceability tracking uncovers forgotten requirements.
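The mechanics of traceability tracking are mundane, which is part of why it gets skipped. A minimal sketch, with requirement and protocol IDs invented purely for illustration, shows how little it takes to flag a forgotten requirement before the end of a project rather than after:

```python
# Hypothetical sketch: a bare-bones traceability matrix mapping
# requirement IDs to the protocols that verify them. All IDs are
# invented for illustration.
requirements = {
    "REQ-001": "Stop operating after a specified inactive period",
    "REQ-002": "Sound an audible alarm on fault detection",
    "REQ-003": "Require operator login before use",
}

trace = {
    "REQ-001": ["TP-010"],
    "REQ-002": ["TP-011", "TP-012"],
    # REQ-003 has no linked protocol yet
}

def untraced(requirements, trace):
    """Return requirement IDs with no linked verification protocol."""
    return sorted(r for r in requirements if not trace.get(r))
```

Running a check like this continuously, rather than once at the end, is what keeps the team from being wound back up when protocol authoring starts.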

3. Too much detail in test protocols.

While it’s important to be able to reproduce test results by re-executing a protocol, it doesn’t mean every menu item and button click has to be recorded so that a monkey could do it. We don’t want monkeys verifying the devices (unless they’re part of the animal trial phase). The test plan or protocol can specify that a tester needs to have a working knowledge of the product. It’s much more efficient to maintain the documentation written at a higher level.

Compare this:

“Verify that when you provide a valid quantity for this, that happens,” and have the tester record the quantity.

With this:

1. From this menu, select this menu item.
2. On this screen, in this field, type in this value.
3. Then press this.
4. Verify that happens.

The first example requires little revision. But with the second, every time a term or user interface path changes in the software, the protocol has to be updated.

4. Inappropriate mitigations for hazards.

We tend to see this when a safety-related requirement is overlooked until late in the process. At that point, the simplest way to address it seems to be slapping on a label, adding an alarm, or displaying warning text. But those are likely to be the least effective measures due to warning fatigue, noise, and desensitization. By considering all requirements along with risk assessment and mitigation in the early phase of a project, features to prevent hazardous conditions can be readily built into the software or physical device.

5. Failing to take into account use errors.

Given that use error is the underlying cause of approximately thirty percent of device errors, human factors considerations in the software can reap enormous benefits in terms of safety and efficacy. Error checking on data fields, or using technology like barcodes to prevent invalid values, should be standard for anything of importance in the application. Software that directs and controls the workflow can go a long way toward preventing harmful misuse, as can automatic maintenance detection and enforcement.
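Field-level error checking of the kind described above can be sketched in a few lines. The function name and the dose limits here are invented for illustration and are not taken from any real device specification; the point is only that invalid entries are rejected before they reach the workflow:

```python
# Hypothetical sketch: reject non-numeric or out-of-range dose entries
# at the data field, before they can propagate into the device workflow.
# The limits are illustrative, not from any real specification.
def parse_dose_ml(raw, low=0.1, high=50.0):
    """Parse a dose entry, raising ValueError on invalid input."""
    try:
        value = float(raw)
    except ValueError:
        raise ValueError(f"'{raw}' is not a number")
    if not low <= value <= high:
        raise ValueError(f"dose {value} ml outside allowed range {low}-{high} ml")
    return value
```

A rejected entry should prompt the user to correct it, which is a far more effective mitigation than a warning label applied after the fact.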

6. Unclear or disorganized documentation.

If auditors can’t find what they need, it can affect submission results or, at the very least, the time to achieve acceptance. Even including too much information can be detrimental, because it conveys the submitter’s lack of understanding of priorities.

Use checklists to ensure everything is organized and complete. Your checklist may change over time, but it will help reduce errors of omission when a project is coming to closure and stresses are high.

7. Not incorporating vendors into the verification and validation traceability model.

When you are selecting vendors, a formal audit of their quality system is a good idea. However, depending on your quality system demands and how you will work with a vendor, you can reduce this burden significantly. If, for instance, your vendor will operate under your quality system, then the burden is shifted to training your vendor rather than verifying your vendor has a quality system that meets your standards, like ISO 13485.

When working with outside vendors, or even different groups within an organization, check that any tools requiring validation, both theirs and your own, have actually been validated. Also be sure their document numbering and control mechanisms fit with your own. Assuming you both have document control systems that will be used, make sure there is a well-defined process for handing off from one system to the other. A week of all-nighters to correct documentation issues will not only lead to seven cranky days, but can also result in mistakes that won’t help your case.

By reviewing the development and verification practices in making medical devices compliant, seven can go from being an infamous number of sins to your lucky number.

++++++++++++
Andrew Dallas is president of Full Spectrum Software and is widely considered a software technology expert in the medical device and life sciences industries. Dallas is a member of MD+DI's editorial advisory board.

