Best Practices for Design and Development of Software Medical Devices

Posted in Medical Software by Heather Thompson on May 16, 2011

What is the difference between a standard operating procedure (SOP) that is used and an SOP that gets locked in a drawer? The answer might be open source process software.

Here’s a dirty little secret: You know those standard operating procedures that took so long to create and that you forced everyone to read? Of course you do! And assuming you have loyal employees who do what they are asked, the procedures were probably read, boring as they may be. That’s the good news. The bad news is that nobody (including the author) remembers exactly what those procedures say.

Medical device software is audited and controlled by standards defined by FDA, specifically 21 CFR parts 11 and 820. These are long and boring documents that are oddly phrased and difficult to apply. In many companies, training a software designer on standard operating procedures (SOPs) that satisfy the CFR involves having the developer read the company’s SOPs and then answer a few simple test questions about the documents. The tests are graded and signed and placed in a permanent file somewhere as proof that the developer knows the SOP.

A new developer’s job likely involves reviewing use cases, writing test scripts for those use cases, and running the scripts. When one task is finished, the next task is probably whatever the boss tells him to work on. Soon enough the SOPs that were read during the first few weeks of employment are a distant memory.

But, of course, those documents are important. Remember how the designer signed a test saying that he read and understood the SOP? Perhaps six months into the job, an internal audit of the project will be held. The purpose of this audit is to perform activities similar to those that might occur during an actual FDA audit for a 510(k). The designer must submit to interrogation by friendly (or otherwise) auditors.

The first question might be, “So, when you do your work, how do you know what to do?”

An answer from the new designer might be, “Uh, well, I guess I do whatever my boss tells me to do.”

Wrong answer.

The auditor continues, “No, what I mean is, how do you know what you are supposed to create?”

Here, the squirming might begin, “Um, because my boss tells me what to do… I read the use case and requirements, and write the test.”

The bizarre questioning continues for an uncomfortably long time, until finally the auditor gives up and tells the designer to return to his cubicle.

What Was the Problem?

So what went wrong during this audit? The most obvious answer is that the designer wasn't prepared. Yet even now, readers may not be entirely sure what the auditor wanted to know. The project manager is irritated because the employee doesn't understand the questioning, sure. But the overall impression is that the designer doesn't know how to do the job.
The real problem, of course, is the process. The stacks of procedures in place, the training—they all look good. Someone worked very hard to come up with everything needed to establish how projects must be designed and developed per FDA’s requirements. However, a vital element is missing: application.

No one on the team is necessarily doing anything wrong. The designer was just doing whatever the boss said to do. That’s a good thing, right? And the boss was doing whatever her boss said to do. Also good, right?
Well, not really. In fact, no one is following the SOPs, the very system that was written specifically to tell each person what to do. They all know about the CFR, and that procedures had to be created to satisfy it. But the practical terms are missing.

Applying SOPs

21 CFR 820 covers quality system regulations (QSR) for medical devices (including software medical devices). It outlines current good manufacturing practices (CGMPs) that govern design and development of a software medical device. It explains the controls that need to be implemented as part of a quality system, but it doesn't provide many concrete examples or specifics on how best to apply them.

For the purposes of this article, we will discuss SOPs only in terms of software development projects. It is necessary to have certain procedures laid out that explain how the QSRs are actually performed. To that end, there are two ways to write SOPs, each of which has its place.

The first approach is to tailor a number of procedures to be specific to the project. In this way, designers can handle changing technologies and environments. Over time, as things change, it will undoubtedly be necessary to revisit all of the SOPs. An even bigger problem is that this approach doesn’t lend itself to a one-size-fits-all set of SOPs. The outputs of the approach can become a procedural and documentation nightmare. This approach is fine for a small environment with only a few concurrent software projects that have similar environments, but it won’t scale for more extensive needs.

A more practical approach is to create SOPs that are agnostic in terms of environment and technology. This way, at least for all of the software medical devices being designed and developed, there is a set of procedures that make sense and remain relevant for more general and longer lasting use. This leads to a different approach to handling software project management. Such high-level SOPs, of course, are not pragmatic on their own (and this is a good thing!). They need another layer applied in a practical way.

In this second approach, the procedures impose no technical constraints. The development teams don't have to lean toward a single version control system, database, ticketing system, or tool simply because it would be too difficult to revise the SOPs. The SOPs do not hold users hostage to particular tools; they only define what designers must do. An SOP that prevents efficient software design by prohibiting the ability to evolve with technologies and practices is a bad SOP.
Many good developers view "process" as a dirty word. All too often companies let processes become so cumbersome that they serve no real purpose other than to frustrate those who are forced to work within their confines. In such a situation, the processes are sidestepped, rendering them useless.

So how do you make a good process? How can an SOP be created to serve rather than restrict productivity and good design?

The answer is to let SOPs serve as guidelines and to create work instructions at the per-project level that explain the practical use of those guidelines. This is done (as comprehensively as possible) during project planning. The corporate-wide SOPs are made specific to the project, giving designers the opportunity to determine the best applications. This means that for a given SOP, designers must have one or several work instructions explaining how to implement it wherever it applies. If a design control SOP states, "Versioning control is used for source code," the project plan (or configuration management plan) states, "Subversion is used for source code version control. The repository for project X resides at…" The project plan should include detailed work instructions to help clarify and apply the procedures.

Remember the story about the awkward audit interview? Sure, our hero had read the SOPs but didn't really know how they applied. This is because the SOPs (in this case) provided a high level of design control, but the practical application lived only in the heads of those who happened to be using the tools for the project. And even if, by luck, an SOP was followed, it was only because the approach being taken happened to be in line with a general guidance and not with the specifics of any work instruction. Those SOPs provided little more than lip service to the CFR.

A software designer's work instructions point to the specifics of how and when to use project tools such as Redmine, Subversion, and Hudson (discussed later in this article). Essentially, the SOPs can be thought of as the framework and the work instructions as the implementation. These work instructions should enable designers to use the technology available to make processes not only applicable, but also effective and helpful. It is important not to think of these procedures as a roadblock to work—on the contrary, they should make the work more efficient. If they do not, it's a clear indication that something in the process is off.

Open Source Is the Key

When it comes to handling design and development activities of a software project, it is a good idea to choose a plan that works well for any software project, not just one in which FDA approval or clearance is being pursued. The biggest needs of any software development effort are good communication and traceability. Open source software often provides the best means of achieving both.


Project designers want to know that use cases, business rules, requirements, documents, code revisions, hazards, and tests can be traced backward and forward. Despite having some excellent open source technology available (for free), many companies insist on manual documentation of such tracing. The result is a mess of documentation that is difficult (if not impossible) to maintain and likely to contain errors. You’re just begging an auditor to find a problem.

There is no need to hire someone to work full time to shuffle through this mess of documentation. Sure, a technical writer is appropriate, but there is plenty of technology available to make this aspect of design and development easy. There are already tools out there that, when used properly, can make software project management, from the highest level to the most detailed tracing, downright fun. So why isn’t everyone doing it this way? Some pervasive myths and objections can stymie adoption.

Myth: Third-Party Tools are Difficult to Use Within the CFR. Many OEMs are under the impression that the CFR requires that all software and tools be validated. The CFR states that OEMs must show intended usage of the software and show that the tools used work the way the company thinks they should and for its needs. Simple. The CFR doesn't exist to encourage poorly engineered software. Nor does it exist to tie users' hands and keep them from the best new tools available. It exists to ensure the tools are used correctly, per the intended use.

Myth: Open Source Software Cannot be Trusted. This objection can only be fully addressed in a separate article. The author has learned repeatedly that widely used open source software is often better than, and just as well supported as, its closed-source counterpart. One need only look as far as GNU for proof.

Objection: Technologies Change. How Do I Know Subversion Will Be Used in 10 Years? You don't, and it may not be. But as everyone out there using other legacy systems knows (Visual SourceSafe, anyone?), this shouldn't be a concern. The technology may change, but servers can be maintained and archived for as long as necessary.

Objection: We Don’t Have Time for Such Overhead or IT Support. At the risk of sounding cliché, OEMs don’t have time to overlook this. A little setup and thought up front streamlines the process and mitigates the risk of serious problems down the road. If a designer ever wonders how to update tracing long after the completion of a software requirement, document change or test, the procedures have failed. And if there is no practical approach to applying procedures, such an occurrence will happen at some point in the project. Sure, there is some upfront cost to be considered, but it comes with greater efficiency throughout the project life cycle.

The Tools

There is no one-size-fits-all approach when it comes to a setup, such as the one presented here. The tools used must be evaluated with consideration to the environment, project, and corporate needs. Although all the tools listed work for a wide range of needs, there are many other tools that are worthy alternatives (Trac, Git, Mercurial, etc.).
The main tools in this article are as follows:

  • Subversion for version control.
  • Redmine for ticketing, issue tracking, and the project wiki.
  • Hudson for continuous integration builds.

Other tools beyond these might also be useful, though they are not discussed in detail here.

These tools should be familiar, and there are others to choose from. They serve the needs of a software project when integrated, and they are widely used and supported. They are open source and therefore free (as well as having been written by some of the best software developers in the world). For use in the typical corporate environment, each tool can use LDAP authentication. As far as environment restrictions go, each of the tools can be run on Windows, Linux, Solaris, and Mac OS X.

Version Control

The earliest part of any software project is the planning phase. At this stage, people involved with the project have meetings and discuss high-level needs. There are probably some presentations and documents that are created. Project management plans have not been developed, but they should be thought about. And as stated previously, it is time to begin creating the work instructions (the application of SOPs) in this stage.

The design history file (DHF) of a project must contain all of the historical data that goes into the project, so even at this early stage it is necessary to choose a version control system and create a repository. There may be no tracing involved yet, but because the earliest phases of the project produce outputs that belong in the DHF, the version control tool should be determined and the repository established early on.

Project Tracing

Tracing is everything, and Subversion, with its changesets (more on changesets later), lends itself to integration with other tools used throughout the project. When used with issue tracking software, every problem can be linked directly with a set of items in the repository that are related to addressing and resolving that issue. A click of the mouse reveals a list of all the project file modifications related to a single issue.

It may be best to use a single version control system and repository for all of the material that goes into a project. Material such as project management plans, documents, presentations, code, test data, and results should all go into the same repository for a project. If documents are stored in one place and software code is stored in another (or in a different version control system altogether), project traceability could be lost.

As a side note, when placing binaries in a version control system, there is no merge path as there is with text file source code. This means it is good practice for team members, when editing documents, to place a strict lock on the file while editing. This can be done in Subversion. Strict file locking allows others to be notified that another user is currently working on a file.

Although a clear benefit of project tracing is the fact that all of the bits and pieces of a project are associated with the same repository, some may view this as a problem with the setup. However, in terms of an FDA-regulated software product, it is beneficial to relate all elements of the project in a single traceable repository. Documentation can be versioned (and tagged) along with project source code, and this may or may not be desirable, depending on project needs.

Subversion is better than many of its predecessors because of its introduction of changesets. A changeset provides a snapshot in time of the entire project repository. When documents, presentations, or source code are changed and committed to the repository, a new changeset number is created. Users can check out all items in the repository tree as of that changeset. Designers can pull everything relevant to the project at a specific point of change. There is no need to tag or label the repository to revisit a particular instance in time (although Subversion still allows tagging). Every single commit to the repository effectively results in a tag.
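The changeset model can be sketched conceptually in a few lines. The following Python sketch is purely illustrative (it is not how Subversion is implemented): every commit produces a new repository-wide revision number, and checking out any past revision recovers the entire project tree as of that moment.

```python
# Conceptual model of repository-wide changesets (as in Subversion):
# every commit snapshots the WHOLE tree under a new revision number,
# so each commit effectively acts as a tag.

class Repository:
    def __init__(self):
        self.revisions = [{}]  # revision 0: empty tree

    def commit(self, changes):
        """Apply {path: content} changes; return the new revision number."""
        tree = dict(self.revisions[-1])  # copy the latest tree
        tree.update(changes)
        self.revisions.append(tree)
        return len(self.revisions) - 1

    def checkout(self, revision):
        """Return the entire project tree as of the given revision."""
        return dict(self.revisions[revision])

repo = Repository()
r1 = repo.commit({"plan.doc": "v1", "main.c": "int main(){}"})
r2 = repo.commit({"plan.doc": "v2"})  # only the document changed

# Checking out r1 recovers the whole project at that point in time.
assert repo.checkout(r1)["plan.doc"] == "v1"
assert repo.checkout(r2) == {"plan.doc": "v2", "main.c": "int main(){}"}
```

The point of the sketch is that documents and code committed together are always recoverable together, which is exactly what project tracing requires.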

This is not to say that tagging is no longer useful. On the contrary, all software releases, including internal releases, should be tagged (and the work instructions should tell project engineers when and how to perform tagging).

Another advantage of Subversion is that it allows for the control and history of directories and files (including file and directory name changes). The most commonly used predecessor to Subversion, CVS, did not maintain a version history of a file or directory if it was renamed. Subversion can handle the renaming of any version controlled object.

Releasing Software

When software is released, it is typically given some kind of version number (e.g., 1.0). This is good, but it doesn’t convey the specifics of what went into that build. It’s a good idea to include the Subversion changeset number somewhere in the release so that everyone knows exactly what went into the build. For example, using a build.xml (or build.prop) file somewhere that includes the version number of the release, the Subversion changeset number, and the date of the build is a good way to designate the version. The build scripts can (and should) generate the last two values.
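One way to script this is sketched below. The property names are illustrative, and in a real build the changeset number and date would be supplied by the build script (e.g., from the output of a Subversion command) rather than hard-coded.

```python
from datetime import date

def build_properties(release_version, changeset, build_date=None):
    """Format a build.prop-style block identifying exactly what went
    into a build: release version, Subversion changeset, and date.
    In practice the changeset would be obtained by the build script
    from the repository, not typed in by hand."""
    build_date = build_date or date.today().isoformat()
    return (f"release.version={release_version}\n"
            f"svn.changeset={changeset}\n"
            f"build.date={build_date}\n")

# Hypothetical release 1.0 built from changeset 1482.
print(build_properties("1.0", 1482, "2011-05-16"))
```

Embedding this file in the release means anyone can later pull the exact repository contents that produced the build.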

As far as actually using Subversion goes, within Linux/Unix all commands are available from the command line. When working in Windows, TortoiseSVN seems to work well. It integrates with Windows File Explorer, showing icons that indicate the status of any versioned file. It also provides a nice interface for viewing file differences (even differences in Word documents) and repository history.

Ticketing and Issue Tracking

It's time to get rid of the notion that ticketing systems are bug trackers. One formerly popular open source tool, Bugzilla, even used the word "bug" in its name. But issue tracking does not need to imply that it is only useful for tracking software defects. It can be used for everything in software design and development, including addressing documentation needs, capturing software requirements, and handling software defect reporting.

Further, and this may require an entire additional article, it might be best to get away from using standard documents for the capture of software use cases, requirements, hazards, and so on. By capturing everything related to a software project in our issue tracking tool, designers can leverage the power of a tool such as Trac or Redmine to enhance team collaboration and project tracing.

When initially conceptualizing this article, the author planned to use Trac as the primary example. Trac is a great tool, but Redmine might be even better.

The principal shortfall of Trac is that it doesn’t lend itself to handling multiple projects. One installation of Trac can be integrated with only a single Subversion repository, and the ticketing system can only handle a single project. One benefit to Trac is that it can be used to group tickets into sprints, but by using subprojects in Redmine, a similar grouping can be achieved.

Redmine Has Wiki Power

Traditionally, documents hold all of the project management details, work instructions, use cases, requirements, and so on. If management is comfortable with it, such information can be placed in the wiki instead.

All developer setup, lessons learned, and other informal notes can be placed in the wiki, which allows developers to educate other developers on unique problems as they arise. A wiki page explaining the issue can help all developers do their jobs better.

Another powerful feature of the wiki is that with Redmine (and Trac), users can link not only to other wiki pages, but also to tickets (issues), projects, subprojects, and Subversion changesets.

Redmine has Subversion integration. With the integration of Subversion and Redmine, designers can link back and forth between the two. The work instructions explaining how to use the procedures should state that no ticket can be closed without a link to a Subversion changeset (unless, of course, the ticket is rejected).
Redmine can be configured to search for keywords in a Subversion commit message. For example, if a user is checking in several files that address issue #501, they might include a comment such as this: "Corrected such and such. This fixes #501."

Redmine can be configured to look for the word "fixes" and act on it: the trigger word closes the ticket and links it to the changeset created by that commit. Likewise, when viewing the Subversion history, the "#501" attached to the changeset shows up as a link to the ticket. The tracing works both ways.
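The keyword mechanism can be approximated with a simple pattern match. This is only a sketch of the idea; Redmine's actual parser is configurable and more sophisticated, and the trigger words shown here are examples.

```python
import re

# Sketch of Redmine-style commit-message scanning: a trigger word
# ("fixes" or "closes") followed by a ticket reference indicates
# which tickets the changeset should close and link to.
TRIGGER = re.compile(r"\b(fixes|closes)\s+#(\d+)", re.IGNORECASE)

def tickets_to_close(commit_message):
    """Return the ticket numbers a commit message claims to fix."""
    return [int(num) for _, num in TRIGGER.findall(commit_message)]

assert tickets_to_close("Corrected such and such. This fixes #501.") == [501]
assert tickets_to_close("Refactor only, see #17") == []  # no trigger word
```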

Redmine handles multiple projects and integration with different Subversion repositories. Trac’s biggest limitation is that it only handles a single project. Redmine handles multiple projects and can be used corporate-wide for all development. Each project can be tied to a different Subversion repository.

Additionally, a single project can have multiple subprojects. Redmine gives the flexibility to use subprojects for sprints and specific branched versions, as follows:

  • Hudson integration. With Hudson integrated, users don't have to leave the wiki to see how the continuous integration (CI) builds look. Not only that, a specific CI build can be launched from any page within the wiki or ticketing system.
  • Full configurability. Everything can be configured in Redmine, including the flow of tickets.

Use the Issue Tracking System for All Project Design and Development

It is not enough to simply leave functional requirements in the software requirements specification document. That alone provides neither sufficient tracing nor a clear path from idea to functional code. Instead, designers should institute the following steps:

  • All requirements and software design items are entered as tickets. For now they are simply high-level (or parent) tickets with no subtickets (child).
  • The development team, organized by a lead developer, breaks down each parent ticket into as many child tickets as necessary. Using the ticketing system, set up relationships so that the parent ticket (the requirement itself) cannot be closed until all child tickets are completed. (Note: It may be a good idea to require corresponding unit tests with each ticket.)
  • Hazards (as in hazard and risk analysis) are mitigated by a combination of documentation, requirements, and tests. Leverage the ticketing system to capture hazards and provide tracing in much the same way as with requirements. This does not remove the need for a traceability matrix, but it does enhance the ability to create and maintain it. (As a side note, the Redmine wiki could be ideal for use cases, requirements, hazard analysis, software design documents, and traceability matrices, thereby allowing for linking within. Such a concept may be a hard sell to management).
  • Not all requirements are functional code requirements; many are documentation or quality requirements. These should be captured in the same ticketing system. Use the system to label the type of a ticket for different categories. By doing this, even documentation requirements are traceable.
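The parent/child rule in the steps above can be expressed in a few lines. This is a conceptual sketch, not Redmine's data model; the ticket titles are invented for illustration.

```python
class Ticket:
    def __init__(self, title):
        self.title = title
        self.children = []
        self.closed = False

    def add_child(self, ticket):
        self.children.append(ticket)
        return ticket

    def close(self):
        """A parent ticket (e.g., a requirement) cannot be closed
        until every child ticket has been completed."""
        if any(not c.closed for c in self.children):
            raise ValueError(f"open child tickets remain under '{self.title}'")
        self.closed = True

req = Ticket("REQ-12: audit trail for record changes")  # parent = requirement
impl = req.add_child(Ticket("Implement audit logging"))
test = req.add_child(Ticket("Unit tests for audit logging"))

try:
    req.close()          # rejected: children are still open
except ValueError:
    pass

impl.close()
test.close()
req.close()              # now allowed
assert req.closed
```

The enforcement lives in the ticketing system rather than in anyone's memory, which is precisely what makes the tracing auditable.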

Tickets are created, closed, and modified throughout the project design and development process. The project plan (created before any code is written) explains the order in which tickets need to be done, focusing on the highest-level tickets. Nonetheless, it might be best to use some sort of iterative approach (and allow the development team to use subiterations, or sprints).

Any activity that results in a design output should first be captured as a ticket. The goal here is not to describe the process; rather, it is to illustrate that a ticketing system can and should be used for all project activities. Whatever your software design and development process (RUP, Agile, traditional waterfall, and so forth), the ticketing system can be used throughout.

Continuous Integration Builds

A software project should have CI builds. This goes for small team projects as much as it does for large ones. Although the CI build helps a large group collaborate, it holds tremendous value for the smallest team as well. Even a software engineer working alone can benefit from a CI build.

A build can be traced to a Subversion changeset, and therefore to the ticketing system (and entire process). The CI environment, at least with regard to the ongoing development of code, gives a single point of overview for all other activities. Changesets, tickets, build status, and test coverage are visible. With the proper add-ons, users can gain insight into the quality of the code being developed.

To pull this off, of course, the ticketing system must be used wisely. With Redmine or other ticketing systems, designers can capture elements of software requirements and software design as parent tickets. These parent tickets have one or many subtickets, which themselves can have subtickets.

At its most basic level, Hudson does only one thing: it runs whatever scripts it is told to run. The power of Hudson is that it enables users to log the outcome, keep build artifacts, run third-party evaluation tools, and report on results.
With Subversion integration, Hudson will display the changeset relevant to a particular build. It can be configured to generate a build at whatever interval is desired (e.g., nightly, hourly, or whenever there is a code commit).
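At its core, that loop (run the configured steps, record the outcome, stop at the first failure) can be sketched as follows. This is illustrative only; Hudson jobs are configured through its web interface, not with code like this, and the step commands shown are placeholders.

```python
import subprocess
import sys
from datetime import datetime

def run_ci_build(steps, changeset):
    """Run each build step in order, recording pass/fail and output,
    along with the changeset the build came from (as a CI server would)."""
    result = {"changeset": changeset,
              "started": datetime.now().isoformat(),
              "log": [], "success": True}
    for cmd in steps:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        result["log"].append((cmd, proc.returncode, proc.stdout))
        if proc.returncode != 0:
            result["success"] = False
            break  # stop at the first failing step
    return result

# A trivial "build" with two placeholder steps that both succeed.
report = run_ci_build([[sys.executable, "-c", "print('compile ok')"],
                       [sys.executable, "-c", "print('tests ok')"]],
                      changeset=1482)
assert report["success"]
assert report["changeset"] == 1482
```

Because the changeset travels with the build report, every build remains traceable back through the repository to the tickets it addressed.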

Every time a code commit of significance is made, it is good practice to check the CI build for success. If the build is broken, the user can immediately start correcting the problem (and if it cannot be corrected quickly, the changeset can be rolled back so that the CI build continues to work until the issue is fixed).

Hudson can be configured to e-mail team members on build results, and it may be useful to set it up so that developers are e-mailed should a build break.

The build artifacts of a project, in general, should not be a part of the repository (there are exceptions to this rule). Build artifacts belong in the CI build tool, where they can be viewed and used by anyone on the team. These artifacts include test results, compiled libraries, and executables.

Too often, a released build is created locally on some developer’s machine. This is a serious problem, because there is no good way of knowing what files were actually used to create that build. Was there a configuration change? A bug introduced? An incorrect file version?

Although developers have good reason to generate and test builds locally, a formal build used for testing (or, more importantly, for release) must never be created locally. Never.

Build artifacts are not checked into the source control repository for a number of reasons, the most important of which is that developers never want to make assumptions about the environment in which those items were built. The build artifacts should instead remain in the CI environment, where the conditions under which the build was generated are understood.
Also, because these builds remain easily accessible and labeled in the CI build environment, any team member can easily access any given build. It may become necessary to use a specific build to recreate an issue that has been released for internal or external use. If the label of the build and repository changeset number are known, users can pull the correct build from the CI build server to recreate the necessary conditions.

Run Unit Tests Over and Over. Developers should do whatever they can to keep the CI build from breaking. Of course, this doesn’t always work. CI builds can be broken countless times for varying reasons, as follows:

  • Forgetting to add a necessary library or new file.
  • Forgetting to commit a configuration change.
  • Accidentally including an unrelated change in the changeset.
  • Having the build work locally but not when built on the CI build server (this is why the CI build server should, as much as possible, mimic the production environment).
  • Having unit tests work locally but not on the CI build server because of some environmental difference.
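The last two failure modes in the list above come down to environmental differences. Here is a contrived sketch: code that silently depends on an environment variable passes its test on a developer's machine, where the variable happens to be set, but the same test fails on a CI server where it was never exported. (The variable name and paths are invented for illustration.)

```python
import os

def report_output_dir():
    """Code under test: where should build reports go?
    Silently depending on an environment variable is exactly the kind
    of hidden environmental assumption that breaks CI builds."""
    return os.environ.get("REPORT_DIR")  # set locally, unset on CI

# On the developer's machine the variable happens to be set:
os.environ["REPORT_DIR"] = "/home/dev/reports"
assert report_output_dir() is not None   # test passes locally

# On the CI server the variable was never exported:
del os.environ["REPORT_DIR"]
assert report_output_dir() is None       # the same check now fails
```

Catching this on the CI server, which should mimic the production environment, is far cheaper than catching it after release.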

Providing a CI build environment with good unit tests reduces the chance that another team member will discover (and have to fix) problems at a later time.

The most difficult software defects to fix (much less find) are the ones that do not happen consistently. Database locking problems, memory problems, and race conditions can cause inconsistent but serious defects.

To catch such problems, it's a good idea to have unit tests that go above and beyond what is traditionally expected and to implement automated functional testing. Team members might dismiss functional tests because they (incorrectly) feel that there is not sufficient time to do all the work.

With Hudson running all of these tests every time a build is performed, there might be situations in which a test fails suddenly and for no apparent reason. What worked before could fail even if there has not been any change to the code. This will happen.

Without going into too much detail, there are great tools, such as FindBugs, PMD, and Cobertura, that can be used with Hudson to evaluate the code for potential bugs, bad coding practices, and test coverage. These tools definitely come in handy. Use them.


In any software developer's career, some roles can make the process part of the job annoying at best and a barrier to productivity at worst. But there are other places in which "process" is not a dirty word; rather, it is so appropriately tied to daily activities that the process can actually be enjoyable. That kind of process is a very real application of a quality system, one that leads to highly effective use of the guidances and SOPs.

Revisiting the story shared at the introduction of this article: had the designer been armed with work instructions making the quality system an integral part of the daily process, he would have been much better prepared to respond to the auditor's interrogation. It's important to remember, however, that 21 CFR part 820 was not written so that individual contributors may appease auditors—it is there so that those implementing a quality system can figure out the best practices for their environment and needs.

Even so, if the goal of developers is only to achieve a 510(k) then they are missing the point. The goal should be, first and foremost, writing quality medical device software that does not harm or injure patients. The 510(k) is secondary. If we strive to make our quality systems useful in a practical way, getting the 510(k) can be a much simpler process.

Matthew Rupert is a software architect II for Talecris Biotherapeutics in Raleigh, NC. He has also worked and consulted for XStor Medical Systems as a senior software engineer.
