It is popular to rail against regulations generally and to blame the regulatory burden for stifling innovation. This is especially, but not exclusively, true in the digital space, where structured design is anathema and a freewheeling, make-it-up-as-you-go-along approach is the admired style. Software development captures this in its spiral design method, in which work begins before you have all of the requirements. This style is facilitated by software itself, since it is relatively easy to start, add to, and change, the latter exemplified by the reality of frequent “updates” and multi-digit version numbers. But weak design methods are not limited to software, and physical devices can also be rushed to completion without adequate planning and control. Avoiding weak methods is the intent of design controls, as codified in 21 CFR 820.30.
Therefore, a question with respect to design controls is: Are they a regulatory burden, good engineering, neither, or both? Another way to look at this is: Did FDA just make this stuff up in the mid-1990s, or did it adopt and codify well-developed design methodologies especially in the context of products with significant impacts on human health and wellbeing? These questions can be addressed by looking at each of the elements of design controls.
There is one front-end requirement, design and development planning, and three back-end requirements, design transfer, design changes, and design history file, all of which are important in any well-managed design process. However, I am going to focus on what I consider to be the design cycle itself, consisting of input, review, output, verification, and validation.
The purpose of design input is to “ensure that the design requirements relating to a device are appropriate and address the intended use of the device, including the needs of the user and patient.” Is it excessively burdensome to know at the outset what your product is actually for and how it will be used? Rather than a burden, this sounds like not only good engineering but good business practice. If you don’t know what the intended use of your device is, how likely is it that your design will meet the associated needs or be of any value? It should, however, be remembered that the input need not be fixed, since the evolving design, reviews, and feedback may make it necessary to update requirements. As input becomes specifications, care should be taken that these specifications not become too narrow or limited in scope. In this regard, meeting your specifications (see verification) has limitations if the specifications weren’t very good to start with.
Design review during the design cycle has always been an essential part of good engineering. Such reviews may occur at interim steps in the design process as well as at the end, when a final review is obviously appropriate. Traditionally, reviews of a design may have been done more casually than the regulation’s “formal, documented” language suggests. The inclusion of someone who does not have direct responsibility for the design may also be atypical and challenging for small organizations. Yet the designers themselves can be too enmeshed in their own design to recognize its shortcomings. This can be especially true for usability issues since the designers have a level of familiarity with the operating controls that most users will never achieve. As with many meetings, it can be challenging to make a design review meeting real and meaningful as opposed to a disgruntled waste of time. The "we are only doing this because they make us" attitude is not helpful in this regard.
As design controls were rolled out in the 1990s, a vice president of engineering whom I knew, and who I believed practiced and required good engineering, was concerned about whether the new regulations would disrupt his company’s current system and add burdensome new activities and documentation. His general position was that we do all of these things, albeit not necessarily as FDA had laid it out. I saw him after the regulations were in place, and design controls were by then being duly practiced in his company. He said to me, “We had our first design review meeting, and it was good!”
Verification asks the basic question of whether you designed what you said you were going to design; that is, did you meet your own requirements? Or in regulation speak, does the output meet the input (as the input may have evolved through review and iteration)? If it doesn’t, there are clearly issues that should be resolved. The regulations require documentation of the results of the design verification, including the methods used and the identification of the individuals performing the verification. Documentation may always seem like a burden and runs the risk of being perfunctory, but the concepts are clear. You can’t do an effective verification if you have no method, and signing off on it may increase a sense of personal involvement. But there is also a risk that knowing the test methods for specific attributes can cause the design to become too narrowly focused, designing to the test. This can be a good thing if the inputs and methods are reasonably complete and relevant, but it can be bad if the requirements are too tightly crafted or irrelevant and do not properly reflect what the input should have been.
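As a software-flavored illustration of the output-meets-input idea, verification can be thought of as a traceability check: every documented requirement should map to a passing, signed-off test. The sketch below is purely hypothetical; the requirement IDs, descriptions, and names are invented and are not drawn from any actual design history file or prescribed by the regulation.

```python
# Hypothetical sketch: verification as a requirements traceability check.
# All requirement IDs and data below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    verified: bool    # did the documented test method show output meets input?
    verified_by: str  # the regulation asks that the verifying individual be identified

def unverified(requirements):
    """Return the IDs of requirements not yet shown to be met by the design output."""
    return [r.req_id for r in requirements if not r.verified]

reqs = [
    Requirement("REQ-001", "Alarm sounds within 5 s of sensor fault", True, "J. Smith"),
    Requirement("REQ-002", "Battery lasts 8 h at full load", False, ""),
]

print(unverified(reqs))  # → ['REQ-002']
```

A real verification record carries far more (methods, dates, results), but the principle is the same: any requirement left in the unverified list is an open issue before release.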
The output is everything you have when you are done with the design—or perhaps think you are done. As above, the principle is clear, however burdensome doing it properly may be. In part, output is collecting what you have already done, and making sure it is all there. The design output must be documented, reviewed, and approved by identified individuals before release. The output becomes part of the design history file, which serves to collect in one place or point to all of the major design documents. Rather than simply being an archive, the design history file is a ready resource if there are any postmarket performance issues. If such a situation arises, you do not want to have to deal with scattered documentation in multiple files that are not easily located, or no documentation at all.
Validation is the comparison of the finished device to the real needs of users and patients. This includes testing of production or near-production units under actual or simulated-use conditions. It may need to include actual packing and shipping, as well as set-up and installation. If a custom-built prototype is hand delivered and set up by the most skilled employees, real-world effects may be lost. The important distinction between verification and validation is that verification is a comparison to requirements that were abstracted from the real world of intended use, while validation is supposed to be a return to the real world to see if the product is actually functional and usable. The distinction between user needs and specifications is captured in part in the definition of design validation in 21 CFR 820.3, which is that it means establishing by objective evidence that device specifications conform with user needs and intended uses. A cutesy way of saying this, one that requires careful reading, is that verification asks “did you design the thing right?” while validation asks “did you design the right thing?” Of course, validation must be fully documented. Proper validation seems more than reasonable for the design of safe and effective products, and it protects against designs that are not capable of performing as needed and/or overly challenge users.
The regulatory burden argument generally takes the form that if it wasn’t for these nasty requirements, we could be zipping along, introducing all kinds of medical devices more quickly and cheaply. The missing part of this argument is whether such products would be good or what proportion of them would be good. The burden argument should carry the task of pointing to specific parts of regulations that are not just possibly burdensome but also unnecessary. This should include what the expectations would be of a design process that has weak inputs, no internal reviews, and little or no testing. This comparison is shown in the accompanying table. In my opinion, such expectations would be low, and the time to find out is not after the device has been marketed and used.
| Design control element | With design controls | Without design controls |
|---|---|---|
| Input | Establish appropriate design requirements to meet the needs of the user and patient | Don’t understand the need, just start designing |
| Review | Structured, documented reviews as required during the design cycle | Casual reviews, if any |
| Verification | Demonstrate that the design meets the requirements developed as input | Go with what you’ve got, or write requirements based on the design |
| Output | Capture all of the pertinent design documents | It’s all here somewhere, at least what we wrote down |
| Validation | Demonstrate that the finished device meets the real needs of the real users | Sell it and see what happens |