FDA Guidance on 510(k) Submissions

Medical Device & Diagnostic Industry Magazine

An MD&DI June 1997 Column

LETTERS

To the Editor:

In the Help Desk column entitled "Submitting a 510(k) for Changes in Device Design" (March 1997), FDA helped the editors determine whether changes to a preamendment device warrant the submission of a 510(k) premarket notification. The device in question can be inserted into the body to aspirate body fluids. The changes to the device included the addition of an O-ring to make it leakproof, the use of transparent plastic instead of the original colored plastic, and the addition of a sounding line to gauge the depth of insertion.

FDA stated that these changes required the submission of a 510(k) because they could significantly affect the device's safety and efficacy. I believe industry generally disagrees with FDA and would question the rationale the agency used to reach this conclusion. If FDA intends to apply the document in the manner described, its use could adversely affect patients, manufacturers, and the agency itself.

FDA's first contention, that adding an O-ring is a significant change and thus justifies the submission of a 510(k), is wrong. It is the manufacturer's responsibility to evaluate the material used in and the design of the O-ring to ensure its suitability for use. After doing so, the maker has established that the O-ring presents no new risk and does not significantly affect safety or efficacy. Moreover, the company has been manufacturing and marketing the device for over 20 years, so it should possess the knowledge and experience to make intelligent incremental improvements. A 510(k) should not be required for this change.

Second, FDA argues that changing the material that comes in contact with body tissue and adding a sounding line warrant the submission of a 510(k). In the normal course of doing business, a manufacturer would evaluate a material for its suitability for use, including its reaction to sterilization. The switch to transparent plastic was probably made to allow users to view the aspirated fluids. If a known material that conforms to a recognized standard is used, the change can be considered insignificant; it most likely would not affect the device's safety or efficacy. Such a change shouldn't require a 510(k) submission.

No good decision can be made concerning the need for a 510(k) in this example by debating the technicalities of the guidance document. A good decision can be made by using common sense. Essentially, this device is the same one with the same intended use as the device distributed commercially by the same company for more than 20 years. It is not a new device. What public benefit would be derived from a 510(k) for the changes made to it? Aren't there other truly new devices with unknown risks competing for the resources that would be exhausted on a submission for the changes to this device?

The guidance document removes common sense from decision making. This example shows how the guidance can be used to justify a 510(k) submission for even the most trivial changes to a device. A major orthopedic device company may manufacture as many as 50,000 separate items, to which it makes hundreds of engineering changes over the course of a year. If most of those changes were to require 510(k)s, thousands of additional submissions would be added to FDA's workload. Moreover, it would discourage manufacturers from making incremental device improvements that benefit patients and users.

Orthopedic device manufacturers fear a return to the days of 1993 when hundreds of 510(k) submissions were backlogged at FDA. Total elapsed review times were as long as 18 months for orthopedic devices.

FDA reform legislation is needed to provide flexibility and encourage the use of common sense in the regulatory process. New legislation should point FDA resources toward matters presenting the highest actual risks to the public health. It should also minimize FDA activities that exhaust resources in order to control speculative and insignificant risks.

In the end, how FDA applies this new guidance will determine whether it floods itself with 510(k) submissions for trivial changes to devices.

Lonnie Witham
President
Orthopedic Surgical
Manufacturers Association
Warsaw, IN

FDA replies:

FDA's recently issued guidance provides the agency and industry a common ground for determining when to submit a new 510(k) for changes to an existing device. The agency recognizes that manufacturers make many changes to their devices each year and often question whether to submit new 510(k)s.

While developing this document, FDA received comments from industry and other groups as well as responses to a notice published in the Federal Register in the fall of 1995. The agency's goals were to eliminate the submission of unnecessary 510(k)s for trivial changes and to provide a mechanism for identifying those changes that could significantly affect safety and effectiveness.

The guidance stresses three basic premises:

  • Manufacturers must determine whether a new 510(k) is needed for the planned change.
  • Each change must be evaluated individually, using a series of flowcharts (i.e., labeling, performance, and materials flowcharts).
  • Industry can contact the agency if it needs further guidance or clarification.

The example used in MD&DI's Help Desk column gives little information about the aspirating device and does not provide its indications for use. Without this information, it is difficult to determine the need for a new 510(k).

How a change affects a device's safety and effectiveness varies greatly according to a device's indications for use. For example, modifying an aspirating device used at the skin's surface is different from changing a preamendment device so it can be used for aspirating through an endoscope (a possibility with the addition of the sounding line and O-ring). Another variable is whether the device is to be used to aspirate blood for reinfusion.

Finally, the type of body tissue contacted by the new material raises other concerns. If the new material will contact body tissues in vivo (requiring additional testing according to ISO 10993-1), the manufacturer would need to submit a new 510(k) and evaluate the effect of the changes using the technology/performance flowchart of the guidance document. If the new material will not directly contact body tissues in vivo, and the aspirated fluids are not returned to the patient, then the manufacturer needs to determine whether the material change will affect device performance specifications.
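
The decision path described above can be summarized roughly as follows. This is an illustrative sketch only, not the agency's actual flowchart; the function and its return categories are hypothetical.

# Illustrative sketch of the material-change decision logic described above.
# Not FDA's flowchart; the function and categories are hypothetical.

def material_change_decision(contacts_tissue_in_vivo: bool,
                             fluids_returned_to_patient: bool,
                             performance_specs_affected: bool) -> str:
    """Return a rough recommendation for a material change to the device."""
    if contacts_tissue_in_vivo:
        # A new tissue-contacting material generally triggers biocompatibility
        # evaluation (e.g., per ISO 10993-1), a new 510(k), and review under
        # the technology/performance flowchart.
        return "Submit a new 510(k); evaluate via the technology/performance flowchart"
    if fluids_returned_to_patient:
        # Aspirated fluid reinfused into the patient raises similar concerns.
        return "Treat as indirect tissue contact; a new 510(k) is likely needed"
    if performance_specs_affected:
        return "Assess the effect on performance specifications; a 510(k) may be needed"
    return "Document the change; a new 510(k) may not be required"


if __name__ == "__main__":
    print(material_change_decision(False, False, True))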

FDA is currently working to reengineer the 510(k) process. We are analyzing design controls under the new quality system regulation and how they may affect the need for future 510(k)s. Additionally, the 510(k) reengineering team is considering the following:

  • Identifying additional device categories for exemption from 510(k)s.
  • Relying on conformance with recognized consensus standards, guidance documents, and other special controls to reduce the amount of information contained in 510(k)s.
  • Using third-party organizations to function as surrogates for FDA in the review of 510(k)s.

The agency wants to expend its limited resources on those activities that provide the most public health value to industry and the public.

Heather Rosecrans
Program Operations Staff
Office of Device Evaluation, CDRH, FDA
Rockville, MD


Copyright ©1997 Medical Device & Diagnostic Industry

DESIGN CONTROLS


An MD&DI June 1997 Column

FIRST PERSON

The managing director of LaBudde Systems, Inc. (Westlake Village, CA), outlines a regulatory strategy for improving design.

Contrary to popular belief, the design control requirements that are outlined both in ISO 9000 standards and in FDA's new quality system regulation are not only necessary, but should be toughened. As currently practiced, engineering lacks rigorous methods for capturing design requirements and verifying and validating the design itself. Mandating such methods through regulation is the only way to ensure the uniform development of quality products.

FAILURE OF LOGISTICS

ISO 9000 standards, total quality management, and the FDA quality system regulation require that manufacturers include logistics items in their development and manufacturing protocols. These include packaging, labeling, handling, shipping, installation, training, maintenance, and repair considerations. Engineers should also address these issues.

But solving logistics problems isn't enough. Poor product quality often results from bad design. Using design controls is necessary to ensure better products. But the ISO 9000 design controls, now part of FDA's device quality system regulation, do not go far enough. They require only that development include planning, requirements (design input), designs (design output), and document controls. Unfortunately, the controls do not define how to implement these requirements, so there is a substantial risk of design problems. Simply documenting a risk-prone process will not correct design deficiencies.

FAILURE OF REQUIREMENTS

Our engineering discipline lacks sufficient methods for design requirements capture and validation. Most companies fail to capture all product requirements; most of their product specifications are incomplete, ambiguous, and inaccurate. In a large number of companies, marketing and engineering departments have trouble communicating with each other, thereby stifling requirements generation.

More often than not, users are left out, seldom getting the chance to validate requirements. Most requirements are documented in natural language (text) documents that cannot define complex function, behavior, and performance requirements unambiguously. Each technical discipline uses different methods of requirements generation, leaving substantial communication gaps between the various technologies used to create the product. Designs cannot be properly verified if the requirements are inadequate, and inadequate requirements often lead to products whose function, behavior, or performance fails to meet customer needs.

Most companies don't emphasize requirements or understand how they relate to design. The interfaces among requirements capture, design, and documentation are usually poorly defined.

FAILURE OF PRODUCT DESIGN AND VERIFICATION

Design verification also needs improvement. Medical devices often consist of components using chemical, mechanical, electronic, and software technologies. Engineers must consider each technology when verifying product performance. But the current tools they use are inadequate.

There are several tools available for each of the technologies used in product design. Mechanical and electrical engineers have reasonably good computer-automated design-capture and design-simulation tools. However, these tools are mostly bottom-up (i.e., they require a detailed design) and have no place to embed top-level requirements into the design elements.

Electronic design products are among the most advanced design technologies. Yet, most schematic-capture products have no way to embed design requirements into the schematics. One notable exception is IntuSoft's Design Validator Spice tool, which allows requirements definition and simulates performance. Performance requirements are usually left to manual linking and verification, which are error prone.

Of all engineers, software engineers enjoy the fewest benefits from computer automation. There are signs, however, that some engineering and Windows-based products are being better supported with integrated development environments that allow high-level design entry and automatic code generation. But simulation is still missing from those products.

There are very few cross-disciplinary tools that capture and evaluate designs based on several technologies. A tool developed for electrical design cannot be used for mechanical or software design. These tools do not support a high-level abstraction of product function, behavior, or performance, but rather require a physical design. So engineers must currently manage interdisciplinary products by using cross-functional teams rather than high-level design tools. The result is that total system behavior is often not understood.

Many companies manufacture products after testing only a limited number of units. Our research shows that engineers use few statistical sampling techniques, resulting in inadequate design verification. A sample of one is only valid for verifying math models. Also, engineers often specify acceptance test tolerances that do not consider product degradation due to environmental and life cycle factors. The result is poor product performance in the customer's environment.
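
The margin accounting called for here is simple to state. As a minimal sketch with hypothetical numbers, a factory acceptance limit is tightened (guard-banded) by the expected environmental shift, aging drift, and measurement uncertainty so that a unit passing the test still meets its specification in the field.

# Minimal sketch of guard-banded factory acceptance limits. All numbers are
# hypothetical; the point is that the test limit is tighter than the field spec.

field_spec_limit = 10.0        # maximum allowed error in the field (arbitrary units)
environmental_shift = 1.5      # worst-case shift over operating temperature and humidity
aging_drift = 1.0              # expected drift over the service life
measurement_uncertainty = 0.5  # uncertainty of the factory test equipment

factory_test_limit = field_spec_limit - (environmental_shift
                                         + aging_drift
                                         + measurement_uncertainty)

print(f"Factory acceptance limit: {factory_test_limit:.1f}")  # 7.0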

LIMITATIONS OF CURRENT MANAGEMENT SYSTEMS

Nearly all existing project management planning and scheduling tools are based on task planning and the waterfall life cycle planning model. Unfortunately, users often end up spending a lot of time and money updating these tools during development. A project with more than a half-dozen engineers requires a full-time operator just to keep up with changes. And on large projects, the tool rapidly gets out of synch with the real project, and the level of detail that can be tracked is limited.

Existing planning systems can't link plans to item-tracking databases, requiring managers to build their own systems and manually link each item. In addition, none of the existing planning systems can be linked to requirements-capture tools, design tools, or production systems like manufacturing resource planning (MRP) systems. As a result, these planning tools force managers to connect management tools to engineering by hand.

TOOLS NEEDED FOR THE FUTURE

Requirements-Capture Tools. A requirements-capture tool is a device (usually a computer program and database) that allows users to define what a product must do (i.e., functional design). It defines top-level (user-oriented) requirements and translates them into lower-level (designer- or manufacturer-defined) requirements with complete traceability. The tool helps prioritize requirements and automatically generates specifications for various sections of the product.

In the future, marketing and engineering staff will analyze user involvement with products. We will collect all product requirements and interfaces into a top-down system-requirements-capture tool that is independent of implementation technology (we will not be concerned with how the product will be made at this level). We will be able to simulate product functions, behavior, and performance before design details are determined. Thus, our requirements-capture tool will need to allow us to link requirements to equations that can produce actual numerical values through the use of simulation or analysis tools. Most of the equations will be contained in drag-and-drop prebuilt libraries of standard functional parts. Users will be able to simulate interactions with product interfaces by using virtual reality to validate product function, behavior, and performance--all before we even know how the product will be made!

The requirements-capture tool will need to have placeholders for all requirements, including function, behavior, performance, environments, safety, logistics, manufacturing, and disposal. It will need to trace and validate requirements automatically. It should make complex system designs easy to understand by using graphics rather than descriptive language, and it should allow project managers to generate reports on the status of all requirements and identify those that are missing or conflicting. Compliance or trace matrices for tracking the allocation and validation of requirements will be generated automatically.
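
A minimal sketch of the kind of traceability record such a tool would maintain is shown below. The fields and identifiers are assumptions for illustration, not a description of any existing product.

# Sketch of requirements capture with parent/child traceability and a simple
# status report. Record fields and identifiers are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    req_id: str
    text: str
    category: str                    # e.g., function, performance, safety, logistics
    parent: Optional[str] = None     # top-level requirements have no parent
    validated: bool = False

def trace_report(requirements: list) -> None:
    """Flag requirements that are unvalidated or whose parent trace is missing."""
    ids = {r.req_id for r in requirements}
    for r in requirements:
        if r.parent is not None and r.parent not in ids:
            print(f"{r.req_id}: dangling trace to missing parent {r.parent}")
        if not r.validated:
            print(f"{r.req_id}: not yet validated")

reqs = [
    Requirement("SYS-1", "Aspirate fluid at 50 ml/min", "performance", validated=True),
    Requirement("MEC-4", "Pump displacement 0.5 ml/rev", "performance", parent="SYS-1"),
    Requirement("SW-7",  "Alarm if flow < 10 ml/min", "safety", parent="SYS-9"),
]
trace_report(reqs)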

Some companies are already offering parts of these system tools. Ascent Logic's RDD-100, Mesa Systems' Cradle, and i-Logix's Statemate are good examples. These are expensive, complex system-level tools that integrate requirements capture with function and behavior simulations (and some automatic code generation). Because they require extensive training, the cost and time needed to use them may be too much for a small company. They are also incomplete: they do not have placeholders for all requirements, and they do not simulate all functions, all types of behavior, or, especially, performance requirements.

There are some tools, such as Vitech's Core and Requisite, Inc.'s RequisitePro, that use natural language for requirements capture. They have many of the needed features but have difficulty with graphical representations.

Until low-cost system-level requirements tools are available, we need to devise them ourselves. We can generate, capture, and validate requirements effectively by using a combination of organizational management and manual and semiautomatic techniques.

Organizational management involves manually ensuring that requirements are reviewed by, approved by, and disseminated to the proper organizations by physically moving documents around within the organization. This requires review and follow-up to ensure that requirements and design are properly related.

Current off-the-shelf office automation products enable users to build tools that integrate and link documents, drawings, specifications, and reports. Drawing tools produce graphics of product function, behavior, and performance, and capture them with traceability. One example of a semiautomatic tool (one that requires some manual links) is Shapeware's Visio, which attaches performance requirements to graphical objects in drawings and links objects to databases. It has an interface to AutoCAD and stencils for nearly every technical discipline. A company can build its own system requirements and design-capture tool by using many of these off-the-shelf products, as long as it has a good vision of requirements capture, understands how to handle cross-discipline interfaces, and has organizational management skills.

Design Tools. A design tool is a device (usually a computer software package) that allows users to define how a product is made (i.e., its physical design). In the future, we will need design tools for systems composed of all the technologies used to design products. They will need to include or interface with mechanical, electrical, software, optical, chemical, and biological design tools. They will need to be tightly coupled to requirements-capture tools by assigning requirements to physical design, and they will need to be able to translate high-level abstractions of product design into physical design. High-level abstractions of function, behavior, and performance will allow users to easily design complex systems by using hierarchical abstractions. Engineers will verify the requirements and designs by using automatic test tools, such as analysis and simulations, even before the product's physical design is selected. The tool should automatically check design against requirements for completeness and consistency. It should allow proven designs to be reused. Any changes to the reusable part should be noted so they are properly verified.
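
At its simplest, the completeness and consistency check described above reduces to cross-referencing two sets: every requirement should be allocated to at least one design element, and every design element should trace back to some requirement. The sketch below uses hypothetical element and requirement names.

# Rough sketch of a requirements-to-design completeness/consistency check.
# Names are hypothetical.

requirements = {"SYS-1", "SYS-2", "SYS-3"}

design_elements = {
    "pump-assembly": {"SYS-1"},
    "control-board": {"SYS-1", "SYS-2"},
    "enclosure":     set(),          # no requirement allocated -- flagged below
}

allocated = set().union(*design_elements.values())

unallocated_reqs = requirements - allocated
orphan_elements = [name for name, reqs in design_elements.items() if not reqs]
unknown_traces = allocated - requirements

print("Requirements with no design element:", unallocated_reqs or "none")
print("Design elements with no requirement:", orphan_elements or "none")
print("Traces to undefined requirements:", unknown_traces or "none")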

Design engineers will be working more with high-level designs rather than low-level coding or circuit designs. Devices will be created automatically from high-level abstractions that output mechanical, chemical, electrical, and software designs that are defect-free. For example, a phase-locked loop will be designed automatically from bandwidth and stability specifications. The tool will automatically generate VHDL specifications for hardware and computer code for software.
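
The phase-locked loop is a good illustration of how mechanical such synthesis can be: for a classical second-order, type-2 loop, the filter time constants follow directly from a chosen bandwidth and damping ratio. The sketch below shows the kind of calculation such a tool would automate; the bandwidth, detector gain, and VCO gain values are assumptions for illustration.

# Sizing the loop filter of a classical second-order, type-2 PLL with an
# active PI filter, F(s) = (1 + s*tau2) / (s*tau1), from a target bandwidth
# and damping ratio. Gain values below are assumed.

import math

f_3db = 10e3              # target closed-loop -3 dB bandwidth, Hz (assumed)
zeta = 0.707              # damping ratio chosen for good stability margin
Kd = 0.5                  # phase-detector gain, V/rad (assumed)
Ko = 2 * math.pi * 100e3  # VCO gain, rad/s per volt (assumed)

# Standard second-order-loop relation between -3 dB bandwidth and natural
# frequency: w3dB = wn * sqrt(a + sqrt(a**2 + 1)), where a = 1 + 2*zeta**2.
a = 1 + 2 * zeta**2
w_3db = 2 * math.pi * f_3db
w_n = w_3db / math.sqrt(a + math.sqrt(a**2 + 1))

tau1 = Kd * Ko / w_n**2   # sets the natural frequency
tau2 = 2 * zeta / w_n     # sets the damping

print(f"natural frequency: {w_n / (2 * math.pi):.1f} Hz")
print(f"tau1 = {tau1:.3e} s, tau2 = {tau2:.3e} s")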

There are some tools available that give us an idea of what the future holds. The MathWorks' Simulink and Integrated Systems' MATRIXx capture and test system designs and output computer code for the software portion of a design. But they still can't link designs to requirements. Presently, there are no technology-independent, high-level system design tools that cover all design technologies, but engineers can use many of these existing products as a bridge between high-level abstractions and physical design.

Project Management Tools. Future project management systems will need to allow users to track deliverable items rather than tasks and to support other development life cycle models, such as the spiral model. They should be tightly coupled to requirements-capture and design tools. When a requirement or design element is created, a placeholder should appear in the management tool so that resources and schedules for the item can be planned and updated. Changes in requirements, design, or planning should be propagated automatically and simultaneously everywhere.

Additionally, the engineering tool set should be tightly coupled to the manufacturing systems like MRP, so that everyone in the company can determine development status.
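
A minimal sketch of the requirements-to-deliverables coupling described above follows; the structure and identifiers are hypothetical.

# Sketch of deliverable-item tracking coupled to requirements: creating a
# requirement automatically creates a placeholder deliverable whose status
# managers can plan and roll up. Identifiers are hypothetical.

from dataclasses import dataclass

@dataclass
class Deliverable:
    item_id: str
    source_requirement: str
    owner: str = "unassigned"
    status: str = "planned"    # planned -> in work -> verified

plan = {}

def add_requirement(req_id: str) -> None:
    """Adding a requirement creates its deliverable placeholder in the plan."""
    plan[req_id] = Deliverable(item_id=f"DEL-{req_id}", source_requirement=req_id)

for rid in ("SYS-1", "SYS-2", "SW-7"):
    add_requirement(rid)
plan["SYS-1"].status = "verified"

open_items = [d.item_id for d in plan.values() if d.status != "verified"]
print("Open deliverables:", open_items)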

CHANGES TO REGULATORY REQUIREMENTS

To correct the deficiencies in how engineering is currently carried out, regulatory requirements need to be modified to define good engineering practices. The following additions to design controls will help eliminate many design deficiencies:

  • Require user-level analysis to be conducted and documented during design input.
  • Require that all requirements be validated before they are approved.
  • Require traceability of requirements as designs are translated into individual components.
  • Require that all levels of requirements be verified during development.
  • Require that at least two levels of requirements specifications be used, i.e., a functional design (what), and a physical design (how).
  • Require that interface control documents be created for every product interface at the user level, and between every engineering discipline (e.g., mechanical, electrical, and software).
  • Require that all requirements be verified sometime during development.
  • Require that the number of engineering models tested be consistent with statistical sampling methods (see the sample-size sketch following this list).
  • Require formal safety analysis at appropriate points in development.
  • Require that factory acceptance tests factor in margins for environmental and life degradation of the product.
  • Require that engineering design specs be consistent with production test margins.
  • Require the use of formal risk management techniques.
  • Require written meeting minutes and item tracking during development.
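
To illustrate the statistical sampling point flagged in the list above, the classic zero-failure "success-run" relation gives the number of units that must all pass a test to demonstrate a stated reliability at a stated confidence. The values below are examples only.

# Zero-failure "success-run" sample size: n = ln(1 - C) / ln(R), the number of
# units that must all pass to demonstrate reliability R at confidence C.

import math

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Units that must all pass to demonstrate `reliability` at `confidence`."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Demonstrating 95% reliability with 90% confidence requires about 45 units,
# far more than the single engineering model many projects test.
print(success_run_sample_size(reliability=0.95, confidence=0.90))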

    Implementing current regulatory requirements and including the above additions actually reduces development time and cost. Contrary to popular belief, more-structured engineering is the least expensive way to develop a product. Until engineers see how design affects the cost of manufacturing and logistics as well as user costs, they will continue to believe that shortcuts are acceptable and that regulation is nothing but an unnecessary burden.


    Copyright ©1997 Medical Device & Diagnostic Industry
    Getting a Grip on Hand/Product Interactions


    An MD&DI June 1997 Column

    DESIGNER'S TOOLBOX

    One of the bigger challenges in medical device design is minimizing human error. Such error manifests itself in the form of actuation of the wrong controls, improper settings, or delayed responses to critical events.

    Figure 1. Anthropometric data can be plotted to provide dimensional information for product design.

    More than 95% of all medical devices involve the dexterous use of hands. This article illustrates the use of usability studies for testing and evaluating hand/product interaction. Usability studies measure consumers' performance and their perception of their performance when using a product, a new product concept, or design models. A good usability study tells how consumers perceive a product's quality, aesthetics, performance, and value. It also provides quantitative human performance measures of a design's effectiveness.

    The next section outlines several research techniques for designing a good usability study. These techniques enable a designer to account for variations in hand size, grasping strategies and grip architectures, tactile and kinesthetic qualities, control layout and design, and how users perceive these factors.

    VARIATIONS IN HANDS

    There is a 13.9-in. difference in reach between a 5th-percentile Asian woman and a 95th-percentile African American man. Such anthropometric data have been aggregated into a few publicly available databases compiled by organizations such as NASA. These databases have also been incorporated into software packages for manipulating and analyzing body parts in their realistic positions. The databases typically provide data on the variations in physical dimensions as a function of gender, race, and, to a lesser extent, age. A product's success in accommodating the range of users' hand sizes relies on selecting appropriate dimensions and applying them to the design.

    Reach Zones. Identifying static dimensional characteristics is the first step in determining an appropriate design for a device that involves human interaction. Such data can then be plotted as data points in a dynamic context (see Figure 1). Software programs such as Mannequin (Humancad, Melville, NY) and Jack (Center for Human Modeling and Simulation, Philadelphia) enable designers to determine the best digital reach zone.
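
If a reach or hand dimension is treated as approximately normally distributed in the target population, the 5th- and 95th-percentile bounds used to size a reach zone follow directly from the database mean and standard deviation. The sketch below uses placeholder values, not figures from any particular database.

# Deriving percentile bounds from anthropometric data, assuming an
# approximately normal distribution. Mean and SD are placeholders.

from statistics import NormalDist

mean_reach_in = 31.0   # hypothetical mean forward reach, inches
sd_reach_in = 1.9      # hypothetical standard deviation, inches

reach = NormalDist(mu=mean_reach_in, sigma=sd_reach_in)

p5 = reach.inv_cdf(0.05)    # 5th-percentile user
p95 = reach.inv_cdf(0.95)   # 95th-percentile user

# A control placed within the 5th-percentile reach accommodates ~95% of users.
print(f"5th percentile: {p5:.1f} in., 95th percentile: {p95:.1f} in.")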

    Grasping Strategies. Grip architectures refer to the number of fingers, the degree of involvement, and the configuration used to grasp and manipulate a tool or control. A long thin grip field typically elicits a fingertip grip, whereas more of the palmar surfaces of the digits are used for larger grips. If the action requires dexterous control, as in the use of a surgical instrument, then the diameter, along with several other factors such as grip material, texture, and topological detailing, will directly affect user performance. Using anthropometric software, designers can identify optimal grip architecture and grasping strategies.

    Tactile and Kinesthetic Feedback. Touch can vary as a result of aging or skin surface qualities, such as gloved or callused hands. This factor should be taken into account in designing the type and form of feedback provided to users.

    Control Design. Selection of a particular control should be based on the feature or value being manipulated, the number of levels in a discrete control, and the total number of controls being manipulated by the hand.

    Perception and Action. Visual information is sometimes referred to as a product's design language or product semantics. The designer can articulate the design of the product to make it more or less intuitive to use. Color, form, scale, organization, and texture can each be manipulated in software to contribute to the explicitness of use and operation.

    CONCLUSION

    These factors exemplify the design issues that product developers need to consider to optimize hand/device interaction. Systematically evaluating design and ergonomic requirements using anthropometric data will have a direct and tangible effect on a product's success in the form of fewer errors, better efficiency, and improved functionality. Usability studies provide the basis for product planning, design, and marketing.

    Bryce G. Rutter is principal of Metaphase Design Group (St. Louis).


    Copyright ©1997 Medical Device & Diagnostic Industry

    In Its Bold New Course, FDA Needs Industry's Help


    An MD&DI June 1997 Column

    WASHINGTON WRAP-UP

    Anticipating budget cuts, CDRH dodges the question of device user fees but suggests streamlined procedural changes and third-party reviewers.

    The medical device industry and its primary federal regulators are at a political crossroads. Will the two bury their differences and move forward together? Or will they follow separate, fractious, and diverging paths? These questions will be decided this year when Congress passes the first serious FDA-downsizing federal budget and, possibly, makes its final move on separate FDA-reform legislation--which may well include medical device user fees.

    At press time, there were strengthening signals that omnibus FDA reform would be tied to renewal of the Prescription Drug User Fee Act (PDUFA). This strategy is rooted in the pragmatic realization that, if addressed separately, FDA reform would be more likely to fade away, as it did last year. Both FDA and the drug industry are demanding renewal of PDUFA, which has been an outstanding success at expediting product approvals. And as long as overall FDA reform (including medical device reforms) is linked to PDUFA legislation, the temptation to permit the agency to charge user fees for devices will be strong.

    In March, Bruce Burlington, director of FDA's Center for Devices and Radiological Health (CDRH), sent the medical device community (industry, professional groups, and interested consumer organizations) a 14-page report that combined a plea for help with unusual frankness. Its bottom line: Everything is on the table (which is not the same as saying that every wish will be granted).

    Facing budgetary cuts that in any previous year would have been called disastrous (30% is often mentioned), FDA is clearly running scared. Burlington's pitch to the device community is rooted in his desire to preserve what he can while yielding as much as is reasonable to the agency's critics.

    His appeal is built on the credibility CDRH has gained through its internal reforms. In three years, he points out, 510(k) submissions have moved from a backlog of over 2000 to "essentially zero." The proportion of investigational device exemptions approved within the required 30 days has doubled. For premarket approval (PMA) supplements, "we have reduced the number overdue by a factor of 10," and the number of new-technology devices approved in 1996 was "double the average number that had been approved during each of the previous 15 years."

    All this was accomplished during a time of dwindling resources and rising workload. Clearly, industry was right in 1994 to reject device user fees with the argument that FDA should trim its own fat and get on a fitness program. Burlington's letter says FDA has done so and will continue to do more.

    Burlington's report does not address the still politically incorrect issue of device user fees. However, close observers of his center wonder how many more cuts it can absorb before review times start to creep upward again.

    Burlington believes more cuts can be had by adopting "a risk-based approach to restructuring our workload." His proposals for achieving this, as follows, are little short of radical:

    • Shift the reviewer force from low-risk device 510(k)s to PMA applications, pre-1976 devices, and device reclassification. "The result should be timelier reviews while maintaining scientific rigor," according to Burlington. In addition, CDRH will reevaluate which device modifications will still require submission of resource-draining supplements.
    • Divert reviewers from lower-risk devices to the more technically complex 510(k) submissions (tier 3) that usually require clinical data. The remaining devices, Burlington hints, could be farmed out to external reviewers or exempted from 510(k) review altogether. Another possibility is self-certification or third-party certification that the devices conform to recognized consensus standards. They could also be documented by their makers as conforming to FDA's new design control requirements.
    • Reform medical device reporting (MDR) management to make greater use of summaries and electronic filing so fewer people are needed to shuffle paper. This strategy would involve studying a "sentinel surveillance system" that would use reports from clinical personnel at participating facilities to validate and fine-tune MDR data, which is said to contain wastefully high levels of duplication.
    • Reduce the number of routine inspections and focus on compliance inspections and for-cause (enforcement) inspections. "We have been exploring the possibility of manufacturers using third-party audits to assess compliance with FDA requirements," Burlington writes in his letter.

    Not only is CDRH considering these procedural economies, but Burlington says it is also trying to reengineer the way it does business "to afford greater efficiency while retaining a high level of consumer protection."

    In a speech to the annual meeting of the Health Industry Manufacturers Association in March in St. Petersburg, FL, Burlington alluded to the magnitude of the impending change. He acknowledged that "our program at the center in the past has been driven by volume, not by public health--we've had thousands of 510(k)s, thousands of biennial inspections, tens of thousands of MDR reports."

    That volume has tended to focus FDA's attention on assessing conformance--a function that has few public health benefits and that FDA now would prefer to shift to the private sector (i.e., to third parties).

    Burlington said this shift would free FDA resources to permit increased effort in PMA reviews, pre-1976 Class III devices, for-cause inspections, and analysis of critical MDR reports. But this would not mean that FDA is going to stop assessing conformance, Burlington emphasized. The agency will simply focus on matters with a high health impact or "high potential for public outrage when we have done something wrong."

    Burlington cited 510(k)s as an example. "In order to fit this new model, we are going to have to have increased activity in developing guidances, test methods, and standards, as well as increased activity in training and auditing third parties. But when we move from setting criteria to conformance assessment, we don't think we need increased activity. We need increased activity in looking at clinical data and labeling where it exists in 510(k)s, not because we are going to get more clinical data, but because when we do get clinical data we need to be there."

    Although the device industry continues to resist user fees, such fees are still considered viable on Capitol Hill. Burlington's new approach offers an alternative to both user fees and congressional tinkering with FDA procedures. It embodies a startling degree of privatization--third-party review of many 510(k)s, adoption of non-FDA device standards, and third-party regulatory inspections.

    Significantly, just after Burlington's letter was distributed, he and other agency leaders appeared before the Senate Labor and Human Resources Committee without making even one suggestion for legislative reform. Committee chair James Jeffords (R-VT) had specifically asked the agency to bring a list of legislative fixes, inviting the agency to make its own mark on the changes that are surely coming from Capitol Hill. The agency, no doubt in close consultation with the White House, elected to bring nothing but its past accomplishments and a friendly demeanor.

    This is a high-stakes game. The administration--knowing that FDA's budget faces draconian cuts--appears to be gambling that legislative FDA reform will fail again. Thus the agency is aggressively seeking to accommodate its constituencies. Burlington's letter is one manifestation of this strategy.

    But the risks are great. If industry continues to demand legislative reform, there is a good chance that its sympathizers on Capitol Hill will be angered by FDA's unwillingness to join the party (there were signs of such anger during the Jeffords hearing in March). If that occurs, the agency may be punished with a tougher budget.

    On the other hand, if industry accepts Burlington's overtures, it must do so knowing that no matter how generous Congress is with FDA's budget this year, its generosity is not guaranteed for the future. In addition, Burlington's letter clearly anticipates shifting some of the cost of FDA functions to industry in the form of third-party reviewers and inspectors.

    Industry stands at a crossroads. Should it pay user fees or their equivalent under other names? Or should it help FDA salvage as much as possible of the current funding basis (100% congressional appropriations) while cooperating with Burlington's administrative proposals? Or some mixture of these?

    Whatever route is chosen, and whatever compromises are made, FDA needs industry's help in the form of ideas and contributions to a constructive Capitol Hill attitude.

    James Dickinson is a veteran reporter on regulatory affairs in the medical device industry.


    Copyright ©1997 Medical Device & Diagnostic Industry

    Telemedicine: Seeking to Prove Itself in Niche Markets

    Medical Device & Diagnostic Industry Magazine
    MDDI Article Index

    An MD&DI June 1997 Column

    R&D HORIZONS

    Although telemedicine was first envisioned as an aid for the general practitioner, it has so far been successful only in bringing together medical specialists. What is the current state of this technology, and what can we expect in the future?

    When it was in its infancy just a few years ago, telemedicine was envisioned as a revolution in health-care information delivery for general practitioners. Yet its promise for general medicine has yet to be realized. Instead, the technology has so far been used only in the medical specialties, where a relatively small number of qualified physicians, often separated by large distances, need to be brought together.

    Maureen Ryan of VTEL (Austin, TX) says telemedicine is now used in specialties where it makes "financial sense." Photo courtesy of VTEL.

    For example, videos of patient skin, inner ears, and internal tissues obtained through endoscopes are sent out among specialists for consultation. Even the telemedicine systems that are currently installed in patients' homes send data only to specialists. Electrocardiograms are sent to cardiologists for examination, and the vital signs of low-birthweight infants are sent to neonatologists.

    "As we move from a fee-for-service environment to managed care, it is all about productivity and preventive medicine and keeping the patient out of the hospital," says Maureen Ryan, health-care program manager for VTEL (Austin, TX). "The specialties where telemedicine is being applied are those that make the most financial sense," she adds.

    THE TECHNOLOGY TODAY

    VTEL, which recently bought competitor CLI (San Jose), is one of several commercial companies that are now jockeying for position in the telemedicine market. These companies include PictureTel (Andover, MA), Mystech Associates (Manassas, VA), and Multimedia Medical Systems (MMS; Charlottesville, VA), which recently bought competitor md/tv. The academic institutions in the forefront of telemedicine research are the University of Washington (Seattle) and Pacific Northwest Laboratory (Richland, WA), a national laboratory operated by the Battelle Memorial Institute for the U.S. Department of Energy.

    There are two types of telemedicine technologies: real-time interactive systems and store-and-forward systems, which first record and then send patient data to a remote site for interpretation. Interactive systems provide instantaneous consultation by physicians across the state or across the globe. But this means physicians must be available at the same time. In the increasingly hectic world of health care, coordinating schedules for such consultation is difficult. This difficulty has led some developers to choose store-and-forward systems, which can be thought of as very sophisticated E-mail systems whose messages are electronic folders or directories that contain patient data. This data can include demographics, video and audio clips, and medical images such as magnetic resonance, computed tomography, and x-rays. The folders' contents can be examined at the convenience of the recipient.
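
Conceptually, a store-and-forward "folder" is little more than a structured bundle of patient data and attachments that is queued for later review. The sketch below is a generic illustration; the field names are assumptions, not any vendor's record format.

# Minimal sketch of a store-and-forward consultation folder. Field names are
# illustrative only.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Attachment:
    kind: str        # e.g., "MR image", "CT image", "x-ray", "video clip", "audio clip"
    filename: str

@dataclass
class ConsultFolder:
    patient_id: str
    demographics: dict
    question: str                      # what the referring physician is asking
    created: datetime = field(default_factory=datetime.now)
    attachments: list = field(default_factory=list)

folder = ConsultFolder(
    patient_id="12345",
    demographics={"age": 62, "sex": "F"},
    question="Please review lesion images for possible biopsy.",
)
folder.attachments.append(Attachment("video clip", "lesion_exam.avi"))

# The folder is queued and forwarded; the specialist opens it whenever convenient.
print(f"{folder.patient_id}: {len(folder.attachments)} attachment(s), created {folder.created:%Y-%m-%d}")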

    MMS offers both types of telemedicine systems. CareLink is the company's interactive system, which delivers separate but simultaneous medical images and data, as well as videoconferencing to desktop computers. CaseReview captures, annotates, stores, and then forwards medical information in an electronic patient record.

    These and similar products are finding their places in today's health-care system. MMS products, for example, are being used to support such specialty applications as renal dialysis and ophthalmology.

    Companies typically are customizing their general-purpose telemedicine systems. For example, CLI offers videoconferencing systems that have been tailored to be used in a wide range of specialties. The systems have been developed to address cardiology; dermatology; ear, nose, and throat medicine; emergency or trauma medicine; internal medicine; nephrology; neurology; gynecology; oncology; orthopedics; pathology; pediatrics; psychiatry or psychology; radiology; surgery; and urology. The company has installed these systems in medical centers across the United States, including at the Bowman Gray School of Medicine, Kansas University Medical Center System, and the Medical College of Georgia.

    Aside from using medical instruments such as endoscopes and electronic stethoscopes, these customized systems employ the same components that mainstream videoconferencing products use. The TeleProvider telemedicine workstation by Mystech, for example, uses a Pentium PC; a Windows NT or Windows 95 operating system; a high-speed communication line such as ISDN, T1, T3, or ATM; and Netscape, hypertext, and Java applets. These components, along with a video camera and an endoscope or electronic stethoscope, are tailored to meet the client's requirements. "We have to search for and test components to make sure they will meet the clinical requirements," says Mike Daconta, chief developer for TeleProvider. Which component is appropriate in a given situation "is very subjective, depending on the specialty and the ailments being addressed."

    TeleProvider, like many other such products, does not have characteristic packaging. Its components are simply spread out over a desktop or table.

    Some equipment developers have packaged their products for medical use. FRED (Friendly Rollabout Engineered for Doctors) from VTEL, for example, is packed into a cart that rides on casters and is encased in "scrubbable" stainless steel. Data and video ports interface with endoscopes, Dolby stethoscopes, otoscopes, ophthalmoscopes, and ECG machines. "The idea was to integrate these components into a whole product and run it from a simple interface," says Ryan. "FRED allows physicians to use their clinical tools in an environment that is merged with videoconferencing," she adds.

    ENGINEERING CONSIDERATIONS

    Because virtually all vendors of telemedicine equipment depend heavily on off-the-shelf hardware, the main challenge of building telemedicine systems is integrating the components and making them work in a cost-effective way.

    Another major engineering challenge is making the information accessible. Early telemedicine systems brought to market in the early 1990s were point-to-point systems, meaning that physicians had to view the information individually, without the advantage of group review. Today, engineers are increasingly adapting technology to practitioners' need for group participation. Data sent from one point are being channeled into networks. In July, MMS plans to release a Windows NT server that will disperse data throughout a medical institution to allow access from many different sites.

    Other companies are focusing on the Internet and institution-specific intranets to disperse information. Mystech engineer Daconta notes that TeleProvider packages data using Java, which is rapidly becoming a standard for Web-based products. As a result, TeleProvider data can be accessed using common Web browsers such as Netscape or Microsoft Explorer. "The point-to-point model is not practical," Daconta says. "Telemedicine won't work until it is ubiquitous and easy for the doctor. Using Java, we can get data onto many of the desktops within an organization."

    Resolution is another major concern in telemedicine, particularly when diagnosis relies heavily on images. The use of video for diagnostic purposes, as in the case of dermatologic and endoscopic images, presents special concerns.

    Building on software developed to improve the quality of images from space probes, the NASA Ames Research Center (Moffett Field, CA) has a software package that promises to improve both the spatial and gray scale resolution of video images by using images of the same area from slightly different perspectives. "If the camera moves even just a fraction of a pixel, it's an independent sample of the same area," says Peter Cheeseman, an Ames research scientist. "And it's the combination of all those independent samples that allows you to get the super resolution."
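
The underlying idea, often called shift-and-add super resolution, can be sketched in a few lines: frames offset by known sub-pixel shifts are registered onto a finer grid and averaged. The sketch below is a simplified illustration, not NASA's actual algorithm.

# Simplified shift-and-add super resolution: frames with known sub-pixel
# shifts are placed onto a finer grid and averaged.

import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """frames: list of 2-D arrays; shifts: (dy, dx) offsets in low-res pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)   # accumulate observed samples
        np.add.at(cnt, (hy, hx), 1)
    cnt[cnt == 0] = 1                     # leave unobserved high-res pixels at zero
    return acc / cnt

rng = np.random.default_rng(0)
truth = rng.random((64, 64))              # stand-in for the scene
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
frames = [truth for _ in shifts]          # in practice each frame is a shifted, blurred sample
print(shift_and_add(frames, shifts).shape)  # (128, 128)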

    Audio technology also presents a challenge for telemedicine. Current systems are typically monaural rather than stereo. If telemedicine is to achieve its potential for providing expert advice in complex environments such as busy operating suites and emergency rooms, a more realistic conveyance of sound will be necessary. In the modern operating room, for example, there is a cacophony of auditory cues from many different diagnostic and monitoring systems. Surgeons and nurses depend on these cues to monitor a patient's status. Remote physicians will need to identify the origin of each sound.

    The solution to audio limitation may already be in hand, as an outgrowth of work that is currently being conducted at the Spatial Auditory Displays Laboratory at NASA Ames.

    Durand R. Begault, PhD, a research scientist at the laboratory, has developed a system for localizing noises by providing three-dimensional sound over headphones. "Various things have different alarms, but a lot of them sound very similar; the sound of a heart machine might seem the same as the sound of another machine," Begault says. This technology may also be useful for patient consultations, spatially separating the voices so that listeners can tell who is speaking. Standards that define which types of alarms are used on specific types of machines would also help remote physicians identify sounds.

    TELEMEDICINE OF THE FUTURE

    Although hearing and sight are the only senses used in telemedicine today, they may not be the only ones used in the future. Odors can sometimes play an important role in diagnosing conditions such as infection, a perforated bowel, or a nicked bile duct.

    Engineers at the Pacific Northwest Laboratory have developed TeleSmell, a prototypical system that would use an electronic nose to capture the essence of odors, encode and transmit the data to the telemedicine site, and then use a decoder to reconstruct the odor for the consulting expert to smell.

    Alternatively, a neural network might be introduced to analyze the odor for the operator, negating the need to reconstruct it remotely. Such systems are still in their infancy, however.

    Federal laboratories, such as Pacific Northwest, tend to address the most sophisticated technologies. That is especially true when national security is involved. The Defense Department and NASA are backing a number of initiatives that could provide a quantum leap in the definition of telemedicine.

    ATL (Bothell, WA), in collaboration with the University of Washington, is leading an effort to develop a telemedicine system that would relay video and diagnostic ultrasound images from a trauma scene to a remote medical command post where emergency-care specialists would guide the examination. "Patients could be triaged quickly, and a decision could be made about whether they should be transported for interventional therapy such as controlling hemorrhage or dealing with injured organs," says William Shuman, MD, a clinical professor who specializes in trauma care at the University of Washington.

    The ultrasound scanner for this system would fit in the palm of a medic's hand. Its development is being sponsored by the Defense Department and a consortium that includes ATL, the University of Washington, Harris Semiconductor (Melbourne, FL), and VLSI Technology (San Jose). A commercial product may be in hand by 2000.

    Similarly advanced and potentially useful research is under way at NASA. The space agency was arguably the first organization to embrace telemedicine. "Ever since the beginning of manned spaceflight, we have monitored the health of our crew members," says Roger Billica, MD, chief of medical operations for NASA at the Johnson Space Center in Houston.

    Besides holding daily private medical videoconferences between the crew surgeon and shuttle crew members, NASA also monitors crew members on video during spacewalks. Parameters monitored include ECG, heart rate, oxygen consumption, heat production, and suit carbon dioxide levels.

    Extended stays in orbit, such as those anticipated onboard the international space station, require a more sophisticated system for rapid diagnosis of illness. NASA has developed a suitcase-sized package, called the telemedicine instrumentation pack. It contains an endoscope, ophthalmoscope, dermatology macroimaging lens, ECG, automatic blood pressure sensor, electronic stethoscope, pulse oximeter, and a computer with a two-way voice and video control. The telemedicine pack is scheduled to be placed on the shuttle soon.

    Efforts aimed at the battlefield and exotic environments such as space might be adapted to civilian use. University of Washington researchers and their colleagues at ATL already envision adapting their battlefield triage system for use by paramedics aiding accident victims. And NASA is looking for commercial companies that might license its telemedicine kit for down-to-earth applications. "You could carry it anywhere, like a suitcase," Billica says. "Just plug it in, do an exam, hook it into a network, and have the physician on the other end participate in a basic physical examination."

    Greg Freiherr is a contributing editor to MD&DI.


    Copyright ©1997 Medical Device & Diagnostic Industry

    Top Ten Tactics for Surviving Medicare Reform


    An MD&DI June 1997 Column

    BOTTOM LINE

    How successful medical device companies manage costs and deliver superior products and services.

    Medicare poured $178 billion into the United States health-care industry in 1995. Medicare funds account for almost all the revenue of home-health-care agencies, hospices, and renal dialysis facilities; for 25 to 75% of the revenue of clinics, laboratories, and ambulance services; and for the single largest share of revenue for hospitals and device manufacturers.

    Of that $178 billion, approximately 17.9%, or $31.9 billion, flowed through hospitals, clinics, laboratories, ambulance services, home health care, and the like to manufacturers of disposable products, therapeutic and diagnostic devices, and nonmedical supplies. Medicare dollars, in other words, accounted for approximately 72% of the industry's 1995 U.S. revenues of $44 billion.
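
A quick check of the arithmetic behind those figures, using the values stated above:

# Arithmetic behind the figures above.
medicare_total = 178e9          # 1995 Medicare spending
share_to_manufacturers = 0.179  # portion flowing through providers to manufacturers
device_revenue = 44e9           # industry's 1995 U.S. revenues

to_manufacturers = medicare_total * share_to_manufacturers
print(f"${to_manufacturers / 1e9:.1f} billion")                  # ~ $31.9 billion
print(f"{to_manufacturers / device_revenue:.0%} of revenues")    # ~ 72%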

    Medicare is the single largest source of revenue for the medical device industry. The implications of this are clear for equipment and device suppliers to long-term and home-health-care providers, renal dialysis facilities, clinics, ambulance services, and hospitals. When Medicare catches a cold, medical device manufacturers get pneumonia. Device manufacturers must not only absorb their share of the reduced funding but also bear the burdens of complying with federal regulatory requirements and meeting health-care providers' increased need for quantifiable data.

    WHAT TO EXPECT FROM WASHINGTON

    In two words: no relief. The debate between Congress and the White House is not, and never was, about which group would pay for Medicare's projected shortfalls. Last year both the administration and Congress suggested reducing the rates Medicare pays to doctors and hospitals. With hospitals, doctors, and manufacturers facing potentially lower Medicare revenues, profit margin erosion will surely follow for many device companies, especially those with thin operating margins or high fixed costs.

    Last year, President Clinton proposed reducing the projected increase in Medicare spending by $124 billion. Of that, $89 billion would extend the solvency of the Part A hospital insurance trust fund from 2002 to 2006. Clinton's most recent proposal would wring out $138 billion in savings, up from the $124 billion proposed last year but still down from the $158 billion proposed by the Republicans.

    Both the administration and Congress propose reducing the rates Medicare pays to doctors and hospitals. Underneath all the political bickering, both sides appear ready to craft a bipartisan Medicare reform package. Medicare will be reformed and possibly saved for the 38 million citizens on the plan. But this will come at the expense of the health-care industry and, more specifically, of the 300,000 workers in the medical device industry. The Medicare-led squeeze on costs is about to worsen.

    SURVIVING MEDICARE REFORM

    Over the last few years, some common strategies have emerged among the more successful manufacturers in the device industry in response to the institutionalization of cost containment. The following is my list of the top ten of those strategies.

    Use Your Inventive Technologies to Create Value. Get more use for the same price. If your customer wants to reuse your disposable product, innovate to help that customer better manage the costs of the product. If that's not enough, use your development programs to lower your cost of sales and to increase product differentiation. "The best way to predict the future is to invent it."--Acuson Corp. (Mountain View, CA), 1995 Annual Report.

    Differentiate Your Products. The trend toward smaller buy lists and reduced duplication is putting everyone's product differentiation claims to the test. Use independent testing services if you must, but quantify your product's unique features, and use your R&D dollars to increase its differentiation.

    Reduce Manufacturing Costs by Streamlining and Outsourcing Many Production Processes. For some companies, such as ADAC Labs, a Milpitas, CA-based manufacturer of medical imaging systems for hospitals and clinics, total quality management (TQM) is an answer. A well-established tool for improving operational efficiency, TQM is not the only or even always the best answer (each company must determine what is best for its unique circumstances), but it works for most firms. I have seen TQM transform average performers into exceptional performers. Any well-proven system embraced by a company can create more efficient production processes. The payoff at ADAC, in addition to its 37% gross profit margin, was the Malcolm Baldrige National Quality Award in 1996. Imagine how effective your company could be as the cost and quality leader.

    Cut Product Duplication. Two years ago, Eugene Corasanti, CEO of Conmed Corp., a manufacturer of disposable medical products based in Utica, NY, launched a long-range project to consolidate duplicate product offerings. His goals were to enhance the company's manufacturing efficiency and to merge the best qualities of his duplicate products to make one product. With Medicare revenues being squeezed, such efforts are appreciated by hospital administrators and purchasing agents. Corasanti's achievements have helped place his firm at the forefront of companies supplying products to hospitals. On December 9, 1996, Conmed announced that Premier, the nation's largest health-care alliance enterprise, had selected Conmed as its main supplier for electrosurgical products used for cutting and coagulation.


    WHY MEDICARE IS BECOMING INSOLVENT

    Medicare has two parts. Part A, which covers hospital costs, is funded by FICA payroll deductions; Part B is funded by the general fund and by payments from participants in the program. In 1994, the average Part A cost per enrollee was about $2900, while the average payroll tax revenue per beneficiary was about $2600. That $300-per-enrollee shortfall is being covered by a trust fund that, in 1995, still held $134 billion. At the rate Part A is losing money, however, that trust fund is expected to be empty in about four years.


    Develop a Culture That Rewards Efficiency. As you scrutinize the basic processes in your operation--typically starting with manufacturing--expand that effort elsewhere. Ongoing cost-improvement programs must address product change, operating procedures and automation, changes in raw materials and suppliers, and administrative activities. Terrence Wall, CEO of Vital Signs, Inc., a Totowa, NJ-based manufacturer of single-patient-use medical products for anesthesia, respiratory, and critical-care areas, said of his company's efforts: "We made it systematic, rethinking and redesigning every fundamental business process, that is, 're-engineering' in its true sense, not just 'downsizing.'" Wall instituted an incentive program that awarded annual bonuses to employees based on the percentage of improvement in operating earnings. Not coincidentally, earnings from operations in fiscal 1995 improved $8 million (51%) over those in fiscal 1994. Furthermore, the culture of cost improvement created an environment in which all employees collaborated to reduce manufacturing costs by nearly $1.5 million in 1995.

    Walk a Mile in Your Accountant's Shoes. Reducing costs while maintaining quality care is the challenge hospitals face today. This quest has resulted in greater involvement by finance and administration in product purchasing decisions. The parallel challenge to device manufacturers is to speak the language of finance and administration. To walk in their shoes is to develop information systems that quantify the costs, outcomes, resources, and alternatives of your products. This is the raw material from which you and your customer can create compelling business solutions. These solutions often become product standardization programs, invoice- and paperwork-cutting programs, and improved customer/vendor coordination programs. Such solutions translate into profits for your customer and orders for you.

    Develop Alliances with Hospitals, Group Purchasing Organizations, and Distributors. Most manufacturers do not have a large enough market share in their respective product lines to be able to sell directly to hospitals and clinics. Just to get to the negotiating table, small to medium-sized manufacturers must adopt a buddy system with a major distributor or large purchasing organization. Such alliances enhance your company's value to end-users. As purchasing agents move toward one-stop shopping, bigger distributors will become increasingly important. Companies that invest the time and personnel needed to combine their product and manufacturing strengths with the strengths of well-established distributors will successfully capitalize on these new realities.

    Increase Obsolescence. Even if you've built and maintained an identity as a small, focused manufacturer with a wealth of knowledge and experience in a particular area, you will undermine that hard-won reputation if you fail to keep innovating and improving your products. But progress must be in sync with the dominant trends in improved patient outcomes, better resource use, more cost-effective solutions, less-invasive procedures, and reduced overall liabilities. By constantly reinventing your products and your company to satisfy these basic health-care needs, you will ride the wave, not be swamped by it.

    Maintain ISO 9001 and Other Certifications. These accreditations are not routine, and hospitals and buying groups know it. To achieve such recognition, manufacturers must meet very exacting quality and process standards. In an environment of shrinking vendor lists, third-party recognition improves your chances of surviving any purchaser's cut and also significantly improves your credibility and respect in overseas markets.

    Prepare for the End of Managed Care. Managed care is a transitional strategy. As the news out of Washington demonstrates, the real point is institutionalized cost containment. Managed care is just one strategy to contain costs. In the foreseeable future, new systems and strategies will arise that, in the aggregate, may be referred to as disease management. For example, Mutual of Omaha Insurance Company began funding a study of patients who needed heart-bypass or angioplasty operations. The study used a classic disease management approach. Patients were put on the Ornish program, which incorporates a number of lifestyle factors, including a 10% fat diet, stress management, and exercise. Nearly two-thirds (65%) of the participants reported that their chest pain was gone after one year, and the progression of artery blockages was stopped or reversed. Mutual of Omaha found that annual medical costs were $3826 for the Ornish participants versus $13,927 for nonparticipants. As one cardiologist who used to do balloon angioplasty said, "I've hung up my balloons and I'm doing the Ornish program." Similarly, the DNA-mapping program known as the genome project may give physicians the ability to make earlier diagnoses and is a powerful tool to begin disease management programs.

    CONCLUSION

    Medicare's lesson of cost containment continues to be taught by a relentlessly political Congress and administration. It has been repeated for so long and in so many ways that it is now an institutionalized reality. In response, device manufacturers must change the way they operate their businesses. Successful device manufacturers will employ back-to-basics business skills and common sense and will understand that the only constant in this world is change.

    Robin R. Young, CFA, is senior healthcare analyst for Stephens, Inc. (Little Rock, AR), an investment banking firm.


    Copyright ©1997 Medical Device & Diagnostic Industry

    Using Patents to Secure Rights to Concepts and Inventions

    Medical Device & Diagnostic Industry Magazine
    MDDI Article Index

    An MD&DI June 1997 Column

    HELP DESK

    Securing a patent on a new concept or invention is key to maintaining a competitive edge. Delays can mean loss of the patent to a competitor or to the public domain, warns Stephen Glazier, partner in the law firm of Pillsbury, Madison & Sutro (Washington, DC) and holder of six U.S. patents.

    How do we obtain a patent once we conceive of an invention?

    Describe the invention to your patent attorney so he or she can draft and file a patent application in the United States and in any other country where you desire protection, and pursue that application to the issuance of a patent.

    An early patent search can help clarify whether the invention is patentable before you spend the time and money necessary to draft and file an application. Furthermore, conducting a search prior to writing the application improves it and increases its likelihood of success. The results of the search should be filed with the patent application to give strength and reliability to the final patent.

    Ideally, patent counsel will be familiar with obtaining, enforcing, and licensing patents. Integrate patent counsel into the entire business plan, including the identification of product concepts, the development of products, the identification of patentable inventions, the pursuit of patents, and the licensing and other commercialization of obtained patents. This integration will ensure the development of a patent portfolio that will aid the business plan of the patenting company and increase its profit margins.

    How do we conduct patent searches?

    Your patent attorney can arrange searches of U.S. and foreign patents when he or she has your invention disclosure. Your company can also access a variety of on-line services offering patent databases. However, these are only useful for obtaining technical and competitive information. Only a patent attorney can carry out the legal analysis of prior patents to determine the patentability of an invention.

    After inventing a new medical device, how soon do we have to apply for a patent?

    As soon as possible. Sooner is better. Later is worse. The invention does not have to be physically built before the application is filed; the basic concept alone can be the subject of a patent application.

    New ideas have a way of leaking to competitors, who may file their own patent applications for the leaked ideas. Additionally, a leak may allow the idea to slip into the public domain, that is, to become unpatentable by any party. Leaking is avoided, and legal and tactical advantages are gained, by filing for a patent at the earliest possible opportunity.

    Another potential problem is that interesting product ideas may be independently invented by several parties more or less contemporaneously. Early filing can be a determining factor in obtaining a patent for your idea to the exclusion of other independent inventors.

    Finally, normal commercialization activity for an invention, if it precedes filing of a patent application, can also cause the invention to enter the public domain.

    Patent applications filed early in the conceptual process for an invention can, and in many cases should, be amended and expanded to incorporate new developments and concepts as the original invention concept is refined to become a specific product.


    "Help Desk" solicits questions about the design, manufacture, regulation, and sale of medical products and refers them to appropriate experts in the field. A list of topics previously covered can be found in our Help Desk Archives. Send questions to Help Desk, MD&DI, 11444 W. Olympic Blvd., Ste. 900, Los Angeles, CA 90064, fax 310/445-4299, e-mail helpdesk@devicelink.com. You can also use our on-line query form.

    Although every effort is made to ensure the accuracy of this column, neither the experts nor the editors can guarantee the accuracy of the solutions offered. They also cannot ensure that the proposed answers will work in every situation.

    Readers are also encouraged to send comments on the published questions and answers.


    Copyright ©1997 Medical Device & Diagnostic Industry

    Software Risk Management: Not Just for the Big Manufacturers?

    Medical Device & Diagnostic Industry Magazine
    MDDI Article Index

    An MD&DI June 1997 Column

    Cover Story

    Risk management is not as complex or expensive as many small device manufacturers may think. In fact, it can actually save money.

    The use of software to control both the manufacture and operation of medical devices is rapidly increasing. As the need for such specialized medical software grows, so too does the importance of following rigorous and thorough software development methods. Whether the developer is a large firm with deep pockets or a small one with very little to spend on research and development, medical software must be safe for its intended use.

    Manufacturers can ensure that software will be safe by systematically identifying and eliminating software risks before they become either threats to successful operation or major sources of software rework.1 Regulatory officials recognize the need for such deliberate risk management. For instance, FDA's September 1996 510(k) draft guidance contains a section that specifically addresses software risk management and requires that it be integrated into the software development life cycle. The International Electrotechnical Commission's 601-1-4 standard, released in 1996, also addresses software risk management for programmable medical devices. The standard requires the preparation of a documented risk management plan and the application of risk analysis and risk control throughout development.

    Risk management can not only help ensure that the final product is safe, but can also be used to determine whether a development project will be finished on time and on budget. Cost overruns and scheduling delays are two of the largest risks for any software project, and can be especially devastating for a small manufacturer.

    Despite the seeming consensus on the merits of risk management, however, many small medical manufacturers do not include it in their software development projects. There are several possible reasons why they don't. Manufacturers may have an exaggerated idea of the complexity of the math involved in risk management. They may believe that software risk management should only be performed on large software systems, or feel that formal software risk management will add too much to an already full development schedule. They may lack discipline in applying the methods, or doubt whether the techniques will have value, or simply lack knowledge of or training in software risk management.

    And it is not just misunderstanding or misinformation that keeps many small manufacturers from instituting formal risk management. Most available risk management methods are too complex or expensive to be practical for any but the largest corporations. For small manufacturers, a risk management method would have to be simple and cost-effective to be practical. To be not only practical but ideal for small manufacturers, it would also have to meet several criteria: it must be customizable; mostly qualitative, with quantitative analysis available when needed; applicable to the full life cycle, including the maintenance phase; easy to update as the development project progresses; and compatible with any development model, such as spiral, waterfall, joint application design, rapid application development, incremental, or prototyping.

    Figure 1. The relationship of software risk elements to risk factors.

    One risk management technique that meets these criteria is the Software Engineering Risk (SERIM) Model, which can be applied by anyone who is familiar with spreadsheet applications and who reads this article.2 SERIM uses risk questions to derive numerical probabilities for a set of risk factors. These numerical values are then analyzed using spreadsheets that have been programmed with particular statistical equations. To explain this quantitative analysis process clearly, some basic concepts, such as risk management activities, risk elements and factors, and measuring risk, are addressed in the following sections.

    RISK MANAGEMENT ACTIVITIES

    Risk management can be organized in six general steps.

    1. Identification. Finding and recording project-specific risks. Risks can be identified through a number of techniques, such as checklists, comparisons with experience, common-sense assessments, and analogies to well-known cases.

    2. Risk Strategy and Planning. Creating plans and alternatives for addressing each identified software risk and coordinating these plans with other documents, such as the software development plan.

    3. Risk Assessment. Categorizing each risk and determining its relative magnitude. This step includes deciding whether the risk will be managed or ignored during the development process.

    4. Risk Mitigation/Avoidance. Taking the steps that have been planned either to reduce each risk or to avoid it.

    5. Risk Reporting. Formally reporting the status of the risks that were identified by the first four steps.

    6. Risk Prediction. Using historical data and estimates to forecast risk. The information gathered in the previous five steps is used to derive the new risk predictions.

    RISK ELEMENTS AND RISK FACTORS

    Risk Elements. The three major elements of software risk are technical, cost, and scheduling problems. Technical risks, such as maintainability or reusability, are associated with the overall performance of the software system. Cost risks, such as budget or fixed costs, are associated with the cost of the software system. Scheduling risks, such as overlapping projects with shared resources, are associated with completion of the software system during development.

    Risk Factors. Software risk factors can be thought of as more specific subcategories of the three general risk elements. Risk factors are more closely related to software issues.3 A software risk factor can be a subcategory of more than one risk element (see Figure 1). Each software risk factor can have an influence on each risk element. That influence can be categorized as low, medium, or high.

    For the SERIM model, Karolak has identified 10 important risk factors.4

    1. Organization. The maturity of the organization's structure, communications, functions, and leadership.

    2. Estimation. The accuracy of the estimations of resources and schedules needed during software development, and their costs.

    3. Monitoring. The ability to identify problems.

    4. Development Methodology. The methods by which the software is developed.

    5. Tools. The software tools used when software is developed.

    6. Risk Culture. The management decision-making process in which risks are considered.

    7. Usability. The functionality of the software product once it is delivered to the end-user or customer.

    8. Correctness. Whether the product suits the customer's needs.

    9. Reliability. How long the software performs for the customer without bugs.

    10. Personnel. The number of people used in development and their abilities.

    Figure 2. The relationship of risk questions to software risk elements and risk factors.

    Table I shows the influence of the 10 software risk factors on each of the software risk elements. Each factor can have a different influence on each element. For instance, using a poor estimation technique may have little technical impact on a software project but lead to faulty prediction of the size and cost of a project, and, therefore, greatly extend the development schedule. From a risk element perspective, the influence of the estimation technique's risk factor on a project can be characterized as high on cost and schedule, but low on technical risk. Software risk factors can also be classified by their importance, either major or minor, to product or process risk (see Table II).

    Table I. The effect of software risk factors on risk elements.

    Table II. The effect of software risk factors on software categories.

    MEASURING RISK

    Risk Questions. One approach to assessing risk is to use risk questions to measure risk factors. Answers to the questions can be recorded in a yes-or-no format (0 or 1) or as a numerical range of possible responses. Response ranges can use any numerical value from 0 through 1. For example, a response range could be defined as none = 0, little = 0.2, some = 0.5, most = 0.8, and all = 1.0.

    Figure 2 shows the relationship of risk questions to risk elements and risk factors under the SERIM model. Note that each risk question can have a relationship with multiple risk factors, and there can be multiple questions for a given risk factor. The number of questions for each risk factor is determined by its relative importance in predicting success or failure in a software development project. The number of questions can be increased or decreased based on the characteristics of a project.

    The types of questions asked for each risk factor are usually based on industry trends, data, publications, and observations of both successful and unsuccessful software development projects. For each risk factor, a range of questions can be developed. They can be identified by the first initial of the corresponding risk factor and a number. For the organization software risk factor, for example, an O is used to identify each question.5

    O1. Are you using or do you plan to use experienced software managers?

    O2. Has your company been producing software similar to this in the past?

    O3. Is there a documented organizational structure either in place or planned?

    O4. Is the organizational structure stable?

    O5. What is the confidence level of your management team?

    O6. Does good communication exist between different organizations supporting the development of the software project?

    O7. Are software configuration management functions being performed?

    O8. Are software quality functions being performed?

    To devise a numerical range of responses to question O1, for example, a 0 could indicate that only managers who have little or no software engineering experience will be used on the project. A rating of 0.5 could indicate that a management staff of both experienced and inexperienced software engineering managers will be used. A rating of 1.0 could indicate that only managers who are experienced in software engineering will be used on the project. The lowest end of the scale actually represents the highest risk for a risk factor.
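
    To illustrate the scoring mechanics, the short sketch below (written in Python, although a spreadsheet works equally well) assigns hypothetical responses to the eight organization questions and averages them into a factor probability. The response values are invented for illustration only; they are not drawn from the article or from any actual project.

        # Hypothetical responses to the organization questions O1-O8 on the
        # 0-to-1 scale described above (higher values indicate lower risk).
        organization_responses = {
            "O1": 0.5,  # mix of experienced and inexperienced software managers
            "O2": 1.0,  # similar software produced in the past
            "O3": 0.8,  # documented organizational structure largely in place
            "O4": 0.8,  # structure is reasonably stable
            "O5": 0.5,  # moderate confidence in the management team
            "O6": 0.2,  # little communication among supporting organizations
            "O7": 1.0,  # configuration management functions performed
            "O8": 1.0,  # software quality functions performed
        }

        # The organization factor probability is the simple average of the
        # question responses (a weighted average could be used instead).
        p_organization = sum(organization_responses.values()) / len(organization_responses)
        print(f"P(organization) = {p_organization:.2f}")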

    Additional questions to cover the risk factors of estimation (E1-E7), monitoring (M1-M7), development methodology (DM1-DM7), tools (T1-T9), risk culture (RC1-RC11), usability (U1-U6), correctness (C1-C9), reliability (R1-R12), and personnel (P1-P5) are listed in Karolak.6 The SERIM model includes 81 questions for the 10 software risk factors.

    Table III. Correlation of risk management activities and organization risk factor questions.

    Mapping Risk Questions to Risk Management Activities. One way to ensure that questions are comprehensive and cover the scope of the software risk management activities is to correlate their risk factors to those activities. For example, Table III shows the organization risk questions related to the six risk management activities. The table shows that none of the eight organization questions covers the reporting activity. This information may indicate that additional questions to cover risk reporting should be formulated for this factor.

    Table IV. Integrated risk management activities correlated with software development life cycle phases.

    Mapping Risk Management Activities to the Software Development Life Cycle. An integrated risk management approach to software development should be able to predict risks across any phase of development. The six risk management activities should be present throughout the software development life cycle (see Table IV). Typical development phases include prerequirements, requirements, designing, coding, testing, delivery, and maintenance.

    Within each phase of development, the six risk management activities are also evaluated using the risk questions. Tables can be generated for each of the software development phases to show the relationship of those six activities and the relevant risk questions.

    ANALYZING RISK DATA: THE SERIM MODEL

    As the data on risk factors are gathered using the risk questions, they can be entered into any commercial spreadsheet application. The SERIM equations and parameters can be easily programmed into spreadsheets to create templates for data analysis. When the data are entered into the templates, factors can be analyzed during or before each development phase, or at any time needed. The model can be updated, expanded with more questions, or otherwise modified as more information is gathered during development. This flexibility makes SERIM well suited for small as well as large development projects.

    Model Parameters. The SERIM model uses simple probabilities to assess potential risks. The basic parameters of the calculations in this model include:

    1. P(A) is the probability of event A.

    2. The probability of event A ranges from 0 to 1.

    3. The probability of the sample space = 1, and the probability of no outcomes = 0.

    4. If A1, A2, ... An are a sequence of mutually exclusive events, then P(A1 ∪ A2 ∪ ... ∪ An) = P(A1) + P(A2) + ... + P(An). Restated, the probability that any one of a sequence of mutually exclusive events occurs is equal to the sum of the individual probabilities.

    SERIM assumes that the probabilities are assigned by previous experience or by analogy to past events. Because this process is subjective, the probability assigned to a specific event may vary at different times in the software life cycle. It may also vary depending on the individuals who come up with the assignment. The numeric values used in the SERIM model are set by the responses to the risk questions. Simple probability trees are then used to calculate a risk statistic for each risk factor, which is a weighted average of all the responses to all the risk questions associated with that factor. This statistic is expressed mathematically as P(A) = w1P(A1) + w2P(A2) + ... + wnP(An) where wn is the weight for each probability.
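
    As a minimal sketch of that weighted-average statistic, the helper below combines a list of probabilities with a matching list of weights. The numeric values shown are placeholders chosen for illustration, not SERIM's published weights.

        def weighted_probability(probabilities, weights):
            """Combine question or factor probabilities using weights that sum to about 1."""
            if abs(sum(weights) - 1.0) > 0.02:  # tolerate rounding in published weight tables
                raise ValueError("weights should sum to approximately 1")
            return sum(w * p for w, p in zip(weights, probabilities))

        # Placeholder example: three responses with unequal weights.
        print(weighted_probability([0.5, 0.8, 0.2], [0.25, 0.50, 0.25]))  # prints 0.575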

    An Integrated Software Model. To describe SERIM as an integrated model, the six software risk management activities are related to specific questions that span a particular development phase for a project. In turn, each software development phase is connected to a set of software risk questions. These questions are related to specific software risk factors, which relate to the three software risk elements. These risk elements then constitute the total risk for the project.

    In this model, risk is represented as a probability tree. P(A) represents the overall probability of a successful development project. P(A1), P(A2), and P(A3) denote the individual probabilities for success in each of the three software risk elements. P(A4) to P(An) represent the probabilities for the software risk factors. P(B) through P(M) represent the probability of success of the project based on the software life cycle phase and the six software risk management activities.

    Figure 3. Model relationships between software risk categories and risk factors.

    Figure 3 illustrates the relationships between the software risk categories and risk factors for both process and product. P(N) is the probability of project success based on a specific software process. P(O) is the probability of project success based on product quality.

    The Analysis Equations. The basic equations used for the SERIM model are a series of probability trees for each parameter. Each equation is dependent on the number of questions for each software risk factor and the relative weights placed on each question. The main sets of equations that can be easily entered into a spreadsheet are given below. For instance, the probability of event A occurring is derived using Equation 1.

    P(A) = [Σ(n=1 to 3) P(An)] / 3

    This equation assumes that all risk elements are equal in weight. P(A) is the probability of a successful project. If the weights of the elements differ, then P(A) = w1P(A1) + w2P(A2) + w3P(A3), where each wn is a positive number and w1 + w2 + w3 = 1.

    The probability for risk element 1 (technical risk) is given by Equation 2.

    P(A1) = Σ(n=4 to 13) wnP(An)

    where w4 = 0.043, w5 = 0.043, w6 = 0.087, w7 = 0.087, w8 = 0.087, w9 = 0.13, w10 = 0.13, w11 = 0.13, w12 = 0.13, and w13 = 0.13. This equation assumes that a weight of 0.043 is assigned for a low value, 0.087 for a medium value, and 0.13 for a high value. The probabilities of risk elements 2 and 3 are calculated with the same formula, but with different weights.
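
    A brief sketch of Equations 1 and 2 using the weights listed above. The factor probabilities P(A4) through P(A13), and the cost and schedule element probabilities P(A2) and P(A3), are assumed values chosen only to show the arithmetic; the weight tables for elements 2 and 3 are not reproduced in this article.

        # Assumed factor probabilities P(A4)..P(A13), e.g., from question averages.
        factor_probs = [0.70, 0.16, 0.80, 0.75, 0.90, 0.60, 0.85, 0.96, 0.97, 0.50]

        # Equation 2 weights for the technical risk element:
        # 0.043 = low influence, 0.087 = medium, 0.13 = high.
        technical_weights = [0.043, 0.043, 0.087, 0.087, 0.087,
                             0.13, 0.13, 0.13, 0.13, 0.13]
        p_a1 = sum(w * p for w, p in zip(technical_weights, factor_probs))

        # Assumed values for the cost and schedule elements, P(A2) and P(A3).
        p_a2, p_a3 = 0.65, 0.55

        # Equation 1: equal weighting of the three risk elements.
        p_a = (p_a1 + p_a2 + p_a3) / 3
        print(f"P(A1) = {p_a1:.2f}, P(A) = {p_a:.2f}")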

    P(A4), the probability for the organization risk factor, is given by Equation 3.

    P(A4) = [Σ(n=1 to 8) P(On)] / 8

    where On is the numeric value of risk question n for the organization category. The probabilities P(A5) through P(A13) are calculated with the same formula, but the upper limit of the summation and the divisor (the number of questions) vary for each risk factor category.

    P(B), the probability for the prerequirements phase, is given by Equation 4.

    P(B) = (O1 + O2 + O3 + O4 + O5 + E1 + E2 + E3 + E4 + E6 + E7 + M1 + M2 + M3 + M4 + M6 + M7 + DM1 + DM2 + DM6 + T1 + T6 + T9 + RC1 + RC2 + RC3 + RC4 + RC5 + RC6 + RC7 + RC8 + RC9 + RC10 + RC11 + C5 + P1 + P2 + P3 + P4 + P5) / 40

    where the terms of the sum are the values for the corresponding risk questions. The probability values listed in this formula all address the relationship between the prerequirements phase and the six risk management activities. The probabilities P(C) through P(M) are calculated with the same formula, but the particular questions and the divisor vary for each phase of development.

    P(N), the probability for the process, is given by Equation 5.

    P(N) = Σ(n=4 to 13) wnP(An)

    where w4 = 0.125, w5 = 0.125, w6 = 0.125, w7 = 0.125, w8 = 0.125, w9 = 0.125, w10 = 0.04, w11 = 0.04, w12 = 0.04, and w13 = 0.125. The formula assumes that a weight of 0.04 is assigned for a minor influence and 0.125 for a major influence. The value derived from this equation represents the probability of project success using the current software process.

    P(O), the probability for the product, is given by Equation 6.

    P(O) = Σ(n=4 to 13) wnP(An)

    where w4 = 0.045, w5 = 0.045, w6 = 0.045, w7 = 0.045, w8 = 0.14, w9 = 0.14, w10 = 0.14, w11 = 0.14, w12 = 0.14, and w13 = 0.14. The equation assumes that a weight of 0.14 is assigned for a major influence, 0.045 for a minor one. The value derived represents the probability of product success.
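
    Equations 5 and 6 follow the same pattern with different weights. The sketch below applies the major/minor influence weights quoted above to the same assumed factor probabilities used in the earlier sketch; comparing P(N) with P(O) indicates whether the larger risk lies in the process or in the product.

        # Assumed factor probabilities P(A4)..P(A13), as in the earlier sketch.
        factor_probs = [0.70, 0.16, 0.80, 0.75, 0.90, 0.60, 0.85, 0.96, 0.97, 0.50]

        # Equation 5 weights (process): 0.125 = major influence, 0.04 = minor.
        process_weights = [0.125, 0.125, 0.125, 0.125, 0.125,
                           0.125, 0.04, 0.04, 0.04, 0.125]

        # Equation 6 weights (product): 0.14 = major influence, 0.045 = minor.
        product_weights = [0.045, 0.045, 0.045, 0.045, 0.14,
                           0.14, 0.14, 0.14, 0.14, 0.14]

        p_n = sum(w * p for w, p in zip(process_weights, factor_probs))  # process success
        p_o = sum(w * p for w, p in zip(product_weights, factor_probs))  # product success
        print(f"P(N) = {p_n:.2f}, P(O) = {p_o:.2f}")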

    SAMPLE APPLICATION OF THE SERIM METHOD

    A real-world example will best illustrate how SERIM can be used. The method was used to determine the risk of developing a custom software package to control the assembly and testing of an invasive cardiovascular device. The software, which was written in C, contained about 110,000 lines of code, and the application resided on five modular PC-controlled stations linked with a moving conveyor line. Each station controlled various assembly and testing steps. Data collection and acceptance testing of the device were also controlled by the application.

    The original schedule projected that development of the software would take about 13 months, including testing and complete FDA validation. Commercially available development tools were used for compiling and debugging.

    Table V. SERIM probability assessment for the real-world software development example.

    The probability assessments based on the responses to the SERIM risk questions are shown in Table V. The results represent the risks that were determined at the beginning of the project based on the original contract and the estimated schedule of delivery to the customer.

    Interpretation of the Probability Results. The probability of successful delivery P(A) for this example is 0.60. This low value of success is attributed to a number of software risk factors. The low value of 0.16 for the risk factor of estimation, P(A5), was returned because no software cost or schedule estimation data from similar projects were used for this project.

    The highest responses, 0.96 and 0.97, were for the software risk factors of correctness, P(A11), and reliability, P(A12). The correctness result related to requirements, design, code, test traceability, and the low expectation of new or changing requirements. There were very few changes or additions made to the software requirements during the development effort. The reliability result was associated with error and fault handling in the code, the use of software reliability modeling, proper types of testing, and defect tracking. All these activities were performed on this project.

    The model also identified a major risk that could occur during the prerequirements phase, P(B). This risk was related to the use of very few data from similar and previous projects in the development of the schedule and budget. The lowest probability from the six risk management activities (0.28) was in the area of risk strategy and planning, P(I). This suggests that few plans and alternatives were created for schedule and cost risks. The project had few schedule or cost alternatives other than reducing functional requirements and adding people to the development and testing effort.

    As it turned out, the risks of budget and schedule overruns were realized. The project actually took about 20 months to complete and was over budget. The model also predicted that the testing would be laborious because no automated tools or regression tools were used. The actual project data showed that testing, verification, and validation took about twice the time the initial schedule had predicted.

    CONCLUSION

    The SERIM method is a simple and flexible way to perform software risk management. It is particularly well suited for small manufacturers that may not be able to use more expensive and complex processes. Besides determining the risks of current projects, the method can predict risks for future projects by using benchmark data. It also allows updates throughout the development cycle. Numeric response values can be changed easily, and probabilities automatically calculated with an off-the-shelf spreadsheet application. The number and type of risk questions can also be customized to reflect the type and size of a project as well as any other specific project concerns. SERIM integrates well with other conventional project management tools, and it uses the development phases most software developers are already accustomed to using.

    SERIM can also help users satisfy FDA and IEC requirements for risk management. It can be part of the risk management file system and risk management plan required for IEC 601-1-4. It can be used in the risk analysis, risk control, and risk estimation stages of development. IEC 601-1-4 is currently being harmonized with the FDA 510(k) requirements, so compliance with the international standard will also ensure compliance with the 510(k) process. SERIM can help manufacturers meet the quality system regulation's design control requirements for software validation and risk analysis.7

    Risk management need not be prohibitively expensive or complicated for small medical device manufacturers. In fact, using it early in the process can actually reduce costs by identifying a project's vulnerabilities before they become disasters. Using a simple and cost-effective risk analysis method such as SERIM will help small manufacturers incorporate this important activity into every software development project.

    REFERENCES

    1. Boehm B (ed), Software Risk Management, Los Alamitos, CA, IEEE Computer Society Press, 1989.

    2. Karolak DW, Software Engineering Risk Management, Los Alamitos, CA, IEEE Computer Society Press, 1996.

    3. Karolak DW, Software Engineering Risk Management, Los Alamitos, CA, IEEE Computer Society Press, p 44, 1996.

    4. Karolak DW, Software Engineering Risk Management, Los Alamitos, CA, IEEE Computer Society Press, pp 44-49, 1996.

    5. Karolak DW, Software Engineering Risk Management, Los Alamitos, CA, IEEE Computer Society Press, pp 52-54, 1996.

    6. Karolak DW, Software Engineering Risk Management, Los Alamitos, CA, IEEE Computer Society Press, pp 52-75, 1996.

    7. 21 CFR 808, 812, and 820, "Current Good Manufacturing Practice (CGMP); Final Rule," Federal Register, October 7, 1996.

    John Suzuki is owner of JKS & Associates (Laguna Niguel, CA), a software consulting firm, and Dale W. Karolak, PhD, is a consultant (Brighton, MI).


    Copyright ©1997 Medical Device & Diagnostic Industry

    Plugging into an Untapped Business Development Resource

    Medical Device & Diagnostic Industry Magazine
    MDDI Article Index

    An MD&DI June 1997 Column

    SITE SELECTION

    Public utilities offer a variety of business development services to entice medical device companies to relocate to--or expand in--their areas.

    It might at first seem unlikely that power companies would have a major interest in the expansion or relocation of medical device manufacturers. After all, such manufacturers are hardly major energy consumers compared to such industries as chemical, metal, or paper processing.

    Yet power companies have for years directed advertising and marketing efforts toward bringing new medical manufacturers into their areas or helping those already in their areas expand.

    Figure 1. Graph showing that Central Hudson Gas and Electric's commercial rates are lowest in the state, based on usage of 250 kW, 90,000 kWh per month.

    So, what do power companies gain from these efforts? Many of them say that medical device manufacturing helps build up entire communities of power users. Mike Heaton is the economic programs coordinator for Cinergy/PSI (Plainfield, IN), a company that serves 69 counties in Indiana, offering a wide range of business development services for medical manufacturing. He says that "each new plant that is added has a dramatic economic impact on a community. It means more people who use hair dryers, more McDonald's, more drugstores."

    Medical device companies typically offer high-technology employment, which is very helpful in building a community infrastructure. According to Devin Meisinger, area development coordinator for the Omaha Public Power District, which serves 13 counties in southeastern Nebraska, medical device companies usually bring highly paid professionals into an area. "This good, core nucleus of people also brings in other jobs as well," Meisinger says. Simply by living in the community these new workers create the demand for service jobs of all types.

    Community building has historically been one of the major goals of power companies. For example, the Tennessee Valley Authority (Knoxville), today the largest electricity producer in the United States, was begun in 1933 with the stated purpose of revitalizing the Tennessee Valley, a rural area that was struggling during the Great Depression. The TVA continues to offer a wide range of community and business development services.

    Mike Eades is manager of economic development and marketing at MEAG Power (Atlanta), which offers business development in its service area throughout Georgia. He agrees that community building is the main reason power companies try to attract businesses like medical device manufacturing that don't consume a lot of power. "Power sales are important," says Eades, "but they aren't everything."

    For medical manufacturers that are considering expansion or relocation, then, energy companies are a good source of business development services. A typical power company offers a wide range of these services, from research and information to actual funding for business growth projects.

    And with the increasing competition among energy providers that is being spurred by the current move toward deregulation, the business development assistance provided by power companies should only increase in the future. Energy companies will make an even greater effort to tailor services to meet individual requirements. For example, in its corporate literature, IES Utilities, which provides power to more than 550 communities in Iowa, says, "In a competitive environment, a successful strategy will require segmentation of the market, with many more service and price options to meet customers' needs." The deregulation process, which began with the opening of markets to independent power producers under the Public Utility Regulatory Policies Act of 1978, is now accelerating: in 1998, California will be the first state to open its market to competition. Not only are energy providers good sources of site selection assistance, therefore, but now is also a particularly good time for manufacturers to approach them for business development services.

    Once a manufacturer does begin dealing with energy providers to choose a site for expansion or relocation, it is helpful to know what kinds of business development services are typically offered. These services obviously begin with reliable and cost-effective energy service. They can also include a range of other business assistance, such as research and low-interest business loans.

    RELIABILITY

    Reliability is one of the most important energy considerations for medical manufacturers, just as it is for other industries. According to Gary Evans, the area development manager at the Omaha Public Power District, "Medical device manufacturing is so computer-driven and the tolerances are so close, that reliable power is critical. Options like redundant services are important." Power companies routinely supply historical information on power outages to manufacturers considering a new site, and also help find ways to best ensure the level of service that is necessary for a particular industry.

    Bill Stafford is manager of economic development at Virginia Power, which serves about two-thirds of eastern Virginia. He says that when deciding on a new site, manufacturers should ask for a detailed analysis of the energy history at the site, including estimates of the probabilities of power failure.

    The Warren Rural Electric Cooperative Corp. serves eight counties in Ohio, and like most power companies offers assistance with power reliability concerns. "Today's machinery and computers require a constant supply of electricity which meets exacting requirements," the company says in its corporate literature, echoing Omaha's Evans. The firm promises manufacturers that it will "respond to power quality concerns, monitor your power supply for 'blinks' and 'harmonics,' make recommendations about the most economical answer to power-quality problems, analyze grounding problems, and assist with the installation of power-quality devices."

    When one medical device manufacturer, Streck Laboratories (now located in Omaha), was looking for a new site in 1995, power companies were among the sources it turned to for information on potential sites. Streck manufactures instrument calibrators containing biological components that must be kept under strict temperature control. According to Terry Agee, the company's operations manager, power reliability was a consideration in its relocation decision because power outages could mean not only interruption of business processes but also loss of product.

    ENERGY COST SAVINGS

    Most power companies will offer low rates for large industrial users. They can also offer low rates that are based not just on the amount but also on the type of use. One common rate reduction strategy is based on the time of use. Many energy companies offer savings if power is not used during peak hours. Others offer special discounts that reflect the priorities of their area. For example, Atlanta's MEAG Power offers rate savings based on the number of new jobs created by an industry.

    The Jackson Utility Div., which provides energy to about 33,000 customers in Jackson, MS, and surrounding areas, offers a variable pricing schedule based on the quality of power service that a manufacturer requires. For example, the company offers economy surplus power for customers that use more than 5000 kW per month. In this rate schedule, a company can designate up to 100% of its power as being subject to outages with 5 or 60 minutes' notice. Another rate is available for limited interruptible power (LIP) for customers who use more than 20,000 kW per month. These customers can buy up to half their power at the LIP rate. Power at this rate is subject to outages with a minimum of 24 hours' notice and a maximum of 15 days per outage. A test-and-restart power rate is offered for new or experimental processes for existing industries or for restarting plants or processes that have been idle for more than one year. An enhanced growth credit of $6/kW for three years is also available for new or expanding manufacturers that are classified under standard industrial classification (SIC) codes 20 through 39; use all electric power for heating, ventilation, and air conditioning; have 50% of their floor space heated and cooled and have 50% or more of the rated electric load represented by heating, ventilation, and air-conditioning systems, interior lighting, and cooking; or are adding at least 250 kW of electrical load. The company also provides a 5% large manufacturing credit for companies that have a minimum demand of 5000 kW per month and whose activities are classified under SIC codes 20 through 39.

    As the rate options offered by Jackson Utility Div. demonstrate, determining the comparative savings offered by power companies can sometimes be complex. Most power companies make the process easier, however, by providing rate comparisons for typical businesses in their area and several nearby sites (see Figure 1).

    INFORMATION

    Like other site selection resources, power companies provide valuable information about their areas. This can include labor force statistics, site information, and even tax information. Cinergy/PSI Energy, for example, commissions industry-specific market reports based on SIC codes that give estimates of the cost of doing business in Indiana. In a report on the surgical appliance and supplies industry, PSI compares the costs of locating a typical surgical supplies plant in its territory to the cost of locating the same plant in southern Michigan, western Ohio, or eastern Illinois.

    Another company, Northern States Power, offers the Rite Site Guide, which includes a manual of the steps necessary in selecting a site, as well as helpful worksheets. The company also offers a business resource directory.

    In addition, many power companies offer to help a company research the best way to use energy. This research can include recommendations for reengineering manufacturing processes to cut energy costs. The department of economic development at the New York State Electric and Gas Corp., which serves about one-third of New York state, offers this type of engineering and design support. Washington Water Power, which serves areas from the northern Rockies across the northwestern United States, offers to fund the first $1500 of company studies on energy use.

    FINANCIAL ASSISTANCE AND
    OTHER SERVICES

    Some power companies even offer loans and other forms of financial assistance to manufacturers as incentives for expansion or relocation. For example, the TVA offers low-interest loans to several types of businesses, such as small and minority-owned firms. The New York State Electric and Gas Corp. offers leasing and financing arrangements. Cinergy/PSI offers to put clients in touch with sources of financing. IES Utilities offers to provide funding for a portion of the direct costs associated with the marketing of a speculative building project. Washington Water Power will provide partial funding for installation of high-efficiency improvements, such as heating, ventilation, and air-conditioning systems; variable frequency drives; or fan, compressor, and pump systems.

    Power companies also offer a variety of special services. For example, the TVA offers business incubators, which are multitenant facilities where small businesses can share equipment, space, and expertise. Pennsylvania Power and Light offers to assist manufacturers with site inspections. Many companies offer electronic networking resources, such as databases or Web sites. Power companies often work together with more-traditional site selection resources, such as local chambers of commerce. While doing site research, therefore, it is likely that a medical device manufacturer will eventually be directed to these types of companies. Because of the wealth of services they can offer, however, it is a good idea to contact energy providers at the beginning of the process to be sure their offers will be factored into site selection.

    Leslie Laine is a senior editor for MD&DI.


    Copyright ©1997 Medical Device & Diagnostic Industry

    Assessing Pass/Fail Testing When There Are No Failures to Assess

    In the course of their work, persons involved in manufacturing medical devices are often required to sample and test products or product components. Often this testing involves the collection of what are known as variable data. Variable data are continuous, quantitative data regarding such things as temperature, pressure, and efficiency. By their nature, these types of data provide an enviable precision in measurement, which in turn provides product developers the luxury of small sample sizes without a concomitant loss of statistical power. With such precise data the risk of making a wrong decision concerning products being tested is minimized.

    However, quite often product development personnel are called on to sample and test a product, or product component, in which the only information gathered is whether it meets one of two possible outcomes, such as passing or failing a test. This category of information is known as attribute data. Attribute data are a discontinuous form of data resulting in the assignment of discrete values, such as yes or no, go or no-go, 0 or 1, or pass or fail.

    Attribute data are often collected by engineers, product designers, product/project managers, and others who require initial basic information about a material or product component in order to judge its suitability for use in a medical device. The usefulness of attribute data in pass/fail testing lies in its allowing user-defined failure criteria to be easily incorporated into research tests or product development laboratory tests--tests whose results, as a rule, are easy to observe and record. In general, if one observes that the test product meets defined criteria, the observation is recorded as a "pass"; if it does not, the observation is recorded as a "fail." The numbers of passes and fails are then added up, descriptive statistics presented, conclusions drawn, and manufacturing decisions made.

    A FALSE SENSE OF SECURITY

    However, the results of such attribute tests can be misleading because the risk associated with basing decisions on them is often understated or misunderstood. This is particularly true when samples are tested and no failure events are observed. When failure is observed in a product being tested, the logical course of action is to proceed with caution in drawing conclusions about the acceptability of the test product. In other words, there is a recognition of risk brought about by the observation of one or more failures. Conversely, a zero failure rate observed during testing generally leads to a decision to proceed with the product being investigated.

    However, there is a risk in drawing conclusions about a product when no testing failures are observed. Zero failure brings about a sense of security that is often false. There is a tendency to forget that even if 10 components were tested without failure, we still can't be absolutely sure how the 11th would have performed. The resulting overoptimism could result in the inclusion of a component in a product, or the introduction of a product into the marketplace, that fails to perform as expected.

    A false sense of security is a particular danger when the investigator has not thought about the relationship between risk and sample size. As an example, more risk is involved in stating that a product is acceptable if we sample 10 with no failure from a population than if we sample 500 with no failure from the same population. This is because we derive more information about the population from testing a sample of 500 than from testing a sample of 10. It is more important to know that zero failure occurs in a sample size n than simply that zero failure occurs.

    When no failures are found after a particular round of pass/fail testing, the estimated failure rate is zero--if that single test is looked at in isolation. What is often misunderstood in pass/fail testing is that a zero failure rate for the given sample tested does not ensure that the failure rate for the entire product or component population is zero. When no failures are reported during sample testing, the natural tendency is for researchers to overlook the maximum failure rate that could occur for the population as a whole. The maximum failure rate for the population, not the sample, must be understood, and should be part of the risk assessment involved in decision making.

    SAMPLE SIZE

    What is an appropriate sample size for pass/fail testing? It depends on how critical the product or component being tested is, and how much risk the investigator (scientist, engineer, project manager, decision maker) is willing to accept when deciding whether or not to accept that product or component for manufacture or distribution.

    Figure 1. Upper probability of failure when zero failures are observed, based on 90 and 95% confidence intervals (α/2). (α/2 is the risk associated with rejecting the null hypothesis. Its division by 2 addresses the fact that any established rejection region exists in both tails, or ends, of the distribution, and that the probability of error is divided equally between the two tails.)

    For any given sample size, with zero failures observed, there is an ascribed confidence interval--worked out and tabulated by statisticians--in which the true failure rate will be found.1 Shown in graph form in Figures 1 and 2 are the upper bounds for that failure rate, based upon 90 and 95% confidence intervals (α/2). The advantage to presenting this information in graphic form is that a knowledge of statistical theory is not required to interpret it.

    Figure 2. Upper probability of failure when zero failures are observed.

    Figure 1 shows the upper limits at 90 and 95% confidence intervals for failure rate when zero failures are observed. It is clear from the graph that the fact that no failures are observed does not mean that no failures are to be expected in the total population of parts or components; rather, failure may be expected to be as great as that defined by the curves. The graph can be interpreted in several ways by considering the following scenarios.

    Example 1. You have just completed a test in which 40 samples were evaluated and you observed 0 failures. From Figure 1, the upper bound for the true failure rate is 8.8%. One can then state that, with 95% confidence, the true failure rate will be contained in an interval not to exceed 8.8% failure.

    Example 2. You are required to make a decision about continuing with the development of a product line. Because of time and cost limitations, the decision involves considerable risk. You decide to proceed with development if pass/fail testing indicates a 90% chance that the true failure interval does not exceed a 3% failure rate. How many samples are needed with 0 failures observed? The answer is 100, found by following the 90% confidence limit curve downward until it crosses the 3% probability line. The point of intersection corresponds to 100 on the sample axis.

    Example 3. A sample size of 150 is tested with 0 failures observed. From the graph you find that there is a 95% chance that the true failure rate will fall within an interval bounded by an upper limit of 2.4% failure. The question you must ask yourself is this: Am I willing to proceed knowing that, with 95% confidence, the true failure rate could be as great as 2.4%? In other words, does this risk analysis represent sufficient information about the product under development?
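
    The three examples can also be reproduced numerically. Assuming the curves in Figures 1 and 2 follow the standard exact binomial (Clopper-Pearson) relation, the upper bound on the true failure rate when zero failures are seen in n samples is 1 - (α/2)^(1/n) at a confidence level of 1 - α. The Python sketch below, with illustrative function names, applies that relation to Examples 1 through 3.

        import math

        def upper_failure_bound(n, confidence=0.95):
            """Upper bound on the true failure rate when 0 failures occur in n samples.

            Assumes the exact binomial (Clopper-Pearson) relation with the
            alpha/2 convention used in the figures: bound = 1 - (alpha/2)**(1/n).
            """
            alpha = 1.0 - confidence
            return 1.0 - (alpha / 2.0) ** (1.0 / n)

        def samples_needed(max_failure_rate, confidence=0.90):
            """Smallest n keeping the zero-failure upper bound below the target rate."""
            alpha = 1.0 - confidence
            return math.ceil(math.log(alpha / 2.0) / math.log(1.0 - max_failure_rate))

        print(upper_failure_bound(40, 0.95))   # about 0.088 -> 8.8% (Example 1)
        print(samples_needed(0.03, 0.90))      # about 100 samples (Example 2)
        print(upper_failure_bound(150, 0.95))  # about 0.024 -> 2.4% (Example 3)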

    Table I. Upper boundary of the 90 and 95% confidence intervals in which the true failure probability is expected to lie. The probability of exceeding the upper boundary is equal to α/2.

    The colors of the graphs range from red (danger) to yellow (proceed with caution). If you are pass/fail sampling and observe zero failures from a sample of size n during the test, you should determine where on the confidence limit curves your upper range of failure exists. To do this, locate on the x-axis the number of samples you have tested, then move vertically until you cross either the 90 or 95% confidence curve. The color area you are in will give you a subjective determination of the risk of failure if you proceed with the development of this product (with red equaling higher risk and yellow equaling caution, or lower risk). You may then locate along the y-axis the upper probability of failure occurring when all that you know about this product is that zero failures occurred in your sample size. Notice that the graphs do not contain the color green (go). This is because there is always risk involved.

    For further reference, Table I presents the upper limits of expected failure when zero or one occurrence of failure is observed during testing.

    CONCLUSION

    Statistical analysis shows that in both attributes and variables testing, as the amount of valid information increases, the associated risk in making a decision based on that information decreases. In pass/fail testing this means that the ability to estimate with confidence the upper bounds of the true failure rate when the observed failure rate is zero is critically dependent upon sample size. Thus, decision making is also critically dependent on sample size.

    REFERENCES

    1. Collett D, Modelling Binary Data, New York, Chapman & Hall, 1991.

    2. Fisher RA, and Yates F, Statistical Tables for Biological, Agricultural, and Medical Research, 6th ed, Edinburgh, Oliver and Boyd, 1963.

    Thom R. Nichols is senior research statistician and Sheldon Dummer is senior quality engineer at Hollister, Inc. (Libertyville, IL).


    Copyright ©1997 Medical Device & Diagnostic Industry