MD+DI Online is part of the Informa Markets Division of Informa PLC


CQOE and Funky Math

What type of score will your company earn under FDA's Pre-Cert Program for Software as a Medical Device (SaMD)?

FDA's Digital Health Software Precertification (Pre-Cert) Program has proposed various measures that might be used to determine whether a developer of Software as a Medical Device (SaMD) has demonstrated that the quality of its operations is sufficient to provide reasonable assurance that its SaMD product is safe and effective.

Culture of Quality and Organizational Excellence (CQOE) is something like the quality system defined by the Quality System Regulation (QSR) (21 CFR 820), but it is not identical to the QSR. The two share the philosophy that process is a measure of product, but conventional devices still have to undergo review before marketing, whereas CQOE might in some way reduce or eliminate such review. To the extent that CQOE is used instead of QSR compliance, a broad-spectrum manufacturer would have to maintain two different quality systems: one for pre-cert-eligible software and the more traditional QSR for the rest of its portfolio.

The proposed excellence elements of CQOE are Patient Safety, Product Quality, Clinical Responsibility, Cybersecurity Responsibility, and Proactive Culture. These elements are further defined as:

  • Patient Safety: Demonstration of a commitment to providing a safe patient experience and to emphasizing patient safety as a critical factor in all decision-making processes.
  • Product Quality: Demonstration of a commitment to development, testing, and maintenance standards necessary to deliver SaMD products at the highest level of quality.
  • Clinical Responsibility: Demonstration of a commitment to responsibly conduct clinical evaluation and to ensure that patient-centric issues including labeling and human factors are appropriately addressed.
  • Cybersecurity Responsibility: Demonstration of a commitment to implement appropriate measures to ensure cybersecurity and to proactively address cybersecurity issues through active engagement with stakeholders and peers.
  • Proactive Culture: Demonstration of a commitment to a proactive approach to surveillance, assessment of user needs, and continuous learning.

In each case, the “commitment” language seems curious. Under Patient Safety, is a “commitment to provide” a safe experience the same as actually providing one? Likewise, is a commitment to cybersecurity the same as actually achieving cybersecurity?

It is noteworthy that these are essentially qualitative attributes, yet it has been suggested that they could each be assessed and numerically scored, in part by using Key Performance Indicators (KPIs) for each attribute. An overall CQOE score could then be obtained by adding the five individual scores, possibly with weighting factors. One preliminary suggestion is that the five factors might not be weighted equally, e.g., Product Quality and Patient Safety 30 points each, Cybersecurity 20, Clinical Responsibility 10, and Proactive Culture 10. Such a division is inherently arbitrary.
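As a sketch of the arithmetic only: the 30/30/20/10/10 weighting is the preliminary suggestion quoted above, but the 0–10 per-element scale, the example scores, and the scoring function itself are invented for illustration.

```python
# Hypothetical weighted CQOE total. The 30/30/20/10/10 weighting follows
# the preliminary suggestion discussed in the text; the 0-10 per-element
# scores and the function are invented for illustration.

WEIGHTS = {
    "Patient Safety": 30,
    "Product Quality": 30,
    "Cybersecurity Responsibility": 20,
    "Clinical Responsibility": 10,
    "Proactive Culture": 10,
}

def cqoe_total(scores):
    """Weighted linear combination: each element scored 0-10, total out of 100."""
    return sum(WEIGHTS[name] * s for name, s in scores.items()) / 10

developer = {
    "Patient Safety": 9,
    "Product Quality": 8,
    "Cybersecurity Responsibility": 7,
    "Clinical Responsibility": 6,
    "Proactive Culture": 5,
}

print(cqoe_total(developer))  # (30*9 + 30*8 + 20*7 + 10*6 + 10*5) / 10 = 76.0
```

The linear form is exactly what makes the division arbitrary: any other weighting produces a different, equally defensible total.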

KPIs are well known on the drug side, but I believe they are new for devices. FDA has suggested that KPIs may differ from company to company, perhaps adding further confusion to evaluation and comparison. This type of numbers game has inherent limitations, beginning with the accuracy of assigning numerical values to what are essentially qualitative factors, an inherently uncertain process. Linear combinations of scored factors, even with weights, are hard to justify and also suggest that there can be tradeoffs between elements (e.g., that being relatively strong in Cybersecurity Responsibility can offset being relatively weak in Clinical Responsibility). It also follows that different developers could have the same overall score but different component scores, which makes comparing two overall scores complex at best, and possibly meaningless.
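The tradeoff objection can be made concrete. Under the same assumed 30/30/20/10/10 weighting, and with invented 0–10 element scores, two developers with very different profiles can land on identical totals:

```python
# Invented example: strong cybersecurity offsets weak clinical
# responsibility (dev_a), and vice versa (dev_b), yet the weighted
# totals are indistinguishable. Weights follow the suggested
# 30/30/20/10/10 split; the 0-10 scores are hypothetical.

WEIGHTS = [30, 30, 20, 10, 10]  # Safety, Quality, Cyber, Clinical, Culture

def total(scores):
    return sum(w * s for w, s in zip(WEIGHTS, scores)) / 10

dev_a = [8, 8, 10, 2, 8]  # strong cybersecurity, weak clinical responsibility
dev_b = [8, 8, 6, 10, 8]  # weak cybersecurity, strong clinical responsibility

print(total(dev_a), total(dev_b))  # 78.0 78.0 -- same total, different profiles
```

Anyone comparing the two overall scores would see no difference, even though the developers are weak in entirely different areas.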

There is no corresponding tradeoff concept in the QSR such that being good at, say, Design Controls can offset being bad at Document Control. Furthermore, the process suggests that the developer would end up with an overall quality score, which might be made public. Perhaps ranges of CQOE scores would then be converted back to qualitative categories, such as gold, silver, bronze, and maybe disqualified. A gold-level developer would clearly be happy with the result, while one earning a bronze might be unenthusiastic about publicizing it. This is quite different from today’s FDA medical device processes, which are for the most part pass/fail (e.g., either your 510(k) was cleared or it wasn’t; it wasn’t graded). A possible exception is warning letters, which are publicly available and are informally graded by one’s peers, in part based on the number of elements cited and the letter’s overall length. Inspectional 483s might also be assessed for the number and seriousness of the issues cited, but 483s are not routinely publicly available, although if you are lucky your 483 might end up in FDA’s Reading Room.
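If overall scores were indeed converted back to qualitative tiers, the mapping might be as simple as the sketch below; the cutoff values are entirely hypothetical, since FDA has proposed no such thresholds.

```python
def tier(score):
    """Map an overall CQOE score (0-100) back to a qualitative tier.
    The cutoffs (90/75/60) are invented; FDA has proposed no thresholds."""
    if score >= 90:
        return "gold"
    if score >= 75:
        return "silver"
    if score >= 60:
        return "bronze"
    return "disqualified"

print(tier(92), tier(78), tier(40))  # gold silver disqualified
```

Note that wherever the cutoffs fall, a one-point difference near a boundary changes the public label while meaning almost nothing about actual quality.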

Physicians are currently facing a possibly comparable public accounting in the form of Merit-Based Incentive Payment System (MIPS) scores, which reflect a complex additive measure of a physician’s compliance with a self-selected set of quality measures chosen from hundreds, with the possibility of added bonus points. In addition to affecting payments, these scores will be made public even though they reflect quite different things for different physicians. Other internal processes, such as risk management, also generally involve scoring and multiplying with little justification, and some multi-parameter cybersecurity risk assessments take the same score, weight, and add approach. As an aside, restaurant inspections in New York City, where I live, result in letter grades (as opposed to pass/fail), which must be displayed in the front window. The trade-off question arises for any grade less than A: is a B the result of evidence of rodents or of food held at the wrong temperature, and are these equivalent? And the question for customers is, do I want to eat in a B-graded or a C-graded restaurant?

Pre-cert and CQOE are currently under pilot program status, so we will have to wait and see what actually emerges. Furthermore, only a subset of medically related software will ultimately be affected, since a product has to fit in the niche of being a medical device (SaMD), having low risk, and not in effect being already exempt from close regulation by being Class I or otherwise in FDA’s “yes it’s a medical device but we are going to ignore it” category. A related issue is whether pre-cert will help or hurt start-ups, since bureaucracy is always better handled by big companies than by small ones. Even knowing the rules is not the same as knowing how the game is actually played. It would also be prudent to watch whether any of the ideas of pre-cert migrate over to other medical device regulation. One day, you or your product might get a score.
