Controlling Device-Related Errors in Hospitals

EDITOR'S PAGE

Originally Published MDDI March 2004

March 1, 2004


A new study shows that adverse medical device events occur too often in 
hospitals, yet there is still no reliable method of detecting them.

Ever since the Institute of Medicine released its much-publicized report on medical errors in 1999, the healthcare community has come under pressure to reform some of its practices. However, most of the response so far has been aimed at medication errors. A recently released study suggests it's time to take action on medical device-related errors, too.

The study, published in the January 21, 2004, edition of the Journal of the American Medical Association, concludes that better surveillance for potential device-related errors is needed. It does not, however, produce a consensus on what the best methods might be.

From January through September 2000, a team of researchers from the University of Utah School of Medicine (Salt Lake City), LDS Hospital (Salt Lake City), the Department of Veterans Affairs, and FDA conducted the first study of surveillance methods to identify medical device events in hospitalized patients. They analyzed events in a 520-bed tertiary teaching hospital as detected by computer-based flags, telemetry problem checklists, International Classification of Diseases, Ninth Revision (ICD-9) discharge codes, clinical engineering work logs, patient surveys, and traditional voluntary reporting. They then compared the results from each method to determine frequencies, proportions, positive predictive values, and incidence rates by each technique.

Of the 20,441 patients (excluding obstetric and newborn patients) admitted during that time, 504, or 2.5%, had a problem related to a medical device. Overall, there were 552 indications of a device-related hazard (a problem that does not cause harm to a patient) or an adverse medical device event (in which harm occurred). Traditional incident reports uncovered adverse medical device events at a rate of 1.6 per 1000 admissions, compared with 27.7 per 1000 for computer flags and 64.6 per 1000 for ICD-9 discharge codes. Combining all the surveillance methods, the rate was 83.7 per 1000. Reported adverse medical device events included an electrocautery-device-induced burn, a catheter-related bloodstream infection, the failure of a monitor to detect asystole, the misprogramming of an infusion pump resulting in a narcotic overdose, and the loosening of a prosthetic joint. There was one fatality attributed to a device event, an infection after a catheter implant.
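The figures above follow from simple per-admission arithmetic. A minimal sketch, using only the admission and patient counts reported in the study (the helper name is illustrative, not from the paper):

```python
# Figures reported in the JAMA study discussed above.
admissions = 20441            # total admissions, Jan-Sep 2000
affected_patients = 504       # patients with a device-related problem

# Percentage of admitted patients with a device-related problem.
pct_affected = 100 * affected_patients / admissions  # ~2.5%

def rate_per_1000(detections: int, admissions: int) -> float:
    """Detection rate expressed per 1000 admissions,
    the unit used to compare the surveillance methods."""
    return 1000 * detections / admissions

print(f"{pct_affected:.1f}% of admissions affected")
```

Note that each surveillance method's rate is computed against the same denominator (total admissions), which is what makes the 1.6 versus 27.7 versus 64.6 per 1000 comparison meaningful.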

The problem is that there was very little overlap among the methods; few events were detected by more than one. So although each method was useful for detecting certain types of problems, none was adequate for the researchers to recommend relying on it exclusively.

This means that more research must be done at more facilities to better define just how widespread the device-related error problem is. And it means someone needs to design a comprehensive surveillance program that not only detects errors reliably, but also catches them before they cause patient harm. Hospitals, already cash-strapped, are unlikely to invest in any kind of electronic surveillance technology unless it can be shown to detect events at a far higher rate than nonelectronic methods.

Whether you prefer to believe the IOM's estimate that 44,000 to 98,000 Americans die each year from medical errors, or that of a study published in JAMA in 2001 putting the figure at 5,000 to 15,000, medical errors happen far too often for comfort. Whatever the device industry can do to reduce device-related errors or develop a better system for catching them, it should do.

The Editors

Copyright ©2004 Medical Device & Diagnostic Industry
