Medical Device Hacking—Why Are Patients Innovating and Companies Failing to Deliver?

Patient-led initiatives are engineering improvements to medical devices, and companies looking to catch up will need to apply significant rigor to their risk management, hazard identification, and user testing processes.

Mike Dunkley and Samantha Katz

There are two sides to medical device hacking.

On one hand, there is justifiable concern that nefarious objectives could motivate the hacking of wireless medical devices, whether to gain unauthorized access to private patient data or, in an extreme example, to assassinate a sitting Vice President (the scenario raised by Dick Cheney’s implantable defibrillator).

On the other hand, the rise of maker culture, coupled with meaningful pain points, is leading to creative medical device hacks whose end goal is a clear user benefit. As an example, rallying under the social media hashtag #wearenotwaiting, the Nightscout project is a valuable initiative for remote monitoring of continuous glucose readings built by hacking currently available medical device technology. A group of concerned parents figured out how to connect Dexcom’s continuous glucose monitor (CGM) to the cloud so they can monitor their children’s glucose levels remotely via their smartphones.


User expectations for medical devices are being shaped by their experiences with consumer products and services, and the confluence of these experiences with their unmet needs as patients and care partners is creating clear demand for medical devices that look, feel, and operate like consumer devices. Where companies are failing to innovate fast enough to keep up with consumer technologies, patients and care partners are becoming increasingly impatient and, consequently, are innovating directly or via online communities of like-minded people.

Medical device hacking is perhaps the clearest expression of unmet need. People are asking themselves why they are able to monitor the performance of their favorite stock in real time but cannot do the same for their child’s glucose reading. The technology pieces are all out there, they wonder, so why aren’t companies putting them together to design and deliver the new products and services their customers are demanding? If companies will not deliver, patients are increasingly driven to do it themselves.

One of the major factors holding companies back is the need to address patient safety risk in the design and deployment of such systems. Take Type 1 diabetes, the focus of Nightscout. It is clearly valuable to be able to monitor your child’s glucose levels while she is at a sleepover, but what happens if the link goes down? How can you be sure the reading you’re seeing is current and correct? If not adequately mitigated, these risks have the potential to lead to serious problems, such as incorrect insulin dosing, and companies rightfully have to take them very seriously.
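The connectivity concern above can be made concrete. As a minimal sketch (hypothetical names and threshold, not Nightscout’s actual code), a remote display might run a staleness watchdog so that silence is never mistaken for a healthy reading:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical staleness threshold; a real system would derive this from
# its risk analysis and the sensor's validated reporting interval.
STALE_AFTER = timedelta(minutes=10)

def reading_status(last_reading_time, now=None):
    """Classify a remote CGM feed as FRESH or STALE.

    A stale feed must be surfaced to the caregiver, because silence is
    ambiguous: it could mean "all is well" or "the link is down".
    """
    now = now or datetime.now(timezone.utc)
    return "FRESH" if (now - last_reading_time) <= STALE_AFTER else "STALE"
```

The point of the sketch is the explicit classification: a display that simply shows the last value received cannot distinguish a healthy reading from a dead link.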

FDA has created a regulatory environment that requires companies to design medical devices with demonstrable rigor and a particular emphasis on risk management. In the case of devices that leverage mobile infrastructure, FDA has also offered guidance as to whether a particular type of device is subject to oversight, whether the agency will exercise enforcement discretion, or whether the device presents sufficiently low risk that it is not considered a medical device at all.


For remote monitoring of Type 1 diabetes, it is clear that FDA will need to be satisfied that a company has been diligent in its risk management activities before it gains market clearance. This includes extensive testing and documentation at the device and system levels.

The key starting point, particularly for systems involving mobile elements and multiple interfaces, is to be clear about how the system functionality is distributed, because that distribution can have a significant impact on the regulatory implications. For instance, if a CGM system is configured with a primary display on the patient’s smartphone and a secondary display on the parent’s smartphone, then the risk management activities will likely be shaped by the expectation that both patient and caregiver are involved in monitoring glucose levels. In this scenario, the glucose sensor and the patient’s smartphone application would be regulated as Class III medical devices, while the secondary display (on the parent’s smartphone) may be separately regulated as a Class II accessory device.

Alternatively, if the patient’s (and, hence, the parent’s) smartphone apps both serve as secondary displays, with the company’s proprietary, FDA-approved display device serving as the primary display, the apps would be considered lower-risk Class II devices. This was the case with Dexcom’s G4 PLATINUM with Share CGM system. The approved Class III Dexcom display device, which is responsible for alerting the user to glucose excursions, must be present in order to use the secondary app and, consequently, the rest of the cloud-based system.

A crucial next step for any risk management activity in the regulated medical realm is to be clear about the intended use and associated marketing claims, even if these might be reasonably inferred by a user rather than explicitly stated by the company. In other words, a parent might reasonably expect that a remote CGM system will reliably alert them if their child’s glucose levels drop to dangerously low levels. They may not anticipate that cellular signals could be interrupted or that their child’s sensor could temporarily lose connectivity if the child is sleeping on it.

Next comes a thorough hazard identification process, with input from medical professionals, that forms the basis for the risk assessment work to follow. Key concerns for remote CGM monitoring will likely include hypoglycemia and hyperglycemia, the former being the more serious and immediately life threatening.

The high-level failure conditions that might enable either of these situations include “no result,” “wrong result,” or, depending on the user’s expectations for the timeliness of CGM data, “late result.” Effective risk analysis seeks to identify, quantify, and mitigate the potential failures in the system that could lead to a hazard. The “art” of risk management is to put yourself in the user’s shoes and understand how they will incorporate the device into their daily life, including all the potential opportunities for misuse, to strike the optimum balance of safety and usability in a commercially viable product or system. Ultimately, in the United States, FDA is the arbiter of whether the overall approach is adequate, but companies that apply smart, user-centered, and timely methodologies are more likely to succeed.
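One common way to structure the quantification step is a simple likelihood-times-severity scoring of each failure mode. The scales and entries below are illustrative assumptions, not a validated hazard analysis:

```python
# Illustrative risk matrix: likelihood and severity on 1-5 scales.
# The entries are examples for discussion only, not a real risk file.
FAILURE_MODES = {
    "no result":    {"likelihood": 4, "severity": 4},  # e.g., connectivity loss
    "wrong result": {"likelihood": 2, "severity": 5},  # e.g., sensor error
    "late result":  {"likelihood": 3, "severity": 3},  # e.g., network delay
}

def risk_priority(modes):
    """Rank failure modes by likelihood x severity, so that mitigation
    effort is directed at the highest-risk items first."""
    scored = {name: m["likelihood"] * m["severity"] for name, m in modes.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

With these example numbers, “no result” ranks highest, which is why the next paragraph walks through that failure mode in detail.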

Consider the potential failure mode of “no result”: the system is not reporting data to either the patient’s or the parent’s smartphone, and the potential harm is that the patient could enter a dangerous hypoglycemic state undetected. Risk management needs to consider the likelihood and severity of this event and balance the potential mitigations against their usability implications for different stakeholders. Should the system always alarm the patient in the event of “no result,” even in the likely scenario that the patient is asleep and lying on top of the CGM sensor, impeding the wireless connection with the smartphone? Should the system also alarm the parent, and should it distinguish between connectivity failures local to the patient and those caused by unreliable network connectivity? These potential mitigations need to be mapped to each stakeholder’s expectations and needs for the system and related back to the manufacturer’s intended use and associated claims. The right approach strikes the optimum balance: warnings that are effective without becoming intrusive or overly irritating.
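The routing questions above can be sketched as a small decision function. The stakeholder choices encoded here are illustrative assumptions for discussion, not recommendations for a real device:

```python
def route_no_result_alarm(sensor_link_up, network_up):
    """Decide who is alerted for a 'no result' condition.

    Distinguishes a dropout local to the patient (e.g., sleeping on top
    of the sensor) from wider network loss, so alarms stay informative
    instead of waking everyone for every transient glitch. The routing
    below is one illustrative policy among many defensible ones.
    """
    if sensor_link_up and network_up:
        return []                    # data flowing: no alarm
    if not sensor_link_up:
        return ["patient"]           # local fix needed: reposition the sensor
    return ["patient", "parent"]     # network down: both should know the feed is blind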

Lastly, when it comes to mobile technologies, it is important to consider how verification and validation activities will be executed, given the speed at which mobile operating systems change independently of the medical devices in the system. A solution for an identified hazard could elegantly mitigate a safety risk yet trigger a lengthy revalidation process with every OS update, merely shifting patient risk to business risk if part of the medical device system becomes inoperable following the change.
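One pragmatic mitigation is to gate safety-critical features on the OS versions the system has actually been validated against, so an unexpected OS update degrades the app gracefully rather than letting it run silently in an untested configuration. A minimal sketch, with a hypothetical version list:

```python
# Hypothetical set of OS versions the app has been validated against;
# in practice this would come from the company's V&V records.
VALIDATED_OS_VERSIONS = {"16.4", "16.5", "17.0"}

def os_is_validated(os_version):
    """Check the running OS against the validated list.

    On a mismatch, the app can warn the user and disable secondary
    monitoring features until revalidation completes, rather than
    operate in an untested configuration.
    """
    return os_version in VALIDATED_OS_VERSIONS
```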

There is clear complexity to a remote CGM system with multiple displays and users, as well as the incorporation of third-party hardware and cloud-based infrastructure. The analysis required for such a system, with its significant safety hazards, therefore demands tremendous rigor and will need to be backed by effective and representative functional and user testing.

Returning to the topic of hacking: in many cases, rather than being something to eschew, these workarounds are a clear indicator of unmet patient needs and therefore point to an attractive opportunity space for medical device manufacturers seeking to develop regulated products and applications. Groups like Nightscout are innovating to improve the lives of children with diabetes and to lessen the burden on their parents because the medical device industry simply cannot move as quickly to a regulated, commercially available product. For the companies that can successfully catch up, however, the commercial benefits will be sizable, and the improvements in the lives of patients and caregivers will be invaluable.

Stay on top of the latest trends in medtech by attending the MD&M East Conference, June 9–11, 2015, in New York City.

Mike Dunkley is senior vice president and Samantha Katz is a senior strategist and digital health lead at Continuum, a global innovation design consultancy.





Perhaps we need more words to subdivide all-inclusive “hacking” into distinct parts. Taking information out of a device (especially over the air) in a way that has no effect on its technical operation is clearly far different from altering the device’s technical operation, whether for good or bad reasons. What you do with the information does matter, as you point out.

People capture output from a variety of medical devices, especially when the output is proprietary but the user/owner wants to do something with the data beyond what the vendor intends. Medical device interoperability (when not openly provided by the device maker) and “middleware” have this objective, but I haven’t seen it described as hacking. In its early definition of Medical Device Data Systems, FDA clearly distinguished between receiving device data and then doing something with it, as opposed to altering the medical device’s function.

As for a no-signal warning, this issue parallels that of a hospital-based leads-off alarm. Leads-off is not a clinical crisis, unless something of importance occurs while the leads are off and is then missed. If you are relying on a system to provide critical information, you have to know whether silence means everything is OK or the system isn’t working. This is why smoke detectors give low-battery warnings.

As for Cheney's defib hack prevention, this was a solution to a theoretical (hypothetical) risk, of which there are many. But we usually want to identify and prioritize risks and respond accordingly, rather than chase every possible risk, or perhaps out favorite risks.