Connected Healthcare Systems Are a Cybersecurity Nightmare
Could DTSec, a new security standard for connected medical devices, change that?
May 16, 2016
David Kleidermacher
Throughout the digital world, consumer confidence in service providers' ability to protect our personal information, our health and well-being, and our critical infrastructure is disappointingly low. Ask 10 random people how they feel about digital security today. Cover your ears before the expletives fly.
Is the outlook improving? Are we winning the war, even if we continue to lose some battles? Hardly: In 2015, the U.S. healthcare industry suffered its five worst-ever data breaches, while FDA and the Department of Homeland Security issued warnings about serious vulnerabilities in networked medical equipment. And a spate of ransomware attacks against hospitals has marked 2016.
Yet, we have no shortage of alleged panaceas from security pundits. If you ask security tech companies, they'll roll out a string of newfangled products: machine-learning-based anomaly detection, quadruple factor biometric authentication, and more bits of encryption than you can imagine. If you ask consultants, they'll roll out a string of risk management processes that will cost you a boatload to hear about and a shipload to implement. If you ask politicians, they'll roll out a string of regulations that implore product and service providers to do better but lack practical implementation details or enforcement teeth. How well have these helped so far?
These initiatives are doomed to failure because we lack the fundamental ability to evaluate whether a technology, process, or policy can protect our digital systems against modern sophisticated attackers. With the exception of financial transaction technologies such as smart card integrated circuits (because we're talking about people's money here, folks!), effective standards for security assurance--scientific approaches for independent stakeholders to gain high confidence in security--do not exist. The standards we do have, such as HIPAA and the Payment Card Industry Data Security Standard (PCI DSS), are expensive for the minimal confidence they deliver. Does HIPAA prevent protected health information breaches? Does PCI DSS prevent personally identifiable information breaches?
Cybersecurity pundits are like politicians running for office, and voters are sick and tired of empty campaign promises. If you run for president of the digital security world, show the public how your approach delivers--measurably and objectively--on its claimed security benefits.
We need nonbureaucratic standards (not always an oxymoron!) that include a cost-efficient, risk-based determination of security requirements for a medical product or system. We need assurance programs for evaluating that product or system against those requirements. These programs must include, via expert vulnerability assessment, emulation of the sophisticated attackers we now know to be threatening our healthcare systems.
And in no industry is there more on the line than healthcare. The black market for electronic protected health information is attracting sophisticated attackers, and the consequences of attacks extend beyond loss of privacy to the health and well-being of patients. This is the age of the Internet of Medical Things, where our medical systems are network connected. So far, the evidence shows we are unprepared for the risk environment this connectivity implies. For example, widely deployed hospital infusion pumps were designed a decade ago with no firmware integrity verification, open network ports, and no user authentication. Once attackers gain a foothold in one of these devices, they can use it as a launching point for attacks across the hospital network. Attackers look for the weakest link, and some medical devices represent the soft underbelly of healthcare networks.
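For readers wondering what "firmware integrity verification" entails in practice, here is a minimal sketch. The key name and image contents are hypothetical, and the HMAC stands in for the public-key signature a real device would use; the point is only that a device should refuse an update whose authenticated digest does not check out.

```python
import hashlib
import hmac

# Hypothetical key, provisioned at manufacture. A real pump would instead
# store a vendor public key and verify an asymmetric signature.
VENDOR_KEY = b"example-device-provisioning-key"

def sign_firmware(image: bytes) -> bytes:
    """Manufacturer side: authenticate the SHA-256 digest of the image."""
    return hmac.new(VENDOR_KEY, hashlib.sha256(image).digest(), hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes) -> bool:
    """Device side: constant-time check before accepting an update."""
    expected = sign_firmware(image)
    return hmac.compare_digest(expected, tag)

# A tampered image fails verification; the device should reject it.
firmware = b"\x7fELF...device-firmware-image"
tag = sign_firmware(firmware)
print(verify_firmware(firmware, tag))                  # genuine image accepted
print(verify_firmware(firmware + b"tampered", tag))    # modified image rejected
```

The infusion pumps described above shipped with no check of this kind at all, so any image an attacker could deliver over an open network port would be accepted.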
DTSec, a security standard for connected medical devices, is the first big step in the right direction. The DTSec steering group includes the widest range of healthcare stakeholders ever assembled to tackle this problem: government regulators and agencies (including FDA, Health Canada, DHS, and NIST); caregivers (physicians, nurses, and professional caregiver organizations); academic researchers in medical devices; patient-advocacy organizations and ethical hackers; medical device manufacturers; liability attorneys; and independent technology and cybersecurity experts (BlackBerry, IBM, Intel).
What makes DTSec different from the failed security standards before it is that it includes a methodology for specifying--via risk-based, multi-stakeholder collaboration--product-dependent security requirements, as well as a program for efficiently evaluating actual products against those requirements, so we can gain the high levels of assurance we need at reasonable cost and at the speed of digital innovation. In fact, the ability to evaluate and assure security will enable new therapeutic and quality-of-life opportunities. For example, today we are limited in how we can leverage consumer electronics in healthcare settings: smartphones and wearables are permitted for information recording and observation but may not directly control life-critical functions. If we have high-assurance standards in place, technology solutions will follow.
In the 1800s, we had a different kind of panacea. Unscrupulous salesmen sold snake oil as a cure-all to unsuspecting consumers. Today, snake oil could not succeed because we have a system in place for assuring safety claims of drugs, foods, and medical devices. It may not be perfect, but it works. And yes, it includes regulation (although poor regulation can be worse than no regulation). In contrast, organizations can (and do) make sweeping claims about the security of their wares, despite lacking independent assurance of these claims.
Indeed, security assurance for medical devices is inherently different from safety assurance, and as computer ethics author Deborah Johnson has explained, it is dangerous to think we can understand our security obligations by applying policies and arguments from older issues and technologies.[2] Using an insulin pump a billion times on millions of people provides assurance the pump will be clinically safe for the next user, but it provides no assurance the pump can protect those millions of people against determined and well-resourced hackers.
Regulation and public agencies can't solve the assurance crisis. Government bears a responsibility to assist, guide, and promote solutions, but an open, international multi-stakeholder community must bear the burden of implementation. This problem can be solved, and there's good news: It's not rocket science. But all stakeholders must rally around this effort to make it work. Device manufacturers must show leadership in putting their products voluntarily through independent security evaluation to provide customers and patients with increased confidence. Hospitals and other healthcare providers must exercise purchasing power in requiring suppliers to offer systems evaluated under rigorous assurance standards. Payers must exert their financial power in providing rate preferences for the use of evaluated products. Caregivers must drive adoption of these standards by educating patients and healthcare institutions about the risks and mitigations associated with connected medical devices. Patients must demand that healthcare providers and their suppliers provide the objective multi-stakeholder evidence of security protection that can only be obtained with initiatives like DTSec.
David Kleidermacher, chief security officer for BlackBerry Limited, will speak at MEDSec 2016, a conference about security and privacy for the Internet of Medical Things, held May 23 and 24 in San Jose.
[2]Johnson, D. G. (2001). Computer Ethics. Prentice Hall, 3rd edition.