Medical devices, including patient monitors, ventilators, and defibrillators, are designed to provide lifesaving capabilities. Today's medical device manufacturers are also at the forefront of developing innovative new ways to deliver patient care, from wearable electrocardiograms for use at home to more complex devices such as robotic nurse assistants that help caretakers lift patients within a healthcare facility. However, to ensure that patients are not at risk, it is critical that all of these medical devices behave correctly in all circumstances.
These devices contain a significant amount of complex embedded software. In an increasingly connected world, how do we ensure that the software that controls these devices functions as planned with little to no risk of harming the patient?
Thoroughly testing this embedded software for security weaknesses is critical in verifying that the medical device will operate as expected and forms the basis of a trusted computing platform. If the foundational software can't be trusted, it undermines the entire security plan. Security vulnerabilities can enter a product as soon as the first lines of code are written, but the real danger is if they are not detected until much later.
For example, earlier this year the FDA confirmed that certain implantable cardiac devices had vulnerabilities that could allow a hacker to deplete the battery or administer incorrect pacing or shocks. Thankfully, no patients were harmed as a result, but the episode once again brought security to the forefront as a critical issue for device makers and software developers alike.
Creating the Next Generation of Secure Medical Devices
Just like quality, security is a process that is best implemented at inception. Developing secure applications requires constant vigilance in all stages of development. Challenges need to be addressed during development because it is too costly, complex, and risky to redesign these advanced systems after they have already been shipped. This means using tools that can detect possible vulnerabilities when writing code, integrating modules, and testing compiled binaries on target hardware.
One of the most commonly used tools by security testers is static application security testing (SAST). This type of testing is designed to analyze application source code, byte code, and binaries for common vulnerabilities, including coding and design conditions that might lead to potential security vulnerabilities.
Adopting SAST is sound in theory, as developers commonly want to know: a) are there any issues with the software; b) how many; and c) what and where are they? Assessing the code with a static analyzer provides some direction, but it is not a catch-all solution, especially when security, and ultimately safety, are at stake. This is because SAST tools do not actually execute the code; instead, they try to understand what the code is doing "behind the scenes" to locate errors. They analyze elements such as syntax, semantics, estimated variable values, and control and data flow to identify issues in the code.
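To illustrate why analysis without execution can mislead, here is a minimal, hypothetical C sketch (the function and its caller contract are assumptions, not from the source) of a pattern a SAST tool may flag even though the code is safe in context:

```c
#include <stddef.h>

/* Illustrative only: callers guarantee that buf is non-NULL whenever
 * len > 0, so buf is never dereferenced when it is NULL. A static
 * analyzer that cannot see the caller's contract may still report a
 * possible NULL-pointer dereference here -- a false positive. */
int checksum(const unsigned char *buf, size_t len)
{
    int sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += buf[i];          /* only reached when len > 0 */
    return sum;
}
```

Executing the code with real inputs, as dynamic testing does, resolves the question the analyzer could only guess at.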
SAST tools are usually rule-based and run late in the development cycle, so when used alone they can produce false positives (reports of possible vulnerabilities that are not actual vulnerabilities). That leaves security engineers searching for a "needle in a haystack" when identifying the genuine vulnerabilities. Furthermore, many SAST tools only help zero in on at-risk portions of the code so that developers can find flaws more efficiently, rather than finding the actual security issues automatically. This can lead to time-consuming processes as well as incomplete analyses, both of which can be detrimental in the software development world.
To address this problem, new dynamic unit testing methods are emerging that expose defects in software by generating a test case and confirming exploitability. Building on MITRE's Common Weakness Enumeration (CWE), the approach uses automated software testing methods to interrogate an application's code and identify possible weaknesses. The community-developed CWE list serves as a common language for describing software security weaknesses in architecture and code, and provides a standard lexicon for tools that detect such potential weaknesses.
In the CWE taxonomy, there are numerous weaknesses where dynamic testing can highlight vulnerabilities, in particular anything involving hard errors such as null pointer dereferences or division by zero.
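The "hard error" weaknesses mentioned above correspond to real CWE entries, CWE-369 (divide by zero) and CWE-476 (NULL pointer dereference). A minimal, hypothetical C sketch of each (the function names are illustrative, not from any real device code):

```c
/* CWE-369: crashes with a hard error if samples == 0. */
int average_rate(int total, int samples)
{
    return total / samples;
}

/* CWE-476: crashes with a hard error if readings == NULL. */
int first_reading(const int *readings)
{
    return readings[0];
}
```

Because these defects produce an observable crash when triggered, a dynamically generated test that drives the faulting input can confirm the weakness definitively rather than merely flagging it.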
In the dynamic testing approach, once a potential CWE is found, a test exploiting the identified issue is generated and executed. After execution, test tools can analyze the execution trace and decide if the potential CWE is a genuine threat. That issue can then be classified as a common vulnerability and exposure (CVE).
Figure 1: Dynamic unit testing methods can expose software defects by generating a test case and confirming exploitability. Once a potential CWE is found, a test exploiting the identified issue is generated and executed. After execution, test tools analyze the execution trace and decide if the potential CWE is a genuine threat, which can then be classified as a CVE.
The approach is based on the "synthesis" of executions leading to specific software issues (i.e., the automatic construction of a dynamic test exploiting a given vulnerability), allowing for the identification and automatic testing of undiagnosed cybersecurity vulnerabilities. The construction of this exploit is then paired with its dynamic execution to determine whether the vulnerability is genuinely exploitable. This type of dynamic testing method performs an upfront analysis of the code to detect potential issues (much like a static analyzer), an analysis that may itself contain false positives. However, once a potential issue has been identified, the method also attempts to perform "automatic exploit construction."
Figure 2: A sample of code from a medical device showing a C code error. In this example, ptrTaskData was never checked for null pointers, causing a crash to occur.
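The flaw described in Figure 2 can be sketched as follows; only the name ptrTaskData comes from the source, and the surrounding type, fields, and function are assumptions for illustration:

```c
#include <stddef.h>

/* Hypothetical task-data structure; the real device code is not shown. */
typedef struct {
    int state;
} TaskData;

int get_task_state(TaskData *ptrTaskData)
{
    /* Original flaw: ptrTaskData was dereferenced without a NULL
     * check, so a NULL argument crashed the device software.
     * The guard below is the kind of fix a redesign would add. */
    if (ptrTaskData == NULL)
        return -1;              /* reject NULL before any use */
    return ptrTaskData->state;
}
```

A dynamic test that passes NULL for ptrTaskData would have exposed the original crash directly, rather than leaving it as an unconfirmed static-analysis finding.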
Unlike static analysis-based approaches, this type of software security testing will only flag an issue if it is genuinely exploitable, mitigating the issues of false positives. Furthermore, the generation of test artifacts allows for their future re-execution to demonstrate the mitigation of a potential issue after software redesign.
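As a sketch of how a retained test artifact can be re-executed after redesign to demonstrate mitigation, consider this hypothetical example (the function, names, and the shape of the synthesized test are assumptions, not the tool's actual output):

```c
#include <stddef.h>

/* Hypothetical target function. Before redesign it dereferenced ptr
 * unconditionally (a CWE-476 weakness); the guard below represents
 * the post-redesign fix. */
int read_sensor(const int *ptr)
{
    if (ptr == NULL)
        return -1;              /* guard added during redesign */
    return *ptr;
}

/* The kind of test a dynamic-testing tool might synthesize: it drives
 * the suspect parameter to the triggering value (NULL). Before the
 * fix, running it confirmed the crash; re-running it now demonstrates
 * the weakness has been mitigated. Returns 1 when mitigated. */
int generated_test_cwe476(void)
{
    return read_sensor(NULL) == -1;
}
```

Keeping such tests in the regression suite ensures the same weakness cannot silently reappear in a later release.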
As new technologies continue to change the threat landscape, security is more important than ever. Static application security testing has its own set of benefits, but dynamic testing can further expose defects in software by generating a test case and confirming exploitability, finding vulnerabilities more definitively and ultimately creating more secure medical devices. When lives are potentially at stake if security is compromised or software fails, this is crucial.
Jeffrey Fortin is head of product management at Vector Software, a provider of automated software testing tools.
[Top image courtesy of PIXELCREATURES/PIXABAY.COM; other images courtesy of JEFFREY FORTIN]