Adopting Static Analysis Tools

Software engineers need to understand the value of static analysis tools and how to implement such tools into their processes.

Andrew Dallas

August 1, 2008


SOFTWARE VALIDATION


[Figure: The results page of a static analysis tool. In this example, the tool found 1400 uninitialized variables in less than 20 minutes.]

Recently, the FDA software forensics lab announced that it was using a set of five static analysis tools to test the software in medical devices that had been recalled due to adverse events.

In an interview with The Gray Sheet, Brian Fitzgerald, the deputy director of the forensics lab, said, “We're hoping that by quietly talking about static analysis tools, by encouraging static tool vendors to contact medical device manufacturers, and by medical device manufacturers staying on top of their technology, we can introduce this up-to-date vision that we have.”

To improve software quality, firms may want to consider static analysis tools. However, device companies must first understand the capabilities of static analysis and run-time analysis tools and how to effectively integrate them into a software development environment.

Static Analysis and Run-Time Analysis

Static analysis tools read the source code and identify certain classes of errors without actually running the code. They have evolved from simple syntax checkers to powerful tools that algorithmically examine code for errors and defects, even in large code bases. A software development team can use these tools to detect and fix errors early in the software development process. Run-time analysis tools are incorporated into the build process and identify errors while the code is running.
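As a simple illustration of the static side (the function and its logic below are invented for this article), a static analyzer can report the possible use of an uninitialized variable just by tracing the paths through the source, with no need to compile and execute the program:

    #include <stdio.h>

    /* Invented example: `dose` is assigned on only one branch, so a
     * static analyzer can report a possible uninitialized read without
     * ever running the program. */
    static int compute_dose(int weight_kg)
    {
        int dose;                    /* not assigned on every path */
        if (weight_kg > 0)
            dose = weight_kg * 2;
        return dose;                 /* uninitialized when weight_kg <= 0 */
    }

    int main(void)
    {
        printf("dose = %d\n", compute_dose(-1));  /* may print garbage */
        return 0;
    }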

Each type of tool has advantages and limitations. It is best to use both static analysis and run-time analysis tools in conjunction. The two types of tools are complementary in that each looks for specific types of errors that the other doesn't.

However, static analysis tools have not historically been widely used by software developers. Such reluctance stems primarily from the high cost of entry for some tools: some of the more sophisticated options cost $20,000 or more. And although some tools are available under free (or nearly free) open-source licenses, the time and effort required to configure such systems have also restricted broader adoption.

Getting Started: Identifying Meaningful Errors

For medical device software, static analysis tools can be employed from day one. However, in the early stages of software development, static analysis tools generate more false positives (reported errors that are not true errors) because the code is just starting to be developed. The results improve as the body of analyzed code grows. Early and frequent use of static analysis can help team leaders identify developers who consistently write code that produces particular types of errors. Such information can help the software team correct poor coding practices early in the process, before they degrade the overall quality of the code. In addition, a software team leader can begin to tag certain errors as irrelevant so that more meaningful errors are easier to see.

What are meaningful errors? A static analysis tool vendor recently announced availability of new concurrency defect detection capabilities in its tool for C/C++ and Java. This technology introduces static defect detection of race conditions, one of the most difficult-to-find concurrency errors that occurs in multithreaded applications. A race condition is an undesirable situation that occurs when a software system attempts to perform two or more operations at the same time, but the nature of the system requires that the operations be done in sequence.
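As a minimal sketch of such a race (the counter and thread bodies here are invented for illustration), consider two POSIX threads incrementing a shared variable with no lock. Each increment is really a read-modify-write sequence, so the threads can interleave and lose updates, and the final total varies from run to run. A concurrency-aware static analysis tool can flag the unsynchronized access to the shared counter from the source alone:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;         /* shared state, no lock protecting it */

    static void *increment(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;               /* unsynchronized read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 200000, but the race means the total varies per run. */
        printf("counter = %ld\n", counter);
        return 0;
    }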

Race condition defects have been responsible for some notorious failures of device software. For example, race conditions in the software of the Therac-25 radiation therapy machine were cited as contributing to the deaths of five patients. In that case, the software performed the two competing operations in a random order, a defect that was virtually impossible to find manually because the order differed each time the software executed.

Race conditions are just one example of defects that are difficult to find without an automated tool. Others include variables that are never initialized, redundant code, division by zero, memory leaks, and improper memory-freeing calls. Incomplete state handling is another class of error that can result in unpredictable behavior.
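The short, invented C fragment below packs several of these defect classes into one function; a static analysis tool can flag each of them from the source alone:

    #include <stdlib.h>

    /* Invented fragment combining several statically detectable defects. */
    int average(const int *values, int count)
    {
        int *copy = malloc(count * sizeof(int)); /* never freed: memory leak */
        int sum = 0;
        for (int i = 0; i < count; i++) {
            copy[i] = values[i];     /* copy may be NULL: possible crash */
            sum += values[i];
        }
        return sum / count;          /* division by zero when count == 0 */
    }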

Run-time analysis tools complement static analysis tools by tracking and reporting problems such as unhandled exceptions or failures in the code, out-of-bounds parameters passed to functions, and memory errors such as freeing the same block of memory twice. Such errors can be difficult to detect using only manual techniques such as formal code reviews.
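For instance, the invented program below compiles without complaint, yet a run-time analysis tool such as the open-source Valgrind reports both the out-of-bounds write and the double free when the program is executed:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(8);
        strcpy(buf, "overflow!");    /* 10 bytes written into an 8-byte block */
        free(buf);
        free(buf);                   /* the same block freed twice */
        return 0;
    }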

Integrating Static Analysis into Software Development

It should be noted that static analysis tools are software development test tools, not software quality assurance test tools. A software team leader can use the results to focus quality efforts, but the error-correction warnings are meant to be interpreted and corrected by developers.

One of the challenges in implementing a static analysis tool is learning to properly configure the tool to the environment and to the base of source code. Medical device companies should understand that they will see a high false-positive-to-true-error ratio, often as high as 10:1. This can be frustrating for a software team leader, who must weed through all reported errors.

Although combing through false positives is a daunting task, it is necessary. The key to success is in continuously configuring the tool.

As development proceeds, a static analysis tool must be modified. The sensitivity of the tool should be lowered gradually as the base of developed code grows. Additional flags for nonmeaningful errors can continue to be set by the software team leader. The tools can be configured to ignore certain types of errors and report only classes of errors specified by the software team leader.
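As one concrete sketch of such configuration (using the open-source cppcheck tool; the helper function and surrounding code are invented), a warning that the team has reviewed and judged to be a false positive can be suppressed inline, and whole classes of errors can be suppressed globally with a command-line option such as --suppress=uninitvar:

    /* Invented example: init_status() assigns to `status` through a
     * pointer, but the analyzer cannot prove that and reports a false
     * positive. After review, the team suppresses this one warning. */
    void init_status(int *s);        /* assumed helper, defined elsewhere */

    int read_status(void)
    {
        int status;
        init_status(&status);
        /* cppcheck-suppress uninitvar */
        return status;
    }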

In cases for which the software development effort has hopelessly stalled, static analysis and run-time tools can be employed as emergency diagnostics to find structural defects in the code. If the team has never used a static analysis tool, it is often worthwhile to bring in an expert to set up the configuration, analyze the results, and implement corrections.

When a project is stalled, the best option may be to apply code hardening: stopping the test-and-debug cycle and bringing in static and run-time analysis tools to analyze the code, stabilize it, and prioritize the remaining development. Code hardening at this stage is expensive and time-consuming, but it is often the only way to bring the project to a successful conclusion.

Costs of Software Errors

Software industry averages for defect-correction costs help illustrate why it's important to start early.1–3 The cost of correcting a single defect is $455 during the design phase, $977 during the coding phase, and $7,136 in the final test phase. Hundreds, and sometimes thousands, of defects can be discovered during a typical software project.
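To put those figures in perspective, consider a hypothetical project with 100 defects: correcting all of them during design would cost roughly 100 × $455 = $45,500, whereas correcting the same 100 defects in final test would cost about 100 × $7,136 = $713,600, more than 15 times as much.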

Although there is no documented data on the cost of a software rescue mission involving code hardening, anecdotal evidence suggests that costs can reach up to half a million dollars.

Conclusion

Based on the costs of error detection and correction at various phases, the business case for investing in analysis tools is strong. From a software development process perspective, it may be challenging to select the most appropriate static analysis tools and accept the steep learning curve associated with employing them in an optimal fashion.

It may also be difficult to garner acceptance into the development team's culture. However, allocating the time required to learn and implement the tools may be well worth the effort.

Andrew Dallas is president and founder of Full Spectrum Software (Framingham, MA), which provides custom software development and testing services. He can be contacted via e-mail at [email protected].

References

1. C. Jones, Software Assessments, Benchmarks, and Best Practices (Indianapolis: Addison-Wesley, 2000), 100–101.

2. W. Humphrey, Introduction to the Personal Software Process (Indianapolis: Addison-Wesley, 1997), 213–217.

3. B. Boehm and V. Basili, "Software Defect Reduction Top 10 List," IEEE Computer 34, no. 1 (January 2001): 135–137.

Copyright ©2008 Medical Device & Diagnostic Industry
