Accurate tolerance stackups have become a key element in complex, multicomponent medical device assemblies.

March 17, 2014

Why Perform Tolerance Stackups in Medical Device Assemblies?

By Greg Berrevoets

Medical device customers often say, “The design is close to lockdown.” This means that the list of requirements has been met and that the project is nearing handoff to manufacturing. The next steps are full-scale verification construct testing and 510(k) submission. From here on out, any design changes will be costly. However, the design engineer may be aware of issues that have not been addressed head-on. Among them is the tolerance stackup.

Used to address mechanical fit and mechanical performance requirements in assemblies, tolerance stackups represent the cumulative effect of part tolerances. If a stackup is performed incorrectly, the components may not assemble correctly, resulting in time delays, revisions, and extra costs.

Three different methods are employed for analyzing and presenting tolerance stackups in multicomponent medical device assemblies: worst case analysis, root sum square (RSS) analysis, and statistical tolerance allocation. Each successive method predicts progressively less variation, increasing the likelihood that a device will assemble properly. To demonstrate how complex medical devices can be assembled successfully, this article will show:

  • How the use of worst case tolerancing schemes can result in delays and cost overruns.

  • How other methods, such as RSS analysis, can provide more tolerance than worst case methods.

  • How RSS, although it is better at predicting reasonable limits than the worst case model, does not account for process capability.

  • How statistical allocation analysis enables engineers to assign tolerances based on established process deviations, adding certainty to a stackup.

Tolerance Stackups in Medical Device Assemblies


Figure 1. A schematic diagram shows the dimensions of a multicomponent polyaxial bone screw used in spinal fixation procedures.

Medical device design engineers face increasing challenges as a result of technological advances and complex design inputs. The incorporation of more and more components in an assembly results in increased complexity and the cumulative effects of additional tolerances. While a solid model, combined with a drawing graphic sheet and external documentation, should communicate the design intent, how can the engineer know with reasonable certainty that all the component variables will produce a robust design? The answer is tolerance stackups. If the tolerance stackup is right, all the components will assemble and function as intended. But if it is wrong, the project will suffer delays and cost overruns.

Figure 1 illustrates a multicomponent polyaxial bone screw for use in spinal fixation procedures. To begin the stackup analysis, it is necessary to identify fixed versus variable tolerances. Fixed tolerances are defined as those over which the design engineer has no control, such as components used in other products and purchased components. In Figure 1, dimension E, a complex calculated dimension and tolerance involving spherical contact with two other components, is treated as fixed. The design input under evaluation is the GAP, which has a performance requirement of 0 < GAP < 0.015 in. Because the other dimensions in the schematic diagram are variable, they serve as the basis of the tolerance scheme and determine the choice of stackup methodology.

Worst Case Analysis. A common stackup method, worst case analysis has been around for a long time. Many design engineers rely on it out of inertia or—worse yet—because they don’t think they have time to do stackups any other way. Unfortunately, this can be a mistake, as shown in Table I.


  Table I. Example of a worst case tolerance stackup analysis.

Table I is based on a common tolerancing approach of ±0.002 in. for the location of turned grooves. While this bilateral tolerance is accepted industry practice and appears reasonable, analysis tells another story. Using worst case analysis, the tolerances are added together, resulting in a nominal gap of 0.007 ± 0.0153 in. The gap could thus be as great as 0.0223 in. or as small as –0.0083 in., creating interference. This analysis method therefore fails to meet the design input of 0 < GAP < 0.015 in.

At this point, the engineer could easily satisfy the gap performance requirement on paper by reallocating the tolerances and recalculating the values: reducing each of the evenly distributed variable tolerances from ±0.002 in. to ±0.0003 in. yields an adjusted nominal gap of 0.007 ± 0.0068 in. But such tight tolerances would have to be "inspected in" to ensure conformance, resulting in increased inspection time and more nonconforming parts.

As this example demonstrates, worst case analysis is limited: it assumes that all parts are within tolerance and guards against the unlikely event that every part sits at its tolerance limit simultaneously. The greater the number of dimensions involved, the greater the gap, or slop, must be to meet the design goals.
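The worst case arithmetic is simple enough to verify in a few lines of code. The sketch below is a minimal reconstruction, not the article's actual spreadsheet: the five variable dimensions at ±0.002 in. and the single fixed tolerance of ±0.0053 in. are assumed values chosen so the totals match those quoted above, since Table I is not reproduced here.

```python
# Worst case 1-D stackup: tolerances add linearly.
# Assumed values: five variable dimensions at +/-0.002 in. and one
# fixed tolerance of +/-0.0053 in., chosen to match Table I's totals.
NOMINAL_GAP = 0.007          # in.
variable_tols = [0.002] * 5  # turned-groove locations (variable)
fixed_tol = 0.0053           # dimension E (fixed)

worst_case = sum(variable_tols) + fixed_tol
print(f"gap = {NOMINAL_GAP} +/- {worst_case:.4f} in.")  # +/- 0.0153
print(f"max gap = {NOMINAL_GAP + worst_case:.4f} in.")  # 0.0223
print(f"min gap = {NOMINAL_GAP - worst_case:.4f} in.")  # -0.0083 (interference)

# Reallocation: the largest even split of the remaining tolerance budget
# that keeps the worst case gap inside 0 < GAP < 0.015 in.
budget = min(0.015 - NOMINAL_GAP, NOMINAL_GAP) - fixed_tol
per_dim = budget / len(variable_tols)
print(f"required per-dimension tolerance: +/-{per_dim:.4f} in.")  # ~0.0003
```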

RSS Analysis. Root sum square analysis is another way to tackle the stackup problem. Derived from simple probability theory, RSS assumes that randomly selecting every component at its worst-case condition during assembly is so unlikely that it need not be designed for. As shown in Table II, the assembly stackup tolerance is calculated by squaring each tolerance, summing the squares, and taking the square root of the sum.


  Table II. Example of an RSS tolerance stackup analysis.

Again using the common tolerancing approach of ±0.002 in. for the location of turned grooves, RSS analysis yields a nominal gap of 0.007 ± 0.0069 in., for a gap range of 0.0001 to 0.0139 in. This range meets the design requirement without the need to reallocate the tolerances.
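Under the same assumed tolerance values as the worst case sketch, a minimal RSS sketch shows how much of the tolerance budget the method recovers:

```python
import math

# RSS 1-D stackup: square each tolerance, sum, take the square root.
# Same assumed values as the worst case sketch above.
NOMINAL_GAP = 0.007
variable_tols = [0.002] * 5
fixed_tol = 0.0053

rss = math.sqrt(sum(t**2 for t in variable_tols) + fixed_tol**2)
print(f"gap = {NOMINAL_GAP} +/- {rss:.4f} in.")  # +/- 0.0069
print(f"range: {NOMINAL_GAP - rss:.4f} to {NOMINAL_GAP + rss:.4f} in.")  # 0.0001 to 0.0139
```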

While RSS offers a more realistic representation of an expected assembly build than worst case analysis, it still has limitations. The RSS model assumes that each process follows a normal distribution centered at nominal, that the process will change if a tolerance is changed, that all parts are mixed and selected at random, and that the stackup involves at least four independent components or dimensions.

Although RSS is better at predicting reasonable limits than the worst case model, it does not account for process or equipment capability. Another method, statistical allocation analysis, allows engineers to mitigate some of this risk while accounting for long-term variation.

Statistical Allocation Analysis. Derived from Six Sigma theory, statistical allocation analysis alleviates some of the risk associated with RSS by determining how components are made and then assigning tolerances based on established process deviations. Tolerances are assigned so that components can be made economically and with predictable process capability.

Obtained from published data, existing inspection data, or internally produced data from statistical process control or capability studies, process deviations are based on the process used to manufacture a component or feature. While engineers should always rely on the best data available, internally produced process data generate the most reliable results. Once these data have been populated in a spreadsheet, the engineer can easily extract standard deviations.


  Table III: Example of a statistical tolerance allocation.

For example, in Table III, the process standard deviation for turning lengths is estimated to be 0.000357 in., based on data from Dimensioning and Tolerancing Handbook.1 Using a long-term process variation of 4.5 sigma yields a variable tolerance of 0.0017 in. The nominal gap measures 0.007 ± 0.0065 in., yielding a minimum gap of 0.0005 in. and a maximum gap of 0.0135 in. This result meets the design requirement without having to reallocate tolerances. Further predictive calculations reveal an assembly build sigma of 7.23, a level that virtually guarantees a successful assembly build.
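A minimal sketch of the allocation arithmetic follows; the ±0.0053-in. fixed tolerance is the same assumed value used in the earlier sketches:

```python
import math

# Statistical tolerance allocation: derive each variable tolerance from
# the process standard deviation, then combine the results by RSS.
SIGMA_TURNING = 0.000357  # in., std dev for turned lengths (reference 1)
SIGMA_LEVEL = 4.5         # long-term process variation
NOMINAL_GAP = 0.007
fixed_tol = 0.0053        # assumed, as in the earlier sketches
n_dims = 5

allocated = SIGMA_LEVEL * SIGMA_TURNING  # ~0.0016 in.; rounded here to 0.0017
tol = 0.0017
rss = math.sqrt(n_dims * tol**2 + fixed_tol**2)
print(f"allocated tolerance: +/-{allocated:.4f} in. (rounded to +/-{tol} in.)")
print(f"gap = {NOMINAL_GAP} +/- {rss:.4f} in.")  # +/- 0.0065
print(f"range: {NOMINAL_GAP - rss:.4f} to {NOMINAL_GAP + rss:.4f} in.")  # 0.0005 to 0.0135
```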

When the process and the process standard deviation are known, engineers can calculate process capability (Cp), the process capability index (Cpk), and defect rates. Armed with this knowledge, they can make better-informed decisions, enabling them to maximize tolerances while making objective choices about process capability and cost. Fortunately, many available spreadsheets contain these data.
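As a sketch of those capability calculations, the snippet below applies the standard Cp and Cpk formulas along with a normal-model defect estimate. The 0.125-in. nominal groove location is purely illustrative; the standard deviation is the turned-length value cited above.

```python
import math

def capability(lsl, usl, mu, sigma):
    """Return Cp, Cpk, and an approximate defect rate in ppm,
    assuming a normally distributed process."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF
    defects = phi((lsl - mu) / sigma) + 1 - phi((usl - mu) / sigma)
    return cp, cpk, defects * 1e6

# Illustrative feature: a turned groove at 0.125 +/- 0.0017 in.,
# produced by a centered process with sigma = 0.000357 in.
cp, cpk, ppm = capability(0.125 - 0.0017, 0.125 + 0.0017, 0.125, 0.000357)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, ~{ppm:.1f} ppm defective")
```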

Once the engineers have done their homework and completed their tolerance stackups, they can apply them to their models and drawings. To take full advantage of statistical allocation, the data should be combined with geometric dimensioning and tolerancing (GD&T).2 In contrast to traditional plus/minus tolerancing, which leaves too much room for individual interpretations, GD&T allows engineers to fully capture and convey design intent. When properly applied, it takes full advantage of allocated tolerances, essentially providing more tolerance by fully utilizing geometric tolerance zones.

Conclusion

Based on the simple 1-D stackup example presented here, RSS provides more-accurate tolerance analysis than worst case analysis, yielding a more realistic representation of expected assembly builds. Clearly, ±0.002 in. is a far more achievable tolerance than ±0.0003 in. Contract manufacturers would never approve a design with tolerances of ±0.0003 in., responding with either a "no quote" or prohibitive pricing. And even if a manufacturer were to accept such tight tolerances, the excessive part rework they invite could delay the product launch.

Statistical allocation supplements RSS with hard, real-world process capability data. While worst case analysis assigns tolerances first and then performs a stackup to determine whether design goals can be met, statistical allocation assigns tolerances based on actual manufacturing processes, giving far greater assurance that the design goals will be met. In brief, worst case analysis predicts extreme limits of variation, RSS accounts for probable variation, and statistical allocation combines probable variation with long-term process data.

Too often, stackups are eliminated altogether because a project's timeline or budget does not allow for them. This is a huge mistake, if only because of the cost of revisions: studies have shown that a revision costs ten times more during testing than during the design phase, and 100 times more once a project reaches production. Consequently, the cost of performing a proper stackup is far more attractive than the cost of revisions and missed product launches.

In the face of continually advancing technology, engineers must work with smaller, more complex medical devices and increasingly challenging design inputs. Quality issues, rework, and manufacturing variations challenge all medical device companies, often resulting in lost revenues. Robust tolerance stackups, in conjunction with GD&T, are tools the design engineer can use to address these issues head-on.

References

1. Paul Drake Jr., Dimensioning and Tolerancing Handbook (New York: McGraw-Hill, 1999).

2. ASME Y14.5-2009, “Dimensioning and Tolerancing” (New York: ASME, 2009).

Greg Berrevoets is a new-product development engineer at Minneapolis-based Lowell Inc. A certified GD&T professional through ASME and a Six Sigma Black Belt, he has 20 years of experience in the medical device industry, 15 of which were spent at Pioneer Surgical as a senior project engineer. Also an avid outdoorsman and fisherman, he can often be found after work on one of Minnesota’s 10,000 lakes. Reach him at [email protected].

Bob Michaels is senior technical editor at UBM Canon.

[email protected]
