We hear plenty of advice on what practices will help improve manufacturing processes and lower costs. But what should we stop doing to improve our manufacturing processes?

Stop Doing This to Improve Your Device Manufacturing Process: Part 1

Editor's note: This is Part 1 of a two-part series. 

The manufacturing world is full of practices to follow to improve manufacturing processes and reduce associated costs. These practices include, but are not limited to: minimizing the eight wastes of Lean, recognizing and eliminating non-value-added work, implementing design for manufacturing and assembly, and using statistical process control (SPC). Unfortunately, as we continually add practices to our toolkit, we lose track of the bigger picture. This happens both personally and organizationally, driven by the daily crush of getting things done. We do not think holistically about the organization, and we end up losing focus on the important things because we try to do too much.

This raises the question: what should we stop doing to improve our manufacturing processes?

Stop Separating Development from Manufacturing

Most of the problems that arise in manufacturing (high cost of goods, high scrap rate, slow manufacturing throughput, high training costs, high costs of inspection, etc.) are created during the design process. It is, not surprisingly, during design that the important decisions are made. These decisions include: the product’s general design complexity and manufacturability, the specifications on components and subassemblies and their sourcing options, and the specifications and tolerances on the manufactured product.


Complex designs increase the cost of goods and make assembly more difficult, with less tolerance for variation. The latter slows manufacturing throughput, increases training costs, increases the cost of inspection, and increases the scrap rate. People experienced in manufacturing know this. Too often, however, the design group accepts design decisions it should not, because it operates under schedule and budget pressure and with incentives that are not strongly linked to manufacturing costs.

So stop separating manufacturing from design and development.

Give both groups a unified incentive structure that is strongly linked to current and downstream manufacturing costs. A further step might be to require rotation of employees between manufacturing and design. Taken even further, nobody should be allowed on a design team at all until they have spent significant time in manufacturing.

Stop Arbitrarily Defining Requirements and Specifications

Once you have eliminated the separation between development and manufacturing, you are ready for this step, which is intimately associated with the design controls of 21 CFR 820.30 and ISO 13485:2016, Clause 7.3. First, you have to . . . stop following the regulations so adamantly. It is often said that the regulations constitute a minimum expectation, but we often lose sight of what this admonition really means.

The language of design controls in the regulations and standards was drawn from well-known and accepted engineering practices (e.g., Six Sigma, Design for Six Sigma, systems engineering, and project management). All of these practices, roughly speaking, define quality as satisfying the customers’ needs and expectations. To accomplish this, the practices would all have you first document the customer needs you intend to satisfy (i.e., product requirements) and then validate the resulting pertinent design outputs as satisfying those requirements.

Six Sigma defines "Critical to Quality" (CTQ) outputs as those that need to be identified and verified/validated because they are related to satisfying customer needs. Systems engineering calls for verification/validation of design output based on "established and traced requirements." Likewise, 21 CFR 820.30(f) requires that “verification shall confirm that the design output meets the design input requirements.” This is not the same as saying that all design output must be verified/validated.

The high-level perspective, drawn from the engineering practices that gave rise to the FDA and ISO documents, is that only design output that is Critical to Quality in the customer's eyes needs the rigor of verification/validation. Likewise, only those CTQ design outputs need the associated and ongoing manufacturing controls, which are expensive! It could be argued that, out of concern some auditor will ask why some aspect of the design was not validated, it is easier to just over-specify requirements and validate everything.

This “audit-friendly” approach has at least three significant problems. First, the validation process itself is expensive, and doing it for everything adds significant cost. Second, once you have implicitly defined a design output as a requirement, you are obligated to monitor and control it; if you do not, that omission is an audit risk. Third, the regulations require that verified/validated design output be traceable to input requirements; if that trace does not exist, the lack of traceability is another audit risk.

Remember, auditors are not empowered to create new regulatory requirements. If you are clear and confident in both your design inputs and their traces to verified/validated design output, then you have every right to stand up and argue your case. Doing so with confidence and data to back up the argument will win the day.

When you eliminate the artificial boundaries between design and manufacturing, you can more effectively see, trace, and document the relationship between design inputs (requirements) and CTQ design outputs. Focus your manufacturing resources on those CTQ outputs. Just because you can call a design output a requirement does not mean you should. Most importantly, stop confusing “what you can buy” with a CTQ design output. First understand your CTQ outputs and their acceptable variation.

If widget XYZ model 123 from vendor ABC satisfies the CTQ requirements, then go ahead and source it and document that it satisfies the requirement. Too often we define widget XYZ model 123 from vendor ABC as the requirement itself. When we do that and something changes (e.g., the vendor changes the model number, or we want an alternate source), we have an expensive problem in both the quality and regulatory realms. The authors have seen this misstep occur, at great expense, on items as minor as AA batteries, adhesive tape, Ziploc bags, printed labels, solvents, and even custom-manufactured components. In none of these cases were aspects of these items CTQ.
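To make the distinction concrete, here is a minimal sketch, in Python, of recording a CTQ requirement separately from the part sourced to satisfy it. The class names, fields, and values are hypothetical illustrations, not drawn from any particular quality system:

    from dataclasses import dataclass

    # The requirement captures the characteristic and its acceptable variation.
    @dataclass
    class CTQRequirement:
        design_input: str      # the customer need / design input it traces to
        characteristic: str    # what must be controlled
        nominal: float
        lower_limit: float
        upper_limit: float
        units: str

    # The sourced part is only evidence that the requirement is met; it can be
    # swapped for an alternate without rewriting the requirement itself.
    @dataclass
    class SourcedPart:
        vendor: str
        part_number: str
        satisfies: CTQRequirement

    voltage_req = CTQRequirement(
        design_input="Device shall operate for 8 hours on battery power",
        characteristic="Cell nominal voltage",
        nominal=1.5, lower_limit=1.45, upper_limit=1.60, units="V",
    )

    battery = SourcedPart(vendor="ABC", part_number="Model 123", satisfies=voltage_req)

If vendor ABC later changes the model number, only the sourced-part record changes; the requirement, and everything verified against it, stays put.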

Design output that is not CTQ can and should be defined much more loosely, with less rigorous manufacturing controls. The ultimate expression of this is simply not specifying a design output on a drawing used for inspection at all. For instance, a drawing is necessary to manufacture a part or assembly, but that drawing need not embody the CTQ design outputs that are pertinent to design validation testing (DVT). A separate drawing that includes only the CTQ design outputs can be created for DVT and incoming inspection. Alternatively, the CTQ design outputs can be clearly indicated, and thus distinguished, on the manufacturing drawing. Either approach should work.

When you truly understand how your design input requirements trace to your CTQ design outputs, you can focus your resources on tightly monitoring and controlling them. When you do, you gain the following benefits:

  • Your resource needs go down because you are doing less monitoring and control overall, and you are focusing on the right things!

  • Scrap goes down because those non-CTQ items you previously rejected are now acceptable.

  • Incoming inspection costs go down because you are doing less of it (focusing only on the CTQ items).

  • Inspection costs in general go down because you are focused on the CTQ inspection points.

Stop Testing Everything and Design for What You Do Test

By “testing” here, we mean both acceptance (pass/fail) testing and ongoing monitoring such as SPC. When you successfully eliminate artificial boundaries between design and manufacturing, and use this continuity of process to accurately link CTQ design outputs to input requirements, you have a choice to make in manufacturing: what do you test and monitor in incoming inspection, in assembly process monitoring, and on manufactured components and assemblies?

The obvious, but perhaps not the best, answer is to only test and monitor those aspects of parts, processes, and assemblies that represent CTQ design output. Unfortunately, very often we do far more than that.

It is common, when SPC is first introduced to an organization, for control charts to be used excessively. There are three unfortunate implications of this approach. It adds expense and resources to collect and process all that data. It creates visual and intellectual clutter that dilutes focus away from the truly CTQ parameters that should be monitored. Lastly, that diluted focus makes it easy to misapply SPC, leading to incorrect interpretations of the results. All of this can easily lead to the conclusion that SPC does not work and is not worth the effort, which is an unfortunate and incorrect conclusion.
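For readers who want to see what a narrowly focused control chart involves, here is a minimal sketch, in Python, of individuals-chart (I-MR) control limits for a single CTQ dimension; the data are invented for illustration:

    import statistics

    def imr_limits(measurements):
        """Center line and 3-sigma control limits for an individuals chart,
        estimating sigma from the average moving range (d2 = 1.128 for n = 2)."""
        moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
        sigma_est = statistics.mean(moving_ranges) / 1.128
        center = statistics.mean(measurements)
        return center, center - 3 * sigma_est, center + 3 * sigma_est

    # Hypothetical measurements (mm) of one CTQ dimension from successive lots
    data = [10.02, 10.01, 9.98, 10.03, 10.00, 9.99, 10.04, 10.01, 9.97, 10.02]
    cl, lcl, ucl = imr_limits(data)
    print(f"CL={cl:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}")

    # Points outside the limits are signals to investigate, not routine noise
    signals = [x for x in data if not (lcl <= x <= ucl)]

The point is not the arithmetic; it is that every chart like this consumes measurement, review, and reaction effort, so charts should be reserved for parameters that are truly CTQ.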

It is also common to collect inspection/test data far more frequently, and on more test points, than we should. Collecting too much data has important implications: we incur additional expense and tie up resources; we dilute the focus of those resources away from CTQ activities; and if we do not act on the data, we create downstream regulatory and liability risk.

By doing too much, we cannot possibly live up to our own definitions of what we said we would do, which is a regulatory risk. We also just plain waste money. So whenever an argument is made to collect data because it would be nice to have or might be needed later, that proposal should be held up to the question: “What are you going to do with the data?” If there is not a clear and actionable answer, then the data should not be collected.

Returning to the question of what design outputs to test or monitor: certainly any test point that is CTQ, or acts as a surrogate for one, should be either tested or monitored via SPC. If monitoring is used, an acceptable process/design capability should be defined, along with action limits and the actions to take when they are reached. Especially for design output that is problematic in terms of rework, consider setting an expectation of very high process capability. If you have effectively and rigorously traced input requirements to CTQ design outputs, the number of monitoring and testing points will likely be surprisingly small.
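As a hedged sketch of what defining an acceptable capability with action limits might look like, the following Python computes a simple Cpk estimate against hypothetical specification limits and an assumed Cpk target of 1.33:

    import statistics

    def cpk(measurements, lsl, usl):
        """Simple overall process capability index relative to the spec limits."""
        mean = statistics.mean(measurements)
        sigma = statistics.stdev(measurements)
        return min(usl - mean, mean - lsl) / (3 * sigma)

    # Hypothetical CTQ dimension with a specification of 10.00 +/- 0.10 mm
    data = [10.02, 10.01, 9.98, 10.03, 10.00, 9.99, 10.04, 10.01, 9.97, 10.02]
    capability = cpk(data, lsl=9.90, usl=10.10)

    CPK_TARGET = 1.33  # the acceptable capability agreed to during design
    if capability < CPK_TARGET:
        print(f"Cpk = {capability:.2f} is below {CPK_TARGET}: execute the action plan")
    else:
        print(f"Cpk = {capability:.2f} meets the target of {CPK_TARGET}")

The target value, and the action taken when it is breached, are design decisions; the calculation itself is the easy part.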

Having done the above, there will be a lot of design output that is not CTQ but that must exist just to be able to manufacture a part or assembly. We should take a risk-based approach to deciding how to manage that design output; doing so accords with both good engineering practice and recent regulatory trends. For design outputs that are not CTQ, but for which you have some reason to suspect problematic variation, monitor or test, but at a reduced frequency relative to what you practice for CTQ outputs. For those where you have little reason to suspect variation, or where significant variation presents no concern, test annually, or consider not testing at all. For commercial off-the-shelf (OTS) components, especially where the manufacturers already apply industry standards (AA batteries, for example), consider not testing at all.
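One way to make that risk-based decision explicit, purely as an illustrative sketch (the tiers are ours, not a regulatory scheme), is a small rule such as:

    def testing_frequency(is_ctq: bool, variation_suspected: bool, variation_matters: bool) -> str:
        """Map a design output's risk profile to a testing/monitoring tier."""
        if is_ctq:
            return "test or monitor via SPC at the full planned frequency"
        if variation_suspected and variation_matters:
            return "test or monitor at a reduced frequency"
        return "test annually, or consider not testing at all"

    # Example: an off-the-shelf AA battery already covered by an industry standard
    print(testing_frequency(is_ctq=False, variation_suspected=False, variation_matters=False))

Writing the rule down, whether in a procedure or in code, forces the team to agree on which tier each design output falls into rather than defaulting everything to full testing.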

Design for What You Do Test

It does not make sense, in terms of either cost or quality, to define a CTQ design output that is difficult or impossible to test or monitor. 21 CFR Part 820 does define procedures to follow for “special processes” whose output cannot be fully verified, but validating such processes is expensive, the data used are often problematic, and the validation frequently becomes invalid over time in the face of significant variation in incoming components and assemblies.

The solution is to design for what you do test. During the design process, identify your CTQ design outputs, and define them in such a way that they are accessible and can be tested or measured non-destructively. Those CTQ design outputs should also be defined so that they are robust to variation. Remember, there is a multitude of design options that will satisfy the design inputs. Do not lock on to the first one identified, and reject those that do not satisfy the criteria above.

Also remember that the criteria in the previous paragraph apply widely. Mechanically, you need to design for physical access to the CTQ measuring points, and you need to avoid stack-up conditions that result in extremely tight tolerances. For electrical circuits, create and supply access points for probes in appropriate locations. For printed circuit boards, design in enough probe points to test a high percentage of the circuit; aim for coverage in the mid-to-high 90s (percent) so that a circuit that passes testing gives you high confidence it is truly functional. That high coverage also lets you troubleshoot component failures, which reduces rework; rework exposes boards to thermal excursions that degrade reliability in the field. In software, consider adding functionality that exists strictly for testing purposes.
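To illustrate the stack-up concern, here is a short worked comparison, with invented dimensions, of a worst-case tolerance stack versus a root-sum-square (RSS) estimate:

    import math

    # Hypothetical stack of four parts, each nominally 5.00 mm with +/-0.05 mm tolerance
    nominals = [5.00, 5.00, 5.00, 5.00]
    tolerances = [0.05, 0.05, 0.05, 0.05]

    stack_nominal = sum(nominals)                    # 20.00 mm
    worst_case = sum(tolerances)                     # +/-0.20 mm
    rss = math.sqrt(sum(t**2 for t in tolerances))   # +/-0.10 mm

    print(f"Nominal {stack_nominal:.2f} mm, worst case +/-{worst_case:.2f} mm, RSS +/-{rss:.2f} mm")

    # If the assembly-level CTQ requirement were +/-0.10 mm, a worst-case analysis
    # would force each part to +/-0.025 mm, an expensive tolerance to hold, while
    # an RSS analysis shows the existing +/-0.05 mm parts can already meet it.

The numbers are hypothetical, but the pattern is general: the more parts in the stack, the tighter each individual tolerance must be held, so keep CTQ dimensions out of long stacks where you can.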

If you read the paragraphs above closely, a theme should be evident. It is fine, even beneficial, to add content to the design output strictly for testing purposes, as long as it is linked to, or is a stand-in for, CTQ design output.

About the Author(s)

Cushing Hamlen

Cushing Hamlen is principal consultant at DPM Insight, LLC.

Bradley Fern

Bradley Fern is a principal supplier quality engineer at Entrust Datacard.

