Statistical Sampling Plans: How Many Samples Do You Need for Testing?

During design verification, how many samples should you use for testing? A consultant answers the question.

November 21, 2014


In the “MyQuickConsult Question Grab Bag,” a new series, David Amor, cofounder of the medical device consulting marketplace www.myquickconsult.com, answers one question submitted by clients for the MD+DI readership.

Today’s question comes from a start-up in Minneapolis, MN:

Q: We are planning to launch a second generation of an existing product in our portfolio for plaque excision. The product is currently undergoing design verification, but we do not have a standard operating procedure (SOP) or work instructions that specify sample sizes. During design verification, how many samples should we use for testing? Note: This data will be used in our 510(k) submission.

A: Design verification testing should confirm that your design outputs meet the design inputs you have designated in a product specification, requirements matrix, or a similar document. Sample sizes should be determined using a risk-based approach. Before entering design verification, a safety profile of your device should be created using ISO 14971:2012 or a similar methodology. If a design FMEA has been generated as part of the ISO 14971 risk analysis, certain failure modes will be associated with higher risk indices or risk regions than others.

Depending on your SOPs and work instructions, higher risk should correspond to more stringent confidence and reliability requirements for testing and, typically, more samples. Furthermore, sample sizes will differ depending on the type of test method and data you are collecting. (Example: as a rule, continuous, or variable, data will require fewer samples than discrete, or attribute, data.) A good source of sample-size guidance for attribute or discrete data testing is ANSI/ASQ Z1.4-2008, Sampling Procedures and Tables for Inspection by Attributes. Another typical industry rule of thumb for sampling is as follows:

  1. Minor defects (minor risks) -> confidence/reliability = 90/90

  2. Moderate defects (moderate risks) -> confidence/reliability = 95/90

  3. Major defects (major risks) -> confidence/reliability = 95/95

Once these confidence and reliability requirements have been determined from the risk levels, use the statistical methods and tables described above to obtain your sample sizes.
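As an illustration only (not part of the original column), here is a minimal sketch of how a confidence/reliability pair is often converted into an attribute-data sample size using the zero-failure "success-run" formula, n = ceil(ln(1 - C) / ln(R)). This is a common industry shortcut for pass/fail testing with no allowed failures; your own SOPs or the ANSI/ASQ Z1.4 tables referenced above should govern the actual plan, and the function name below is simply illustrative.

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Zero-failure (success-run) sample size for attribute data:
    the smallest n such that n consecutive passes demonstrate the stated
    reliability at the stated confidence level."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Rule-of-thumb confidence/reliability pairs from the list above
for label, c, r in [("Minor (90/90)", 0.90, 0.90),
                    ("Moderate (95/90)", 0.95, 0.90),
                    ("Major (95/95)", 0.95, 0.95)]:
    print(f"{label}: n = {success_run_sample_size(c, r)}")
# Minor (90/90): n = 22
# Moderate (95/90): n = 29
# Major (95/95): n = 59
```

Under these assumptions, the three rule-of-thumb pairs correspond to roughly 22, 29, and 59 samples with zero failures allowed; variable data analyzed with tolerance intervals will generally require fewer samples, as noted above.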

David Amor is a medical device consultant who has worked with companies such as Boston Scientific, St. Jude Medical, and Hospira to develop quality management systems and guide FDA remediation projects. A graduate of the Senior Innovation Fellows program at the University of Minnesota Medical Device Center, Amor was named one of MD+DI’s Top 40 Under 40 Medical Device Innovators in 2012. He founded MEDgineering, a niche quality consulting firm focusing on remote compliance solutions including FDA remediation, quality staffing and consulting, and medtech investment due diligence. Amor and his MEDgineering team cofounded www.myquickconsult.com, an online consulting marketplace that allows flexible question/answer and small project consults. He also serves as chief operating officer of ReMind Technologies, a mobile health startup dedicated to tackling medication adherence by using smart-device-based medication dispensing units and software applications.

[Main image courtesy of Rasmus Thomsen/FreeDigitalPhotos.net]
