MD+DI Online is part of the Informa Markets Division of Informa PLC


Usability Also Applies to FDA Submissions: Adventures in Medical Device Usability

Eventually, I’ll have something to say about device usability. This time, though, I want to talk about something else—the usability of FDA submissions. Most of the people working on medical devices have gotten the message that medical devices need to be reasonably easy to use, at least as far as ease of use relates to safety. It follows that it’s necessary to apply a methodology:
Stephen B. Wilcox, Ph.D., FIDSA
  1. Studying how devices are used under real-world circumstances (i.e., contextual inquiry).
  2. Applying technical information about users, both “physical human factors data,” like hand sizes and strength data, and “cognitive human factors data,” like information about what is and isn’t easy to remember.
  3. Conducting iterative usability testing to identify use errors, so they can be eliminated.
And so on.
The question I want to ask is: Why shouldn’t this methodology for achieving usability also be applied to FDA submissions? After all, submissions are products that are used by actual human beings, just as medical devices are. At least when it comes to usability, the key people are in the Office of Device Evaluation in the Center for Devices and Radiological Health (the ODE of CDRH). They’re collectively called the “Human Factors Premarket Evaluation Team.” Here’s how their job is described on the FDA Web site:
The purpose of the FDA’s Human Factors Pre-Market Evaluation Team is to ensure that new medical devices have been designed to be reasonably safe and effective when used by the intended user populations. The effort primarily involves reviewing new device submissions, promoting effective and focused human factors evaluation and good design practices for medical devices.
Let me refer to them as the HF Team.
For FDA submissions, we can’t actually study the users in their natural environments, and we can’t really do traditional usability testing (Items 1 and 3 above), unless you consider making a submission itself a type of usability test, with approval being the measure. However, we can apply principles of human factors (Item 2) to our submissions. In this particular case, our task is made easier because the HF Team has told us how to put together an “HFE/UE report” (i.e., a human factors engineering/usability engineering report) in Appendix A of the Draft Guidance for Industry and Food and Drug Administration Staff: Applying Human Factors and Usability Engineering to Optimize Medical Device Design, issued in June 2011. The HF Team provides chapter and verse of what should be in a report. It describes seven sections and goes into a fair amount of detail regarding what should be in each section.


So, the first human factors principle to apply to your FDA submission is:
  • Don’t violate the user’s expectations. 
And, as I mentioned, because of the Draft Guidance, we know what the users’ expectations are. 
Some other principles (and I take responsibility for these; I don’t mean to imply that they come from the HF Team) are also important.

Use images where it will help comprehension.

Some things can be easily communicated in images that would take hundreds of words to express and, even then, still be difficult to comprehend (think about a road map, for example). Potential examples are the overall structure of the HFE/UE program and how it fits into device development, the device itself along with details of displays and controls, navigational structures and screens for software-driven devices, packaging details, icons, and the testing setup for the actual validation research. This isn’t an exhaustive list. Research has consistently shown that documents combining words and pictures result in better understanding than documents with only one or the other.

Allow for efficiency of use.

Longer is not necessarily better. In fact, it’s often much worse, particularly when there’s a lot that the reviewer doesn’t want or need.

Don’t force the user to interrupt the flow of a procedure.

It’s good to avoid constant references to other documents that have to be tracked down in order to follow the logic of a report. Such references are fine, and often necessary, but the body of the report should be sufficient for the reviewer to make decisions, so that the other documents serve only as appendices, to be consulted in special cases.

Avoid jargon unless you’re absolutely sure the user is familiar with it.

Don’t assume that the reviewer will understand acronyms just because you and your colleagues use them every day.

Create an intuitive information hierarchy.

The document should have a clear, intuitive structure that allows the reviewer to easily go from one section to another and to find a particular topic of interest.

Be consistent.

One of the easiest ways to create confusion is to be inconsistent with a document’s structure, terms that are used, etc.

Make sure the form mirrors the content.

Any change in form (e.g., in typefaces, colors, spacing, indentation) should reflect a change in content.

Use real-world metaphors to take advantage of previous knowledge.

This suggestion particularly applies when a new device is very different from what came before.

The basic idea is to take a “user-centered” approach; the key for all user-centered design is to look at it from the point of view of the users, who are inevitably quite different from the designers. In this case, the users don’t live and breathe your devices, and they don’t live and breathe your design processes, like you do. Submissions reviewers are motivated to make sure that patients aren’t harmed, not to make sure that R&D dollars pay off (although I’m confident that, within the constraint of protecting patients, they’re all for companies making profits). What you probably have in common with the HF Team is an incredibly heavy workload. So, like you, they appreciate anything that allows them to do their jobs more efficiently, and this is where the usability of submissions fits in.
I certainly don’t claim that a more usable submission is more likely to win approval (Lord knows, things like that are not for me to say), but I can’t imagine it would hurt, and I would expect it to at least save time. And, in the aggregate, saving time at FDA should logically lead to shorter review periods, which would help all of us.
Let me just say in closing that (and here I may shock you) I’m not one of those who rails about the evils of regulation in the medical device industry. One of the themes we heard in the presidential debates is that the US medical device industry (along with the rest of America) is at risk of being destroyed by “overregulation”—that we’re now giving an advantage to OUS competitors. However, call me an idiot, but I can’t see how making it a little tougher in the US market doesn’t actually help US industry, by stimulating more US exports (if indeed it’s easier to gain approval OUS) and by making it harder for OUS companies, which are likely to be less adept at navigating US regulations, to compete in the US marketplace.
Now, I admit to being biased. After all, how can you dislike people who force the world to take your discipline seriously? We’re all better off with tougher, but more consistent usability-related regulations. From where I sit, FDA does, in fact, make patients safer, and by improving regulatory consistency, they make the whole approval process much more predictable (frankly, it used to feel like a crap shoot). And they give an advantage to the responsible companies, who were already developing devices with good systems in place, relative to the corner cutters, who used to have the competitive advantage of lower R&D costs.
The folks at FDA are hardworking, dedicated professionals who are truly motivated to make sure that devices are safe and effective. I actually think it’s rather refreshing to find people who really are trying to do the right thing (even when it makes their jobs harder) in a world where the nightly news gives us daily evidence of venality in just about every sector of society. Those of you who want to “starve the beast” can count me out, at least when it comes to FDA.

Sorry, I got a little political, didn’t I? I promise it won’t happen again.


Stephen B. Wilcox is a principal and the founder of Design Science (Philadelphia), a 25-person firm that specializes in optimizing the human interface of products—particularly medical devices. Wilcox is a member of the Industrial Designers Society of America’s (IDSA) Academy of Fellows. He has served as a vice president and member of the IDSA Board of Directors, and for several years was chair of the IDSA Human Factors Professional Interest Section. He also serves on the human engineering committee of the Association for the Advancement of Medical Instrumentation (AAMI), which has produced the HE 74 and HE 75 Human Factors standards for medical devices.

St. Jude vs. Medtronic: Who's Winning the War Over Defibrillator Leads?

The gloves are off and no one's pulling any punches in the bitter battle currently underway over defibrillator leads. A study conducted by prominent cardiologist Robert Hauser and his colleagues aimed at assessing deaths associated with St. Jude Medical's recalled Riata leads and comparing the numbers to Medtronic's Quattro Secure leads served as the catalyst for the verbal assault. But the two medical device giants quickly escalated the situation, lashing out at each other in a series of public rebuttals and press releases, with each apparently looking to deliver a knockout punch to the other. As each day seemingly brings another retaliative blow, however, one has to wonder what exactly is the end game? And can anyone really emerge the victor?

While the public mudslinging and damage control surrounding defibrillator leads has escalated in recent weeks, the controversy dates back several months. On the heels of the Riata recall in December, Hauser, the cardiologist, began actively speaking out against the risk associated with the leads via various media channels and public forums. In a perspective piece in the New England Journal of Medicine, for example, Hauser opined that the Riata situation underscores a larger problem: a flawed postmarket medical device surveillance system. In response, however, St. Jude defended its actions and addressed several of Hauser's allegations and assertions in a letter to the editor of the NEJM. This initial dispute between Hauser and St. Jude was then followed by a piece by CBS News in which Hauser featured prominently and reiterated his views on both the leads and related postmarket surveillance failings.

But the current defibrillator lead debate truly took off as a result of Hauser's and his colleagues' subsequent denunciation of the Riata leads in a study published by Heart Rhythm at the end of March titled, "Deaths Caused by the Failure of Riata and Riata ST Implantable Cardioverter-Defibrillator Leads." Using FDA's Manufacturers and User Facility Device Experience (MAUDE) database, the study examined deaths of Riata and Riata ST ICD patients and compared them with deaths of patients with Medtronic's Quattro Secure leads. Suffice it to say, the results were not favorable to St. Jude.

Last week, St. Jude issued a press release demanding a retraction of Hauser's manuscript, citing factual errors in the authors' evaluation of Medtronic's leads and accusing them of bias. Replicating the methods outlined in the manuscript, St. Jude found what it claims are a multitude of inaccuracies or flaws in methodology. The company also claims that, despite dedicating 300 hours to the endeavor, it was unable to reach the same conclusions as Hauser's team. "St. Jude Medical's independent search of the MAUDE database found 377 reports of deaths involving Quattro Secure leads, not 62 as stated by Dr. Hauser in a manuscript posted online and accepted for publication in the Heart Rhythm Journal," according to the release. "St. Jude Medical's analysis of Riata and Riata ST lead events found in the MAUDE database also indicate that Dr. Hauser did not report an additional three deaths, which would change the number from 71 in Dr. Hauser's manuscript to 74."

Among the company's additional criticisms of Hauser's study were:

  • FDA explicitly advises against using the MAUDE database to evaluate or compare adverse events
  • The study only analyzed one Quattro Secure lead model compared with all Riata and Riata ST models
  • Medtronic "generally reports the least amount of detail compared with other companies in the industry" and became more diligent as of 2009, before which events may have been "underdetected"
  • Based on the above, analysis of lead-related deaths is therefore biased against more-transparent reporting
  • The comparison of a recalled silicone-only insulated lead versus a lead with polyurethane outer insulation is, essentially, comparing apples to oranges and is not a fair or logical comparison
  • St. Jude was not consulted prior to publication and not provided an opportunity to validate data

This diatribe proved to be only the tip of the iceberg. Hauser stood by his team's study, according to the Minneapolis Star Tribune, while a Medtronic spokesperson commented to the paper that the debate appeared to be "nothing more than a difference of opinion between St. Jude Medical and Dr. Hauser."

However, Medtronic's role in the increasingly public spat over ICD leads quickly rose to the status of costar. An article in the New York Times noted that St. Jude executives accused Medtronic of launching a whisper campaign implying that the company's Durata lead was vulnerable to the same risks as the Riata. "This has become a topic of competitive marketing," St. Jude CEO Daniel Starks told the paper. "We have competitors going to physicians and informing them, either incompletely or mistakenly, of a competitively hostile view of the facts." A Medtronic spokesperson refuted St. Jude's claims in the article, adding, "They have been very contentious."

And if this back-and-forth wasn't enough, this week brought a fresh wave of accusations, denials, and press releases. Kicking off the week was Medtronic, which announced that it supported and validated Hauser's initial findings based on his stated methodology. Whether this declaration was a defensive move, an effort to rub salt in St. Jude's wounds, or a bit of both is for you to decide. Hot on the heels of Medtronic's public support, the editor of Heart Rhythm publicly rejected St. Jude's request for a retraction of Hauser's contested study and stated plans to publish it in the journal's upcoming print issue. He did, however, tell the New York Times that the piece would be slightly edited "for inflection."

No doubt stinging from the rejection, St. Jude fired off yet another press release on Tuesday proclaiming that it had posted the information drawn from the MAUDE database that showed the reported 377, rather than 62, deaths associated with Medtronic's Quattro Secure lead. The company then demanded that Hauser respond whether he stands by his initial findings or not, and invited Medtronic to review the information as well.

If the previous few weeks are any indication, a rebuttal or press release from someone involved should be cropping up any day now. But when will it end? And what is this very public, very contentious snipefest really accomplishing? In the beginning, St. Jude was performing some understandable and reasonable damage control in response to some damning criticism. But the company's escalating aggression, obsession with its competitor, and rapid-fire responses to anything and everything are beginning to give off an air of desperation and pettiness. Likewise, Medtronic originally entered the fray to defend its reputation, but appears to have taken a somewhat antagonistic tone recently. And as to St. Jude's accusations of a whisper campaign, well, right now they appear to be just that: accusations. If true, however, Medtronic would probably benefit from focusing on the positive aspects of its own products rather than spreading rumors about competitors and being characterized as an opportunist. It likely isn't going to garner any esteem.

What it boils down to is that this heated lead-centered feud certainly isn't helping either company's public image and isn't benefiting clinicians or patients, either. "I can't recall seeing a more contentious and open dispute between medical device companies in my 19 years working in this field," Edward Schloss, director of cardiac electrophysiology at the Christ Hospital in Cincinnati, told MassDevice. It's time to dial it back, get a grip, and declare a truce. So, who will lead that effort? --Shana Leonard

TEDMED Opens with Call to Think Outside the Box to Solve Healthcare Problems

TEDMED 2012 kicked off yesterday in Washington, DC, with a session addressing the theme of “embracing the unconventional.”

The four-day conference brings together experts from a variety of disciplines to share knowledge and imagine the future of healthcare. The first session featured acrobatic and musical performances and talks by a lawyer, a graphic designer, and the founder of a healthcare nonprofit.

People working in healthcare tend to connect in silos, with specialists interacting mainly with others in the same field, said TEDMED curator and emcee Jay Walker. The goal of the conference, he said, was to assemble a diverse crowd to focus on innovation, imagination, and inspiration.

“People from the front lines of medicine across all fields are here,” Walker said.

The event opened with a performance by Montreal-based acrobatic dance troupe 7 Fingers that set the standard for what healthy bodies are capable of. The performers rollerbladed, skateboarded, and jumped through hoops in ways that seemed to defy gravity.

“Performing arts open our minds and imaginations to things we don’t think are possible,” Walker told the crowd.

The first speaker was Bryan Stevenson, founder and executive director of the Equal Justice Initiative, a nonprofit organization that litigates on behalf of underprivileged defendants who have not received fair treatment in the justice system. He talked about the “power of identity,” which he said could “get people to imagine a future they can’t otherwise imagine.”

Stevenson urged the crowd not to ignore poor, underprivileged, and underrepresented members of society when working to solve problems in healthcare and other areas. Though it may be hard to stand up for those who can’t stand up for themselves, it is every person’s obligation to do so, he said.

“Humanity requires us to respect every person’s human dignity,” Stevenson said. He left the crowd with a bit of advice given to him by the janitor at a court in which he was arguing: “Keep your eyes on the prize and hold on.”

Graphic designer Teresa Monachino presented what she called a “sicktionary,” an A-to-Z list of words that are unclear, inaccurate, or don’t necessarily mean what most people think they do. Among the words she called out was “consumer,” which she said has recently become a replacement for “patient.”

“Do patients become more powerful by becoming consumers?” she asked. “One word can change the picture entirely.”

Next, musical director Jill Sobule shared a song about what it would have been like if different historical figures, from Edgar Allan Poe to the Old Testament God, had taken modern drugs such as Prozac and Ritalin. She claimed to have finished writing the song only 10 seconds before going onstage.

Last up was Rebecca Onie, cofounder of the nonprofit Health Leads, an organization that harnesses college students to connect patients with the basic resources required to get and stay healthy. She told of how after working in college with a Boston law firm that represented low-income families in housing disputes, she became frustrated with the fact that interventions were coming too late to help families before difficult situations became crises.

“I was frustrated because we were intervening too far downstream,” she said.

In an attempt to reach those in need earlier, Onie partnered with Barry Zuckerman, MD, at what was then called Boston City Hospital, to ask doctors what they would give patients if they had unlimited resources. The answer they heard time and again was that the underlying cause of many patients’ health problems was the fact that they didn’t have access to basic necessities such as food and shelter.

To solve that and other problems, Health Leads works within existing elements of the healthcare system. Take, for example, the clinic waiting room, once a place where patients did nothing more than wait to see a provider.

“If airports can become shopping malls and McDonald’s can become playgrounds, surely we can reinvent the clinic waiting room,” Onie said. “The waiting room became a place where Health Leads turned the heat back on.”

Health Leads now places college volunteers in clinic waiting rooms to help connect patients with the community resources they need to stay healthy.

Another healthcare institution Health Leads has repurposed is the electronic medical record (EMR), which Onie said the organization has changed from a “static repository” of data to a “health-promotion tool.” When a patient’s weight indicates an elevated body mass index that puts them at risk for obesity, the EMR automatically triggers a response through Health Leads to connect the patient with resources such as healthier food and exercise programs.

“If we know what it takes to have a healthcare system rather than a sick-care system, why don’t we do it?” Onie asked.

TEDMED sessions will continue through Friday, April 13. The live event is held at the John F. Kennedy Center for the Performing Arts, and TEDMEDLive simulcasts are being shown at institutions across the country.

MD+DI will be reporting from TEDMED simulcast locations all week.

Jamie Hartford is the associate editor of MD+DI and MED. Follow her on Twitter at @readMED.

Transforming Business: The Silver Lining of the Sunshine Act

The Physician Payment Sunshine Provision is scheduled to go into effect sometime this year, 90 days after CMS finalizes the regulations. The intention of the legislation is to allow for greater transparency of the relationships between industry and the medical community. Sometime next year, the first disclosure reports mandated by the provision will be due, requiring most U.S. manufacturers of drugs, devices, biologics, and medical supplies to annually disclose payments exceeding $10 per program or $100 per year provided to physicians and teaching hospitals. This legislation will add significant visibility to physician-level spending.
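As a rough sketch of the disclosure thresholds described above (the field names and data are hypothetical, and this models only the rule as stated in the provision, not legal guidance), a disclosure filter might look like this:

```python
from collections import defaultdict

def disclosable_payments(payments, per_payment=10.0, annual_aggregate=100.0):
    """Flag payments that exceed $10 individually, or that belong to a
    physician whose aggregate payments for the year exceed $100.

    Illustrative sketch only; `payments` is a hypothetical list of
    {"physician": ..., "amount": ...} records for one reporting year.
    """
    # Sum each physician's payments for the year.
    totals = defaultdict(float)
    for p in payments:
        totals[p["physician"]] += p["amount"]
    # A payment is disclosable if it crosses the per-payment threshold
    # or its physician crosses the annual aggregate threshold.
    return [
        p for p in payments
        if p["amount"] > per_payment or totals[p["physician"]] > annual_aggregate
    ]
```

A payment of $8 to a physician whose yearly total stays under $100 would not be flagged, while a single $50 honorarium would be.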
Figure 1. Case Study: A client leverages physician networking technologies to identify the optimal 25,000 primary care physicians from a universe of 107,000.
Although this federal transparency provision has support from diverse stakeholders, including industry, consumer and patient groups, professional medical associations, and provider organizations, its transformational effect will be uniquely felt by medical device manufacturers in the areas of cost, day-to-day operations, and physician interactions.
However, as a major catalyst for change, the Sunshine Act represents an opportunity to reevaluate all areas of efficiency and traditional physician collaboration. Device makers should ask themselves: Can we accelerate adoption of important technology advances and enable our sales and marketing teams to engage more efficiently and effectively by understanding the effect that physician influence networks have on the practice activity of a community?  The answer is yes.

Historically, medical device and pharmaceutical companies have allocated significant resources to support a range of physician relationships and, as in most industries, positively affect customer behavior. Key opinion leaders have been relied upon as program chairs and keynote speakers for their expertise, national recognition, and ability to influence others. To conduct successful programs, resources have been allocated based on research and valuation results, as well as a need to provide speaker honoraria, and lunch and dinner programs that often involved travel.
Public scrutiny has increased in recent years, and the Sunshine Act is likely to intensify the external glare of media. But, while this legislation brings with it the tasks of compliance, tracking, and reporting, it also provides an opportunity to more closely examine the strategies behind resource allocation. With the bright light of the Sunshine Act on our spending strategies, will they hold up to internal scrutiny?

Will the implementation of this provision affect speaker bureaus?  Will we lose some national or regional thought leaders?  No one can be sure yet.  But what is certain is that there is a window of opportunity to optimize spending prior to 2013 and ensure that we identify customer engagements that are more strategic and valuable to our organizations.

Make Plans to be “Sunshine-Smart”

As the Sunshine Act dawns, there are steps that manufacturers should take to reassess resource allocation strategies. First, there are a few key questions to ask: Are there better ways to identify optimal customers? Are we overlooking key relationships that may be highly advantageous?
Compared with five years ago, social networking has made understanding the networks of connectivity between people increasingly vital. Studies now show that physicians are far more likely to find out about products from friends and colleagues than from a company’s marketing efforts.
Now, it is much harder to control how people first come to experience our messages. Therefore, it is important to identify the communication architecture, the network interactions, that drive the practice behavior of a community.
All of us belong to different networks made up of different kinds of relationships. Networks are not new. People have always clustered themselves into groups based on different types of relationships between individuals. Social networks are simply the online manifestation of our natural social behavior.
Due to the specific demands of practicing medicine, physicians have very defined professional relationship networks that facilitate sharing opinions and experience with peers who are considered experts. The purpose is to cut through the noise and receive an honest evaluation, to validate, vet, and convey information. Professional opinions are shared in networks and physicians’ participation and role in each is unique. Some members are merely influencers, while others in the network are highly influential triggers for decision-making.
Advanced analytics and networking technologies identify valuable new insights in data, making sense of millions of practice interactions between physician pairs. Results enable us to understand who affects the practice decisions of whom and the practice behaviors of networks of professionals.

Making Sense of Interactions Between Physicians

Treatment and/or referral decisions are made every day in every community, and are often the result of shared experiences between peers. Medicine is not practiced in a silo. Behavior changes and decisions are the result of any number of network interactions. For example, segmenting physicians by a certain attribute, such as identifying all gastroenterologists, is valuable, but identifying the gastroenterologist with practice activity that affects the behavior of a primary care physician (PCP) reveals additional value. Networking technologies are enabling medical device companies to identify the inner dynamics of physician peer networks to activate more valuable physician engagements.
These technologies now identify the following:
  • The physician influence and communication network architecture of every community by disease state/condition.
  • Key networks that drive the practice activity of their communities.
  • Interspecialist network relationship insights and the emergence of new practice patterns and activity.

Case Study: Leveraging Relationships

In this case study, the client had excellent relationships with a targeted universe of customers—specialists in the therapeutic class. The team saw an opportunity with PCPs, but resources were very limited. It was important to identify a very manageable number of optimal PCPs from a pool of more than 100,000 who were diagnosing, and sometimes treating and referring, patients in this therapeutic area.

The client saw opportunity and value in leveraging the relationships with their current specialist customers, by identifying PCPs who were in the influence network of current key customers.  Identifying influence networks across the country, where customer specialists had direct connectivity or influence on PCPs, enabled the client to efficiently narrow the universe by 75%—from approximately 100,000 to a more manageable 25,000 PCPs—each representing a new opportunity to leverage current key customer relationships (see Figure 1).
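The narrowing step in this case study can be pictured as a reachability query over an influence graph. Everything below is hypothetical (the names, the data, and the direct-influence-only simplification); real networking analytics weigh millions of practice interactions, but the core idea of keeping only PCPs connected to key customers looks like this:

```python
# Hypothetical influence edges: specialist -> PCPs whose practice activity
# that specialist's opinions and referrals affect.
influence_network = {
    "specialist_A": {"pcp_1", "pcp_2", "pcp_3"},
    "specialist_B": {"pcp_3", "pcp_4"},
    "specialist_C": {"pcp_5"},
}

# Specialists with whom the client already has strong relationships.
key_customers = {"specialist_A", "specialist_B"}

def pcps_in_influence_network(network, customers):
    """Return the PCPs directly influenced by at least one key customer."""
    reachable = set()
    for specialist in customers:
        reachable |= network.get(specialist, set())
    return reachable

target_pcps = pcps_in_influence_network(influence_network, key_customers)
# pcp_5 falls outside the target universe: no key customer influences it.
```

Applied to the full universe, the same filter is what shrinks roughly 100,000 candidate PCPs down to the 25,000 who sit inside current customers' networks.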

New, data-centric technologies reveal physician peer network interactions in each community, providing better ways to understand the value of current relationships, and leverage physician network relationships to identify new market opportunities.


With some of the most significant changes to the healthcare industry in the last half century set to go into effect in the very near future, there is understandably a great deal of uncertainty felt by device and drug manufacturers across the United States. By leveraging the synergistic approaches offered by new physician networking technologies, we can ensure that customer engagements are more strategic and valuable to our organizations. The Sunshine Act may seem like a dark cloud on the horizon, but the silver lining here is the opportunity to rethink and significantly improve the way we engage and connect with our customers and their networks.
Alan G. Reicheg is chief commercial officer at Qforma (Princeton, NJ). He has held leadership positions in marketing and managed care for MedPointe Pharmaceuticals and was senior director of marketing at Savient Pharmaceuticals. He received his BA from Rutgers University and is a member of the Coalition for Healthcare Communication. Reach him at [email protected].

Your Call is Important to Us: Managing and Resolving Customer Complaints in an Enterprise Quality Management System

Five Questions with Tim Mohn, Associate Director, Enterprise Quality Management Systems at Johnson & Johnson

1. Why are customer complaints important to medical device companies?

Device manufacturers are required to record, track, and trend customer complaints under the Quality System Regulation (QSR) defined in 21 CFR Part 820. In addition, FDA’s Part 803 regulations require firms that have received complaints of device malfunctions, serious injuries, or deaths associated with medical devices to notify FDA of the incident.

2. How do device companies typically manage this process?

This varies by company, but most struggle to maintain global visibility into the entire complaint process. Many companies have multiple layers of systems, from call intake systems supporting regional call centers to service and repair systems to inventory systems to CAPA systems. These are all designed to meet a specific business need, but none is designed to manage the holistic complaint-handling process.

3. Why is this a problem?

This disjointed approach makes it difficult to truly understand what’s happening within the complaint system and, more importantly, can result in missed indicators of potential patient risk. Furthermore, the disconnected nature of these solutions is inefficient and can invite regulatory scrutiny through compliance failures such as missed reportable events or inadequately investigated issues.

4. What should companies use instead?

Companies should seek to implement one global system that aggregates all sources of complaint data and centralizes the investigation and regulatory reporting aspects of complaint handling. Many companies deploy an Enterprise Quality Management System (EQMS) for this purpose, seamlessly interfacing to the multiple up-front call capture and service systems and ensuring timely regulatory reporting and CAPA activities.

5. What does EQMS offer?

Among a multitude of other benefits, EQMS lets companies automate the process of assigning complaints and related investigations based on manufacturing location, product type or any other criteria they establish. Additionally, it helps organizations error-proof their processes by utilizing decision trees, specific to their products, to help drive investigations and determine regulatory requirements around the world. Finally, EQMS facilitates improved compliance by providing companies with templates and electronic reporting capabilities to ensure the consistency of complaint data when it comes time to submit it to the proper authorities.


Tim Mohn is associate director of enterprise quality management systems at Johnson & Johnson. He was previously industry solution director at Sparta Systems and manager of worldwide quality systems for Johnson & Johnson Ortho-Clinical Diagnostics. Mohn is widely recognized as a key thought leader on quality and compliance issues in the medical device market.

Medtech Buzzwords: Helping and Hurting Innovation

Over the past month, I’ve spoken with several professionals in the medtech industry about the election and where its potential outcome could take us. As usual, innovation was one common thread, so here’s what some of your peers have said when asked:
Is the 2012 "air of uncertainty" in the medical device industry hurting innovation?
Mark Bonifacio
founder of Bonifacio Consulting (Natick, MA)

Len Czuba
president of Czuba Enterprises Inc
(Lombard, IL)

Toby Buck
Chairman and CEO of Paragon Medical (Pierceton, IN)
"In certain markets that might be more commoditized and the margins might not be as healthy, yes. But in other areas, I haven’t seen it.  
I still see a lot of innovative activity out there. There are a lot of start ups. I’m in Boston, [and the] area is very vibrant and alive [with] a lot of new technologies, electronics, telemedicine, and e-health services. If you’re working on something innovative and new, you’re trying to predict the landscape and it might slow you down. At the moment, I see the economy as a bigger drag on innovation as opposed to the uncertainty regarding the healthcare legislation."
"It seems to me that the policies the government has put in place—federal and local government—need to foster the climate of business-friendly environment.
In Illinois, Governor Quinn implemented a huge tax on business. The result of that is companies have looked to move out of the state. If we go the opposite [direction] and give companies huge tax breaks to come into the state and do business, we all of sudden put people to work, we reduce the ranks of unemployment; once people have the money to spend on housing and goods and food and entertainment."
"The regulatory environment has gotten so arduous with all the validations and the way that a Class I, II or III device behaves in the marketplace. We’re not helping introduce healthcare reformation through innovation, we’re constraining it."

Maria Fontanazza is managing editor at UBM Canon. Follow her on Twitter: @MariaFontanazza.

Trends in 3-D Minimally Invasive Endoscopic Surgery


 The first recorded use of a laparoscopic instrument in a human is generally credited to Hans Christian Jacobaeus in 1910. As is often the case, the understanding of the concepts involved in minimally invasive surgery (MIS) came well before the technologies and equipment that would make it a standard procedure. In fact, more than 60 years after Jacobaeus’s first human laparoscopic surgery, Kurt Semm was under fire from his colleagues for promoting the use of laparoscopic surgery. As related by The New York Times: “In 1970, after Dr. Semm became the chairman of obstetrics and gynecology at the University of Kiel, his co-workers demanded that he undergo a brain scan because, they said, 'only a person with brain damage would perform laparoscopic surgery,' Dr. Mettler said.”1

Although the theory of minimally invasive surgery has been available for more than a century, it is only within the last 30 years that the practice has become accepted within the field. As a young field surrounded by ever-expanding technological advances, MIS has many exciting developments in process and on the horizon. This article examines one of those developments: the advent of stereoscopic (3-D) endoscopy.
We begin by examining some of the most recent findings about the efficacy of 3-D endoscopy. We then discuss some of the unique challenges that are introduced by adopting this technology (challenges that are inherent to stereoscopic imaging as a whole, as well as some equipment specific challenges). Finally, we consider how the same trends that enable stereoscopic endoscopy may prove critical to another currently debated technique, natural orifice surgery (NOS, or scarless surgery).
Operating principle of a single-objective 3-D endoscope system. The viewer is looking down towards the objects imaged on the sensor. Each of the colored rays represents the center of mass of the cone of light that reaches the sensor as the optical modulator switches between the right view and the left view states. Note that the optical modulator selects different angles for the light rays in each view, creating separate viewpoints within the single lens. Projecting the right view image to the right eye and left view image to the left eye generates a stereoscopic image.

An Introduction to 3-D Endoscopy

At first glance, introducing depth information to minimally invasive surgical procedures would seem to bring a host of benefits to those procedures. After all, surgeons work in a 3-D space within the patient, yet standard monitors show only a 2-D plane, requiring the surgeons to learn to interpret the nonstereo depth cues available to intuit the third dimension while performing surgeries.
However, such a task is not unique to surgery.  People are trained to do something similar when watching television or movies, a medium that asks us to understand a 3-D landscape via a 2-D display. As a whole, people are remarkably good at understanding gross-depth relationships even when presented on completely flat display surfaces.
To understand why this is the case, we must consider that while there are 14 cues that our brains use to determine depth and depth order, only three of these require stereoscopic vision. The other 11 can be determined and interpreted without left-eye/right-eye image disparity.2
What those 11 monoscopic cues don’t provide, however, is the ability to accurately judge distances between objects on the z-axis (i.e., judging how far ahead or behind one object is from another). Judging distances on the x- and y-axes in 2-D is fairly easy, as the objects on screen provide easy reference points to measure against one another, but very little z-axis depth information is provided.

Efficacy of Stereoscopic (3-D) Endoscopy

Perhaps not surprisingly, evidence pointing towards the efficacy of introducing 3-D to surgical procedures is mixed. Many studies report a subjective preference for 3-D, though few show a statistically significant performance increase in any objectively measurable area among experienced surgeons. The often small sample sizes of the studies also present problems, with some studies using as few as six experienced surgeons in the testing groups.
There is one common theme: while there are no statistically significant performance improvements among experienced surgeons, novice surgeons may experience lower error rates in some tasks. In addition, all groups report improved depth perception (as would be expected).
For example, in a study involving 21 novice and 6 experienced surgeons, a report comparing 2-D and 3-D camera systems in laparoscopy stated, “The 3-D system provided significantly greater depth perception than the 2-D system. The errors during the two tasks were significantly lower with 3-D system in novice group, but performance time was not different between the 2-D and 3-D systems. The novices had more dizziness with the 3-D system in first two days. However, the severity of dizziness was minimal (less than 2 of 10) and overcome with the passage of time. About 54% of the novices and 80% of the experienced surgeons preferred the 3D system.”3
Similarly, a study involving 13 patients undergoing endonasal endoscopic transsphenoidal surgery found “no significant differences in operative time, length of stay, or extent of resection compared with cases in which a 2-D endoscope was used. Subjective depth perception was improved compared with standard 2-D scopes.”4
While various studies report different reactions to 3-D endoscopy among experienced surgeons, almost all note improved depth perception among novice and experienced surgeons alike, better task performance among novices, and no performance reduction when using 3-D rather than 2-D endoscopes.
Nelson Oyesiku, MD, PhD, FACS, of Emory University addresses the subjective benefits, stating, “The reason why three dimensional is important is that it tells you where you are in space. That could be critical in an area where you have a lot of stuff around, like in the pituitary, where you’re surrounded by blood vessels and nerves, and the last thing you want to do is to stumble into an artery because you just didn’t know that it was that close. So the 3-D takes from the 2-D endoscope and it takes from the 3-D microscope and marries the best features of both in one instrument.”5

How 3-D Works

Creating a 3-D image is a fairly simple two-step process.
Step One. Capture two images of the same scene with an offset between them (often referred to as a stereoscopic image pair, or just an image pair). These images are referred to as the “left eye” and “right eye” images respectively (see Figure 1).
Step Two. Display the image pair in a way that enables one to view the pair as a single image. This process is accomplished by having each eye see a separate image in the pair (i.e., your left eye sees the left-eye image, and your right eye sees the right-eye image) and can be achieved in one of three ways:
  1. Screens working with active glasses display images at twice the normal rate, and the glasses block one of your eyes in synch with the TV display. While the screen displays the left eye image, your glasses block your right eye. These systems are the least expensive to manufacture, because they simply require screen refresh rates that are high enough to accommodate the image switching.
  2. An alternate option is a passive screen paired with polarized glasses. In these screens, the two images are displayed simultaneously but at different polarizations. The glasses you wear with these screens work by using different polarized filters on each eye, allowing you to see only the right-eye and left-eye images through those respective eyes.
  3. A third method is known as autostereopsis, or glasses-free 3-D. These screens can work by using technologies like eye tracking, parallax barrier, or lenticular lenses on the screen. These technologies are still in their infancy with respect to autostereoscopic 3-D.
Figure 1. ISee3D’s core technology consists of modifying a lens system to enable the capture of stereoscopic image pairs through a single lens. By occluding a portion of the lens, it is possible to shift the effective center of the lens; by doing this in sequence with different occluded areas (see images above: left eye, left; right eye, right), it is possible to capture stereoscopic image pairs with a fixed interaxial distance.
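To make the passive (polarized) display option concrete, here is a short illustrative sketch in Python. The function name and toy data are ours, not from any display vendor: a row-interleaved frame sends even rows to one eye and odd rows to the other, which is also why passive systems present each eye with half the vertical resolution.

```python
# Illustrative sketch (not from any specific display vendor): how a passive,
# row-interleaved 3-D display combines a stereoscopic image pair. Images are
# modeled simply as lists of rows; even rows carry the left-eye image, odd
# rows the right-eye image.

def interleave_stereo_pair(left, right):
    """Build a row-interleaved frame from a left/right image pair."""
    assert len(left) == len(right), "pair must share dimensions"
    frame = []
    for y in range(len(left)):
        # Alternate the source image per display row; each eye's polarized
        # filter passes only its own rows, so each eye sees half the rows.
        frame.append(left[y] if y % 2 == 0 else right[y])
    return frame

# Toy 4-row "images": each row is just a label standing in for pixel data.
left = [f"L{y}" for y in range(4)]
right = [f"R{y}" for y in range(4)]
print(interleave_stereo_pair(left, right))  # ['L0', 'R1', 'L2', 'R3']
```

The same halving of rows per eye is what limits passive HD screens to 1920 × 540 pixels per eye when rows are interleaved.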

The Unique Problems of 3-D

One study noted a feeling of dizziness among a small subset of the surgeons in its trial.3 More broadly, 3-D systems introduce a risk of dizziness or nausea in some viewers even during short exposures (such as a movie or TV show). This issue must be addressed when considering 3-D for use in the operating theater, where exposure times are longer and the repercussions of viewer discomfort far greater.
End-viewer discomfort caused by 3-D has two primary causes: the source material itself (capture errors) and problems unique to the display environment (display issues). Below, we consider each cause and ways to mitigate it.

Problems with Capturing Images

When capturing images using two cameras or two lenses, it is critical that the lenses be perfectly aligned and calibrated. The most common cause of 3-D discomfort (including dizziness and nausea) is lens misalignment. Vertical misalignment is the most egregious offender, though horizontal or rotational misalignment can also lead to discomfort.
Mismatches in lens focal lengths, color filters, and aberrations found in one lens of a pair can also cause artifacts that lead to discomfort when viewing a stereoscopic image pair (for example, trying to interpret a scene that has an object visible only to the left eye).

Problems with Display

Three key factors that cause viewer discomfort can be primarily attributed to the display. Two of these are directly related to the 3-D glasses (weight and bulk, and flickering), while the third is an issue of synch between the screen and glasses (crosstalk).
The issue of flickering glasses is constrained to active shutter glasses. These glasses alternate occlusion of the left and right eye in synch with the display of the right and left eye images on the display. This interaction produces a faint but noticeable flickering, which is obvious when looking at anything other than the display. The flickering can cause eye fatigue and may result in end-viewer discomfort.
The second issue, the weight and bulk of 3-D glasses, is also primarily an active shutter issue; these glasses generally require a battery, an infrared synch, and liquid crystal elements, making them larger and generally bulkier than their polarized counterparts.
Crosstalk, or ghosting, occurs when there is not a complete separation between the left and right images: parts of the left image may “leak over” to the right eye, or vice versa, resulting in what appears to be a translucent object on the screen (hence the term ghosting). This is the result of an imperfect synch between active shutter glasses and the screen.

Addressing Display Issues

Given that the three major causes of display-dependent viewer discomfort stem from active glasses and monitors, it is fair to ask why these systems are considered in the first place. The answer is resolution. Because they display sequential full frames, active shutter systems offer true HD resolution. Passive systems, which must display both the left and right images simultaneously, always present at reduced resolution (on an HD screen, either 960 × 1080 or 1920 × 540 pixels).
However, a detailed article by Raymond Soneira, MD recently showed that viewer testing among passive and active systems almost universally showed a preference for passive systems in all areas relating to 3-D quality.6 It is reasonable to conclude that until autostereoscopic displays advance sufficiently to enable wide viewing angles at high resolution, the best display solution is a passive screen.

Addressing Capture Issues

Capture issues are trickier to address, because many of the errors that occur are built into the nature of dual lens capture. Synchronization, alignment, and intensity and chroma calibration will be issues whenever there are multiple optical channels.
One approach to this problem is to eliminate the second optical channel altogether, by using an approach that enables the capture of 3-D through a single lens. For example, one company takes this approach and has developed a single lens 3-D endoscope that measures less than 4 mm in diameter.
The general physics of such an approach are relatively easy to understand: by occluding a portion of the lens, you create a new “center viewpoint” of the lens closer to the edge of the nonoccluded side. If you block the left half of the lens and capture an image frame, then block the right half of the lens and capture an image frame, you’ll have two images from different viewpoints, exactly the conditions required to create a stereoscopic image pair.
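For readers who like to see the arithmetic, the viewpoint shift can be estimated with a little geometry. This is an illustrative sketch only (our own function names and an assumed aperture size, not ISee3D’s actual design): the centroid of the unoccluded half of a circular aperture of radius R lies 4R/(3π) from the lens axis, so alternating between left- and right-half occlusion yields two viewpoints separated by 8R/(3π).

```python
# Illustrative sketch (not any vendor's actual design): estimating the
# effective viewpoint shift when half of a circular aperture is occluded.
# The centroid of the remaining half-disc lies 4R/(3*pi) from the lens axis.
import math

def half_aperture_centroid(radius):
    """Distance of the unoccluded half-disc's centroid from the lens axis."""
    return 4.0 * radius / (3.0 * math.pi)

def effective_interaxial(radius):
    """Separation between the two effective viewpoints (left vs. right block)."""
    return 2.0 * half_aperture_centroid(radius)

# Hypothetical 1.5 mm aperture radius (small enough for a <4 mm scope).
r = 1.5
print(f"viewpoint offset:    {half_aperture_centroid(r):.3f} mm")  # ~0.637 mm
print(f"interaxial distance: {effective_interaxial(r):.3f} mm")    # ~1.273 mm
```

The fixed, geometry-determined interaxial distance is one reason single-lens image pairs need no per-unit alignment calibration.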
Since such an approach uses only a single lens, there are no issues with calibration, synchronization, or differences in focal length or color filtering. The image pairs are perfectly matched, enabling high-quality 3-D without the discomfort issues associated with dual-lens capture.
The move to single-lens 3-D capture has the additional benefit of enabling much smaller 3-D endoscopes as well as offset scopes, opening up 3-D imaging in many MIS procedures that would be difficult to accomplish with larger 3-D endoscopes. This advance enables endoscopes to follow the general trend of all electronic technology, which is to become smaller and less expensive while including more processing power than preceding generations.

Applicability in Natural Orifice Surgery

NOS is a proposed MIS technique that uses natural orifices as instrument entry points, as opposed to the current practice of creating an incision to allow the instruments to enter the patient. It is still in very early stages, with some of the first reported successful human natural orifice transluminal endoscopic surgery (NOTES) taking place in 2007.
Since NOS is still in such early stages, there have been no known studies on the efficacy of 3-D versus 2-D endoscopes in NOS. However, we can safely assume that the preference for smaller instruments will be at least as pronounced in NOS as it is in MIS. If 3-D endoscopes are to be used in NOS, they must be able to meet these size requirements.
As evidenced by Oyesiku’s quote about the importance of depth information while performing MIS in areas that are “crowded,” the introduction of 3-D may become more helpful as more surgeries take place in narrower or more crowded environments (as will be the case with many NOS procedures).

Technological Effect on Manufacturing

Advances in technology in general, and endoscopy specifically, have dramatically changed the landscape of MIS. As supporting technologies become smaller in size, more powerful (increases in processing power, resolution, and reliability) and less expensive to implement, device manufacturers are presented with both the opportunity and the imperative to innovate.
As an example, consider the field of capsule endoscopy. Prior to 2001, it was effectively impossible for medical professionals to examine the entire length of the digestive tract. The emergence of miniaturized camera components enabled the development of a camera that could be swallowed by a patient and then record images at every point in the digestive tract. Capsule endoscopy was approved by FDA in 2001 and has presented a significant opportunity for both endoscope manufacturers and equipment suppliers.
Similarly, the field of robot-assisted surgery has become extremely popular. It has given manufacturers both the opportunity and the challenge of redesigning existing components for seamless integration with robotic surgery systems, as well as of creating new types of equipment that are not feasible in traditional MIS but are required for robot-assisted surgeries.
This trend holds true even outside the field of robot-assisted surgery. As MIS is performed in smaller environments (ENT, neurosurgeries, NOTES, etc.), not only do components need to become smaller, but in many cases they also need to be completely redesigned. A recent report by BCC Research suggests that NOTES surgery will require single instruments that can perform many of the functions traditionally associated with multiple instruments. Adapting and redesigning existing equipment to fit the size and functionality needs of the emerging MIS climate will provide opportunities for existing and emerging companies to add value to this field.


3-D endoscopy is a valuable evolution that has been shown to reduce error rates among novice surgeons in some studies, as well as providing subjective improvements in depth perception for novice and experienced surgeons alike.
While care must be taken to avoid end-viewer discomfort (which can have root causes both in the capture and display of 3-D), these issues are known and can be accommodated. By using a passive display system (as opposed to active), and ensuring proper calibration and alignment of the 3-D endoscope (or, in time, switching to a single lens 3-D endoscope so that alignment and calibration are nonissues), surgeons can reap the benefits of enhanced depth perception during surgeries without facing the discomfort that is sometimes associated with 3-D.
More research must be done on 3-D endoscopy, but early results suggest that there are no ill effects when the display and capture issues are taken into account, and that there may in fact be objective positive effects for novice and experienced surgeons alike.
Taking a wider perspective, advances in technology are allowing rapid advancements in MIS. Within the past decade, we have seen the emergence of capsule endoscopy, robot-assisted surgery, and high-quality 3-D visualization in the field of MIS. NOS holds promise in the near future. As with most areas of technology, the development of components that are smaller, more powerful, and less expensive will continue to drive technology advances and entirely new procedures.


1. C Baranauckas, “Kurt Semm, Founder of Laparoscopic Surgery, Dies at 76,” The New York Times (July 27, 2003).
2. M Schubin, “What is 3-D and Why It Matters,” NAB Show (Las Vegas, April 2010).
3. SH Kong and BM Oh, “Comparison of Two- and Three-Dimensional Camera Systems in Laparoscopic Performance: A Novel 3D System with One Camera,” Surgical Endoscopy 24 (2009): 1132–1143.
4. A Tabaee and V Anand, “Three-Dimensional Endoscopic Pituitary Surgery,” Neurosurgery 64 (May 2009): 288–295.
5. N Oyesiku, 3-D Endoscopic Pituitary Tumor Removal, Emory Healthcare (July 14, 2010), 3 min., 9 sec.
6. R Soneira, “3D TV Display Technology Shoot-Out,” DisplayMate Technologies.
Shawn Veltman is Product Manager for Vancouver-based ISee3D Inc. The company develops proprietary 3-D image capture products based on a patented single lens technology. While the technology has direct application in several verticals, ISee3D is currently focused on the medical imaging space as it pertains specifically to the $9 billion annual expenditure in the areas of microscopy and endoscopy.

Selecting the Right Laser For Medical Manufacturing

During the Minnesota winter of 1957-58, in a garage workshop, the first wearable transistorized cardiac pacemaker was invented by Earl Bakken. Made at the request of world-renowned heart surgeon Dr. C. Walton Lillehei of the University of Minnesota, this invention kicked off a global industry manufacturing implantable, life-saving devices that revolutionized medicine.

Multiaxis laser systems cut, drill, and weld the complex features of pacemakers and other medical devices.

In the early days of medical device manufacturing, proper device function and biocompatibility were major problems. The capability of a prosthesis implanted in a body to function properly and exist in harmony with tissue without causing deleterious changes was a constant challenge.

The industrial laser had not yet been invented. Bakken and his design team had to struggle to produce the components they needed with the conventional materials and machine tools of the day.

However, since the mid-1960s, lasers have been used to manufacture pacemaker components, carrying Bakken’s early designs to increasingly higher performance levels. Device function and biocompatibility have improved many times over, and the range of laser-processed medical products has expanded to include stents, orthopedic devices, defibrillators, pacemakers, and sterile packaging, all produced with high accuracy, efficiency, and quality.

The objective of this article is to describe the main types of lasers used for materials processing (cutting, drilling, and welding) in medical device manufacturing and to provide guidelines to help identify the correct laser for a medical device application.

It is important to remember that the laser source is only one component of a successful laser system, analogous to the engine of an automobile. Evaluation of laser-based manufacturing equipment for medical applications must consider the laser and the other components such as the motion system, control, process sensors, and ancillary components in the context of the requirements of the process and finished product.

Common Lasers For Medical Manufacturing

Carbon Dioxide (CO2) Laser. The laser beam is produced by electrical excitation of a mixture of carbon dioxide, nitrogen, and helium gases. This laser type is one of the earliest used in manufacturing with its first documented industrial application in the cutting and welding of titanium in 1966. The wavelength of this laser is in the far infrared (IR) at 10.6 µm.
Nd:YAG Laser. The laser beam is produced by excitation of a neodymium-doped YAG (yttrium aluminum garnet) crystal by one or more high-intensity flashlamps or diode lasers. The wavelength of this laser is in the near infrared at 1.06 µm. Nd:YAG lasers used for medical manufacturing are primarily pulsed types; continuous wave (CW) versions have largely been displaced by fiber lasers.

Fiber Laser. The laser beam is produced by excitation of Yb (ytterbium) doped optical fiber using diode lasers. Note that there are other dopants for optical fiber, but Yb is the most cost effective for high-power applications in medical manufacturing. The wavelength of this laser is also in the near infrared at 1.07 µm. The output of fiber lasers can be either pulsed or continuous.

Ultrashort Pulse Length Laser. The name refers to a class of solid-state pulsed laser sources with output pulse lengths ranging from a few hundred picoseconds (10⁻¹² seconds; ps) down to femtoseconds (10⁻¹⁵ seconds; fs). These lasers typically have a fundamental wavelength in the near IR at 1.06 µm. The fundamental frequency (which is inversely proportional to wavelength) is often doubled, tripled, or even quadrupled to produce visible or UV wavelengths when there are advantages to processing with these shorter wavelengths or when there is the need to focus the laser beam to a diameter smaller than can be achieved using the fundamental wavelength. The common feature of this type of laser is the ability to produce a very high quality beam with very short pulse lengths at high frequencies, typically in the hundreds of kHz up to MHz.
Throughout the article are references to the different mechanisms of material processing arising from the wide range of pulsed outputs. Laser characteristics are presented in Table I. Generally, the process changes from heating to melting to vaporization as pulse duration decreases and peak power increases. The following sections describe the three main processes in medical device manufacturing and the pros and cons of each laser type for these processes.

Laser Characteristics

Wavelength. Determines how readily the laser beam is absorbed by the materials (metals or alloys, ceramics, composites, plastics) being processed, as well as the type of optics to be used. Metal, silicon, and zinc selenide optics are used at far-infrared wavelengths; glass and zinc sulfide are used at near-infrared wavelengths.
Beam Delivery. The near-infrared Nd:YAG and fiber laser beams may be delivered from the laser source to the work area by flexible fiber optics as well as by reflective optics. Near-infrared ultrashort pulse length and far-infrared CO2 laser beams are generally delivered only using fixed reflective optics. This is an important consideration when beam delivery is to a glove box or clean work area.
Pulsed versus Continuous Wave (CW). The nature of the output influences processing speed (cutting, drilling, welding) and the amount of heat input to the work piece.
Average Power. Determines the maximum thickness that can be cut or welded and the cutting or welding speed.
Peak Power. For CW laser output, the peak power is the same as the average power. For pulsed laser output, the peak power is defined as the pulse energy divided by the pulse duration. The higher the peak power, the greater the rate of surface heating and the greater the absorption of the laser energy.
Pulse Energy. For pulsed laser output, pulse energy determines the amount of material that can be heated, melted, and vaporized by a single pulse.
Pulse Width (Duration). For pulsed laser output, pulse width determines the interaction time with the material. Lasers with pulse durations from tens of milliseconds to femtoseconds are available to address a wide range of applications. Note that the pulse energy divided by the pulse width is the peak power.
Pulse Repetition Frequency (Pulse Rate). For pulsed laser output, the repetition rate strongly influences throughput. Note that the product of pulse energy and pulse rate is the average power of a pulsed laser.
Beam Quality. For any laser, beam quality determines the ease of focusing the laser beam (i.e., the minimum focused beam diameter) and the energy distribution within the focused beam (e.g., Gaussian or “top hat”).
Table I. Medical manufacturers should understand basic laser characteristics to determine what they need for each application.
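The pulsed-laser relationships in Table I (peak power equals pulse energy divided by pulse width; average power equals pulse energy times pulse repetition rate) can be checked with a short Python sketch. The function names and the example numbers are ours, chosen only for illustration, not specifications of any particular laser.

```python
# Sketch of the pulsed-laser relationships described in Table I:
#   peak power    = pulse energy / pulse width
#   average power = pulse energy * pulse repetition rate
# The example values below are illustrative, not from any specific laser.

def peak_power(pulse_energy_j, pulse_width_s):
    """Peak power (W) of a single pulse."""
    return pulse_energy_j / pulse_width_s

def average_power(pulse_energy_j, rep_rate_hz):
    """Average power (W) of a pulse train."""
    return pulse_energy_j * rep_rate_hz

# Hypothetical ultrashort-pulse source: 10 µJ pulses, 1 ps wide, at 500 kHz.
e, tau, f = 10e-6, 1e-12, 500e3
print(f"peak power:    {peak_power(e, tau):.3e} W")  # ~1e+07 W (10 MW)
print(f"average power: {average_power(e, f):.1f} W")  # 5.0 W
```

The contrast in the example, megawatts of peak power at only a few watts average, is why ultrashort-pulse lasers can vaporize material while leaving little heat in the work piece.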


Probe pictured has a precision laser cut slot feature. Hard metals such as stainless steel and titanium can be clean cut free of burrs using a CO2 laser.


Laser cutting occurs when a focused laser beam of sufficient intensity to melt the work piece material is absorbed at the surface of the work piece. For cutting with CW and modulated CO2 and fiber lasers and CW and pulsed Nd:YAG lasers (nanosecond pulses to CW), an assist gas is delivered coincident with, and normally coaxial to, the focused laser beam to provide mechanical energy. The gas assist can also create chemical energy (ferrous materials react with oxygen to create an exothermic reaction) to remove the laser-melted material from the back side of the component.

A process known as clean cutting is used with many materials and involves the use of an inert gas, such as nitrogen, argon, or helium, to create an edge that is unreacted and free from burrs and debris, with a heat-affected zone whose size depends on the choice of laser parameters and cutting speed. Cutting speed depends on the laser parameters, the capability of the motion system, and the work piece geometry. With ultrashort ps- and fs-pulse-length lasers, assist gas is not required because material is removed by vaporization or sublimation, sometimes referred to as cold ablation. Little heat is absorbed by the work piece as a result of the combination of low pulse energy, short pulse duration, and high repetition rates. The result is a cut edge characterized by a small, almost unmeasurable heat-affected zone, no recast, and no burrs or spatter. For medical cutting applications, the pros and cons of the four main laser types are summarized in Table II.

CO2
  Cutting Pros:
  • Process is well understood.
  • Wavelength is absorbed by a wide range of metallic and nonmetallic materials, including plastics, organic materials, and polymer-based composites.
  Cutting Cons:
  • Some nonferrous materials, especially copper and brass, are difficult to process reliably due to high reflectivity at the laser wavelength.
  • Plasma formation above the surface can absorb or defocus the laser beam when clean cutting thicker metals.

Pulsed Nd:YAG
  Cutting Pros:
  • Primarily used for deep-hole or high depth-to-diameter ratio hole drilling, but can also be used for cutting.
  • Wavelength is absorbed well by most metallic materials.
  Cutting Cons:
  • Process is relatively slow compared with other laser types, given the limited average power compared with CO2 and fiber lasers.
  • Wavelength is not absorbed well by nonmetallic and organic materials.
  • Cutting is generally done with oxygen assist gas to improve cut speed and quality, but this produces an oxidized cut edge that may require post-process treatment.

Yb Fiber (or Fiber)
  Cutting Pros:
  • High beam quality and focusability for producing narrow cuts and fine detail.
  • Wavelength is absorbed well by most metallic materials.
  Cutting Cons:
  • Practical maximum thickness for clean cutting common medical alloys is 4–5 mm.
  • Relatively new laser type; less well understood than the others.
  • The edge quality of clean-cut metals of thicknesses >5 mm is not as good as with a CO2 laser.

Ultrashort Pulse
  Cutting Pros:
  • Short interaction time and ablation mechanism of material removal lead to small, almost unmeasurable heat-affected zones.
  • Absence of burrs and a refined surface quality.
  Cutting Cons:
  • Cutting rates are considerably slower than with pulsed (millisecond to nanosecond) lasers and applicable only to very thin metals, such as foils.

Table II. The pros and cons of laser sources for cutting applications.


Nd:YAG lasers provide flexibility and precision for drilling holes of any shape and at all angles to the part surface. Holes smaller in diameter than a human hair are obtainable with repeatable precision.

Drilling generally refers to the process for creating relatively small diameter (<1.5 mm) holes that range from normal-to-the-surface to angles as shallow as 10° from the surface. An assist gas may be used in this application to remove debris from and improve the quality of the holes and to protect the focus optics from the drilling debris.

Percussion Drilling. In the percussion method, the laser beam is focused to a diameter approximately equal to the required hole diameter. The laser is focused onto the surface of the stationary material, and a series of laser pulses is delivered, each one removing material by melt expulsion, until a hole is created through the material. A method of continuous hole production, usually referred to as Drill-on-the-Fly (DoF), is an extension of percussion drilling in which a cylindrical component, such as tubing, is rotated at a controlled speed while single laser pulses are applied to a series of hole locations. On subsequent rotations of the part, additional pulses are delivered to each hole location until a row of holes is created. Percussion drilling and DoF make use of the ability to obtain high energy per pulse (up to 30 J) at pulse rates of tens of pulses per second. Each pulse melts a portion of the material to be removed, creating high pressure within the melt cavity and expelling the molten material in the form of droplets. The size of the droplets depends on the specific laser parameters and material composition.
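The DoF timing described above reduces to a quick calculation: one pulse is fired at each hole location per revolution, so the pulse rate fixes the rotation speed, and the number of pulses needed to pierce each hole fixes the number of revolutions. The function below is a sketch; the example values (hole count, pulse rate, pulses per hole, tube diameter) are illustrative assumptions, not figures from the article:

```python
import math

def drill_on_the_fly(holes_per_row, pulses_per_hole, pulse_rate_hz, tube_diameter_mm):
    """Estimate Drill-on-the-Fly process parameters, assuming one laser pulse
    is fired at each hole location per revolution of the tube."""
    rev_time_s = holes_per_row / pulse_rate_hz        # one revolution fires one pulse per hole
    rpm = 60.0 / rev_time_s                           # required rotation speed
    cycle_time_s = rev_time_s * pulses_per_hole       # one revolution per pulse "layer"
    surface_speed_mm_s = math.pi * tube_diameter_mm / rev_time_s
    return rpm, cycle_time_s, surface_speed_mm_s

# Illustrative: 20 holes per row, 5 pulses to pierce each, 40 Hz pulse rate, 3-mm tube
rpm, cycle_s, v = drill_on_the_fly(20, 5, 40, 3.0)
print(f"{rpm:.0f} rpm, {cycle_s:.1f} s per row, surface speed {v:.1f} mm/s")
```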

Trepan Drilling. In the trepan method, the laser beam is focused to a diameter that is smaller than the required hole diameter. The material is first pierced with a small pilot hole, usually in the center of the required hole diameter. A circular, orbiting motion is then applied, either to the laser beam or the component, to cut the hole to the required diameter.
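The trepan tool path just described can be sketched as a short point generator: pierce at the hole center, move out to the orbit, then trace a circle sized so the outer edge of the kerf lands on the required hole diameter. The kerf width and point count below are assumed values for illustration:

```python
import math

def trepan_path(hole_diameter_mm, kerf_mm, n_points=36):
    """Generate a trepan orbit as (x, y) points: pilot pierce at the center,
    lead-in to the orbit, then a full circle of radius (hole - kerf) / 2 so the
    kerf's outer edge matches the required hole diameter."""
    orbit_r = hole_diameter_mm / 2 - kerf_mm / 2
    path = [(0.0, 0.0)]              # pilot pierce at the hole center
    path.append((orbit_r, 0.0))      # lead-in move from center to the orbit
    for i in range(n_points + 1):    # closed circle (last point repeats the first)
        a = 2 * math.pi * i / n_points
        path.append((orbit_r * math.cos(a), orbit_r * math.sin(a)))
    return path

# Illustrative: 1.0-mm hole cut with a 0.1-mm kerf
points = trepan_path(1.0, 0.1)
```

In practice the same path could be executed either by steering the beam or by orbiting the component, as the text notes.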

Ablation. During ablation, a high frequency, ultrashort pulse length laser is used to progressively ablate material to form a blind or through hole (Photo Four). The laser focus diameter is generally small in relation to the hole diameter. The beam is usually delivered to the work piece by means of a scanner, which can be programmed to raster (scan from side to side in lines from top to bottom) the laser focus at relatively high speed across the material to form the required shape, or in a circular or helical pattern to create round holes. Material is removed in layers from a few nanometers per pulse to one micrometer per pulse until the required depth is reached or a through hole is formed. Assist gas is not usually required for this type of drilling. While this type of drilling can produce features at high speed, the low energy, high frequency nature of the laser pulses means that, in practice, this method is limited to thin materials or shallow ablation depths. Ultrashort pulse length lasers can be used to produce low (<10) aspect-ratio holes in metals up to 5 mm thick, but they are more commonly used for thinner materials for which the vaporized material is efficiently and cleanly expelled from the hole.
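The layer-by-layer nature of ablation lends itself to a simple estimate of how many raster passes are needed to reach a target depth. The removal-per-pass and scan-time figures below are assumptions for illustration (the article gives only the per-pulse range of a few nanometers to one micrometer):

```python
import math

def ablation_passes(target_depth_um, depth_per_pass_um, scan_time_per_pass_s):
    """Number of raster passes (layers) and total scan time needed to ablate
    to a target depth, assuming a uniform depth removed per pass."""
    passes = math.ceil(target_depth_um / depth_per_pass_um)
    return passes, passes * scan_time_per_pass_s

# Illustrative: 100-um-deep pocket, 0.5 um removed per raster pass, 0.2 s per pass
passes, total_s = ablation_passes(100, 0.5, 0.2)
print(f"{passes} passes, {total_s:.0f} s total scan time")
```

This kind of estimate makes the text's point concrete: with sub-micrometer removal per layer, even modest depths require hundreds of passes, which is why the method suits thin materials and shallow features.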

Trepan drilling gives a good compromise between quality (roundness, taper, heat-affected zone) and throughput (a few holes per second) in a wide range of materials and thicknesses. Using DoF and ablation, holes can be produced at rates of more than 100 holes per second, although the actual rate depends on the depth, diameter, and quality requirements of the hole. Generally, because of their high beam quality and millisecond pulse lengths, Nd:YAG lasers provide a good balance of throughput and precision in producing holes in medical device components.

For medical drilling applications, the pros and cons of the four main laser types are summarized in Table III.

CO2
  Drilling Pros:
  • Wavelength is good for drilling nonmetallic materials.
  Drilling Cons:
  • Not applicable for small holes at angles less than 45° to the surface.
  • Hole geometric and metallurgical quality requirements mean that this laser type is unsuitable for drilling the majority of holes.

Pulsed Nd:YAG
  Drilling Pros:
  • The combination of high peak power, high pulse energy, and low average power makes this the ideal laser type for small-hole drilling.
  • The near-IR wavelength and good beam quality mean that a relatively small focus diameter can be achieved with good depth of focus, producing holes with aspect ratios of up to 50:1.
  Drilling Cons:
  • Low electrical energy efficiency compared with other laser types means higher operating cost for systems with this laser type.

Yb Fiber (or Fiber)
  Drilling Pros:
  • High beam quality and focusability for producing small holes (0.05–0.1 mm) normal to the surface at high rates.
  • Wavelength is absorbed well by most metallic materials.
  Drilling Cons:
  • High capital cost for a laser with the pulse energy and peak power required for shallow-angle, high aspect-ratio holes.

Ultrashort Pulse
  Drilling Pros:
  • High beam quality gives excellent focusability and produces holes with diameters of a few micrometers.
  • Short interaction time and ablation mechanism of material removal lead to small, almost unmeasurable heat-affected zones.
  • Absence of burrs and refined surface quality.
  Drilling Cons:
  • Low average power (typically less than 100 W) and low pulse energy (microjoules); material removal rates are significantly slower than for longer-pulse and CW lasers. Ns and ps lasers remove material at approximately the same rate; a typical 50-W ps laser removes stainless steel at a rate up to about 15 mm³/min.

Table III. Comparing the four main laser types for drilling applications.
Due to the ablation nature of the material removal, ultrashort pulse length lasers have a unique capability to produce fine patterns of slots and holes with very fine finishes and no significant heat-affected zone.
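Volumetric removal rates for ultrashort pulse lasers translate directly into cycle time. A minimal sketch, assuming a removal rate of roughly 15 mm³/min for a 50-W ps laser (reading the garbled figure in the text that way) and a hypothetical slot volume:

```python
def ablation_cycle_time_s(volume_mm3, removal_rate_mm3_per_min):
    """Cycle time (seconds) from the volume to be removed and a volumetric
    removal rate quoted in cubic millimeters per minute."""
    return 60.0 * volume_mm3 / removal_rate_mm3_per_min

# Illustrative: a 5 mm x 0.2 mm slot through 0.5-mm stainless foil = 0.5 mm^3 of material
t = ablation_cycle_time_s(0.5, 15.0)
print(f"Estimated cycle time: {t:.1f} s")
```

Even at these low absolute removal rates, small features in thin stock finish in seconds, which is consistent with the article's point that ultrashort pulse sources suit fine patterns rather than bulk removal.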


Laser welding also makes use of the ability to concentrate laser energy of sufficient magnitude to melt the materials in the joint. Although there are applications for laser welding of plastics, this discussion focuses on the welding of metals. The key to a successful laser welding application is fixturing and joint preparation that ensure consistently good fit-up (minimal gaps) during welding.

Laser welding falls into two categories. In autogenous welding, materials are fused together without the addition of extra material. This form of laser welding requires the highest level of fixturing and joint preparation: because no material is added, it is essential that the materials to be welded remain in intimate contact during the welding process. Any separation of the materials can result in, at best, an unacceptable weld profile and, at worst, complete failure of the welded joint (Photo Five).
The second category is additive welding. Extra material is added to the weld, usually in the form of metallic wire or powder. By adding extra material, the joint becomes more tolerant and acceptable welds may be produced from joints with less than perfect fit-up. The addition of wire or powder to the joint does, however, create extra control variables and careful consideration should be given before any choice of weld type is made.

Laser welding requires the use of an inert shield gas to prevent oxidation of the weld and the surrounding area. Depending on the materials to be welded and the joint configuration, a broad spectrum of shield gas delivery options can be implemented, from a simple co-axial or off-axis nozzle that delivers a cloud of shield gas to the local area to a completely inert, dry, oxygen-free glove box.
For medical device welding applications, the pros and cons of the four main laser types are summarized in Table IV.

CO2
  Welding Pros:
  • Capable of creating deep, high aspect-ratio welds.
  • The same laser used for cutting can also be used for welding, often with only a change of the focusing lens assembly.
  Welding Cons:
  • The far-IR wavelength is absorbed more strongly by plasma than near-IR wavelengths; uncontrolled plasma can absorb or defocus the laser beam, leading to instability in the welding process.

Pulsed Nd:YAG
  Welding Pros:
  • Pulsed versions are frequently used for thin autogenous metal welding because of the relatively low average power.
  • Allows weld penetration to be controlled independently of weld speed, so joints can be made in heat-sensitive materials and components.
  • Allows flexibility in heating and cooling of the weld to control metallurgical properties.
  Welding Cons:
  • Relatively slow and only suitable for thin materials, typically <2 mm thick.

Yb Fiber (or Fiber)
  Welding Pros:
  • Capable of creating deep, high aspect-ratio welds.
  • The same laser used for cutting can also be used for welding, often with only a change of the focusing lens assembly.
  • Less likelihood of plasma formation than with CO2 lasers.
  Welding Cons:
  • Relatively new laser type; less well understood than the others.

Ultrashort Pulse
  NA. These lasers are seldom used for welding, which requires melting. As with most rules, there are exceptions: for example, ps-pulse length lasers have been shown to be effective for producing a weld at the interface of two overlapping sheets of glass.

Table IV. Laser pros and cons for welding applications.

Selecting Lasers for Multiple Applications

Autogenous micro-welding of dissimilar components is possible using a fiber laser system. This process opens a new range of medical device design possibilities, especially where miniaturization is required for implantable devices.

This discussion has concentrated on four laser types: CO2, pulsed Nd:YAG, Yb fiber, and ultrashort pulse length, and three medical processing applications: cutting, drilling and welding.

It is clear from the various descriptions that two of the laser types, CO2 and Yb fiber, are more or less equally capable of cutting and welding thin and thick materials for medical devices. Ultrashort pulse length lasers are best used for cutting and drilling thin materials for which there is a requirement for little or no heat-affected zone or post-processing of the finished part.
Likewise, pulsed Nd:YAG lasers are the only type capable of drilling high aspect ratio holes, although recent developments in high peak power fiber lasers could lead to this type of laser also being used for hole drilling.

Selecting a laser for a combination of cutting and welding is a relatively easy choice. Selecting a laser type for a combination of drilling and cutting, drilling and welding, or all three processes remains difficult, because the decision may result in a system that does not give a satisfactory solution to all the processing requirements.

As soon as the drilling of high aspect ratio holes becomes a requirement, the laser choice must be a pulsed Nd:YAG. The only consideration after that is “What else can I do with this laser?” Certainly a wide variety of cutting and trimming applications would be possible, albeit at lower speeds than could be achieved with the other laser types. There would also be some autogenous welding capability but welding with filler materials would be beyond the capabilities of the typical pulsed Nd:YAG drilling laser.

The high capital cost of ultrashort pulse length lasers tends to restrict their use to specialist ablation and drilling tasks for which CO2, Nd:YAG, or fiber lasers give unsatisfactory results.


The choice of laser source for medical manufacturing applications is a key consideration in specifying a laser system. Guidelines presented in this article can be used to identify the best type or, at least, create a list of questions for identifying the optimum laser source.


The authors wish to thank Chuck Ratermann, RPMC Lasers Inc. for his input on the application of ultrashort pulse length lasers.


Terry VanderWert is president of Prima Electro North America (Chicopee, MA) and president of Prima Power Laserdyne (Champlin, MN). He has more than 35 years of experience in materials processing and was a founding member of Laserdyne. VanderWert holds a master's degree in metallurgy and materials science and a bachelor's degree in metallurgical engineering from the University of Minnesota. He is a registered professional engineer in Minnesota.
Peter G. Thompson is technical director of Prima Power Laserdyne. He has more than 35 years of experience in the international laser and aerospace industries, including heading his own company specializing in industrial laser applications and training. Thompson has held senior process engineering positions with Lumonics Ltd., Lucas Aerospace Ltd., and MTE Ltd. He holds engineering degrees in both mechanical and electrical engineering.

Nanobubbles Could Aid in Cancer Treatment

Scientists from Rice University (Houston), the MD Anderson Cancer Center (Houston), and Baylor College of Medicine (Houston) are developing new methods to inject drugs and genetic payloads directly into cancer cells. By using light-harvesting nanoparticles to convert laser energy into "plasmonic nanobubbles" that deliver chemotherapy drugs, the researchers could kill 30 times more cancer cells than with traditional drug treatment. These nanobubbles could also enable clinicians to use less than one-tenth of the standard clinical dose of chemotherapy drugs.

Delivering drugs and therapies selectively is a major obstacle in drug-delivery applications. While efforts to sort cancer cells from healthy cells have been successful, the process is both time-consuming and expensive. Researchers have also used nanoparticles to target cancer cells, but because nanoparticles can be absorbed by healthy cells, attaching drugs to the nanoparticles can also kill healthy cells. "We are delivering cancer drugs or other genetic cargo at the single-cell level," notes Rice's Dmitri Lapotko, a biologist and physicist. "By avoiding healthy cells and delivering the drugs directly inside cancer cells, we can simultaneously increase drug efficacy while lowering the dosage."

The Rice scientists' nanobubbles are not nanoparticles. Short-lived events, they are tiny pockets of air and water vapor that are created when laser light strikes a cluster of nanoparticles and converts them instantly into heat. As the bubbles expand and burst just below the surface of cancer cells, they briefly open small holes in the surface of the cells and allow cancer drugs to enter.

The nanobubbles are generated when a pulse of laser light strikes a plasmon, a wave of electrons that sloshes back and forth across the surface of a metal nanoparticle. By matching the wavelength of the laser to that of the plasmon and dialing in just the right amount of laser energy, the team can ensure that nanobubbles form only around clusters of nanoparticles in cancer cells.

To form the nanobubbles, the researchers must first insert gold nanoclusters into cancer cells. They accomplish this by tagging individual gold nanoparticles with an antibody that binds to the surface of the cancer cell. Cells ingest the gold nanoparticles and sequester them together in tiny pockets just below their surfaces. While a few gold nanoparticles are taken up by healthy cells, the cancer cells take up far more. The technology's selectivity results from the fact that the minimum threshold of laser energy needed to form a nanobubble in a cancer cell is too low to form a nanobubble in a healthy cell.

MD+DI Covers TEDMED 2012

Beginning April 10, MD+DI will offer coverage of the TEDMED 2012 conference, being held April 10–13 in Washington, DC. Be sure to keep checking for the latest news, ideas, and medical device innovations being unveiled at the conference.
What is TEDMed?
TEDMED is a community of passionate, leading-edge thinkers and doers who come from every discipline within the fields of health and medicine, as well as from business, government, technology, academia, media and the arts. Every year, the community gathers at the annual TEDMED Conference, where TEDMED delegates “think out loud” together about the challenges and opportunities facing health and medicine today and in the near-term future. By exposing each person in the community to new thinking and interdisciplinary perspectives, the Conference generates an exciting cross-pollination of ideas. The Conference also gives everyone in the TEDMED community an opportunity to make powerful connections with leading thinkers in other disciplines and fields – people whom they would not have met otherwise.
MD+DI’s Heather Thompson, Maria Fontanazza, Brian Buntz, and Jaime Hartford will be reporting on the latest developments throughout TEDMED 2012. Be sure to follow them on Twitter and join MD+DI on Facebook to stay abreast.
For those looking to keep the conference at their fingertips, we strongly recommend downloading the free TEDMED app.
Be sure to use #TEDMEDLive in any of your tweets!