MDDI Online is part of the Informa Markets Division of Informa PLC


Let's Talk About Success, Baby

We’ve gathered our best advice from medical device experts to help you nail your next project or business milestone.

  • Whether you're at a cash-strapped startup or a large public company, everyone can benefit from a little free advice. That's why we've gathered the best advice in the business. May these nuggets of wisdom help you with your next medical device project or your company's next milestone.

  • Embrace This 4-Letter F Word

    The industry is saturated with success stories, but the best learning opportunities come from the projects that fail.

    The ability to embrace failure became a recurring theme throughout the 2018 MD&M Minneapolis conference, and at least three speakers bit the bullet and shared stories of projects that, for one reason or another, fell short.

    Dale Larson, director of commercial initiatives at Cambridge, MA-based Draper Laboratory, was among the first to share a failure story during a panel discussion of identifying and adapting technologies with crossover potential. He talked about an internal R&D project that Draper had pursued involving bioresorbable electronics, a technology that initially seemed like it would have a lot of potential in healthcare.

    "We were working with some surgeons at the Brigham and Women's Hospital and we hit upon this thing called a surgical leave-behind sensor, something that the surgeon wanted to query post-operatively," Larson said. "And everybody got excited. The surgeons were foaming at the mouth, the engineers were like 'yes, it's going to go'. But then we asked, 'What are you going to measure?'"

    After drilling down a bit, the team discovered that there wasn't really anything of great value that could be measured with what was, at the time, a very primitive set of electronics.

    "I killed the project because we couldn't come up with a killer app," Larson said. "It still hurts, and it's been five years since we've killed it."

    But, as painful as that kill was, Larson and his team learned a valuable lesson. Making cool stuff is great, but only if there's a clear application for the technology.

    Inspired by Larson's valor, fellow panelist Alex Thaler, senior global product manager at Minneapolis, MN-based Smiths Medical, shared a story from a previous employer, Perspire Diagnostics, a company that, for lack of a better term, flopped.

    The idea was no sweat (forgive the pun). The company wanted to develop lab tests using patients' sweat. "A few months in, we found that the correlation exists at an aggregate level, but not at a specific patient level," Thaler said.

    Interestingly enough though, other researchers are still pursuing similar technology using wearable devices. "It would be an amazing story to tell if it had worked out," Thaler said.

    But these experiences are in no way limited to startups and smaller R&D organizations. During a separate session, Steve Geist, director of R&D within the transcatheter mitral and tricuspid therapies division at Irvine, CA-based Edwards Lifesciences, emphasized the important role failure plays in innovation.

    "The learnings that come from failure are fundamental to future successes," Geist said. 

    He used the example of a time when Edwards decided to kill a project the company was working on in-house. The learnings from that project were later applied to a platform that Edwards acquired, Geist said.

  • What If It Just Won't Work?

    Speaking of failures, how do you tell management that an idea just won't work? 

    To find out the answer, MD+DI recently spoke with Stephanie Whalen, an engineer with Swope Design Solutions, and Bryce Rutter, PhD, founder and CEO of Metaphase Design Group.

    Whalen suggests one of the first things device developers should do is clearly define the problem the company is hoping to solve.

    “Instead of labeling something as impossible (an entrepreneur, for example, thrives on making the impossible a possibility), clearly and objectively describe the risks involved in attempting the approach,” Whalen told MD+DI.

    She also said that empathy plays a significant role.

    “Speaking of stakes and stakeholders, try to understand the product as a whole system as much as possible, rather than one subsystem or component,” Whalen said. “It will, again, help you better empathize with management, so you can more effectively communicate your point of view, and make sure solutions offered are compatible with the corporate strategy or with individuals' goals, etc.”

    Age Is More Than a Number

    Rutter told MD+DI that it isn’t uncommon for age to be a barrier to having conversations about a project that isn’t going well.

    “Unfortunately, age seems to confer wisdom,” Rutter said. “People assume that you’re older and have more experience. That’s not the case. People who are serving up results that are not desirable can be of any age. But that is an added challenge of younger people trying to deliver bad news.”

    He added, “Another challenge with a younger person is just having access to the right person who can kill an idea, just because it’s a bad idea. They might have to report up the chain and they’re concerned with issues of going around their immediate boss. Their immediate boss might not be listening to them. They’re looking at a dead idea and they’re trying to get it to someone’s ears who’s going to pay attention. The hierarchy within an organization presents a challenge for a younger person because they’re typically lower on the food chain. With the older crowd, in many cases, those things are muted to a certain extent or eliminated. That makes things better, but now you’re in the room with the person you’ve got to serve up [the bad news] to.”

    And that scenario leads to a totally different set of issues. However, Rutter said in his experience he has developed a one-size-fits-all strategy, regardless of age, to help deliver the bad news.

    “Everyone has kind of fallen in love with the idea because it’s in R&D already,” he said. “So now you’re telling me my baby is ugly. There’s a lot of emotion there. What I have found to be the most effective tool is removing the emotion that can be intrinsic in any one-on-one conversation.”

    Rutter noted, “You take a one-minute video that is comprised of maybe 30 five-second clips that highlight the problem, so they can see directly from the consumer's point of view. My experience has been that they look up at you and go ‘holy cow we’re screwed’. So now it’s not my opinion. I have removed myself out of that emotional confrontation or that emotional dialogue. Now it’s the voice of the customer getting through to [upper management] to really hear what they’re saying.”

    If All Else Fails

    Rutter said, “I think when you’re part of the R&D team, whether you be an industrial designer, or a human factors person, or an engineer, or anyone – and you know that there’s total ignorance and refusal to accept the facts, then you have a moral dilemma. And that moral dilemma is you know this thing is going to blow up eventually. The moral question is do you want to be there and be a part of that fire – do you want to participate in the charade. I think that most people would say no.”

    Whalen noted that the person must also consider their mental health and how the work environment is impacting them.

    “Lastly, don't forget to focus on your own mental health, and don't let a toxic work environment get to you (as best as you can),” she said. “I don't think there is any shame in taking a step back or leaving entirely if the situation is damaging to your mental health.”

  • 5 Issues Project Managers Face and How to Prevent Them

    Nigel Syrotuck, a mechanical engineering team lead for StarFish Medical, says that if you stay vigilant against these five issues, you can prevent program budget and schedule underestimation. 

    1. Optimism Creep 

    The Problem: There is an inherent desire by project planners to be optimistic while selling their brainchild in order to get it approved. We can see this happening repeatedly with elements of the Olympic Games, which haven’t stayed on budget in decades. Consultants may also feel this pressure in order to win business in the short term.

    The Result: Stakeholders feel deceived, and the program developer or consultant receives a tarnished reputation. Often, securing funding for future projects is very difficult unless leadership is replaced.

    How to Prevent It: Recognize that usually everyone will be better off in the long run if estimates are as accurate as possible up front. If you’re worried that you’ll lose stakeholder approval by coming in with a longer and pricier schedule than expected, you will be better served by explaining the estimate and instilling confidence that your plan is realistic.

    2. Intentionally Aggressive Plans

    The Problem: A schedule or budget is intentionally aggressive in an effort to get staff to work harder than normal to meet targets.

    The Result: Best case, this leads to burnout and high employee turnover within your organization while staff attempt to meet aggressive goals. Worst case, this leads to the “we’re already late and over budget, what’s another few weeks on top of that” mentality, resulting in more harm than gain.

    How to Prevent It: Schedule realistically and remind your team that finishing early and beating their milestones is desired, then reward them when they do. It’s easier to reinforce just how important milestones are when you aren’t missing them all the time.

    3. Insufficient Time for Innovation

    The Problem: Designers are expected to arrive at a conclusive, innovative solution given only a whiteboard and a limited chunk of time, rather than being allowed to ideate, prototype, and repeat as often as needed to converge on the right solution.

    The Result: A hastily designed device can create a poor foundation for the rest of the program and result in overages later on.

    How to Prevent It: When estimating, put aside enough time to collect unbiased data (site visits), document what the user and business actually need, and re-imagine the product as often as necessary to ensure you’re starting down the right path to make the right device. Don’t forget that as long as you’re collecting data, the design may need to change. Putting aside extra budget for early adjustments will save even more time and effort later.

    4. Misalignment on the Scope of a Minimum Viable Product

    The Problem: Creating a simple “Minimum Viable” product does not necessarily equate to a big reduction of the required effort. Simpler medical products are certainly easier, but still require full design and manufacturing development, including all documentation and safety testing. They typically are not a fraction of the effort required for a more fully featured product.

    The Result: A misalignment between the development and management teams on this topic can lead to pressure to significantly (but incorrectly) reduce estimates whenever small features are removed.

    How to Prevent It: The design team and product owner should clarify and agree during the estimation process on the actual expected time and impact for each additional feature.

    5. Insufficient Understanding of the Testing Process

    The Problem: Testing is defined to include only running tests, without the inevitable troubleshooting, fixing, and re-testing needed when things don’t go according to plan.

    The Result: As testing is often conducted at the end of a program, failure to leave adequate time results in a delay at the worst possible moment, when funds and time are running short.

    How to Prevent It: Test early, test often, and be realistic about your expected testing failure rate and what will actually happen when tests do fail.
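    Syrotuck's point about expected failure rates can be made concrete with a back-of-the-envelope calculation. The sketch below is our illustration (the function name and the 30% figure are assumptions, not from the article): if each full test pass fails, and forces a fix-and-retest cycle, with some probability, the expected number of passes follows a geometric process.

    ```python
    def expected_test_cycles(failure_rate: float) -> float:
        """Expected number of run/fix/re-run cycles when each full test pass
        fails (and must be repeated) with probability `failure_rate`.
        Re-testing is modeled as a geometric process: E[cycles] = 1 / (1 - p).
        """
        if not 0.0 <= failure_rate < 1.0:
            raise ValueError("failure_rate must be in [0, 1)")
        return 1.0 / (1.0 - failure_rate)

    # If 30% of full test passes uncover a failure that forces a fix and retest,
    # budget roughly 1.43 passes' worth of schedule rather than 1.0.
    print(round(expected_test_cycles(0.3), 2))  # 1.43
    ```

    Even this toy model shows why a plan that budgets exactly one pass of verification testing is almost guaranteed to slip.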

  • 3 Things to Know Before Selling Your Startup

    Selling to a large company can be an attractive option, but medtech startups must go into the sales process prepared. Here's what you need to know to sell your company at top dollar.

    1. Focus on Your Proof Point

    Medtech startups with an eye toward selling must focus. A company looking to buy will need to know how, specifically, a new product will enhance their portfolio and their bottom line. But sometimes, early-stage companies get carried away demonstrating every potential use for their innovation.

    If you invented your product to, for example, make prosthetic hands work better, but found that it also can help prosthetic arms and prosthetic legs and even orthotic hip braces, it would be natural to get excited by those additional applications. But when a large company looks to buy a product that improves prosthetic hands, they'll need to know how, specifically, your product will enhance their prosthetic hand portfolio. And they'll need solid evidence of it.

    For a capital-constrained startup, it's hard to craft a compelling case that the product does what it is intended to do when the supporting data is an inch deep but a mile wide. The startup should focus on one or two markets or therapy areas and dive into gathering the deep data set that will turn into proof points for adoption.

    2. Invest in Data

    Even cash-strapped startups must invest in data. Without data, there is no evidence that a product works. And thus, no good way to sell your company at top dollar.

    But collecting the right data needn't be a struggle. Startups must have a clear plan to carefully allocate resources for gathering this supportive information. Be clear about your product's competitive differentiation. Is your product faster, easier to work with, or less expensive than its competitors? Prove it.

    To articulate your unique value proposition or differentiation relative to the standard of care, invest in statistically significant clinical studies as appropriate to the target market or indication. Depending on the type of device, a startup may begin with animal study data, then move on to human data. Human data is more valuable though, even if only from a few users. Learn from similar companies and regulatory experts to understand the threshold of data needed to sell a device like yours.

    3. Market to the Right Buyers

    Though startups often make business decisions based on opportunity, they must focus and be proactive to determine the right potential buyers. Large medtech companies are more likely to acquire a product if they already have a sales channel that will align with the target market of that product.

    To determine a good fit, it's all about research. Anticipate the buyer's questions. How is your product synergistic to the buyer? How is their sales channel set up to sell your product? How does your product fit into a company that sells its products at a higher price point than yours? Would acquiring your company expand the buyer's market or hinder it?

    Learn everything you can about the buyer. If it's a public company, read their earnings reports and find out their strategies. Create a competitive matrix to determine where your product would fit into a potential buyer's offerings.

    After determining which companies are the best potential buyers, consider your options for marketing to them. Tactics like whitepapers, customer testimonials, and conference attendance help startups put their products in front of potential buyers. Startups benefit by deliberately gathering and sharing the most compelling data in whitepapers and conference presentations.

    These tips from Susan Tomilo, a managing director and leader of the healthcare and life science practice at Sikich Investment Banking, were part of an article Tomilo recently wrote exclusively for MD+DI. See Tomilo's article for a more detailed summary of these tips.

  • Don't Let FOMO Dictate Your Requirements

    Nigel Syrotuck, a mechanical engineering team lead for StarFish Medical, says the fear of missed feature opportunities in medical device design is one of the biggest worries he sees when working with clients. The following is an excerpt from an article Syrotuck wrote exclusively for MD+DI. To read the full article, click here.

    The reaction to the fear of missing out on component-level improvements is often to define the device requirements down to the feature level. This often manifests as an effort to exhaustively write down both the required and desired features individually to be 100% sure the design engineers are aware of them. These desired requirements (or desirements, if you prefer) usually end up in the requirements document, which in turn can get passed on to the risk specification, verification, and validation documents. It is true that design teams appreciate well-written design input documentation, but the added burden of carrying these desirements through multiple DHF documents is significant and adds cost and time to device development.

    Hitting all the Markets

    Similar to missing out on a key performance feature, the fear of missing a key market can have an even bigger impact on the development path. In this particular nightmare, the designers don’t just miss out on a better battery option; they miss out on the whole idea of a portable unit to begin with, which results in a product unsuitable for certain users, cutting the market down significantly. Once again, the reaction to this is often to "simply" add a number of desirements to the requirements document so that the device could do everything possible.

    In a black and white world, the designers would know which desirements are reasonable, include those, and simply not achieve the remainder. This costs a bit more but results in the best achievable product. In reality, things are a lot muddier. Engineers who are always trying to be as efficient as possible are unlikely to quickly accept failure and move on. Instead, design teams will start to make small concessions, and the product will probably get hit with scope creep:

    “If we just make it out of metal, it can be sterilized.”

    “If we just make it slightly longer, it will be able to treat hands and feet.”

    “If we just add a solar panel, it can run forever.”

    It’s obviously a good idea to try to attract the largest market possible with the least amount of effort, but to designers who are used to going the extra mile and delivering an A+ product, not meeting requirements (even if non-essential) can feel like a failure they will try hard to avoid.

    Counteracting FOMO for Components

    Instead of detailing out every possible scenario, time and cost can be saved by investing up front to ensure everyone has a working understanding of the device’s intended use case and environment. This can (and should) be documented separately to keep the requirements as concise as possible, while still capturing that knowledge in a different place for easy reference. From there, the engineering team’s ingrained desire to make the best possible product can take over without feeling challenged to meet all the desired requirements. It seems like a small difference, but the verbal approach of conveying a vision for a future that may or may not happen is very different from a written challenge no one wants to fail to meet.

    Fighting FOMO by Reducing Documentation for Desired Features

    A simple method to avoid writing every desired feature down in the requirements document is to create a separate, lower-level document that details the “important but not essential” design features of the device. This is typically the system architecture document, and it is written in plain language. It isn’t used for safety or efficacy verification, or user validation, so the resultant burden of adding low-level desirements is much less in the long run. Writing it will also ensure the design team knows why certain features are or aren’t included.

  • Device Implementation Sets the Stage for Successful Adoption

    The key to successful medical device adoption may lie in the device implementation. The following advice comes from an article Susan Brown wrote exclusively for MD+DI. Brown is the chief nursing officer at ivWatch, a company focused on improving patient safety and the effectiveness of intravenous therapy. Brown is responsible for clinical research and training programs for the company. To read the full article, click here.

    Data Supports Business Case for Investment

    For a new device, the expectations are always high. Not only must it be intuitive, but it must offer greater functionality over its predecessor – whether that be a competitive device or manual clinical assessment. All of this is compounded for first-to-market innovations. For medical professionals to adopt new technology, it should possess unique qualities that current offerings don’t deliver.

    As the device or technology experts, company representatives should help clinicians gather all relevant information on the benefits to the patient, clinician/physician, and organization. Perhaps most important is how the product fulfills an unmet need. This includes but is not limited to building a baseline of the current frequency and severity of the problem the device solves. The current healthcare atmosphere emphasizes both efficiency and quality, so being able to use data to support the need will help clinical champions get buy-in. As more institutions are improving electronic health records processes, this is an ideal vehicle to streamline the gathering of necessary data points.

    Evaluation Considerations

    Before leaping to a purchase or an installation, many organizations first require a product evaluation. Prior to the start date, device companies and institutions must collaboratively establish and agree upon their evaluation expectations and upon a clear definition of a successful trial. This includes data collection, education, evaluation objectives, and personnel support.

    To set up the parameters of the evaluation, company representatives should first work with the project lead to determine the number of departments to include based on the size and average daily census and the number of potential candidates to use the device. Staple considerations for determining the size of an evaluation are the presence and ability of an evaluation team leader from the institution and a clinical education representative from the device manufacturer to ensure correct use. To properly staff an evaluation and collect enough data to assess the device or technology, the team will typically need about three to five days, but that timeframe can vary by device classification.

    Training and Policy Development

    After the trial and receipt of the required support for installation or implementation, company representatives can help build a training team with knowledge in the relevant specialty and instruction skills. Creating a training team isn’t a one-size-fits-all endeavor; we suggest that our customers bring in other disciplines as they relate to the device benefit. As an example, those in the vascular access space have a complete understanding of access techniques, best practices, and complications, but that level of knowledge may not apply to clinicians across the hospital. Educators are crucial for filling the gap.

    Also, medical device manufacturers can help develop training content and provide guidance for implementing new procedures. This could be as simple as providing a checklist on ways the device will fit into current policies, including providing examples of policy and procedure language. Discuss important workflow considerations like user competencies, patient criteria, device storage, troubleshooting, and cleaning instructions. It’s the responsibility of the device team to recommend optimal conditions and guidelines during implementation to ensure proper use.

    Education Leads to Accelerated Acceptance

    Even the most promising technologies are not immune to traditional industry characteristics, like clinicians with an aversion to change and high professional turnover. Succeeding in this environment requires buy-in from hospital leadership and consistent training, assessment, and support.

    Training is a financial investment, and although it’s an essential one, its importance isn’t always conveyed. Mandated training is possible and can happen in various ways; we’ve seen education and device training included in house-wide onboarding and rolling education or as part of a dedicated skills day. Close collaboration between the institution and the device manufacturer is essential to bring in new innovations that support the ultimate goal of bettering outcomes and care.


    [Image Credit: ivWatch]

  • Breaking Through the Bias: Debunking Common Misconceptions About Usability Studies

    Greg Martin reviews three of the most common misconceptions, mistakes, and excuses companies might encounter when it comes to usability. Martin is the director of design for Product Creation Studio, an integrated medical product design and engineering consultancy in Seattle. The following is an excerpt from an article Martin wrote exclusively for MD+DI. To read the full article, click here.

    I showed my network the product, and they loved it. This is a sufficient usability study.

    This happens all too often. While this may get you some good information, you may not be getting unbiased, controlled data. Good user studies provide transparency regarding your product’s strengths and weaknesses. This means their results can be uncomfortable to hear. This might interfere with an entrepreneur’s or even a seasoned veteran’s ego. It often takes admitting that we don’t know what we don’t know, and this can be hard for anyone, especially if they’ve been in the industry for a long time.

    There is an enormous amount of personal bias to overcome. It can be in the form of enthusiastic friends who want to support your efforts. It can be the result of unspoken pressure felt by employees or the amount of work an employee has already put in. Even if the person giving feedback is an expert in the field, there are subtleties of use that most of us aren’t aware of—humans perform tasks through a series of unconscious gestures, especially experts.

    Some informal user studies are necessary and can have valuable results in the early stages of development. However, these are not validated and therefore might not yield the true potential of a product. Informal studies, when used to replace formal ones, can impact the decision process to the point of failure.

    Professional usability experts aim to combat bias and provide both consistency and documentation of decision making, which is valuable for regulatory requirements, as well as uncovering more-nuanced user habits. Even if the outcome is similar to that gained through informal processes, the differences can be subtle, but they are critically important.

    I won’t get real scientific data from a usability study.

    There is no guarantee that a usability study will yield significant learnings. In some cases, the data only reveal patterns that may or may not be useful. Results from studies can be subtle and aren’t always what we normally think of as hard scientific data.

    Companies might be tempted to skip or ignore the outcomes of a study because they don’t see how it can improve the product. But again, they are doing themselves a disservice. The science of user studies is often rooted in psychology and human behavior. And in some ways, the art of these studies is as important as the science.

    Make no mistake, user studies are a science. They rely on a specific structure and involve rigorous documentation of the results to provide unbiased insight.

    A good user study considers all the factors that could influence design. It is scientific in its selection of participants, recruiting users who are diverse enough to yield a wide understanding of the product but still fit a specific profile.

    Further, tools are being developed to improve data collection so that the results really are better at capturing use conditions. In addition to videos, questionnaires, and environmental observation, sensors can be employed to capture use data that might help illuminate how users engage with a product. This is an emerging science that will require effort and change on the part of usability experts.

    The value in user studies is in being able to tie information back into insight that can inform design or operation of the final product.

    It is never (and yet it is always) a good time to do a usability study.

    There is some truth to this. It is genuinely hard to know the precise point at which a user study will yield the best results. It might be early in the design phase, when key design decisions are being made, or it might be later, when those tests begin to resemble or coincide with marketing efforts. Too soon and you won’t get the full benefit; too late, same problem.

    This is why understanding the various tools of usability is so critical. They can help a company determine the best time as well as the best tools to conduct user studies.

    Some well-worn words of wisdom are “test early and often,” but various methods are best deployed at different times. Depending on the phase, you might be strategizing, executing, or assessing.

    Strategizing is a beginning phase in which a company typically considers new ideas and opportunities for the future. Research methods should be both qualitative and quantitative. Tools might include field studies, diary studies, surveys, data mining, or analytics.

    Eventually, a company will reach a "go/no-go" decision point when it transitions into a period of continually improving the design direction that’s been chosen. This is the execution phase. Research in this phase is mainly qualitative and includes card sorting, field studies, participatory design, paper prototyping, usability studies, desirability studies, and customer emails.

    Finally, a company is ready to engage in assessment, where it seeks to measure how well the product is received. This is typically quantitative in nature and might be done against the product’s own historical data or against its competitors. Usability benchmarking, online assessments, surveys, and A/B testing might all be employed at this point.
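    As one concrete illustration of a quantitative assessment (our sketch, not from Martin's article), an A/B usability benchmark often boils down to comparing task-success rates between two designs. A standard way to do that is a two-proportion z-test; the function name and participant counts below are hypothetical.

    ```python
    from math import sqrt, erf

    def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
        """Two-sided z-test comparing task-success rates of designs A and B."""
        p_a, p_b = success_a / n_a, success_b / n_b
        p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal approximation
        return z, p_value

    # 42 of 50 participants completed the task with design A vs. 30 of 50 with B
    z, p = two_proportion_z(42, 50, 30, 50)
    print(f"z = {z:.2f}, p = {p:.4f}")  # z is about 2.67; p < 0.01, so the gap is unlikely to be chance
    ```

    The same arithmetic applies when benchmarking a new release against the product's own historical success rates rather than a competitor's.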

    Use the testing and research until you are confident that you understand all angles of your users and the problem your product is trying to solve. It takes work, understanding, and compromise to create a marriage between your products and the world people live in.

    Consider that a large part of a user study’s value lies in its process of documenting and testing to gain credibility. Conducting user studies just before you need that credibility will help you move forward in a positive manner.

  • How to Get to Market Quickly, Without Driving Your Team Crazy

    Project engineers often spend weeks, if not months, waiting for management to formally approve a new project. Then, as soon as the green light is on, so is the pressure.

    "The minute our executives say go and everyone is excited, a week later they come down and start asking us, 'okay, when are you going to do this? When can we get this product out?'" said Nikhil Murdeshwar, a senior principal research engineer at Olympus Surgical Technologies America, referring to past experiences in the industry (not specific to his current company).

    Murdeshwar spoke at MD&M Minneapolis 2018 about managing a successful project, exceeding sales estimates, meeting launch commitments, innovating for success, and satisfying users.

    "In our world of medical devices our intentions are to get these devices to the market quickly, but in doing that our industry forgets that we're working with people, and we're working with the FDA, so we want to make sure we do everything in a responsible way," Murdeshwar told MD+DI.

    There are many instances in which, once a project is finished, the team members never want to work together again because it was such a bad and stressful experience, he said. But there are also instances when the opposite is true and the team can't wait to work together again.

    "We have to be mindful and appreciate the people we work with ... so that we don't drive people out of the company, or out of the industry," Murdeshwar said.

    Murdeshwar's presentation included five specific tips for managing a successful project. Among them is the idea that tools do not replace experience. By that, he means the experience gathered in the field during the research phase of a project.

    He explained that on the business side of R&D there are a lot of estimations that must be made about the potential market size of the new product, potential sales, etc. In this day and age, it's all too tempting to turn to computer-aided design tools and spreadsheets to turn over those estimations quickly and from the comfort of your own cubicle. But what happens when, down the road, you realize there is a huge gap between what you estimated and what you're actually seeing?

    Ideally, when an engineer observes a problem, they develop a hypothesis, test it out, and develop a solution, Murdeshwar explained. Then it's time to sit down at the computer and use those tools to present the idea to the rest of the team and your superiors. 

    "Someone who does not do that diligently would probably observe a problem, then run straight to their tools and present it to the team," he said.

    The scary part is, in this fast-paced world of medical innovation, the company leaders may not have the time to question project leaders about how they came up with those estimates and whether or not they actually went out and talked to surgeons to inform the product design, Murdeshwar said. So it's up to the project engineers to ensure that the proper fieldwork has been done so that the company doesn't invest time and money into developing a device that surgeons won't even use.

  • Take Device Performance to the Next Level

    Perry Parendo has spent the last 30 years developing a series of Design of Experiments (DOE) techniques, a method that runs all the way back to his days working for General Motors Research Labs in 1986. Parendo eventually began to use these DOE techniques to help solve complex problems that would arise during the product and process development phase for international design teams tasked with new product development.

    These DOE techniques are what eventually led to the founding of Perry’s Solutions in 2006, when Parendo began helping organizations and companies with critical product development activities. Parendo has spent countless hours consulting with design teams from different organizations to help them maximize performance and take their products to the next level.

    The following is a 2018 Q&A between MD+DI and Parendo.

    MD+DI: For starters, can you talk about the process of evaluating your product during design characterization? What are some of the things that are important to look for to help boost product performance?

    Parendo: The process starts with knowing your requirements and which ones may be challenges. Then, assemble potential input variables that may influence those outcomes. This includes any noise factors or environmental conditions. This can all happen at the beginning of the project. Characterize these items and then “confirm” the next tier requirements are working as expected. If not, then more characterization work is needed to reduce project risk.

    It is important to look beyond the obvious design factors. It is easy to identify 30 or more factors that could impact performance. I am not suggesting going overboard during a test, but restricting ourselves to the easy and obvious two or three variables clearly limits what we can learn. Being in an unstable operating zone with one input variable can make the entire design/product unreliable. By taking advantage of hidden replication, we can keep the sample size manageable.
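The "hidden replication" Parendo mentions can be shown with a small sketch (illustrative, not his code): in a two-level full factorial design, every factor sits at each level in half the runs, so each main effect is estimated from all observations rather than a small dedicated subset:

```python
from itertools import product

# All 8 runs of a 2^3 full factorial: three factors, each at levels -1/+1
factors = ["A", "B", "C"]
runs = list(product([-1, 1], repeat=3))

# Hidden replication: every factor is at each level in half the runs,
# so each main effect is estimated from all 8 observations (4 vs 4)
for i, name in enumerate(factors):
    high = [run for run in runs if run[i] == 1]
    print(f"factor {name}: {len(high)} runs at +1, {len(runs) - len(high)} at -1")
```

Testing the same three factors one at a time with four observations per level would take far more runs for the same effective sample size per effect.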

    MD+DI: When it comes to evaluating a product’s ability to perform in different environmental conditions, what are some basic barometers for success that you look for?

    Parendo: Success depends on understanding the sensitivity of the key design requirements. How much shift in performance is going to be noticed by a user? Any noticeable change is likely going to be considered bad. How can we operate in a stable performing region? How do we know where that region is? Once these answers are determined, the design tradeoffs can be made. It is one thing to create a capable design, but it really takes no more effort to find a level of robust design.

    MD+DI: What are some ways that designers can accelerate the evaluation and testing process, and how will that benefit the product in the long run?

    Parendo: We need to understand which design parameters may interact together and make sure to test them together. Design of Experiments (DOE) is the only technically and statistically efficient way I know to do that. When I ask people which variables may interact, it is a complex set of possibilities. It is rarely a clear cut answer. However, those same variables are often tested in isolation, which does not allow us to extract the information. We assume, or hope, that the interactions are small, but the truth is some of them are not. We do not know which one it is without performing efficient tests of the combinations. This is considered strategic testing.
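As an illustrative sketch of why Parendo tests variables in combination (the response values below are made up, not his data): in a 2×2 design, the interaction shows up as a contrast that one-factor-at-a-time testing cannot estimate:

```python
# Illustrative 2^2 design: factors A and B at levels -1/+1, with a made-up
# response y containing a real A*B interaction that isolated tests would miss.
design = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
y = [10.0, 12.0, 11.0, 19.0]  # hypothetical measurements

def effect(column):
    """Effect estimate: average response at +1 minus average at -1."""
    hi = [yi for c, yi in zip(column, y) if c == 1]
    lo = [yi for c, yi in zip(column, y) if c == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

a_col = [a for a, b in design]
b_col = [b for a, b in design]
ab_col = [a * b for a, b in design]  # interaction contrast

print(f"A effect:  {effect(a_col):+.1f}")
print(f"B effect:  {effect(b_col):+.1f}")
print(f"AB effect: {effect(ab_col):+.1f}")  # nonzero => A and B interact
```

Here the nonzero AB contrast reveals that the factors amplify each other, which is exactly the information that testing A and B in isolation throws away.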

    MD+DI: What are some tips that you could share with design teams when it comes to selecting the best combination of variables for short-term testing?

    Parendo: Testing in isolation is bad because we lose combination effects. However, testing everything together is too complex and inefficient. The best tip is to group our tests with variables that may interact together. As a simple example, in one test, combine mechanical variables together, and in another test combine the electrical variables together. I have heard it called testing within the “energy bands” and is something I consider on every test I set up.

    MD+DI: In your experience, how can designers try to be more proactive when it comes to both predicting and addressing issues in a fast and effective manner?

    Parendo: It is not easy, but we need to be honest about our designs. Be upfront about risks and uncertainty. This does not make you a bad designer. Instead, it makes you realistic. We get surprised at times, and that is okay. Ensure we evaluate our higher-risk items deeper than our lower-risk items. Unfortunately, many designers test everything to a similar level, regardless of the risk involved. We are optimistic that things will work out, and we end up doing inadequate work.

    MD+DI: Finally, if you could give one piece of advice to designers when it comes to taking product performance to the next level, what would it be? In essence, what’s a good piece of advice that you’ve picked up over the years that many design teams often overlook?

    Parendo: Be humble in design. Testing is intended to help us learn, so a “failure” can be a good thing early on. Use early tests to make a better design, not just prove that you were right. It is a subtle, but very important difference.

  • 6 Steps for Getting the Most Out of Your Design Review

    Mike Kahn shares six steps for getting the most out of your design review. Kahn is the director of electrical and firmware engineering for Product Creation Studio, an integrated medical product design and engineering consultancy in Seattle. The following is an excerpt from an article Kahn wrote exclusively for MD+DI. To read the full article, click here.

    1. Plan Ahead and Prioritize.

    With a tight schedule, adequate time for a review and review updates is often the first thing to go. Depending on the complexity of the item being reviewed, one or more days may be required. Make sure the project schedule includes time for review preparation, time for design review participants to pre-review material, and time to make identified changes.

    2. Checklists are Your Friend.

    Many critical review items are completed outside of formal design review meetings. Checklists are a great way to make sure important items are not missed and to capture institutional knowledge to avoid repeating mistakes.

    Our electrical team, for example, maintains a set of checklists for review of schematics, layout, and production of fabrication documentation. Checklists enforce best practices. Some are development tool–specific, such as ensuring automated design rule checks are properly executed and CAD exports are configured with the right settings to create fabrication outputs. Others enforce style and necessary provisions for considerations such as test, debug, risk mitigation, and manufacturability.

    3. Leverage the Right People.

    Based on the objective of the design review, the right group of people may range from an appropriate mix of disciplines, to staff from across the organization, to independent reviewers within the same discipline.

    A reviewer with a fresh set of eyes on a technical review will not only find issues directly but also will ask questions that provide a fresh perspective on why requirements are implemented in a certain way. Be sure to include team members who are intimately familiar with the project as well as others who are independent and seeing it for the first time.

    Outside resources to consider for the technical review:

    • Leverage outside consultant(s), if an expert is not available within your company.
    • Leverage component suppliers’ field application engineers. For many complicated integrated circuits, especially complex peripherals or system-on-chip processors, dedicated application engineers will review schematics, layout, and firmware. This is a great, underutilized resource.

    Internal extended team resources to consider for reviews:

    • A design review at the architectural level or a technical requirements review will leverage a cross-disciplinary team.
    • A technical requirements review, for example, adds marketing, product management, quality engineering, safety engineering, and compliance.

    Take time to think about the right group of people to include in the design review, and plan ahead so reviewers have enough notice to fit the review into their schedules.

    4. Conduct a Review Kickoff Meeting.

    A review kickoff meeting, prior to the actual design review, is an effective strategy to ensure coverage, save overall intake time, and help engage busy reviewers not specifically assigned to the project.

    Reviewers new to the project benefit from a project overview to establish context such as review objective, fidelity, constraints, and architecture. It is also important to provide a summary of key requirements that drive implementation and aspects of the design such as power, cost, form-factor, thermal, reliability, and critical performance elements.

    Many reviewers who receive review content in advance may not have a chance to closely look at the material prior to the review (and some may not look at it at all!). This targeted meeting introduces review participants to the design and reduces the chance of procrastination by eliminating common barriers.

    In the review kickoff meeting, the responsible designer can provide a design overview, walk through areas of high complexity or concern, and make sure that reviewers can locate supporting material. A reviewer who is new to the design and project will then be less likely to get stuck on basic questions, allowing time to focus on the more complex elements of the design.

    Additionally, rather than having everyone focus on every part of the design, reviewers can be assigned to go deep on specific areas to ensure coverage of the more complex and high-risk areas of the design.

    5. Assign Roles During the Review Meetings.

    In order to get the most productive feedback during the review meetings, it is helpful to assign some critical roles. Here are some key roles:

    • Design owner: The design owner sets the objective and leads the meeting. The design owner should plan the agenda ahead of time and walk through elements of the design that require a specific focus in a logical order.
    • Note taker: Assign a dedicated note taker who is not the design owner. This allows the design owner to focus on the review and feedback. The note taker should capture all feedback and action items.
    • Time enforcer: Assign a person to keep the meeting on track (this is often the project manager). It is easy for the meeting to get consumed by too much detail in a specific area or by topics that do not support the review objectives. This person should have the note taker add such topics to a “parking lot”; discussions or meetings can later be scheduled to specifically address them. It’s best to introduce this person’s role at the start of the meeting, along with the time constraints and objectives, to ensure understanding and acceptance when a refocus is needed.

    6. Tune the Process.

    The designer is ultimately responsible for the work product under review and has final control of how that input should be addressed. After deliberation, the responsible designer should close the loop with the reviewers by reporting how their specific feedback items will be addressed. This also includes inputs and feedback that will not be acted on with the rationale provided.

    At project milestones or completion, project members should tune the review process by reviewing actual problems and issues uncovered. Adjustments to the review process could be as simple as updating a checklist, including a missed organizational member, or leveraging a specific specialist at the appropriate time in the project.

  • Share Your Industry Insights at MD&M Minneapolis 2019

    The call for speakers for MD&M Minneapolis 2019 is now open. The conference will be held Oct. 23-24, 2019.

    We’d love to hear your suggestions for delivering a unique educational experience for R&D engineers and managers, design engineers and human factors/industrial design/UX professionals, project and process engineers, manufacturing engineers, quality and regulatory engineers, as well as C-suite-level executives and managers involved in medical device commercialization. Sharing your innovation strategies and tactics can help move medtech product development forward in 2020.

    SUBMISSION DEADLINE: Friday, May 10, 2019, 11:59 PM EST

    Click here for more details about what we're looking for and how to submit your topic proposal.
