Intertech Engineering Associates, Inc.

The Essential Performance SNAFU part 2 of 4

The Essential Performance SNAFU:

Defining Essential Performance for a medical device is a challenge, and the definition is especially challenging and often confusing when software is part of the product. This article series is intended to share what is happening in the industry once Essential Performance is defined, and to provide some guidance on how to address compliance and safety while avoiding the SNAFUs we have observed in this process.


The compliance community has been using Essential Performance (EP) to focus testing and evaluation of medical products since the 3rd edition of IEC60601-1 (published in 2005). Some medical device manufacturers are fortunate enough to have their Essential Performance defined for them through Particular standards, where these are available for their medical device. Those less fortunate, who are left to define this for their products, are often challenged to determine what is essential. This can be confusing, and we have found that industry test houses are not helping manufacturers make sound decisions when it comes to devices containing software.


Part 2. Defining Essential Performance

The 3rd Edition of IEC60601-1 and Programmable Electrical Medical Systems (PEMS)

The changes made in the third edition of IEC60601-1 adopt two new main principles, as described in the standard's introduction. The first is a broadened concept of safety across the series of standards, which now encompasses both Basic Safety considerations and Essential Performance matters. The second is the addition of a provision for assessing the adequacy of process compliance when this is the only practical method of assessing the safety of certain technologies. Two such examples stand out in the standard: the application of ISO14971 risk management processes, and the adoption of software lifecycle processes to support programmable electrical medical systems (PEMS). PEMS is addressed in clause 14 of the standard, which, when applicable, requires a software development lifecycle and software validation. In considering these processes it is important to recognize that ISO14971 requires a risk management process to support and assure safety, not just the identification of Essential Performance and Basic Safety characteristics per IEC60601-1. This article will further examine how what is determined to be Essential Performance might impact other supporting process standards.


Essential Performance (EP) and the Impact on PEMS

Examining the PEMS clauses further, in clause 14.1 of IEC60601-1 we find:

“The requirements in 14.2 to 14.12 (inclusive) shall apply to PEMS unless:

– none of the Programmable Electronic Sub-System (PESS) provides functionality necessary for Basic Safety or Essential Performance; or

– the application of Risk Management as described in 4.2 demonstrates that the failure of any PESS does not lead to an unacceptable Risk.”


Clause 14.1 indicates that the PEMS clauses apply if software provides functionality necessary for Basic Safety or Essential Performance, or if the application of Risk Management per clause 4.2 (ISO14971) cannot demonstrate that the failure of any PESS does not lead to an unacceptable Risk. An important point to note is the "or" in the clause, which can easily be ignored or missed when determining whether the PEMS clauses apply. The PEMS clauses include the requirement to apply a software development lifecycle, including software validation. Annex A of IEC60601-1 is intended to provide general guidance; its guidance for clause 14.1 says this:
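The two exemption conditions in clause 14.1 are easy to misread, particularly the "or". As a minimal sketch (the function and parameter names are ours, not the standard's), the applicability decision can be expressed as:

```python
# Illustrative reading of IEC60601-1 clause 14.1; names are hypothetical.
# The requirements of 14.2 to 14.12 apply UNLESS either exemption holds.

def pems_clauses_apply(any_pess_supports_bs_or_ep: bool,
                       risk_mgmt_shows_pess_failure_acceptable: bool) -> bool:
    # Exemption 1: no PESS provides functionality necessary for
    # Basic Safety or Essential Performance.
    # Exemption 2: risk management (clause 4.2) demonstrates that the
    # failure of any PESS does not lead to an unacceptable risk.
    exempt = (not any_pess_supports_bs_or_ep) or risk_mgmt_shows_pess_failure_acceptable
    return not exempt

# Software supports BS/EP and risk management has not shown failures
# to be acceptable -> the PEMS clauses apply.
pems_clauses_apply(True, False)   # True
# Risk management demonstrates PESS failures are acceptable -> exempt.
pems_clauses_apply(True, True)    # False
```

Note that the second exemption can apply even when software supports Basic Safety or Essential Performance, which is exactly the "or" that is easy to miss.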


“Requirements have been minimized to those that are essential to assuring Basic Safety and Essential Performance. This has been done in recognition of the extensive and growing literature in the fields of software assurance and Risk Assessment techniques as well as the rapid evolution of this discipline.”


Is this suggesting that clause 14 (PEMS) is only applicable to Basic Safety and Essential Performance? There are indications that some manufacturers are interpreting this to be the case, with or without completing risk analysis on the PEMS.


Interpretation of PEMS and Software Validation Driven by EP definition

There have been instances where clause 14 (PEMS) of the IEC60601-1 standard has been interpreted by medical device manufacturers to mean that a software development lifecycle and software validation are not required. The tendency of a manufacturer to exclude additional, non-required processes, with their added resource costs and time, is easy to understand. Unfortunately, manufacturers are often already constrained by time-boxed commitments and limited, dwindling budgets by the time they start looking at standard compliance. Additionally, many test houses are complicit, agreeing with manufacturers that if the software on a medical device cannot affect Basic Safety or Essential Performance then PEMS does not apply. This has led to cases, inside and outside the US, where manufacturers have excluded implementing a software development lifecycle and software validation, only to discover the consequences later.


Is a Software Development Lifecycle and Software Validation Important?

It is easy to agree with the premise that software validation for a medical device, where the safety of the patient is at risk, should be performed by the medical device manufacturer to provide adequate assurance that the software in the device is safe. But is following a standard and applying software development activities and validation always a good idea?


Advances in technology have gotten us to the point where software has become a more and more significant driving force in how all devices with hardware operate. There is literature, as well as our personal experience, to support that the practices of defining objectives, agreeing on requirements, planning tasks and activities, and analyzing and testing potential solutions lead to higher confidence in outputs that meet intended needs. With this in mind, we would not recommend complete exclusion of a software development lifecycle and development process, nor of some level of software validation, on any product, whether it is a medical device or not. Even companies that provide consumer products apply some software development process and testing; does complete exclusion for a medical device make sense?


Is the Process for Identifying EP the Same as Conducting Risk Analysis?

ISO14971, the industry-recognized standard for risk management, has been around since 2000. This standard requires systematic use of available information to identify hazards and to estimate the risk for medical devices. Typically, risk analysis is conducted by identifying hazards, including through an examination of potential systematic failures, so that the risks associated with the device can be evaluated. Examining systematic failures requires a bottom-up analysis, where known and previously unknown hazards and potential harms are evaluated and discovered.
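ISO14971 leaves the risk acceptability criteria to the manufacturer's own risk management plan. The following sketch, with entirely hypothetical severity and probability scales and an arbitrary threshold, illustrates the common matrix-style estimation of risk per identified hazard:

```python
# Hypothetical risk acceptability matrix. ISO14971 does not prescribe these
# scales or this threshold; each manufacturer defines its own criteria.
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

def risk_acceptable(severity: str, probability: str, threshold: int = 6) -> bool:
    """Risk index = severity x probability; acceptable if below threshold."""
    return SEVERITY[severity] * PROBABILITY[probability] < threshold

risk_acceptable("minor", "remote")     # index 4 -> acceptable
risk_acceptable("serious", "remote")   # index 6 -> not acceptable, needs risk controls
```

The point of the sketch is that every identified hazard gets an estimate, regardless of whether the function involved was ever declared Essential Performance.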


Basic Safety refers to physical hazards; this is defined in clause 3.10 of the 60601-1 standard and was discussed in the previous section of this article. The identification of Essential Performance aspects can be limited to those aspects that qualify as performance of a clinical function per clause 3.27 of IEC60601-1. Do the definitions of Basic Safety and Essential Performance together include consideration of all potential Hazards? Examples of Hazards provided in Annex E of ISO14971 include functional operational failures to be evaluated for risk, such as "Incorrect or inappropriate output or functionality" or "Erroneous data transfer". What makes this challenging for manufacturers is whether hazards such as these examples are evaluated (or even identified) by following IEC60601-1, particularly if these failures do not fall under the definition of Essential Performance or Basic Safety. In some cases manufacturers are limiting the identification of risk controls when the function or characteristic is not defined as Essential Performance or Basic Safety, and may not be supporting these conclusions through the risk management process to assure their product is safe.


Is the FDA Using EP and is it the same as “essential to the proper functioning of the device”?

The FDA design control regulation refers to a process of identifying characteristics that are essential for the proper function of a medical device. This regulation and its guidance were written and in place before the IEC60601-1 standard defined Essential Performance. The FDA Design Control Regulation was put in place in 1996 and states in 21 CFR 820.30(d) that:


“Design output procedures shall contain or make reference to acceptance criteria and shall ensure that those design outputs that are essential for the proper functioning of the device are identified.”


The obvious conclusion is that Essential Performance of a clinical function and what is essential to the proper functioning of the device should both be outputs of risk analysis, although they may result from risk analysis focused at different points in the system hierarchy. For example, the evaluation of risks related to clinical functionality is closer to the inputs, at the top of the system architecture, whereas the evaluation of risks of device system or software functionality is closer to the outputs, at a lower level of the design.


The FDA clears medical products through the premarket notification process, much like a notified body provides medical product manufacturers a CE mark. The FDA provides guidance for what the manufacturer should submit. For medical devices marketed in the US the regulation requires medical device software validation and risk analysis per 21 CFR 820.30(g). The FDA regulation is less prescriptive in terms of what the process is for risk analysis. The FDA Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices, issued in 2005, is more prescriptive on what documents are submitted for the medical device submissions. This guidance requires submissions to contain some level of documentation for hazard analysis and software validation for ALL medical devices that contain software, for all levels of concern.  The FDA General Principles of Software Validation, issued in 2002, defines what is expected for software validation.


The FDA General Principles of Software Validation does provide a provision for the scale of the software validation effort in that they say that:


“The resultant software validation process should be commensurate with the safety risk associated with the system, device, or process.”


This statement in the FDA guidance is similar to what IEC60601-1 is getting at with identifying Essential Performance characteristics. Both are derived through an evaluation of risk, although the Essential Performance characteristics are limited to the "performance of a clinical function" per its definition. Clearly, risk analysis and the application of a software development lifecycle, including software validation, are not something the FDA expects manufacturers to exclude from their processes, or from submissions for clearance to market.


The FDA’s expectations of voluntary standards and performance standards will be further discussed in the next section of this article series.  We would be interested in your own experiences with essential performance and if you have had issues with CE marking process or the FDA.



Medical Device Validation Series. Part 2 of 4

Part 2, Validation: More Than Testing

This is part two of a four-part series on medical device validation practices.

A common misperception is that validation of software is synonymous with the testing of software. This is not at all accurate.

Federal regulation requires software validation, not software testing. Validation, by the FDA’s definition, is the “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.”

Certainly, testing activity may be a component of validation, but note that the definition above does not use the word “test” at all. In fact, the definition mentions specifications and requirements specifically, assuming they exist and therefore creates a de facto linkage between validation and requirements.

The GPSV describes at length the definitions of, and differences between, software validation and software verification. Only a few of the related activities would be considered test activities. Similarly, verification activities, though narrower in scope, involve reviews, evaluations, and testing activities.

Keep in mind that all verification and test activities are validation activities, with other activities making up the remainder. Some testing is considered a verification activity, but there are verification activities that are not testing activities, and there is also testing that is not verification testing. Stay mindful that validation is not the same as testing.

From: Vogel, David A., Ph.D. “Validating Medical Device Software Includes and Goes Beyond Testing.” Medical Product Outsourcing Mar. 2006.

Build The Cybersecurity IN!


by: Gary Girzon

“Build the Quality In” is one of the pillars of lean manufacturing, as first introduced by Dr. W.E. Deming and perfected by the Toyota Production System. The same practice should be applied to medical device cybersecurity. “Build the Cybersecurity In”, or face the consequences, as recently disclosed in an FDA warning letter to St. Jude Medical (now owned by Abbott) on April 12, 20171. The letter is the latest chapter in issues concerning St. Jude implantable cardioverter defibrillators (pacemakers) and their wireless monitors. The full story continues to be well documented. Here’s a chronology of the key events: 

August 25, 2016 Muddy Waters, a hedge fund specializing in short stock positions, and MedSec, a medical device security firm, publish a report4 describing several vulnerabilities in the St. Jude devices, such as 

  • Debugging and development capabilities left on the device; 
  • Lack of encryption and authentication in the communication protocol between the implanted and external devices; 
  • Examples of ability to remotely crash the device and drain the battery 

MedSec claimed it had no choice but to go public in a joint press release with Muddy Waters since St. Jude had already known about these vulnerabilities and failed to address them.   
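One of the vulnerabilities alleged above is the lack of authentication in the communication protocol between the implanted and external devices. As a minimal, hypothetical sketch (not St. Jude's actual protocol; key provisioning, replay protection, and encryption are out of scope), message authentication with a shared key and HMAC-SHA256 would look like this:

```python
import hashlib
import hmac
import os

# Hypothetical sketch: authenticating a command sent from an external
# programmer to an implanted device. Names and message formats are invented
# for illustration only.

def sign_command(key: bytes, command: bytes, nonce: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the nonce and command bytes."""
    return hmac.new(key, nonce + command, hashlib.sha256).digest()

def verify_command(key: bytes, command: bytes, nonce: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    expected = sign_command(key, command, nonce)
    return hmac.compare_digest(expected, tag)

key = os.urandom(32)    # shared secret, provisioned out of band
nonce = os.urandom(12)  # fresh per message, to resist replay
tag = sign_command(key, b"SET_RATE:70", nonce)

verify_command(key, b"SET_RATE:70", nonce, tag)    # genuine command accepted
verify_command(key, b"SET_RATE:200", nonce, tag)   # tampered command rejected
```

Even this toy example shows why "build the cybersecurity in" matters: retrofitting authentication onto a deployed communications protocol is far harder than designing it in.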

August – September 2016 – St. Jude refutes the Muddy Waters claims5. Muddy Waters responds to St. Jude6. University of Michigan cybersecurity researchers find flaws in MedSec findings7. St. Jude Medical sues Muddy Waters and MedSec8. 

October 2016 FDA issues a “Safety Communication” on Premature Battery Depletion in St. Jude devices but recommends continued usage of devices9. FDA notes the investigation of cybersecurity allegations. Muddy Waters and MedSec release additional evidence (videos showing emergency shock, vibration and disabling of features) of how the devices can be compromised10, and St. Jude in turn responds11. MedSec hires an independent expert witness (Bishop Fox) to corroborate the original findings12. 

January 9, 2017 St. Jude releases a software update with cybersecurity fixes13. FDA issues another “Safety Communication” affirming the software update, with the recommendation that “health benefits to patients… outweigh the cybersecurity risks”14. ICS-CERT, a team within the NCCIC division of US Department of Homeland Security, issues an updated advisory on a “man-in-the-middle” vulnerability which has been mitigated by the software update15. 

April 12, 2017 FDA issues a “Warning Letter” to Abbott (St. Jude) describing several reliability and security problems, notably: 

  • St. Jude did not “confirm all corrective and preventative actions were completed, including full root cause investigation of actions to correct and prevent recurrence of potential cybersecurity vulnerabilities” 
  • St. Jude failed to perform a full verification of network port vulnerability – no threat testing was performed using an unauthorized interface. 
  • St. Jude did not fully address and incorporate a 3rd party security assessment commissioned in 2014 as part of its internal risk assessments. The 3rd party identified a “hardcoded universal unlock code as an exploitable hazard”.    

The FDA letter also noted several battery related issues not related to cybersecurity. Abbott’s initial response acknowledged the FDA observations – the company has 15 days to fully respond. Meanwhile, any such class III devices will not be approved for sale to the public. 

One hopes that St. Jude / Abbott, and other medical device companies, learn from this story. It may be that Abbott had addressed some of the issues noted by the FDA, but was incomplete in testing (such as not doing “negative” testing) and in disclosing the changes (perhaps the “hardcoded universal unlock code” was removed). It may be that the cybersecurity fixes are more complex than originally thought, as can be the case when dealing with a communications protocol. Clearly, St. Jude did not follow the FDA premarket guidance on cybersecurity16 and design it in at the beginning of product creation. And by not acknowledging the full extent of the issues found by Muddy Waters / MedSec early on, St. Jude at best created the perception of a cover-up. Even if no patients have been harmed by a lapse in security so far, it is the possibility that alarms the medical and patient communities. The FDA has recently released guidance for postmarket management of cybersecurity in medical devices17, in which collaboration and communication among medical device manufacturers, cybersecurity researchers, clearinghouses, and the public to disclose vulnerabilities are deemed integral to managing such threats. To be safe and secure, design the cybersecurity in at the start of the product lifecycle, model the threats, and be prepared for new vulnerabilities.

Medical Device Validation Series. Part 1 of 4

Medical Device Validation Series. This is part one of a four-part series on medical device validation practices.

Part 1, Introduction

The Food and Drug Administration (FDA) pays special attention to software because it is now embedded in a large percentage of electromedical devices, and the amount of device functionality controlled by software is continually growing. Software also controls many of a medical device manufacturer’s design, development, manufacturing, and quality processes, regardless of whether software is a part of the manufactured device.

Software failures often can be invisible and difficult to detect; thus, these failures can have disastrous consequences on the operation or quality of medical devices. For this reason, the FDA specifically requires validation of both device and quality-system automation software. Validation activities are meant to keep defects from getting into the software, as well as to detect and correct any defects that do end up in the software.

The FDA’s control over software used by medical device manufacturers is detailed in the Quality System Regulations (QSRs) found in FDA regulation 21 CFR 820. Software regulations focus on the development and use of two large categories: (1) software that is part of the device being manufactured and (2) software that is used to design, develop and manufacture the product or otherwise automate any part of the quality system.

Guidelines for complying with the FDA’s regulations are published by the agency as “Guidance Documents”. These documents are updated periodically, and new guidance is issued as the need arises. While compliance with the guidelines is voluntary, the device manufacturer should be prepared to explain and defend any deviation from the guidances.

The most important FDA guidance available for the validation of software is the General Principles of Software Validation (GPSV), which can be obtained for free from the FDA’s website (…/ucm085371.pdf). This is a “must read” for all software engineers and quality engineers working with software in the medical device industry.

From: Vogel, David A., Ph.D. “Validating Medical Device Software Includes and Goes Beyond Testing.” Medical Product Outsourcing Mar. 2006

The Essential Performance SNAFU

The Essential Performance SNAFU:

Defining Essential Performance for a medical device is a challenge, and the definition is especially challenging and often confusing when software is part of the product. This article series is intended to share what is happening in the industry once Essential Performance is defined, and to provide some guidance on how to address compliance and safety while avoiding the SNAFUs we have observed in this process.

The compliance community has been using Essential Performance (EP) to focus testing and evaluation of medical products since the 3rd edition of IEC60601-1 (published in 2005). Some medical device manufacturers are fortunate enough to have their Essential Performance defined for them by Particular standards, where these are available for their medical device. Those less fortunate, who must define this for their products, are often challenged to determine what is essential. Often the manufacturer is left confused, and industry test houses are not helping manufacturers make decisions to assure their device is safe when it comes to complex devices containing software that have no defined EP, or where the aspects of the software that constitute EP have not been defined or understood.

Part 1: IEC60601-1 Essential Performance (EP) and Basic Safety (BS) Definitions and Origins

When did EP become required and why?

The IEC issued the 3rd edition of IEC60601-1 in 2005. This 3rd edition was driven by the committee’s desire to address the perceived end users’ need to ensure that Basic Safety and Essential Performance be considered together in one standard for medical devices. This recognition meant that separate handling of basic safety and performance, as is the case with standards for other (non-medical) equipment, would be ineffective in addressing the hazards resulting from inadequate design of medical equipment.

What is the definition of EP and its impact?

The IEC60601-1 standard defines measures to apply in the design and evaluation of a medical device intended to provide a degree of confidence the device operates safely.  Specifically, the measures in the standard are intended to support that the device demonstrates Basic Safety and meets its Essential Performance during operation. Basic safety refers to controlling the risk of physical hazards such as electrocution, burns or other physical injuries that the device could cause.  Essential Performance relates to controlling the risk of operational hazards that can arise if the device does not operate within performance expectations. Manufacturers are directed per IEC60601-1 to determine what the Essential Performance characteristics are for their medical device, through analysis and an understanding of risks.

Once the Essential Performance for a medical device is defined, many clauses in the standard identify specific handling and testing of the device to ensure that the device operation does not impact this defined essential performance such that a potential hazard can be created. The definitions provided in the standard are listed below:

Essential Performance is defined per IEC60601-1 section 3.27 as:

“Performance of a clinical function, other than that related to Basic Safety, where loss or degradation beyond the limits specified by the manufacturer results in an unacceptable Risk.

Note Essential Performance is most easily understood by considering whether its absence or degradation would result in an unacceptable Risk.”

Basic Safety is defined by clause 3.10 of IEC 60601-1 as:

“Freedom from unacceptable risk directly caused by physical hazards when ME Equipment is used under Normal Condition and Single Fault Condition”

IEC60601-1 is referred to as the General standard. Within the series, there are collateral standards and particular standards. Collateral standards are semi-horizontal standards that cover other pertinent topics such as EMC, usability, or alarms. Particular standards are standards with additional applicable clauses based on the type of medical device, such as infusion pumps or hemodialysis equipment. Not all devices have a Particular standard. Particular standards include a “-2” in their designation and are often referred to as “dash 2” standards. The Particular standard provides aspects of Essential Performance to be considered in the design and evaluation.

If there is no dash 2 standard available for the class of device, the manufacturer must define for themselves what Essential Performance is for their medical device. The definition (in clause 3.27 of IEC60601-1) guides manufacturers and test houses to conclude that all unacceptable RISK is characterized by Essential Performance, the collection of device functions identified as “clinical functions”, together with Basic Safety. An example would be an infusion pump’s delivery accuracy.
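For an Essential Performance characteristic like delivery accuracy, the manufacturer specifies limits, and loss or degradation beyond those limits is what the EP definition flags as a potential unacceptable risk. A minimal sketch (the ±5% tolerance and rates below are invented for illustration, not taken from any standard or device specification):

```python
# Hypothetical sketch of checking an EP characteristic (infusion delivery
# accuracy) against manufacturer-specified limits. Tolerance is illustrative.

def within_ep_limits(programmed_rate_ml_h: float,
                     measured_rate_ml_h: float,
                     tolerance: float = 0.05) -> bool:
    """True if the measured delivery rate is within the specified tolerance
    of the programmed rate; degradation beyond this limit would be evaluated
    for unacceptable risk per the EP definition."""
    error = abs(measured_rate_ml_h - programmed_rate_ml_h) / programmed_rate_ml_h
    return error <= tolerance

within_ep_limits(100.0, 103.0)   # 3% error -> within EP limits
within_ep_limits(100.0, 110.0)   # 10% error -> EP degraded beyond limits
```

The limits themselves come from risk analysis: they mark the point at which degraded performance becomes an unacceptable risk to the patient.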

The definition of Essential Performance and the guidance provided to identify Essential Performance in the standard has posed challenges for manufacturers. This challenge becomes evident in asking this question: Can there be a functional failure that is not of a clinical function and also is not related to Basic Safety? To answer this question, a risk analysis of the final product and its software is necessary. Shortcutting this question and necessary risk analysis would put medical device users and patients at risk.

One topic of particular concern to medical device manufacturers whose devices contain software is: “Does Essential Performance impact what we need to do for software?”. Certainly, if risk analysis identifies failures of software that can lead to a risk to patients or users, then software practices such as those in IEC62304 can be applied, as well as software validation. However, there exists a perceived loophole in IEC60601-1 that may lead manufacturers to conclude, based on the definition of the device’s Essential Performance, that they do not need to include software validation nor comply with IEC62304 during software development. This perceived loophole rests entirely on what manufacturers choose to declare as Essential Performance, or more importantly, what they choose not to declare as Essential Performance. We will further discuss this loophole and how to define Essential Performance for a device in a future part of this article series.

The CE Mark, the MDD (soon to be MDR) and Software Validation?

In the European Union (EU), manufacturers need a CE mark to legally market their medical devices. To receive a CE mark, the device needs to be certified to internationally recognized standards identified as part of the manufacturer’s technical file. IEC60601-1 is a typical standard referenced for medical electrical equipment and systems, as is ISO14971, the standard typically utilized for medical device risk analysis. Manufacturers are also obligated to determine which Medical Device Directive (MDD) is applicable to them and certify compliance with it, which includes having processes in place to meet the directive. IEC60601-1 and ISO14971 both include clauses addressing software. The medical device directive takes precedence over voluntary standards. It is important to recognize that the current Medical Device Directive, Council Directive 93/42/EEC in M5 clause 12.1a, and the new Medical Device Regulation, defined in Annex I, clause 17.21, both require software validation for medical devices (regardless of what is Essential Performance for the device).

Medical Device Cybersecurity Introduction and Attack Types


Medical device manufacturers have been aware of the importance of Cybersecurity for several years and the FDA has been publishing guidance on it since 2005.

The responsibility for cybersecurity is shared among the various stakeholders one of whom is the medical device manufacturer. Device manufacturers have recognized both the business and regulatory imperatives to address this through the design and development process.

The FDA has indicated through its cybersecurity guidance documents and a publication that it considers the responsibility of manufacturers to address cybersecurity to be implied in the Quality System Regulations (QSRs): specifically, the need for manufacturers to assess cybersecurity risk and develop product requirements to address any vulnerabilities as part of software validation and risk analysis.

Several of the FDA guidance documents recommend that device manufacturers develop a set of controls to both assess and maintain functionality and safety in the presence of cybersecurity threats. This assessment is recommended to start during the development and encompasses the definition of “design inputs” related to cybersecurity.

Like other risk management perspectives, both business and product, cybersecurity needs to be considered throughout the development process.

To understand how a company should consider approaching these regulatory requirements it is helpful to start by understanding the potential motivations and nature of the attacks seen to date. Although we should anticipate that new attack methods will be developed, minimally we need to address known methods.
