Proposed EU AI Act’s application to medical devices
The recitals of the proposal for a Regulation laying down harmonised rules on artificial intelligence (the “AI Act”) state that “By improving prediction, optimising operations and resource allocation … the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes”, in particular in the area of healthcare.[1]
At the same time, the European Parliamentary Research Service has highlighted that the use of AI in healthcare poses a number of clinical, social and ethical risks, particularly with regard to medical devices including software as a medical device.[2]
In order to balance those risks and advantages, the proposed AI Act sets out rules regulating so-called ‘AI systems’ according to their capacity to cause harm to society, following a ‘risk-based’ approach.
To that end, the proposed AI Act sets out strict rules for the use of what are termed ‘high-risk’ AI systems, ie AI systems that meet two cumulative conditions:
- the AI system is “intended to be used as a safety component of a product, or the AI system is itself a product” subject to the EU harmonisation legislation listed in Annex II of the proposed AI Act (including notably Regulation (EU) 2017/745 of 5 April 2017 on medical devices (the “MDR”) and Regulation (EU) 2017/746 of 5 April 2017 on in vitro diagnostic medical devices (the “IVDR”)); and
- the product, or the AI system as a product, “is required to undergo a third-party conformity assessment, with a view to the placing on the market or putting into service” pursuant to such EU harmonisation legislation (Article 6).
Given the reach of that definition, a significant proportion of AI systems used in medical devices (classes IIa, IIb and III) and in vitro diagnostic medical devices (class D) are likely to be captured by the proposed AI Act, since devices in those classes must undergo a conformity assessment involving a notified body, ie a third party.
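For illustration only, the Article 6 test described above can be read as two conditions that must both hold. The Python sketch below is a hypothetical simplification: the function name and class set are ours, the set mirrors only the device classes named in this article, and it ignores the exemptions and nuances of the actual conformity assessment routes.

```python
# Device classes whose conformity assessment involves a notified body
# (a third party), as cited in this article; illustrative, not exhaustive.
THIRD_PARTY_ASSESSED_CLASSES = {"IIa", "IIb", "III",  # MDR
                                "D"}                  # IVDR

def is_high_risk_ai_system(safety_component_or_product: bool,
                           device_class: str) -> bool:
    """Both conditions are cumulative: Annex II coverage (here, MDR/IVDR)
    and a required third-party conformity assessment."""
    return (safety_component_or_product
            and device_class in THIRD_PARTY_ASSESSED_CLASSES)

assert is_high_risk_ai_system(True, "IIb")    # eg AI software that is a class IIb device
assert not is_high_risk_ai_system(True, "I")  # class I devices are generally self-assessed
```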
Once the AI Act applies, providers, deployers, importers and distributors of medical devices qualifying as high-risk AI systems will be subject, in addition to their existing obligations under the MDR and IVDR, to a raft of new requirements, including:
- Establishing, implementing, documenting and maintaining a risk management system and, for providers of such systems, implementing a quality management system;
- Training models on the basis of training, validation and testing data sets that meet certain quality criteria;
- Drawing up, and keeping up to date, technical documentation;
- Ensuring that the system is capable of automatically recording logs over the duration of its lifetime (a minimal sketch follows this list);
- Ensuring transparency sufficient to enable deployers to interpret the system’s output and to use it appropriately and, for providers of AI systems intended to interact directly with natural persons, ensuring that such systems inform the persons concerned that they are interacting with an AI system, unless this is obvious;
- Ensuring effective oversight by natural persons throughout the system’s lifecycle; and
- Ensuring that the system achieves an appropriate level of accuracy, robustness, and cybersecurity.
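To make the logging requirement concrete, here is a minimal Python sketch of an append-only audit log for an AI-assisted device. It is an assumption-laden illustration: the file name, field names and function are hypothetical, and the AI Act does not prescribe any particular format.

```python
import json
import time

def log_inference_event(log_path: str, model_version: str,
                        input_ref: str, output_ref: str) -> None:
    """Append one timestamped inference record to an audit log file."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,  # traceability across the lifetime
        "input_ref": input_ref,          # a reference, not raw patient data
        "output_ref": output_ref,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: record one output of a diagnostic-support model.
log_inference_event("audit.log", "v1.2.0", "study-001", "finding-042")
```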
In addition, deployers of high-risk AI systems that are bodies governed by public law or private operators providing public services (eg clinics and hospitals) will be required to perform an assessment of the impact of the system’s use on fundamental rights.
Non-compliance by providers of high-risk AI systems will be subject to administrative fines of up to 15 million euros or, if the offender is a company, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
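The fine ceiling is a simple ‘higher of’ rule. A worked example in Python, assuming a hypothetical company with 2 billion euros in worldwide annual turnover:

```python
def max_fine_eur(worldwide_turnover_eur: float) -> float:
    """Ceiling for provider non-compliance: the higher of EUR 15 million
    and 3% of total worldwide annual turnover (preceding financial year)."""
    return max(15_000_000.0, 0.03 * worldwide_turnover_eur)

print(max_fine_eur(2_000_000_000))  # 60000000.0, since 3% exceeds 15 million
```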
Beyond the penalties set out in the proposed AI Act itself, Member States will need to lay down penalties that are “effective, proportionate and dissuasive”, as well as other enforcement measures in case of infringement.
The proposed AI Act was approved by the Council of the EU’s Committee of Permanent Representatives (Coreper) on 2 February 2024 and was endorsed by the European Parliament’s civil liberties and internal market committees on 13 February 2024. A plenary vote in the European Parliament is anticipated in April 2024.
As the text of the future AI Act moves closer to adoption, entities active in the medical device sector or involved in deploying medical devices would be well advised to get a head start on the new EU rules applicable to AI systems, and on the national provisions that will quickly follow, in order to avoid interruptions to their day-to-day operations.
Jean-Baptiste Chanial
FIDAL, Senior Partner
Working Group Healthcare & Life Science
Ruslan Churches
FIDAL, Senior Associate
Working Group Healthcare & Life Science
[1] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final, 21 April 2021.
[2] ‘Artificial intelligence in healthcare: Applications, risks, and ethical and societal impacts’, European Parliamentary Research Service, Scientific Foresight Unit, PE 729.512, June 2022.