unyer

Payback on sales of medical devices

Pending litigation in Italy

 

Around two thousand claims were brought before the Italian Administrative Court of Rome against the Ministerial and Regional Decrees which, implementing Article 9-ter of Legislative Decree No. 78/2015, required suppliers of medical devices, many years later (at the end of 2022), to pay an amount corresponding to the percentage incidence of their sales to the Regional Healthcare Service (Servizio Sanitario Regionale), in order to contribute to covering the regional governments’ public expenditure on medical devices in excess of a certain limit (as identified by the Ministerial Decree of 6 July 2022) for FYs 2015, 2016, 2017 and 2018.
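
By way of a purely illustrative sketch of that mechanism, a single supplier’s payback is essentially its percentage incidence on regional sales, applied to whatever portion of the overspend the decrees charge to suppliers; the cost-sharing rate and all figures below are assumptions made for illustration, not values taken from the decrees.

```python
# Purely illustrative: the cost-sharing rate and every figure below are
# assumptions made for this sketch, not values taken from the decrees.

def payback_due(supplier_sales_eur: float,
                total_regional_sales_eur: float,
                overspend_eur: float,
                suppliers_share_of_overspend: float) -> float:
    """A supplier's payback: its percentage incidence on regional sales,
    applied to the portion of the overspend charged to suppliers."""
    incidence = supplier_sales_eur / total_regional_sales_eur
    return incidence * suppliers_share_of_overspend * overspend_eur

# e.g. a supplier with 2% of a region's sales, a EUR 100m overspend,
# and (hypothetically) half of the overspend charged to suppliers
print(payback_due(2_000_000, 100_000_000, 100_000_000, 0.5))  # 1,000,000.0
```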

The total amount due is about two billion euros, a prohibitive sum for the medical device companies concerned.

The main arguments raised in these claims refer, inter alia, to:

  1. the violation of the constitutional principles of reasonableness, proportionality and transparency;
  2. the impossibility for private companies to know and quantify, in terms of provisions and/or potential liabilities, the excess public expenditure;
  3. the lack of transparency regarding the list of suppliers, the uniformity of the products and the figures.

In the second half of 2023, the Administrative Court of Lazio issued interim decisions in almost every pending claim, suspending the challenged measures until the final decision on the merits.

In the meantime, the same Court published an interim decision referring to the Constitutional Court the question of the legitimacy of the payback system provided for by Legislative Decree No. 78/2015.

Therefore, the outcome of all the claims raised before the Court is still uncertain and will depend on the decision of the Constitutional Court, which is expected by the end of 2024.

That is not all. Companies manufacturing and distributing medical devices now face further difficulties in running their business.

A Ministerial Decree published in the Official Journal on 9 February 2024, which implements EU Regulations 2017/745 and 2017/746 and European Delegation Law No. 53/2021 and establishes the “medical device government financing system”, provides for the payment of an annual share of 0.75% of companies’ turnover, net of VAT, deriving from sales of medical devices to the National Health Service.

Many of the arguments and objections of constitutional illegitimacy raised in the payback litigation could support further claims against these rules and regulations.

Consequently, further initiatives are expected from the companies involved to protect profit margins already seriously jeopardized by the payback system, with the additional risk that inevitable increases in bid prices would translate into greater regional public spending on medical devices and further difficulties in guaranteeing an efficient health service to citizens.

 

Ermanno Vaglio
Pirola Pennuto Zei & Associati, Associate Partner
Working Group Healthcare & Life Science

Proposed EU AI Act’s application to medical devices

The recitals of the proposal for a Regulation laying down harmonised rules on artificial intelligence (the “AI Act”) state that “By improving prediction, optimising operations and resource allocation … the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes”, in particular in the area of healthcare.[1]

At the same time, the European Parliamentary Research Service has highlighted that the use of AI in healthcare poses a number of clinical, social and ethical risks, particularly with regard to medical devices including software as a medical device.[2]

In order to balance those risks and advantages, the proposed AI Act sets out rules that will regulate so-called ‘AI systems’ following a ‘risk-based’ approach, ie according to their capacity to cause harm to society.

To that end, the proposed AI Act sets out strict rules for the use of what are termed ‘high-risk’ AI systems, ie AI systems that:

  • are “intended to be used as a safety component of a product, or the AI system is itself a product” that is subject to the EU harmonisation legislation listed in Annex II of the proposed AI Act (notably including Regulation 2017/745 of 5 April 2017 on medical devices (the “MDR”) and Regulation 2017/746 of 5 April 2017 on in vitro diagnostic medical devices (the “IVDR”)); and
  • where the product, or the AI system as a product, “is required to undergo a third-party conformity assessment, with a view to the placing on the market or putting into service” pursuant to such EU harmonisation legislation (Article 6).

Given the reach of that definition, a significant percentage of AI systems used in medical devices (classes IIa, IIb and III) and in vitro diagnostic medical devices (class D) are likely to be captured by the proposed AI Act.

Thereafter – in addition to their existing obligations under the MDR and IVDR – providers, deployers, importers and distributors of medical devices qualifying as high-risk AI systems will be subject to a raft of new requirements, including:

  • Establishing, implementing, documenting and maintaining a risk management system and, for providers of such systems, implementing a quality management system;
  • Developing systems that involve the training of models with data on the basis of training, validation and testing data sets that meet certain quality criteria;
  • Drawing up and keeping up to date the technical documentation;
  • Ensuring the capability of automatic recording of logs over the duration of the system’s lifetime;
  • Ensuring sufficient transparency to enable deployers to interpret the system’s output and to use it appropriately and, for providers of AI systems intended to interact directly with natural persons, ensuring that such systems inform the persons concerned that they are interacting with an AI system, unless this is obvious;
  • Ensuring effective oversight by natural persons throughout the system’s lifecycle; and
  • Ensuring that the system achieves an appropriate level of accuracy, robustness, and cybersecurity.

In addition, deployers of high-risk AI systems that are bodies governed by public law or private operators providing public services (such as clinics and hospitals) will be required to perform an assessment of the impact of the system’s use on fundamental rights.

Non-compliance by providers of high-risk AI systems shall be subject to administrative fines of up to 15 million euros or, if the offender is a company, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
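
As a minimal illustration of that “whichever is higher” ceiling (the function name and the example turnover figure are ours, purely for illustration):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for providers of high-risk AI
    systems as described above: EUR 15 million or 3% of total worldwide
    annual turnover for the preceding financial year, whichever is higher."""
    return max(15_000_000, 0.03 * worldwide_annual_turnover_eur)

# e.g. for a company with EUR 2 billion worldwide turnover,
# the 3% limb (EUR 60 million) applies
print(max_fine_eur(2_000_000_000))
```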

Beyond the penalties set out in the proposed AI Act, Member States will need to lay down penalties that are “effective, proportionate, and dissuasive”, as well as other enforcement measures, in the event of infringement.

The proposed AI Act was approved by the Council of the EU’s Committee of Permanent Representatives on 2 February 2024 and was endorsed by the European Parliament’s civil liberties and internal market committees on 13 February. The full European Parliament plenary vote is anticipated in April this year.

As the text of the future AI Act moves closer to being legislated, entities active in the medical device sector or involved in deploying medical devices would be well-advised to get a head start on the new EU rules applicable to AI systems – and the national provisions that will quickly follow – in order to avoid interruptions to their day-to-day operations.

 

Jean-Baptiste Chanial
FIDAL, Senior Partner
Working Group Healthcare & Life Science

Ruslan Churches
FIDAL, Senior Associate
Working Group Healthcare & Life Science

 

[1] Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.
[2] ‘Artificial intelligence in healthcare: Applications, risks, and ethical and societal impacts’, European Parliamentary Research Service, Scientific Foresight Unit, PE 729.512, June 2022.

Revolutionising Healthcare and Life Science Supply Chains with Metaverse Technology

The Healthcare and Life Science sector is currently facing numerous supply chain challenges arising from the shortage of materials, increased costs, and staff shortages due to the COVID-19 pandemic, wars, and other ongoing crises.

It is now more crucial than ever to address these challenges, and one way to do so is by utilising new technologies such as Artificial Intelligence (AI). Intelligent workflows have been shown to assist supply chain managers effectively, and incorporating AI into the supply chain can make it more effective and reliable. The implementation of AI can lead to a digital supply chain that responds automatically to a crisis, based on its programmed control logic. For example, if inventory levels fall below a particular value, the AI can perform predictive ordering by checking networked databases on prices, delivery terms and general terms and conditions. Once the AI places an order, the supplier’s AI can confirm it after checking inventory and production capacity.
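
A minimal sketch of that threshold-triggered ordering and counterpart confirmation; the data structures, figures and supplier names are hypothetical.

```python
from dataclasses import dataclass

# All names, data structures and figures here are hypothetical; the code only
# sketches the threshold-triggered ordering and counterpart confirmation
# described in the text.

@dataclass
class SupplierQuote:
    name: str
    unit_price_eur: float
    delivery_days: int
    available_stock: int

def place_predictive_order(inventory: int, reorder_point: int, order_qty: int,
                           quotes: list[SupplierQuote]) -> SupplierQuote | None:
    """Buyer-side AI: when inventory falls below the reorder point, pick the
    cheapest supplier able to cover the required quantity."""
    if inventory >= reorder_point:
        return None  # nothing to order yet
    eligible = [q for q in quotes if q.available_stock >= order_qty]
    return min(eligible, key=lambda q: q.unit_price_eur, default=None)

def confirm_order(order_qty: int, supplier_stock: int,
                  daily_capacity: int, lead_time_days: int) -> bool:
    """Supplier-side AI: confirm only if current stock plus production
    capacity within the lead time covers the order."""
    return supplier_stock + daily_capacity * lead_time_days >= order_qty

quotes = [SupplierQuote("A", 4.20, 5, 800), SupplierQuote("B", 3.90, 7, 1200)]
chosen = place_predictive_order(inventory=120, reorder_point=500,
                                order_qty=1000, quotes=quotes)
if chosen:
    confirmed = confirm_order(1000, supplier_stock=600, daily_capacity=100,
                              lead_time_days=chosen.delivery_days)
    print(chosen.name, confirmed)  # B True
```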

Metaverse technology can further improve the digital supply chain through “predictive maintenance”, which monitors the performance and condition of equipment and assets and reduces the chances of failure.
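
Along the same lines, a minimal sketch of the condition-monitoring logic behind predictive maintenance; the sensor fields and limits are hypothetical, not taken from any standard or product.

```python
# A minimal sketch of the condition-monitoring idea behind predictive
# maintenance; the sensor fields and limits are hypothetical.

def needs_maintenance(vibration_mm_s: float, temperature_c: float,
                      operating_hours: float) -> bool:
    """Flag an asset for maintenance when monitored readings drift past
    illustrative limits, before an outright failure occurs."""
    return (vibration_mm_s > 7.1
            or temperature_c > 85.0
            or operating_hours > 8000)

print(needs_maintenance(vibration_mm_s=8.3, temperature_c=72.0,
                        operating_hours=5200))  # True
```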

However, the adoption of AI technology calls for appropriate regulation to create a legal framework that ensures legal certainty: Who concludes the contract in an automated ordering process between two AI systems? Is the AI an ‘e-person’ with legal capacity? What is the content of the contract? These questions require clear answers, as an AI does not weigh divergences in the contract as an experienced lawyer would. It is even more concerning when an AI makes incorrect declarations due to technical defects or programming errors.

To mitigate these issues, the European Union is currently developing an AI law to ensure that AI systems in the European Union are safe, transparent, traceable, non-discriminatory, and environmentally friendly. To prevent harmful consequences, the European Parliament advocates oversight of AI systems by humans rather than by automated mechanisms. Furthermore, the European Parliament is pushing for a technology-neutral, uniform approach to AI systems that can also be applied to future systems.

Such a legal framework could resolve the legal uncertainties that may arise from the use of AI in the supply chain. In December 2023, the European Parliament reached a provisional agreement with the Council of the EU on the AI Act. The agreed text will now have to be formally adopted by both the European Parliament and the Council to become EU law.

 

Dr. Christoph von Burgsdorff, LL.M.
Luther Lawfirm, Partner
Industry Group Healthcare & Life Science

Luisa Kramer
Luther Lawfirm, Associate
Industry Group Healthcare & Life Science