AI Act: New Perspectives for the Healthcare Sector – Current Implementation Status
With the adoption of Regulation (EU) 2024/1689, known as the AI Act, the European Union has for the first time established a systematic and binding framework for artificial intelligence, becoming a pioneer in the regulation of a technology that is bound to profoundly affect the economy, fundamental rights and essential services. One of the sectors most affected by the new regulatory framework is healthcare, which has been included among the “high-risk” sectors.
The AI Act adopts a risk-based approach, scaling obligations according to the potential impact of artificial intelligence systems on different sectors. Healthcare AI applications fall within the category of high-risk systems, as they directly affect diagnosis, treatment and clinical decision-making and, ultimately, the life and physical integrity of patients.

This category includes, first of all, AI-based safety components integrated into medical devices, such as those used in robot-assisted surgery. But there is more: high-risk AI systems also include systems intended to be used by, or on behalf of, public authorities to assess the eligibility of natural persons for essential public services, including healthcare services, and to grant, reduce, revoke or reclaim such services. Finally, the category covers AI systems used to evaluate and classify emergency calls, to dispatch first response services or establish dispatch priorities – including for the police and fire brigade – as well as patient triage systems in emergency healthcare.
The Regulation lays down a number of strict obligations that high-risk systems must fulfil before being placed on the market or put into service. These include, in particular:
- The adoption of risk management and mitigation systems, to prevent harmful effects on health;
- The use of high-quality representative and unbiased data sets, to reduce the risk of discriminatory or clinically unreliable results;
- The logging of events to ensure the traceability of the system’s operation, enabling ex-post checks and audits;
- Detailed technical documentation, to enable the competent authorities to assess the compliance of AI systems;
- Transparency obligations vis-à-vis users (e.g., healthcare facilities and operators);
- Adequate human oversight measures, to prevent blind reliance on automated decisions;
- High robustness, accuracy and cybersecurity standards.
In a scenario where digital medicine and the use of artificial intelligence are bound to grow rapidly, the AI Act constitutes a crucial stepping stone toward more efficient, responsible and transparent healthcare, compliant with EU values.
An initial set of general rules, contained in Chapters I and II, became applicable on 2 February 2025. Particularly important is the obligation for providers and deployers of AI systems to take measures ensuring a sufficient level of AI literacy among the personnel involved in operating and using the systems, taking into account their technical knowledge, experience and work context, as well as the persons the systems are intended to affect. Moreover, the Regulation introduced a list of prohibited AI practices considered incompatible with fundamental rights. These include, for instance, systems using subliminal or manipulative techniques, systems exploiting the vulnerabilities of specific groups of persons, social scoring mechanisms, assessments of the risk that a person will commit a criminal offence based solely on profiling, and certain forms of mass facial recognition and of emotion recognition in the workplace and in educational institutions, subject to limited medical or safety exceptions.
Subsequently, the core rules of the AI Act became applicable on 2 August 2025. These introduce, among other things, the classification of general-purpose AI models and specific obligations for their providers, who are required to report to the European Commission and to prepare and keep up to date detailed technical documentation on the AI model, including its training, testing and evaluation processes, to be made available to the European AI Office and the national competent authorities on request. In addition, two bodies have been created: the AI Office, tasked with developing the Union’s expertise and capabilities, and the European Artificial Intelligence Board, which advises and supports the Commission and the Member States in ensuring the consistent application of the AI Act, including by coordinating national authorities, sharing good practices and contributing to the harmonisation of administrative practices.
Furthermore, by 2 August 2026, each Member State will have to establish at least one AI regulatory sandbox at the national level.
The obligation for high-risk AI systems to undergo a third-party conformity assessment as a condition for being placed on the market and put into service will, however, apply only from 2 August 2027.
In any event, the European Commission may update the list of high-risk systems in Annex III and the list of prohibited AI practices once a year during the period for which the power is delegated to it, which currently ends on 1 August 2029 but may be tacitly extended for further five-year periods.