Quick Guide

11 minute read

The European Union Artificial Intelligence Act

Unpack the implications of the European Union Artificial Intelligence Act (EU AI Act) on healthcare AI tools and data ethics, from risk classification to compliance requirements.

The European Union Artificial Intelligence Act (EU AI Act) is one of the world's first and most comprehensive AI regulations. This cheat sheet outlines the EU AI Act's key principles and mechanics, as well as implications for AI developers and users in healthcare. Readers in the EU can use this guide to understand how the EU AI Act works and to inform organizational AI strategy. Readers outside the EU can use it to understand the AI policy frameworks and implications that may take shape in their own jurisdictions, as EU policies on topics like data privacy and consumer protection often influence global policy.


What is it?

The European Union Artificial Intelligence Act (EU AI Act) is one of the first — and now one of the most mature — cross-sector AI regulations by a major regulator anywhere in the world.

The EU AI Act was originally proposed in 2021 and was approved in May 2024 after three years of drafts, amendments, and public consultation. The goals of the EU AI Act are to:

  • Regulate AI capabilities according to risk.
  • Standardize “high-risk” AI development and deployment.
  • Formally assign accountability and liability to AI "providers" (organizations that develop AI tools) and "deployers" (people or organizations that deploy an AI system in a professional capacity, not individual end users).
  • Set rules for general purpose AI models, or tools that can perform multiple functions, including those they weren't originally designed to perform.

The EU AI Act covers AI applications across all sectors including healthcare, medtech, life sciences, and pharmacology, but does not establish distinct rules for each sector.

AI policy has been introduced or proposed over the past three years in places such as California,1 Colorado,2 China,3 and Canada,4 although none of these policies currently reach the scale and global impact of the EU AI Act. Business leaders and regulators worldwide are paying close attention to the EU AI Act and its repercussions for two reasons.

First, because the EU AI Act was proposed in 2021 and its drafting and negotiation involved every member state, it is more mature than most other AI regulations. As such, many leaders view it as a precursor to future regulations in other markets. Healthcare and technology leaders are watching the EU AI Act in case it meaningfully influences or informs global healthcare technology regulation, much like the EU's General Data Protection Regulation (GDPR) and Medical Device Regulation (MDR) have done in the past.

Second, the EU AI Act applies to AI products developed outside the EU that are utilized within its borders, meaning the regulation will affect companies or products that operate globally.


How does it work?

The EU AI Act broadly defines AI as encompassing everything from machine learning to large language models (LLMs), a type of AI trained on vast amounts of text to understand existing content and generate original content. The EU AI Act assigns each AI application one of four risk categories,5 summarized below with an illustrative sketch after the list:

  1. Unacceptable risk applications are entirely banned. These are applications that contradict the EU’s values for human equality, freedom, dignity, democracy, and the rule of law.
  2. High risk applications are subject to specific legal and compliance requirements because they pose potential harm to health, safety, and fundamental rights. Many healthcare tools will likely fit in this category.
  3. Limited risk applications are those that interact directly with humans and therefore must meet transparency obligations.
  4. Minimal risk applications are all other applications that are largely left unregulated.
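
For orientation only, the sketch below shows one way an organization might represent these four tiers when triaging its AI inventory. The mapping of example use cases to tiers is a hypothetical illustration, not a legal determination; actual classification depends on the Act's annexes and each tool's intended purpose and context of use.

    from enum import Enum

    class RiskTier(Enum):
        """The four EU AI Act risk tiers as described above."""
        UNACCEPTABLE = "banned outright"
        HIGH = "subject to conformity assessment and ongoing obligations"
        LIMITED = "subject to transparency obligations"
        MINIMAL = "largely unregulated"

    # Hypothetical mapping for illustration only -- not a legal determination.
    EXAMPLE_CLASSIFICATIONS = {
        "social scoring of citizens": RiskTier.UNACCEPTABLE,
        "AI-assisted diagnosis support": RiskTier.HIGH,
        "patient-facing chatbot that discloses it is AI": RiskTier.LIMITED,
        "spam filtering for a hospital mailbox": RiskTier.MINIMAL,
    }

    for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
        print(f"{use_case}: {tier.name} ({tier.value})")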

The EU AI Act entered into force on August 1, 2024, with bans on prohibited systems applying from February 2, 2025. Compliance requirements for high-risk AI systems and most other provisions will apply from August 2, 2026.

The EU AI Act holds both AI developers, called "providers," and AI users, called "deployers," liable for breaching regulatory requirements. For example, if a hospital develops and uses an in-house, high-risk AI tool that doesn't meet the compliance requirements set out under the EU AI Act, that organization will be penalized. If a hospital deploys a noncompliant product supplied by a vendor, both the hospital and the vendor will be subject to noncompliance penalties. The maximum noncompliance penalty can reach €35 million or 7% of total worldwide annual turnover, whichever is higher, and applies to both providers and deployers who breach the EU AI Act's terms.

The magnitude and formality of this liability has added urgency for AI providers and deployers across the EU to address specific AI challenges such as governance, safety and ethics, and vendor relationships.

How does the EU AI Act affect high-risk AI tools in healthcare?

Healthcare data is highly vulnerable because it often includes sensitive information such as medical histories, diagnoses, and financial details. Improper AI use within the sector can lead to potentially harmful effects including data privacy breaches, outcome or process biases, and inaccurate, unreliable, or even harmful recommendations. Because of this, the EU AI Act is likely to consider many healthcare AI applications as high-risk.6

The EU AI Act subjects high-risk AI tools to additional layers of scrutiny and regulation.7 AI providers, whether external vendors or healthcare organizations, must follow a five-step process when developing and deploying high-risk tools, outlined below with an illustrative sketch after the list:8

  1. AI provider develops a high-risk AI tool.
  2. The provider or a third-party organization must conduct a conformity assessment. This assessment involves ensuring the tool's quality management system aligns with the EU AI Act, confirming that technical documentation meets the EU AI Act's requirements, and verifying that the tool's design, documentation, and post-market monitoring systems are consistent with one another. The conformity assessment verifies the AI tool's compliance with the EU AI Act's seven requirement areas: risk management, data governance, technical documentation, record-keeping, transparency and provision of information, human oversight, and accuracy, robustness, and cybersecurity.
  3. After successfully passing the conformity assessment, the provider must register the AI tool in an EU database before deploying or marketing it.
  4. The AI provider must create a signed EU declaration of conformity for the AI tool and affix a CE marking to demonstrate compliance with EU standards.
  5. The AI tool can be used or put on the market. If product lifecycle changes violate the EU AI Act's requirements, the provider must stop using the tool or remove it from the market and revisit step 2. AI providers and deployers must conduct post-market monitoring and continuously ensure human oversight of AI tools.
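
As an internal planning aid only (not an official artifact of the conformity assessment itself), the sketch below encodes the seven requirement areas from step 2 as a simple checklist that gates registration in step 3. The field names are shorthand assumptions, not official EU AI Act terminology.

    from dataclasses import dataclass, fields

    @dataclass
    class ConformityChecklist:
        """Tracks evidence for the seven high-risk requirement areas named above."""
        risk_management: bool = False
        data_governance: bool = False
        technical_documentation: bool = False
        record_keeping: bool = False
        transparency_and_information: bool = False
        human_oversight: bool = False
        accuracy_robustness_cybersecurity: bool = False

        def gaps(self) -> list:
            """Return the requirement areas not yet evidenced."""
            return [f.name for f in fields(self) if not getattr(self, f.name)]

        def ready_for_registration(self) -> bool:
            """Registration (step 3) should only follow a passed assessment (step 2)."""
            return not self.gaps()

    # Example: a tool with outstanding documentation and oversight work.
    checklist = ConformityChecklist(
        risk_management=True,
        data_governance=True,
        record_keeping=True,
        transparency_and_information=True,
        accuracy_robustness_cybersecurity=True,
    )
    print(checklist.ready_for_registration())  # False
    print(checklist.gaps())  # ['technical_documentation', 'human_oversight']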

The European Commission established the AI Office to monitor, supervise, and ensure compliance with the EU AI Act across member states. The AI Office will employ over 140 staff, including experts in technology, administration, law, policy, and finance. The AI Office also consists of five units that reflect its mandate, such as the "Regulation and Compliance" unit.


How does it affect LLMs?

A common question that has arisen since the EU AI Act was proposed is, "How does the EU AI Act affect LLMs?" This question is not surprising. In healthcare, AI developers have already trained LLMs on extensive literature, research, and clinical datasets to understand and solve pervasive medical and operational challenges. LLM investment is substantial and still growing: the global LLM market is projected to reach approximately $260 billion by 2030, with an 80% compound annual growth rate (CAGR).9 Many healthcare organizations around the world have also increased their spending on LLMs — according to one Gradient Flow study, 56% of U.S. healthcare providers increased their LLM budgets by 10% to 100% in 2024.10

LLMs are at risk of noncompliance, but developers are finding ways to make products comply

In the EU, the AI Act and the AI Office treat general purpose AI (GPAI) models such as LLMs as high-risk tools. These tools are therefore exposed to heightened regulatory scrutiny, and AI developers must provide detailed summaries of their training data to the AI Office. But LLMs are trained on enormous datasets that are difficult to verify and explain, and the potential opacity of these datasets could complicate regulatory approval.

 

While AI regulations could limit the opportunities for LLMs in healthcare, market activity indicates that AI providers are already finding methods for LLM-driven tools to reach regulatory compliance. Many vetted LLM products with "provable" data retrieval processes already meet EU AI Act standards.

"If the inputs [of LLMs] are essentially infinite and the outputs are essentially infinite, how do you prove it’s safe?"

Head of IT and digitization
Danish health system

One representative example is Ada Health,11 an LLM-driven, symptom-checking chatbot app from Germany. Ada is certified under the EU Medical Device Regulation, which, like the EU AI Act, relies on risk-based regulatory instruments. Ada Health obtained this certification two years early by adopting strict quality principles, such as using retrieval-augmented generation to keep its outputs free from hallucinations (false or misleading outputs caused by processing issues or incomplete training data). This means the AI references an authoritative, verified knowledge base outside of its training datasets before it generates a final response. The tool also explains its outputs to the end user and doesn't sell any data or track cookies. Early-approved tools like Ada potentially chart a path for future healthcare LLM adoption.
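
To make the retrieval-augmented generation idea concrete, here is a deliberately minimal sketch. The knowledge base, keyword retriever, and call_llm placeholder are hypothetical stand-ins rather than Ada Health's actual implementation; the point is only that the model is asked to answer from a verified, external source instead of relying on its training data alone.

    # Minimal RAG sketch. KNOWLEDGE_BASE, retrieve, and call_llm are hypothetical
    # placeholders; a real system would use a vetted clinical knowledge base and
    # a production LLM API.
    KNOWLEDGE_BASE = {
        "fever in adults": "Adults with a fever lasting more than three days "
                           "should seek medical advice. (Example text only.)",
        "migraine": "Migraine is often one-sided, pulsating, and worsened by "
                    "activity. (Example text only.)",
    }

    def retrieve(query: str) -> list:
        """Naive keyword retrieval standing in for a vetted clinical search index."""
        return [text for topic, text in KNOWLEDGE_BASE.items()
                if any(word in query.lower() for word in topic.split())]

    def call_llm(prompt: str) -> str:
        """Placeholder for a real LLM call; here it simply echoes the prompt."""
        return f"[model answer grounded in]:\n{prompt}"

    def answer(query: str) -> str:
        passages = retrieve(query)
        if not passages:
            return "No verified source found -- escalate to a clinician."
        # Instructing the model to answer only from retrieved, verified text is
        # what keeps outputs traceable to an authoritative source.
        prompt = ("Answer using ONLY these sources:\n" + "\n".join(passages)
                  + f"\n\nQuestion: {query}")
        return call_llm(prompt)

    print(answer("I have had a fever for four days"))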


Why does it matter?

AI opportunities in healthcare are multiplying as tools promise to provide powerful solutions to evergreen challenges like unsustainable clinical workloads, inefficient scheduling, and drug discovery. But AI use in the healthcare sector — which uses vulnerable patient data and has the potential for real-life consequences for staff and patients — has thus far been largely unregulated globally. Governments are increasingly under public pressure to regulate AI and ensure safe, equitable, controllable AI utilization across healthcare.

The EU AI Act provides governments around the world with a blueprint that they can use to inform their own regulations. Further, the EU AI Act’s rollout can offer “on the ground” healthcare leaders a glimpse at the signals and implications to prepare for, should any reader’s home jurisdiction follow a similar policy approach to regulating AI.

The three major implications of the EU AI Act that AI providers and deployers should monitor as they prepare for future regulations in their own jurisdictions are:

  1. Additional financial exposure
  2. Inflexible compliance processes
  3. Increased market and public scrutiny

Implication 1: Additional financial exposure

The EU AI Act exposes AI providers and deployers to additional legal, quality control, and administrative costs beyond the noncompliance penalties previously discussed. For example, compliance processes and conformity assessments for high-risk AI tools cost providers an additional €9,500 to €14,500 per tool.12 This figure excludes external legal advice, audit, or consultancy fees. Additionally, high-risk AI providers must have a quality management system in place, which can cost as much as €330,000 to establish.12 Even without additional compliance expenses, training an AI model for an existing healthcare app costs an average of €38,000, and developing a comprehensive, custom tool can far exceed €100,000.13 These costs significantly affect the investment (and return on investment) calculation for both AI providers and buyers.
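
For a sense of scale, here is a rough, purely illustrative tally of the figures cited above for a single first high-risk tool, excluding legal, audit, and consultancy fees and assuming the quality management system is built at the upper-end estimate.

    # Back-of-envelope tally using only the figures cited in the text above.
    # Real costs vary widely by tool, organization, and external advisory fees.
    conformity_assessment = (9_500, 14_500)   # per high-risk tool
    quality_management_system = 330_000       # one-off, upper-end estimate
    model_training = 38_000                   # average for an existing healthcare app

    low = conformity_assessment[0] + quality_management_system + model_training
    high = conformity_assessment[1] + quality_management_system + model_training
    print(f"Indicative first-tool exposure: EUR {low:,} to EUR {high:,}")
    # Indicative first-tool exposure: EUR 377,500 to EUR 382,500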

Implication 2: Inflexible compliance processes

Although 69% of physicians think AI may help workflow efficiency,14 deploying high-risk healthcare AI tools in a tightly regulated environment may limit the workflow benefits these tools deliver. With many AI tools now classified as high-risk or limited-risk, AI providers and deployers must undertake workload-intensive compliance checks and conformity assessments and publish training data summaries to confirm their legality. The EU AI Act also adds these regulatory layers to AI tools still under development, which delays their market entry and introduces financial risk for tools requiring redevelopment due to noncompliance. AI deployers must also establish new AI governance structures and processes to ensure the tools they purchase meet EU AI Act standards and maintain safe, ethical use with comprehensive human oversight.

Implication 3: Increased market and public scrutiny

Noncompliance with the EU AI Act can severely damage an AI provider's or deployer's reputation. Regulation breaches in healthcare will breed public mistrust, especially with 60%15 of patients already feeling uncomfortable with healthcare providers using AI in their treatment. Bigger picture, 62%16 of patients ranked hospital reputation as the most important factor in selecting a healthcare provider, so providers must comply with regulations to avoid eroding confidence or losing patient loyalty.

On the developer side, AI market confidence is highly volatile, and AI providers risk rapid valuation drops if users discover biases or errors in their AI tools — Alphabet lost $100 billion in market value in a single day in 2023 after the public identified an error made by its AI chatbot in a promotional video.17 The same volatility that affects market confidence also applies to regulation breaches.

Overall, the EU AI Act and regulations that follow from it are poised to amplify the public and market-facing risks that AI tools and users currently face.


Conversations you should be having

The complexity and rapid development of the AI market often outpaces healthcare organizations’ abilities to keep up, challenging them to determine how to react to or prepare for AI regulation. Below are conversations both AI providers and deployers should be having to set their AI tools up for potential regulatory compliance regardless of jurisdiction.

  • Audit your existing AI capabilities to ensure they comply with upcoming regulations or follow a certified ISO standard (a standard published by the International Organization for Standardization) that aligns with existing regulations elsewhere.
  • Establish a multistakeholder AI governance structure that stringently controls AI use and procurement and engages all levels of leadership, staff, and end users.
  • Commit to a standardized process to ensure safe, equitable, and responsible AI development and use across your organization.
  • Create a standardized vendor/customer evaluation process to ensure all tools you buy or sell comply with regulations.
  • Engage patients and staff in every step of the AI development and implementation journey, ensuring full process and use case transparency.

Unless otherwise noted, all information in this guide was drawn from the following source: The EU Artificial Intelligence Act. Accessed December 15, 2024.

 

1 Toby D, et al. California enacts sweeping new AI regulation. DLA Piper. October 1, 2024.

2 McPhillips J. Colorado's New Comprehensive AI Legislation. Clifford Chance. July 22, 2024.

3 Yang A, Li B. AI Watch: Global regulatory tracker – China. White & Case. May 11, 2024.

4 Artificial Intelligence and Data Act. Government of Canada. September 27, 2023.

5 McKenna L, McLoughlin D. EU AI Act: different risk levels of AI systems. Forvis Mazars. Accessed December 17, 2024.

6 Flowers T, et al. The EU AI Act: Implications for the health sector. Access Partnership. May 29, 2024.

7 Article 47: EU Declaration of Conformity. EU Artificial Intelligence Act. Accessed December 15, 2024.

8 What is the Artificial Intelligence Act of the European Union (EU AI Act)? IBM. September 20, 2024. Accessed December 15, 2024.

9 Uspenskyi S. Large Language Model Statistics And Numbers (2024). Springs. September 19, 2024.

10 Generative AI in Healthcare: 2024 Survey. Gradient Flow. Accessed December 19, 2024.

11 Ada. Accessed December 2024.

12 Belletti V, Orlando SR. Cecimo paper on the artificial intelligence act. European Association of the Machine Tool Industries. October 5, 2022.

13 Alkhaldi, N. Assessing the cost of implementing AI in healthcare. Itrex. September 2, 2024.

14 AMA Augmented Intelligence Research. American Medical Association. February 11, 2025.

15 Tyson A, et al. 60% of Americans Would Be Uncomfortable With Provider Relying on AI in Their Own Health Care. Pew Research Center. February 22, 2023.

16 Ellis R, et al. National Evaluation of Patient Preferences in Selecting Hospitals and Healthcare Providers. Med Care. 58(10), 867-873. July 23, 2020.

17 Alphabet shares dive after Google AI chatbot Bard flubs answer in ad. Reuters. February 9, 2023.


