What is the EU AI Act?

Basics

EU AI Act overview

The official name of the Act is quite lengthy: “Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).” This is why everyone uses the shorter name “EU AI Act.”

The Act was proposed by the European Commission in April 2021, formally adopted in June 2024, and entered into force on August 1, 2024.


Why is the EU AI Act important?

This Act is important in at least three ways.

First, it regulates a technology that is developing very quickly and that could have significant negative impacts on individuals and societies if left unregulated.

Second, it is a regulation (as opposed to a directive), so it applies directly in all EU countries, without the need for individual EU Member States to pass their own AI laws.

Finally, it is the first major AI regulation worldwide, and it will probably influence other (non-EU) countries to introduce similar rules. The same thing happened with the EU GDPR, which inspired very similar privacy regulations in other countries.

The most important requirements in the EU AI Act relate to high-risk AI systems, general-purpose AI models, and transparency rules.

Who must comply with the EU AI Act?

This Act applies to any company that uses, develops, or sells AI systems within the European Union.

It doesn’t matter whether the company is based outside of the EU; what matters is that it performs any of these activities within the European Union.


What are the roles related to AI?

The EU AI Act defines the following roles:

  • AI Provider – the company or individual that creates an AI system or has it created, and then places it on the market under their own name or brand
  • AI Deployer – the organization or person that uses an AI system in their operations, unless it’s for purely personal use
  • AI Importer – an EU-based company or individual that brings an AI system from a non-EU provider into the EU market
  • AI Distributor – any entity in the supply chain (other than the provider or importer) that offers an AI system for sale or use within the EU

The Act also uses the umbrella term “AI operator,” which covers all of the roles above, as well as product manufacturers and authorized representatives.


Where can I find the EU AI Act text?

You can find the original text of the EU AI Act on the EUR-Lex website: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng.


The structure of the EU AI Act

The Act has 113 articles that are divided into the following 13 chapters:

  • Chapter I: General provisions
  • Chapter II: Prohibited AI practices
  • Chapter III: High-risk AI systems
  • Chapter IV: Transparency obligations for providers and deployers of certain AI systems
  • Chapter V: General-purpose AI models
  • Chapter VI: Measures in support of innovation
  • Chapter VII: Governance
  • Chapter VIII: EU database for high-risk AI systems
  • Chapter IX: Post-market monitoring, information sharing and market surveillance
  • Chapter X: Codes of conduct and guidelines
  • Chapter XI: Delegation of power and committee procedure
  • Chapter XII: Penalties
  • Chapter XIII: Final provisions

On top of these chapters, there are 13 annexes — here are some of the more interesting ones:

  • Annex I: List of Union harmonisation legislation
  • Annex III: High-risk AI systems
  • Annex IV: Technical documentation referred to in Article 11(1)
  • Annex VII: Conformity based on an assessment of the quality management system and an assessment of the technical documentation

EU AI Act timeline

The Act entered into force on August 1, 2024; however, its requirements apply in four phases.

From February 2, 2025:

  • General provisions (Chapter I), including the obligation for AI literacy training (Article 4)
  • Prohibited AI practices (Chapter II)

From August 2, 2025:

  • Notifying authorities and notified bodies (Chapter III Section 4 — Articles 28 to 39)
  • General-purpose AI models (Chapter V)
  • Governance (Chapter VII)
  • Penalties (Chapter XII — Articles 99 and 100)
  • Confidentiality (Article 78)

From August 2, 2026, most other requirements come into force, including:

  • High-risk AI systems (Chapter III, Sections 1 to 3) for all AI systems listed in Annex III
  • Transparency obligations for providers and deployers of certain AI systems (Chapter IV)
  • Measures in support of innovation (Chapter VI)
  • EU database for high-risk AI systems (Chapter VIII)
  • Post-market monitoring, information sharing and market surveillance (Chapter IX)
  • Codes of conduct and guidelines (Chapter X)
  • Delegation of power and committee procedure (Chapter XI)
  • Fines for providers of general-purpose AI models (Article 101)
  • Final provisions (Chapter XIII)

From August 2, 2027, the last requirement comes into force:

  • Classification rules for high-risk AI systems (Article 6 paragraph 1)


Requirements

EU AI Act risk categories

The core idea of the Act is to regulate different AI systems based on the risk they pose.

Although the text of the Act explicitly names only “high-risk AI systems,” reading it reveals essentially four categories of AI system risk:

  1. Unacceptable risk — AI systems and practices listed in Article 5 “Prohibited AI practices.”
  2. High risk — high-risk AI systems listed in Article 6 “Classification rules for high-risk AI systems” and Annex III “High-risk AI systems referred to in Article 6(2).”
  3. Limited risk — AI systems that pose some risk, but not a high one. They are specified in Article 50 “Transparency obligations for providers and deployers of certain AI systems.”
  4. No risk (or minimal risk) — AI systems that are not mentioned anywhere in the EU AI Act and are therefore not regulated.

Note: The Act does not classify general-purpose AI models into any of those categories (since models are not the same thing as AI systems) — the EU AI Act has requirements for general-purpose AI models that are not related to the classification described above.
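
To make the four categories concrete, here is a minimal, hypothetical triage helper in Python. The boolean checks stand in for real legal analysis (actual classification requires working through Articles 5, 6, and 50 and Annex III), so treat this as a mental model, not a compliance tool:

    from enum import Enum

    class RiskCategory(Enum):
        UNACCEPTABLE = "prohibited under Article 5"
        HIGH = "high-risk under Article 6 / Annex III"
        LIMITED = "transparency obligations under Article 50"
        MINIMAL = "not regulated by the EU AI Act"

    def classify(is_prohibited_practice: bool,
                 is_annex_iii_use_case: bool,
                 interacts_or_generates_content: bool) -> RiskCategory:
        # Order matters: prohibited practices are checked first, then
        # high-risk uses, then transparency-only ("limited risk") uses.
        if is_prohibited_practice:
            return RiskCategory.UNACCEPTABLE
        if is_annex_iii_use_case:
            return RiskCategory.HIGH
        if interacts_or_generates_content:
            return RiskCategory.LIMITED
        return RiskCategory.MINIMAL

    # Example: a CV-screening tool falls under Annex III (employment).
    print(classify(False, True, False))  # RiskCategory.HIGH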


Prohibited activities (unacceptable risk)

The following AI practices are not allowed:

  • use of an AI system that deploys subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques
  • use of an AI system that exploits vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation
  • use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behavior or personality characteristics, where the resulting social score leads to unfavorable treatment of those persons or groups
  • use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offense, based solely on profiling or on assessing their personality traits and characteristics
  • use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
  • use of AI systems to infer the emotions of a natural person in the workplace or in educational institutions
  • use of biometric categorization systems that individually categorize natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation
  • use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to a few narrowly defined exceptions

Definition of high-risk AI systems

According to Annex III, AI systems used in the following areas are considered high-risk AI systems (see the text of the Act for details):

  • biometrics
  • critical infrastructure
  • educational and vocational training
  • employment
  • essential private services and essential public services
  • law enforcement
  • migration, asylum, and border control management
  • administration of justice and democratic processes

According to Article 7, the European Commission is empowered to adopt delegated acts to amend Annex III.


Compliance requirements for high-risk AI systems

There are lots of requirements for high-risk AI systems, so let’s briefly go through them:

  • Implementing a risk management system (Article 9) — providers must operate a continuous, documented risk management process that identifies, analyzes, evaluates, and mitigates risks throughout the AI system’s lifecycle.
  • Introducing data governance (Article 10) — providers must ensure that training, validation, and testing data are relevant, high quality, representative, free of errors as far as possible, and managed under a strong data governance framework.
  • Producing technical documentation (Article 11) — providers must prepare and maintain technical documentation that demonstrates the AI system’s compliance with all applicable requirements.
  • Generating records (Article 12) — high-risk AI systems must be designed to automatically generate logs that enable monitoring, incident investigation, and verification of compliance (see the logging sketch after this list).
  • Transparency and provision of information (Article 13) — AI systems must be sufficiently transparent and accompanied by clear instructions enabling users to understand how to operate them safely and appropriately.
  • Human oversight (Article 14) — AI systems must include effective human oversight mechanisms that allow humans to prevent or minimize risks, including the ability to intervene or stop the system.
  • Accuracy, robustness, and cybersecurity (Article 15) — AI systems must be accurate, resilient against errors, robust under expected conditions, and protected against cybersecurity threats.
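
To make the record-keeping requirement of Article 12 more concrete, here is a minimal, hypothetical logging sketch in Python. The event fields are assumptions chosen for illustration; the Act requires automatically generated logs suitable for monitoring and incident investigation, but it does not prescribe a specific log schema:

    import json
    import logging
    from datetime import datetime, timezone

    # Sketch of Article 12-style automatic logging. The field names are
    # illustrative assumptions; the Act requires logs that enable
    # monitoring and incident investigation, not a specific schema.
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("high_risk_ai_system")

    def log_decision_event(system_id: str, input_ref: str,
                           output: str, confidence: float) -> None:
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "input_ref": input_ref,  # reference to the input, not the data itself
            "output": output,
            "confidence": confidence,
        }
        logger.info(json.dumps(event))

    log_decision_event("credit-scoring-v2", "application:12345", "approved", 0.91)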

Articles 16 to 27 further specify the obligations of providers, importers, distributors, and deployers; quality management systems; documentation keeping; automatically generated logs; cooperation with competent authorities; authorized representatives; responsibilities along the AI value chain; and the fundamental rights impact assessment.


Transparency obligations (limited-risk AI)

Transparency obligations are quite short: they require providers and deployers of certain AI systems to inform people that they are interacting with an AI system, or that the content they see was artificially generated or manipulated (a minimal example of such a disclosure follows the list below).

This applies to providers or deployers of the following AI systems:

  • AI systems generating synthetic audio, image, video, or text content
  • an emotion-recognition system or a biometric categorization system
  • AI systems generating or manipulating image, audio, or video content constituting a deepfake
  • AI systems generating or manipulating text that is published with the purpose of informing the public on matters of public interest
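
As an illustration of the first bullet above, a provider of a generative AI system could attach a machine-readable notice to every output. The following Python sketch is hypothetical: the Act requires that synthetic content be marked as artificially generated in a machine-readable format, but the field names and structure below are assumptions, not a prescribed format:

    import json
    from datetime import datetime, timezone

    def with_ai_disclosure(content: str, model_name: str) -> str:
        # Hypothetical wrapper: the Act requires synthetic content to be
        # marked as artificially generated in a machine-readable format,
        # but it does not prescribe this particular structure.
        return json.dumps({
            "content": content,
            "disclosure": {
                "ai_generated": True,
                "generator": model_name,
                "generated_at": datetime.now(timezone.utc).isoformat(),
            },
        })

    print(with_ai_disclosure("Ten tips for better sleep...", "example-model-1"))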

Requirements for general-purpose AI (GPAI) models

A general-purpose AI model is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks…” — in other words, these are the models behind popular chat applications like ChatGPT, Claude, Gemini, and others.

There are two types of GPAI models: “regular” general-purpose AI models, and general-purpose AI models with systemic risk. Those with systemic risk are the ones that have high-impact capabilities (a model is presumed to have such capabilities when the cumulative compute used for its training exceeds 10^25 floating-point operations), or that are designated as such by a decision of the European Commission.

Note: A GPAI model may or may not become part of a high-risk AI system — this is unrelated to the “model with systemic risk” classification mentioned in the previous paragraph.

Providers of regular general-purpose AI models have the following obligations:

  • to prepare and keep technical documentation up to date, including training/testing details and evaluation results
  • to provide information and documentation to downstream AI system providers, enabling them to understand model capabilities/limitations and comply with their obligations
  • to put in place a policy to comply with EU copyright law
  • to publish a sufficiently detailed summary of training data sources

On top of the requirements listed above, providers of general-purpose AI models with systemic risk have the following obligations:

  • to conduct state-of-the-art model evaluations, including documented adversarial testing, to identify and mitigate systemic risks
  • to continuously assess and mitigate systemic risks at the EU level arising from development, market placement, or use of the model
  • to track, document, and report serious incidents without undue delay to the AI Office and national authorities, including corrective measures taken
  • to maintain robust cybersecurity protection for both the model and its supporting physical infrastructure

EU AI Act implementation

The biggest effort for compliance with the EU AI Act will be for operators of high-risk AI systems and for providers of general-purpose AI models.

For those companies, it is recommended to introduce an AI governance framework that is based on the leading AI governance standard — ISO 42001.

The section below further explains this international standard.

Enforcement

Penalties

Here is a summary of the penalties specified in Articles 99, 100, and 101 (where both a fixed amount and a percentage of total worldwide annual turnover are given, the higher of the two applies):

  • Prohibited AI practices: up to €35 million or 7% of turnover for operators, and up to €1.5 million for EU institutions and bodies
  • Major obligations of providers, importers, distributors, deployers, etc.: up to €15 million or 3% of turnover
  • Supplying incorrect or misleading information to authorities: up to €7.5 million or 1% of turnover
  • Violations by providers of general-purpose AI models: up to €15 million or 3% of turnover
  • Other violations by EU institutions and bodies: up to €750,000
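
Because the fixed cap and the turnover-based cap are alternatives, the applicable maximum depends on company size. Here is a minimal sketch of that arithmetic in Python; the dictionary labels are made up for illustration, and the figures are simply the caps listed above (an illustration, not legal advice):

    # Illustrative only: theoretical maximum fines for an undertaking under
    # Articles 99 and 101, where the cap is the HIGHER of a fixed amount
    # and a percentage of total worldwide annual turnover.
    FINE_CAPS = {
        # hypothetical labels: (fixed cap in EUR, share of annual turnover)
        "prohibited_practices": (35_000_000, 0.07),  # Article 99(3)
        "major_obligations": (15_000_000, 0.03),     # Article 99(4)
        "misleading_info": (7_500_000, 0.01),        # Article 99(5)
        "gpai_models": (15_000_000, 0.03),           # Article 101(1)
    }

    def max_fine(violation: str, annual_turnover_eur: float) -> float:
        fixed_cap, turnover_share = FINE_CAPS[violation]
        return max(fixed_cap, turnover_share * annual_turnover_eur)

    # A company with EUR 2 billion turnover committing a prohibited practice:
    # 7% of turnover (EUR 140 million) exceeds the EUR 35 million fixed cap.
    print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0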

Which government bodies are in charge of enforcement?

European Commission & AI Office. The Commission is the top enforcement authority, especially for general-purpose AI models. It exercises its powers mainly through the AI Office, which monitors general-purpose AI model providers, investigates non-compliance, runs the Union safeguard procedure, and can impose corrective measures and penalties at the EU level.

European Artificial Intelligence Board. The Board is a coordinating and advisory body of Member State representatives. It harmonizes enforcement by supporting national competent and market surveillance authorities, coordinating joint actions, and issuing opinions and guidance on implementation and enforcement.

Advisory Forum. Even though it is not a government body, the forum is important because it is a multi-stakeholder body that feeds technical and practical input into the Board and the Commission. It does not enforce directly, but it influences how enforcement rules, guidance, and common specifications are shaped.

National competent authorities. Every Member State must designate national competent authorities, including at least one notifying authority and at least one market surveillance authority, and name a single point of contact. These authorities supervise the application and implementation of the Act at the national level and must be provided with adequate technical, legal, and fundamental rights expertise.

National public authorities or bodies protecting fundamental rights. These are equality bodies, human rights commissioners, etc. They are empowered to obtain AI documentation and trigger testing where necessary to enforce fundamental rights law (non-discrimination, etc.) in relation to high-risk AI use, working jointly with market surveillance authorities.

European Data Protection Supervisor (EDPS). For EU institutions and agencies, EDPS is both the competent authority and the market surveillance authority, and it can run AI sandboxes. It supervises their use of AI and enforces both the Act and data-protection requirements in that context.

Financial supervision authorities. When high-risk AI is used by banks and other supervised financial institutions, the financial supervisors (e.g., national competent authorities under CRD/MiFID, etc.) act as the market surveillance authorities for the EU AI Act, integrating AI compliance into prudential/market supervision.

Relationship with other frameworks

EU AI Act vs. ISO 42001

ISO 42001 is the leading international standard that specifies the requirements for AI Management Systems — in other words, for AI governance.

Even though the EU AI Act does not mention ISO 42001, this standard is recommended as an implementation method for the Act, since it provides an internationally accepted framework on how to systematically assess risks and control AI systems. Essentially, ISO 42001 is as important for the EU AI Act as ISO 27001 is for NIS2 or DORA.

Here is how the two compare:

  • Type: the EU AI Act is a regulation published by the European Union, while ISO 42001 is an industry standard published by the International Organization for Standardization.
  • Focus: the EU AI Act sets requirements for high-risk AI systems, general-purpose AI models, and transparency rules, while ISO 42001 is a framework that defines AI governance.
  • Applies to: the EU AI Act applies to companies that use, develop, or sell AI systems in the European Union, while ISO 42001 can be applied by any company that uses or develops AI systems.
  • Mandatory: the EU AI Act is mandatory; ISO 42001 is not.
  • Certification: companies cannot certify against the EU AI Act, but they can certify against ISO 42001.

EU AI Act vs. EU GDPR

The EU General Data Protection Regulation (GDPR) is a comprehensive regulation that focuses on the protection of the personal data of individuals in the EU.

The EU AI Act references the EU GDPR in several articles, and in most cases, it confirms that AI systems must comply with GDPR requirements.

Since most AI systems process personal data, this means that their operators must comply with both the EU AI Act and the EU GDPR.

Here is how the two regulations compare:

  • Type: both are regulations published by the European Union.
  • Focus: the EU AI Act sets requirements for high-risk AI systems, general-purpose AI models, and transparency rules, while the EU GDPR sets legal and technical requirements for the protection of personal data.
  • Applies to: the EU AI Act applies to companies that use, develop, or sell AI systems in the European Union, while the EU GDPR applies to any company that processes personal data in the European Union.
  • Mandatory: both are mandatory.
  • Certification: companies cannot certify against either regulation.

EU AI Act vs. NIS2 and DORA

Both NIS2 and DORA are European legislative frameworks focused on cybersecurity and resilience.

The EU AI Act requires the implementation of cybersecurity measures for high-risk AI systems and for general-purpose AI models with systemic risk. Further, in its Annex III, it directly relates to specific critical infrastructure sectors — it specifies that high-risk AI systems include “AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating, or electricity.”

Here is how the three compare:

  • Type: the EU AI Act and DORA are regulations published by the European Union, while NIS2 is a directive published by the European Union.
  • Focus: the EU AI Act sets requirements for high-risk AI systems, general-purpose AI models, and transparency rules; NIS2 mandates cybersecurity measures to protect IT systems; DORA mandates cybersecurity and resilience measures to protect IT systems.
  • Applies to: the EU AI Act applies to companies that use, develop, or sell AI systems in the European Union; NIS2 applies to critical infrastructure companies operating in the European Union; DORA applies to financial entities operating in the European Union.
  • Mandatory: all three are mandatory.
  • Certification: companies cannot certify against any of the three.

To learn about the details of ISO 42001, sign up for this free ISO 42001 Foundations Course — it will give you a detailed overview of each clause from this AI governance standard together with practical examples of how to implement them.


Dejan Kosutic

CEO & Lead Expert for ISO 27001 and ISO 42001

Leading expert on cybersecurity and AI governance and the author of several books, articles, webinars, and courses. As a premier expert, Dejan founded Advisera to help small and medium businesses obtain the resources they need to become compliant with EU regulations and ISO standards. He believes that making complex frameworks easy to understand and simple to use creates a competitive advantage for Advisera's clients, and that AI technology is crucial for achieving this.

As an ISO 27001 and ISO 42001 expert, Dejan helps companies find the best path to compliance by eliminating overhead and adapting the implementation to their size and industry specifics.
