EU AI Act Key Takeaways

The European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689) establishes harmonized rules for the development, deployment, and use of AI systems within the EU. It aims to ensure AI is safe and respects fundamental rights, democracy, the rule of law, and environmental sustainability, while simultaneously fostering innovation and establishing the EU as a leader in trustworthy AI. In this resource, we present the EU AI Act key takeaways.

The Act applies to providers placing AI systems on the EU market or putting them into service, regardless of their location, and to deployers (users) located within the EU. It seeks to create legal certainty and prevent fragmentation of the internal market due to differing national rules.

Key principles and approach

Product safety regulation

The AI Act is fundamentally a product-safety regulation, developed under the EU’s New Legislative Framework, the same legal architecture used for regulating machinery, medical devices, and other high-risk products. This approach focuses on ensuring that AI systems placed on the EU market are safe, trustworthy, and respect fundamental rights, particularly when they are used in sensitive contexts like healthcare, employment, or law enforcement.

Like other laws under this framework, the AI Act assigns clear obligations to different economic operators (e.g., providers, importers, distributors), requires conformity assessments before high-risk AI systems can be sold or used, and mandates post-market monitoring and incident reporting. This ensures that AI, like any other regulated product, meets common EU-wide standards for safety and compliance.

Risk-based classification

The AI Act follows a risk-based approach, imposing obligations proportional to the level of risk an AI system might pose.

[Figure: Risk-based classification of AI systems]

Prohibited AI practices

Certain AI systems presenting an unacceptable risk to fundamental rights are banned. These include:

  • Systems using subliminal techniques or exploiting vulnerabilities (age, disability, or a specific social or economic situation) to materially distort behaviour in a way that causes or is likely to cause significant harm.
  • Biometric categorization systems inferring sensitive attributes (race, political opinions, sexual orientation), with limited exceptions.
  • Social scoring systems (whether operated by public or private actors) leading to unjustified or disproportionate detrimental treatment.
  • Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except in narrowly defined situations.
  • AI predicting criminal propensity based solely on profiling or personality traits.
  • Untargeted scraping of facial images from the internet or CCTV to create facial recognition databases.
  • Emotion recognition in the workplace and educational institutions (except for medical/safety reasons).

High-risk AI systems

Systems with a potentially significant impact on safety or fundamental rights are classified as high-risk and are subject to strict requirements before they can be placed on the market or used. They fall into two groups.

Annex I AI systems: AI as a safety component of regulated products

These are AI systems embedded as safety components in products already regulated under the EU harmonization legislation listed in Annex I. Compliance with the AI Act is achieved through the existing conformity assessment procedures under those sectoral laws, with additional AI-specific checks integrated where necessary. Examples include (non-exhaustive):

  • Medical devices
  • Automated vehicles
  • Aviation equipment
  • Toys or machinery

Annex III AI systems: Stand-alone high-risk AI systems

These are AI systems used in specific sensitive areas that are considered high-risk regardless of whether they are part of another product. They are listed in Annex III of the AI Act and include systems used in:

  1. Biometric identification and categorization
  2. Critical infrastructure (e.g., transport, energy)
  3. Education and vocational training
  4. Employment and worker management
  5. Access to essential private and public services (e.g., credit scoring, social benefits)
  6. Law enforcement
  7. Migration, asylum and border control management
  8. Administration of justice and democratic processes

These systems are regulated directly by the AI Act and must comply with strict requirements such as risk management, data governance, transparency, and human oversight.

Limited-risk AI systems

Certain AI systems require specific transparency measures. Users must be informed when they are interacting with an AI system (such as chatbots) or when confronted with AI-generated content (e.g., deepfakes), unless this is obvious from the context.
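
For illustration, here is a minimal sketch of how a chatbot might satisfy the disclosure duty. The notice text, function names, and first-turn logic are our own hypothetical choices, not wording prescribed by the Act (the obligation itself sits in Article 50).

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def generate_reply(message: str) -> str:
    # Stub standing in for the actual model call.
    return f"Echo: {message}"

def respond(user_message: str, history: list[str]) -> str:
    """Prepend an AI disclosure to the first reply of a conversation.

    A hypothetical pattern for the Article 50 transparency duty;
    any equally clear and timely notice would serve the same purpose.
    """
    reply = generate_reply(user_message)
    if not history:  # first turn: make the disclosure explicit
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(respond("Hello!", history=[]))
```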

Minimal or no-risk AI systems

AI systems not falling into the above categories (e.g., AI-enabled spam filters or video games) face no additional obligations under the Act, representing the vast majority of AI systems.
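
To summarize the pyramid, here is a minimal Python sketch that models the four tiers as an enum with a toy lookup. The tiers mirror the Act, but the mapping and function are purely hypothetical simplifications, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (Regulation (EU) 2024/1689)."""
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex I / Annex III)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal or no risk (no additional obligations)"

# Hypothetical, simplified mapping of use cases to tiers, loosely based
# on examples in the Act. Real classification requires legal analysis.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "workplace emotion recognition": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,           # Annex III, essential services
    "cv screening for hiring": RiskTier.HIGH,  # Annex III, employment
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a toy risk tier; defaults to MINIMAL for unknown cases."""
    return EXAMPLE_USE_CASES.get(use_case.lower(), RiskTier.MINIMAL)

if __name__ == "__main__":
    for case in ("credit scoring", "spam filter"):
        print(f"{case}: {classify(case).value}")
```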

Specific rules

Real-time remote biometric identification (RBI) for law enforcement

Use in public spaces is generally prohibited. Exceptions are strictly limited to vital objectives like searching for specific victims (kidnapping, trafficking), preventing imminent threats (terrorism, serious harm), or identifying suspects of specific serious crimes listed in the Act (punishable by a custodial sentence of a maximum of at least four years). Such use requires prior judicial or independent administrative authorization, is limited in time and scope, necessitates a fundamental rights impact assessment, and adheres to strict safeguards.

General-purpose AI (GPAI) models

The Act includes rules for GPAI models (models with broad capabilities adaptable to many tasks). Providers must comply with transparency obligations, including providing technical documentation. Providers of GPAI models presenting systemic risks (presumed when cumulative training compute exceeds 10^25 floating-point operations) face additional requirements, such as model evaluation, risk assessment and mitigation, and cybersecurity measures.
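
As a rough illustration, the sketch below models a provider's documentation record and the systemic-risk presumption. The field names are hypothetical and only loosely inspired by Annex XI, while the 10^25 FLOP threshold is taken from Article 51(2).

```python
from dataclasses import dataclass, field

@dataclass
class GPAIModelDoc:
    """Hypothetical documentation record for a GPAI model.

    Field names are illustrative only; Annex XI of the Act lists
    the documentation items actually required of providers.
    """
    model_name: str
    intended_tasks: list[str]
    training_data_summary: str    # description of data sources
    training_compute_flops: float
    evaluation_results: dict[str, float] = field(default_factory=dict)

    @property
    def presumed_systemic_risk(self) -> bool:
        # Article 51(2): systemic risk is presumed when cumulative
        # training compute exceeds 10**25 floating-point operations.
        return self.training_compute_flops > 1e25

doc = GPAIModelDoc(
    model_name="example-model",
    intended_tasks=["text generation"],
    training_data_summary="publicly available web text (illustrative)",
    training_compute_flops=3e25,
)
print(doc.presumed_systemic_risk)  # True: extra obligations apply
```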

Obligations for providers and deployers

Providers of high-risk systems have extensive obligations related to quality, risk management, documentation, conformity, and post-market monitoring. You’ll need to demonstrate compliance with the requirements of Chapter III, Section 2 (a simple checklist sketch follows the list below):

  • Art. 9 Risk management system
  • Art. 10 Data and data governance
  • Art. 11 Technical documentation
  • Art. 12 Record-keeping
  • Art. 13 Transparency and provision of information to deployers
  • Art. 14 Human oversight
  • Art. 15 Accuracy, robustness, and cybersecurity
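
As a starting point, here is a minimal sketch of how these seven requirements could be tracked as an internal compliance register. The structure and status flags are our own hypothetical convention, not an official tool or format.

```python
# Hypothetical internal compliance register for the Chapter III,
# Section 2 requirements; statuses and structure are our own invention.
REQUIREMENTS = {
    "Art. 9": "Risk management system",
    "Art. 10": "Data and data governance",
    "Art. 11": "Technical documentation",
    "Art. 12": "Record-keeping",
    "Art. 13": "Transparency and provision of information to deployers",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, and cybersecurity",
}

def outstanding(status: dict[str, bool]) -> list[str]:
    """Return the requirements not yet marked as satisfied."""
    return [f"{art}: {title}"
            for art, title in REQUIREMENTS.items()
            if not status.get(art, False)]

if __name__ == "__main__":
    status = {"Art. 9": True, "Art. 11": True}  # example progress
    for item in outstanding(status):
        print("TODO", item)
```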

The AI Act doesn’t dictate a specific approach or framework, but under Article 40, high-risk AI systems that conform to harmonized standards (which CEN/CENELEC is still developing) are presumed to comply with the requirements of Chapter III, Section 2. See also our resource on ISO 42001 vs NIST AI RMF.

Deployers (users) of high-risk systems must ensure human oversight, use systems according to instructions, monitor their operation, and, for public bodies or certain private deployers, conduct a fundamental rights impact assessment before deployment.

Governance and enforcement

  • National Competent Authorities: Each Member State designates authorities responsible for implementation and enforcement, including market surveillance authorities.
  • European AI Office: Established within the European Commission, the AI Office plays a key role in overseeing GPAI models, coordinating enforcement, developing standards, and providing guidance. See also: The AI Office: What is it, and how does it work?
  • European AI Board: A formal group made up of representatives from national competent authorities (i.e., each EU country’s regulator for AI), working together to ensure consistent application of the AI Act.
  • AI Regulatory Sandboxes: Member States are encouraged to establish sandboxes to allow for the development and testing of innovative AI in a controlled environment under regulatory supervision.
  • Penalties: Non-compliance can result in significant administrative fines, calculated based on the infringement type and company turnover, designed to be effective, proportionate, and dissuasive.
  • Individual Rights: Citizens have the right to lodge complaints regarding non-compliant AI systems and the right to receive explanations for decisions made by high-risk AI systems that produce legal or similarly significant effects on them.

Timeline and application

The Regulation entered into force on 1 August 2024 and its implementation follows a phased approach.

[Figure: EU AI Act implementation timeline]

The implementation timeline is as follows:

  • 2 February 2025
    • Chapters I & II apply (definitions, scope, AI literacy).
    • Prohibitions on unacceptable AI practices (Article 5) take effect.
  • 2 May 2025
    • Codes of practice should be ready (to guide compliance).
  • 2 August 2025
    • Key sections apply: Chapter III, Section 4; Chapters V, VII, and XII; and Article 78.
    • AI Office and governance structures become operational.
    • Obligations for general-purpose AI model providers apply.
    • Penalty rules (fines, enforcement) come into force.
  • Between 2 February 2025 and 2 August 2026
    • Deployers of real-time remote biometric identification systems must conduct Fundamental Rights Impact Assessments (FRIA) if exceptions apply.
  • 2 August 2026
    • General application of the AI Act, including obligations for Annex III high-risk systems.
    • At least one AI regulatory sandbox must be operational in each Member State.
    • Commission must adopt an implementing act for post-market monitoring.
    • Guidelines for Article 6 classification (with examples) must be issued.
  • 2 August 2027
    • Legacy general-purpose AI models (on the market before 2 August 2025) must comply with the AI Act.
    • Article 6(1) and the corresponding obligations for Annex I high-risk systems apply.
  • 2 August 2030
    • Legacy high-risk AI systems intended for use by public authorities must comply.
  • 31 December 2030
    • Legacy high-risk AI in large-scale EU IT systems (e.g., the Schengen Information System) must comply.

Further information

The European Commission, particularly through the AI Office, is expected to publish guidelines and supporting documents to facilitate the implementation of the AI Act. In addition to these EU AI Act key takeaways, you can keep an eye on the official EU websites for updates.

Assess your EU AI Act compliance today

info@thegoverners.eu