[Featured image: black and white photo of an old milestone in a Kassel meadow, surrounded by tall grasses.]

100 days until the next EU AI Act milestone: Which rules kick in on 2 August 2025?

August 2, 2025 marks a major milestone in the implementation of the EU Artificial Intelligence Act. With just 100 days to go, organizations across the EU and beyond should prepare for a new phase of regulatory oversight and operational change.

In this post, we break down in simple terms which provisions of this EU AI Act milestone will start applying on 2 August 2025 and what that means.

1 The system of Notified Bodies becomes operational

One of the most impactful changes is the activation of the Notified Bodies system under Chapter III, Section 4 of the EU AI Act. These are independent conformity assessment organizations appointed by a Notifying Authority in each Member State and formally notified to (and recognized by) the European Commission.

What will Notified Bodies do?

Notified Bodies are responsible for evaluating certain high-risk AI systems before they can be placed on the EU market or put into use. Their task is to check whether these systems comply with the applicable requirements of the AI Act, from risk management to transparency to data quality.

This is critical for providers of high-risk AI systems (see our EU AI Act Key Takeaways resources) and follows the product safety legislation principles on which the AI Act is built.

Important distinction:

Do not confuse Notified Bodies with Market Surveillance Authorities. The latter enforce the Act after AI systems are on the market, e.g., through inspections, audits, and potential penalties for non-compliance.

2 New rules for General-Purpose AI models

Chapter V of the AI Act brings forward a tailored framework for General-Purpose AI (GPAI) models: the foundation models that power many of the Generative AI tools we use today, including large language models (LLMs).

What should GPAI providers do?

Under the new rules, providers of GPAI models must (a rough sketch of how these items might be tracked follows the list):

  • Prepare and maintain detailed technical documentation, as outlined in Annex XI (not to be confused with Annex IV, which outlines the required technical documentation for high-risk AI systems – see How to decide if your system qualifies as an AI system?)
  • Share information with downstream AI system developers
  • Publish summaries of the data used to train the model (where feasible and appropriate)
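
For illustration only, here is a minimal sketch of how a provider might track these documentation items internally. This is our own hypothetical structure in Python, not a prescribed format: the legally binding list of contents is the one set out in Annex XI.

```python
# Hypothetical internal record for Annex XI-style documentation items.
# Field names are illustrative shorthand, not the wording of the Act.
from dataclasses import dataclass, field


@dataclass
class GPAIDocumentationRecord:
    model_name: str
    model_description: str            # general description of the model
    intended_tasks: list[str]         # tasks the model is designed to perform
    training_data_summary: str        # public summary of training data, where feasible
    downstream_info_package: str      # information shared with downstream AI system developers
    known_limitations: list[str] = field(default_factory=list)


# Example usage with made-up values
record = GPAIDocumentationRecord(
    model_name="example-llm",
    model_description="Decoder-only large language model for text generation.",
    intended_tasks=["text generation", "summarisation"],
    training_data_summary="Publicly available web text and licensed corpora (summary).",
    downstream_info_package="docs/downstream_integration_guide.md",
)
```

Keeping these items in a structured record also makes it easier to keep the documentation up to date as the model evolves.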

In the same way as AI systems go through a risk classification, the AI Act introduces a risk-based approach for GPAI models. Models deemed to pose “systemic risk” face additional obligations, including more stringent transparency and risk mitigation requirements.

There are limited exceptions for open-source GPAI models, unless they pose systemic risks.

What about existing models?

Providers of GPAI models already on the market before August 2, 2025 have a two-year transitional period, until August 2, 2027, to comply (see Article 111(3)).

Still waiting on Codes of Practice

To support compliance, the Commission and stakeholders are developing official Codes of Practice. These aim to clarify how obligations can be met in practice.

While the initial deadline for publishing the first Codes of Practice was May 2, 2025, it’s now clear this target will be missed.

3 Governance structure goes live

Chapter VII of the Act introduces the formal governance structure to oversee AI deployment and compliance across the EU.

This includes:

  • The AI Office, established as a function within the European Commission, is tasked with both operational and supervisory responsibilities
  • The AI Board, composed of representatives from each Member State, plays a coordinating and advisory role
  • In addition, each Member State must designate its National Competent Authorities:
    • At least one Notifying Authority (for appointing Notified Bodies)
    • At least one Market Surveillance Authority (to monitor compliance post-market)

This structure should help ensure consistent application and enforcement of the Act across the EU.

Different AI bodies

To avoid confusion between the EU’s AI bodies:

The AI Board (Art. 65) represents the EU Member States and plays an advisory and coordination role (Art. 66). The AI Office (Art. 64) is a function within the European Commission that has operational and supervisory responsibilities.

The AI Office provides the secretariat for the AI Board but has no voting rights.

These bodies function similarly to the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), but with some notable differences. For instance, the EDPS oversees EU institutions, while the AI Office has direct enforcement powers over general-purpose AI models (Chapter V and Art. 88); and the EDPS is an independent EU authority, whereas the AI Office is a function within the European Commission.

4 Confidentiality provisions from Article 78 start to apply

The confidentiality article requires the European Commission and national authorities to:

  • Collect only the data necessary for their enforcement and supervisory tasks
  • Handle sensitive business information with appropriate confidentiality and security measures
  • Delete data once it’s no longer required

This protects:

  • Trade secrets and intellectual property
  • Source code
  • Investigation integrity
  • National security concerns

These safeguards are vital for building trust between regulators and industry — and for ensuring that innovation doesn’t come at the expense of sensitive business data.

5 The penalties get real

Perhaps most importantly for risk managers and legal teams: Chapter XII’s penalty provisions also kick in.

From August 2, 2025, administrative fines become enforceable. For undertakings, each cap is the fixed amount or the percentage of total worldwide annual turnover for the preceding financial year, whichever is higher (a rough calculation sketch follows the list):

  • Up to €35 million or 7% of global annual turnover for breaches of prohibited AI practices (Article 5)
  • Up to €15 million or 3% for non-compliance with other obligations
  • Up to €7.5 million or 1.5% for providing false or misleading information to Notified Bodies or national authorities
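
As a rough, non-authoritative illustration of how these caps work, here is a small Python sketch with an invented turnover figure; the function name and example numbers are ours, only the fixed amounts and percentages come from the Act.

```python
# Illustrative only: the fine caps are the fixed amount or the percentage of
# worldwide annual turnover, whichever is higher (for undertakings).
# The turnover figure below is invented for the example; this is not legal advice.

def fine_cap(fixed_cap_eur: float, turnover_pct: float, worldwide_turnover_eur: float) -> float:
    """Return the applicable cap: the higher of the fixed and turnover-based amounts."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)


turnover = 2_000_000_000  # hypothetical EUR 2 billion worldwide annual turnover

print(fine_cap(35_000_000, 0.07, turnover))   # prohibited practices   -> 140,000,000
print(fine_cap(15_000_000, 0.03, turnover))   # other obligations      ->  60,000,000
print(fine_cap(7_500_000, 0.015, turnover))   # misleading information ->  30,000,000
```

This reflects the general rule only; the Act contains further nuances, for example for SMEs and start-ups, where the lower of the two amounts applies.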

Final thoughts on the 2 August 2025 EU AI Act milestone

Since 2 February 2025, the first provisions on AI literacy and prohibited AI practices have already applied. The provisions discussed above mark a turning point for AI governance in Europe. While most of the remaining obligations won’t apply until August 2026, the activation of Notified Bodies, GPAI model rules, governance, confidentiality, and penalties this year should capture your attention.

This post is not an exhaustive overview. If you’re working on implementation or advising others, we’d love to hear what you think is missing. Always check the official text of the regulation.

Or, if you require assistance with implementing AI governance, contact us to discuss your needs.