The AI Act is the first legal framework for AI. It addresses AI risks and helps Europe take a leading role globally.
The AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework for AI. Its goal is to encourage trustworthy AI in Europe. If you have questions about the AI Act, visit the AI Act Single Information Platform.
The AI Act introduces risk-based rules for AI developers and users depending on how AI is used. It constitutes part of a broader set of policies to support trustworthy AI, including the AI Continent Action Plan, the AI Innovation Package, and the launch of AI factories. These efforts seek to ensure safety, protect fundamental rights, promote human-centered AI, and boost AI adoption, investment, and innovation across the EU.
To help with the move to the new rules, the Commission has started the AI Pact. This voluntary program supports future implementation, involves stakeholders, and invites AI providers and users from Europe and elsewhere to follow the AI Act’s main rules early. At the same time, the AI Act Service Desk provides information and support to ensure the AI Act is implemented smoothly across the EU.
What are the rules on AI?
The AI Act makes sure that Europeans can trust what AI has to offer. While most AI systems pose limited or no risk and can help solve many societal challenges, some pose risks that we must address to avoid undesirable outcomes.
For example, it is often hard to know why an AI system made a certain decision or prediction. This can make it difficult to tell if someone was treated unfairly, such as in hiring or when applying for public benefits.
Current laws offer some protection, but they are not sufficient to address the specific challenges posed by AI systems.
In Brief
The EU AI Act was published in the Official Journal of the European Union on 12 July 2024. Companies that develop or use AI technologies should note that the Act entered into force 20 days later, on 1 August 2024. Most of its rules will apply from 2 August 2026, but some provisions have different deadlines based on the risk level of the AI systems.
Recommended Actions
The Act covers all stages of working with AI. If you develop or use AI and have not yet assessed how the Act will affect your business, now is a good time to start. Review your AI systems to determine whether they fall under the Act and which risk category applies to each, as in the illustrative sketch below.
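To make this review concrete, a compliance team might keep a simple internal inventory that tags each system with the Act's risk tiers (prohibited practices, high-risk, transparency obligations, minimal risk) and flags GPAI models separately. The Python sketch below is purely illustrative: the class names, example systems, and tier assignments are our own assumptions, not part of the Act, and real classification requires case-by-case legal analysis.

```python
from dataclasses import dataclass
from enum import Enum

# Informal labels for the AI Act's risk tiers (the Act itself defines
# prohibited practices, high-risk systems, transparency obligations,
# and GPAI models in far more detail).
class RiskTier(Enum):
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH_RISK = "high-risk (Annex III or regulated product)"
    TRANSPARENCY = "limited risk / transparency obligations"
    MINIMAL = "minimal risk"

@dataclass
class AISystem:
    name: str
    use_case: str          # free-text description of the deployment context
    tier: RiskTier         # assigned after legal review, not automatically
    is_gpai: bool = False  # GPAI models carry separate obligations

# Hypothetical inventory entries for a compliance review.
inventory = [
    AISystem("cv-screener", "ranking job applicants", RiskTier.HIGH_RISK),
    AISystem("support-chatbot", "customer service", RiskTier.TRANSPARENCY),
    AISystem("spam-filter", "internal email filtering", RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name}: {system.tier.value}")
```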
In More Detail
Most of the EU AI Act will apply from 2 August 2026, but some rules have earlier or later deadlines. Companies should track these timelines according to the risk category of their AI systems; a consolidated sketch of the key dates follows the timeline below.
1 August 2024: The EU AI Act enters into force.
2 February 2025: The bans on prohibited AI practices begin. These include:
- Subliminal techniques
- Systems that take advantage of vulnerable groups
- Biometric categorization to infer sensitive characteristics
- Social scoring
- Individual predictive policing
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- Emotion recognition in workplaces and schools
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes
In some cases, specific thresholds and limited exceptions to these bans apply.
2 May 2025: The AI Office, working with Member States and industry, will help develop codes of practice for providers of general-purpose AI (GPAI) models. The Act defines a general-purpose AI model as one trained on large amounts of data using self-supervision at scale, that displays significant generality and can competently perform a wide range of distinct tasks regardless of how it is placed on the market, and that can be integrated into a variety of downstream systems or applications (excluding AI models used for research, development, or prototyping activities before they are placed on the market). If these codes of practice are not ready, or are not considered adequate, by 2 August 2025, the Commission may establish common rules for GPAI providers.
2 August 2025:
- GPAI and governance obligations now apply. These are generally less strict than those for high-risk systems, but still require:
- Technical documentation
- A policy to comply with copyright law
- A sufficiently detailed summary of the training dataset
- GPAI models with systemic risk must meet additional requirements, including model evaluations, assessment and mitigation of systemic risks, serious-incident reporting, and adequate cybersecurity protection.
- Rules on notifying authorities now apply. Member States must designate competent authorities and lay down rules on penalties and administrative fines.
2 August 2026:
- Obligations for high-risk AI systems listed in Annex III now apply. These cover areas such as biometrics, critical infrastructure, education, employment, access to essential public and private services, law enforcement, migration, and the administration of justice. The rules include pre-market conformity assessment, quality and risk management, and post-market monitoring.
- Each Member State must have at least one national regulatory sandbox for AI in place.
2 August 2027: High-risk rules now apply to products that already require third-party conformity assessment, such as:
- Toys
- Radio equipment
- In vitro diagnostic medical devices
- Agricultural vehicles
GPAI models placed on the market before 2 August 2025 must now comply with the Act.
31 December 2030: AI systems that are components of large-scale IT systems listed in Annex X and were placed on the market or put into service before 2 August 2027 must now comply with the Act.
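For planning purposes, the milestones above can be collected into a simple lookup. The snippet below merely restates the dates listed in this timeline; it is an illustrative planning aid, not an authoritative source, and deadlines should always be verified against the Regulation itself.

```python
from datetime import date

# Key AI Act milestones as listed in the timeline above.
MILESTONES = {
    date(2024, 8, 1): "Entry into force",
    date(2025, 2, 2): "Prohibited-practice bans apply",
    date(2025, 5, 2): "GPAI codes of practice due",
    date(2025, 8, 2): "GPAI and governance obligations; penalty rules",
    date(2026, 8, 2): "Annex III high-risk obligations; national sandboxes",
    date(2027, 8, 2): "High-risk rules for third-party-assessed products; pre-2025 GPAI models",
    date(2030, 12, 31): "Legacy large-scale IT systems (Annex X) must comply",
}

def upcoming(today: date):
    """Return milestones that have not yet passed, in date order."""
    return sorted((d, label) for d, label in MILESTONES.items() if d >= today)

for d, label in upcoming(date.today()):
    print(d.isoformat(), "-", label)
```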
Baker McKenzie’s team of experts can help you with every aspect of EU AI Act compliance, responsible AI governance, and related policies and processes.
Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai