The AI Act (Regulation (EU) 2024/1689) is the first comprehensive legal framework for AI worldwide. It addresses the risks of AI, positions Europe to play a leading role globally, and aims to foster trustworthy AI in Europe. For questions, please visit the AI Act single information platform.  

The AI Act establishes risk-based rules for AI developers and deployers. It is part of a broader policy package that includes the AI Continent Action Plan, the AI Innovation Package, and the launch of AI factories. Together, these measures promote safety and fundamental rights for human-centric AI, and support AI adoption, investment, and innovation across the EU.  

To support the transition to the new framework, the Commission launched the AI Pact, a voluntary initiative that encourages early compliance with the AI Act and stakeholder engagement. The AI Act Service Desk also provides information and support for effective implementation across the EU.  

Why Do We Need Rules on AI? 

The AI Act aims to build trust in AI for Europeans. While most AI systems entail little or no risk and can help address societal challenges, some systems present risks that require regulation to prevent negative outcomes.  

Example – It is often difficult to determine why an AI system made a specific decision or prediction, which can make it hard to assess whether someone was unfairly disadvantaged, such as in a hiring decision or an application for public benefits.  

Existing legislation offers some protection but does not fully confront the unique challenges posed by AI systems.  

A Risk-Based Approach 

Unacceptable risk 

AI systems that clearly threaten safety, livelihoods, or rights are banned. The AI Act prohibits eight specific practices:  

  1. Harmful AI-based manipulation and deception  
  2. Harmful AI-based exploitation of vulnerabilities  
  3. Social scoring  
  4. Individual criminal offense risk assessment or prediction  
  5. Untargeted scraping of the internet or CCTV material to create or expand facial recognition databases  
  6. Emotion recognition in workplaces and educational institutions  
  7. Biometric categorization to deduce certain protected characteristics  
  8. Real-time remote biometric identification for law enforcement purposes in publicly accessible spaces  

The prohibitions took effect in February 2025. To support their practical application, the Commission published two key documents:  

  • The Guidelines on Prohibited AI Practices under the AI Act provide legal explanations and concrete examples to help stakeholders understand and comply with the prohibitions.  
  • The AI system definition guidelines help stakeholders determine whether a system falls within the scope of the AI Act.  

High Risk 

Use cases that can seriously affect health, safety, or fundamental rights are classified as high-risk. Here are some examples:  

  • AI safety components in critical infrastructure, such as transport, where a failure can put people’s lives or health at risk.  
  • Tools used in schools or universities that can affect access to education or the course of someone’s professional life, such as exam scoring systems.  
  • AI-based safety components of products, such as those used in robot-assisted surgery.  
  • AI tools used for hiring, managing employees, or helping people access self-employment, for example, software that sorts CVs for recruitment.  
  • AI use cases that determine access to essential private and public services, such as credit scoring, which can deny people the chance to get a loan.  
  • AI systems used for remote biometric identification, emotion recognition, and biometric categorization, for example, a system that identifies a shoplifter after the fact.  
  • AI use cases in law enforcement that could affect people’s fundamental rights, such as evaluating the reliability of evidence.  
  • AI use cases in migration, asylum, and border control management, for example, automated examination of visa applications.  
  • AI solutions used in the administration of justice and democratic processes, such as tools that help prepare court rulings.  

High-risk AI systems are subject to strict obligations before they can be placed on the market:  
  • Adequate risk assessment and mitigation systems.  
  • High-quality datasets feeding the system, to minimize the risk of discriminatory outcomes.  
  • Logging of activity to ensure traceability of results (see the sketch after this list).  
  • Detailed documentation that provides all necessary information about the system and its purpose, enabling authorities to verify compliance with the rules.  
  • Clear and sufficient information provided to the deployer using the system.  
  • Appropriate measures to ensure human oversight of the system.  
  • A high level of robustness, cybersecurity, and accuracy.  
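
To make the logging obligation concrete, here is a minimal Python sketch of an audit trail around a model's predictions. The model, field names, and log format are illustrative assumptions, not requirements prescribed by the AI Act.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Write one JSON line per prediction so each result can be traced back
# to its inputs and model version. The schema here is an assumption.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def predict_with_audit_trail(model, features: dict) -> dict:
    """Run a prediction and record a traceable audit entry."""
    record_id = str(uuid.uuid4())
    prediction = model(features)  # hypothetical callable model
    logging.info(json.dumps({
        "record_id": record_id,  # links the output to this log entry
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": getattr(model, "version", "unknown"),
        "input_features": features,
        "prediction": prediction,
    }))
    return {"record_id": record_id, "prediction": prediction}

if __name__ == "__main__":
    # Stand-in model for demonstration purposes only.
    def demo_model(features):
        return {"score": 0.72, "label": "review"}
    demo_model.version = "cv-screener-0.1"  # hypothetical version tag
    print(predict_with_audit_trail(demo_model, {"years_experience": 4}))
```

Each prediction is written as a self-contained JSON line, so an auditor or authority could reconstruct which inputs and model version produced a given result.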

The rules for high-risk AI systems will start to apply in two stages: August 2026 for most high-risk use cases and August 2027 for high-risk AI embedded in regulated products.  

Transparency risk 

This category covers AI systems whose risks can be managed through disclosure. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, thereby preserving trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can make informed decisions.  

Moreover, providers of generative AI must ensure that AI-generated content is identifiable. In addition, certain AI-generated content must be clearly and visibly labelled, namely deepfakes and text published to inform the public on matters of public interest.  
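
As a rough illustration of these disclosure obligations, the Python sketch below shows a chatbot reply that carries both a human-readable disclosure and a machine-readable "AI-generated" marker. The AI Act does not prescribe this format; all names and fields here are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

DISCLOSURE = "You are chatting with an AI system, not a human."

@dataclass
class LabelledContent:
    """AI-generated text bundled with a machine-readable label."""
    text: str
    ai_generated: bool = True            # machine-readable marker
    generator: str = "example-chatbot"   # hypothetical system identifier
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def respond(user_message: str) -> LabelledContent:
    reply = f"(demo reply to: {user_message})"  # stand-in for a real model call
    return LabelledContent(text=reply)

if __name__ == "__main__":
    print(DISCLOSURE)  # human-readable disclosure shown up front
    labelled = respond("Is my parcel on the way?")
    print(labelled.text)
    print("ai_generated:", labelled.ai_generated)
```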

The transparency rules of the AI Act will come into effect in August 2026.  

Minimal Or No Risk 

The AI Act does not impose rules on AI systems considered to pose minimal or no risk. Most AI systems used in the EU fall into this group. Examples include AI in video games or spam filters.  

How does it all work in practice for providers of high-risk AI systems? 

Before a high-risk AI system is placed on the market, the provider must complete a conformity assessment, register the system in the EU database, and affix the CE marking. Once the system is on the market, authorities are responsible for market surveillance, deployers ensure human oversight and monitoring, and providers maintain a post-market monitoring system. Providers and deployers also report serious incidents and malfunctions.  
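
As a sketch of what incident tracking might look like on the provider side, the snippet below models a serious-incident record. The fields are illustrative assumptions; the AI Act defines the reporting duty, not this data structure.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SeriousIncidentReport:
    """Record a provider might keep as part of post-market monitoring."""
    system_name: str
    incident_description: str
    detected_at: str
    reported_to_authority: bool = False  # flipped once the report is filed

    def mark_reported(self) -> None:
        self.reported_to_authority = True

if __name__ == "__main__":
    report = SeriousIncidentReport(
        system_name="cv-screener-0.1",  # hypothetical high-risk system
        incident_description="Deployer flagged a systematic rejection pattern.",
        detected_at=datetime.now(timezone.utc).isoformat(),
    )
    report.mark_reported()
    print(report)
```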

What Are the Rules for General-Purpose AI Models? 

General-purpose AI (GPAI) models can perform many different tasks and are now the foundation of many AI systems in the EU. Some of these models could pose greater risks if they are very capable or widely used. To keep AI safe and trustworthy, the AI Act sets rules for providers of these models, including requirements for disclosure and copyright. If a model could pose greater risks, providers must identify and mitigate them. The GPAI rules began to apply in August 2025. 

Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai 
