The EU AI Act was officially adopted in May 2024 and came into force in August 2024. This landmark regulation establishes a risk-based legal framework that categorizes AI systems by their potential to cause harm, placing strict obligations on “high-risk” applications used in critical infrastructure, healthcare, law enforcement, and other sensitive domains.
To demonstrate readiness, organizations must implement comprehensive risk management systems, maintain rigorous data governance, produce detailed technical documentation, ensure human oversight, and achieve high standards of accuracy and cybersecurity.
The Act is being enforced on a phased timeline: unacceptable-risk AI systems have been prohibited since February 2025, and high-risk systems must comply by August 2026. Penalties for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher.
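To illustrate how that penalty ceiling scales with company size, here is a minimal sketch in Python. The function name and example figures are illustrative, not taken from the Act's text; actual fines depend on the infringement category and are set by regulators.

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious violations:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Example: a company with EUR 2 billion in turnover faces a ceiling of EUR 140 million,
# because 7% of turnover exceeds the EUR 35 million floor.
print(max_penalty_eur(2_000_000_000))  # 140000000.0
```

For smaller companies the €35 million floor dominates; for large ones the 7% figure does, which is why the cap bites hardest on multinationals.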
That being said, the Act also offers support measures, such as AI literacy training, to help businesses, particularly SMEs, adapt to these changing requirements. We’re going to walk you through an EU AI Act compliance checklist to help you ensure your business stays on the right side of these regulations.
What Is the Goal of This Act?
The main objective of the EU AI Act is to ensure that all AI systems used in the EU are:
- Safe
- Transparent
- Traceable
- Non-discriminatory
- Environmentally friendly
To prevent harm, the Act also requires that AI systems be overseen by humans rather than left entirely to automated technology.
The Act aims to protect users and the general public from safety risks and ethical or human rights violations.
What the Act Prohibits
According to the EU AI Act, the following AI practices have been classed as posing an “unacceptable risk” and have been prohibited since February 2025:
Manipulation and Exploitation
AI systems that use subliminal or manipulative techniques to change behavior and undermine informed decision-making are prohibited. This is especially relevant when they target vulnerable groups, such as children, or exploit vulnerabilities related to age, disability, mental capacity, or socio-economic status.
Social Scoring
AI systems that evaluate or classify individuals based on social behavior or personal characteristics are banned where the resulting score leads to unfair or disproportionate treatment.
Sensitive Biometric Categorization
AI systems that categorize people based on their biometric data to infer race, political opinions, religious beliefs, sexual orientation, or trade union membership are prohibited. Exceptions apply to lawfully acquired biometric datasets and to certain law enforcement purposes.
Criminal Risk Profiling
AI systems that assess the likelihood of a person committing a crime based solely on profiling or personality traits are banned. However, AI may be used to support a human assessment, provided that assessment is grounded in objective, verifiable facts directly linked to criminal activity.
Untargeted Facial Recognition Databases
Compiling facial recognition databases by indiscriminate scraping of facial images from the internet or CCTV footage is prohibited.
Emotion Recognition
AI systems that infer emotions in workplaces or educational institutions are banned unless they are used for safety or medical reasons.
Real-Time Remote Biometric Identification
The use of AI for live facial recognition in public spaces by law enforcement is prohibited. Exceptions to this rule are very limited and may apply in cases of searches for missing persons, prevention of imminent threats, or identification of suspects in serious crimes. For example, biometric identification may be allowed at a border crossing if there are reasonable grounds to suspect human trafficking.
EU AI Compliance Checklist
Organizations must take certain steps to make sure their use of AI aligns with the EU AI Act. Every company operating within the EU should have a comprehensive compliance program based on risk classification and continuous monitoring.
Here are the most important actions organizations should take:
Immediate Actions
Identify and discontinue prohibited AI systems by February 2025, including manipulative AI and certain biometric categorization systems, as these are banned outright with no grace period.
Foundational Steps
These are foundational changes that ensure your standard of AI governance holds up as the organization grows and its systems evolve.
- Create a complete AI inventory cataloguing the purpose and risk level of every AI system (a sketch of one possible record format follows this list).
- Clarify organizational roles related to AI systems to determine specific obligations.
- Establish structures of governance, including creating policies and an internal AI oversight committee that defines responsibilities and decision-making processes.
- Implement mandatory AI literacy training for all staff involved with AI systems. Training should be role-specific and account for which AI applications each person uses.
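An inventory is easier to keep current when every system is captured in a consistent record. Below is a minimal sketch of one possible record format in Python; the field names and risk tiers are illustrative choices, not a structure prescribed by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Categories broadly following the Act's risk-based framework
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

@dataclass
class AISystemRecord:
    """One inventory entry: what the system does, who owns it, and its tier."""
    name: str
    purpose: str                        # intended use, in plain language
    risk_tier: RiskTier
    owner: str                          # accountable team or role
    is_gpai: bool = False               # general-purpose AI model?
    training_data_sources: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="resume-screener",
        purpose="Ranks job applications for recruiters",
        risk_tier=RiskTier.HIGH,        # employment uses are high-risk under the Act
        owner="HR Engineering",
        training_data_sources=["internal ATS records (2019-2024)"],
    ),
]

# A structured inventory makes compliance questions queryable,
# e.g. "which systems are high-risk?":
high_risk = [r.name for r in inventory if r.risk_tier is RiskTier.HIGH]
```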
Compliance Requirements
- Conduct risk assessments for each AI system, especially those classified as high-risk, and implement risk management systems.
- Ensure strict data governance by documenting the origin and quality of training data.
- Implement human oversight mechanisms that allow people to intervene in AI decision-making processes when needed (see the sketch after this list).
- Maintain technical documentation, including user manuals and training data summaries, that auditors can review.
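To make the human-oversight requirement concrete, here is a minimal sketch of one possible intervention pattern: automated decisions below a confidence threshold are held back and queued for a human reviewer. The threshold, function names, and stand-in model are all illustrative assumptions, not requirements from the Act.

```python
from typing import Callable

def decide_with_oversight(
    model_predict: Callable[[dict], tuple[str, float]],
    case: dict,
    review_queue: list,
    confidence_threshold: float = 0.85,
) -> str | None:
    """Route low-confidence decisions to a human reviewer
    instead of acting on them automatically."""
    decision, confidence = model_predict(case)
    if confidence < confidence_threshold:
        review_queue.append((case, decision, confidence))  # a human decides later
        return None  # no automated action taken
    return decision

# Example with a stand-in model that returns a low-confidence prediction:
queue = []
fake_model = lambda case: ("approve", 0.62)
result = decide_with_oversight(fake_model, {"id": 42}, queue)
assert result is None and len(queue) == 1  # escalated to a human
```

The key design point is that escalation is the default for uncertain cases, so human intervention is built into the decision path rather than bolted on afterward.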
Special Considerations
High-risk AI systems used in areas such as critical infrastructure, healthcare, and law enforcement must undergo pre-market assessment, be registered in the EU database, and meet strict accuracy standards.
Providers of GPAI (General Purpose Artificial Intelligence) models must supply technical documentation and conduct model evaluations if systemic risks are identified.
All things considered, the EU AI Act is designed to protect people. While some organizations will have to adjust how they use AI, it is in every business’s interest to reach compliance early as this legal framework is phased in.
FAQ
Does the EU AI Act apply to our company if we’re based in the US or UK but serve EU customers online?
Yes, the Act has extraterritorial reach. If your AI system’s outputs are used within the EU, regardless of where your servers sit or where you’re incorporated, you must comply with the prohibited practices rules.
What’s the difference between being an AI “provider” versus a “deployer,” and why does it matter for fines?
A provider develops or commissions the AI system; a deployer simply uses it. Providers bear the heaviest compliance burden, while deployers must follow the provider’s instructions and ensure human oversight.
Do we need to halt our current AI projects immediately, or is there a grace period for systems already in production?
Any system deemed high-risk must meet the August 2026 deadline regardless of when it was deployed. However, prohibited practices (like social scoring) had to cease by February 2025 with zero tolerance for legacy systems. You can continue development, but new high-risk systems launched after August 2026 must be compliant from day one.
Can our existing Data Protection Officer handle AI Act compliance, or do we need specialized legal counsel?
While the skill sets overlap on data governance, the AI Act covers technical risk management and conformity assessments that go beyond GDPR. SMEs can start with the DPO, but high-risk applications usually require a dedicated AI compliance officer or external counsel who understands both the technical standards.
