EU AI Act Enforcement 2026: The Prohibited Practices Ban Is Now in Force
AI, Compliance
The EU AI Act is no longer a future problem. Enforcement is underway: the first category of rules, the prohibited AI practices, entered into application on February 2, 2025, with the full penalty regime activated in August 2025. Companies found using banned AI systems anywhere in the EU now face fines of up to €35 million or 7% of global annual turnover, whichever is higher.
Most SMBs do not build banned AI systems on purpose. The real risk is subtler: many SaaS platforms have quietly added AI features that their vendors never flagged as potentially prohibited. If you have not audited your tools against the prohibited practices list, now is the time.
What changed on February 2, 2025
The AI Act uses a phased rollout. The first rules to become binding are the strictest: a complete ban on AI systems that pose an unacceptable risk to people's rights. These rules entered into application on February 2, 2025, and the penalty regime followed in August 2025.
Since February 2025, any company operating in the EU, or offering services to people in the EU, must ensure it does not use, deploy, or provide AI systems that fall under any of the banned practices.
Enforcement sits with national authorities in each EU member state, coordinated at the EU level through the AI Office.
Which AI practices are banned
The AI Act prohibits the following types of AI systems. For background on how the AI Act's risk-based framework works, see our introduction.
1. Social scoring
AI systems that evaluate or classify people based on their social behavior or personal characteristics, leading to unfavorable treatment. Think: scoring employees based on social media activity, or rating customers on personal traits that have nothing to do with the service they receive.
2. Manipulative or deceptive AI
AI systems that use subliminal techniques, manipulative methods, or deceptive practices to distort a person's behavior in a way that causes, or is likely to cause, harm. This includes AI-powered dark patterns that nudge people toward decisions they would not otherwise make.
3. Exploitation of vulnerabilities
AI that exploits the vulnerabilities of specific groups, based on age, disability, or socioeconomic situation, to distort their behavior in harmful ways. A clear example: AI-driven marketing that specifically targets elderly people with misleading financial products.
4. Real-time biometric identification in public spaces
Using AI for real-time facial recognition or biometric identification in publicly accessible spaces for law enforcement purposes, with very narrow exceptions. This applies primarily to government and law enforcement, but private companies that provide such technology are equally affected.
5. Emotion recognition in workplaces and schools
AI systems that infer the emotions of employees in the workplace or students in educational settings. If you use tools that claim to detect worker engagement, stress levels, or student attention through facial analysis or voice patterns, those are now prohibited, outside narrow exceptions for medical and safety reasons.
6. Biometric categorization using sensitive characteristics
AI systems that categorize people based on biometric data to infer sensitive attributes like race, political opinions, religious beliefs, or sexual orientation.
7. Untargeted facial image scraping
AI systems that build facial recognition databases by scraping images from the internet or CCTV footage without consent.
Why this matters for SMBs
You might be thinking: "We do not build social scoring systems. This does not apply to us."
Fair enough: for the obvious cases, it probably does not. But the exposure for SMBs tends to arrive through the back door:
Vendor tools with hidden AI features. Many SaaS platforms have added AI capabilities in recent updates with little fanfare. Some HR tools now include "engagement analysis" or "sentiment detection" that may qualify as emotion recognition under the Act. Some marketing platforms use behavioral profiling that could edge into manipulation territory.
Third-party integrations. If you integrate tools that use AI to influence user behavior, you could be deploying a prohibited system without realizing it, and the prohibitions apply to deployers just as much as to the companies that build the systems.
Hiring and HR tools. AI-powered candidate screening, personality assessment, or video interview analysis need careful scrutiny. Some features sit uncomfortably close to prohibited or high-risk categories. If you are working through your GDPR compliance checklist at the same time, your AI tool review overlaps significantly; do them together.
The key question is not whether you built the AI. It is whether you use or deploy it.
How to audit your AI tools
You do not need a massive project. A focused, methodical review is enough to start.
Step 1: List all tools that use AI
Work through your software inventory and flag every tool that mentions AI, machine learning, or automated decision-making in its features or documentation. A minimal machine-readable inventory sketch follows the list. Pay particular attention to:
- HR and recruitment platforms
- Customer engagement and marketing tools
- Analytics and behavioral tracking software
- Communication monitoring tools
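If you want the inventory in a machine-readable form that the later steps can build on, a minimal sketch might look like the following. The tool names, categories, and fields here are illustrative assumptions, not a schema the AI Act prescribes.

```python
# ai_inventory.py - a minimal, illustrative AI tool inventory (Python 3.10+).
# Tool names, categories, and fields are hypothetical examples, not a
# required format.

from dataclasses import dataclass

@dataclass
class AITool:
    name: str               # vendor or product name
    category: str           # e.g. "HR", "marketing", "analytics", "communications"
    ai_features: list[str]  # features mentioning AI/ML or automated decision-making
    flagged: bool = False   # True if any feature needs the Step 2 screening

inventory = [
    AITool("ExampleHR Suite", "HR",
           ["candidate screening", "engagement analysis"], flagged=True),
    AITool("AcmeMail Campaigns", "marketing",
           ["send-time optimization"], flagged=False),
]

for tool in inventory:
    status = "REVIEW" if tool.flagged else "ok"
    print(f"[{status}] {tool.name}: {', '.join(tool.ai_features)}")
```

Even a spreadsheet works; the point is one row per tool with its AI features spelled out.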
Step 2: Check each tool against the prohibited list
For every flagged tool, ask:
- Does it analyze emotions, sentiment, or engagement of employees or users?
- Does it score or classify people based on personal or social characteristics?
- Does it use techniques designed to influence behavior in non-transparent ways?
- Does it process biometric data to infer sensitive personal attributes?
If the answer to any of these is yes, or genuinely unclear, escalate for further review.
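If you kept the inventory from Step 1 machine-readable, the four questions can be encoded as simple per-tool flags. This sketch shows one way to do that; the flag names and the escalation rule are our own illustration, not an official legal test.

```python
# screen_tools.py - illustrative screening against the four questions above
# (Python 3.10+). An unclear answer (None) escalates, matching the rule that
# "yes, or genuinely unclear" needs further review.

SCREENING_QUESTIONS = {
    "emotion_analysis": "Analyzes emotions, sentiment, or engagement of employees or users?",
    "social_scoring": "Scores or classifies people on personal or social characteristics?",
    "covert_influence": "Uses techniques designed to influence behavior non-transparently?",
    "biometric_inference": "Processes biometric data to infer sensitive attributes?",
}

def screen(tool_name: str, answers: dict[str, bool | None]) -> str:
    """Return an ESCALATE or CLEAR verdict for one tool."""
    for key in SCREENING_QUESTIONS:
        if answers.get(key) is not False:  # True or None/missing -> escalate
            return f"{tool_name}: ESCALATE on '{key}' (answer: {answers.get(key)})"
    return f"{tool_name}: CLEAR"

# Example: an HR tool whose emotion-analysis behavior is genuinely unclear.
print(screen("ExampleHR Suite", {
    "emotion_analysis": None,      # vendor docs are ambiguous -> escalate
    "social_scoring": False,
    "covert_influence": False,
    "biometric_inference": False,
}))
```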
Step 3: Contact your vendors
Ask vendors directly whether their AI features comply with the EU AI Act's prohibited practices provisions. Request written confirmation. A responsible vendor will have a clear answer ready. If they do not, that is a red flag worth taking seriously.
Step 4: Document your findings
Write it down. Even if every tool comes back clean, documented evidence of a proactive audit strengthens your position when regulators, customers, or partners come asking questions.
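There is no mandated format for this record; a dated entry per tool is enough. Purely as an illustration, continuing the assumed inventory from the earlier sketches, findings could be appended to a simple log file:

```python
# record_findings.py - append one dated audit record per reviewed tool.
# The file name and record fields are illustrative, not a required format.

import json
from datetime import date

def record_finding(tool: str, outcome: str, notes: str,
                   path: str = "ai_act_audit_log.jsonl") -> None:
    """Append a single JSON line documenting the review of one tool."""
    entry = {
        "date": date.today().isoformat(),
        "tool": tool,
        "outcome": outcome,   # e.g. "clear", "escalated", "retired"
        "notes": notes,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_finding("ExampleHR Suite", "escalated",
               "Vendor docs unclear on emotion analysis; written confirmation requested.")
```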
What comes next in the AI Act timeline
The prohibited practices are just the opening phase. Here is how the rollout continues:
- August 2025: General-purpose AI model rules took effect, covering transparency and copyright obligations for foundation models.
- August 2026: The full high-risk AI system requirements become enforceable. This includes mandatory risk assessments, human oversight, and technical documentation for AI used in areas like hiring, credit scoring, education, and law enforcement.
Start your audit now and you will be well ahead of the curve when the high-risk rules land in August 2026.
Steps to achieve AI Act compliance in 2026
- Run the audit. Use the steps above to review your current AI tools against the prohibited practices list. Set aside one to two hours for the initial pass.
- Update your AI inventory. If you followed our AI governance guide, add AI Act compliance status to each tool in your inventory.
- Talk to your vendors. Request AI Act compliance statements and build this into your standard vendor review process.
- Assign ownership. Decide who in your organization is responsible for ongoing AI Act compliance. It does not need to be a full-time role, but it does need a name attached to it.
- Plan for August 2026. Start assessing whether any of your AI tools fall into the high-risk category. Early preparation is far cheaper than last-minute remediation.
How ComplianceHive helps
ComplianceHive gives you a structured way to track AI tools, vendor compliance, and regulatory requirements in one place. You can:
- Maintain your AI tool inventory with risk classifications
- Track vendor DPAs and AI Act compliance statements
- Assign review tasks and ownership across your team
- Keep audit-ready evidence for regulators and customers
The AI Act does not have to be overwhelming. With the right structure, it is manageable.
This article is for general information and does not constitute legal advice. For legal interpretation, consult qualified counsel.