
EU AI Act Compliance: A Practical Checklist

Onyx Team
EU AI Act · Compliance · Regulation · Risk Management

The EU AI Act entered into force in August 2024, and its obligations phase in through 2027, with most high-risk requirements applying from August 2026. If you’re deploying AI systems in Europe, you need a compliance strategy—not just legal review, but operational changes to how you build, document, and monitor AI.

Here’s a practical breakdown.

🎯 Risk Categories

The Act classifies AI systems into four tiers:

🚫 Unacceptable Risk (Banned)

  • Social scoring by governments
  • Subliminal manipulation
  • Exploitation of vulnerabilities
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions)

Most enterprises won’t touch these.

⚠️ High Risk

  • Employment/recruitment decisions
  • Credit scoring
  • Law enforcement tools
  • Critical infrastructure control
  • Educational assessment
  • Biometric identification/categorization

These require conformity assessment, documentation, and ongoing monitoring.

👁️ Limited Risk (Transparency Obligations)

  • Chatbots and deepfakes (must disclose AI use)
  • Emotion recognition systems
  • Biometric categorization

Lighter requirements, but disclosure is mandatory.

✅ Minimal Risk

  • Spam filters, inventory management, recommendation engines

No specific obligations beyond general product liability.

✅ Compliance Checklist for High-Risk Systems

If you’re in the high-risk category, you must:

1️⃣ Risk Management System

  • 📊 Document potential harms and mitigation strategies
  • 🔍 Test for bias across protected attributes (gender, ethnicity, age, etc.; sketch after this list)
  • 📈 Establish monitoring for drift and performance degradation
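
As a concrete starting point, here is a minimal bias-testing sketch in Python. It assumes a pandas evaluation table with hypothetical prediction, label, and gender columns; the 0.2 gap tolerance is illustrative, not a threshold from the Act.

```python
import pandas as pd

# Hypothetical evaluation table: one row per decision, with the model's
# prediction, the ground-truth label, and a protected attribute.
df = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 1, 0, 0],
    "label":      [1, 0, 0, 1, 0, 1, 1, 0],
    "gender":     ["f", "f", "f", "f", "m", "m", "m", "m"],
})

def group_metrics(frame: pd.DataFrame, attribute: str) -> pd.DataFrame:
    """Selection rate and true-positive rate per protected group."""
    rows = []
    for group, g in frame.groupby(attribute):
        positives = g[g["label"] == 1]
        rows.append({
            attribute: group,
            "selection_rate": g["prediction"].mean(),
            "tpr": positives["prediction"].mean() if len(positives) else float("nan"),
        })
    return pd.DataFrame(rows)

metrics = group_metrics(df, "gender")
print(metrics)

# Flag the attribute when the between-group gap exceeds a tolerance.
# The 0.2 threshold here is illustrative, not a number from the Act.
gap = metrics["selection_rate"].max() - metrics["selection_rate"].min()
if gap > 0.2:
    print(f"WARNING: selection-rate gap {gap:.2f} exceeds tolerance")
```

The same loop extends to any protected attribute in your evaluation data; run it on every release so bias regressions surface alongside accuracy regressions.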

2️⃣ Data Governance

  • 🎯 Ensure training data is relevant, representative, and examined for possible biases
  • 📝 Document data sources, preprocessing, and validation
  • ✓ Implement data quality checks throughout the lifecycle
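
A minimal sketch of what those lifecycle checks can look like in Python, with hypothetical column names and illustrative thresholds:

```python
import pandas as pd

# Hypothetical training table; columns and limits are assumptions.
train = pd.DataFrame({
    "income": [42_000, 55_000, None, 61_000],
    "age":    [34, 29, 51, 17],
    "region": ["DE", "FR", "DE", "DE"],
})

checks = {
    # Completeness: at most 1% missing values in any column.
    "completeness": train.isna().mean().max() <= 0.01,
    # Validity: ages must fall inside the system's approved range.
    "age_range": bool(train["age"].between(18, 100).all()),
    # Representativeness: no single region dominates the sample.
    "region_balance": train["region"].value_counts(normalize=True).max() <= 0.8,
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

Keeping checks like these in version control gives you the documented, repeatable validation the Act expects.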

3️⃣ Technical Documentation

  • 🏗️ Architecture diagrams and design choices
  • 🃏 Model cards (training data, performance metrics, limitations; sketch below)
  • 🔄 Version control and reproducibility
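
One lightweight way to keep model cards versioned alongside the model is to treat them as structured data. The schema below is an assumption based on common model-card practice, not a format prescribed by the Act, and the credit-scorer example is hypothetical:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal model card. Fields follow common model-card practice;
    the Act prescribes documentation content, not this exact schema."""
    name: str
    version: str
    intended_use: str
    training_data: str
    metrics: dict = field(default_factory=dict)
    limitations: list = field(default_factory=list)

# Hypothetical high-risk system used for illustration.
card = ModelCard(
    name="credit-scorer",
    version="2.3.1",
    intended_use="Consumer credit pre-screening; final decision by a human.",
    training_data="Internal applications 2019-2023 (data sheet reference is illustrative).",
    metrics={"auc": 0.87, "selection_rate_gap": 0.04},
    limitations=["Not validated for applicants under 21."],
)

# Store the card with the model artifact so every release ships its own.
print(json.dumps(asdict(card), indent=2))
```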

4️⃣ Transparency & Human Oversight

  • 📖 Clear instructions for users
  • 👤 Human-in-the-loop for high-stakes decisions (routing sketch below)
  • 🔓 Ability to override or contest AI decisions
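
A simple routing pattern for human-in-the-loop review: auto-decide only when the model is confident, and escalate borderline cases to a person. The threshold and uncertainty band below are illustrative values, not numbers from the Act:

```python
def route_decision(score: float, threshold: float = 0.5, band: float = 0.1) -> str:
    """Send borderline scores to a human reviewer instead of auto-deciding.

    The 0.1 uncertainty band around the threshold is illustrative; in
    practice it would be calibrated and documented per use case.
    """
    if abs(score - threshold) < band:
        return "human_review"  # a person makes and records the final call
    return "approve" if score >= threshold else "reject"

for score in (0.92, 0.55, 0.12):
    print(score, "->", route_decision(score))
# 0.92 -> approve, 0.55 -> human_review, 0.12 -> reject
```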

5️⃣ Accuracy, Robustness, Cybersecurity

  • 🎯 Performance benchmarks and ongoing validation (release-gate sketch below)
  • 🛡️ Adversarial testing and security audits
  • 📋 Logging and auditability
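
Ongoing validation can be wired into release tooling as a regression gate. A sketch, assuming baseline metrics recorded in the model card and an illustrative tolerance:

```python
# Hypothetical release gate: block promotion when benchmark metrics fall
# below the documented baseline. Baseline values would come from the
# model card; the 0.02 tolerance is illustrative.
BASELINE = {"accuracy": 0.90, "recall": 0.85}

def regressions(candidate: dict, tolerance: float = 0.02) -> list:
    """Return the names of metrics that regressed beyond the tolerance."""
    return [
        name for name, floor in BASELINE.items()
        if candidate.get(name, 0.0) < floor - tolerance
    ]

new_model = {"accuracy": 0.91, "recall": 0.80}  # frozen-benchmark results
failed = regressions(new_model)
print("Blocked:" if failed else "Promoted", failed)
```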

6️⃣ Record-Keeping

  • 💾 Automatic logging of system events (inputs, outputs, decisions; sketch below)
  • ⏱️ Retention periods aligned with regulatory requirements
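
A minimal sketch of per-decision audit logging. The field names and retention value are assumptions to illustrate the shape of a record; in production, retention would be enforced by the log store, not the application:

```python
import json
import time
import uuid

def log_decision(inputs: dict, output: str, model_version: str) -> dict:
    """Append one structured record per decision to an append-only log."""
    now = time.time()
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": now,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "retain_until": now + 6 * 365 * 86_400,  # policy-defined period, illustrative
    }
    with open("decisions.log", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_decision({"applicant_id": "A-1024", "score": 0.73}, "approve", "2.3.1")
```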

🤝 What This Means for Sovereign AI

The Act’s requirements align naturally with sovereign AI principles:

  • 🏢 On-premise/hybrid deployments — auditing and data governance are easier when data never leaves your infrastructure
  • 💡 Explainable models — high-risk systems must be transparent enough for deployers to interpret their outputs
  • 🔐 Local control — simplifies compliance compared with relying on third-party APIs with opaque internals

🚀 Getting Started

  1. 🗂️ Classify your systems — Map AI use cases to risk tiers (this is often non-obvious—consult legal counsel)
  2. 🔎 Gap analysis — Compare current practices to Act requirements
  3. 🛠️ Technical roadmap — Implement missing controls (logging, bias testing, documentation)
  4. ♻️ Ongoing governance — Compliance isn’t a one-time project—build it into your development lifecycle

At Onyx, we help organizations navigate both the regulatory and technical sides of AI Act compliance. Reach out if you’d like a readiness assessment.