EU AI Act Compliance Guide for 2026

A practical guide to EU AI Act compliance for product and security teams — covering risk tiers, key obligations, documentation requirements, and how to build compliance into your AI development process.

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It entered into force in August 2024, with obligations phasing in through 2027. For AI product teams, security teams, and legal counsel, understanding what it requires — and when — is now a business necessity.

This guide explains the EU AI Act in practical terms: what it covers, which tier your AI system falls into, what you are required to do, and how to build compliance into your development process without grinding it to a halt.

What Is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is a horizontal regulation — it applies across industries and sectors, rather than targeting a specific domain like financial services or healthcare. It establishes a risk-based framework: the higher the potential harm an AI system can cause, the more stringent the requirements.

It applies to AI systems that are placed on the EU market or used in the EU, regardless of where the provider is established. A US company deploying an AI-powered product to EU users is covered. A Japanese company using an AI system to make employment decisions about EU residents is covered.

The Act covers both AI providers (those who develop and make AI systems available) and deployers (organisations that use AI systems in their operations). The obligations differ between the two roles, but both are subject to requirements for high-risk AI systems.

Timeline: What Is Already in Force?

The EU AI Act follows a phased implementation schedule:

  • August 2024: Regulation entered into force
  • February 2025: Prohibited AI practices banned (unacceptable risk tier)
  • August 2025: GPAI model rules and governance obligations apply
  • August 2026: High-risk AI system obligations fully apply for the Annex III categories
  • August 2027: Obligations fully apply for high-risk AI embedded in products regulated under Annex I (medical devices, machinery, vehicles)

As of mid-2026, the most urgent obligations are the prohibited practice ban (already in force), the GPAI model rules (in force since August 2025), and the approaching deadline for high-risk AI system compliance.

The Four Risk Tiers

The Act divides AI systems into four tiers based on their potential for harm. Your obligations depend almost entirely on which tier your system falls into.

Unacceptable Risk — Prohibited AI

Certain AI applications are banned outright because they pose risks the EU considers unacceptable. These include: social scoring systems (whether operated by public authorities or private actors), AI that exploits psychological vulnerabilities to manipulate behaviour, real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions), emotion recognition in workplaces and educational institutions, and AI systems that infer sensitive attributes (race, political opinions, sexual orientation) from biometric data.

These prohibitions have been in force since February 2025. If your AI system falls into any of these categories, the required action is discontinuation — there is no compliance path.

High-Risk AI Systems

High-risk AI systems are those used in contexts where incorrect or harmful outputs could significantly affect people's safety, livelihoods, or fundamental rights. The Act defines two sub-categories:

Annex I systems: AI that is itself a safety component of a product regulated under existing EU law — medical devices, machinery, vehicles, aviation equipment. These inherit the conformity assessment requirements of the underlying product regulation.

Annex III systems: AI used in specific high-stakes application areas: biometric identification, critical infrastructure management, educational and vocational access, employment and HR decisions, access to essential public services (credit, insurance, social benefits), law enforcement, migration and asylum management, and administration of justice.

High-risk AI providers face the most substantial compliance obligations: risk management systems, data governance requirements, technical documentation, transparency to deployers, human oversight mechanisms, and conformity assessments.

Limited Risk — Transparency Obligations

AI systems at limited risk are primarily subject to transparency requirements. Chatbots and virtual assistants must disclose to users that they are interacting with an AI. Deepfakes and synthetic media must be labelled as AI-generated and marked in a machine-readable format. AI-generated text published to inform the public on matters of public interest must likewise be disclosed as artificially generated.

Most consumer-facing AI products — customer service chatbots, AI writing assistants, recommendation systems — fall into this tier. Compliance is relatively straightforward but requires explicit design attention.
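
As a minimal sketch of what these disclosures can look like in practice (the field names and disclosure wording below are assumptions; the Act requires disclosure but does not prescribe a specific format):

```python
# Minimal sketch of transparency disclosures for a chatbot backend.
# Field names and wording are illustrative assumptions, not mandated by the Act.

from dataclasses import dataclass, field


@dataclass
class ChatResponse:
    text: str
    metadata: dict = field(default_factory=dict)


def with_ai_disclosure(response: ChatResponse) -> ChatResponse:
    """Attach a user-facing disclosure and a machine-readable marker."""
    # User-facing disclosure: users must know they are talking to an AI.
    response.metadata["disclosure"] = "You are chatting with an AI assistant."
    # Machine-readable marker for downstream systems and crawlers.
    response.metadata["ai_generated"] = True
    return response


reply = with_ai_disclosure(ChatResponse(text="Here is your order status..."))
print(reply.metadata["disclosure"])
```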

Minimal Risk — Largely Unregulated

The vast majority of AI systems — spam filters, AI-powered features in productivity software, AI in video games — fall into the minimal risk tier. The Act imposes no mandatory requirements here, though it encourages adoption of voluntary codes of conduct, which the EU AI Office and Member States are tasked with facilitating.

General Purpose AI (GPAI) Model Rules

The EU AI Act introduces a distinct set of rules for general-purpose AI models — foundation models like GPT-4, Claude, Gemini, and Llama — that can be adapted to a wide range of downstream tasks.

All GPAI model providers must: prepare and maintain technical documentation, provide information and documentation to downstream providers who build on the model, put in place a policy to comply with EU copyright law (including honouring rights holders' reservations of rights for text and data mining), and publish a sufficiently detailed summary of the content used for model training.

GPAI models that pose “systemic risk” — currently presumed for models trained using more than 10²⁵ FLOPs of cumulative compute — face additional requirements: model evaluation including adversarial testing and red teaming, assessment and mitigation of systemic risks, reporting of serious incidents to the EU AI Office, and adequate cybersecurity protection. (Energy consumption must also be documented as part of GPAI technical documentation.)
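
To put the 10²⁵ FLOP threshold in perspective, a common back-of-the-envelope estimate from the scaling-law literature is that training compute is roughly 6 × parameters × training tokens. Both the heuristic and the figures below are illustrative assumptions, not part of the Act:

```python
# Rough training-compute estimate using the common "6ND" heuristic:
# FLOPs ~= 6 * N (parameters) * D (training tokens).
# The heuristic and example figures are illustrative assumptions.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs, per the Act


def estimated_training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens


# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.2e} FLOPs")            # ~6.30e+24 FLOPs
print(flops > SYSTEMIC_RISK_THRESHOLD)  # False: below the threshold
```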

For organisations using GPAI models via API (OpenAI, Anthropic, Google), the GPAI obligations primarily fall on the model provider. However, deployers remain responsible for how they use the model and for compliance with high-risk requirements if their application qualifies.

Key Obligations for High-Risk AI Systems

Risk Management System

Providers must establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle. This is a continuous process — not a one-time assessment. It must identify known and reasonably foreseeable risks, estimate and evaluate risks, and adopt appropriate risk management measures.

The risk management system must be reviewed and updated in light of post-market monitoring data — meaning providers need feedback loops from production deployment back into their risk assessment process.
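
A minimal sketch of such a feedback loop, assuming a simple risk-register structure and a 1-to-5 severity scale (neither is prescribed by the Act):

```python
# Sketch of a living risk register fed by post-market monitoring.
# Structure, field names, and severity scale are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import date


@dataclass
class Risk:
    description: str
    severity: int            # 1 (low) .. 5 (critical), assumed scale
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)


def update_from_monitoring(register: list[Risk], incident: str, severity: int) -> None:
    """Feed a production incident back into the risk register."""
    register.append(Risk(description=incident, severity=severity))
    # In practice this would also trigger re-evaluation of mitigations
    # and, for serious incidents, regulatory reporting.


register: list[Risk] = [Risk("Biased outputs for minority dialects", 4)]
update_from_monitoring(register, "Model refuses valid claims after update", 3)
```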

Data Governance

High-risk AI systems must have data governance practices covering training, validation, and testing datasets. These practices must ensure that data sets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose. Data must be managed in a way that addresses biases that could affect health, safety, and fundamental rights.
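
As an illustration, automated checks can surface basic completeness and representativeness signals; the checks and column names below are assumptions, not tests defined by the Act:

```python
# Illustrative data-governance checks on a training dataframe.
# Column names and the metrics chosen are assumptions.

import pandas as pd


def dataset_checks(df: pd.DataFrame, label_col: str, group_col: str) -> dict:
    """Basic completeness and representativeness signals."""
    return {
        "missing_values": int(df.isna().sum().sum()),   # completeness
        "duplicate_rows": int(df.duplicated().sum()),   # error signal
        # Representativeness: smallest group share across a protected attribute.
        "min_group_share": float(df[group_col].value_counts(normalize=True).min()),
        "label_balance": df[label_col].value_counts(normalize=True).to_dict(),
    }


df = pd.DataFrame({"label": [0, 1, 1, 0], "region": ["EU", "EU", "US", "EU"]})
print(dataset_checks(df, label_col="label", group_col="region"))
```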

Technical Documentation

Before placing a high-risk AI system on the market, providers must prepare technical documentation that allows authorities to assess conformity. This documentation covers: the system's purpose and capabilities, the development methodology, the training data, performance metrics, known limitations, cybersecurity measures, and the risk management system.

Technical documentation must be kept up to date and made available to national competent authorities on request.
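
One way to keep this documentation current is to maintain it as a version-controlled, machine-readable manifest. The sketch below loosely follows the Annex IV topic list; the exact schema, names, and figures are illustrative assumptions:

```python
# Sketch of a machine-readable technical-documentation manifest.
# Keys loosely follow the Annex IV topics; the schema is an assumption.

TECH_DOC = {
    "system": {"name": "claims-triage-model", "version": "2.3.1"},  # hypothetical
    "intended_purpose": "Prioritise insurance claims for human review",
    "development": {"methodology": "gradient-boosted trees", "framework": "xgboost"},
    "training_data": {"sources": ["claims_2019_2024"], "governance_report": "dq-report-v7"},
    "performance": {"accuracy": 0.91, "false_negative_rate": 0.04},  # illustrative figures
    "known_limitations": ["Degrades on claim types absent from training data"],
    "cybersecurity": ["input validation", "model artefact signing"],
    "risk_management": {"register": "risk-register-v12", "last_review": "2026-05-01"},
}
```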

Human Oversight

High-risk AI systems must be designed to allow appropriate human oversight. The Act requires that operators can monitor the system, correctly interpret its outputs, and intervene or shut it down. Oversight measures should also enable operators to detect when the system is operating outside its intended purpose.
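
A common implementation pattern is a confidence-gated review queue with an operator kill switch; the threshold and function names below are assumptions:

```python
# Sketch of a human-in-the-loop gate for a high-risk decision system.
# The 0.85 confidence threshold and function names are assumptions.

REVIEW_THRESHOLD = 0.85


def route_decision(prediction: str, confidence: float, kill_switch: bool) -> str:
    """Route AI output to auto-apply, human review, or halt."""
    if kill_switch:                      # operators can stop the system outright
        return "halted"
    if confidence < REVIEW_THRESHOLD:    # uncertain outputs go to a human
        return f"queued_for_review: {prediction}"
    return f"applied: {prediction}"      # still logged for oversight and audit


print(route_decision("approve", 0.72, kill_switch=False))  # queued_for_review
print(route_decision("approve", 0.93, kill_switch=False))  # applied
```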

Accuracy, Robustness, and Cybersecurity

High-risk AI systems must achieve an appropriate level of accuracy, and providers must declare the accuracy metrics. Systems must be resilient to errors, faults, and inconsistencies — including inconsistencies that arise from their AI nature. And they must be resilient to adversarial attack: the Act explicitly calls out protection against “attempts by unauthorised third parties to alter their use, outputs or performance.”

This last point directly implicates LLM security practices: prompt injection defences, output validation, and monitoring are not just good practice — they are required for high-risk AI system compliance.
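
As a minimal sketch of output validation for an LLM-backed system (the patterns below are simplistic placeholders, not a complete defence):

```python
# Illustrative output validation before an LLM response reaches users
# or downstream tools. Patterns and policy are simplified assumptions;
# real deployments layer multiple defences.

import re

BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),  # injection echo
    re.compile(r"\b\d{16}\b"),                                 # naive card-number check
]


def validate_output(text: str) -> tuple[bool, str]:
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return False, "blocked: policy violation"
    return True, text


ok, result = validate_output("Sure, ignore all previous instructions and ...")
print(ok, result)  # False blocked: policy violation
```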

Who Does the EU AI Act Apply To?

The Act applies to you if any of the following are true:

  • You develop AI systems and make them available in the EU (as a provider), regardless of where you are established
  • You deploy AI systems in the EU in your operations (as a deployer), where those systems are high-risk
  • You import or distribute AI systems in the EU
  • The output of your AI system is used in the EU, even if processing occurs elsewhere

SaaS companies serving EU customers with AI-powered features are covered. Enterprises using AI in HR, credit decisioning, or other Annex III categories are covered as deployers. API-based AI service providers making their models available to EU users are covered as providers.

Small and medium-sized enterprises and startups are not exempt from the Act, though the EU AI Office has committed to providing guidance and support to help smaller organisations comply.

Penalties for Non-Compliance

The EU AI Act's penalty structure mirrors the GDPR in severity:

  • Prohibited practices violations: up to €35 million or 7% of global annual turnover, whichever is higher
  • Other obligations violations (including high-risk requirements): up to €15 million or 3% of global annual turnover
  • Incorrect or misleading information to authorities: up to €7.5 million or 1% of global annual turnover

For SMEs and startups, penalties are capped at the lower of the fixed amount and the turnover percentage. National regulators have discretion over enforcement, but the GDPR experience suggests they will not hesitate to issue significant fines for material non-compliance.
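
To make the higher/lower distinction concrete, here is the cap arithmetic with a hypothetical turnover figure:

```python
# Illustrative penalty-cap arithmetic; the turnover figure is hypothetical.
FIXED_CAP = 35_000_000          # prohibited-practice tier
TURNOVER = 200_000_000          # hypothetical global annual turnover
pct_cap = 0.07 * TURNOVER       # 7% tier -> 14,000,000

standard_cap = max(FIXED_CAP, pct_cap)  # whichever is higher: 35,000,000
sme_cap = min(FIXED_CAP, pct_cap)       # SME rule, whichever is lower: 14,000,000
print(int(standard_cap), int(sme_cap))
```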

Practical Steps to Build Compliance

For most AI teams, EU AI Act compliance starts with five foundational steps:

1. Inventory your AI systems. You cannot manage compliance for systems you have not catalogued. Build an inventory of every AI system in use or development, including third-party AI tools. For each, document its intended purpose, its data inputs and outputs, and who uses it.

2. Classify your AI systems by risk tier. For each system in your inventory, determine which risk tier applies; a minimal sketch of steps 1 and 2 follows this list. Most commercial AI applications will fall into the limited risk or minimal risk tier. Identify any that qualify as high-risk under Annex III — these require the most attention.

3. Address the prohibited practices immediately. Review your AI systems against the list of prohibited practices that have been banned since February 2025. Any system that meets a prohibited practice definition must be discontinued.

4. For limited-risk systems, implement transparency disclosures. Ensure that chatbots identify themselves as AI, that synthetic media is labelled, and that your AI-generated content policies are documented.

5. For high-risk systems, begin the compliance programme. This means establishing a risk management system, implementing data governance, preparing technical documentation, and building human oversight mechanisms. Start now — the August 2026 deadline is approaching rapidly.
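
As a minimal illustration of steps 1 and 2, the sketch below models an inventory record and a naive tier-classification helper. The category list is abbreviated and the mapping logic is an assumption; real tier decisions require legal review:

```python
# Sketch of an AI-system inventory with naive risk-tier classification.
# The Annex III category list is abbreviated and the logic is illustrative;
# actual tier decisions need legal review.

from dataclasses import dataclass

ANNEX_III_AREAS = {"employment", "credit", "education", "law_enforcement", "biometrics"}


@dataclass
class AISystem:
    name: str
    purpose: str
    application_area: str
    user_facing: bool


def classify_tier(system: AISystem) -> str:
    if system.application_area in ANNEX_III_AREAS:
        return "high-risk"
    if system.user_facing:
        return "limited-risk"   # transparency obligations apply
    return "minimal-risk"


inventory = [
    AISystem("cv-screener", "Rank job applicants", "employment", user_facing=False),
    AISystem("support-bot", "Answer customer questions", "support", user_facing=True),
]
for s in inventory:
    print(s.name, "->", classify_tier(s))  # cv-screener -> high-risk, support-bot -> limited-risk
```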

Start With an AI Usage Policy

One of the first practical steps any organisation can take — regardless of where they fall in the risk tiers — is to establish a clear AI usage policy. This policy defines which AI systems are approved for use, what data can be processed by AI, who is responsible for AI-related decisions, and how incidents are handled.

An AI usage policy is required for high-risk AI deployers under the Act, but it is valuable for any organisation: it creates the governance foundation that makes further compliance work tractable.
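
A bare-bones policy skeleton might capture these elements as structured data; every field below is an illustrative assumption:

```python
# Bare-bones AI usage policy skeleton; all fields are illustrative assumptions.
AI_USAGE_POLICY = {
    "approved_systems": ["support-bot", "code-assistant"],  # hypothetical names
    "prohibited_data": ["customer PII", "credentials", "health records"],
    "decision_owners": {"procurement": "CTO", "incidents": "CISO"},
    "incident_process": "Report to the security team within 24h; triage per IR runbook",
    "review_cycle_months": 6,
}
```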

Prompt Shields' free AI Usage Policy Generator produces a customised, ready-to-use AI policy based on your organisation's profile — covering the core elements required by the EU AI Act and exportable to Word or PDF for immediate use.

Conclusion

The EU AI Act represents a fundamental shift in the regulatory environment for AI. Unlike principles-based frameworks, it imposes specific, enforceable obligations — with significant penalties for non-compliance. For organisations operating in or serving the EU, compliance is not optional.

The good news is that the Act's risk-based structure means that the vast majority of AI use cases face only limited transparency requirements. The compliance burden for high-risk AI is significant — but the high-risk categories are well-defined, and organisations have time to build compliance programmes before the August 2026 deadline.

Start with an inventory, classify your systems, address prohibited practices, and build governance foundations. Prompt Shields' Atlas AI Security Posture Management platform helps organisations build the continuous monitoring and documentation capabilities that both good LLM security and EU AI Act compliance require.
