Most organisations deploying AI in 2026 face pressure from multiple directions simultaneously: the EU AI Act if they operate in or sell to the EU, NIST AI RMF alignment if they sell to US federal agencies or enterprise customers that require it, and ISO 42001 certification if customers want independent attestation of AI governance maturity.
The good news is that these three frameworks share a significant common core. Building a compliance programme that satisfies all three is substantially less work than building three separate programmes. But the overlaps are not perfect, and understanding where they diverge is as important as understanding where they align.
This guide compares the EU AI Act, the NIST AI RMF, and ISO 42001 side by side — and shows how to build a single, integrated AI governance programme that addresses all three.
The Three Frameworks at a Glance
Before comparing them in detail, it helps to understand the fundamental nature of each framework — because they were designed for different purposes and by different kinds of organisations.
- EU AI Act is a regulation — mandatory law for organisations that place AI systems on the EU market or use them in the EU. It imposes specific, legally enforceable obligations with significant financial penalties for non-compliance. Its primary concern is harm prevention and fundamental rights protection.
- NIST AI RMF is a voluntary framework published by the US government. It provides structured guidance on how to manage AI risk across the AI lifecycle, without mandating specific controls or outcomes. Its primary concern is helping organisations understand and manage AI risk in a comprehensive, principled way.
- ISO 42001 is an international management system standard. It specifies requirements for an AI management system (AIMS) that organisations can be independently certified against. Its primary concern is providing a verifiable, certifiable demonstration of responsible AI governance.
In short: the EU AI Act tells you what you must do (if you are in scope), the NIST AI RMF tells you how to think about doing it well, and ISO 42001 tells you how to prove you are doing it systematically.
EU AI Act: The Regulatory Foundation
The EU AI Act establishes a risk-tiered regulatory framework covering AI systems placed on the EU market or used in the EU. Its obligations range from an outright ban on certain AI practices (such as social scoring and, with narrow exceptions, real-time remote biometric identification in publicly accessible spaces) through stringent requirements for high-risk AI systems to lightweight transparency disclosures for limited-risk systems.
For high-risk AI systems — those used in employment decisions, credit scoring, critical infrastructure, law enforcement, and similar contexts — the Act requires: a risk management system, data governance procedures, technical documentation, transparency to deployers, human oversight mechanisms, and conformity assessments. These obligations apply to providers (developers) and, to a lesser extent, deployers.
The Act also includes specific rules for general-purpose AI (GPAI) models — foundation models like GPT-4, Claude, and Gemini — including transparency requirements, copyright compliance obligations, and adversarial testing for the most powerful models.
NIST AI RMF: The Risk Management Framework
The NIST AI RMF organises AI risk management around four functions: GOVERN (building organisational accountability), MAP (understanding AI context and risk), MEASURE (analysing and assessing risk), and MANAGE (treating and monitoring risk). It is intended to be applied iteratively across the AI lifecycle, not as a one-time assessment.
The AI RMF is notable for its breadth. It addresses not only security risks (which cybersecurity frameworks cover) but also fairness, bias, transparency, explainability, privacy, and societal impact. It explicitly requires considering affected communities — people who are not direct users of an AI system but who may be harmed by its outputs.
NIST has published a companion Playbook with specific suggested actions for each framework subcategory, as well as sector-specific profiles — most notably the Generative AI Profile (NIST AI 600-1), which addresses the specific risks of LLMs and other generative AI systems.
ISO 42001: The Certifiable Management System Standard
ISO 42001 follows the ISO High-Level Structure used by ISO 27001 and ISO 9001, covering context, leadership, planning, support, operations, performance evaluation, and improvement. Its Annex A provides 38 AI-specific controls covering the full AI system lifecycle.
The key differentiator of ISO 42001 is certifiability. An accredited third-party certification body can audit an organisation's AIMS and issue a certificate confirming conformity with the standard. This third-party verification is something neither the EU AI Act nor the NIST AI RMF provides — the Act relies on self-assessment or notified body conformity assessment for high-risk systems; the AI RMF has no certification mechanism at all.
How the Three Frameworks Compare
Scope and Applicability
The EU AI Act applies to any organisation placing AI on the EU market or using it in the EU — the trigger is geographic and market-based. The NIST AI RMF applies to any organisation that chooses to use it, or is required to by a customer or government agency. ISO 42001 similarly applies voluntarily, or at customer request.
The EU AI Act focuses on specific AI system risk categories (high-risk, limited-risk). The NIST AI RMF and ISO 42001 both apply to the full range of AI systems an organisation uses or develops.
Risk Assessment Approach
All three frameworks require some form of risk assessment, but they approach it differently. The EU AI Act pre-defines which AI systems are high-risk — organisations assess risk by determining which category their system falls into. The NIST AI RMF requires organisations to perform their own risk assessment for each AI system, considering probability, severity, and breadth of harm. ISO 42001 requires risk assessment covering both risks from AI to others and risks to the organisation.
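The assessment dimensions the NIST AI RMF describes — probability, severity, and breadth of harm — combined with ISO 42001's two risk directions can be captured in a single record. The sketch below is illustrative only: the field names, 1–5 scales, and multiplicative score are assumptions, not something any of the three frameworks prescribes.

```python
from dataclasses import dataclass

# Hypothetical schema; scales and scoring formula are illustrative,
# not mandated by the EU AI Act, NIST AI RMF, or ISO 42001.
@dataclass
class AIRiskAssessment:
    system_name: str
    eu_ai_act_category: str   # e.g. "high-risk", "limited-risk", "minimal-risk"
    probability: int          # likelihood of harm, 1 (rare) to 5 (frequent)
    severity: int             # impact of harm, 1 (negligible) to 5 (critical)
    breadth: int              # people affected, 1 (few) to 5 (population-scale)
    risk_to_others: bool      # ISO 42001: risks from AI to users and communities
    risk_to_org: bool         # ISO 42001: legal, financial, reputational exposure

    def score(self) -> int:
        """Simple multiplicative score; any monotonic combination would do."""
        return self.probability * self.severity * self.breadth


assessment = AIRiskAssessment(
    system_name="resume-screening-model",
    eu_ai_act_category="high-risk",  # employment decisions fall under Annex III
    probability=3, severity=4, breadth=4,
    risk_to_others=True, risk_to_org=True,
)
print(assessment.score())  # 48
```

One record of this shape can feed all three frameworks: the category field drives EU AI Act obligations, the three-dimension score supports MAP and MEASURE, and the two risk-direction flags cover ISO 42001's Clause 6 planning.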
The NIST AI RMF is the most comprehensive in its risk taxonomy — it explicitly addresses bias, fairness, explainability, privacy, and societal harm in addition to safety and security. The EU AI Act focuses primarily on safety and fundamental rights. ISO 42001 is broadly aligned with the NIST AI RMF in risk breadth.
Documentation and Evidence Requirements
The EU AI Act has the most prescriptive documentation requirements — specific technical documentation formats for high-risk AI, registration in the EU AI database, and CE marking. These are enforceable by national regulators and must meet specific standards.
ISO 42001 requires extensive documented information — policies, risk assessments, control evidence, audit records, management review minutes — but the format is largely left to the organisation. The certification auditor assesses whether the documentation demonstrates effective implementation, not whether it follows a specific template.
The NIST AI RMF does not mandate specific documentation but strongly implies it — the Playbook suggests documenting risk assessments, treatment decisions, and monitoring results as evidence of framework implementation.
Human Oversight
All three frameworks emphasise human oversight. The EU AI Act is most specific — it requires high-risk AI systems to be designed to allow operators to understand, monitor, and intervene in the system, and to stop or pause it. The NIST AI RMF includes human oversight as a component of MANAGE. ISO 42001 addresses human oversight through its Annex A controls on the responsible use of AI systems (A.9).
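The Article 14 design goals — operators can understand, monitor, intervene in, and stop the system — translate directly into software hooks. The sketch below is one possible shape for those hooks; the class and method names are hypothetical, not taken from any framework text.

```python
# Minimal sketch of the oversight capabilities Article 14 describes.
# Names are illustrative assumptions, not a prescribed interface.
class OversightControls:
    def __init__(self, review_threshold: float = 0.8):
        self.review_threshold = review_threshold
        self.paused = False
        self.decision_log: list[tuple[str, float]] = []

    def record(self, decision: str, confidence: float) -> None:
        """Make each output visible to the operator (understand / monitor)."""
        self.decision_log.append((decision, confidence))

    def requires_review(self, confidence: float) -> bool:
        """Route low-confidence outputs to a human (intervene)."""
        return confidence < self.review_threshold

    def pause(self) -> None:
        """Let the operator stop or pause the system."""
        self.paused = True


controls = OversightControls()
controls.record("approve", 0.65)
print(controls.requires_review(0.65))  # True: below threshold, human reviews
```

Evidence that such hooks exist and are exercised — logs, review queues, pause drills — is what an ISO 42001 auditor or EU AI Act conformity assessment would look for.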
Lifecycle Coverage
All three frameworks explicitly cover the full AI system lifecycle — from design and development through deployment, operation, and decommissioning. This is a departure from conventional software security frameworks (like those focused on development-time security testing) and reflects the recognition that AI risks emerge and change throughout a system's life.
Third-Party AI and Supply Chain
All three frameworks address third-party AI risk, but with different emphasis. The EU AI Act places primary obligations on providers (developers) and requires deployers to conduct due diligence. The NIST AI RMF's MAP function explicitly addresses AI supply chain risks. ISO 42001's Annex A includes controls for third-party and customer relationships (A.10).
Where the Frameworks Overlap
The three frameworks share a substantial common core — meaning that work done to satisfy one often contributes to satisfying the others. The major overlapping areas are:
- Risk management system: All three require a systematic approach to identifying, assessing, and managing AI risks. A well-designed risk management process satisfies the EU AI Act's risk management system requirement, the NIST AI RMF's MAP and MEASURE functions, and ISO 42001's Clause 6 planning requirements.
- Data governance: All three address training data quality, data bias, and data documentation. A single data governance programme covering training data provenance, quality checks, and bias assessments serves all three.
- Human oversight mechanisms: Designing AI systems with appropriate human oversight — monitoring dashboards, intervention capabilities, shutdown procedures — satisfies requirements in all three frameworks.
- Incident response: All three frameworks require procedures for responding to AI failures and incidents. A single AI incident response playbook satisfies all three.
- AI policy and governance structures: An AI usage policy, governance committee, and clear role assignments address GOVERN (NIST AI RMF), Clause 5 (ISO 42001), and the governance obligations for deployers under the EU AI Act.
Where the Frameworks Diverge
Despite the significant overlap, there are areas where the frameworks require distinct, non-overlapping work:
- EU AI Act conformity assessment: High-risk AI systems require specific conformity assessment procedures — either self-assessment (for most Annex III categories) or third-party notified body assessment (for biometric systems). Neither the NIST AI RMF nor ISO 42001 provides or requires this.
- EU AI database registration: Providers of high-risk AI (and certain public-sector deployers) must register in the EU AI database. This is an EU AI Act-specific obligation with no equivalent in the other frameworks.
- GPAI model transparency: The EU AI Act's GPAI rules — training data summaries, copyright compliance, systemic risk assessments — are specific to the Act and not addressed by the other frameworks.
- ISO 42001 management system infrastructure: Internal audit, management review, nonconformity management, and the full documented information system required by ISO 42001 go beyond what the EU AI Act or NIST AI RMF explicitly require. This is where the certification-readiness work is concentrated.
- NIST AI RMF affected communities: The AI RMF's explicit requirement to identify and consider affected communities — people not directly using the system — goes beyond what the EU AI Act and ISO 42001 explicitly require, though it is implied by their fundamental rights and fairness provisions.
The Crosswalk: Mapping Requirements Across All Three
The table below maps the major requirement areas across the three frameworks. Where a requirement area is addressed by all three, a single implementation satisfies multiple frameworks.
- AI governance policy — EU AI Act (deployer obligations), NIST AI RMF (GOVERN), ISO 42001 (Clause 5 + Annex A.2)
- AI system inventory — EU AI Act (implied by high-risk classification), NIST AI RMF (MAP), ISO 42001 (Clause 4.3 + Annex A.9)
- AI risk assessment — EU AI Act (risk management system), NIST AI RMF (MAP + MEASURE), ISO 42001 (Clause 6.1)
- Data governance — EU AI Act (Article 10), NIST AI RMF (MAP + MEASURE), ISO 42001 (Annex A.7)
- Human oversight — EU AI Act (Article 14), NIST AI RMF (MANAGE), ISO 42001 (Annex A.9)
- Adversarial testing / red teaming — EU AI Act (accuracy and robustness, Article 15; GPAI systemic risk), NIST AI RMF (MEASURE), ISO 42001 (Annex A.6.2.4, verification and validation)
- Technical documentation — EU AI Act (Annex IV), NIST AI RMF (GOVERN + MEASURE), ISO 42001 (Clause 7.5 + Annex A.6.2.7)
- Incident response — EU AI Act (Article 73, serious incident reporting), NIST AI RMF (MANAGE), ISO 42001 (Clause 10)
- Monitoring — EU AI Act (post-market monitoring, Article 72), NIST AI RMF (MANAGE), ISO 42001 (Clause 9.1)
- Third-party AI controls — EU AI Act (deployer obligations for third-party high-risk AI), NIST AI RMF (MAP), ISO 42001 (Annex A.10)
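Kept in a machine-readable form, the crosswalk above becomes queryable: a compliance team can ask which frameworks a single implementation contributes to. The encoding below is an illustrative sketch covering three of the rows (citations abbreviated); the structure and function name are assumptions.

```python
# Illustrative encoding of part of the crosswalk; not an official mapping.
CROSSWALK: dict[str, dict[str, str]] = {
    "AI risk assessment": {
        "eu_ai_act": "risk management system",
        "nist_ai_rmf": "MAP + MEASURE",
        "iso_42001": "Clause 6.1",
    },
    "Data governance": {
        "eu_ai_act": "Article 10",
        "nist_ai_rmf": "MAP + MEASURE",
        "iso_42001": "Annex A.7",
    },
    "Incident response": {
        "eu_ai_act": "Article 73, serious incident reporting",
        "nist_ai_rmf": "MANAGE",
        "iso_42001": "Clause 10",
    },
}


def frameworks_covered(area: str) -> set[str]:
    """Which frameworks one implementation of this requirement area serves."""
    return set(CROSSWALK.get(area, {}))


print(sorted(frameworks_covered("Data governance")))
```

Extending the mapping to every row turns the crosswalk into the backbone of a single evidence register: each artefact is tagged with the framework citations it satisfies.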
Building a Single Compliance Programme
The practical implication of the crosswalk is that a well-designed AI governance programme can satisfy all three frameworks with less than triple the effort of satisfying one. The key is to design artefacts and processes that are explicitly mapped to all three frameworks from the outset.
Recommended Sequencing
For most organisations, the recommended sequencing is: start with NIST AI RMF to build the risk management foundations, layer ISO 42001 structure on top to make those foundations certifiable, then address EU AI Act specifics (conformity assessment, database registration, GPAI compliance) as a final layer.
The rationale for this sequencing: the NIST AI RMF's GOVERN function is the fastest way to establish the organisational culture and accountability structures that make everything else possible. ISO 42001's management system structure then formalises those foundations into a certifiable programme. The EU AI Act specifics — particularly for high-risk systems — require the risk management and documentation foundations to be in place before they can be executed efficiently.
Shared Artefacts That Satisfy Multiple Frameworks
The following artefacts, produced once and maintained centrally, serve requirements across all three frameworks:
- AI system inventory — required by EU AI Act (risk classification), NIST AI RMF (MAP), and ISO 42001 (scope). Build one inventory with fields for risk classification, applicable frameworks, and control status.
- AI risk assessment template — addressing probability, severity, breadth of harm, affected stakeholders, and treatment decisions. A single template can be structured to capture all required fields across frameworks.
- AI usage policy — covering approved use cases, data classification, responsibilities, and incident response. Prompt Shields' AI Policy Generator produces a policy structured to address the governance requirements of all three frameworks.
- Technical documentation package — the EU AI Act requires specific technical documentation for high-risk AI. This same documentation package satisfies ISO 42001's Annex A.6 life-cycle documentation controls and supports the NIST AI RMF's GOVERN documentation requirements.
- Adversarial testing records — red team findings, test cases, and remediation records satisfy EU AI Act robustness requirements, NIST AI RMF MEASURE subcategories, and ISO 42001 Annex A.6.2.4.
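The first artefact on the list — a single inventory with fields for risk classification, applicable frameworks, and control status — can be sketched as a small schema. The field names below are illustrative assumptions; the point is that one record answers questions for all three frameworks.

```python
from dataclasses import dataclass, field

# Hypothetical inventory schema; field names are illustrative,
# not mandated by any of the three frameworks.
@dataclass
class InventoryEntry:
    name: str
    risk_class: str                                  # EU AI Act classification
    frameworks: set[str] = field(default_factory=set)
    control_status: dict[str, str] = field(default_factory=dict)


inventory = [
    InventoryEntry("support-chatbot", "limited-risk",
                   {"nist_ai_rmf", "iso_42001"}),
    InventoryEntry("credit-scoring-model", "high-risk",
                   {"eu_ai_act", "nist_ai_rmf", "iso_42001"},
                   {"human_oversight": "implemented",
                    "conformity_assessment": "pending"}),
]

# One query serves several obligations at once: EU AI Act conformity
# scoping, NIST AI RMF MAP coverage, and ISO 42001 scope definition.
needs_conformity = [
    e.name for e in inventory
    if e.risk_class == "high-risk"
    and e.control_status.get("conformity_assessment") != "done"
]
print(needs_conformity)  # ['credit-scoring-model']
```

The same inventory filtered by `frameworks` yields the MAP coverage list for the AI RMF and the AIMS scope statement for ISO 42001 — which is precisely the "build once, serve three" point of the crosswalk.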
Tooling to Support Multi-Framework Compliance
Managing compliance across three frameworks manually — tracking requirements, mapping controls, maintaining evidence — is operationally intensive. The right tooling significantly reduces the burden.
Prompt Shields' Atlas AI Security Posture Management platform provides the continuous monitoring and AI system inventory capabilities that all three frameworks require for their ongoing management and measurement functions. It maps monitoring signals to specific framework requirements, so teams can see which compliance obligations are being met by which monitoring activities.
For the technical controls layer — input validation, output screening, prompt injection detection — the Prompt Scorer and Developer SDK provide the LLM security controls that satisfy the technical robustness and accuracy requirements of all three frameworks.
Conclusion
The EU AI Act, NIST AI RMF, and ISO 42001 are not three separate compliance problems — they are three perspectives on the same underlying challenge: governing AI responsibly. Their shared requirements are extensive, and a well-designed programme can satisfy all three without tripling the compliance effort.
The key insights from this comparison are: start with NIST AI RMF GOVERN foundations; build ISO 42001 management system structure to make those foundations certifiable; add EU AI Act specifics as a final layer; and design shared artefacts (inventory, risk assessment, policy, documentation, testing records) that explicitly address all three frameworks from the start.
Organisations that take this integrated approach will find that they are not just compliant — they are genuinely better at governing AI, which is ultimately what all three frameworks are trying to achieve.
