ISO 42001: AI Management System Standard Guide

A complete guide to ISO 42001 — the international standard for AI management systems. Covers what it requires, how it is structured, how to achieve certification, and how it relates to ISO 27001 and the EU AI Act.

ISO/IEC 42001 is the first international standard specifically designed for AI management systems. Published jointly by ISO and the International Electrotechnical Commission (IEC) in December 2023, it provides a certifiable framework that organisations can use to demonstrate responsible AI development, deployment, and governance to customers, regulators, and other stakeholders.

For organisations familiar with ISO 27001 (information security) or ISO 9001 (quality management), ISO 42001 will feel familiar in structure. It follows the same high-level structure (HLS) used across modern ISO management system standards — making integration with existing management systems considerably easier than building a standalone AI governance programme from scratch.

What Is ISO 42001?

ISO/IEC 42001:2023 specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). An AIMS is the set of policies, processes, and controls an organisation uses to manage AI responsibly across its lifecycle.

The standard addresses AI from two organisational perspectives: organisations that develop AI systems and make them available to others (providers), and organisations that deploy and operate AI systems in their own products or operations (deployers). Many organisations are both — they use third-party AI models and also develop AI-powered products — and the standard accommodates this.

ISO 42001 is designed to apply to any organisation regardless of size, sector, or the type of AI it uses. A small company using AI for internal productivity can implement it just as readily as a large enterprise building AI-powered products.

Who Does ISO 42001 Apply To?

Unlike the EU AI Act, ISO 42001 is not a regulation — there is no legal obligation to comply with it. Adoption is driven by voluntary choice, typically for one or more of the following reasons:

  • Customer requirements: Enterprise customers — particularly in regulated sectors — are increasingly requiring AI governance attestations. ISO 42001 certification provides a credible, third-party verified response.
  • Procurement advantages: Government and public sector tenders are beginning to reference ISO 42001. Certification provides a competitive advantage.
  • Regulatory alignment: ISO 42001 maps to EU AI Act requirements. Certification does not substitute for regulatory compliance, but it demonstrates a systematic approach that regulators recognise.
  • Internal governance: The standard provides a structured framework for organisations that want to govern AI responsibly but need external guidance on what that means in practice.

Structure of the Standard

ISO 42001 follows the ISO High-Level Structure (HLS), which means it shares the same clause numbering as ISO 27001, ISO 9001, and other modern ISO management system standards. Organisations already certified to those standards will find much of the structure familiar.

Clause 4: Context of the Organisation

Clause 4 requires the organisation to understand its context — internal and external factors that affect how it manages AI risk — and to identify interested parties (stakeholders) whose needs must be considered. It also requires defining the scope of the AIMS: which AI systems, business processes, and organisational units are covered.

A distinctive requirement of Clause 4 in ISO 42001 is the obligation to understand the organisation's role in the AI value chain. Is it a developer of AI models? A deployer of third-party models? An operator of AI-powered services? The answer shapes which controls apply and where responsibility lies.

Clause 5: Leadership

Top management must demonstrate commitment to the AIMS — not just sign off on it, but actively champion it. Clause 5 requires management to establish an AI policy, assign clear AI governance responsibilities, and ensure that the AIMS is integrated into the organisation's overall management system.

The AI policy required by Clause 5 must cover: the organisation's commitment to responsible AI, its approach to AI risk management, and its obligations to relevant stakeholders. This is distinct from an AI usage policy (which governs how employees use AI) — the management system policy is a strategic statement of commitment.

Clause 6: Planning

Clause 6 is where AI risk management gets operationalised at the planning level. It requires the organisation to: assess AI risks and opportunities; define AI objectives and plans to achieve them; and plan how to address risks identified in the risk assessment.

The AI risk assessment required by Clause 6 must consider both the risks that AI systems pose to others (harms to users, affected communities, society) and the risks that AI poses to the organisation itself (operational, reputational, legal). This dual perspective is one of the features that distinguishes ISO 42001 from purely security-focused frameworks.
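This dual perspective can be made concrete in a risk register schema. The sketch below is illustrative only — the standard prescribes no particular scoring model, and all field names and scores here are hypothetical — but it shows how a single register can score harm to others and harm to the organisation side by side:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a Clause 6 risk register (illustrative schema)."""
    description: str
    harm_to_others: int  # 1-5: users, affected communities, society
    harm_to_org: int     # 1-5: operational, reputational, legal
    likelihood: int      # 1-5

    def priority(self) -> int:
        # Score against the worse of the two impact perspectives, so a
        # risk that is severe for *either* party is surfaced for treatment.
        return max(self.harm_to_others, self.harm_to_org) * self.likelihood

register = [
    AIRisk("Biased outcomes in screening model", harm_to_others=5, harm_to_org=3, likelihood=3),
    AIRisk("Vendor model deprecation breaks product", harm_to_others=1, harm_to_org=4, likelihood=2),
]

for risk in sorted(register, key=AIRisk.priority, reverse=True):
    print(f"{risk.priority():>2}  {risk.description}")
```

Taking the maximum of the two impact scores (rather than, say, averaging them) reflects the standard's intent that a risk severe for affected people cannot be diluted by being cheap for the organisation.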

Clause 7: Support

Clause 7 covers the resources, competence, awareness, communication, and documented information that the AIMS requires. For AI, this includes ensuring that staff involved in AI development and deployment have the competence to do so responsibly — not just technical competence, but awareness of AI ethics, fairness, and risk.

Documented information requirements are significant under ISO 42001. The organisation must maintain records of its AI risk assessments, control implementations, audits, and management reviews. These records are what certification auditors examine.

Clause 8: Operation

Clause 8 is the largest and most technically demanding clause. It covers the operational controls that implement the risk treatment plans from Clause 6. For AI systems, this includes:

  • AI system impact assessments before deployment
  • Procurement and supplier controls for third-party AI
  • Data management controls covering training data quality, bias assessment, and data governance
  • AI system lifecycle management from development through to decommissioning
  • Transparency and explainability requirements appropriate to the system's risk level
  • Human oversight mechanisms

Many of the Clause 8 controls align directly with EU AI Act obligations for high-risk AI systems — making it more tractable to address the two together than separately.
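One common way to operationalise the controls listed above is a pre-deployment gate: no AI system goes live until each control area has been signed off. The control names below are illustrative shorthand, not identifiers from the standard:

```python
# Hypothetical pre-deployment gate covering the Clause 8 control areas.
REQUIRED_CONTROLS = [
    "impact_assessment",     # AI system impact assessment
    "supplier_review",       # third-party / procurement controls
    "data_quality_check",    # training data quality and bias assessment
    "transparency_notice",   # transparency appropriate to risk level
    "human_oversight_plan",  # human oversight mechanisms
]

def ready_to_deploy(signoffs: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (ok, missing_controls) for a proposed deployment."""
    missing = [c for c in REQUIRED_CONTROLS if not signoffs.get(c, False)]
    return (not missing, missing)
```

Returning the list of missing controls, rather than a bare boolean, gives the deployment team an actionable to-do list — and a record of the gate decision for the Clause 7 documentation requirements.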

Clause 9: Performance Evaluation

Clause 9 requires the organisation to monitor, measure, analyse, and evaluate the performance of the AIMS. This includes defining what is monitored (metrics), how it is monitored (tools and methods), when it is monitored (frequency), and who analyses the results.

Internal audits — systematic, independent assessments of whether the AIMS conforms to the standard and is effectively implemented — are required at planned intervals. Management reviews assess the overall performance of the AIMS and make decisions about changes needed.

Clause 10: Improvement

Clause 10 requires the organisation to continually improve the AIMS — responding to nonconformities (failures to meet requirements) with corrective action, and proactively identifying opportunities for improvement. The management review process (Clause 9) feeds into this by identifying areas where the AIMS is underperforming.

The Annexes: AI-Specific Controls

ISO 42001 includes two normative annexes that provide AI-specific controls and guidance. As with ISO 27001, organisations select the Annex A controls appropriate to their AI risks and record that selection — including justification for any exclusions — in a Statement of Applicability.

Annex A: AI Objectives and Policies

Annex A contains 38 controls across 9 categories covering the full AI lifecycle: AI policies, internal organisation, resources for AI systems, assessing AI systems, AI system lifecycle processes, data for AI systems, third-party AI relationships, AI system use, and documentation of AI systems.

Each control specifies what the organisation must do. For example, the controls in category A.5 require the organisation to assess the impacts of an AI system on individuals, groups, and society before deployment, and control A.7.4 requires data quality controls to ensure training and validation data is suitable for the intended purpose.

Annex B: Guidance on Controls

Annex B provides guidance on implementing each Annex A control — explaining the intent of each control and suggesting implementation approaches. It is the implementation companion to Annex A's requirements, similar in function to ISO 27002's role relative to ISO 27001.

ISO 42001 vs ISO 27001: What Is the Relationship?

ISO 27001 covers information security management. ISO 42001 covers AI management. They are complementary, not competing, standards.

Organisations certified to ISO 27001 have a significant head start on ISO 42001 — the management system infrastructure (internal audit, management review, documented information, risk assessment process) is already in place. ISO 42001 can be implemented as an extension to the existing ISO 27001 ISMS rather than as a completely separate system.

The key additions ISO 42001 requires beyond ISO 27001 are AI-specific: AI risk assessment (which goes beyond information security risk), AI impact assessment, AI lifecycle management controls, training data governance, and AI-specific transparency and explainability requirements. ISO 27001 addresses confidentiality, integrity, and availability. ISO 42001 additionally addresses fairness, accountability, transparency, and AI-specific safety.

How to Achieve ISO 42001 Certification

ISO 42001 certification is issued by accredited certification bodies — the same organisations that issue ISO 27001 and ISO 9001 certificates. The certification process follows the standard pattern for ISO management system certifications.

Step 1: Gap Assessment

Before beginning implementation, assess your current state against ISO 42001 requirements. Identify which clauses and controls are already met (typically through existing ISO 27001 or other management system infrastructure), which need to be adapted, and which need to be built from scratch.

The gap assessment should produce a prioritised implementation plan with effort estimates for each gap area. For organisations with mature ISO 27001 programmes, the gap is typically concentrated in the AI-specific Clause 8 requirements and Annex A controls.
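A gap register of this kind is simple to represent. The sketch below is a hypothetical example — the items, effort figures, and field names are illustrative — showing how gaps already covered by ISO 27001 machinery drop out, and the remaining work is ordered so that long-lead items start early in the 6–18 month implementation window:

```python
# Illustrative Step 1 gap register. "covered_by_27001" flags requirements
# already met by existing ISMS machinery (internal audit, management
# review, documented information), per the head start described above.
gaps = [
    {"clause": "8", "item": "AI impact assessment process", "effort_days": 20, "covered_by_27001": False},
    {"clause": "7", "item": "AI competence training",       "effort_days": 10, "covered_by_27001": False},
    {"clause": "9", "item": "Internal audit programme",     "effort_days": 2,  "covered_by_27001": True},
]

# Prioritise genuinely new work, largest effort estimate first.
plan = sorted(
    (g for g in gaps if not g["covered_by_27001"]),
    key=lambda g: g["effort_days"],
    reverse=True,
)
```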

Step 2: Implementation

Implement the required policies, processes, and controls identified in the gap assessment. This typically takes 6–18 months depending on organisational size, complexity, and the maturity of existing AI governance practices.

Key implementation activities include: establishing the AI management system scope, conducting AI risk assessments, implementing the Annex A controls applicable to your AI systems, creating the required documented information, and training staff on their AI governance responsibilities.

Step 3: Internal Audit

Before the external certification audit, conduct one or more internal audits to assess whether the AIMS conforms to the standard and whether it is effectively implemented. Internal audits must be conducted by auditors who are independent of the area being audited — this typically means either using a separate internal team or engaging an external consultant.

Nonconformities identified in the internal audit must be addressed before the certification audit.

Step 4: Certification Audit

The certification audit is conducted by an accredited certification body in two stages. Stage 1 is a documentation review — the auditors review the AIMS documentation to confirm it meets the standard's requirements. Stage 2 is a site audit — the auditors assess whether the AIMS is effectively implemented by interviewing staff, reviewing records, and observing processes.

If the audit finds no major nonconformities, the certification body issues an ISO 42001 certificate, typically valid for three years with annual surveillance audits.

ISO 42001 and the EU AI Act

ISO 42001 and the EU AI Act share significant overlap. Both require risk management systems, documentation, human oversight mechanisms, and lifecycle management for AI systems. ISO 42001 is widely regarded as a useful foundation for EU AI Act readiness, although it has not been adopted as a harmonised standard under the Act — those standards are being developed separately by the European standardisation bodies.

However, ISO 42001 certification does not automatically satisfy EU AI Act requirements. The EU AI Act has specific conformity assessment procedures, registration obligations, and technical documentation requirements for high-risk AI systems that go beyond what ISO 42001 requires. Organisations should treat ISO 42001 as a strong foundation that reduces the compliance effort for the EU AI Act — not as a substitute for it.

ISO 42001 and the NIST AI RMF

ISO 42001 and the NIST AI RMF are highly complementary. The NIST AI RMF's four functions (GOVERN, MAP, MEASURE, MANAGE) map closely to ISO 42001's clauses: GOVERN maps to Clauses 4–7, MAP and MEASURE map to Clause 6 (risk assessment) and Clause 8 (AI impact assessment), and MANAGE maps to Clauses 8–10.

Organisations implementing the NIST AI RMF will find that ISO 42001 certification provides third-party verification of their AI RMF implementation — useful for demonstrating AI governance maturity to customers and regulators who may not be familiar with the NIST framework.

Where to Start

For organisations beginning their ISO 42001 journey, the most important first step is establishing the GOVERN foundations: an AI policy, a clear scope, assigned responsibility, and a basic AI system inventory. Without these, the more detailed requirements of Clauses 6–10 cannot be effectively implemented.
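Even the basic AI system inventory can start as a flat list of records. The fields below are illustrative, not prescribed by the standard, but note the value-chain role field — it feeds directly into the Clause 4 scoping requirement discussed earlier:

```python
# A minimal AI system inventory record (illustrative fields).
inventory = [
    {
        "name": "support-chatbot",
        "role": "deployer",          # value-chain role: developer/deployer/operator
        "model_source": "third-party",
        "owner": "customer-success",
        "in_scope": True,            # inside the AIMS scope boundary
    },
    {
        "name": "internal-code-assistant",
        "role": "deployer",
        "model_source": "third-party",
        "owner": "engineering",
        "in_scope": False,
    },
]

# The AIMS scope statement covers exactly the in-scope systems.
in_scope = [s["name"] for s in inventory if s["in_scope"]]
```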

Prompt Shields' free AI Usage Policy Generator can produce the foundational AI policy document required by Clause 5 in minutes — covering AI objectives, responsibilities, and commitments in a format that meets the standard's requirements. The Atlas AI platform provides the AI system inventory and monitoring capabilities that Clauses 8 and 9 require.

Conclusion

ISO 42001 is the first globally recognised, certifiable standard for AI management systems. For organisations that need to demonstrate responsible AI governance to customers, regulators, and partners, it provides a credible, independently verified framework with a clear path to certification.

Its compatibility with ISO 27001 and its alignment with both the EU AI Act and the NIST AI RMF make it particularly valuable for organisations that need to satisfy multiple governance demands without building entirely separate compliance programmes for each. The investment in ISO 42001 implementation pays dividends across multiple regulatory and customer requirements.

Filed under

ISO 42001 · AI Management System · AI Governance · AI Compliance · AI Certification · AI Standards · ISO