AI Governance

NIST AI RMF Implementation Guide 2026

A practical guide to implementing the NIST AI Risk Management Framework — covering the four core functions (Govern, Map, Measure, Manage), profiles, tiers, and how to operationalise it in your organisation.

12 min read
Risk management dashboard showing AI system assessment results mapped to NIST AI RMF functions.
AI Governance
Contents·24 sections

The NIST AI Risk Management Framework (AI RMF) is the most widely adopted voluntary framework for managing risk in AI systems. Published by the US National Institute of Standards and Technology in January 2023, it provides a structured, flexible approach to identifying, assessing, and managing AI risks that is applicable across industries, organisation sizes, and AI system types.

Unlike regulation — which prescribes specific requirements — the AI RMF is a voluntary guidance document. But that flexibility is increasingly supplemented by external pressure: US federal agencies are required to align with it, it is referenced in the EU's AI Act regulatory guidance, and it is increasingly cited in procurement requirements and enterprise AI governance programmes worldwide.

This guide explains the AI RMF in practical terms — what it requires, how its four core functions work, and how to implement it in an organisation that is serious about AI risk management.

What Is the NIST AI Risk Management Framework?

The NIST AI RMF is a voluntary framework that helps organisations manage the risks of designing, developing, deploying, evaluating, and retiring AI systems. It is intended to be applied across the full AI lifecycle and to all types of AI — from simple rule-based systems to large language models and agentic AI.

The framework is built around the idea that AI risks are different from conventional technology risks in important ways: they are often harder to anticipate, may emerge only in deployment, can affect people who are not direct users of the system, and may change over time as the AI model drifts or the operational environment changes. Effective AI risk management must therefore be continuous, not a one-time assessment.

NIST designed the framework to be used by AI actors across the entire AI lifecycle: developers, deployers, operators, evaluators, users, and affected communities. Different actors have different responsibilities, and the framework explicitly addresses this.

Why the NIST AI RMF Matters in 2026

When the AI RMF was published in 2023, AI governance was still largely aspirational for most organisations. By 2026, the landscape has shifted significantly. Enterprises face a combination of regulatory pressure (EU AI Act, state-level AI laws in the US, sector-specific AI regulations), contractual pressure (enterprise customers requiring AI governance attestations), and reputational pressure from high-profile AI failures.

The AI RMF has emerged as a de facto reference standard for organisations that need to demonstrate structured AI governance without waiting for a specific regulation to mandate it. It is compatible with ISO 42001 and maps reasonably well to EU AI Act requirements — making it a practical foundation for organisations that need to satisfy multiple regulatory and customer demands simultaneously.

Structure of the Framework

The AI RMF has two main parts. Part 1 establishes the framing and context — why AI risk management is important, what makes AI risks distinctive, and how to think about trustworthy AI. Part 2 contains the core framework itself: four functions organised into categories and subcategories, each addressing a different aspect of AI risk management.

The four core functions are GOVERN, MAP, MEASURE, and MANAGE. GOVERN is unique in that it underpins the other three — rather than being a sequential step, it describes the organisational conditions that make the other functions possible. MAP, MEASURE, and MANAGE are applied iteratively throughout the AI lifecycle.

GOVERN — Building Organisational Accountability

GOVERN is the foundational function. It addresses the organisational culture, policies, processes, and structures that enable effective AI risk management. Without GOVERN, the other three functions cannot be sustained — they become one-off activities rather than embedded practices.

Culture and Accountability

Effective AI risk management requires an organisational culture where risk awareness is valued and where concerns can be raised without fear of reprisal. Senior leadership must visibly champion AI risk management — not just as a compliance checkbox, but as a genuine business priority.

The framework calls for clear accountability: specific individuals or roles should be responsible for AI risk management outcomes. In practice, this often means establishing an AI governance committee or appointing an AI Risk Officer with cross-functional authority.

Policies, Processes, and Procedures

GOVERN requires documented policies that define how AI risk management works in the organisation: what systems require risk assessment, who is responsible at each stage, how risks are escalated, and how decisions are documented. These policies need to be reviewed and updated as the organisation's AI use evolves.

An AI usage policy is typically the starting point. It defines approved and prohibited use cases, data classification rules for AI inputs, third-party model evaluation requirements, and incident response procedures. Prompt Shields' free AI Policy Generator produces a customised policy in minutes.

Roles, Responsibilities, and Workforce

The framework requires that AI risk management responsibilities are clearly assigned and that the workforce has the training needed to fulfil them. This includes technical roles (ML engineers, security engineers, data scientists) and non-technical roles (legal, compliance, HR, procurement) — all of whom interact with AI systems in different ways and need different aspects of AI risk literacy.

MAP — Understanding AI Context and Risk

MAP is about understanding the context of an AI system thoroughly enough to identify what risks it could pose. It cannot be done once and filed away — the context and risk landscape evolve as the system is deployed and as the environment changes.

Establishing Context

Before risks can be assessed, the system's context must be understood: What is the intended purpose? What are the expected inputs and outputs? Who are the intended users? Where will it be deployed? What is the operational environment? What existing systems or processes will it interact with or replace?

Critically, context also includes what happens when the system fails or produces incorrect outputs. For a recommendation system, a wrong recommendation may be a minor inconvenience. For a system used in credit decisioning or medical diagnosis, it could cause serious harm.

Categorising AI Risks

The AI RMF groups AI risks into several categories: risks to individuals (bias, privacy violation, safety harm), risks to organisations (reputational, financial, legal), risks to society (systemic bias, erosion of trust, national security), and risks specific to the AI system itself (robustness failures, security vulnerabilities, performance degradation).

For each risk category, the MAP function asks: what is the likelihood of this risk materialising? What is the potential severity of harm? What is the breadth of impact — how many people could be affected? These three dimensions together determine the risk priority.

Identifying Stakeholders and Impacts

A distinctive feature of the AI RMF is its emphasis on affected communities — people who are not direct users of the AI system but who may be affected by its outputs. A hiring algorithm affects job applicants who never interact with the system directly. A loan scoring model affects borrowers who may not know AI was involved in the decision.

MAP requires that these affected communities are identified and that their interests are considered in the risk assessment. This is particularly important for systems that make or inform high-stakes decisions about people.

MEASURE — Analysing and Assessing Risk

MEASURE translates the risk categories identified in MAP into quantified or qualified assessments. It asks: how bad is each risk actually, and how confident are we in that assessment?

AI Risk Testing and Evaluation

Testing is central to MEASURE. This includes functional testing (does the system do what it is intended to do?), adversarial testing (can it be manipulated or broken?), and fairness evaluation (does it perform equitably across different population groups?).

For LLM-based systems, adversarial testing includes AI red teaming — systematically probing the system for prompt injection, jailbreaks, context leakage, and policy violations. This should happen before deployment and regularly thereafter.

Metrics and Measurement Plans

The framework requires that AI risk is measured, not just described. This means defining metrics — specific, measurable indicators of risk — and tracking them over time. For a classification model, metrics might include accuracy, false positive rate, and fairness metrics across demographic groups. For an LLM, they might include policy violation rate, prompt injection detection rate, and output quality scores.

Metrics need to be chosen carefully. A system can score well on narrow metrics while performing poorly on broader risk dimensions. The measurement plan should be reviewed by stakeholders who understand the real-world context, not just the technical team.

Bias, Fairness, and Explainability

MEASURE gives particular attention to bias and fairness — reflecting NIST's view that these are among the most significant societal risks of AI. Measuring bias requires defining what fairness means in the specific context of the system (demographic parity, equalised odds, individual fairness?) and measuring the system against that definition.

Explainability — the ability to explain how and why a system produced a particular output — is also addressed. The appropriate level of explainability depends on context: a recommendation for a movie requires less explanation than a decision to deny a loan.

MANAGE — Treating and Monitoring Risk

MANAGE is where risk assessment translates into action. It covers how identified risks are prioritised, treated, tracked, and responded to when they materialise.

Prioritising and Treating Risks

Not all identified risks can be addressed simultaneously. MANAGE requires a prioritisation process — using the severity and likelihood assessments from MEASURE to determine which risks get attention first. The treatment options parallel those in conventional risk management: avoid (don't deploy the system in this context), mitigate (implement controls to reduce likelihood or severity), transfer (use contractual or insurance mechanisms), or accept (document the decision and the rationale).

For AI systems, mitigation often means technical controls: input validation, output filtering, human-in-the-loop review, access controls, and monitoring. LLM security best practices provide the technical toolkit for many of these mitigations.

Ongoing Monitoring and Incident Response

AI systems change over time even when the code does not. Model drift, distribution shift, and changing user behaviour can all degrade performance or introduce new risks. MANAGE requires ongoing monitoring — not just technical performance monitoring, but risk-focused monitoring that tracks the indicators defined in MEASURE.

When monitoring detects an emerging problem — or when an incident occurs — the framework requires a defined incident response process. Who is notified? What investigation is required? What corrective actions are available? When is the system taken offline? These questions should be answered in advance, not improvised during an incident.

AI RMF Profiles

A Profile is a customised view of the AI RMF — the subset of categories and subcategories that are most relevant to a specific organisation, use case, sector, or AI system type. Profiles allow organisations to focus their AI risk management effort on what matters most for their context, rather than trying to apply every category of the framework uniformly.

NIST has published several sector-specific profiles — for generative AI (the Generative AI Profile, or AIIA 600-1), for the financial services sector, and for healthcare. Organisations can use these as starting points and adapt them to their specific situation. The EU AI Act alignment profile maps AI RMF categories to specific EU AI Act obligations — useful for organisations that need to satisfy both.

AI RMF Tiers

The AI RMF Tiers describe the extent to which an organisation's AI risk management practices are formalised, integrated, and responsive. There are four tiers:

  • Tier 1 — Partial: AI risk management is ad hoc and reactive. Policies may exist but are inconsistently applied. Risk is managed by individuals rather than organisational processes.
  • Tier 2 — Risk Informed: Risk management practices exist but are not organisation-wide. Some functions apply structured risk management; others do not. Awareness of AI risk exists but does not consistently inform decisions.
  • Tier 3 — Repeatable: AI risk management is formally defined, consistently applied, and integrated into organisational processes. Policies are organisation-wide. Risk information is shared across teams.
  • Tier 4 — Adaptive: AI risk management is continuously improved based on lessons learned, emerging risks, and changing organisational needs. The organisation actively contributes to and learns from the broader AI risk management community.

Tiers are not a maturity progression that every organisation must climb linearly — the appropriate tier depends on the organisation's risk profile. An organisation deploying AI in low-stakes consumer applications may appropriately target Tier 2. One deploying AI in healthcare or financial services should target Tier 3 or 4.

The AI RMF Playbook

Alongside the core framework, NIST published the AI RMF Playbook — a companion document that provides specific, actionable suggestions for implementing each category and subcategory of the framework. Where the framework says “establish policies for AI risk management”, the Playbook suggests specific practices, defines suggested actions, and identifies relevant resources.

The Playbook is available on NIST's website and is regularly updated. It is an essential companion to the core framework document and provides the operational detail that the framework intentionally omits in the interest of remaining broadly applicable.

NIST AI RMF and the EU AI Act

Organisations subject to both the NIST AI RMF (typically as a voluntary or procurement requirement) and the EU AI Act (as a regulatory requirement) frequently ask whether implementing one satisfies the other. The answer is: partially.

The two frameworks share significant conceptual overlap. Both require risk assessment, both emphasise human oversight, both call for documentation and transparency, and both address the full AI lifecycle. An organisation with a mature AI RMF implementation will have much of the groundwork in place for EU AI Act compliance.

The key gaps are regulatory specifics. The EU AI Act imposes specific conformity assessment procedures for high-risk AI, specific documentation formats, registration requirements in the EU AI database, and CE marking obligations — none of which are addressed in the AI RMF. Organisations need to layer EU AI Act-specific compliance activities on top of their AI RMF implementation, not assume they are equivalent.

Getting Started: A Practical Roadmap

For organisations implementing the AI RMF for the first time, a phased approach is most effective:

Phase 1 — Establish GOVERN foundations. Define your AI governance policy, assign ownership, establish an AI governance committee, and complete an inventory of all AI systems in use or development. This provides the organisational foundation without which the other functions cannot be sustained.

Phase 2 — Apply MAP to your highest-risk AI systems. Rather than trying to map every AI system at once, start with the systems that pose the greatest potential harm. Conduct context assessments, identify affected stakeholders, and categorise risks for each system.

Phase 3 — Implement MEASURE for prioritised systems. Define metrics, conduct testing (including adversarial testing where appropriate), and produce documented risk assessments for each high-priority system.

Phase 4 — Execute MANAGE. Implement controls for identified risks, establish monitoring, and define incident response procedures. Document treatment decisions and their rationale.

Phase 5 — Expand and iterate. Apply the framework to lower-risk systems, review and update assessments for deployed systems, and incorporate lessons learned into the GOVERN function.

Conclusion

The NIST AI RMF is the most mature and widely-adopted framework for AI risk management available today. Its flexibility — it is not prescriptive about specific controls or technologies — makes it applicable across the full range of AI systems and organisational contexts. Its four-function structure (GOVERN, MAP, MEASURE, MANAGE) provides a logical, iterative approach that can be implemented incrementally.

For organisations under pressure to demonstrate AI governance — whether from regulators, enterprise customers, or their own boards — the AI RMF provides a credible, internationally recognised foundation. And for those also subject to the EU AI Act or ISO 42001, it maps well enough to reduce duplication of effort significantly.

Prompt Shields' Atlas AI Security Posture Management platform is designed to operationalise many of the MEASURE and MANAGE requirements: real-time monitoring, policy violation detection, and AI system inventory. The AI Policy Generator covers the foundational GOVERN requirement of a documented AI usage policy.

Filed under

NIST AI RMFAI Risk ManagementAI GovernanceAI ComplianceAI FrameworkAI PolicyRisk Management
Get started

Read next

ISO 42001: AI Management System Standard Guide

A complete guide to ISO 42001 — the international standard for AI management systems. Covers what it requires, how it is structured, how to achieve certification, and how it relates to ISO 27001 and the EU AI Act.