
NIST AI Risk Management Framework

The NIST AI Risk Management Framework (AI RMF 1.0), published by the National Institute of Standards and Technology in January 2023, provides a structured, flexible approach to managing risks associated with AI systems throughout their lifecycle. The AI RMF was developed with broad public and private sector input and is referenced in US executive orders, financial regulatory guidance, and global AI governance standards.

The AI RMF is organised around four core functions: GOVERN, MAP, MEASURE, and MANAGE. Each function contains categories and subcategories with suggested actions. Unlike the EU AI Act, the AI RMF is voluntary — but adoption is increasingly expected by regulated industries and federal contractors.

The NIST AI RMF Playbook provides suggested actions for each category. This section focuses specifically on how VeriProof supports the subcategories where production observability plays a role.


Framework Structure

GOVERN
└── Policies, accountability structures, culture, and oversight
MAP
└── Operational context, risk identification, stakeholder analysis
MEASURE
└── Quantitative and qualitative assessment of identified risks
MANAGE
└── Prioritisation, response, monitoring, and continuous improvement

The four functions are not sequential — they operate concurrently and feed each other. A mature AI governance programme runs all four functions continuously throughout the system’s operational lifetime.


VeriProof’s Coverage

VeriProof provides infrastructure that directly supports MEASURE and MANAGE, and contributes evidence relevant to GOVERN and MAP.
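As a rough sketch of the coverage described above, the relationship can be encoded as a simple mapping. The structure and labels below are illustrative only, not part of any VeriProof API:

```python
# Illustrative mapping of NIST AI RMF functions to the support level
# described above. "direct" = VeriProof provides the infrastructure itself;
# "evidence" = VeriProof output informs the function but does not implement it.
RMF_COVERAGE = {
    "GOVERN": "evidence",
    "MAP": "evidence",
    "MEASURE": "direct",
    "MANAGE": "direct",
}

def directly_supported():
    """Return the RMF functions VeriProof supports with its own infrastructure."""
    return sorted(f for f, level in RMF_COVERAGE.items() if level == "direct")
```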


Relationship to EU AI Act

The EU AI Act and NIST AI RMF are complementary:

| Dimension | EU AI Act | NIST AI RMF |
| --- | --- | --- |
| Type | Mandatory regulation | Voluntary framework |
| Geography | EU market | Global, US-centric |
| Risk classification | Prescriptive tiers (prohibited, high-risk, limited-risk, minimal) | Flexible risk characterisation |
| Audit requirements | Conformity assessment, technical documentation | Internal practices, voluntary attestation |
| Production monitoring | Required for high-risk systems (Articles 9, 17) | Recommended (MEASURE 3.3, MANAGE 1.3) |

If you’re building for both EU compliance and US regulatory expectations, a single VeriProof configuration can serve both frameworks: the session capture, governance scoring, and evidence export that satisfy EU AI Act Articles 9, 11, and 17 also address the NIST AI RMF MEASURE and MANAGE subcategories.


Getting Started with AI RMF

  1. Review the GOVERN function first — Establish the organisational policies and risk tolerance statements before configuring monitoring thresholds
  2. Map your deployment context (MAP function) — Document what the system does, who uses it, and what risks have been identified
  3. Configure MEASURE — Set up governance scoring dimensions that correspond to your identified risks
  4. Activate MANAGE — Configure alert rules and establish an incident response procedure
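The four steps above can be captured in a single configuration sketch. Every field name, dimension, and threshold below is a hypothetical illustration, not VeriProof's actual configuration schema:

```python
# Hypothetical configuration sketch mirroring the four onboarding steps.
# Field names and thresholds are illustrative assumptions, not VeriProof's schema.
ai_rmf_setup = {
    # Step 1 - GOVERN: organisational policy and risk tolerance come first
    "govern": {
        "risk_tolerance": "low",
        "accountable_owner": "ai-governance-board",
    },
    # Step 2 - MAP: deployment context and identified risks
    "map": {
        "system_purpose": "customer support assistant",
        "identified_risks": ["hallucination", "pii_leakage"],
    },
    # Step 3 - MEASURE: one scoring dimension per identified risk
    "measure": {
        "scoring_dimensions": ["hallucination", "pii_leakage"],
    },
    # Step 4 - MANAGE: alert rules and incident response tied to the same risks
    "manage": {
        "alert_rules": [
            {"dimension": "hallucination", "threshold": 0.7},
            {"dimension": "pii_leakage", "threshold": 0.9},
        ],
        "incident_procedure": "runbook-ai-001",
    },
}

def measure_covers_mapped_risks(cfg):
    """Check that every risk identified in MAP has a MEASURE scoring dimension."""
    return set(cfg["map"]["identified_risks"]) <= set(cfg["measure"]["scoring_dimensions"])
```

The check at the end reflects the intent of the sequence: risks documented in MAP should each have a corresponding MEASURE dimension before MANAGE alerting is activated.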

This sequence mirrors the recommended starting point in the NIST AI RMF Playbook.


Generating an AI RMF Evidence Package

To generate a NIST AI RMF evidence package, open Compliance → Evidence Exports in the Customer Portal. Select NIST AI RMF as the framework, choose the functions to include (GOVERN, MAP, MEASURE, MANAGE), set the report period, and click Download Evidence Pack (PDF). The package includes governance policy configuration, alert rule inventory, score distributions, trigger history, and the attestation document confirming the data was generated in a trusted execution environment.
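For teams scripting exports rather than using the portal, the request the flow above assembles might look like the following. This is a hypothetical sketch; the field names and the `build_evidence_request` helper are assumptions, not a documented VeriProof API:

```python
# Hypothetical sketch of the evidence-export request assembled by the
# portal flow described above. Field names are illustrative assumptions.
from datetime import date

def build_evidence_request(functions, start, end):
    """Assemble an export request for a NIST AI RMF evidence package."""
    valid = {"GOVERN", "MAP", "MEASURE", "MANAGE"}
    unknown = set(functions) - valid
    if unknown:
        raise ValueError(f"unknown AI RMF functions: {sorted(unknown)}")
    return {
        "framework": "NIST_AI_RMF",
        "functions": sorted(functions),
        "period": {"start": start.isoformat(), "end": end.isoformat()},
        "format": "pdf",
        # Package contents listed in the docs: policy configuration, alert
        # rule inventory, score distributions, trigger history, attestation.
        "include_attestation": True,
    }

req = build_evidence_request(["MEASURE", "MANAGE"], date(2024, 1, 1), date(2024, 3, 31))
```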

