Trust · NIST AI RMF

NIST AI RMF, function by function.

Voluntary in most US contexts, effectively mandatory for federal deployment under OMB M-24-10, and rapidly becoming the de facto structure for enterprise AI risk management programs. Veridra covers all four functions with evidence-producing primitives.

NIST AI RMF is structurally sound and deliberately framework-agnostic. The gap most organizations run into is the distance between the framework's language and the evidence an auditor wants to examine.

What RMF 1.0 actually expects

The NIST AI Risk Management Framework organizes AI risk work into four functions: GOVERN (establish risk culture and accountability), MAP (understand context and AI capabilities), MEASURE (assess trustworthiness), and MANAGE (allocate resources to identified risks). Each function breaks into categories and subcategories with specific outcomes the organization should achieve. Unlike the EU AI Act, the framework carries no fines — but federal procurement, state regulators, and many enterprise customers now reference it in security reviews and contracts.

How Veridra covers each function

GOVERN — policies, accountability, and culture

GOVERN categories include risk management policies, accountability structures, workforce competencies, and supply chain considerations. Veridra supports GOVERN through the Govern module (AI system inventory, risk tiering, policy-as-code) and the Attest pipeline (every policy enforcement is a signed event). Evidence that accountability structures exist is easy; evidence that they are enforced is hard — Veridra produces the hard kind.
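Veridra's event schema and signing scheme are not documented here, but the "enforced, not just declared" distinction can be sketched. The following hypothetical Python uses HMAC-SHA256 as a stand-in signer; the field names and `SIGNING_KEY` are illustrative assumptions, not Veridra's actual API.

```python
import hashlib
import hmac
import json
import time

# Illustrative key only — a real deployment would use a managed signing key.
SIGNING_KEY = b"demo-key-not-for-production"

def signed_enforcement_event(policy_id: str, system_id: str, decision: str) -> dict:
    """Produce a tamper-evident record of one policy enforcement."""
    event = {
        "policy_id": policy_id,
        "system_id": system_id,
        "decision": decision,      # e.g. "allow" or "deny"
        "timestamp": time.time(),
    }
    payload = json.dumps(event, sort_keys=True).encode()
    event["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return event

def verify_event(event: dict) -> bool:
    """An auditor recomputes the signature to confirm the record is intact."""
    body = {k: v for k, v in event.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(event["signature"], expected)
```

The point of the sketch: a policy document proves intent; a verifiable per-enforcement record proves operation, which is what GOVERN evidence reviews increasingly ask for.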

MAP — context and capability understanding

MAP categories require understanding AI system purpose, capabilities, limitations, and anticipated use cases. Model cards (and their signed provenance via Verify module, Q1 2027) are the primary artifact for MAP evidence. Use/misuse boundaries are declared in the risk register and enforced at the signing gateway; deviation is detectable.

MEASURE — trustworthiness assessment

MEASURE categories cover identifying appropriate methods, evaluating AI systems for trustworthy characteristics, tracking metrics over time, and gathering feedback. The Watch module runs the scheduled evaluations and drift detection that MEASURE assumes are happening. Every eval run is a signed record. Metrics tracking is continuous, not episodic.
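To make the drift-detection half of MEASURE concrete, here is a minimal sketch of one common drift statistic, the Population Stability Index (PSI), comparing a baseline sample against a live sample. This is a generic technique, not Veridra's documented method; thresholds like 0.2 are conventional heuristics.

```python
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.
    A common heuristic treats PSI > 0.2 as significant drift."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0   # guard against a zero-width range

    def hist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Smooth zero buckets so the log term stays finite.
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    e, o = hist(expected), hist(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))
```

Run on a schedule with each result emitted as a signed record, a statistic like this turns "assess trustworthiness over time" from a quarterly project into a continuous stream.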

MANAGE — risk treatment and response

MANAGE categories address prioritizing risks, responding to them, documenting responses, and managing third-party risks. Incident records and remediation evidence are first-class signed events. Third-party risk (including the RMF's emphasis on sub-component AI) maps to the Agents module work and the supply chain controls in our SECURITY.md.

Where MEASURE breaks down for most teams

The assessment-cadence problem

RMF MEASURE assumes continuous or regular assessment of AI trustworthiness. Most enterprise AI programs do this quarterly at best, often annually. That cadence is insufficient for systems whose distributions drift monthly and whose incident exposure is daily. The Watch module reframes MEASURE as a continuous evidence stream, not a periodic project. This is the single biggest uplift we provide against a well-intentioned but under-evidenced RMF program.

Generative AI Profile

NIST published a Generative AI Profile in July 2024 with specific subcategories for generative systems. Veridra's architecture addresses the profile's additions: content provenance (Verify module), prompt and response evidence (signed decision records with input/output hashes), value-chain risks (model lineage), and mitigation of anticipated adverse impacts (policy enforcement at the signing gateway).
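The "input/output hashes" idea can be sketched as a hash-chained decision record. The field names below are illustrative assumptions, not Veridra's actual schema; the sketch shows why storing digests rather than raw text still yields usable provenance evidence.

```python
import hashlib
import json

def decision_record(model_id: str, prompt: str, response: str, prev_hash: str) -> dict:
    """Hypothetical per-interaction evidence record for a generative system."""
    record = {
        "model_id": model_id,
        # Digests, not raw content: the record proves *which* prompt and
        # response were seen without retaining the text itself.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "prev": prev_hash,  # hash-chaining makes silent deletion detectable
    }
    record["self"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record
```

Each record commits to its predecessor, so removing or altering any interaction in the chain breaks every later `self` hash.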

Federal deployment specifics

  • OMB M-24-10 — federal agencies must align AI governance with NIST AI RMF. Veridra's output structure is natively aligned.
  • EO 14110 — references NIST RMF for safe, secure, trustworthy AI practices; Veridra's evidence pipeline is compliant by construction.
  • Federal Information Security Modernization Act (FISMA) — NIST 800-53 Rev 5 controls are inherited from Veridra's security posture; AI-specific RMF evidence flows through the same substrate.
  • Agency-specific guidance — HHS, DOT, DOD each have their own AI governance documents referencing NIST RMF. Veridra's framework crosswalks cover the named subcategories.

Crosswalk economics

Why one integration covers many frameworks

NIST RMF subcategories map to EU AI Act articles, to ISO 42001 controls, and to SR 11-7 pillars. The underlying evidence — signed decisions, signed drift events, signed oversight records — is the same across all. Organizations pay the integration cost once and inherit framework coverage that compounds. This is the reason enterprises with AI exposure across jurisdictions increasingly choose an evidence-first architecture over a framework-first one.
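The compounding-coverage claim reduces to a many-to-many mapping from evidence types to framework requirements. The sketch below shows the shape of that mapping; the specific subcategory and article identifiers are illustrative examples, not an authoritative crosswalk.

```python
# One evidence type satisfies requirements in several frameworks at once.
# Identifiers are examples of the mapping shape, not a vetted crosswalk.
CROSSWALK: dict[str, dict[str, list[str]]] = {
    "signed_drift_event": {
        "nist_ai_rmf": ["MEASURE 2.x"],
        "eu_ai_act": ["Art. 72 (post-market monitoring)"],
        "iso_42001": ["A.6 (AI system life cycle)"],
    },
    "signed_oversight_record": {
        "nist_ai_rmf": ["GOVERN 3.x"],
        "eu_ai_act": ["Art. 14 (human oversight)"],
        "sr_11_7": ["Governance pillar"],
    },
}

def frameworks_covered(evidence_types: set[str]) -> dict[str, set[str]]:
    """Collect every framework requirement satisfied by the evidence on hand."""
    out: dict[str, set[str]] = {}
    for ev in evidence_types:
        for framework, refs in CROSSWALK.get(ev, {}).items():
            out.setdefault(framework, set()).update(refs)
    return out
```

Because the mapping fans out from evidence rather than from frameworks, producing one new evidence type extends coverage in every mapped framework simultaneously — the one-integration economics described above.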