Solutions · Model Risk Management

Examiner-ready model validation.

SR 11-7-grade model risk management with cryptographic evidence for every validation, every backtest, every production decision. Built for US banks and the model risk functions that serve them.

SR 11-7 was written for statistical models and has been stretched to cover machine learning, then LLMs. The framework still works — if the evidence does.

What SR 11-7 expects, and what AI adds

Federal Reserve SR 11-7 (and its OCC and FDIC counterparts) defines three pillars of model risk management: conceptual soundness, ongoing monitoring, and outcomes analysis. These pillars apply to AI systems, but AI complicates each of them: models retrain, drift is continuous, and decision volume is orders of magnitude higher than for traditional statistical models. Examiner expectations are evolving quickly: evidence that was acceptable for a regression model is insufficient for an LLM-based underwriting assistant.

How Veridra satisfies each pillar

Pillar 1 — Conceptual soundness and development evidence

Pre-deployment validation evidence is preserved as signed artifacts. Training data hashes, validation test results, challenger comparisons, and sensitivity analyses can all be captured as evidence through the Attest pipeline. When a model version is approved for production, the approval itself is a signed event tied to the model card.
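
As a concrete illustration, here is a minimal sketch of what a signed approval artifact could look like, using an Ed25519 key from the Python cryptography library. The field names and schema are illustrative assumptions, not Veridra's actual Attest format.

```python
# A sketch of a signed approval artifact (illustrative schema, not
# Veridra's actual Attest format). Requires the `cryptography` package.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # in practice, a protected org key

approval = {
    "model_id": "underwriting-assistant",  # hypothetical model name
    "model_version": "2.4.0",
    "training_data_sha256": hashlib.sha256(b"<training corpus>").hexdigest(),
    "validation_report_sha256": hashlib.sha256(b"<validation pack>").hexdigest(),
    "approved_by": "model-risk-committee",
    "approved_at": datetime.now(timezone.utc).isoformat(),
}

# Canonicalize the JSON before signing so verification is byte-stable.
payload = json.dumps(approval, sort_keys=True, separators=(",", ":")).encode()
signature = signing_key.sign(payload)

# A verifier recomputes the canonical payload and checks the signature;
# verify() raises InvalidSignature on any mismatch.
signing_key.public_key().verify(signature, payload)
```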

Pillar 2 — Ongoing monitoring

The Watch module runs continuous drift detection and periodic evaluation against customer-defined test batteries. Every drift event is a signed record. Every monitoring report is a signed pack. The ongoing monitoring requirement becomes a continuous stream of evidence rather than a quarterly PowerPoint.
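
For intuition, a drift check can be as simple as a Population Stability Index over a feature's reference and live distributions. The sketch below uses the common 0.2 alert threshold as an assumption, not a Veridra default, and stands in for the Watch module's richer test batteries.

```python
# A drift-check sketch using the Population Stability Index (PSI).
# The 0.2 alert threshold is a common rule of thumb, not a Veridra default.
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training-time) and a live sample."""
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    live = np.clip(live, edges[0], edges[-1])  # keep live values in range
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    live_frac = np.histogram(live, edges)[0] / len(live)
    ref_frac = np.clip(ref_frac, 1e-6, None)   # avoid log(0) on empty bins
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 50_000)  # distribution seen at training time
live = rng.normal(0.6, 1.0, 5_000)        # shifted production inputs

score = psi(reference, live)
if score > 0.2:
    # In Veridra, this event would be emitted as a signed record.
    print(f"drift event: PSI={score:.3f} exceeds threshold 0.2")
```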

Pillar 3 — Outcomes analysis

Signed decision records provide the raw material for outcomes analysis. When the model risk team tests model performance against realized outcomes, the analysis itself is a signed artifact, and every decision referenced in the analysis is verifiable back to the transparency log.
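
Verifiability back to a transparency log typically means a Merkle inclusion proof. The sketch below follows the RFC 6962 hashing convention as an assumption about the log's structure; Veridra's exact proof format is not specified here.

```python
# A sketch of verifying a decision record against a Merkle-tree
# transparency log, in the style of RFC 6962. The proof format is
# illustrative; the actual log API may differ.
import hashlib

def leaf_hash(record: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + record).digest()  # domain-separated leaf

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(record: bytes, proof: list[tuple[str, bytes]],
                     root: bytes) -> bool:
    """Walk the audit path from the leaf up to the signed tree root."""
    h = leaf_hash(record)
    for side, sibling in proof:
        h = node_hash(sibling, h) if side == "left" else node_hash(h, sibling)
    return h == root

# Four-leaf example: prove that decision-b is in the tree.
records = [b"decision-a", b"decision-b", b"decision-c", b"decision-d"]
leaves = [leaf_hash(r) for r in records]
root = node_hash(node_hash(leaves[0], leaves[1]),
                 node_hash(leaves[2], leaves[3]))
proof = [("left", leaves[0]), ("right", node_hash(leaves[2], leaves[3]))]
assert verify_inclusion(b"decision-b", proof, root)
```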

Examiner posture

What banking examiners increasingly ask

Recent Federal Reserve, OCC, and FDIC exam letters show a pattern: examiners now ask for specific decision records, not just validation documentation. They want to see that a named model, at a named time, applied a named policy to a named type of input. Veridra produces this level of evidence as the default output of the pipeline, not as a special project for an exam.
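
To make the ask concrete, the hypothetical pull below filters decision records by named model, named policy, and named time window. The DecisionRecord fields and the filter are illustrative, not Veridra's published API.

```python
# A hypothetical examiner pull; record fields are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DecisionRecord:
    model_id: str
    model_version: str
    policy_id: str
    input_class: str       # e.g. "consumer-credit-application"
    decided_at: datetime
    signature: bytes       # verifiable against the transparency log

def examiner_pull(records: list[DecisionRecord], model_id: str,
                  policy_id: str, start: datetime, end: datetime
                  ) -> list[DecisionRecord]:
    """Named model, named policy, named window: the exam-letter question."""
    return [r for r in records
            if r.model_id == model_id
            and r.policy_id == policy_id
            and start <= r.decided_at < end]
```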

Challenger models and champion-challenger evidence

SR 11-7 encourages challenger comparisons to test production model performance. Veridra supports running challenger models in parallel, with each challenger decision signed alongside the champion decision — producing a rigorous comparison corpus that satisfies the challenger evidence expectation without creating parallel audit gaps.
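
A paired champion-challenger record might look like the following sketch, which signs one artifact covering both decisions over the same input. The model names, scores, and record layout are illustrative assumptions.

```python
# A paired champion-challenger record, signed as one artifact. Model names,
# scores, and the record layout are illustrative assumptions.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def champion_score(features: dict) -> float:
    return 0.62   # stand-in for the production (champion) model

def challenger_score(features: dict) -> float:
    return 0.57   # stand-in for the candidate (challenger) model

features = {"ltv": 0.80, "dti": 0.31}
pair = {
    "input_sha256": hashlib.sha256(
        json.dumps(features, sort_keys=True).encode()).hexdigest(),
    "champion": {"model": "scorecard-v3", "score": champion_score(features)},
    "challenger": {"model": "gbm-candidate-v1",
                   "score": challenger_score(features)},
}

# One signature covers both decisions, so the comparison corpus has no
# unsigned half that could open a parallel audit gap.
key = Ed25519PrivateKey.generate()
payload = json.dumps(pair, sort_keys=True, separators=(",", ":")).encode()
record = {"pair": pair, "signature": key.sign(payload).hex()}
```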

Use, misuse, and limitations

  • Declared model purpose is part of the model card signed at deployment; deviation from declared purpose at inference time is itself a signed anomaly.
  • Input-distribution drift surfaces before the model starts producing unreliable output: the Watch module catches covariate shift early.
  • Model limitations (out-of-distribution flags, low-confidence thresholds) are enforced at the signing gateway and recorded per decision, as sketched below.
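
The sketch below illustrates how such limitation checks might run at a signing gateway. The thresholds and field names are assumptions for illustration, not Veridra's actual configuration.

```python
# A sketch of limitation checks at a signing gateway. Thresholds and
# field names are illustrative assumptions.
def gateway_check(decision: dict, model_card: dict) -> list[str]:
    """Return anomaly flags recorded alongside the signed decision."""
    flags = []
    if decision["purpose"] != model_card["declared_purpose"]:
        flags.append("purpose-deviation")      # itself a signed anomaly
    if decision["ood_score"] > model_card["ood_threshold"]:
        flags.append("out-of-distribution")
    if decision["confidence"] < model_card["min_confidence"]:
        flags.append("low-confidence")
    return flags

model_card = {"declared_purpose": "consumer-underwriting",
              "ood_threshold": 0.9, "min_confidence": 0.6}
decision = {"purpose": "consumer-underwriting",
            "ood_score": 0.95, "confidence": 0.71}
assert gateway_check(decision, model_card) == ["out-of-distribution"]
```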

Regulatory scope

Beyond the Fed, OCC, and FDIC

Bank MRM frameworks also incorporate the UK PRA's SS1/23 model risk management principles for banks, the ECB's SSM model risk expectations for EU banks, and the MAS FEAT principles in Singapore. The evidence architecture that satisfies SR 11-7 largely satisfies these parallel frameworks, with crosswalk-specific reporting. Veridra maintains the framework mappings across all of them.
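
One way to picture crosswalk-specific reporting is a mapping from shared evidence artifacts to the frameworks that draw on them. The entries below are placeholders for illustration, not Veridra's actual crosswalk.

```python
# A crosswalk sketch: which frameworks each shared evidence artifact feeds.
# The membership sets below are placeholders, not an authoritative mapping.
CROSSWALK = {
    "signed-approval-event":   ["SR 11-7", "PRA SS1/23", "ECB SSM", "MAS FEAT"],
    "drift-event-stream":      ["SR 11-7", "PRA SS1/23", "ECB SSM"],
    "signed-decision-records": ["SR 11-7", "MAS FEAT"],
}

def report_artifacts(framework: str) -> list[str]:
    """Select the evidence artifacts a given framework's report draws on."""
    return [artifact for artifact, frameworks in CROSSWALK.items()
            if framework in frameworks]

assert report_artifacts("MAS FEAT") == ["signed-approval-event",
                                        "signed-decision-records"]
```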