Signed postmortems. Replayable decisions.
When an AI system fails, you pull the signed record, replay the decision, and produce evidence of what happened and why. Incident response infrastructure built for AI-specific failure modes.
When an AI decision goes wrong, the first questions are always "when did you know, and what did you do?" Veridra produces both answers in cryptographic form.
The incident response question AI changes
Traditional incident response is oriented around systems: what infrastructure failed, when, and what the blast radius was. AI incident response adds a question that traditional runbooks do not answer well: did the model behave within its declared policy, and if not, for how long before we noticed? The parties who arrive after an AI incident — EU supervisory authorities, HHS OCR, state departments of insurance, class-action counsel — increasingly treat this question as the first one, not the last one. The answer cannot be reconstructed after the fact from logs; it has to already exist as evidence.
How Veridra changes incident response
Time-of-knowledge is cryptographic
The first drift record or policy-breach event is timestamped and signed at the moment of detection. When a regulator asks when the organization first had notice, the answer is precise to the second and admissible without additional corroboration. No debates about log retention, log format, or whether the timestamp could have been altered.
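A minimal sketch of what a timestamped, signed detection record could look like. All names here are illustrative, not Veridra's actual API, and an HMAC stands in for the asymmetric signature (e.g. Ed25519 under an HSM-held key) a production system would use:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # placeholder; a real deployment keeps this in an HSM

def sign_detection_event(event_type: str, details: dict) -> dict:
    """Timestamp and sign a drift/breach detection at the moment of detection."""
    record = {
        "event_type": event_type,
        "details": details,
        "detected_at": time.time(),  # time-of-knowledge, fixed at detection
    }
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    record["signature"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return record

def verify_detection_event(record: dict) -> bool:
    """Recompute the MAC over the canonical body and compare in constant time."""
    body = {k: v for k, v in record.items() if k != "signature"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

event = sign_detection_event("policy_breach", {"model": "risk-scorer-v3", "rule": "R-17"})
assert verify_detection_event(event)
```

Because the timestamp is inside the signed body, any later dispute about when the organization first had notice reduces to a signature check.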
Decision replay at arbitrary precision
Every affected decision during the incident window is signed and reconstructable: which model version, which policy, which input context, which output, which confidence, which downstream effects. Replay is deterministic because the canonical form was preserved. An examiner can walk through the exact sequence of decisions that preceded, coincided with, and followed the incident.
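The determinism hinges on a canonical serialization: if the recorded decision is reduced to one byte-exact form before hashing, any replaying process reconstructs the same digest. A sketch under that assumption, with hypothetical field names:

```python
import hashlib
import json

def canonical_form(decision: dict) -> bytes:
    """Key-sorted, whitespace-free JSON: byte-identical on every replay."""
    return json.dumps(decision, sort_keys=True, separators=(",", ":")).encode()

def decision_digest(decision: dict) -> str:
    return hashlib.sha256(canonical_form(decision)).hexdigest()

# Illustrative record shape covering the fields named above.
recorded = {
    "model_version": "credit-model@3.2.1",
    "policy": "lending-policy-v7",
    "input_context": {"applicant_id": "A-1001"},
    "output": {"decision": "deny", "confidence": 0.91},
}
stored_digest = decision_digest(recorded)

# At replay time the reconstructed decision must hash to the stored digest,
# regardless of dict insertion order in the replaying process.
replayed = json.loads(canonical_form(recorded))
assert decision_digest(replayed) == stored_digest
```

Any field that changed between recording and replay — model version, policy, input, output — changes the digest, so a matching digest is the replay's proof of fidelity.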
Signed postmortems enter the evidence stream
The postmortem itself is a signed artifact, published to the internal evidence trail on publication date. The timeline, root cause, contributing factors, remediations, and customer notification records are all tied to the postmortem's signature. Revisions are new signed documents that reference the original, not edits that overwrite it.
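The revision rule above is an append-only hash chain: each new signed document carries the digest of the one it supersedes. A minimal sketch, with invented function names and a bare SHA-256 digest standing in for the full signature:

```python
import hashlib
import json
import time

def digest(doc: dict) -> str:
    return hashlib.sha256(
        json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

def publish(body, revises=None):
    """Append-only: a revision carries the digest of the document it supersedes."""
    doc = {"body": body, "published_at": time.time(), "revises": revises}
    doc["digest"] = digest({k: v for k, v in doc.items() if k != "digest"})
    return doc

original = publish({"root_cause": "stale feature pipeline",
                    "remediation": "backfill"})
revision = publish({"root_cause": "stale feature pipeline and schema drift",
                    "remediation": "backfill; add schema check"},
                   revises=original["digest"])

# The original is never overwritten; an examiner can walk the chain
# from any revision back to the first published postmortem.
assert revision["revises"] == original["digest"]
```

Overwriting the original would change its digest and break every later link, which is what makes silent edits detectable.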
Customer and regulator notification becomes evidenced
Notification records — who was informed, when, with what content — are signed events. When a regulatory deadline (72-hour GDPR breach notification, state-level AI breach rules) matters, the signed timestamp of the notification is itself evidence of compliance with the deadline.
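With both the detection event and the notification event carrying signed timestamps, deadline compliance reduces to timezone-aware arithmetic. A sketch with illustrative timestamps:

```python
from datetime import datetime, timedelta, timezone

# GDPR Art. 33: notify the supervisory authority within 72 hours of awareness.
GDPR_WINDOW = timedelta(hours=72)

def within_deadline(detected_at, notified_at, window=GDPR_WINDOW):
    """Both timestamps come from signed events, so the comparison is evidence."""
    return notified_at - detected_at <= window

detected = datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc)    # signed detection event
notified = datetime(2025, 3, 3, 17, 30, tzinfo=timezone.utc)  # signed notification event
assert within_deadline(detected, notified)  # 56.5 hours elapsed, inside the window
```

The point of anchoring both ends in signed events is that neither party can relitigate the elapsed time: the interval is fixed by signatures, not by log interpretation.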
Specific AI incident classes addressed
- Model drift crossing a regulatory threshold — adverse impact rises above the disparate-impact threshold; the Watch module surfaces it; evidence of time-of-knowledge is immediate.
- Silent model swap — model version changed in production without governance process; the model hash change is itself a detected and signed anomaly.
- Policy enforcement gap — a policy was supposed to apply but was not enforced on a specific system; the mismatch is detectable and evidenceable.
- Agent-initiated action outside scope — an agent called a tool outside declared permissions; the rejection (or, worst case, the breach) is a signed event.
- Prompt injection at scale — pattern of inputs successfully extracting protected information; evaluation batteries surface the pattern quickly.
- Fine-tune or adapter divergence — a downstream model variant is behaving differently from the base model's declared properties; the Verify module catches the lineage inconsistency.
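The silent-swap class above illustrates why the detection is mechanical: the governance process records a fingerprint of the approved artifact, and any byte-level change in production fails the comparison. A sketch with invented names; in production the bytes would be read from the deployed artifact:

```python
import hashlib

def model_fingerprint(artifact_bytes: bytes) -> str:
    """SHA-256 over the serialized model artifact as deployed."""
    return hashlib.sha256(artifact_bytes).hexdigest()

# Recorded at approval time by the governance process.
governed_hash = model_fingerprint(b"approved-weights-v1")

# Later, in production: a swap changed the artifact without governance.
deployed_hash = model_fingerprint(b"swapped-weights")

# The mismatch is itself the anomaly — signed and timestamped on detection.
assert deployed_hash != governed_hash
```

The same comparison also closes the policy-enforcement-gap class: if the hash of the policy attached to a system does not match the hash of the policy that was supposed to apply, the mismatch is detectable by the identical mechanism.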