Trust · Responsible AI

How we build the assurance layer.

Veridra functions as the evidence layer for other people's AI systems. We prioritize attestations that withstand adversarial scrutiny in legal and regulatory contexts.

Four operating principles guide every architectural decision: defensibility, auditability, human accountability, and appropriate automation.

Four principles

Defensibility

Signed attestations must survive courtroom and regulatory challenges years after issuance. Implementation includes non-repudiable signatures, witnessed transparency logs, append-only schemas, and customer-held key management. A record signed today must be verifiable in a decade.
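
As a concrete sketch of those mechanics, the snippet below signs a hash-chained record with Ed25519. The field names, the simplified canonical form, and the key handling are illustrative assumptions, not Veridra's actual schema:

```python
# Illustrative sketch only: field names, key handling, and the simplified
# canonical form are assumptions, not Veridra's actual schema.
import hashlib
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def canonical(obj) -> bytes:
    # Simplified canonical JSON; full RFC 8785 (JCS) additionally fixes
    # number and string serialization rules.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()


def sign_attestation(key: Ed25519PrivateKey, payload: dict, prev_hash: str) -> dict:
    body = {
        "payload": payload,
        "prev": prev_hash,            # hash chain makes the log append-only
        "issued_at": int(time.time()),
    }
    return {
        "body": body,
        "hash": hashlib.sha256(canonical(body)).hexdigest(),
        "sig": key.sign(canonical(body)).hex(),  # non-repudiable signature
    }


customer_key = Ed25519PrivateKey.generate()      # customer-held key material
record = sign_attestation(
    customer_key,
    {"decision": "loan-1234", "outcome": "approved"},
    prev_hash="0" * 64,                          # genesis entry
)
```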

Auditability

Every claim the platform makes about a decision has to be independently verifiable without our cooperation. Verification tools are open source, canonicalization follows RFC 8785 (the JSON Canonicalization Scheme), and customers can exit the platform and still verify historical attestations using tools that don't require a Veridra account.
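
A sketch of what "verifiable without our cooperation" means in practice: the check below needs only the record and the signer's public key, makes no network calls, and reuses the canonical form from the signing sketch above. Field names remain illustrative assumptions:

```python
# Offline verification sketch: anyone holding the record and the signer's
# public key can check it; no Veridra account or API involved.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def canonical(obj) -> bytes:
    # Same simplified canonical form as the signing sketch above.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()


def verify_attestation(pub: Ed25519PublicKey, record: dict) -> bool:
    body = canonical(record["body"])
    if hashlib.sha256(body).hexdigest() != record["hash"]:
        return False                     # body or hash was tampered with
    try:
        pub.verify(bytes.fromhex(record["sig"]), body)
        return True                      # signature checks out
    except InvalidSignature:
        return False
```

Run against the record from the previous sketch, verify_attestation(customer_key.public_key(), record) returns True.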

Human accountability

Material-impact decisions require named human owners rather than abstract system ownership. Oversight under Article 14 of the EU AI Act routes to identified reviewers, and approval actions are themselves signed evidence. "The AI did it" is not a defensible answer to a regulator; our platform ensures it's never the operator's answer either.
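
Reusing sign_attestation from the defensibility sketch, an approval can itself be recorded as signed evidence, bound to a named reviewer and to the specific decision it covers. The identities and event fields here are, again, illustrative:

```python
# Hypothetical approval event, reusing sign_attestation from the
# defensibility sketch: the approval is signed evidence in its own right.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

reviewer_key = Ed25519PrivateKey.generate()   # held by the named reviewer

approval = sign_attestation(
    reviewer_key,
    {
        "event": "human_approval",
        "decision_ref": record["hash"],       # binds approval to one decision
        "reviewer": "j.doe@acme.example",     # a named owner, not "the system"
        "basis": "EU AI Act Article 14 oversight review",
    },
    prev_hash=record["hash"],                 # chain continues unbroken
)
```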

Appropriate automation

Safe processes are automated; consequential decisions demand human judgment. Risk determinations remain customer-controlled. The platform enforces technical mechanisms (cryptography, logging) while remaining deliberately neutral on customer risk preferences.
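
One way to picture that division of labor: the routing policy below is the kind of artifact a customer would author and own, while the platform only enforces what the policy says. The decision classes and reviewer names are hypothetical:

```python
# Hypothetical customer-authored policy: which decision classes run
# unattended and which route to a human queue is the customer's call;
# the platform only enforces the routing, signing, and logging.
ROUTING_POLICY = {
    "password_reset": {"automation": "full", "reviewer": None},
    "credit_limit": {"automation": "human_review", "reviewer": "credit-ops"},
    "loan_denial": {"automation": "human_review", "reviewer": "credit-ops"},
}


def route(decision_class: str) -> str:
    rule = ROUTING_POLICY[decision_class]
    if rule["automation"] == "full":
        return "auto"                    # safe process, no human in the loop
    return f"queue:{rule['reviewer']}"   # consequential, needs human judgment


assert route("password_reset") == "auto"
assert route("loan_denial") == "queue:credit-ops"
```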

What we do not do

  • Customer attestation data is excluded from model training.
  • Explainability is sourced from customers, not internally generated scores that would falsely imply insight into their models.
  • Corrections are logged as new signed events; records are never silently modified (see the sketch after this list).
  • Regulatory determinations remain customer responsibility — we produce evidence, we don't adjudicate compliance.
  • Features launch only when failure modes can be explained to regulators, not when they merely pass internal QA.
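
To make the correction bullet concrete: a correction reuses sign_attestation from the defensibility sketch and references the superseded record by hash, so history is preserved rather than rewritten. The event fields are illustrative:

```python
# Illustrative correction event: the original record is never mutated;
# the correction names it by hash and is appended as a new signed event.
correction = sign_attestation(
    customer_key,
    {
        "event": "correction",
        "supersedes": record["hash"],    # points at the record being corrected
        "reason": "upstream data entry error",
        "payload": {"decision": "loan-1234", "outcome": "declined"},
    },
    prev_hash=record["hash"],            # chain continues; nothing is erased
)
```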

Internal AI use at Veridra

Internal AI tooling (code assistance, documentation, support triage) is registered in the company's own Govern instance and held to the same oversight standards as customer-facing systems. Any AI that touches customer data is customer-dedicated or disabled by default; we don't train our own models on the evidence we sign for you.

Evidence, not adjudication

What we produce vs. what we don't

The platform enforces technical mechanisms: signing, canonicalization, append-only logging. It does not decide what counts as fair, acceptable, or compliant for your jurisdiction and your use case. Those judgments are yours, signed under your name, backed by evidence we preserve faithfully.
Our own audit surface

We sign the platform that signs your decisions

Veridra's own code, deployments, and operator actions are logged in the same transparency log architecture we sell to customers. Every break-glass action, every key rotation, every admin tool invocation is a signed event. When we claim operational discipline, we produce the evidence that backs it, the same way we ask you to.
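
A minimal sketch of "the same architecture for ourselves", assuming a hash-chained log of signed operator events. The class shape, event types, and actors are illustrative, not our production design:

```python
# Illustrative hash-chained transparency log for operator actions;
# structure, event types, and actors are assumptions for the sketch.
import hashlib
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class TransparencyLog:
    def __init__(self, operator_key: Ed25519PrivateKey):
        self.key = operator_key
        self.entries: list[dict] = []
        self.head = "0" * 64                     # genesis hash

    def append(self, event: dict) -> dict:
        body = json.dumps(
            {"event": event, "prev": self.head, "ts": int(time.time())},
            sort_keys=True,
            separators=(",", ":"),
        ).encode()
        entry = {
            "body": body.decode(),
            "hash": hashlib.sha256(body).hexdigest(),
            "sig": self.key.sign(body).hex(),    # every operator action signed
        }
        self.head = entry["hash"]                # tamper-evident chaining
        self.entries.append(entry)
        return entry


log = TransparencyLog(Ed25519PrivateKey.generate())
log.append({"type": "key_rotation", "actor": "ops@veridra.example"})
log.append({"type": "break_glass", "actor": "oncall@veridra.example", "ticket": "INC-42"})
```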