Model lineage, content authenticity.
For every model version, every training run, every output. Provenance that travels with the artifact — verifiable long after the model ships. Shipping Q1 2027.
Watch makes the decision defensible. Verify makes the model itself defensible — where it came from, what it was trained on, what outputs it produced, and whether any of that has been tampered with.
What Verify adds
Attest proves what a model decided at a moment in time. Verify proves which model made the decision, and how that model came to be. These are distinct properties and distinct audit questions. An examiner asking about a contested 2026 decision may return in 2029 asking which specific model version was in production that day, which training data revision it was built from, and whether the model artifact is identical to the one referenced in your documentation.
Verify answers all three. Not with links to a model registry, not with hashes in a README — with cryptographic attestation bound to the same signing infrastructure that produces your decision records.
Three components
Model card provenance
Every model version produces a signed model card that records: training run ID, dataset revision hashes, hyperparameter fingerprint, evaluation results at release, declared risk tier, and the signing key authority. Model cards are verifiable artifacts, not markdown files. When a model version changes in production, the change itself is a signed event, and the rollover between versions is a consistency proof.
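The mechanics above can be sketched in a few lines. This is a minimal illustration, not Verify's actual implementation: the field names are hypothetical, and an HMAC key stands in for the real signing key authority (a production system would use asymmetric signatures).

```python
import hashlib
import hmac
import json

# Placeholder key; Verify's real signing infrastructure is assumed, not shown.
SIGNING_KEY = b"demo-signing-key"

def sign_model_card(card: dict) -> dict:
    """Canonicalize the card deterministically, then bind a signature to it."""
    canonical = json.dumps(card, sort_keys=True, separators=(",", ":")).encode()
    return {
        "card": card,
        "digest": hashlib.sha256(canonical).hexdigest(),
        "signature": hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest(),
    }

def verify_model_card(signed: dict) -> bool:
    """Recompute the signature over the canonical form and compare."""
    canonical = json.dumps(signed["card"], sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

# Hypothetical model card with the fields named in this section.
card = {
    "model_version": "fraud-scorer-2.4.1",
    "training_run_id": "run-8812",
    "dataset_revision_hashes": ["sha256:ab12...", "sha256:cd34..."],
    "risk_tier": "high",
}
signed = sign_model_card(card)
assert verify_model_card(signed)
```

The key property is that any change to the card, however small, invalidates the signature, which is what makes the card a verifiable artifact rather than a markdown file.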
Training data attestation
One of the hardest compliance problems in AI is proving training data lineage without exposing the data itself. Verify uses hash commitments over canonicalized data manifests: the model card references commitment hashes of the training set, and selective disclosure is possible for regulatory audit without revealing proprietary or personally identifiable content. The training data is attested without being published.
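One standard way to get both a single commitment hash and selective disclosure is a Merkle tree over the manifest entries: the root goes in the model card, and an auditor can be shown one entry plus a short proof without seeing the rest. A minimal sketch, assuming manifest entries are already canonicalized byte strings (the shard names below are made up):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Single commitment hash over all manifest entries."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Sibling hashes that prove one entry without revealing the others."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_leaf(leaf: bytes, proof: list, root: bytes) -> bool:
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

manifests = [b"shard-001.json", b"shard-002.json", b"shard-003.json", b"shard-004.json"]
root = merkle_root(manifests)            # this hash goes in the model card
proof = merkle_proof(manifests, 1)       # disclose only shard-002 to the auditor
assert verify_leaf(b"shard-002.json", proof, root)
```

The auditor verifies the disclosed entry against the committed root; the undisclosed shards never leave the organization.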
C2PA content authenticity
For AI-generated content — images, documents, audio, code — Verify emits C2PA-compatible content manifests that travel with the artifact. A regulator, a journalist, or a downstream tool can inspect the manifest and determine the model, version, and decision chain that produced it. This matters most for generative use cases in media, legal, and healthcare where the provenance of output is a compliance obligation.
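The core idea of a content manifest is hash binding: the manifest records provenance claims and is tied to the exact bytes of the artifact. The sketch below is illustrative only; real C2PA manifests are embedded in the asset as signed JUMBF structures, and every field name here is hypothetical.

```python
import hashlib

def make_manifest(artifact: bytes, model_version: str, decision_chain: list) -> dict:
    """Build a provenance manifest hash-bound to one artifact's bytes."""
    return {
        "claim_generator": "verify-demo/0.1",   # hypothetical generator name
        "assertions": {
            "model_version": model_version,
            "decision_chain": decision_chain,
        },
        # Binding: any edit to the artifact breaks this match.
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
    }

def manifest_matches(artifact: bytes, manifest: dict) -> bool:
    return hashlib.sha256(artifact).hexdigest() == manifest["artifact_sha256"]

image = b"...generated image bytes..."
manifest = make_manifest(image, "imagegen-3.1", ["prompt-7741", "render-9902"])
assert manifest_matches(image, manifest)
assert not manifest_matches(image + b"x", manifest)
```

A downstream inspector recomputes the artifact hash, checks it against the manifest, and reads off the model, version, and decision chain, which is the check a regulator or journalist would run.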
Who this is for
- Banks deploying LLM-based customer-facing systems — where model versioning is an examiner concern and outputs enter customer communications that may later be disputed.
- Healthcare organizations running clinical decision support — where FDA SaMD pathways require traceable model lineage and training data governance.
- Media and legal enterprises generating AI-assisted content — where content authenticity is a downstream compliance concern (EU AI Act Article 50 transparency obligations, SAG-AFTRA content provenance rules).
- Any organization training its own models on sensitive data — where proving the training data was handled per policy, without publishing the data, is a live DPA and procurement question.
Regulatory fit
- EU AI Act Article 13 (transparency and information to users) — model card provenance directly supports the disclosure obligation.
- EU AI Act Article 50 (transparency for generative AI) — C2PA manifests are one of the named mechanisms for satisfying disclosure.
- NIST AI RMF GOVERN-1.1 and MAP-4.1 — model inventory and data provenance are direct matches to Verify outputs.
- ISO 42001 clauses on data governance — training data attestation slots into the AIMS evidence pipeline.