EU AI Act alignment, article by article.
August 2026 brings Article 6 and Annex III high-risk obligations into force. Veridra is architected to produce the specific evidence each article requires — not aspirational alignment, operational evidence.
The EU AI Act is the first comprehensive AI regulation with binding obligations, supervisory authorities, and fines reaching 7% of global annual turnover. The specific obligations under Annex III take effect August 2, 2026.
What the Act actually requires of high-risk systems
The AI Act does not treat all AI the same. Prohibited practices (Article 5) are banned. Limited-risk systems (chatbots, generative outputs) face transparency obligations. High-risk systems — those listed in Annex III, including AI in employment, credit scoring, insurance pricing, critical infrastructure, education, law enforcement, and migration — face the substantive obligations that Veridra is built to support. The six most operationally demanding articles are below, with specific notes on how our architecture produces evidence for each.
Article 9 — Risk management system
Providers of high-risk AI must establish, document, maintain, and update a continuous risk management system across the model lifecycle. This includes identifying foreseeable risks, evaluating them against realized outcomes, and adopting mitigations.
How Veridra produces the evidence: the risk register and model classification are part of your governance posture in the Govern module. Mitigations are enforced as policy-as-code at the signing gateway. Every decision the system makes is signed with the risk tier in effect, the policy version applied, and the mitigation status. The risk management system can therefore be evidenced per decision, not only per quarterly review.
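To make "evidence per decision" concrete, here is a minimal sketch in Python. The field names, signing scheme, and sign_decision helper are illustrative assumptions, not Veridra's actual API; a real deployment would use managed keys rather than an inline secret.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key-not-for-production"  # placeholder; real key management not shown

def sign_decision(decision_id: str, outcome: str, risk_tier: str,
                  policy_version: str, mitigations: list[str]) -> dict:
    """Attach the governance context in effect to a decision, then sign the whole record."""
    record = {
        "decision_id": decision_id,
        "outcome": outcome,
        "risk_tier": risk_tier,            # Article 9: risk classification in effect
        "policy_version": policy_version,  # the policy-as-code version applied
        "mitigations": mitigations,        # mitigation status at decision time
        "timestamp": time.time(),
    }
    canonical = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return record
```

Because the risk tier and policy version travel inside the signed payload, an auditor can verify which governance context applied to any single decision without reconstructing it from separate systems.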
Article 12 — Record-keeping
High-risk systems must automatically record events ("logs") relevant to identifying risks, substantial modifications, and post-market monitoring. Records must be retained for an appropriate period considering the intended purpose.
How Veridra produces the evidence: every decision is a signed record. Substantial modifications — model version changes, policy updates, scope changes — are themselves signed events in the record stream. Retention is 7 years by default, configurable longer. The records are cryptographically tamper-evident, satisfying not only the letter of Article 12 but also the forensic scrutiny a supervisory authority will actually apply.
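One generic way to achieve tamper evidence, sketched below, is a hash chain: each record commits to the hash of the one before it, so editing any record invalidates everything after it. This illustrates the property, not Veridra's actual storage format.

```python
import hashlib
import json

GENESIS = "0" * 64  # illustrative starting value

def chain_records(records: list[dict]) -> list[dict]:
    """Link records so each one commits to its predecessor's hash."""
    prev_hash, chained = GENESIS, []
    for rec in records:
        body = dict(rec, prev_hash=prev_hash)
        prev_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        chained.append(dict(body, record_hash=prev_hash))
    return chained

def verify_chain(chained: list[dict]) -> bool:
    """Recompute every link; a single altered record breaks the chain from that point on."""
    prev_hash = GENESIS
    for rec in chained:
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        if body.get("prev_hash") != prev_hash:
            return False
        prev_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["record_hash"] != prev_hash:
            return False
    return True
```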
Article 14 — Human oversight
High-risk systems must be designed and developed to enable effective oversight by natural persons, including the ability to intervene, interrupt, and overrule. The oversight must be proportionate to the risk.
How Veridra produces the evidence: human-review gates and approval chains are configured per system, per threshold. Every human intervention — approval, override, rejection — is a signed event in the decision chain. The evidence shows not only that oversight existed but exactly which human, at what time, with what reasoning.
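A sketch of what a signed intervention event could carry is below. The event shape, gate threshold, and record_intervention helper are hypothetical, chosen to show the three facts Article 14 evidence needs: which human, at what time, with what reasoning.

```python
import hashlib
import json
import time
from dataclasses import asdict, dataclass
from typing import Callable

@dataclass
class Intervention:
    decision_id: str
    action: str       # "approve" | "override" | "reject"
    reviewer: str     # which human
    reasoning: str    # with what reasoning
    timestamp: float  # at what time

def requires_review(confidence: float, threshold: float = 0.85) -> bool:
    # Hypothetical per-system gate: low-confidence decisions route to a human.
    return confidence < threshold

def record_intervention(decision_id: str, action: str, reviewer: str,
                        reasoning: str, sign: Callable[[bytes], str]) -> dict:
    event = asdict(Intervention(decision_id, action, reviewer, reasoning, time.time()))
    event["signature"] = sign(json.dumps(event, sort_keys=True).encode())
    return event

# Placeholder signer for illustration; a real deployment would use proper key material.
demo = record_intervention("dec-123", "override", "j.alvarez",
                           "score contradicts submitted documents",
                           sign=lambda b: hashlib.sha256(b).hexdigest())
```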
Article 15 — Accuracy, robustness, and cybersecurity
High-risk systems must be designed to achieve an appropriate level of accuracy, robustness, and cybersecurity, and to perform consistently throughout their lifecycle.
How Veridra produces the evidence: continuous evaluation through the Watch module — accuracy benchmarks run on schedule, drift detection catches robustness degradation, and security events are first-class signed records. Confidence thresholds are enforced at the signing gateway. The evidence that the system performed consistently is produced continuously, not inferred from quarterly reports.
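The sketch below shows the shape of that enforcement. The drift statistic is a deliberately crude stand-in (real detectors use PSI, KS tests, or similar), and the gateway rule is an assumption about how such a gate could work, not Veridra's actual logic.

```python
import statistics

def drift_score(baseline: list[float], current: list[float]) -> float:
    """Crude drift signal: shift in mean confidence, in units of baseline std dev."""
    base_std = statistics.pstdev(baseline) or 1.0
    return abs(statistics.mean(current) - statistics.mean(baseline)) / base_std

def gate_decision(confidence: float, min_confidence: float,
                  drift: float, max_drift: float = 2.0) -> str:
    # Illustrative gateway rule: refuse to sign decisions made under degraded
    # conditions, so the evidence stream itself enforces Article 15 thresholds.
    if drift > max_drift:
        return "blocked:drift"
    if confidence < min_confidence:
        return "blocked:low_confidence"
    return "signed"
```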
Article 50 — Transparency obligations (generative AI)
Generative AI outputs must be marked in machine-readable form as artificially generated or manipulated, with specific obligations for deepfakes and for AI-generated text published to inform the public on matters of public interest.
How Veridra produces the evidence: the Verify module (Q1 2027) emits C2PA-compatible content authenticity manifests — the named mechanism that European Commission guidance points to for satisfying Article 50 marking requirements. Every generative output carries a verifiable provenance trail.
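For orientation, here is a simplified, JSON-shaped view of the kind of manifest involved. Real C2PA manifests are serialized and cryptographically bound to the asset per the C2PA specification; the claim_generator value and helper function are hypothetical, though the c2pa.actions assertion and the trainedAlgorithmicMedia source type are concepts the spec defines.

```python
import hashlib

def authenticity_manifest(content: bytes, model_id: str) -> dict:
    """Simplified sketch of a content-credential manifest (not the real binary format)."""
    return {
        "claim_generator": "veridra-verify",  # hypothetical generator name
        "assertions": [{
            "label": "c2pa.actions",
            "data": {"actions": [{
                "action": "c2pa.created",
                # IPTC digital source type used to mark AI-generated media:
                "digitalSourceType":
                    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
            }]},
        }],
        "content_hash": hashlib.sha256(content).hexdigest(),  # binds manifest to the output
        "model_id": model_id,
    }
```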
Article 72 — Post-market monitoring
Providers must establish a post-market monitoring system to collect, document, and analyze relevant data on the performance of their high-risk AI systems throughout their lifecycle.
How Veridra produces the evidence: the Watch module is architected for Article 72 directly. Signed drift records, eval run results, incident records, and customer complaints flow into a single auditable stream. Reporting to supervisory authorities is a filter query, not a project.
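As a sketch of what "a filter query, not a project" means in practice, the snippet below assumes a hypothetical record schema (system_id, kind, timestamp) standing in for the real one.

```python
from datetime import datetime, timezone

def monitoring_report(stream: list[dict], system_id: str,
                      start: datetime, end: datetime) -> list[dict]:
    """Pull every post-market record for one system in a reporting window."""
    kinds = {"drift", "eval_run", "incident", "complaint"}
    return [
        rec for rec in stream
        if rec["system_id"] == system_id
        and rec["kind"] in kinds
        and start <= datetime.fromtimestamp(rec["timestamp"], tz=timezone.utc) < end
    ]
```

Because drift records, eval results, incidents, and complaints share one stream and one schema, the Article 72 report is a selection over existing evidence rather than an assembly effort.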
Supervisory authority engagement
Each member state designates a national competent authority, with the European AI Office coordinating at the Union level. Supervisory authorities can request documentation, access systems, and require corrective action. Veridra's evidence packs are designed for this interaction model — they answer the specific questions supervisory authorities are expected to ask (per published regulatory guidance), in the article-referenced format those authorities work from.
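As a final sketch, an article-referenced evidence pack could be as simple as grouping the signed record stream by the article each record evidences. The mapping and record kinds below are illustrative assumptions, not Veridra's actual pack format.

```python
def evidence_pack(stream: list[dict], article: str) -> dict:
    """Group the records that answer a supervisory request under one article."""
    article_kinds = {
        "Article 9":  {"decision", "mitigation"},
        "Article 12": {"decision", "modification"},
        "Article 14": {"intervention"},
        "Article 72": {"drift", "eval_run", "incident", "complaint"},
    }
    kinds = article_kinds.get(article, set())
    records = [rec for rec in stream if rec["kind"] in kinds]
    return {"article": article, "record_count": len(records), "records": records}
```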