Agent identity, governed.
Agents will make decisions your humans cannot audit in real time. Every agent needs an identity, a permission scope, an approval gate, and a signed record for every tool call. Shipping Q3 2026.
The hardest open problem in enterprise AI is not whether models are accurate. It is whether agents acting on their own authority leave evidence an organization can defend.
The problem Agents solves
An enterprise deploying AI agents faces a governance gap that does not exist for human employees. A human loan officer has an employee ID, a scoped set of permissions, a supervisor chain, and a paper trail. An AI agent making the same loan-adjacent decisions typically has none of this — it runs under a shared service account, with undifferentiated permissions, with no approval gate, and with logs that are not evidence.
The Agents module closes this gap. It treats every agent as a first-class identity with its own permission scope, its own audit trail, and its own accountability profile — on the same substrate Veridra uses for human-authored decisions.
Four components
Non-human identity
Every agent gets a scoped identity issued through your existing identity provider — Okta, Microsoft Entra, Ping, or SPIFFE-based service identity for cloud-native deployments. The identity carries a declared purpose, a declared scope, a declared operator, and a declared approval chain. Agent identity is not a checkbox; it is a governed artifact that answers the question who was this agent, acting on whose authority, within what bounds?
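The governed identity artifact described above can be pictured as a small structured record. This is an illustrative sketch only — the field names (`agent_id`, `purpose`, `operator`, `scopes`, `approval_chain`) are hypothetical and do not represent the product's actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of a governed agent identity.
# Field names are illustrative, not the product's real schema.
@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str                    # issued via the org's IdP (Okta, Entra, SPIFFE, ...)
    purpose: str                     # declared purpose of the agent
    operator: str                    # human or team accountable for the agent
    scopes: tuple                    # declared permission scopes
    approval_chain: tuple            # named reviewers for gated actions

identity = AgentIdentity(
    agent_id="spiffe://example.org/agents/loan-assistant",
    purpose="loan-adjacent customer support",
    operator="lending-ops@example.org",
    scopes=("customers:read",),
    approval_chain=("lending-supervisor@example.org",),
)
```

Together, these fields answer the three questions the identity must carry: who the agent is, on whose authority it acts, and within what bounds.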
Scoped permissions
An agent authorized to read customer data is not automatically authorized to write to accounting systems. Permission scopes are declared explicitly, enforced at the tool-call boundary, and signed into the decision record alongside the action itself. A scope breach is a cryptographic event, not a log anomaly — and the enforcement gate refuses the action before it reaches the downstream system.
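Enforcement at the tool-call boundary can be sketched as a gate that checks the declared scope before the action ever reaches the downstream system. A minimal illustration, with hypothetical names (`enforce_scope`, `ScopeBreach`) that are not the product's API:

```python
class ScopeBreach(Exception):
    """Raised when an agent attempts a tool call outside its declared scope."""

def enforce_scope(granted: set, required: str) -> None:
    # The gate refuses the action before it reaches the downstream system.
    if required not in granted:
        raise ScopeBreach(f"scope '{required}' not granted")

granted = {"customers:read"}

enforce_scope(granted, "customers:read")          # allowed: within declared scope

denied = False
try:
    enforce_scope(granted, "accounting:write")    # refused at the boundary
except ScopeBreach:
    denied = True                                  # the breach is an explicit event
```

In the product, the breach would additionally be signed into the decision record; here the exception simply marks the point where that cryptographic event would be emitted.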
Tool-call audit
Every tool call an agent makes is captured, canonicalized, and signed as a decision event in the Attest pipeline. This means the complete chain — this agent, given this prompt, called this tool with these arguments, producing this result — is preserved as append-only evidence. Replayable. Verifiable. Inclusion-proofed in the transparency log alongside every other signed decision.
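The capture-canonicalize-sign step can be sketched with deterministic JSON serialization and an HMAC standing in for the production signature scheme. All names here are illustrative assumptions, not the Attest pipeline's real interface:

```python
import hashlib
import hmac
import json

def _canonical(event: dict) -> bytes:
    # Canonicalize: sorted keys and fixed separators give a deterministic
    # byte string, so the same event always produces the same signature.
    return json.dumps(event, sort_keys=True, separators=(",", ":")).encode()

def sign_tool_call(event: dict, key: bytes) -> dict:
    # HMAC-SHA256 stands in for the real signing scheme in this sketch.
    sig = hmac.new(key, _canonical(event), hashlib.sha256).hexdigest()
    return {"event": event, "sig": sig}

def verify(record: dict, key: bytes) -> bool:
    expected = hmac.new(key, _canonical(record["event"]), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

record = sign_tool_call(
    {
        "agent": "loan-assistant",
        "tool": "get_customer",
        "args": {"customer_id": "c-1"},
        "result_hash": "abc123",
    },
    key=b"demo-key",
)
```

Because the event is canonicalized before signing, any later tampering with the agent name, tool, arguments, or result hash breaks verification — which is what makes the record replayable evidence rather than a mutable log line.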
Human-approval gates
For high-impact actions, Agents routes to named human reviewers before the action executes. Thresholds are configurable per agent, per action type, per customer segment. The approval itself is a signed event — approver identity, timestamp, reasoning — that joins the decision chain in the evidence log. The human is in the loop, and the loop is in the evidence.
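The gating logic above can be sketched as a threshold check plus a structured approval payload. The threshold table and function names are hypothetical examples, not the product's configuration format:

```python
from datetime import datetime, timezone

# Illustrative per-action thresholds; in the product these are configurable
# per agent, per action type, per customer segment.
APPROVAL_THRESHOLDS = {"adjust_credit_limit": 1000.0}

def requires_approval(action: str, amount: float) -> bool:
    threshold = APPROVAL_THRESHOLDS.get(action)
    return threshold is not None and amount >= threshold

def approval_event(approver: str, reasoning: str) -> dict:
    # The approval joins the decision chain as a signed event; this sketch
    # builds only the unsigned payload (approver, timestamp, reasoning).
    return {
        "approver": approver,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reasoning": reasoning,
    }

gated = requires_approval("adjust_credit_limit", 5000.0)      # True: route to reviewer
routine = requires_approval("adjust_credit_limit", 50.0)      # False: executes directly
```

The key design point is that the approval record carries the same three elements the prose names — approver identity, timestamp, reasoning — so the human's decision is evidence, not just a UI click.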
What Agents will be cross-referenced against
- EU AI Act high-risk obligations — agent-operated decisions in lending, insurance, or employment contexts fall under Annex III and require the same Article 9/12/14/15 controls as human-operated decisions.
- NIST AI RMF MANAGE-3 (third-party risk) — agents acting on behalf of the organization need the same governance as third-party model providers, because they operate with delegated authority.
- OWASP Top 10 for LLM applications — agents introduce specific risk classes (excessive agency, prompt injection chains, privilege escalation through tool chaining) that the Agents module addresses directly.
- SOC 2 CC6 (Logical Access) — agent identities are in-scope for access control just as human identities are; Agents makes them evidenceable.
Expected integration partners
Agents module will ship with direct integrations for the major agent frameworks: LangGraph, AutoGen, CrewAI, and the emerging MCP-based deployments. For enterprise identity, integration targets include Okta Workforce and Customer Identity, Microsoft Entra, CyberArk, and HashiCorp Boundary.