Integrations · Anthropic

Claude Messages, signed on the way out.

Wrap the Anthropic client once. Messages, tool use, extended thinking, prompt caching. Every Claude interaction becomes an attested, regulator-admissible record.

Our Anthropic integration is a wrapper over the official Anthropic SDKs: anthropic (Python), @anthropic-ai/sdk (Node), and the Bedrock and Vertex variants.

The wrap pattern

Instantiate the Anthropic client as you already do — whether via direct API key, Bedrock IAM, or Vertex AI ADC. Pass it into veridra.wrap_anthropic(client, system_id="claims-triage-v2"). The returned object has the same methods and the same types, and TypeScript users keep full inference on messages.create.

```python
from anthropic import Anthropic
import veridra

client = Anthropic()
wrapped = veridra.wrap_anthropic(client, system_id="claims-triage-v2")

msg = wrapped.messages.create(
    model="claude-sonnet-4-5",
    messages=[{"role": "user", "content": "..."}],
    max_tokens=1024,
)
```

For streaming, we intercept the event iterator and re-emit it. Your code still receives the raw stream, and Veridra writes the attestation on stream close using the aggregated final message. If the stream is aborted mid-flight, a partial-turn record is signed with stop_reason="client_abort" so nothing gets lost or silently re-attempted.
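The interception pattern can be sketched with a plain generator — a minimal illustration of the re-emit-and-sign-on-close idea, not Veridra's actual implementation, using a simplified event shape:

```python
def attested_stream(events, sign):
    """Re-emit streaming events untouched; on normal close, sign the
    aggregated message; on client abort, sign a partial-turn record."""
    parts = []
    try:
        for ev in events:
            if ev.get("type") == "content_block_delta":
                parts.append(ev["delta"]["text"])
            yield ev  # the caller sees the raw event stream
    except GeneratorExit:
        # Consumer abandoned the stream mid-flight
        sign({"text": "".join(parts), "stop_reason": "client_abort"})
        raise
    else:
        sign({"text": "".join(parts), "stop_reason": "end_turn"})
```

Consuming the generator to completion triggers the `end_turn` record; closing it early (as the SDK does when a stream context exits on error) triggers the `client_abort` record.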

What Veridra captures

Messages and system prompt

The full message array, system prompt (as string or structured blocks), and any cache_control anchors. We distinguish cached vs. non-cached segments so you can audit which portion of the context was served from cache on each call.
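To illustrate the cached/non-cached distinction, here's a hypothetical helper (not part of the wrapper's public API) that partitions a content-block array at its `cache_control` anchor — in the Anthropic API, a block carrying `cache_control` marks the end of a cacheable prefix. This sketch handles the single-anchor case only:

```python
def split_cache_segments(blocks):
    """Split content blocks into (cacheable_prefix, live_suffix):
    the prefix runs up to and including the cache_control anchor."""
    cached, live, seen_anchor = [], [], False
    for block in blocks:
        if seen_anchor:
            live.append(block)
        else:
            cached.append(block)
            if "cache_control" in block:
                seen_anchor = True
    return cached, live
```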

Tool use round-trips

Tool definitions, tool_use blocks emitted by the model, and tool_result blocks sent back — all signed in a single record per turn. Multi-step agents produce a linked chain of signed turns sharing a correlation id.
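The per-turn chaining might look like the following sketch, where SHA-256 hash-linking stands in for the real signature and the record shape is illustrative rather than Veridra's actual schema:

```python
import hashlib
import json

def sign_turn(blocks, correlation_id, prev_hash=None):
    """Bundle one turn's tool round-trip into a record and chain it
    to the previous signed turn via its digest."""
    record = {
        "correlation_id": correlation_id,
        "tool_use": [b for b in blocks if b["type"] == "tool_use"],
        "tool_result": [b for b in blocks if b["type"] == "tool_result"],
        "prev": prev_hash,
    }
    # Canonical JSON (sorted keys) so the digest is deterministic
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record, digest
```

Each turn's `prev` field points at the digest of the turn before it, so tampering with any turn breaks the chain for every turn after it.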

Extended thinking

For models that emit thinking blocks, the thinking content is hashed and logged. Whether the plaintext is retained in the signed record is governed by a Govern policy (typically hash-only for sensitive deployments, full retention for internal debugging).
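A hash-only redaction step could look like this sketch — the function name and the policy wiring around it are hypothetical:

```python
import hashlib

def redact_thinking(blocks, retain_plaintext=False):
    """Replace thinking blocks with their SHA-256 digest,
    optionally keeping the plaintext alongside it."""
    out = []
    for b in blocks:
        if b.get("type") == "thinking":
            entry = {
                "type": "thinking",
                "sha256": hashlib.sha256(b["thinking"].encode()).hexdigest(),
            }
            if retain_plaintext:
                entry["thinking"] = b["thinking"]
            out.append(entry)
        else:
            out.append(b)
    return out
```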

Model, stop reason, usage

Model slug (claude-sonnet-4-5, claude-opus-4-1, etc.), stop_reason, stop_sequence, and the full usage block: input_tokens, output_tokens, cache_creation_input_tokens, cache_read_input_tokens. Enough to reconstruct cost and cache-hit behavior from evidence alone.
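For example, a cache-hit ratio falls straight out of the usage block. Note that in Anthropic's accounting, input_tokens excludes cached tokens, which are reported separately:

```python
def cache_hit_ratio(usage):
    """Fraction of input context served from cache on this call.
    input_tokens counts only uncached input; cache reads and cache
    writes are reported in their own fields."""
    read = usage.get("cache_read_input_tokens", 0)
    total = (usage["input_tokens"]
             + read
             + usage.get("cache_creation_input_tokens", 0))
    return read / total if total else 0.0
```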

Why this works across surfaces

  • Works identically against direct Anthropic API, Bedrock Claude, and Vertex AI Claude. The wrapper dispatches by client type.
  • Composes with tool-use frameworks (LangGraph, Instructor, custom agents). Wrap the base client and the framework inherits attestation.
  • Honors prompt caching semantics: cached tokens are captured in usage, and cache_control is preserved in the signed request.
  • Policy hooks can mutate or reject the request before it reaches Anthropic, and can redact thinking blocks before signing.
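One hypothetical shape for such hooks — the real Veridra hook API may differ — is a pipeline of functions that each return a (possibly mutated) request, or raise to reject it before it reaches Anthropic:

```python
class PolicyReject(Exception):
    """Raised by a hook to block the request before it is sent."""

def apply_policy(request, hooks):
    """Run before-send hooks in order; each may mutate or replace
    the request, or raise PolicyReject."""
    for hook in hooks:
        request = hook(request)
    return request

def redact_thinking_blocks(request):
    # Example hook: strip thinking blocks from prior assistant turns
    for msg in request.get("messages", []):
        if isinstance(msg.get("content"), list):
            msg["content"] = [b for b in msg["content"]
                              if b.get("type") != "thinking"]
    return request

def reject_flagged(request):
    # Example hook: reject requests containing a flagged marker
    if "SSN" in str(request):
        raise PolicyReject("PII detected")
    return request
```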
Claude-specific content blocks

Generic HTTP wrappers miss these

The wrapper understands Claude-specific surface area that generic HTTP interceptors miss: the content-block array, tool_use and tool_result blocks, thinking blocks on extended-thinking models, and cache_control markers. We canonicalize each of these into the signed payload so a replayed or audited record reflects exactly what Claude saw and said.

One storyline per case

Agentic flows stitched by correlation id

Wrapped Claude calls inherit the Govern system's policy version, emit Attest records bound to that policy, and feed Watch for refusal-rate and tool-usage drift. When an agentic flow spans many turns, the evidence pack stitches them back together using the correlation id — one signed storyline per case.
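The stitching step is conceptually just a group-and-sort over signed records. A sketch, with an illustrative record shape rather than Veridra's actual schema:

```python
from collections import defaultdict

def stitch_storylines(records):
    """Group signed turn records into per-case storylines keyed by
    correlation id, ordered by turn index within each case."""
    cases = defaultdict(list)
    for rec in records:
        cases[rec["correlation_id"]].append(rec)
    for turns in cases.values():
        turns.sort(key=lambda r: r["turn"])
    return dict(cases)
```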