
What is VeriProof?

VeriProof is an AI governance evidence platform for teams deploying AI in regulated or high-accountability environments. It captures what your AI system did, what it decided, what controls were applied, and what evidence exists to defend that decision later.

It is built for a specific problem: most teams can observe latency, tokens, and model errors, but they still cannot answer the questions regulators, auditors, legal teams, and enterprise customers eventually ask:

  • What did the system decide?
  • What evidence was available at that moment?
  • What policy or guardrail fired?
  • Was a human involved?
  • Can you prove the record was not changed after the fact?

VeriProof delivers the forensic and governance infrastructure for agentic AI, ensuring that autonomous workflows are not just observable, but policy-aligned and forensic-grade from the moment of execution.


Why teams buy VeriProof

As AI systems move from copilots to decision support and transaction influence, the burden of proof changes. It is no longer enough to say a model was “tested” or that an application has “guardrails.” Teams need durable evidence that governance happened on real traffic, in production, over time.

Without that evidence, four things happen repeatedly:

  • Audit work becomes manual: engineering and compliance teams reconstruct decisions from logs, tickets, spreadsheets, and screenshots.
  • Controls are hard to prove: you may have moderation, retrieval, approval flows, and policies, but not a clean record of when they actually fired.
  • Incidents are expensive to investigate: when a customer challenges an outcome, the evidence is fragmented across systems that were not designed for defensibility.
  • Enterprise sales slows down: security reviews and procurement questionnaires keep asking for governance capabilities most AI stacks do not expose cleanly.

VeriProof addresses this by turning OpenTelemetry traces and governance annotations into a compliance record system purpose-built for AI.


Where VeriProof sits in your architecture

VeriProof is not your model host and not your request gateway. It sits beside your existing stack:

```
Your application / agents / workflows
        |
        |  OpenTelemetry spans + governance attributes
        v
VeriProof SDK / adapters
        |
        |  batched evidence records
        v
VeriProof ingest + governance services
        |
        +--> tenant-isolated evidence store
        |
        +--> alerts, dashboards, scoring, export
        |
        +--> Solana blockchain anchor
```

That design matters because it means:

  • your existing framework stays in place
  • your LLM provider configuration does not need to change
  • there is no request-path latency tax from a proxy layer
  • engineering teams instrument once and keep shipping

What VeriProof actually records

Every AI session flowing through VeriProof produces a structured evidence trail. At a high level, that trail includes:

| What | Detail |
| --- | --- |
| Session trace | The ordered path of the AI workflow: retrieval, prompts, model calls, tool invocations, agent handoffs, approval steps |
| Decision context | What the system chose, what options were available, why it chose them, and with what confidence |
| Governance metadata | Risk level, grounding status, guardrail results, policy outcomes, human review flags, data sensitivity |
| Operational context | Application ID, environment, service name, correlation IDs, user-defined transaction identifiers |
| Outcome linkage | Optional downstream feedback or real-world outcome data connected back to the original session |
| Tamper-evidence | A cryptographic anchor proving the evidence record existed in that form at that time |
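Concretely, one such trail could be represented as a structured record along these lines. The field names and values below are purely illustrative, not VeriProof's documented schema:

```python
# Hypothetical shape of a single evidence record. Field names are
# illustrative only — they are not VeriProof's actual schema.
evidence_record = {
    "session_trace": ["retrieval", "prompt", "model_call", "tool_invocation", "approval"],
    "decision_context": {
        "chosen": "approve",
        "options": ["approve", "refer", "deny"],
        "confidence": 0.87,
    },
    "governance": {
        "risk_level": "high",
        "grounded": True,
        "human_review": True,
    },
    "operational": {
        "application_id": "loan-review",
        "environment": "production",
    },
    "tamper_evidence": {
        "leaf_hash": "<sha256 of this record>",
        "anchor": "<Solana CMT reference>",
    },
}
```

The `tamper_evidence` portion is what ties an individual record to the blockchain anchoring described later on this page.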

In practice, this gives different stakeholders different things: engineers get the ordered session trace, compliance teams get the governance metadata and dashboards, and auditors get tamper-evident records they can verify independently.


How VeriProof integrates

VeriProof uses OpenTelemetry as its wire format. If your AI framework already emits OTel spans (LangGraph, CrewAI, OpenAI Agents SDK, Vercel AI, Semantic Kernel, AutoGen, LlamaIndex), adding VeriProof is a one-line configuration change. If it doesn’t, the VeriProof Session Builder gives you a fluent API to instrument manually.

```python
from veriproof_sdk import VeriproofClientOptions, configure_veriproof

# One line — all subsequent OTel spans from this process are captured
configure_veriproof(
    VeriproofClientOptions(api_key="vp_cust_...", application_id="loan-review"),
    service_name="loan-review",
    set_global=True,
)
```

There is no VeriProof sidecar to deploy, no agent to manage, and no changes to your LLM provider configuration. Your AI calls stay exactly as they are.
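The Session Builder's exact interface is not shown on this page. Purely to illustrate what a fluent instrumentation API looks like, here is a minimal stand-in; every name in it is hypothetical and not the real veriproof_sdk interface:

```python
# Illustrative only — a minimal fluent-builder pattern, NOT the real
# veriproof_sdk Session Builder API. All names here are hypothetical.
class SessionBuilder:
    def __init__(self, application_id: str):
        self.record = {"application_id": application_id, "steps": []}

    def step(self, name: str, **attrs):
        self.record["steps"].append({"name": name, **attrs})
        return self  # returning self is what makes the API "fluent"

    def build(self) -> dict:
        return self.record


session = (
    SessionBuilder("loan-review")
    .step("retrieval", source="policy_docs")
    .step("model_call", model="gpt-4o", decision="approve")
    .step("human_review", approved=True)
    .build()
)
```

Each chained call returns the builder itself, so a whole session can be described in one expression and emitted as a single structured record.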


The three layers of the platform

VeriProof operates in three distinct layers:

1. Instrumentation layer (your codebase)

The SDK observes your AI calls. You configure it once at application startup. Framework adapters for LangGraph, CrewAI, and others auto-instrument common patterns. Custom governance context — decisions, risk levels, human review events — can be added via annotations or span attributes.

2. Ingest & governance layer (VeriProof managed)

Each compliance record is validated, classified, and stored in your tenant’s isolated data store. A Merkle hash of every record is computed and batched for anchoring. The governance engine evaluates configured alert rules and populates compliance dashboards in real time.
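The Merkle batching idea can be sketched in a few lines. This illustrates the general technique only; VeriProof's exact hashing scheme is not specified here:

```python
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single root, duplicating the
    last node on odd-sized levels. Illustrative sketch only."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


records = [b"record-1", b"record-2", b"record-3"]
root = merkle_root(records)  # one 32-byte root stands in for the whole batch
```

Anchoring only the root means one on-chain write covers an entire batch, while any altered record changes the root and is therefore detectable.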

3. Blockchain anchoring layer (Solana)

Batches of Merkle roots are written to a Solana Concurrent Merkle Tree (CMT). This creates an independent, verifiable proof that a set of compliance records existed and were not subsequently altered — regardless of whether VeriProof’s infrastructure is available. Any auditor with the Solana account address and a record’s leaf hash can verify independently.
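Independent verification follows the standard Merkle-proof pattern: recompute the root from the record's leaf hash and its sibling path, then compare against the anchored root. A simplified sketch of that idea (the real Solana CMT verification differs in its details):

```python
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def verify_proof(leaf_hash: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    """Recompute the root from a leaf hash and its sibling path.
    `proof` is a list of (sibling_hash, side) pairs, side in {"left", "right"}.
    Illustrative of the general technique only."""
    node = leaf_hash
    for sibling, side in proof:
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root


# Tiny two-leaf tree: root = H(H(a) + H(b))
a, b = h(b"record-a"), h(b"record-b")
root = h(a + b)
ok = verify_proof(a, [(b, "right")], root)
```

The auditor needs only the leaf hash, the sibling path, and the anchored root; none of those depend on VeriProof's infrastructure being online.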


Why the blockchain anchor matters

Most audit systems stop at database integrity. VeriProof goes one step further: it creates independent proof that a record existed and was not silently rewritten later.

That is useful when:

  • a customer disputes an AI-assisted outcome
  • an internal review asks whether evidence was edited after an incident
  • a regulator or auditor wants verification independent of your vendor’s hosted environment
  • you need a tamper-evident chain without anchoring every single event individually on-chain

The blockchain is not there for marketing. It is there to make the compliance record harder to dispute.


Who VeriProof is for

VeriProof is built for teams deploying AI in regulated industries where accountability matters:

  • Financial services — loan decisions, fraud scoring, credit underwriting
  • Healthcare — clinical decision support, triage assistance, PHI handling
  • Legal & insurance — claims assessment, contract analysis, risk evaluation
  • Public sector — benefits determination, enforcement support, administrative AI

If your AI system makes or influences decisions that affect people — and someone will eventually ask you to prove it was fair, accurate, and governed — VeriProof is the infrastructure that makes that proof possible.


What VeriProof is not

VeriProof is intentionally not trying to replace the rest of your AI platform.

  • It is not a model gateway.
  • It is not a prompt management tool.
  • It is not a vector database.
  • It is not a content moderation product by itself.
  • It is not a request-path policy engine that all traffic must proxy through.

It is an observability, governance, and evidence layer for AI systems.


A practical mental model

If application monitoring tells you whether your system was fast and healthy, VeriProof tells you whether your AI system was governed, explainable, and defensible.

That means VeriProof helps answer questions like:

  • Which customer-impacting decisions were made by AI last week?
  • Which ones lacked grounding or human review?
  • Which applications are generating high-risk sessions?
  • Which policy failed during a disputed outcome?
  • Can we prove this audit record was not changed after it was created?
