
First Integration Guide

This guide takes you beyond the quickstart. By the end, your AI application will produce compliance records with full governance context — decisions captured with rationale and confidence, governance annotations populating the Compliance Center, and alert rules notifying you when something needs attention.

Prerequisites: Complete the 5-Minute Quickstart first. You should have the SDK installed, your API key configured, and at least one trace visible in the Customer Portal.


Register your application in the portal

Before sending production traces, register your application in the Customer Portal:

  1. Go to Applications in the left sidebar.
  2. Click New Application.
  3. Enter a name that matches the application_id you pass to the SDK (e.g. loan-review-agent).
  4. Select the applicable regulatory framework (EU AI Act, NIST AI RMF, or None) — this drives how VeriProof classifies sessions for compliance scoring.
  5. Save. The new application then appears in the sidebar of both Decisions and the Compliance Center.

The SDK still exports traces when an application is not registered; they appear under “Unknown Application” until you register it. Registration enables framework-specific classification and custom alert rules for that application.

Add decision context

The most important annotation you can add is a structured decision record. Decisions are what auditors, regulators, and your compliance team will scrutinize first.

```python
from veriproof_sdk import (
    VeriproofSession,
    DecisionContext,
    RiskLevel,
    SessionOutcome,
    StepTags,
    configure_veriproof,
    VeriproofClientOptions,
)
import os

configure_veriproof(
    VeriproofClientOptions(
        api_key=os.environ["VERIPROOF_API_KEY"],
        application_id="loan-review",
    ),
    service_name="loan-review",
    set_global=True,
)

async def evaluate_application(app_id: str) -> str:
    session = (
        VeriproofSession(application_id="loan-review")
        .with_session_id(f"loan_{app_id}")
        .with_intent("loan_eligibility_check")
    )
    async with session:
        # Capture a data-retrieval step
        session.add_step(
            "fetch_credit_score",
            output={"bureau": "transunion", "score": 718},
            tags=StepTags.retrieval(),
        )

        # Capture the LLM evaluation
        session.add_chat_turn(
            prompt=f"Should we approve application {app_id}? Credit score: 718.",
            response="Recommend approval. Score of 718 is above the 680 threshold.",
            model="gpt-4o",
        )

        # Structured decision — this is what appears in the Decisions explorer
        session.set_decision(
            DecisionContext.with_options(
                "Loan eligibility decision",
                options=["approve", "deny", "manual_review"],
                selected="approve",
                rationale="Credit score 718 exceeds minimum threshold of 680. DTI at 32% within policy.",
                confidence=0.91,
            )
        )

        # Final outcome and risk classification
        session.set_outcome(SessionOutcome.APPROVED, risk_level=RiskLevel.LOW)

    return "approve"
```

Add governance annotations

Governance annotations populate VeriProof’s Compliance Center and Governance Trends charts. They answer the regulatory question “was this AI system operating within its intended scope and controls?”

The key annotation types are:

Annotation             What it records
guardrail.action       Whether a guardrail passed, blocked, or escalated the output
grounding.status       Whether the LLM response was grounded in retrieved context
human_oversight.type   Human review, approval, or override events
content_safety.action  Content moderation outcomes
agent.role             Whether this was a primary, secondary, or orchestrator agent
```python
from veriproof_sdk_annotations import trace_operation, set_governance_attributes

@trace_operation(
    "loan.evaluate",
    attributes={
        "guardrail.action": "passed",
        "grounding.status": "grounded",
        "agent.role": "primary",
    },
)
async def evaluate_with_guardrails(application_id: str) -> str:
    result = await llm.complete(f"Evaluate loan {application_id}.")

    # Set additional attributes based on runtime conditions
    if result.flagged_by_safety_filter:
        set_governance_attributes(
            content_safety_action="blocked",
            guardrail_action="blocked",
        )
    return result.text
```

Governance attribute values must match the canonical vocabulary. Unrecognized strings are silently discarded by the ingest parser. See the Governance Attributes Reference for the full list of accepted values.
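Because unrecognized values are dropped rather than rejected, it can be worth validating annotations client-side before export. A minimal sketch; the `CANONICAL` table below is an illustrative subset taken from the examples in this guide, not the authoritative vocabulary:

```python
# Illustrative subset only; the Governance Attributes Reference is authoritative.
CANONICAL = {
    "guardrail.action": {"passed", "blocked", "escalated"},
    "agent.role": {"primary", "secondary", "orchestrator"},
}

def find_discarded(attrs: dict) -> list:
    """Return annotation keys whose values the ingest parser would silently drop."""
    return [key for key, value in attrs.items()
            if key in CANONICAL and value not in CANONICAL[key]]
```

Calling `find_discarded` before exporting a session lets you log a warning instead of losing the annotation silently.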

Configure your first alert rule

Alert rules notify you when sessions match conditions you care about — high risk levels, blocked guardrails, low-confidence decisions.

  1. In the Customer Portal, open Alerts in the left sidebar.
  2. Select the Alert Rules tab.
  3. Click New Rule.
  4. Configure a basic rule:
    • Trigger: Risk Level is HIGH or CRITICAL
    • Action: Email notification to your team
    • Scope: Application loan-review
  5. Save and activate the rule.

The next session with a HIGH or CRITICAL risk level from your loan-review application will trigger the notification.
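Conceptually, the rule you just configured evaluates every finished session against a scope and a trigger. A hedged sketch of that matching logic (the real evaluation happens server-side in the portal; `matches_rule` and the dict fields are illustrative, not an SDK API):

```python
def matches_rule(session: dict, rule: dict) -> bool:
    """True when a finished session would fire the alert rule above."""
    return (
        session["application_id"] == rule["scope"]
        and session["risk_level"] in rule["trigger_levels"]
    )

# The rule from step 4: scope = loan-review, trigger = HIGH or CRITICAL
high_risk_rule = {"scope": "loan-review", "trigger_levels": {"HIGH", "CRITICAL"}}
```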

View the compliance dashboard

Once a few sessions are captured, explore the Compliance Center:

  • Governance Score — an aggregate of guardrail pass rates, grounding quality, and human oversight ratios across all sessions
  • Governance Trends — time-series chart of score components
  • Decisions Explorer — filter sessions by risk level, outcome, model, and date range
  • Time Machine — reconstruct the exact state of any session at any point in time

The Blockchain Status indicator on each session shows whether the session’s proof has been anchored to Solana. A green shield icon means the record is independently verifiable.


Typical integration checklist

Before going to production, confirm you have:

  • SDK initialized at application startup (not per-request)
  • application_id registered in the Customer Portal
  • Production API key configured via secret manager (not environment files)
  • At least one decision recorded per session with rationale and confidence
  • Governance annotations added to guardrail checks and human oversight steps
  • At least one alert rule configured
  • Content capture decision made explicitly (enable_content_capture default is false)
  • Graceful shutdown wired to flush buffered spans before process exit
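The final checklist item can be wired up with a process-exit hook. A sketch using Python's standard atexit module, assuming your integration keeps a handle to something flushable; `SpanBuffer` here is a stand-in, not the SDK's real class:

```python
import atexit

class SpanBuffer:
    """Stand-in for the SDK's internal span buffer."""

    def __init__(self):
        self.pending = ["span-1", "span-2"]  # spans not yet exported
        self.flushed = False

    def flush(self):
        # Export anything still buffered, then mark the buffer drained.
        self.pending.clear()
        self.flushed = True

buffer = SpanBuffer()
# Ensure buffered spans are exported before the process exits.
atexit.register(buffer.flush)
```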

FAQ

Do I need to use the session builder, or does the framework adapter do everything?

If you use a framework adapter (LangGraph, CrewAI, etc.), basic traces are captured automatically. The session builder adds the governance-specific context — decisions, outcomes, risk levels — that compliance dashboards depend on. You can mix both: let the adapter cover framework spans, and use the session builder to attach structured governance metadata.

Does VeriProof add latency to my AI calls?

No. The SDK exports spans asynchronously after each span completes. Your AI calls are not blocked. If the ingest API is unreachable, the SDK buffers up to 500 payloads in memory and uses a circuit breaker to prevent backpressure on your application.

What happens if the ingest API is down?

The circuit breaker opens after 3 consecutive failures and resumes exports when the API recovers. Records captured during an outage are held in the in-memory buffer. If the buffer fills (500 payloads), oldest records are dropped. For critical compliance applications, consider the enterprise deployment option with a local ingest relay.
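The behavior described in these two answers can be sketched in a few lines of Python. This is not the SDK's actual implementation; `ExportPipeline` simply mirrors the numbers stated above (circuit opens after 3 consecutive failures, buffer capped at 500 payloads with oldest-first eviction):

```python
from collections import deque

class ExportPipeline:
    """Illustrative model of the SDK's buffering and circuit breaker."""

    def __init__(self, max_buffer=500, failure_threshold=3):
        self.buffer = deque(maxlen=max_buffer)  # a full buffer evicts oldest
        self.failures = 0
        self.failure_threshold = failure_threshold

    @property
    def circuit_open(self):
        return self.failures >= self.failure_threshold

    def export(self, payload, send):
        self.buffer.append(payload)
        if self.circuit_open:
            # Skip network calls while open. (The real SDK re-probes the API
            # after a cooldown; the time-based half-open state is omitted here.)
            return
        try:
            while self.buffer:
                send(self.buffer[0])  # drain oldest-first
                self.buffer.popleft()
            self.failures = 0
        except ConnectionError:
            self.failures += 1
```

After three failed exports the breaker opens and later payloads only accumulate in the deque; once 500 are buffered, each new payload evicts the oldest.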

