SOC 2 in AI Governance
A SOC 2 Type II report assesses whether a service organisation’s controls operated effectively over a defined audit period. For AI systems, enterprise customers and regulators increasingly request SOC 2 reports as evidence of a mature control environment.
This page covers how VeriProof’s capabilities support the Trust Service Criteria (TSC) most relevant to AI systems. For VeriProof’s own SOC 2 status, see SOC 2 Compliance.
AI-Relevant Trust Service Criteria
Security (CC — Common Criteria)
The Security TSC is always included in a SOC 2 engagement. For AI systems, the criteria most directly relevant to production operation are:
CC7.1 — Detection and Monitoring of Anomalies
VeriProof’s governance scoring and alert rules directly satisfy CC7.1’s requirement to monitor for anomalous events that could indicate control failures or system degradation:
- Governance scoring provides a continuous quantitative signal
- Alert rules fire when the signal crosses a threshold, triggering investigation
- Alert acknowledgement logs demonstrate that the monitoring system resulted in action
CC7.2 — Evaluation of Security Events
When a governance alert fires, the time-machine session replay provides the tooling for CC7.2’s requirement to evaluate whether flagged events represent security or control issues. The ability to replay the exact session content means investigators can determine root cause from actual data, not from reconstructed logs.
CC7.4 — Incident Response
Alert acknowledgement notes and corrective action records satisfy CC7.4’s requirement for documented incident response. When combined with VeriProof’s evidence packages, you can demonstrate to auditors that:
- Incidents were detected (alert trigger log)
- Incidents were investigated (acknowledgement notes)
- Corrective action was taken and documented (resolution log)
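The three evidence stages listed above can be thought of as one linked incident record. The field names below are illustrative assumptions, not VeriProof's actual export schema; the sketch only shows the completeness check an auditor effectively performs.

```python
# Hypothetical CC7.4 evidence trail: trigger -> acknowledgement -> resolution.
incident = {
    "alert_id": "AL-1042",  # illustrative identifier
    "trigger": {"fired_at": "2024-03-02T09:14:00Z", "score": 0.71},
    "acknowledgement": {
        "by": "ml-ops-oncall",
        "note": "Score drop traced to stale embedding index.",
    },
    "resolution": {
        "root_cause": "Nightly index rebuild job failed silently.",
        "corrective_action": "Added job failure alerting; reran rebuild.",
    },
}

def is_audit_complete(record: dict) -> bool:
    """An auditor expects all three stages documented for each incident."""
    return all(record.get(k) for k in ("trigger", "acknowledgement", "resolution"))
```

An incident with a trigger but no resolution note would fail this check, which is exactly the gap an auditor will flag during the period walkthrough.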
Availability (A — Availability Criteria)
A1.2 — Monitoring of Environmental Threats
Governance score trend monitoring serves as an AI-specific form of the availability monitoring required under A1.2. A sustained drop in governance score can indicate model degradation, infrastructure issues, or data quality problems — all of which affect the effective availability of your AI system.
Configure a baseline drift alert to satisfy this requirement. Open Monitoring in the Customer Portal and click + New Rule. Set the metric to governance score (7-day average), configure a threshold representing a meaningful drop from your deployment baseline, set severity to High, and assign the SOC 2 control owner as the recipient.
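The drift rule described above amounts to comparing a 7-day average against a deployment baseline. A minimal sketch of that arithmetic, with an assumed `max_drop` tolerance of 0.10 (your own threshold should come from your baseline analysis):

```python
from statistics import mean

def seven_day_average(daily_scores: list[float]) -> float:
    """Mean of the most recent seven daily governance scores."""
    return mean(daily_scores[-7:])

def baseline_drift_alert(daily_scores: list[float],
                         baseline: float,
                         max_drop: float = 0.10) -> bool:
    """Fire when the 7-day average falls more than max_drop below baseline."""
    return (baseline - seven_day_average(daily_scores)) > max_drop

# A sustained decline: the rolling average drifts below the 0.92 baseline.
scores = [0.92, 0.91, 0.90, 0.82, 0.78, 0.75, 0.73, 0.72]
fired = baseline_drift_alert(scores, baseline=0.92)
```

Averaging over seven days is what makes this an availability signal rather than noise: a single bad day does not fire the rule, but sustained degradation does.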
Confidentiality (C — Confidentiality Criteria)
C1.1 — Identification of Confidential Information
VeriProof’s governance scoring can include a policy that flags sessions containing confidential information. In Settings → Governance Policies, add a custom policy and set the signal field to metadata.contains_restricted_pattern. Any session where your adapter emits this flag will be tracked and scored accordingly.
This requires your adapter to emit a contains_restricted_pattern metadata field, which you populate using your own content detection logic.
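Adapter-side detection logic might look like the following sketch. The regex patterns and the `build_session_metadata` helper are illustrative assumptions; only the `contains_restricted_pattern` field name comes from the policy configuration above, and real detection would use your organisation’s own classifiers.

```python
import re

# Illustrative restricted-content patterns; substitute your own detection logic.
RESTRICTED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US-SSN-like pattern
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-number-like digit run
]

def build_session_metadata(session_text: str) -> dict:
    """Populate the metadata field the governance policy keys on."""
    flagged = any(p.search(session_text) for p in RESTRICTED_PATTERNS)
    return {"contains_restricted_pattern": flagged}
```

Because the flag is computed inside your adapter, the confidential content itself never needs to leave your environment; only the boolean signal is scored.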
Evidence for SOC 2 Auditors
When your auditors are assessing your AI system’s controls under SOC 2, the relevant VeriProof evidence to provide includes:
| Auditor request | VeriProof evidence |
|---|---|
| “Show us how you monitor the AI system for anomalies” | Governance scoring configuration + alert rule inventory |
| “Provide evidence that anomalies were investigated” | Alert acknowledgement log for the audit period |
| “Show corrective actions taken in response to identified issues” | Alert resolution log with root cause and corrective action notes |
| “Evidence of change management controls for AI system changes” | Time-machine comparison of governance scores pre/post model updates |
| “Demonstrate record integrity” | Blockchain proof verification summary for the audit period |
Generate a custom evidence package for your SOC 2 audit period in Compliance → Evidence Exports in the Customer Portal. Select the audit period, choose the evidence sections to include (governance configuration, alert inventory, alert history, and blockchain proof verification summary), and click Download Evidence Pack (PDF). The package is attestation-signed and includes the generation timestamp.
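If you script the preparation of these exports, the portal options above imply a request shape like the following. This is a hypothetical sketch only; VeriProof’s actual export API is not documented on this page, and the section identifiers are assumed names for the four UI checkboxes.

```python
# Assumed section identifiers mirroring the portal's export checkboxes.
ALLOWED_SECTIONS = {
    "governance_configuration",
    "alert_inventory",
    "alert_history",
    "blockchain_proof_verification_summary",
}

def evidence_export_request(period_start: str, period_end: str,
                            sections: list[str]) -> dict:
    """Build a validated export-request payload for the audit period."""
    unknown = set(sections) - ALLOWED_SECTIONS
    if unknown:
        raise ValueError(f"unknown sections: {sorted(unknown)}")
    return {
        "period": {"start": period_start, "end": period_end},
        "sections": sorted(sections),
        "format": "pdf",  # the portal produces an attestation-signed PDF
    }

req = evidence_export_request("2024-01-01", "2024-12-31",
                              ["alert_history", "alert_inventory"])
```

Validating section names up front keeps a misspelled section from silently producing an incomplete evidence pack for the audit period.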
Scoping Your AI System in a SOC 2 Engagement
If you’re including your AI system in your own SOC 2 Type II audit, work with your audit firm to determine how the AI components are scoped:
- In-scope: The AI inference service (model endpoint), session capture integration, and the monitoring system (VeriProof configuration + alerts)
- Likely out of scope as a separate system: VeriProof itself (evaluated as a subservice organisation; your auditor will review VeriProof’s own SOC 2 report when available)
- Relevant control owner: The person responsible for maintaining the governance scoring configuration and responding to alerts is the control owner for AI monitoring controls
Next Steps
- SOC 2 Compliance — VeriProof’s own SOC 2 status
- Evidence Export — generating audit evidence packages
- NIST AI RMF — MEASURE — quantitative risk assessment that complements SOC 2