EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive horizontal regulation for artificial intelligence. It establishes a risk-based framework: the higher the potential harm of an AI system, the more rigorous the compliance requirements.
This section provides detailed, implementation-focused guidance for using VeriProof to satisfy the articles that apply to high-risk AI system providers and deployers.
The EU AI Act entered into force on 1 August 2024. High-risk AI system obligations under Annex III became fully applicable on 2 August 2026. GPAI model obligations have applied since 2 August 2025.
Does the Act Apply to You?
The Act applies to you if you are:
A Provider (Article 3(3)) — You develop an AI system, or have one developed, and place it on the EU market or put it into service under your own name or trademark. Most enterprise AI teams building on top of foundation models are providers of the AI system they deploy, even if they didn’t train the underlying model.
A Deployer (Article 3(4)) — You use an AI system under your authority in a professional context that affects EU individuals.
A GPAI Model Provider (Article 3(63)) — You develop and place on the market a General-Purpose AI model (e.g., a fine-tuned foundation model distributed to others).
Risk Classification
The Act establishes four risk tiers:
| Risk tier | Examples | Compliance obligations |
|---|---|---|
| Prohibited | Social scoring, real-time remote biometric identification in publicly accessible spaces | Cannot be placed on market |
| High-risk | Employment decisions, education/training, access to essential services, law enforcement, border control, critical infrastructure, biometric identification, judicial decisions | Full obligations (Articles 9–17 for providers; Article 26 for deployers) |
| Limited-risk | Chatbots, deepfakes, AI-generated content | Transparency obligations only (Article 50) |
| Minimal-risk | AI spam filters, AI-enabled video games | Voluntary codes of conduct |
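For teams triaging a portfolio of systems, the tiering above can be expressed as a simple lookup. The sketch below is illustrative only: the use-case keys loosely paraphrase Article 5 and Annex III categories, and real classification requires legal analysis of the specific system.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 practices
    HIGH = "high"               # Annex III use cases
    LIMITED = "limited"         # Article 50 transparency only
    MINIMAL = "minimal"         # voluntary codes of conduct

# Illustrative paraphrases of Act categories -- not a legal taxonomy.
USE_CASE_TIERS: dict[str, RiskTier] = {
    "social_scoring": RiskTier.PROHIBITED,
    "employment_screening": RiskTier.HIGH,
    "education_admissions": RiskTier.HIGH,
    "essential_services_access": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use-case key.

    Unknown use cases default to HIGH so a missing entry triggers
    the strictest review rather than silently passing.
    """
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```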
High-Risk Article Requirements
If your AI system falls in Annex III, these articles define your obligations. VeriProof addresses the production monitoring and documentation subset:
- Article 9 — Risk Management — ongoing risk identification, estimation, and mitigation documentation requirements
- Article 10 — Data Governance — training and production data quality monitoring obligations
- Article 11 — Technical Documentation — documentation requirements before market placement and during the system’s lifecycle
- Article 13 — Transparency — obligations to make the system’s capabilities and limitations understandable to deployers
- Article 17 — Quality Management — quality management system requirements: change management, incident response, corrective action
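One way to keep this article-to-evidence mapping explicit in your own tooling is a small configuration table. The artifact names below are illustrative assumptions, not a documented VeriProof schema; substitute the identifiers your workspace actually exposes.

```python
# Hypothetical mapping from EU AI Act articles to the VeriProof
# outputs relied on as evidence for each. Names are illustrative.
ARTICLE_EVIDENCE: dict[str, list[str]] = {
    "Article 9":  ["governance_scores", "risk_threshold_alerts"],
    "Article 10": ["data_quality_monitors"],
    "Article 11": ["evidence_packages", "session_provenance"],
    "Article 13": ["capability_limitation_reports"],
    "Article 17": ["alert_history", "incident_evidence_packages"],
}

def artifacts_for(article: str) -> list[str]:
    """List the evidence artifacts to include for a given article."""
    return ARTICLE_EVIDENCE.get(article, [])
```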
GPAI Model Obligations
If you provide a GPAI model (Chapter V, Articles 51–56), VeriProof supports:
- Article 53(1)(a): Technical documentation — evidence packages with session-level provenance
- Article 53(1)(b): Transparency to downstream providers — exportable evidence packages that can be shared with organisations deploying your model
- Article 55(1)(c) (models with systemic risk): Incident reporting — session records provide the audit trail for serious incident documentation
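As a concrete illustration of the Article 53(1)(b) workflow, the sketch below requests an evidence package over HTTP and saves it for sharing with a downstream provider. The host, endpoint path, and request fields are assumptions for illustration; consult the VeriProof API reference for the actual interface.

```python
import requests

BASE_URL = "https://api.veriproof.example/v1"   # placeholder host
API_KEY = "YOUR_API_KEY"                        # use a secret store in practice

def export_evidence_package(model_id: str, start: str, end: str) -> bytes:
    """Request an evidence package for a GPAI model over a date range.

    Endpoint and payload shape are hypothetical -- check the API
    reference for the real contract.
    """
    resp = requests.post(
        f"{BASE_URL}/evidence-packages",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model_id": model_id,
            "period": {"start": start, "end": end},
            "include": ["technical_documentation", "session_provenance"],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content

# Example: package a quarter's evidence and write it to disk for sharing.
if __name__ == "__main__":
    package = export_evidence_package("my-finetuned-model",
                                      "2026-01-01", "2026-03-31")
    with open("evidence-package-q1.zip", "wb") as f:
        f.write(package)
```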
Implementation Sequence
For a new high-risk AI system deployment, the recommended VeriProof implementation sequence is:
1. Before launch: Establish your governance scoring criteria (thresholds for Article 9 risk metrics), configure alert rules for Article 17 incident detection, and verify that your evidence export template produces output suitable for your technical documentation package (a configuration sketch follows this list)
2. At launch: Enable session capture for all production traffic; link data subjects if the system processes identifiable individuals
3. Ongoing: Review governance scores and alert history weekly; generate evidence packages for your periodic conformity assessment cycle
4. On incident: Use time-machine replay to reconstruct the session(s) involved; export an incident evidence package for your Article 17 corrective action documentation (an incident-response sketch follows this list)
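The sketch below shows what steps 1 and 2 might look like scripted against an HTTP API. Every endpoint path, field name, and threshold here is a hypothetical placeholder: treat it as a shape to adapt, not the documented VeriProof interface.

```python
import requests

BASE_URL = "https://api.veriproof.example/v1"   # placeholder host
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Step 1 (before launch): governance scoring criteria for Article 9
# risk metrics. Threshold names and values are illustrative.
requests.put(
    f"{BASE_URL}/governance/scoring-criteria",
    headers=HEADERS,
    json={"min_governance_score": 0.8, "flag_below_confidence": 0.7},
    timeout=30,
).raise_for_status()

# Step 1 (before launch): an alert rule so Article 17 incident
# detection notifies the right people.
requests.post(
    f"{BASE_URL}/alerts/rules",
    headers=HEADERS,
    json={
        "metric": "governance_score",
        "condition": "below",
        "threshold": 0.5,
        "notify": ["ai-compliance@example.com"],
    },
    timeout=30,
).raise_for_status()

# Step 2 (at launch): capture every production session and link data
# subjects where the system processes identifiable individuals.
requests.patch(
    f"{BASE_URL}/projects/hiring-screener",
    headers=HEADERS,
    json={"session_capture": "all", "data_subject_linking": True},
    timeout=30,
).raise_for_status()
```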
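For step 4, an incident-response script might replay the affected sessions and then export an incident-scoped evidence package. Again, the endpoints and fields are assumptions for illustration.

```python
import requests

BASE_URL = "https://api.veriproof.example/v1"   # placeholder host
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def document_incident(incident_id: str, session_ids: list[str]) -> None:
    """Reconstruct affected sessions and export incident evidence.

    Hypothetical endpoints -- adapt to the actual VeriProof API.
    """
    # Time-machine replay: fetch each session as it executed, for
    # root-cause analysis of the incident.
    for sid in session_ids:
        replay = requests.get(
            f"{BASE_URL}/sessions/{sid}/replay", headers=HEADERS, timeout=60
        )
        replay.raise_for_status()
        print(f"session {sid}: "
              f"{len(replay.json().get('events', []))} events replayed")

    # Export an incident-scoped evidence package for the Article 17
    # corrective-action record.
    resp = requests.post(
        f"{BASE_URL}/evidence-packages",
        headers=HEADERS,
        json={"scope": "incident", "incident_id": incident_id,
              "session_ids": session_ids},
        timeout=120,
    )
    resp.raise_for_status()
    with open(f"incident-{incident_id}-evidence.zip", "wb") as f:
        f.write(resp.content)
```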
Evidence Package Walkthrough
See the Evidence Packaging Walkthrough for a step-by-step guide to generating an EU AI Act-compliant evidence package using the Customer Portal and API.
Next Steps
- Article 9 — Risk Management — the risk management system obligation
- NIST AI RMF — complementary US framework
- Compliance Evidence guide — practical evidence generation