
EU AI Act

The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive horizontal regulation for artificial intelligence. It establishes a risk-based framework: the higher the potential harm of an AI system, the more rigorous the compliance requirements.

This section provides detailed, implementation-focused guidance for using VeriProof to satisfy the articles that apply to high-risk AI system providers and deployers.

The EU AI Act entered into force on 1 August 2024. Obligations for high-risk AI systems listed in Annex III apply from 2 August 2026; GPAI model obligations have applied since 2 August 2025.


Does the Act Apply to You?

The Act applies to you if you are:

A Provider (Article 3(3)) — You develop or have an AI system developed and place it on the EU market or put it into service for your own use. Most enterprise AI teams building on top of foundation models are providers of the AI system they deploy, even if they didn’t train the underlying model.

A Deployer (Article 3(4)) — You use an AI system under your authority in a professional context that affects EU individuals.

A GPAI Model Provider (Article 3(63)) — You develop and place on the market a General-Purpose AI model (e.g., a fine-tuned foundation model distributed to others).


Risk Classification

The Act establishes four risk tiers:

Risk tier | Examples | Compliance obligations
Prohibited | Social scoring; real-time biometric surveillance in public spaces | Cannot be placed on the market
High-risk | Employment decisions; education/training; access to essential services; law enforcement; border control; critical infrastructure; biometric identification; judicial decisions | Full obligations (Articles 9–17, 26)
Limited-risk | Chatbots; deep fakes; emotion recognition | Transparency obligations only
Minimal-risk | AI spam filters; AI-enabled video games | Voluntary codes of conduct
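The tiered classification above can be triaged programmatically before any legal review. The sketch below is illustrative only: the use-case strings and the lookup table are assumptions made for this example, not an official taxonomy from the Act or from VeriProof.

```python
# Illustrative risk-tier triage. The category keys are assumptions for
# this sketch; an unrecognised use case falls through to manual review.
RISK_TIERS = {
    "social_scoring": "prohibited",
    "employment_decisions": "high-risk",
    "biometric_identification": "high-risk",
    "chatbot": "limited-risk",
    "spam_filter": "minimal-risk",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case, else flag for review."""
    return RISK_TIERS.get(use_case, "unclassified: requires legal review")
```

A lookup like this is only a first-pass filter; Annex III classification ultimately depends on context of use, so anything unmatched should default to human review rather than "minimal-risk".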

High-Risk Article Requirements

If your AI system falls within Annex III, these articles define your obligations. VeriProof addresses the production monitoring and documentation subset of those obligations.


GPAI Model Obligations

If you provide a GPAI model (Chapter V, Articles 51 et seq.), VeriProof supports:

  • Article 53(1)(a): Technical documentation — evidence packages with session-level provenance
  • Article 53(1)(b): Transparency to downstream providers — exportable evidence packages that can be shared with organisations deploying your model
  • Article 55(1)(c): Incident reporting — session records provide the audit trail for serious incident documentation
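To make the evidence-package idea concrete, the sketch below builds a minimal technical-documentation manifest with session-level provenance in the spirit of Article 53(1)(a). All field names (`model_id`, `sessions`, `provenance`, and so on) are assumptions for this example and are not VeriProof's actual export schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical manifest builder. The JSON shape shown here is an
# assumption for illustration, not the real VeriProof export format.
def build_evidence_manifest(model_id: str, session_ids: list) -> str:
    """Serialise a technical-documentation manifest for a GPAI model."""
    manifest = {
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "article": "53(1)(a)",
        "sessions": [
            {"session_id": sid, "provenance": "captured"}
            for sid in session_ids
        ],
    }
    return json.dumps(manifest, indent=2)
```

Because the output is plain JSON, the same package can be handed to downstream providers under Article 53(1)(b) or attached to a serious-incident report under Article 55(1)(c).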

Implementation Sequence

For a new high-risk AI system deployment, the recommended VeriProof implementation sequence is:

  1. Before launch: Establish your governance scoring criteria (thresholds for Article 9 risk metrics), configure alert rules for Article 17 incident detection, and verify your evidence export template produces output suitable for your technical documentation package

  2. At launch: Enable session capture for all production traffic; link data subjects if the system processes identifiable individuals

  3. Ongoing: Review governance scores and alert history weekly; generate evidence packages for your periodic conformity assessment cycle

  4. On incident: Use time-machine replay to reconstruct the session(s) involved; export an incident evidence package for your Article 17 corrective action documentation
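The pre-launch step above can be expressed as a configuration checklist. The sketch below is a hypothetical example: the keys, metric names, and thresholds are assumptions chosen for illustration, not VeriProof's real configuration format.

```python
# Hypothetical pre-launch configuration; all keys and threshold values
# below are assumptions for this sketch, not a documented schema.
LAUNCH_CONFIG = {
    "governance_scoring": {            # Article 9 risk metrics
        "thresholds": {"error_rate": 0.02, "override_rate": 0.10},
    },
    "alert_rules": {                   # Article 17 incident detection
        "notify_on": ["threshold_breach", "serious_incident"],
    },
    "session_capture": {               # launch-day capture settings
        "enabled": True,
        "link_data_subjects": True,
    },
}

def validate_config(cfg: dict) -> list:
    """Return missing required sections; an empty list means launch-ready."""
    required = ("governance_scoring", "alert_rules", "session_capture")
    return [key for key in required if key not in cfg]
```

Running a validation gate like this before step 2 ensures session capture and alerting are in place on day one, rather than being retrofitted after the first conformity assessment cycle.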


Evidence Package Walkthrough

See the Evidence Packaging Walkthrough for a step-by-step guide to generating an EU AI Act-compliant evidence package using the Customer Portal and API.


