EU AI Act

Elements, expectations, and a practical checklist.

This is a high-level summary for decision makers. It is not legal advice; validate specific obligations with counsel and compliance teams for jurisdiction-specific interpretation.

Core elements

Risk-based classification

Systems are categorized into four risk tiers: unacceptable, high, limited, and minimal. Higher risk triggers stricter governance, evidence, and controls.
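The tiered model above can be sketched in code. The tier names below come from the Act's risk-based approach, but the mapping from tier to controls is a hypothetical illustration, not a legal checklist:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers in the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

def required_controls(tier: RiskTier) -> list[str]:
    """Illustrative tier-to-controls mapping; confirm actual obligations with counsel."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Prohibited practice: do not deploy")
    controls = {
        RiskTier.HIGH: ["risk management file", "data governance",
                        "human oversight", "logging and traceability"],
        RiskTier.LIMITED: ["transparency notice"],
        RiskTier.MINIMAL: [],
    }
    return controls[tier]
```

The point of encoding this is that classification becomes a recorded, reviewable decision rather than an ad-hoc judgment.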

Prohibited practices

Certain AI uses are disallowed outright, such as manipulative or exploitative techniques and social scoring.

High-risk obligations

High-risk systems require rigorous risk management, data governance, documentation, and human oversight.

Transparency obligations

Users must be informed when they are interacting with an AI system or viewing AI-generated content.

Quality, robustness, cybersecurity

Controls must ensure reliability, accuracy, resilience, and protection against misuse or attack.

Post-market monitoring

Ongoing monitoring, incident reporting, and corrective actions are expected after deployment.

Implementation checklist

  • Classify the AI system and record the rationale.
  • Maintain a risk management file and mitigation plan.
  • Define data governance: data sources, quality checks, bias controls.
  • Document system purpose, capabilities, limitations, and intended users.
  • Establish human oversight roles and escalation paths.
  • Ensure logging and traceability of AI outputs and decisions.
  • Define transparency notices for users and stakeholders.
  • Implement robustness, accuracy, and cybersecurity testing.
  • Set post-market monitoring and incident response procedures.
  • Retain evidence for audits and regulatory requests.
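Several checklist items (logging, traceability of outputs, evidence retention) can be sketched as a minimal audit-record structure. The field names and the `record_output` helper are illustrative assumptions, not terms defined in the Act:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIOutputRecord:
    """One traceable AI output: who ran what, on which input, with what result."""
    system_id: str      # internal identifier of the AI system
    model_version: str  # versioned model/prompt that produced the output
    input_hash: str     # hash of the input, so the exact input can be verified later
    output_summary: str # the output (or a reference to it) for the evidence file
    reviewer: str       # human overseer accountable for the decision
    timestamp: str      # UTC time of the interaction

def record_output(system_id: str, model_version: str, input_text: str,
                  output_summary: str, reviewer: str) -> AIOutputRecord:
    """Build an audit record; in practice, append its JSON to write-once storage."""
    return AIOutputRecord(
        system_id=system_id,
        model_version=model_version,
        input_hash=hashlib.sha256(input_text.encode()).hexdigest(),
        output_summary=output_summary,
        reviewer=reviewer,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

rec = record_output("tax-summarizer-01", "v2.3",
                    "client engagement text", "draft summary", "j.doe")
print(json.dumps(asdict(rec), indent=2))
```

Hashing the input rather than storing it keeps the evidence file small while still letting an auditor verify which exact input produced a given output.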

Executive explanations

What this means for Big Four delivery

The AI Act formalizes expectations already common in regulated work: classification, documentation, and accountability. If you cannot evidence controls, you should not deploy the use case.

Why it matters in client work

Advisory, audit, and tax engagements often touch regulated data and client decision-making. Treat AI outputs as regulated artifacts with full traceability.

How to align fast

Use the same control architecture you apply to financial reporting and audit evidence: controlled inputs, versioned outputs, and defensible review logs.