EU AI Act: complete guide 2026
Regulation EU 2024/1689 — the AI Act — entered into force on 1 August 2024 and will change how European organisations develop and use artificial intelligence. This guide covers everything you need to know: what it is, who it affects, what it requires and when.
- What the AI Act is and why it was created
- Who it applies to: providers, deployers and the line between them
- The four risk levels with concrete examples
- Timeline: the deadlines you cannot miss
- Concrete obligations for organisations
- Penalties
- How to get started: practical first steps
- Sector-specific deep dives
1. What the AI Act is and why it was created
Regulation EU 2024/1689, officially known as the EU AI Act, is the world's first comprehensive legal framework governing artificial intelligence systems. Adopted by the European Parliament on 13 March 2024 and published in the Official Journal of the EU on 12 July 2024, it entered into force on 1 August 2024.
The European Commission developed this regulation from a straightforward premise: artificial intelligence produces concrete effects on people's lives — it can deny a loan, influence a medical diagnosis, determine whether a candidate is called to interview. Without rules, these effects occur without transparency, without accountability and without any right of recourse.
The AI Act does not ban AI. It does not restrain innovation as a matter of principle. Instead, it introduces a risk-based approach: the greater the potential harm a system can cause, the stricter the rules governing it. A spam filter and a credit scoring system cannot be treated the same way.
The AI Act does not apply only to those who build algorithms. It applies to anyone who uses AI systems professionally in the European market — including organisations that purchase third-party software and use it in their operations. If you use a CRM with scoring, an ATS with AI, diagnostic software or an algorithmic pricing tool, you are subject to the regulation.
The founding principle: proportionality to risk
The philosophy underlying the AI Act is straightforward: obligations must be proportional to potential harm. An AI system that recommends music playlists does not require the same safeguards as one that assesses a person's creditworthiness or supports a medical diagnosis.
This approach has important practical consequences for organisations: before knowing what you must do, you need to know which AI systems you use and which risk category they fall into. The inventory is the mandatory first step.
2. Who it applies to: providers, deployers and the line between them
The AI Act distinguishes between different roles with very different obligations. Understanding which role your organisation occupies is the second step — after the inventory — in any compliance journey.
| Role | Definition | Examples | Obligations |
|---|---|---|---|
| Provider | Develops and/or places an AI system on the market | A startup selling a credit scoring engine; a software house integrating AI into its own products | 🔴 Very extensive |
| Deployer | Uses a third-party AI system in its own professional activity | A bank using purchased scoring software; an HR team using an AI-powered ATS | 🟠 Extensive |
| Distributor | Distributes AI systems without modifying them | Resellers, software marketplaces | 🟡 Limited |
| Importer | Places on the EU market AI systems developed outside the EU | A European company reselling US-developed AI software | 🟠 Extensive |
When a deployer becomes a provider
This is the most common trap. Many organisations consider themselves simple software users, but are reclassified as providers — with all the heavier obligations — in three situations:
- Substantial modification: if you fine-tune a third-party model, modify its weights, add significant decision-making logic, or use the system for purposes other than those intended by the original supplier.
- White labelling: if you put your own brand on a third-party AI system and commercialise it as your own.
- Internal development: if you build an AI system in-house for your own use (even non-commercial), you are the provider of that system.
A common example: an organisation purchases an LLM from OpenAI or Anthropic and integrates it into a product it sells to its own customers. It is not merely a deployer of the LLM: it is the provider of the overall AI system it has built on top of it. Provider obligations apply to the final system, not only to the underlying model.
Does the AI Act apply outside the EU?
Yes, with extraterritorial effect. The AI Act applies to anyone who:
- Places an AI system on the European market, regardless of where they are based
- Puts an AI system into service in the EU
- Produces, via an AI system, output that is used in the EU, even if the provider or deployer is established outside Europe
In practice: using AWS, Azure or Google Cloud as infrastructure, or purchasing models from non-EU providers, does not exempt an organisation from compliance. It is the point of use — and the impact on people — that determines applicability.
3. The four risk levels with concrete examples
The heart of the AI Act is its four-level classification. Every AI system falls into one category, and obligations flow from that category. Classification is not automatic: it requires a case-by-case assessment, although Annex III of the Regulation provides explicit lists for high-risk systems.
Unacceptable risk: banned practices
Systems that violate fundamental rights or manipulate people without their awareness. Banned since 2 February 2025.
High risk
Systems with a significant impact on health, safety or fundamental rights. Annex III plus AI embedded in regulated products.
Limited risk: transparency obligations
Systems that interact with people or generate content. The core obligation is disclosure: users must be told they are interacting with AI, and AI-generated content must be labelled as such.
Minimal risk: no specific obligations
The vast majority of AI systems fall here. No additional mandatory obligations beyond the laws already in force.
Who determines the risk category?
For high-risk systems, Annex III of the Regulation provides an explicit sector-by-sector list. For all others, it is the provider or deployer who must assess and document the classification. There is no authority that pre-certifies a category: responsibility lies with the organisation and will be verified ex post by the national market surveillance authorities designated in each member state (in many countries, existing sectoral regulators for finance, data protection or medical devices are expected to take on this role).
- Finance: credit scoring, creditworthiness assessment, insurance risk scoring.
- Healthcare: AI diagnostics, clinical decision support, automated triage.
- HR: candidate selection, performance evaluation, automated dismissals.
- Public administration: access to essential services, recidivism risk assessment, immigration controls.
- Education: automated admissions, student assessment.
- Critical infrastructure: energy, water, transport, telecoms.
GPAI models: a special category
General Purpose AI (GPAI) models — such as GPT-4, Claude and Gemini — are subject to separate rules. From 2 August 2025, their providers must comply with transparency obligations, technical documentation requirements and copyright rules. Models with exceptional systemic capabilities (above 10²⁵ training FLOPs) carry additional obligations: adversarial evaluation, systemic risk management and incident reporting.
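For orientation on the 10²⁵ FLOP threshold: training compute is often approximated with the "6ND" rule of thumb (roughly 6 × parameters × training tokens). The heuristic and the example figures below are illustrative, not part of the Regulation, but they show the order of magnitude involved:

```python
# Rough check against the AI Act's 10^25 FLOP systemic-risk threshold
# (Art. 51(2)). Uses the common "6ND" approximation of training compute;
# the heuristic and the example model sizes are illustrative, not from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute via the 6ND rule of thumb."""
    return 6 * n_params * n_tokens

for name, params, tokens in [
    ("70B-param model, 15T tokens", 70e9, 15e12),    # ~6.3e24 FLOPs
    ("400B-param model, 15T tokens", 400e9, 15e12),  # ~3.6e25 FLOPs
]:
    flops = estimated_training_flops(params, tokens)
    status = "above" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} threshold)")
```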
4. Timeline: the deadlines you cannot miss
The AI Act does not apply all at once. Its provisions take effect in stages, giving organisations time to adapt, but some deadlines have already passed.
1 August 2024
✅ Regulation enters into force
Reg. EU 2024/1689 officially enters into force. The countdown begins for all subsequent deadlines. EU institutions begin governance work: AI Office, codes of conduct, technical standards.
2 February 2025
✅ Prohibited practices and AI Literacy — already in force
Definitive ban on prohibited practices (manipulation, social scoring, real-time biometrics). AI Literacy obligation for all staff using AI systems — already in force and enforceable. If you have not yet implemented a training programme, you are already non-compliant.
2 August 2025
✅ GPAI obligations — already in force
Transparency, technical documentation and copyright compliance obligations for general purpose model providers. EU governance regime operational. Sanctions active for GPAI providers. For deployers using these models: verify that your suppliers are compliant.
2 August 2026
⏳ High-risk systems — the critical deadline
Full application for finance, healthcare, HR, public administration, education and critical infrastructure. Risk management, technical documentation, human oversight, audit trails, EU database registration: all mandatory. This is the deadline that affects the majority of European organisations.
2 August 2027
⏳ Legacy systems and regulated products
GPAI models already on the market before August 2025 and high-risk AI embedded in regulated products (medical devices, automotive, industrial machinery). Full compliance required by this date.
5. Concrete obligations for organisations
Obligations vary significantly depending on your role (provider or deployer) and the risk level of the system. Here is a structured overview.
Obligations for everyone — already in force
AI Literacy (Art. 4): every organisation must ensure that staff who use AI systems have an adequate level of understanding of those systems' capabilities, limitations and risks. This is not about training everyone as a data scientist: it is role-calibrated operational awareness. The procurement manager who uses a scoring tool must understand what that tool does and where its limits lie. The CEO must understand the strategic and regulatory implications.
Obligations for deployers of high-risk systems (August 2026)
Deployer obligations checklist
- Compliant use: use the AI system strictly as intended by the provider, documenting any deviation.
- Effective human oversight: having a human in the process is not sufficient; that person must be able to understand, monitor and intervene on system outputs in an informed manner.
- Input data quality: data supplied to the system must be relevant, accurate and non-discriminatory.
- Log retention: automatically generated logs must be retained for at least 6 months (longer in certain regulated sectors); see the sketch after this list.
- Disclosure to affected individuals: persons subject to AI-assisted decisions must be informed.
- FRIA (Fundamental Rights Impact Assessment): mandatory for public bodies, financial institutions and insurers; an assessment of fundamental rights impact prior to use.
- Serious incident reporting: notification to competent authorities of any serious incident involving the AI system.
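To make the log-retention point concrete, here is a minimal sketch of a deployer-side retention check. The record fields and the 183-day conversion of "six months" are illustrative assumptions, not prescribed by the Regulation:

```python
# Minimal sketch of a deployer-side log retention check. The AI Act requires
# deployers of high-risk systems to keep automatically generated logs for at
# least six months (Art. 26(6)); the field names and the 183-day conversion
# are illustrative choices, not mandated.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # ~6 months; sector rules may require more

@dataclass
class DecisionLog:
    system_id: str        # which AI system produced the output
    timestamp: datetime   # when the output was generated (timezone-aware)
    input_ref: str        # pointer to the input data used
    output_ref: str       # pointer to the output / decision record
    reviewer: str | None  # human overseer, if anyone intervened

def may_delete(log: DecisionLog, now: datetime | None = None) -> bool:
    """A log may only be purged once the minimum retention period has passed."""
    now = now or datetime.now(timezone.utc)
    return now - log.timestamp >= MIN_RETENTION
```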
Obligations for providers of high-risk systems (August 2026)
| Obligation | What it means in practice | Legal reference |
|---|---|---|
| Risk management system | A continuous process — not a static document — for identifying, estimating and mitigating risks throughout the system's lifecycle | Art. 9 |
| Technical documentation | Architecture, training data, performance metrics, known limitations, intended and unintended use cases. Structure defined by Annex IV. | Art. 11, Ann. IV |
| Data governance | Documented, representative datasets; bias assessment; verified quality. No unauthorised sensitive data. | Art. 10 |
| Transparency and instructions for use | Clear documentation for deployers on how to use the system, its limitations, required inputs, and intended human oversight measures. | Art. 13 |
| Human oversight by design | The system must be designed to enable effective human oversight — not bolted on as an afterthought. | Art. 14 |
| Accuracy, robustness, cybersecurity | Documented and verifiable performance. Resistance to adversarial attacks. Stable behaviour over time. | Art. 15 |
| Conformity assessment | Formal conformity assessment before launch (self-assessment in most cases; third-party for biometrics and critical infrastructure). | Art. 43 |
| EU database registration | Mandatory registration in the EU database managed by the Commission before commercialisation. | Art. 49 |
| Post-market monitoring | An active plan for collecting real-world performance data, with defined intervention thresholds. | Art. 72 |
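To make the last row more concrete: a post-market monitoring plan pairs live performance metrics with pre-agreed intervention points. A minimal sketch, assuming a single accuracy metric and an illustrative drift tolerance (neither value comes from the Regulation):

```python
# Illustrative post-market monitoring check. Art. 72 requires an active plan
# with defined intervention points; the metric and tolerance here are
# assumptions chosen for the example, not values from the Act.

DOCUMENTED_ACCURACY = 0.91   # baseline declared in the technical documentation
DRIFT_TOLERANCE = 0.03       # illustrative intervention threshold

def check_live_performance(live_accuracy: float) -> str:
    """Compare real-world accuracy against the documented baseline."""
    if DOCUMENTED_ACCURACY - live_accuracy > DRIFT_TOLERANCE:
        return "intervene: investigate, retrain or suspend; assess reporting duties"
    return "ok: continue monitoring"

print(check_live_performance(0.86))  # -> intervene
```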
6. Penalties
The AI Act provides for a proportionate penalty regime modelled on the GDPR. Penalties are imposed by national supervisory authorities on the responsible operator, whether provider, deployer, importer or distributor.
| Type of infringement | Fixed cap | % of global annual turnover | Who is at risk |
|---|---|---|---|
| Prohibited practices (Art. 5) | €35,000,000 | 7% | Providers, deployers |
| High-risk system obligations | €15,000,000 | 3% | Providers, deployers |
| GPAI obligations | €15,000,000 | 3% | GPAI providers |
| False or incomplete information | €7,500,000 | 1% | All parties |
How it works: the applicable cap is the higher of the fixed amount and the turnover percentage, except for SMEs, where the lower of the two applies. An SME with €2M turnover therefore faces up to €60,000 for non-compliance with high-risk system obligations: not ruinous, but sufficient to make compliance a rational investment relative to the risk.
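That arithmetic can be made explicit. A minimal sketch of the cap logic, using the Art. 99 figures from the table above (the function itself is illustrative; actual fines are set case by case by the authorities):

```python
# Sketch of the AI Act penalty cap logic (Art. 99), using the figures from
# the table above. Illustrative only: real fines are determined case by case.

def max_fine(fixed_cap: float, pct: float, turnover: float, is_sme: bool) -> float:
    """Return the applicable upper bound on a fine."""
    pct_cap = pct * turnover
    return min(fixed_cap, pct_cap) if is_sme else max(fixed_cap, pct_cap)

# SME with EUR 2M turnover, high-risk infringement (EUR 15M / 3%):
print(max_fine(15_000_000, 0.03, 2_000_000, is_sme=True))       # 60000.0
# Large group with EUR 1B turnover, prohibited practice (EUR 35M / 7%):
print(max_fine(35_000_000, 0.07, 1_000_000_000, is_sme=False))  # 70000000.0
```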
Authorities may also order the suspension or withdrawal of a non-compliant AI system from the market. For an organisation whose primary product is a high-risk AI system, this could mean an interruption of business — far beyond the direct financial cost of the penalty itself.
7. How to get started: practical first steps
An AI Act compliance journey falls naturally into phases. You do not need to do everything at once, but you need to start now, because the time available is shorter than it appears.
Weeks 1–2: AI Inventory
Map all software your organisation uses that contains AI components or probabilistic logic. Not only systems that are obviously "AI" — also CRMs with scoring, dynamic pricing tools, HR software with automated ranking. Ask operational teams: they often use AI-enabled tools without being explicitly aware of it.
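The Regulation does not prescribe an inventory format; a structured record per system is enough to start and feeds directly into the classification step. A minimal sketch in Python, with illustrative field names:

```python
# Minimal AI inventory record. The AI Act does not prescribe a format;
# these fields are one illustrative way to capture what later steps need.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                  # e.g. "CRM lead scoring"
    vendor: str                # supplier, or "in-house"
    business_owner: str        # who answers for this tool internally
    purpose: str               # what decisions/outputs it produces
    affects_individuals: bool  # does it influence decisions about people?
    role: str = "deployer"     # "provider" or "deployer" (see section 2)
    risk_level: str = "tbd"    # filled in during classification (weeks 3-4)
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("ATS candidate ranking", "VendorX", "Head of HR",
                   "ranks applicants for interview", affects_individuals=True),
    AISystemRecord("Dynamic pricing engine", "in-house", "Head of Sales",
                   "sets prices per customer segment", affects_individuals=True,
                   role="provider"),
]
```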
Weeks 3–4: Risk classification
For each system in the inventory, determine the risk level. Check whether it falls within Annex III (explicit high risk). Determine your role for each system (provider or deployer). Document the classification with supporting rationale.
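A first pass at this classification can be expressed as a simple triage routine over the categories from section 3. The sketch below abbreviates the Annex III list and is deliberately conservative: treat it as a triage aid, not a legal determination.

```python
# First-pass risk triage against the AI Act's four levels. The domain lists
# are abbreviated and illustrative; borderline cases need a documented
# case-by-case assessment.

ANNEX_III_DOMAINS = {
    "credit_scoring", "insurance_risk_pricing", "recruitment",
    "performance_evaluation", "education_admissions", "essential_services",
    "critical_infrastructure", "medical_triage",
}

PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}

def triage(domain: str, interacts_with_people: bool) -> str:
    if domain in PROHIBITED_PRACTICES:
        return "prohibited (banned since 2 February 2025)"
    if domain in ANNEX_III_DOMAINS:
        return "high risk (Annex III): full obligations apply from 2 August 2026"
    if interacts_with_people:
        return "limited risk: transparency/disclosure obligations"
    return "minimal risk: no additional obligations"

print(triage("recruitment", interacts_with_people=True))  # high risk
```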
Month 2: Gap analysis
For each high-risk system, identify what is missing relative to applicable obligations. Create a prioritised list of gaps to address, with owners and internal deadlines.
Months 3–6: AI Literacy and governance
Launch the AI Literacy training programme (already mandatory). Define internal governance: who is responsible for AI compliance, who approves new tools, how incidents are managed.
Months 6–12: Implementation
Implement the missing measures: human oversight, log retention, technical documentation, vendor assessment. For providers: establish the risk management system and structured technical documentation.
Not sure where to begin?
We offer a free 30-minute initial assessment. In a single call, you will understand your position and the concrete first steps to take.
Book the free assessment

8. Sector-specific deep dives
This guide is the starting point. For each high-risk sector we have developed specific resources with detailed obligations, concrete examples, downloadable checklists and document templates.
AI Act for FinTech
Credit scoring, creditworthiness assessment, insurance risk scoring. Provider and deployer obligations, checklists and document templates.
Read the guide →

AI Act for Healthcare
AI diagnostics, clinical decision support, medical devices. Coming soon.
AI Act for HR and Recruitment
AI-powered ATS, automated selection, performance evaluation. Coming soon.
This guide is provided for informational purposes only and does not constitute legal advice. Source: Reg. EU 2024/1689. For organisation-specific decisions, please consult a qualified professional. Updated February 2026. © 2026 euaiact.pro — Gianluca Capuzzi.