Case studies

Real projects.

How we have helped real organisations understand their exposure to the EU AI Act and take action ahead of the deadline.

2 projects completed · 2 sectors covered · Critical deadline: Aug 2026
Case 01 · ✓ Real case (anonymised) · HR / Recruiting · High risk · Provider

Startup Accelerator: a chatbot that became illegal with one extra feature

An international startup accelerator, with European offices in Italy, Germany and the Netherlands, was using an AI chatbot to answer founders' questions during the application process. In practice the system sat in the AI Act's limited-risk category, though it had never been formally classified. Then came a request to add automated candidate scoring. That single feature would have shifted the entire system into the high-risk category, triggering a completely different set of technical and legal obligations.

Initial situation
  • RAG chatbot for application FAQs — no formal AI Act classification
  • No disclosure to users that they were interacting with an AI
  • Internal request to add automated founder scoring
  • No awareness of European legal implications
Outcome
  • Formal classification of the existing system: limited risk
  • Disclosure implemented: users informed they are interacting with AI
  • Scoring feature designed with AI Act-compliant architecture
  • Logging, human oversight and technical documentation integrated by design
What we did
01 Classification of the existing system — analysis of the RAG chatbot and formal classification as a limited-risk system under the AI Act.
02 Impact analysis of the scoring feature — technical and legal demonstration of how adding scoring would shift the system into Annex III, point 4 (recruitment and selection of natural persons).
03 Compliant architecture design — AI decision logging, human oversight interface for programme directors, and technical documentation of the model (a minimal logging sketch follows this list).
04 Disclosure and transparency implementation — notifying users that they are interacting with an AI, as required by the AI Act's transparency rules (Article 50), which apply from August 2026.
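
To make point 03 concrete, here is a minimal sketch of per-decision logging with a human-oversight flag. Everything in it (the class, the field names, the JSON Lines file) is our own illustration: the AI Act requires event logging and human oversight for high-risk systems, but it prescribes no schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List, Optional
import json

@dataclass
class ScoringDecision:
    """One record per automated scoring decision (illustrative schema)."""
    application_id: str
    model_version: str
    score: float
    top_features: List[str]          # inputs that most influenced the score
    reviewed_by_human: bool = False  # flipped when a programme director signs off
    reviewer_id: Optional[str] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_decision(entry: ScoringDecision, path: str = "decision_log.jsonl") -> None:
    """Append each decision to a JSON Lines file so the trail can be audited later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
```

The point is structural: every automated score carries its model version and a human-review flag, so oversight is observable rather than assumed.
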
Case 02 · ○ Illustrative scenario · Finance / FinTech · High risk · Deployer

FinTech credit scoring: being a deployer does not mean no obligations

A representative scenario based on a European FinTech SME using a third-party credit scoring system. The most common misconception in this context is believing that AI Act obligations rest solely with the software vendor. In reality, the deployer has a distinct set of obligations of its own (human oversight, log retention, a FRIA, client disclosure), entirely independent of the provider's.

Typical starting point
  • Credit scoring software purchased from an external vendor
  • No AI inventory — AI systems in use not mapped
  • Assumption that "the vendor handles compliance"
  • No documented human oversight process
  • Clients not informed of AI use in credit decisions
After the engagement
  • Complete AI inventory with risk classification for each system
  • Clear understanding of deployer vs provider obligations
  • Documented and active human oversight procedure
  • FRIA completed for creditworthiness assessment systems
  • Client disclosures updated to reference AI use
Typical engagement
01 AI Inventory — mapping all AI systems in use, including those not immediately apparent (scoring embedded in ERP, risk models in CRM); a minimal inventory sketch follows this list.
02 Deployer gap analysis — comparison between the obligations applicable to a high-risk sector deployer and the organisation's current position.
03 Fundamental Rights Impact Assessment (FRIA) — mandatory for financial institutions deploying AI to evaluate the creditworthiness of natural persons or establish their credit score.
04 Action plan with deadlines — prioritisation of interventions by urgency, distinguishing obligations already in force from the August 2026 deadline.
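
As a taste of step 01, an inventory entry might look like the sketch below. The structure, the enums and the example entries are our own illustration (the AI Act mandates no inventory format), but you cannot classify, or assign obligations to, systems you have not mapped.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Risk(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

@dataclass
class AISystemEntry:
    name: str
    vendor: str                        # "internal" for in-house systems
    purpose: str
    role: Role                         # your role under the AI Act, per system
    risk: Risk
    embedded_in: Optional[str] = None  # host application, if not standalone

inventory = [
    AISystemEntry("Credit scoring engine", "external vendor",
                  "Evaluate creditworthiness of loan applicants",
                  Role.DEPLOYER, Risk.HIGH),  # Annex III, point 5(b)
    AISystemEntry("Customer risk model", "external vendor",
                  "Risk flags on existing clients",
                  Role.DEPLOYER, Risk.HIGH,
                  embedded_in="CRM suite"),   # the "not immediately apparent" case
]
```

Note that role and risk are recorded per system: the same organisation can be a deployer for one system and a provider for another.
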
Your project could be here
We work with SMEs across all high-risk sectors. Book a free call: in 30 minutes, together we will work out what you need.
Book a free call →

Working on something similar?

Tell us about your situation. The first call is free.

Book a free call →