I Built an AI Chatbot for Founder Institute. It Takes Just One Extra Feature to Make It Illegal in Europe.
A real case study: a RAG-based chatbot built for Founder Institute Philadelphia — and a technical analysis of exactly what changes, legally and architecturally, when you add a scoring feature and deploy in Europe after August 2026.
The original system
Last year I built an AI chatbot for Founder Institute Philadelphia — one of the world's largest pre-seed startup accelerators, with chapters across the US and Europe. The system was a standard RAG pipeline: it answered questions from founders applying to the program, drawing from a knowledge base of FI documentation. No scoring, no evaluation, no persistent data beyond the session.
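The Version 1 pipeline can be sketched in a few lines. This is an illustration, not the production code: retrieval here is naive keyword overlap instead of an embedding index, the LLM call is stubbed, and names like `answer_question` and the sample knowledge-base entries are invented for the example.

```python
# Minimal sketch of the Version 1 pipeline: retrieve relevant passages,
# then answer from them. Retrieval is keyword overlap and generation is
# stubbed; the real system uses an embedding index and an LLM API.

KNOWLEDGE_BASE = [
    "The program runs for 14 weeks and is led by local mentors.",
    "Applicants complete an online application and an admissions test.",
    "No session data is stored after the conversation ends.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def answer_question(question: str) -> str:
    """Build a grounded prompt; the LLM call is stubbed for illustration."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return prompt  # in production: return llm.complete(prompt)
```

Nothing in this flow evaluates a person, which is exactly why the classification stays at limited risk.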
Since FI Philadelphia operates under US law, EU regulations were not a concern. But FI has chapters in Milan, Berlin, Amsterdam, and dozens of other European cities. If a European chapter deployed this system, what would change?
I decided to find out. I rebuilt the system, added a scoring feature, and mapped every obligation that would apply under Regulation EU 2024/1689 — the EU AI Act. This article is the result. The code is on GitHub. The obligations are real.
The code available on GitHub uses a fictional accelerator called Slearnt. The Founder Institute reference in this article is illustrative — FI does not use this system.
Version 1: limited risk, one obligation
The original system does one thing: it answers questions. A founder visits the application page, starts a conversation, and the chatbot responds about the program — structure, requirements, what FI looks for. No output influences any decision about any person.
Under the EU AI Act, this is limited risk. A single obligation applies: transparency. Users must know they are interacting with an AI system.
Classification: Limited risk. Obligations: one — transparency disclosure.
Version 2: the feature that changes everything
Now consider one additional feature: at the end of the conversation, the system produces a preliminary score — a numerical assessment of the founder's profile based on their responses. The score is not shown to the founder. It is forwarded to the FI team's admin panel as a first-pass evaluation to help prioritise which applications to review first.
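For concreteness, here is what such a feature might look like. The rubric, field names, and signal list below are invented for this sketch; the point is only that the output is a numerical assessment of a person, forwarded to reviewers rather than shown to the applicant.

```python
# Illustrative sketch of the Version 2 addition: a preliminary score
# computed from the conversation, destined for the admin review queue.
from dataclasses import dataclass

@dataclass
class PreliminaryScore:
    applicant_id: str
    score: float     # 0.0 - 1.0, higher = review sooner
    rationale: str   # visible to the FI team, never to the applicant

def score_conversation(applicant_id: str, answers: dict[str, str]) -> PreliminaryScore:
    """Toy rubric: reward concrete answers about traction and commitment."""
    signals = ["revenue", "customers", "full-time", "prototype"]
    text = " ".join(answers.values()).lower()
    hits = sum(1 for s in signals if s in text)
    return PreliminaryScore(
        applicant_id=applicant_id,
        score=hits / len(signals),
        rationale=f"{hits}/{len(signals)} priority signals present",
    )
```

Note how little code this is. The legal classification does not depend on the sophistication of the model; a four-keyword heuristic that prioritises applications triggers the same Annex III analysis as a fine-tuned LLM.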
This single addition moves the system from limited risk to high risk under Annex III of Regulation EU 2024/1689.
The EU AI Act classifies as high-risk any AI system used to evaluate persons in the context of access to education, vocational training, or equivalent programmes. An accelerator application is exactly this context. The moment the system produces a score that influences how a human prioritises decisions about people, it crosses the threshold — regardless of whether a human makes the final call.
Who has what obligations
The AI Act distinguishes two roles with very different obligations. In this scenario:
- Provider: the company that built and supplies the system — responsible for technical architecture, scoring model, documentation, and conformity assessment.
- Deployer: Founder Institute Milan — responsible for how the system is used, human oversight, staff training, and informing applicants.
A vendor's assurance that "we handle compliance" is not sufficient. The deployer has independent obligations that cannot be delegated: human oversight, log retention, staff training, and informing applicants are always the deployer's responsibility.
| Obligation | Who | What it means in practice | Art. |
|---|---|---|---|
| Risk management system | Provider | Continuous process to identify and mitigate risks throughout the system lifecycle | 9 |
| Technical documentation | Provider | Architecture, training data, performance metrics, known limitations, intended use cases (Annex IV structure) | 11 |
| Automatic log generation | Provider | System must generate logs automatically; deployer must be able to access and download them | 12 |
| Instructions for use | Provider | Clear documentation for deployers on how to use the system, its limitations, and required human oversight measures | 13 |
| Human oversight by design | Provider | System must be designed so humans can understand, monitor, and override outputs — not added as a patch | 14 |
| Conformity assessment | Provider | Formal compliance assessment before deployment. Self-assessment applies in most cases. | 43 |
| EU database registration | Provider | Mandatory registration before commercialisation | 49 |
| AI Literacy | Deployer | All staff using the system must understand its capabilities, limitations and risks. Already mandatory since February 2025. | 4 |
| Compliant use | Deployer | Use the system exactly as intended by the provider. No unauthorised modifications. | 26 |
| Effective human oversight | Deployer | Team must review and be able to override any score before a final decision. Override must be logged. | 26 |
| Log retention | Deployer | Retain automatically generated logs for at least 6 months | 26 |
| Inform affected persons | Deployer | Applicants must be told that an AI system produces a preliminary evaluation of their application | 26 |
| Serious incident reporting | Deployer | Internal process to detect and notify competent authorities of serious incidents | 26 |
| FRIA | Deployer | Fundamental Rights Impact Assessment required before deployment for systems affecting access to opportunities | 27 |
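Two of the obligations above translate almost directly into code: automatic log generation (Art. 9, row "Automatic log generation") and the deployer's six-month retention floor. A minimal sketch, assuming an append-only JSON-lines log; the format and field names are this article's assumptions, not a requirement of the Act:

```python
# Sketch of automatic logging with a retention check. Every system
# decision appends one timestamped record; the deployer can later
# verify which records are still inside the retention window.
import json
import datetime as dt

def log_event(logfile, event: str, payload: dict) -> None:
    """Append one timestamped record per system decision."""
    record = {
        "ts": dt.datetime.now(dt.timezone.utc).isoformat(),
        "event": event,
        **payload,
    }
    logfile.write(json.dumps(record) + "\n")

def within_retention(record: dict, now: dt.datetime, days: int = 183) -> bool:
    """True if the record is still inside the (at least) six-month window."""
    ts = dt.datetime.fromisoformat(record["ts"])
    return (now - ts).days < days
```

The design point is that logging is generated by the system itself, not reconstructed by staff after the fact: that is what "automatic" means in Art. 12, and it is what makes the logs usable as evidence.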
Two things to implement immediately
1. The disclosure text
The disclosure text sits below the chat input field, where it is visible before the founder types anything.
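For illustration, a notice along these lines would cover both the transparency obligation and the existence of the scoring step. The exact wording below is this article's suggestion, not FI's actual copy:

```python
# Hypothetical wording for the transparency notice; the phrasing is an
# assumption and should be reviewed by the deployer's legal counsel.
DISCLOSURE_TEXT = (
    "You are chatting with an AI assistant. Your answers are used to "
    "produce a preliminary, AI-generated evaluation of your application, "
    "which is reviewed by a human before any decision is made."
)
```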
The privacy policy must also be updated to explicitly mention the use of AI and the existence of the scoring system.
2. The human override log
Every decision made by the FI team must be recorded in the admin panel.
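A sketch of what one such record could contain, whether the reviewer confirms the AI score or overrides it. The schema and field names are invented for this example; the one deliberate design choice is that an override without a stated reason is rejected, so the log always captures *why* a human disagreed:

```python
# Illustrative schema for the human-override log: one immutable record
# per reviewed score. Field names are assumptions for this sketch.
from dataclasses import dataclass
import datetime as dt

@dataclass(frozen=True)
class OverrideRecord:
    applicant_id: str
    ai_score: float
    reviewer: str
    final_decision: str   # "confirmed" | "overridden"
    reason: str           # mandatory when overridden
    reviewed_at: str

def record_review(applicant_id, ai_score, reviewer, overridden, reason=""):
    if overridden and not reason:
        raise ValueError("an override must state a reason")
    return OverrideRecord(
        applicant_id=applicant_id,
        ai_score=ai_score,
        reviewer=reviewer,
        final_decision="overridden" if overridden else "confirmed",
        reason=reason,
        reviewed_at=dt.datetime.now(dt.timezone.utc).isoformat(),
    )
```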
This log is the documentary proof that human oversight is real and not merely formal. In the event of an inspection by a supervisory authority, it is the first thing they will ask to see. "We always check manually" is not sufficient — it must be recorded.
What this means beyond accelerators
Founder Institute has chapters across Europe — Italy, Germany, France, Spain, and beyond. Any European chapter using a scoring system like this after August 2026 without these measures in place is operating a non-compliant high-risk AI system.
More broadly, this pattern applies to any organisation that uses AI to evaluate people for access to programmes, jobs, funding, or services:
- Accelerators and incubators with AI-assisted application screening
- Companies using ATS software with automated candidate ranking
- Financial institutions using AI for credit or investment eligibility
- Educational institutions with AI-assisted admissions tools
The technology involved is often simple — a RAG pipeline, an LLM API call, a scoring function. The legal implications are not. And the line between limited risk and high risk is often a single feature.
AI Literacy has been mandatory since 2 February 2025. If your organisation uses AI systems and has not yet trained the relevant staff, you are already out of compliance — regardless of August 2026.
Try the code on GitHub
Both versions of the system are available on GitHub: the limited-risk chatbot (Version 1) and the high-risk scoring version (Version 2). The repository includes setup instructions, a sample knowledge base, and inline comments mapping each component to the relevant AI Act obligation.
Does your system have a scoring feature?
If you are building or deploying AI systems that evaluate people in Europe, find out exactly what obligations apply before August 2026.
Get a free 30-minute assessment

This article is for informational purposes only and does not constitute legal advice. Source: Regulation EU 2024/1689. Updated February 2026. © 2026 euaiact.pro — Gianluca Capuzzi.