AI & Regulation · 9 min read

EU AI Act Compliance for Startups: Practical Implementation Guide

#ai-act #compliance #startups #ai

In my previous post about the AI Act, I covered what the regulation says and when different parts kick in. Now let's get practical. If you're building AI products or running an AI startup, what do you actually need to do?

This isn't legal advice. Get a lawyer if you're doing anything serious. But these guidelines should help you figure out where to start.

Step 1: figure out your role

The AI Act defines different obligations depending on your role in the AI value chain.

Provider: you develop or have an AI system developed and place it on the market under your name. This is the heaviest obligation level.

Deployer: you use an AI system in a professional capacity. Lighter obligations but still some requirements.

Importer/Distributor: you bring AI systems into the EU market or make them available. Mostly need to verify the provider did their job.

Most startups are providers. If you're building the AI system yourself, you're a provider even if you use someone else's foundation model underneath.

Using OpenAI's API to build a chatbot? You're a provider of that chatbot application. OpenAI is the provider of the underlying model. You both have obligations, but different ones.

Step 2: classify your risk level

This determines everything else. Run through these questions:

Is it prohibited?

Check if your system does any of these:

  • Social scoring (by public or private actors)
  • Real-time biometric identification in public spaces (with narrow exceptions)
  • Emotion recognition at work or school
  • Untargeted facial image scraping
  • Manipulation through subliminal techniques
  • Exploiting vulnerabilities of specific groups

If yes, stop building it. These are banned in the EU.

Is it high-risk?

Your system is high-risk if it falls into one of these categories:

Annex III systems (standalone high-risk):

  • Biometric identification and categorization
  • Critical infrastructure management (water, gas, electricity, traffic)
  • Education and vocational training (admissions, assessments, learning)
  • Employment (recruitment, task allocation, performance monitoring, termination)
  • Access to essential services (credit scoring, emergency services, insurance)
  • Law enforcement (risk assessment, polygraphs, evidence analysis)
  • Migration and border control
  • Administration of justice

Safety components in regulated products:

  • Medical devices
  • Machinery
  • Toys
  • Vehicles
  • Aviation systems
  • Marine equipment

If your system makes decisions or provides input for decisions in these areas, you're probably high-risk.

Is it limited risk?

Systems that interact with people or generate content:

  • Chatbots and conversational AI
  • Emotion detection systems (outside banned contexts)
  • Content generation (images, audio, video, text)
  • Deepfakes and synthetic media

These need transparency but not the full compliance package.

Is it minimal risk?

Everything else. AI-powered spam filters, recommendation engines for non-essential content, internal tools that don't affect people's rights. No specific requirements.
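The four questions above can be sketched as a simple triage function. This is an illustrative simplification, not a legal test; `classify_risk` and its flags are hypothetical names, and real classification needs a careful read of the Act.

```python
# Hypothetical helper mirroring the risk triage questions in this post.
# The flags and tier names are simplifications, not legal definitions.

def classify_risk(prohibited: bool, annex_iii: bool,
                  regulated_safety_component: bool,
                  interacts_or_generates: bool) -> str:
    """Return a rough risk tier, checked in the same order as the questions."""
    if prohibited:
        return "prohibited"      # banned practice: stop building
    if annex_iii or regulated_safety_component:
        return "high-risk"       # full compliance program required
    if interacts_or_generates:
        return "limited"         # transparency obligations
    return "minimal"             # no specific requirements

# Example: a chatbot built on a foundation model
print(classify_risk(False, False, False, True))  # limited
```

Note the ordering matters: a system that would otherwise be limited risk is still high-risk if it falls under Annex III.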

Step 3: compliance roadmap by risk level

If you're minimal risk

You're mostly fine. No mandatory requirements. But consider voluntary compliance anyway:

  • Document how your system works
  • Have a process for handling complaints
  • Monitor for issues

This protects you if classification changes and builds good habits.

If you're limited risk

Focus on transparency. Your users need to know they're interacting with AI.

For chatbots and conversational AI:

  • Clear disclosure that they're talking to an AI
  • Don't pretend to be human
  • Make it obvious in the interface
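In code, this can be as simple as making the disclosure part of the first response rather than burying it in terms of service. A minimal sketch, with wording and function names of my own choosing:

```python
# Sketch of an unavoidable AI disclosure in a chat interface.
# The wording and function names are illustrative, not prescribed by the Act.

DISCLOSURE = "You are chatting with an AI assistant."

def render_reply(model_answer: str, first_turn: bool) -> str:
    """Prepend the disclosure at the start of every new conversation."""
    if first_turn:
        return f"{DISCLOSURE}\n\n{model_answer}"
    return model_answer

print(render_reply("Hi! How can I help?", first_turn=True))
```

The point is that the disclosure lives in the rendering path, so no conversation can start without it.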

For content generation:

  • Label AI-generated content as such
  • For deepfakes specifically, the disclosure must be clear and visible
  • Implement technical solutions where possible (watermarking, metadata)
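For the metadata side, one lightweight approach is a machine-readable provenance label stored alongside the generated file. The schema below is my own illustration, not a standard (for production, look at C2PA-style provenance and proper watermarking):

```python
# Sketch of a machine-readable provenance label for generated content,
# written as a JSON sidecar file. The schema is illustrative, not a standard.
import hashlib
import json

def label_generated(content: bytes, model: str, sidecar_path: str) -> dict:
    """Write a sidecar label tying an AI-generated file to its origin."""
    label = {
        "ai_generated": True,
        "model": model,
        "sha256": hashlib.sha256(content).hexdigest(),  # ties label to file
    }
    with open(sidecar_path, "w") as f:
        json.dump(label, f)
    return label

label = label_generated(b"<generated image bytes>", "imagegen-v2", "image.json")
```

A sidecar is easy to strip, which is why the Act also expects visible disclosure; treat this as a complement, not a substitute.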

For emotion detection:

  • Inform people that emotion recognition is happening
  • Get consent where required by GDPR

Timeline: transparency obligations become fully enforceable August 2, 2026.

If you're high-risk

This is where it gets serious. You need a comprehensive compliance program.

deadline approaching

Most high-risk system requirements become enforceable August 2, 2026. If you're building high-risk AI start now. This isn't something you can rush in the last month.

Here's what you need to build:

High-risk compliance checklist

1. Risk management system

You need an ongoing process to identify, analyze, and mitigate risks throughout the AI lifecycle.

Practically this means:

  • Document known risks before deployment
  • Define acceptable risk thresholds
  • Plan how you'll test for those risks
  • Have procedures for when risks materialize
  • Review and update as you learn more

Create a risk register. Update it regularly. Make someone responsible for it.
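A risk register doesn't need special tooling to start with. Here's a minimal sketch using only the standard library; the field names (`RiskEntry`, `owner`, `review_due`) are illustrative choices, not terms from the Act:

```python
# Minimal risk register sketch; field names are illustrative choices.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    severity: str        # e.g. "low" | "medium" | "high"
    mitigation: str
    owner: str           # make someone responsible for it
    review_due: date     # risks must be reviewed, not filed and forgotten

@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def overdue(self, today: date) -> list[RiskEntry]:
        """Entries whose scheduled review date has passed."""
        return [e for e in self.entries if e.review_due < today]
```

The `overdue` check is what turns a static document into the "ongoing process" the Act asks for.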

2. Data governance

Your training, validation, and testing data needs proper management.

Requirements:

  • Document data sources and selection criteria
  • Check data quality and relevance
  • Identify and address potential biases
  • Consider data protection implications
  • Keep records of data processing

If you're fine-tuning foundation models this applies to your fine-tuning data. If you're building from scratch it applies to everything.
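A simple way to make these requirements auditable is a per-dataset governance record, versioned alongside the data itself. The keys below are an illustrative datasheet-style schema, not anything mandated:

```python
# Sketch of a per-dataset governance record; keys and values are illustrative.
import json

dataset_record = {
    "name": "support-tickets-2024",                       # hypothetical dataset
    "source": "internal CRM export",
    "selection_criteria": "closed tickets, English, calendar year 2024",
    "known_biases": ["over-represents enterprise customers"],
    "personal_data": True,                                # flags a GDPR review
    "quality_checks": ["deduplication", "label audit on 5% sample"],
}

# Stored as versioned JSON next to the data, this covers 'document sources',
# 'check quality', and 'identify biases' in a form a regulator can inspect.
serialized = json.dumps(dataset_record, indent=2)
```

One record per dataset version is the habit worth building; the exact schema matters less than consistency.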

3. Technical documentation

Before placing your system on the market you need comprehensive technical docs.

What to include:

  • General description of the system and its purpose
  • Design specifications and architecture
  • Training methodologies and techniques
  • Computational resources used
  • Performance metrics and benchmarks
  • Known limitations and residual risks
  • Human oversight requirements
  • Expected lifetime and maintenance needs

This isn't user documentation. It's detailed technical specs that a regulator could review to understand how your system works.
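One cheap guardrail is a CI-style completeness check over your docs folder. The section names below just mirror the list above; the check itself is an illustrative sketch, not a conformity requirement:

```python
# Sketch of a completeness check for technical documentation.
# Section names mirror the checklist above; the check itself is illustrative.

REQUIRED_SECTIONS = {
    "description", "architecture", "training", "compute",
    "metrics", "limitations", "oversight", "maintenance",
}

def missing_sections(written: set[str]) -> set[str]:
    """Return the required doc sections that haven't been written yet."""
    return REQUIRED_SECTIONS - written

print(sorted(missing_sections({"description", "metrics"})))
```

Run it in CI and a release can't ship with an empty "limitations" section.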

4. Record keeping and logging

Your system needs to automatically log events during operation.

Requirements:

  • Logs must be traceable to specific outputs
  • Retention period must match the system's intended purpose
  • Logs need to be accessible for inspection

Build this into your architecture from the start. Retrofitting logging is painful.
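A sketch of what "traceable to specific outputs" can look like in practice: every inference writes one structured record carrying a trace ID you can hand back with the output. Uses only the standard library; the field names are illustrative, not mandated:

```python
# Sketch of append-only event logging that makes each output traceable.
# Standard library only; field names are illustrative, not mandated.
import datetime
import json
import uuid

def log_inference(model_version: str, inputs: dict, outputs: dict) -> str:
    """Write one structured log record and return its trace id."""
    trace_id = str(uuid.uuid4())
    record = {
        "trace_id": trace_id,          # links this log line to a specific output
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "outputs": outputs,
    }
    with open("inference.log", "a") as f:   # append-only JSON lines
        f.write(json.dumps(record) + "\n")
    return trace_id

trace = log_inference("v1.2.0", {"prompt": "hello"}, {"reply": "hi"})
```

In production you'd ship these to a log store with retention policies, but the shape of the record is the part to get right early.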

5. Transparency and instructions

Deployers need clear information about your system.

Provide:

  • Identity and contact details
  • System characteristics, capabilities, and limitations
  • Performance metrics for intended purpose
  • Known foreseeable misuse risks
  • Human oversight measures and how to implement them
  • Expected lifetime and maintenance requirements

Think of this as a detailed product manual for professional users.

6. Human oversight

Your system must allow meaningful human oversight.

This means:

  • Humans can understand system outputs
  • Humans can intervene or override
  • Humans can stop the system
  • System behavior is interpretable

You can't just have a human rubber-stamp decisions. The oversight needs to be meaningful, and the human needs the tools and information to actually exercise it.
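A minimal sketch of what those hooks can look like in a decision pipeline. Everything here (`route`, `KILL_SWITCH`, the threshold value) is an illustrative assumption, not a pattern from the Act:

```python
# Sketch of a decision wrapper with oversight hooks: a stop switch, a
# human-review path, and confidence exposed with every decision.
# All names and the threshold value are illustrative assumptions.

KILL_SWITCH = False       # operators can stop the system entirely
REVIEW_THRESHOLD = 0.8    # below this confidence, a human decides

def route(prediction: str, confidence: float) -> dict:
    """Return the decision plus the oversight path it took."""
    if KILL_SWITCH:
        return {"decision": None, "path": "halted"}
    if confidence < REVIEW_THRESHOLD:
        # queued for a human who gets the prediction AND its confidence,
        # so the review is informed rather than a rubber stamp
        return {"decision": prediction, "path": "human_review",
                "confidence": confidence}
    return {"decision": prediction, "path": "automated",
            "confidence": confidence}

print(route("approve", 0.65))   # low confidence: routed to human review
```

The design point is that the oversight path is recorded with each decision, so you can later show a regulator how often humans actually intervened.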

7. Accuracy, robustness, and cybersecurity

Your system must be:

  • Accurate for its intended purpose
  • Resilient to errors and inconsistencies
  • Robust against attempts to manipulate it
  • Secure against unauthorized access

Document how you achieve each of these. Have testing to back it up.

8. Quality management system

You need documented processes for ensuring ongoing compliance.

This includes:

  • Procedures for design and development
  • Data management procedures
  • Post-market monitoring
  • Incident reporting
  • Communication with authorities
  • Record keeping
  • Resource management
  • Accountability framework

For startups, this might feel like overkill. But you need something written down, even if it's simpler than what a big company would have.

9. Conformity assessment

Before market placement you need to verify compliance.

For most high-risk systems you can do self-assessment. Some categories require third-party assessment (notified bodies). Check which applies to your specific case.

After assessment you:

  • Draw up an EU declaration of conformity
  • Affix the CE marking
  • Register in the EU database

10. Post-market monitoring

Compliance doesn't end at launch.

You need ongoing monitoring to:

  • Collect data on system performance
  • Identify issues that emerge in production
  • Update risk assessments based on real-world use
  • Report serious incidents to authorities

Set up processes for this before you launch.
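Those processes can start as something very small, like an incident log with an escalation rule. The categories and threshold below are hypothetical; the actual reporting duties and deadlines come from the Act:

```python
# Sketch of an incident counter feeding post-market monitoring.
# Categories and the escalation threshold are hypothetical examples;
# actual reporting duties and deadlines come from the Act itself.
from collections import Counter

incidents = Counter()
SERIOUS = {"harm_to_person", "rights_violation"}

def record_incident(kind: str) -> bool:
    """Record an incident; return True when it should be escalated."""
    incidents[kind] += 1
    # serious incidents escalate immediately; repeated minor ones eventually do
    return kind in SERIOUS or incidents[kind] >= 10
```

Even this crude version forces the question "what counts as serious, and who gets told?" before launch instead of during a crisis.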

Practical tips for startups

Start documentation now

Even if you're not sure about your risk classification start documenting. Every AI project should have:

  • A clear description of what the system does
  • Documentation of training data sources
  • Performance benchmarks
  • Known limitations

This takes minutes per week if you do it as you go. It takes weeks if you try to reconstruct it later.

Build logging into your architecture

Don't treat logging as an afterthought. Every AI system should log:

  • Inputs received
  • Outputs generated
  • Model version used
  • Timestamps
  • Any relevant context

Structure your logs so they're actually useful for debugging and compliance review.

Design for human oversight

Think about how humans will interact with your system from the start:

  • Can someone understand why the system made a decision?
  • Can someone override or correct outputs?
  • Is there a kill switch?
  • Are there confidence indicators?

Building these in later is much harder than planning for them upfront.

Use regulatory sandboxes

Member states must have AI regulatory sandboxes in place by August 2, 2026. These let you:

  • Test innovative AI in a controlled environment
  • Get regulatory guidance before full market entry
  • Potentially get limited liability protection during testing

If you're building something novel, look into sandbox participation. It's free advice from regulators.

Get legal help early

The AI Act is complex. If you're building anything that might be high-risk, get a lawyer who knows this stuff involved early. The cost of getting it wrong is much higher than the cost of legal advice.

Don't over-classify

Some companies panic and assume everything is high-risk. That's not true and it wastes resources.

A recommendation engine for blog posts is not high-risk just because it uses AI. An internal tool that helps your team is probably not high-risk. Be honest about what your system actually does and who it affects.

Timeline recap

Now: start documentation and classification work

August 2026: high-risk systems and transparency obligations fully enforceable

August 2027: high-risk AI in regulated products (medical devices, machinery) becomes enforceable

Don't wait until 2026 to start. Compliance takes time to build properly.

Resources

Official sources worth bookmarking:

  • EU AI Act text (EUR-Lex)
  • European Commission AI Act page
  • AI Act Service Desk for guidance and timelines
  • Your national competent authority for local interpretation

The European AI Office also publishes guidance documents. Check for updates as they clarify implementation details.

Final thoughts

Compliance isn't just about avoiding fines. Building AI systems with proper documentation, oversight, and risk management makes them better products. You catch issues earlier. You can debug problems faster. You build trust with users.

The startups that treat compliance as a competitive advantage rather than a burden will come out ahead. Customers increasingly care about trustworthy AI. Being able to demonstrate compliance is a selling point.

Start small. Document as you build. Get the basics right. The regulation is complex but the core ideas are solid engineering practices that you should probably be doing anyway.

Written by Yari Bouwman

Data Engineer and Solution Designer specializing in scalable data platforms and modern cloud solutions.