Last updated: 2026-04-11

EU AI Act

What the EU AI Act requires and how it affects UK businesses.

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI law. It entered into force in August 2024 with phased enforcement through 2027. If your UK business serves EU customers, operates in the EU, or uses AI outputs that affect people in the EU, it applies to you.

Does It Apply to My UK Business?

Yes, if any of the following are true:

  • You have customers in the EU
  • You have employees in the EU
  • The output of your AI system is used in the EU (even if the system runs in the UK)
  • You supply AI products or services to EU-based businesses
  • You're part of a supply chain where AI decisions affect EU individuals

If you're a purely domestic UK business with no EU-facing operations, the EU AI Act doesn't directly apply — but the upcoming UK AI Bill likely will, and it's expected to align closely with the EU framework.

The Risk Categories

Unacceptable Risk (Banned)

These are prohibited outright. You cannot use AI for:

  • Social scoring — ranking people based on their social behaviour or personal characteristics
  • Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
  • Exploitation of vulnerabilities — targeting people based on age, disability, or economic situation
  • Subliminal manipulation — AI techniques that distort behaviour without a person's awareness
  • Emotion inference in workplaces or educational institutions (with narrow exceptions)
  • Untargeted facial image scraping from the internet or CCTV for recognition databases
  • Predictive policing based solely on profiling

🚨 These bans are already in force

The prohibited practices provisions took effect in February 2025. If you're using AI in any of these ways, stop immediately.

High Risk

AI systems are classified as high risk when used in these areas:

  • Recruitment and HR — CV screening, candidate ranking, interview assessment, performance evaluation
  • Credit and insurance — creditworthiness assessment, risk scoring, claims processing
  • Education — exam scoring, admissions decisions, learning assessment
  • Essential services — determining eligibility for benefits, social services, emergency services
  • Law enforcement — evidence evaluation, risk assessment (where permitted)
  • Migration and border control — visa/asylum processing
  • Critical infrastructure — safety components in utilities, transport, digital infrastructure
  • Biometric identification and categorisation

What High-Risk Deployers Must Do

If you deploy (use) a high-risk AI system, you must:

  1. Use the system in accordance with the provider's instructions
  2. Assign human oversight — a competent person who can understand, monitor, and override the AI's outputs
  3. Ensure input data is relevant and representative for the system's intended purpose
  4. Monitor the system's operation and report serious incidents to the provider and relevant authority
  5. Keep logs generated by the system for at least 6 months, or longer where sector regulation requires it (see the logging sketch after this list)
  6. Carry out a fundamental rights impact assessment (FRIA) before first use, where the Act requires one: public bodies, private providers of public services, and deployers of certain systems such as credit scoring
  7. Register the system in the EU database before use, where required (deployers that are public authorities or act on their behalf)
  8. Inform workers and their representatives when AI is used in employment-related decisions
  9. Cooperate with national authorities on request
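
Obligation 5 is the most directly technical. As a minimal sketch of what a retained log record might look like (the AuditRecord shape, field names, and retention constant below are illustrative assumptions; the Act sets a minimum retention period, not a schema):

```typescript
// Illustrative only: the Act requires keeping system-generated logs for at
// least six months but prescribes no format. All names here are assumptions.
interface AuditRecord {
  timestamp: string;      // ISO 8601 time of the AI call
  systemId: string;       // which high-risk system produced the output
  inputRef: string;       // pointer to the input data (avoid duplicating personal data)
  output: string;         // the AI's output, or a reference to it
  humanReviewer?: string; // who exercised human oversight, if anyone
  overridden: boolean;    // whether a human overrode the AI's recommendation
}

// Six months is the statutory minimum; sector rules may require longer.
const MIN_RETENTION_DAYS = 183;

function earliestDeletionDate(record: AuditRecord): Date {
  const deadline = new Date(record.timestamp);
  deadline.setDate(deadline.getDate() + MIN_RETENTION_DAYS);
  return deadline; // keep the record until at least this date
}
```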

Limited Risk

Limited-risk systems carry transparency obligations. If your AI system falls into one of these categories, you must make its use of AI clear to the people affected:

  • Chatbots — users must be told they're talking to an AI, not a human
  • AI-generated content — text, images, audio, or video generated by AI must be labelled as such
  • Emotion recognition — if you use AI to detect emotions, people must be informed
  • Deepfakes — AI-generated or manipulated images/video/audio of real people must be clearly labelled

Most business chatbots are limited risk

If you use an AI chatbot for customer support (like most SpendLil customers), your main obligation is transparency: tell customers they're talking to AI and inform them when a human takes over.
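
As a minimal sketch of what that transparency can look like in a chatbot integration (the function names and message wording below are illustrative assumptions, not text mandated by the Act):

```typescript
// Illustrative transparency messages for a customer-support chatbot.
// The Act requires that users know they are dealing with AI; the exact
// wording and delivery mechanism are left to you.
const AI_DISCLOSURE =
  "You're chatting with an AI assistant. You can ask for a human at any time.";

const handoverNotice = (agentName: string): string =>
  `You're now speaking with ${agentName}, a human member of our team.`;

// Disclose before the first AI response, not buried in terms and conditions.
function startConversation(send: (msg: string) => void): void {
  send(AI_DISCLOSURE);
}

// Make the AI-to-human switch explicit when an agent takes over.
function handOverToHuman(send: (msg: string) => void, agentName: string): void {
  send(handoverNotice(agentName));
}
```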

Minimal Risk

AI systems with minimal risk — spam filters, AI-assisted search, recommendation engines, scheduling tools — have no specific obligations under the EU AI Act beyond existing law (GDPR, consumer protection, etc.).

General-Purpose AI (GPAI)

The models you use through SpendLil — GPT-4o, Claude, Gemini, Mistral — are classified as General-Purpose AI models. The EU AI Act places obligations primarily on the providers of these models (OpenAI, Anthropic, Google, Mistral), not on you as a deployer.

However, when you integrate a GPAI model into a high-risk use case (e.g., using GPT-4o for CV screening), the resulting system is high-risk and you take on deployer obligations for that specific use.

Enforcement and Fines

Maximum fines depend on the violation:

  • Using a prohibited AI practice: up to €35 million or 7% of global annual turnover
  • Non-compliance with high-risk obligations: up to €15 million or 3% of global annual turnover
  • Supplying incorrect, incomplete, or misleading information to authorities: up to €7.5 million or 1% of global annual turnover

For SMEs and start-ups, fines are applied proportionately: the Act caps each fine at the lower of the two figures and requires regulators to consider the economic size and market position of the offender. But proportionate doesn't mean trivial: a 3% fine on a £5M turnover business is £150,000.

Enforcement Timeline

  • August 2024: Act enters into force
  • February 2025: Prohibited practices ban and AI literacy requirements take effect
  • August 2025: GPAI obligations and governance framework apply
  • August 2026: High-risk system obligations fully enforceable; market surveillance authorities active
  • August 2027: High-risk systems in Annex I (embedded in regulated products) fully enforceable

AI Literacy Requirement

From February 2025, all organisations using AI must ensure their staff have sufficient AI literacy. This means people who operate or oversee AI systems need to understand the basics: what the system does, its limitations, and the risks. This applies to every business, regardless of risk tier.

AI literacy is already required

This obligation took effect in February 2025. If your team uses AI tools without understanding their limitations and risks, you're already non-compliant.