# UK AI Bill
What we know about the UK's approach to AI regulation and how to prepare.
The UK is developing its own approach to AI regulation. While the EU opted for a single comprehensive law, the UK's model leans toward sector-specific regulation through existing regulators. Here's what we know and how to prepare.
The UK AI Bill is still in development. Details may change. We'll update this page as new information becomes available.
## The UK's Approach
The UK government's AI regulation framework is based on five principles that existing regulators — the FCA, ICO, Ofcom, CMA, HSE, and others — are expected to interpret and enforce within their sectors:
- Safety, security, and robustness — AI systems should function reliably and securely
- Transparency and explainability — people should be able to understand how AI is being used and how decisions are made
- Fairness — AI should not discriminate or produce biased outcomes
- Accountability and governance — clear responsibility for AI outcomes, with appropriate oversight
- Contestability and redress — people affected by AI decisions should be able to challenge them
## How It Differs from the EU AI Act
| Aspect | EU AI Act | UK Approach |
|---|---|---|
| Legal instrument | Single comprehensive regulation | Sector-specific guidance with potential statutory duties |
| Risk classification | Defined risk tiers (unacceptable, high, limited, minimal) | Context-dependent, assessed by sector regulators |
| Enforcement | Central AI Office + national authorities | Existing regulators (FCA, ICO, Ofcom, etc.) |
| Scope | All AI systems by risk category | Focused on sectors where AI risk is highest |
| Compliance | Prescriptive requirements | Principles-based with sector interpretation |
| Extraterritorial | Applies to non-EU providers/deployers affecting EU | UK-focused, but likely to affect international businesses operating in UK |
## What This Means for Your Business
Even before the AI Bill is enacted, existing UK law already covers much of what the principles require:
- UK GDPR / Data Protection Act 2018 — cover AI processing of personal data, rights around automated decision-making (Article 22), and data protection impact assessments
- Equality Act 2010 — prohibits discrimination; AI systems that produce biased outcomes can create liability
- Consumer Rights Act 2015 — AI-powered services must meet 'reasonable care and skill' standards
- Financial Services regulations (FCA) — AI in credit, insurance, and investment is already regulated
- Employment law — using AI for hiring, performance management, or redundancy decisions has existing legal guardrails
If your business operates in a regulated sector (financial services, legal, healthcare, education), your sector regulator likely already has expectations about AI use. The AI Bill will formalise these, not start from scratch.
## Preparing Now
Regardless of the final form of the UK AI Bill, these actions will put you in a strong position:
- Create an AI inventory — know every AI tool your business uses, who uses it, and what for
- Assess risk — for each AI use case, understand the potential for harm (discrimination, financial loss, safety)
- Document decisions — record why you chose specific AI tools and models, and how you monitor their output
- Ensure human oversight — for any AI-driven decision that affects people, have a qualified human who reviews and can override
- Track spend and usage — demonstrate you know how much AI costs and how it's being used across the business
- Train your team — ensure everyone using AI understands its limitations and your organisation's AI policy
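The first two actions above — an AI inventory and a risk assessment — can be kept in something as simple as a spreadsheet or a small script. A minimal sketch in Python, purely illustrative (the field names and risk labels are assumptions, not drawn from any official template):

```python
from dataclasses import dataclass

# Illustrative AI inventory entry; fields and labels are assumptions,
# not an official or regulator-mandated schema.
@dataclass
class AITool:
    name: str
    owner: str             # team or person accountable for the tool
    purpose: str           # what the business uses it for
    risk_level: str        # e.g. "low", "medium", "high"
    human_oversight: bool  # does a qualified human review and override outputs?

def oversight_gaps(inventory: list[AITool]) -> list[str]:
    """Return names of high-risk tools that lack human oversight."""
    return [t.name for t in inventory
            if t.risk_level == "high" and not t.human_oversight]

inventory = [
    AITool("CV screener", "HR", "shortlist job applicants", "high", False),
    AITool("Spend categoriser", "Finance", "tag expenses", "low", False),
    AITool("Credit scorer", "Risk", "assess loan applications", "high", True),
]

print(oversight_gaps(inventory))  # → ['CV screener']
```

A register like this gives you a starting point for the documentation and oversight checks above: any tool surfaced by `oversight_gaps` is a candidate for adding a human review step before it makes decisions that affect people.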
## The Dual Compliance Challenge
If your business serves EU customers AND operates in the UK, you'll need to comply with both the EU AI Act and UK regulations. The good news: they share common principles. Building compliance for one gets you most of the way to the other. SpendLil's roadmap is designed to cover both frameworks.