SMB Compliance Checklist
A practical checklist for UK SMBs to assess and improve their AI compliance posture.
It is ordered by priority: start at the top and work down.
Foundation (Do First)
You know every AI tool, API key, and model your business uses. SpendLil's auto-discovery handles the API side.
Each AI use case is classified: minimal, limited, high, or unacceptable risk. Use the Obligation Checker if unsure.
Confirmed you're not using AI for social scoring, real-time biometric surveillance, manipulation of vulnerable groups, or other banned practices.
A written AI policy covering: what AI your business uses, who's authorised to use it, what's off-limits, who's responsible, and how incidents are handled.
Staff who use or oversee AI understand what it does, its limitations, and your organisation's AI policy. Required under the EU AI Act since 2 February 2025.
Visibility & Logging
You can see what every AI tool costs by key, provider, and model. SpendLil provides this.
Every AI request is logged with timestamp, model, and metadata. Logs retained for at least 6 months (high-risk) or as required.
Spend thresholds, volume spike detection, and new key alerts are set up and routed to the right people.
Monthly AI spend budgets are defined and monitored.
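The logging, cost-visibility, and alerting items above can be sketched in a few lines. This is a minimal illustration, not SpendLil's implementation; the per-1K-token prices, field names, and threshold are assumptions.

```python
import time

# Hypothetical per-1K-token prices; real prices vary by provider and model.
PRICE_PER_1K_TOKENS = {"gpt-4o": 0.005, "claude-sonnet": 0.003}

SPEND_THRESHOLD_GBP = 100.0  # example alert threshold

def log_request(log, api_key_id, provider, model, tokens):
    """Append one AI request record with timestamp, model, and metadata."""
    cost = tokens / 1000 * PRICE_PER_1K_TOKENS.get(model, 0.0)
    log.append({
        "ts": time.time(),
        "key": api_key_id,
        "provider": provider,
        "model": model,
        "tokens": tokens,
        "cost": cost,
    })
    return cost

def spend_by(log, field):
    """Aggregate cost by "key", "provider", or "model"."""
    totals = {}
    for rec in log:
        totals[rec[field]] = totals.get(rec[field], 0.0) + rec["cost"]
    return totals

def over_threshold(log, threshold=SPEND_THRESHOLD_GBP):
    """Return API keys whose cumulative spend exceeds the alert threshold."""
    return [k for k, v in spend_by(log, "key").items() if v > threshold]
```

In practice the log records would go to durable storage with the retention period the checklist requires, and the threshold check would feed whatever alerting channel routes to the right people.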
Transparency
Any customer-facing chatbot clearly states that the user is talking to an AI. A 'Powered by AI' label or equivalent.
When a human takes over from an AI chatbot, the user is told: 'You're now talking to a person.'
Content generated by AI (marketing copy, reports, images) is labelled as AI-generated where it's published or shared externally.
Your privacy policy mentions your use of AI and how personal data is processed through AI systems.
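One way to make the disclosure items above hard to forget is a single wrapper that attaches the required notices to every chatbot reply. A minimal sketch; the notice wording and function name are illustrative, not prescribed by the Act.

```python
AI_DISCLOSURE = "Powered by AI - you are talking to an automated assistant."
HANDOVER_NOTICE = "You're now talking to a person."

def present_reply(text, is_first_message=False, human_took_over=False):
    """Prefix the transparency notices to a chat reply where required."""
    parts = []
    if is_first_message:
        parts.append(AI_DISCLOSURE)  # disclose AI at the start of the chat
    if human_took_over:
        parts.append(HANDOVER_NOTICE)  # tell the user a human has joined
    parts.append(text)
    return "\n".join(parts)
```

Centralising the notices in one code path makes the transparency obligation testable rather than dependent on each feature remembering it.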
High-Risk Use Cases (If Applicable)
High-risk includes: hiring/recruitment, credit decisions, insurance underwriting, education assessment, access to essential services, or safety-critical applications.
A fundamental rights impact assessment is completed for each high-risk AI use case before deployment, documenting potential impacts on fundamental rights.
A named individual with appropriate competence is responsible for overseeing each high-risk AI system.
The human overseer can intervene in, override, or reverse AI-driven decisions.
Data fed to high-risk AI systems is relevant, representative, and free from known biases.
AI outputs in high-risk use cases are regularly monitored for bias across protected characteristics.
A process exists to report serious AI incidents to the provider and relevant authority.
If serving EU individuals, high-risk AI systems are registered in the EU database.
Employees and their representatives are informed when AI is used in employment-related decisions.
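Monitoring high-risk outputs for bias across protected characteristics can start with something as simple as comparing selection rates per group. A rough sketch using the "four-fifths" rule of thumb; the function names are illustrative, and real monitoring needs statistical and legal review.

```python
def selection_rates(decisions):
    """decisions: iterable of (group, selected_bool) -> selection rate per group."""
    counts = {}
    for group, selected in decisions:
        total, hits = counts.get(group, (0, 0))
        counts[group] = (total + 1, hits + (1 if selected else 0))
    return {g: hits / total for g, (total, hits) in counts.items()}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` x the best rate
    (the 'four-fifths' rule of thumb used in adverse-impact screening)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r < threshold * best]
```

Run against, say, hiring-screen outcomes, a flagged group is a trigger for human investigation, not an automatic verdict of unlawful bias.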
Data Protection
For each AI use case processing personal data, you've identified and documented the lawful basis under GDPR.
A Data Protection Impact Assessment is completed for AI processing that is likely to result in a high risk to individuals.
Data processing agreements are in place with AI providers (OpenAI, Anthropic, etc.) covering how they process personal data on your behalf.
If AI makes decisions with legal or significant effects on individuals, they have the right to human review (GDPR Art. 22).
You've assessed whether personal data is being sent in AI prompts and taken steps to minimise it.
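For the last item, a pre-flight redaction pass over prompts is one way to minimise personal data before it reaches a provider. The regexes below are illustrative only and will miss many forms of personal data; a dedicated PII-detection tool and human review are needed in practice.

```python
import re

# Illustrative patterns for obvious UK personal data: emails, phone
# numbers, and National Insurance numbers. Not exhaustive.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"), "[PHONE]"),
    (re.compile(r"\b[A-Z]{2}\s?\d{6}\s?[A-D]\b"), "[NI_NUMBER]"),
]

def minimise_prompt(prompt):
    """Redact matching personal data; return (redacted_prompt, redaction_count)
    so the count can be logged for audit without logging the data itself."""
    count = 0
    for pattern, placeholder in PII_PATTERNS:
        prompt, n = pattern.subn(placeholder, prompt)
        count += n
    return prompt, count
```

Logging only the redaction count, not the redacted values, keeps the audit trail itself from becoming a store of personal data.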
Ongoing Governance
AI inventory, risk assessments, and policies are reviewed at least quarterly.
A documented plan for what to do if an AI system causes harm, produces biased outputs, or suffers a data breach.
AI providers are periodically reviewed for their compliance posture, security practices, and data handling.
A named individual (or role) is responsible for AI governance within your business.