Last updated: 2026-04-11

SMB Compliance Checklist

A practical checklist for UK SMBs to assess and improve their AI compliance posture.

This checklist covers the practical steps a UK SMB should take to be AI-compliant. It's ordered by priority — start at the top and work down.

Foundation (Do First)

AI inventory complete

You know every AI tool, API key, and model your business uses. SpendLil's auto-discovery handles the API side.

Risk classification done

Each AI use case is classified: minimal, limited, high, or unacceptable risk. Use the Obligation Checker if unsure.

No prohibited practices

Confirmed you're not using AI for social scoring, real-time biometric surveillance, manipulation of vulnerable groups, or other banned practices.

Written AI policy

A document covering: what AI your business uses, who's authorised to use it, what's off-limits, who's responsible, how incidents are handled.

AI literacy training

Staff who use or oversee AI understand what it does, its limitations, and your organisation's AI policy. Required under the EU AI Act since 2 February 2025.

Visibility & Logging

AI spend tracked

You can see what every AI tool costs by key, provider, and model. SpendLil provides this.
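Whether you use SpendLil or your own export, spend tracking reduces to grouping per-request cost records. A minimal sketch of that aggregation, assuming hypothetical records with `api_key`, `provider`, `model`, and `cost_usd` fields (the field names are illustrative, not any provider's schema):

```python
from collections import defaultdict

# Hypothetical per-request cost records, e.g. exported from your usage logs.
requests = [
    {"api_key": "key-marketing", "provider": "openai", "model": "gpt-4o", "cost_usd": 0.012},
    {"api_key": "key-marketing", "provider": "openai", "model": "gpt-4o", "cost_usd": 0.009},
    {"api_key": "key-support", "provider": "anthropic", "model": "claude-sonnet", "cost_usd": 0.020},
]

def spend_by(records, field):
    """Total spend grouped by one field: 'api_key', 'provider', or 'model'."""
    totals = defaultdict(float)
    for r in records:
        totals[r[field]] += r["cost_usd"]
    return dict(totals)

print(spend_by(requests, "provider"))
print(spend_by(requests, "api_key"))
```

The same grouping by `api_key` is what lets you attribute spend to a team or project when each gets its own key.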

Usage logged with audit trail

Every AI request is logged with timestamp, model, and metadata. Logs for high-risk systems are retained for at least six months, or longer where other legal obligations require it.
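An audit trail can be as simple as one JSON line per request, written before or after each API call. A minimal sketch, assuming an append-only JSONL file; the field names (`request_id`, `purpose`, etc.) are illustrative, not a mandated schema:

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_request(log_path, model, purpose, metadata=None):
    """Append one audit record per AI request as a JSON line."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "purpose": purpose,
        "metadata": metadata or {},
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_ai_request("ai_audit.jsonl", "gpt-4o",
                     "customer support draft", {"team": "support"})
```

JSONL files like this are easy to retain, rotate, and hand to an auditor; a database works just as well if you already run one.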

Alerts configured

Spend thresholds, volume spike detection, and new key alerts are set up and routed to the right people.
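The two alert types named above, a budget threshold and a volume spike, can be sketched as a single check over daily spend totals. The 80% threshold and 3x spike factor below are illustrative defaults, not regulatory values:

```python
def check_alerts(daily_spend, budget_monthly, spike_factor=3.0):
    """Return alert messages for spend thresholds and volume spikes.

    daily_spend: list of daily totals for the month so far, most recent last.
    """
    alerts = []
    month_to_date = sum(daily_spend)
    # Threshold alert: warn before the monthly budget is actually exceeded.
    if month_to_date > 0.8 * budget_monthly:
        alerts.append(f"Spend {month_to_date:.2f} is over 80% of budget {budget_monthly:.2f}")
    # Spike alert: compare today against the average of previous days.
    if len(daily_spend) >= 2:
        baseline = sum(daily_spend[:-1]) / len(daily_spend[:-1])
        if baseline > 0 and daily_spend[-1] > spike_factor * baseline:
            alerts.append(f"Volume spike: today {daily_spend[-1]:.2f} vs daily avg {baseline:.2f}")
    return alerts

print(check_alerts([10, 12, 11, 60], budget_monthly=100))
```

Routing the returned messages to email or chat is the part that matters for compliance: an alert nobody receives is not an alert.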

Budget limits set

Monthly AI spend budgets are defined and monitored.

Transparency

AI disclosure on chatbots

Any customer-facing chatbot clearly states that the user is talking to an AI. A 'Powered by AI' label or equivalent.

Human handover notification

When a human takes over from an AI chatbot, the user is told: 'You're now talking to a person.'

AI-generated content labelled

Content generated by AI (marketing copy, reports, images) is labelled as AI-generated where it's published or shared externally.

Privacy notice updated

Your privacy policy mentions your use of AI and how personal data is processed through AI systems.

High-Risk Use Cases (If Applicable)

Only complete this section if you use AI for high-risk purposes

High-risk includes: hiring/recruitment, credit decisions, insurance underwriting, education assessment, access to essential services, or safety-critical applications.

Fundamental rights impact assessment (FRIA)

Completed for each high-risk AI use case before deployment. Documents potential impacts on fundamental rights.

Human oversight assigned

A named individual with appropriate competence is responsible for overseeing each high-risk AI system.

Override capability

The human overseer can intervene in, override, or reverse AI-driven decisions.

Input data quality

Data fed to high-risk AI systems is relevant, representative, and free from known biases.

Bias monitoring

AI outputs in high-risk use cases are regularly monitored for bias across protected characteristics.

Incident reporting process

A process exists to report serious AI incidents to the provider and relevant authority.

System registered (EU)

If serving EU individuals, high-risk AI systems are registered in the EU database.

Workers informed

Employees and their representatives are informed when AI is used in employment-related decisions.

Data Protection

Lawful basis identified

For each AI use case processing personal data, you've identified and documented the lawful basis under GDPR.

DPIA completed (where required)

A Data Protection Impact Assessment is completed for AI processing that is likely to result in a high risk to individuals.

Data processing agreements

DPAs in place with AI providers (OpenAI, Anthropic, etc.) covering how they process personal data on your behalf.

Automated decision-making rights

If AI makes decisions with legal or significant effects on individuals, they have the right to human review (GDPR Art. 22).

Personal data in prompts reviewed

You've assessed whether personal data is being sent in AI prompts and taken steps to minimise it.
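One practical minimisation step is redacting obvious identifiers before a prompt leaves your systems. A minimal sketch using two illustrative regex patterns; real PII detection needs far broader coverage (names, addresses, IDs) and ideally a dedicated tool:

```python
import re

# Illustrative patterns only: these catch emails and common UK phone formats,
# not personal data in general.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
}

def redact(prompt):
    """Replace likely personal data with placeholders before sending to an AI API."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Email jane.doe@example.com or call 07700900123"))
# Email [EMAIL] or call [UK_PHONE]
```

Redaction at the boundary both reduces GDPR exposure and makes your prompt logs safer to retain for the audit trail.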

Ongoing Governance

Regular review schedule

AI inventory, risk assessments, and policies are reviewed at least quarterly.

Incident response plan

A documented plan for what to do if an AI system causes harm, produces biased outputs, or suffers a data breach.

Supplier review

AI providers are periodically reviewed for their compliance posture, security practices, and data handling.

Compliance responsibility assigned

A named individual (or role) is responsible for AI governance within your business.