From my perspective, every UK SME needs an AI policy by now, especially if your team handles client data. The problem is, most firms I speak to don’t have one. Not because they don’t want one, but because they look at what the Big Four have published and think, “I can’t write that, I run a 12-person accountancy practice, not a 12,000-person consultancy.”
I believe this is the wrong reference point. Your AI policy doesn’t need to be 40 pages long with a glossary and an appendix. It needs to do three things. Tell your team what they can use AI for. Tell them what they can’t paste into it. And tell them what happens when something goes wrong. That’s it.
Why your firm needs one (even if you’re five people)
Especially when I’m working with owner-operators, the first question I get is “do we really need a policy if we’re this small?” The answer, in my view, is yes — and not for the reason you’d expect. The reason isn’t ICO fines or GDPR. The reason is that your team is already using AI, whether you have a policy or not. 68% of UK employees use free-tier AI through personal accounts, and 57% of them paste sensitive data into it. Your team is in that statistic right now.
A policy isn’t about stopping them. It’s about directing them. Without one, your junior bookkeeper is pasting client ledger lines into ChatGPT tonight because the alternative is working until 9pm. With one, she has a sanctioned tool that’s faster than ChatGPT and keeps the client data inside your firm. The policy is what moves her from the first behaviour to the second.
The four pillars your policy must cover
I tend to focus on four things when I’m helping a firm write theirs. First, scope — what AI use is allowed, what isn’t, and what needs explicit approval. Second, data — what can go into which tools, and what must never leave the firm. Third, accountability — who signs off AI-assisted outputs, who owns the audit chain, who responds to incidents. Fourth, review — how often the policy is updated, because AI moves faster than policy.
Beyond these four pillars, your policy should reference your regulator’s guidance where it exists. ICAEW, SRA, FCA, RICS and GDC are all publishing AI-specific guidance now. If you’re ICAEW-regulated, your policy should cite the fundamental principle of confidentiality and how your AI use supports it. If you’re SRA-regulated, it should cite Principle 7 and the Code of Conduct. Make your policy defensible to your regulator, and it’s automatically defensible to your clients.
Free template — adapt for your firm
Here’s the structure I’d use. Copy it, edit for your practice, and you’ve got a workable first draft in under two hours.
Section 1 — Purpose and scope. “This policy sets out how [Firm] staff may use Artificial Intelligence (AI) tools in their work. It applies to all permanent, temporary and contract staff, and to any AI tool used for firm or client work — whether provided by the firm or accessed personally.”
Section 2 — Approved tools. List the AI tools you’ve approved and the use cases for each. For a regulated practice, this is usually one governed platform (like Other Me) that handles client data, plus personal-use permissions for ChatGPT on non-client work.
Section 3 — Prohibited actions. Be specific. “Staff must not paste client names, client financial data, client case details, client medical information, or any personally identifiable information into non-approved AI tools, including but not limited to ChatGPT, Claude, Gemini, Grok, Copilot, or any browser-based AI assistant.” This is the sentence that actually protects your firm.
Section 4 — Sign-off and audit. “Every AI-assisted client deliverable must be reviewed and signed off by a [qualified fee-earner / registered professional / partner] before it leaves the firm. The audit trail of the AI interaction must be retained for [regulatory retention period].”
Section 5 — Incident response. “If you believe client data has been entered into an unapproved AI tool, you must notify [Data Protection Officer / Managing Partner] within 24 hours. The firm will assess whether the incident is reportable under GDPR Article 33 and take remedial action.”
Section 6 — Review. “This policy will be reviewed every six months, or immediately following any material Ai-related incident.”
What the policy alone can’t do
A policy by itself doesn’t solve the problem, though, especially in a small practice where everyone knows the rule is “don’t paste client data into ChatGPT” and everyone does it anyway because the alternative is slower. This is where the tooling matters. If you give your team a governed AI platform that’s faster than ChatGPT and structurally keeps client data out of general models, the policy enforces itself.
From my perspective, this is the gap most policies miss. They describe what not to do without giving the team a better option for what to do. Other Me is built specifically for UK SME practices that have this exact problem — you can read more about how it works for your vertical on the Built for SMEs page, or, if you run a regulated practice, on the relevant solution page: accountancy, legal, mortgage and IFA, private healthcare, wealth management.
What to do this week
If I were advising a practice owner today, I’d tell them to do three things this week. One, copy the template above, adapt it for your firm, and get it signed by the managing partner and circulated. Two, audit which AI tools your team is actually using right now — ask them, honestly. Three, trial a governed alternative that makes the policy easy to follow rather than hard. The trial doesn’t need procurement sign-off. You can start free, no credit card, and see whether it fits before anyone commits to anything.
I believe a good policy saves you the conversation you’d otherwise have with your regulator. And it saves your team the conversation they’d otherwise have with your clients. That’s a fair trade for two hours of drafting on a Sunday afternoon.