From my perspective, the SRA has been quieter on AI than most people expect, and that quietness is being misread. Firms I speak to assume that no explicit AI rule means no AI risk. I believe this is backwards. The SRA hasn’t written an AI-specific rulebook because it doesn’t need to: the Principles and the Code of Conduct already apply to AI use, exactly as they apply to a paralegal filing papers at 4pm. If AI use breaches the Code, you’ve breached the Code.
So the question isn’t “what does the SRA say about AI?” The question is “how do the Principles and Code apply to AI use in my practice, and what evidence would my COLP need to show compliance?” That’s what this piece tries to answer.
Code paragraph 6.3: confidentiality applies to AI tools
The confidentiality duty is the one every solicitor knows: paragraph 6.3 of the Code of Conduct requires you to keep the affairs of current and former clients confidential. That duty doesn’t exclude AI, especially when you consider how most AI tools work — ChatGPT, Claude and Gemini in their consumer forms send your prompts to a third-party model and may use them for training. If your associate pastes a client’s bundle into ChatGPT to summarise it, that bundle has left your control. The confidentiality duty has been breached, whether or not there’s an enforcement action.
The SRA’s position, even without AI-specific guidance, would be that a solicitor using AI must take reasonable steps to ensure confidential information is not disclosed. “Reasonable steps” in 2026 means using an AI tool where confidentiality is architectural, not just promised. It means being able to demonstrate — to a COLP review, to a client who asks, to the SRA if it ever asks — exactly where client data travels when your firm uses AI.
The Code of Conduct: competence and supervision
Paragraph 3.5 of the Code of Conduct for Solicitors makes you accountable for work carried out through those you supervise, and requires you to supervise that work effectively. Paragraph 3.6 requires you to ensure that the individuals you manage are competent to carry out their roles. And paragraph 3.3 requires you to maintain your own competence and keep your professional knowledge and skills up to date. All three apply directly to AI use.
In practice, this means three things. One, an AI-assisted piece of work still needs qualified fee-earner sign-off: the AI can draft, but the solicitor must review and take responsibility. Two, the solicitor must understand the AI well enough to spot when it’s wrong — hallucinations are a real risk, and any fee-earner relying on AI output without reading it is failing the competence test. Three, supervising AI use across the firm is now part of the managing partner’s job, not an optional extra.
What your COLP needs to evidence
Especially in firms in the 2-to-30-fee-earner range, COLPs are asking themselves what they’d show the SRA if an AI-related complaint ever arrived. I tend to focus on four evidence streams when I advise a practice.
First, policy: a written AI policy stating which tools are approved, what data may go into them, and what sign-off is required. Second, audit: a log showing which fee-earner used AI on which matter, what data the AI saw, what output was produced, and who signed it off. Third, matter isolation: evidence that client A’s matter doesn’t leak into client B’s matter through the AI tool — information barriers made structural, not just written into the policy. Fourth, leaver handling: evidence that when staff leave, their AI access and historical prompt data are dealt with cleanly.
From my perspective, this evidence list is actually easier to produce with AI than without, provided you’re using a tool built for it. A governed AI platform like Other Me produces this audit chain automatically — you don’t have to reconstruct it. Without such a tool, you’re relying on your fee-earners’ browser histories and their honesty, which isn’t an evidential standard.
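For readers who want to see what “tamper-evident” means in practice, here is a minimal sketch of a hash-chained audit log. This is an illustration only, not Other Me’s actual implementation: the field names are hypothetical, and the point is simply that each entry records the fee-earner, matter, data, output and sign-off, plus a hash of the previous entry, so any retrospective edit breaks the chain and is detectable.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class AuditEntry:
    # Fields mirror the audit evidence stream above; names are illustrative.
    fee_earner: str
    matter_id: str
    data_summary: str    # description of what the AI saw, not the data itself
    output_summary: str  # what the AI produced
    signed_off_by: str
    prev_hash: str       # hash of the previous entry: the tamper-evidence link

    def digest(self) -> str:
        # Hash the entry's full contents, including the link to its predecessor.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def append(log: list[AuditEntry], **fields) -> AuditEntry:
    # Each new entry commits to the one before it.
    prev = log[-1].digest() if log else "genesis"
    entry = AuditEntry(prev_hash=prev, **fields)
    log.append(entry)
    return entry

def verify(log: list[AuditEntry]) -> bool:
    # Recompute the chain from the start; editing any past entry changes its
    # digest and breaks the next entry's prev_hash link.
    expected = "genesis"
    for entry in log:
        if entry.prev_hash != expected:
            return False
        expected = entry.digest()
    return True
```

The design choice that matters here is that the log is append-only by construction: a fee-earner (or anyone else) cannot quietly rewrite what the AI saw on a matter six months ago without `verify` failing, which is the property a COLP review needs.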
Practical steps for your firm this quarter
If you’re a COLP or managing partner reading this, I’d start with three practical actions. Audit what AI your fee-earners are using right now — a plain conversation, not a disciplinary one. Write a short AI policy (we’ve published a free template for UK SMEs that you can adapt for a solicitors’ firm in an hour). And trial a governed AI platform that gives you the evidence your COLP review will need, without changing how your associates work day-to-day.
Other Me is built for exactly this scenario. Per-matter retrieval isolation, tamper-evident audit chain, SCRS kill switch on leavers — the whole control environment a COLP would want. You can read how it works for SRA-regulated firms on the law firms solution page, or start a free trial and see for yourself.
Where I think SRA guidance goes next
I believe the SRA will publish AI-specific thematic guidance within the next 12 months, probably via the Thematic Review process it has used for money laundering and equality in previous years. The guidance will, in my view, be less about banning AI and more about evidencing appropriate controls. Firms that already have the policy, the audit chain and the governed tool in place will have nothing to retrofit. Firms that haven’t started will be scrambling.
Separately, I think clients will move faster than the regulator. Fintech clients, regulated institutions and public bodies are already asking their legal advisers how AI use is controlled. If you have a good answer, you keep the client. If you don’t, you lose them to a firm that does.