ICAEW’s Quality Assurance Department has started asking about AI use in practice reviews, and most firms I speak to haven’t noticed yet. The questions aren’t in a thematic review document or a published guidance note. They come up in the live QAD visit, especially when the reviewer notices that a firm’s management-account commentary is faster or more polished than it used to be. “How are you producing this?” is the benign opener. What follows depends on your answer.
The firms that have thought about this in advance answer cleanly and move on; the firms that haven’t spend the next 30 minutes scrambling to reconstruct something the reviewer already wanted. Here’s what the questions look like in practice, and what a confident answer sounds like.
Question one: what AI tools is the firm using
Especially in the two-to-twenty-partner range, the honest answer is usually “we don’t know.” Juniors are on personal ChatGPT accounts. Senior associates might have Copilot bundled with Microsoft 365. Some partners are using Claude via the web browser. The firm has no central record. QAD reviewers aren’t hostile to AI use, but they do expect the firm to know what’s happening.
The answer that works is a short list of approved tools, with a stated purpose for each. “We use [governed AI platform] for client-facing work, which keeps client financial data inside our tenant. Personal Copilot use is permitted for non-client drafting: blog posts, internal memos, training material. ChatGPT and Claude are not approved for any client work.” That answer takes five minutes to write once and resolves the question for every future QAD visit.
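The approved-tool register described above is small enough to hold as data. A minimal sketch (tool names and fields are hypothetical, not any firm’s actual register) of how a firm’s IT function might encode and check it:

```python
# Hypothetical approved-AI-tool register: each entry records what the tool
# may be used for and whether client data may touch it.
APPROVED_TOOLS = {
    "governed-platform": {"purpose": "client-facing work", "client_data": True},
    "copilot-personal":  {"purpose": "non-client drafting", "client_data": False},
}

def is_permitted(tool: str, involves_client_data: bool) -> bool:
    """Return True if the named tool may be used for this task."""
    entry = APPROVED_TOOLS.get(tool)
    if entry is None:
        return False  # not on the register: not approved for anything
    if involves_client_data and not entry["client_data"]:
        return False  # approved tool, but not for client data
    return True

print(is_permitted("governed-platform", involves_client_data=True))   # True
print(is_permitted("copilot-personal", involves_client_data=True))    # False
print(is_permitted("chatgpt-web", involves_client_data=False))        # False
```

The point is less the code than the shape of the answer: a closed list, a purpose per tool, and a default of “not approved” for anything unlisted.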
Question two: how is client confidentiality protected
ICAEW’s fundamental principle of confidentiality applies to AI use the same way it applies to a clerk filing papers. The reviewer wants to know you’ve taken reasonable steps, and in 2026 “reasonable” increasingly means using tools where client data doesn’t feed into general model training.
A confident answer references three things. One, the AI tool’s architecture: where client data goes, whether it trains the model, whether it’s UK-hosted. Two, the scope controls: whether the AI working on Acme can accidentally retrieve Zenith’s data. Three, the redaction layer: whether PII or sensitive financial identifiers are stripped before the model sees them. Other Me provides all three out of the box, which is why firms I work with tend to answer this question by pointing the reviewer at the architecture diagram.
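To make the third item concrete, here is an illustrative sketch of a pre-model redaction pass that masks obvious UK financial identifiers before any text reaches a model. The patterns and placeholder names are my assumptions for illustration, not Other Me’s actual redaction layer:

```python
import re

# Illustrative redaction pass: replace sensitive identifiers with placeholders
# before the text is sent to an AI model. Patterns are simplified examples.
PATTERNS = {
    "[UTR]":       re.compile(r"\b\d{10}\b"),              # HMRC Unique Taxpayer Reference
    "[SORT_CODE]": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),   # UK bank sort code
    "[NINO]":      re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),  # National Insurance number
}

def redact(text: str) -> str:
    """Mask each matching identifier with its placeholder."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(redact("Client UTR 1234567890, sort code 20-00-00."))
# Client UTR [UTR], sort code [SORT_CODE].
```

A production layer would be far more thorough (names, addresses, company numbers), but this is the shape of control a reviewer is asking about: the model never sees the raw identifier.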
Question three: evidence and audit trail
This is the question that catches firms out. ICAEW QAD will ask to see an example of AI-assisted work, trace it back to whoever signed it off, and verify the source data used. Without a governed tool, this is nearly impossible to reconstruct: you’re relying on fee-earners’ memories and their browser histories.
With a governed platform, the audit chain is automatic. Every draft carries its source chain: which ledger entries informed which paragraph, which partner signed off, which version is final. The reviewer sees it as an exhibit, not a reconstruction. Better still, the audit chain is tamper-evident: if anyone alters a record after the fact, the chain breaks visibly. That’s the standard QAD is moving towards.
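“Tamper-evident” has a precise mechanical meaning, and a hash chain is the standard way to get it: each record’s hash covers the previous record’s hash, so altering any entry invalidates every later link. A minimal sketch (field names are illustrative, not a specific vendor’s schema):

```python
import hashlib
import json

def add_record(chain: list, entry: dict) -> None:
    """Append an audit record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps({"entry": entry, "prev": prev_hash}, sort_keys=True)
    chain.append({"entry": entry, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every hash; any after-the-fact edit breaks the chain."""
    prev_hash = "genesis"
    for rec in chain:
        payload = json.dumps({"entry": rec["entry"], "prev": prev_hash},
                             sort_keys=True)
        if rec["prev"] != prev_hash:
            return False
        if rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

chain = []
add_record(chain, {"doc": "year-end commentary", "signed_off_by": "partner-A"})
add_record(chain, {"doc": "management accounts", "signed_off_by": "partner-B"})
print(verify(chain))                        # True
chain[0]["entry"]["signed_off_by"] = "X"    # tamper after the fact
print(verify(chain))                        # False: the chain breaks visibly
```

This is why a reviewer can treat such a chain as an exhibit: the record either verifies end to end or it visibly doesn’t.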
Question four: staff lifecycle
Particularly since the 2024 data-protection changes, QAD is asking what happens when a staff member leaves. With free-tier AI tools, a junior who leaves takes their prompt history with them, and that history may contain client data. The firm has no way to retract it.
The answer that works is: “We use a platform with per-user encryption keys and a kill switch. When a staff member leaves, we revoke their keys in one click. Their historical prompt data becomes un-decryptable, even to the platform vendor.” That’s a structural guarantee, and QAD reviewers recognise it.
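The structural guarantee here is usually called crypto-shredding: each user’s data is encrypted under a per-user key, so deleting the key makes the stored ciphertext permanently unreadable. A toy sketch of the idea (the XOR one-time pad is illustration only; a real platform would use AES with managed keys, and the function names are mine, not any vendor’s API):

```python
import secrets

keys: dict[str, bytes] = {}  # per-user encryption keys

def store(user: str, plaintext: bytes) -> bytes:
    """Encrypt one record under a fresh per-user key; return the ciphertext."""
    key = secrets.token_bytes(len(plaintext))
    keys[user] = key
    return bytes(p ^ k for p, k in zip(plaintext, key))

def retrieve(user: str, ciphertext: bytes) -> bytes:
    """Decrypt, but only while the user's key still exists."""
    key = keys.get(user)
    if key is None:
        raise PermissionError("key revoked: data is un-decryptable")
    return bytes(c ^ k for c, k in zip(ciphertext, key))

def revoke(user: str) -> None:
    """The one-click kill switch: deleting the key shreds the data."""
    keys.pop(user, None)

ct = store("junior-1", b"Acme Ltd draft accounts")
print(retrieve("junior-1", ct))   # b'Acme Ltd draft accounts'
revoke("junior-1")
# retrieve("junior-1", ct) now raises PermissionError: the data is gone,
# even though the ciphertext itself was never deleted.
```

Note what the sketch demonstrates: the firm never has to find and delete every copy of the leaver’s data; destroying one key is enough.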
What a good AI-compliance posture looks like
Based on my conversations with firms that have passed QAD reviews cleanly after introducing AI, the pattern is consistent. One, a short written AI policy: it doesn’t need to be long, but it must exist. Two, one governed AI tool approved for client work, with every other tool explicitly not approved. Three, an audit chain that’s automatic rather than manual. Four, a clean leaver process. Five, a review cycle: the policy is refreshed every six months, because AI changes faster than that.
Together, these five things take a weekend to set up. Most firms never invest that weekend because there’s no immediate pressure, and then the QAD visit becomes painful. The firms that put the weekend in upfront save themselves the painful month.
What to do this quarter
If you’re a partner reading this and you don’t have the five things above, I’d start with the policy. We’ve published a free template for UK SMEs that adapts cleanly for an accountancy practice. Then trial a governed AI platform: our solution page for accountants explains how Other Me works specifically for ICAEW/ACCA/AAT-regulated firms, with Xero and Zoho Books integrations that keep client data inside your firm.
You can start a free 7-day trial, no credit card, and show the result to your junior partners next Monday. That’s usually how the conversation starts in firms I work with — not top-down, but from the partner who ran the pilot and saw their year-end commentary drop from 90 minutes to 15.