From my perspective, a DPIA is one of those compliance artefacts that SMEs treat as optional paperwork until the ICO tells them it wasn't. Under Article 35 UK GDPR, if AI processing is likely to result in a high risk to individuals, a DPIA is mandatory. For most practice-grade AI use, this threshold is met, especially when you're processing client data, special category data, or automating decisions that affect clients.
I believe the easier route is to run a short structured assessment rather than guess. Here are 10 questions. If you answer yes to three or more, you need a DPIA. If you answer yes to five or more, you need it yesterday.
The 10 questions
1. Does the AI tool process personal data about your clients, staff or candidates?
2. Does it process any special category data: health, ethnic origin, political opinions, religious belief, trade union membership, biometric or genetic data, sex life or sexual orientation?
3. Are you using AI to help make decisions about individuals, such as hiring, lending, pricing or advice?
4. Are you processing children's data?
5. Is the AI tool a third-party service that processes your data outside the UK?
6. Does the tool use the personal data for training or improving its own models?
7. Is the AI output shared with other third parties or exported to other systems?
8. Could an AI error materially affect the individual: wrong advice, wrong decision, wrong record?
9. Is the volume of personal data large: hundreds of client files, thousands of CVs, a practice-wide rollout?
10. Are individuals unaware their data is being processed by AI, or would they reasonably object?
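If you're screening several tools against these questions, the scoring is mechanical enough to script. A minimal sketch in Python, using the article's thresholds of three and five "yes" answers; the abbreviated question text and the function name are my own, not a real tool:

```python
# The ten screening questions, abbreviated from the article.
QUESTIONS = [
    "Processes personal data about clients, staff or candidates?",
    "Processes special category data?",
    "AI helps make decisions about individuals?",
    "Processes children's data?",
    "Third-party service processing data outside the UK?",
    "Uses the personal data for training or improving its models?",
    "AI output shared with third parties or exported to other systems?",
    "An AI error could materially affect the individual?",
    "Large volume of personal data?",
    "Individuals unaware of the AI processing, or would reasonably object?",
]

def dpia_verdict(answers: list) -> str:
    """Apply the article's thresholds: 3+ yes -> DPIA needed, 5+ -> urgent."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("Answer all ten questions")
    yes = sum(bool(a) for a in answers)
    if yes >= 5:
        return "DPIA needed urgently"
    if yes >= 3:
        return "DPIA needed"
    return "DPIA likely not mandatory, but document your reasoning"

# Example: a practice answering yes to questions 1, 2, 3 and 6
print(dpia_verdict([True, True, True, False, False, True,
                    False, False, False, False]))  # -> DPIA needed
```

One run per tool, and the verdicts drop straight into your vendor-assessment notes.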
Separately, the ICO has a list of nine specific processing types that always require a DPIA: automated decision-making with legal or similar effects, systematic monitoring of publicly accessible areas on a large scale, processing of special category or criminal offence data at scale, and six others. If any of those apply, you're already in DPIA territory regardless of how the 10-question test comes out.
What a good AI DPIA contains
Especially for UK SMEs, a DPIA doesn't need to be 30 pages. Six sections, four pages, delivered by your DPO within a week is the target.

1. Describe the AI processing: what the tool does, what data it sees, who the users are.
2. Assess necessity and proportionality: why AI, why this tool, why this data.
3. Identify risks to individuals: what could go wrong, who'd be affected, how severely.
4. Mitigations: what controls the vendor provides, what controls you add on top.
5. Residual risk: what's left after mitigations, and whether it's acceptable.
6. Review cadence: when you'll reassess, typically every 12 months or at any material change.
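The six sections above can double as a fill-in skeleton. A sketch of that structure as a plain Python dict, which a DPO could copy into whatever template tooling the firm uses; the keys mirror the article and the prompt text is paraphrased:

```python
# Skeleton of the six-section DPIA described above. Each value is a prompt
# the DPO replaces with the firm's actual answer.
DPIA_SKELETON = {
    "1. Processing description": "What the tool does, what data it sees, who the users are.",
    "2. Necessity and proportionality": "Why AI, why this tool, why this data.",
    "3. Risks to individuals": "What could go wrong, who would be affected, how severely.",
    "4. Mitigations": "Vendor-provided controls, plus the controls you add on top.",
    "5. Residual risk": "What is left after mitigations, and whether it is acceptable.",
    "6. Review cadence": "Reassess every 12 months, or at any material change.",
}

def unfinished_sections(dpia: dict) -> list:
    """Flag sections still holding the template prompt rather than an answer."""
    return [k for k, v in dpia.items() if v == DPIA_SKELETON.get(k)]

draft = dict(DPIA_SKELETON)
draft["1. Processing description"] = "Document drafting assistant over client files."
print(unfinished_sections(draft))  # the five sections still to complete
```

A four-page document built from these six keys is the target; anything `unfinished_sections` still flags is work outstanding, not residual risk assessed.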
Where most AI DPIAs fall down
From my perspective, the common weakness is section four. Firms list vendor mitigations they haven’t verified. “The vendor says they don’t train on our data” is not a mitigation you’ve assessed — it’s a vendor claim you’ve accepted. A proper DPIA verifies claims. It asks for architecture diagrams, not privacy policies. It verifies data residency through technical documentation, not marketing pages. It tests the control by simulating failure.
Good vendors make this easy by publishing architecture information openly. Other Me publishes the SCRS architecture, the UK data residency configuration, the audit chain cryptographic design — on the website, not under NDA. Your DPO can verify in an afternoon. If a vendor won’t give you that level of detail, that’s itself a finding for your DPIA.
AI DPIA mitigations that matter
Based on DPIAs I’ve seen close cleanly, four mitigations carry the most weight with the ICO.

1. Pre-processing PII redaction: personal identifiers stripped before the AI model sees the content.
2. Data-residency confirmation: processing happens in UK regions, with no cross-border transfer for training.
3. Audit trail: every AI interaction logged in a way the data subject or the ICO could examine.
4. Subject-rights infrastructure: Article 15 exports and Article 17 erasure both technically feasible at the individual level, in reasonable time.
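To make the first mitigation concrete, here is a deliberately minimal sketch of pre-processing redaction in Python. Production pipelines use trained NER models and far broader pattern coverage; these three regexes (email, UK phone, National Insurance number) are illustrative only:

```python
import re

# Toy redaction pass: replace obvious identifiers with labelled placeholders
# before any text reaches an AI model. Illustrative only; real pipelines
# combine NER models with pattern rules and human review.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def redact(text: str) -> str:
    """Substitute each matched identifier with its category label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.co.uk or 07700 900123."))
# -> Contact Jane at [EMAIL] or [UK_PHONE].
```

The point for the DPIA is verifiability: a redaction step you run and can test beats a vendor's unverified claim that redaction happens somewhere upstream.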
One further point: leaver handling is specifically relevant for AI tools. When a staff member leaves, their historical prompts may contain personal data. If you can't retract it or render it un-decryptable, the DPIA residual risk is higher.
What to do if your answer is “yes” to 5+ questions
If your practice clearly needs a DPIA, take three steps. One, get it drafted: use the ICO's free template, or engage a DPO-as-a-service if you don't have internal DPO capacity. Two, select an AI platform that makes the DPIA easy to complete rather than painful. A vendor who publishes their architecture openly is worth more to your DPIA than a vendor who demands NDAs.
Three, link the DPIA to a written AI policy. The DPIA assesses risk; the policy tells staff how to behave. Together they're defensible. Apart, they're ICO findings. We've published a free AI policy template for UK SMEs that pairs naturally with a DPIA.
Other Me and DPIAs
From a vendor perspective, we try to make the DPIA part trivial. UK data residency by default. Architecture published openly on the Features page and Security page. PII redaction built into the AI retrieval pipeline. A per-user kill switch for leavers. Structured Article 15 exports. Your DPO can populate the template by reading our documentation, not by requesting it.
You can start a free 7-day trial, no credit card, and run a DPIA against the trial deployment before you commit. That's how I'd suggest approaching any AI vendor evaluation: live testing, not just paperwork.