Incident · 8 min read

What Happens If Your Employee Pastes Client Data in ChatGPT (Incident Response)

Abhishek Sharma

Founder & CEO, Pop Hasta Labs

In my experience, this incident happens in every UK SME at some point. Especially in firms that don’t have a governed alternative, it’s not a question of whether but when. The usual pattern: a junior has a deadline, pastes a client document into ChatGPT to summarise it, realises halfway through that what they’ve just done might be a problem, and goes silent. The firm finds out three weeks later when a colleague mentions it in passing.

At that point, the response matters more than the prevention story. The incident has happened. What you do in the next hour, day and week determines whether it’s a quiet internal fix or an ICO report.

First hour: understand what happened

Talk to the employee privately and calmly. Don’t start a disciplinary process; just ask: what tool, what data, how much, when. Write it down. Get them to show you the chat history if they still have access. If they’ve already cleared it, note that too: conversations already used for model training cannot always be recalled.

Assess the data categories. Was it personal data? Special category data (health, ethnicity, religion, sexual orientation)? Client financial identifiers? Legally privileged content? The category determines the severity. Also note whether the client can be identified from what was pasted; even a single name plus a specific fact may meet the identifiability threshold.
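If it helps to structure that first hour, here is a minimal sketch of the kind of record worth capturing, in Python. Every field and name is illustrative, not taken from any ICO form:

```python
from dataclasses import dataclass
from datetime import datetime

# Special category data under UK GDPR (illustrative subset).
SPECIAL_CATEGORY = {"health", "ethnicity", "religion", "sexual_orientation"}

@dataclass
class IncidentIntake:
    tool: str                     # e.g. "ChatGPT (free tier)"
    pasted_when: datetime         # when the paste happened
    became_aware: datetime        # starts the 72-hour Article 33 clock
    data_categories: set[str]     # e.g. {"client_names", "health"}
    identifiable: bool            # can a person be identified from the paste?
    chat_history_available: bool  # did the employee still have the chat?
    notes: str = ""

    def severity_flags(self) -> list[str]:
        """Quick triage hints; not a substitute for a proper assessment."""
        flags = []
        if self.data_categories & SPECIAL_CATEGORY:
            flags.append("special category data involved")
        if self.identifiable:
            flags.append("individuals identifiable")
        if not self.chat_history_available:
            flags.append("chat history cleared; recall uncertain")
        return flags
```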

First day: decide reportability

Under UK GDPR Article 33, the firm must notify the ICO within 72 hours of becoming aware of a personal data breach, unless the breach is “unlikely to result in a risk to the rights and freedoms of natural persons.” That qualifier is doing a lot of work. Especially when the pasted data went to a US-hosted model that may or may not retain it for training, the risk assessment is genuinely difficult.

In my view, the conservative position is to treat it as reportable if any of these is true: the pasted data could identify an individual; the receiving model trains on user content; or you cannot confirm the vendor has deleted the data. ICO reports for incidents that turn out not to be serious are an annoyance. Reports you should have made and didn’t are existential.
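As a sketch only, that conservative test reduces to a few lines. The function names are mine, and nothing here is legal advice:

```python
from datetime import datetime, timedelta

def reportable(identifiable: bool,
               vendor_trains_on_content: bool,
               deletion_confirmed: bool) -> bool:
    """Conservative test: report if ANY of the three triggers fires."""
    return identifiable or vendor_trains_on_content or not deletion_confirmed

def ico_deadline(became_aware: datetime) -> datetime:
    """UK GDPR Article 33: notify within 72 hours of becoming aware."""
    return became_aware + timedelta(hours=72)
```

If `reportable(...)` comes back True on your facts, start drafting and work backwards from `ico_deadline(became_aware)`.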

First day: client notification

UK GDPR Article 34 requires you to notify affected data subjects “without undue delay” if the breach is likely to result in a high risk to their rights and freedoms. For an accountancy firm that pasted a client ledger into ChatGPT, that bar is probably met. For a law firm that pasted a bundle, almost certainly. Especially if the client would reasonably expect you not to do that, telling them honestly builds more trust than letting them find out from an ICO inquiry.

The notification should be plain, factual and not defensive. “On [date], a member of our team used a third-party AI tool in a way that didn’t comply with our data-handling policy. Your information was included. We’ve assessed the risk and taken the following steps. We’re telling you because we believe you should know.” That’s the structure.
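If you want that structure as a reusable template, here is one way to render it. The helper is purely illustrative, and the final wording should come from you (or your counsel), not a code sample:

```python
# Illustrative template only; mirrors the structure described above.
NOTIFICATION_TEMPLATE = """\
On {date}, a member of our team used a third-party AI tool in a way that
did not comply with our data-handling policy. Your information was included.
We have assessed the risk and taken the following steps:
{steps}
We are telling you because we believe you should know.
"""

def render_notification(date: str, steps: list[str]) -> str:
    return NOTIFICATION_TEMPLATE.format(
        date=date,
        steps="\n".join(f"- {s}" for s in steps),
    )
```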

First week: containment and remediation

Contact the AI vendor formally. OpenAI has a data-deletion request process for ChatGPT. Submit it with the account details, the approximate timestamps, and the client reference. Get written confirmation of deletion where possible. Note that data already used in training cannot always be surgically removed, but the account content can.
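It is worth recording the request itself for your own audit trail. A hypothetical record, sketched below; nothing here calls a vendor API, since submission happens through the vendor’s own process:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical audit-trail record of a vendor deletion request.
# All field names are illustrative.
@dataclass
class DeletionRequest:
    vendor: str                          # e.g. "OpenAI"
    account_email: str
    approx_timestamps: list[datetime]    # when the pastes happened
    client_reference: str
    submitted_at: datetime | None = None
    written_confirmation: bool = False   # set True once you have it in writing
```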

Review and update your AI policy. If you didn’t have one, this is the time. We’ve published a free AI policy template for UK SMEs that takes an hour to adapt. Roll it out firm-wide and have every staff member sign it. Then move your team to a governed AI alternative so the incident is less likely to recur.

Document the whole incident in your breach register — what happened, when you became aware, what you assessed, what you did, what you changed. This register is what the ICO asks to see at the next inspection, and the quality of the register matters more than the absence of incidents.
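As a minimal sketch, the register can start life as an append-only CSV whose columns mirror those fields. The filename and column names below are my own, illustrative choices:

```python
import csv
from pathlib import Path

# Minimal breach register as an append-only CSV.
REGISTER = Path("breach_register.csv")
COLUMNS = ["what_happened", "became_aware", "assessment",
           "actions_taken", "changes_made"]

def log_breach(entry: dict[str, str]) -> None:
    """Append one incident to the register, writing a header on first use."""
    new_file = not REGISTER.exists()
    with REGISTER.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)
```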

First month: why a governed alternative matters

Especially after one incident, most firms I work with realise the policy alone won’t prevent the next one. If the approved tool is slower than ChatGPT, juniors will drift back. The only reliable prevention is a governed alternative that’s faster than the unapproved tool and makes the policy easy to follow rather than hard.

Other Me is built for this scenario. Per-client vaults, so AI working on one client can’t see another. An SCRS data firewall, so client data structurally cannot leave the firm. A tamper-evident audit chain for every interaction. And, critically, it’s faster than ChatGPT for the specific workflows juniors use it for: email drafts, bundle summaries, report drafting, CV shortlisting.

You can read more on the Built for UK SMEs page, or start a free 7-day trial with no credit card required. The trial is the full product, so you can test it on one real workflow before deciding. If you’re in a regulated vertical, see the specific solution page: accountancy, legal, mortgage and IFA, private healthcare.


Abhishek Sharma

Founder & CEO of Pop Hasta Labs. Building Other Me — the governed AI platform with patent-pending security architecture. Based in London.


Try Other Me free for 7 days

AI assistants with governance built-in. No credit card required.
