Security · Enterprise · 8 min read

NHS Staff Are Pasting Patient Data into ChatGPT. Here Is the Fix.

Abhishek Sharma

Founder & CEO, Pop Hasta Labs

Somewhere in your trust right now, a clinician is pasting a referral letter into ChatGPT. The letter contains an NHS number, a list of symptoms, medication history, and a consultant's working diagnosis. The clinician is not being reckless. They are trying to draft a discharge summary before their shift ends.

This is not a hypothetical scenario. It is happening across NHS trusts every single day, and the regulators are starting to pay attention.

The quiet crisis in NHS AI adoption

The NHS has an AI problem, but it is not the one most people think. The problem is not that staff are resistant to technology. It is that they have already adopted it — without governance, without approval, and without any protection for the patient data involved.

Free-tier AI tools like ChatGPT, Gemini, and Claude are being used daily by healthcare workers to summarise notes, draft letters, interpret test results, and structure referrals. The productivity gains are genuine. But so are the risks.

8 Caldicott Principles govern confidential patient information — free AI tools comply with none of them

What is actually happening

Healthcare workers are copying patient-identifiable information directly into consumer AI tools. This includes NHS numbers, dates of birth, full names, addresses, diagnosis codes, medication lists, and clinical notes. In many cases, entire referral letters or discharge summaries are pasted in wholesale.

The intent is never malicious. A junior doctor needs to summarise a complex case for handover. A ward clerk needs to draft a response to a GP referral. A community nurse needs to structure a safeguarding concern into the correct format. AI does this in seconds. The manual alternative takes thirty minutes they do not have.

But every one of those prompts sends special category health data to a third-party server outside the trust's control. The data may be stored, logged, or used for model training. Once it leaves the trust, there is no mechanism to retrieve, audit, or delete it.

Why staff are doing it

The answer is painfully simple. NHS staff are overworked, under-resourced, and under constant pressure to do more with less. AI tools offer a genuine lifeline for administrative tasks that consume hours of clinical time.

When the alternative is staying two hours past the end of a twelve-hour shift to complete paperwork, opening a browser tab and pasting a referral letter into ChatGPT is not laziness. It is survival. And when trusts have been slow to provide governed alternatives, staff fill the gap with whatever is freely available.

Staff do not paste patient data into ChatGPT because they do not care about confidentiality. They do it because no one has given them a tool that is both fast enough and safe enough.

Why this breaks the law

The legal position is unambiguous. Patient health data is among the most heavily protected categories of information in UK law, and pasting it into ungoverned AI tools breaches multiple regulatory frameworks simultaneously.

The Caldicott Principles

The eight Caldicott Principles form the foundation of patient data confidentiality across the NHS. They require that confidential information is justified for each use, used only when necessary and in the minimum amount, and accessed on a strict need-to-know basis, and that everyone with access understands their responsibilities. Principle 7, the duty to share information being as important as the duty to protect it, makes clear that sharing must still be governed and purposeful; Principle 8, added in 2020, requires that patients are told how their confidential information is used.

Free-tier AI tools satisfy none of these principles. There is no justified purpose registered for the data processing, no access control, no audit trail, and no accountability framework.

GDPR Article 9: special category data

Health data is classified as special category data under UK GDPR Article 9. Processing it requires both a lawful basis under Article 6 and a specific condition under Article 9(2). Pasting patient records into a consumer AI tool that stores inputs on servers outside the trust's data processing agreements meets neither requirement.

Art. 9 UK GDPR special category protections — health data requires explicit conditions for processing

The ICO has been increasingly clear that organisations cannot claim ignorance of employee behaviour as a defence. If staff are using ungoverned tools to process patient data, the trust is responsible.

NHS Data Security and Protection Toolkit

The NHS DSPT requires every organisation that accesses NHS patient data to meet specific data security standards. These include staff training, data governance policies, audit logging, and incident reporting. Ungoverned AI use creates a direct gap in DSPT compliance because the trust has no visibility into what data is being processed, by whom, or where it is being sent.

With both CQC and ICO increasing scrutiny of how NHS organisations handle data, unaddressed shadow AI use is becoming an increasingly visible compliance gap.

What NHS trusts need to do

Banning AI outright will not work. Staff will continue using it regardless, only now they will hide it. The solution is to provide a governed alternative that is fast enough to be genuinely useful but secure enough to protect patient data.

What does a governed AI tool for the NHS actually need?

NHS AI governance checklist:

  • Automatic PII detection and redaction — Patient-identifiable data must be stripped before it reaches any AI model. Staff should not have to manually redact; the system must do it for them.
  • Caldicott-aligned access controls — Users should only access patient data they are entitled to see, enforced at the system level, not left to individual judgement.
  • GDPR Article 9 compliance — Processing of health data must be governed by valid conditions, with clear data processing agreements in place for every AI provider involved.
  • Full audit trail — Every query, every piece of data accessed, and every AI response must be logged in a tamper-evident record for DSPT and ICO compliance.
  • Data residency within the trust's governance — Patient data must not leave the trust's controlled environment in identifiable form.
  • Staff usability — If the tool is slower or harder to use than ChatGPT, staff will not adopt it. Speed and simplicity are not optional.

How SCRS solves this

This is the problem Other Me's patent-pending SCRS (Secure Context Retrieval System) was built to address. SCRS uses a Dual-Gate architecture that enforces data governance before any AI model sees a single piece of patient data.

Here is how it works in an NHS context:

Auto-redaction at input. When a clinician pastes or types patient information into Other Me, the SCRS pipeline detects PII automatically — NHS numbers, names, dates of birth, addresses, diagnosis codes. This data is pseudonymised before the AI model processes the query. The model sees "[PATIENT_1] presenting with [CONDITION_1]" rather than real patient identifiers. The AI is just as useful for drafting, summarising, and structuring — but it never sees the actual patient data.
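
To make that concrete, here is a minimal sketch of what input-side pseudonymisation can look like, in Python. It is illustrative only, not Other Me's SCRS pipeline: the patterns, placeholder format, and pseudonymise helper are assumptions made for this post, and a production system would pair pattern matching with a clinical NER model, since regexes alone miss names and addresses.

```python
import re

# Illustrative patterns only, not the SCRS detector. NHS numbers are ten
# digits, conventionally grouped 3-3-4; a real system would also validate
# the check digit and use NER to catch names and addresses.
PATTERNS = {
    "NHS_NUMBER": re.compile(r"\b\d{3}\s?\d{3}\s?\d{4}\b"),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected identifiers with stable placeholders and return
    the mapping, which stays inside the trust's governed environment."""
    mapping: dict[str, str] = {}
    counts: dict[str, int] = {}
    for label, pattern in PATTERNS.items():
        for value in pattern.findall(text):
            if value in mapping.values():
                continue  # this identifier already has a placeholder
            counts[label] = counts.get(label, 0) + 1
            placeholder = f"[{label}_{counts[label]}]"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    return text, mapping

redacted, mapping = pseudonymise(
    "NHS number 943 476 5919, DOB 04/07/1962, presenting with chest pain."
)
# redacted == "NHS number [NHS_NUMBER_1], DOB [DOB_1], presenting with chest pain."
```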

Gate 1: scope-constrained retrieval. If the trust uses Other Me for document retrieval, Gate 1 ensures that each user can only search patient data they are entitled to access. A ward nurse sees records relevant to their patients. A consultant sees their caseload. Nobody gets broad, unscoped access to the entire patient dataset. This maps directly to Caldicott Principle 4: access on a strict need-to-know basis.
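
A rough sketch of the idea, not the Gate 1 internals: every result set is filtered against the user's entitlements before anything reaches the model. The record fields and UserScope shape below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserScope:
    user_id: str
    ward_ids: frozenset[str]     # wards the user is assigned to
    patient_ids: frozenset[str]  # explicit care relationships

def in_scope(record: dict, scope: UserScope) -> bool:
    """Need-to-know check: a record is retrievable only via a care
    relationship or a current ward assignment."""
    return (record["patient_id"] in scope.patient_ids
            or record["ward_id"] in scope.ward_ids)

def retrieve(results: list[dict], scope: UserScope) -> list[dict]:
    # Applied server-side, before results reach the user or the model,
    # so entitlement is enforced by the system, not by individual judgement.
    return [r for r in results if in_scope(r, scope)]
```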

Gate 2: AEAD encryption and verify-before-reveal. Every piece of data retrieved passes through authenticated encryption with associated data (AEAD). Even after retrieval, each data element is cryptographically verified before it reaches the user. If anything fails verification, the system returns nothing rather than risking exposure.
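
For the cryptographic shape of this, the sketch below uses AES-GCM from the Python cryptography library as a stand-in AEAD; the per-record key handling and record_id binding are assumptions for illustration, not Gate 2's actual design. Binding the record identifier as associated data means a ciphertext moved to the wrong context fails verification even under the correct key.

```python
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal(key: bytes, record_id: str, plaintext: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(12)  # 96-bit nonce, unique per encryption
    # record_id is bound as associated data, tying the ciphertext
    # to the record it was sealed for.
    return nonce, AESGCM(key).encrypt(nonce, plaintext, record_id.encode())

def reveal(key: bytes, record_id: str, nonce: bytes, ciphertext: bytes) -> bytes | None:
    try:
        return AESGCM(key).decrypt(nonce, ciphertext, record_id.encode())
    except InvalidTag:
        return None  # fail closed: return nothing rather than risk exposure

key = AESGCM.generate_key(bit_length=256)
nonce, ct = seal(key, "record-42", b"metformin 500mg twice daily")
assert reveal(key, "record-42", nonce, ct) == b"metformin 500mg twice daily"
assert reveal(key, "record-99", nonce, ct) is None  # wrong context fails verification
```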

0 pieces of identifiable patient data seen by the AI model — auto-redaction strips PII before processing

Caldicott compliance via consent-based scope grants. Other Me's access control model allows trusts to define data access scopes that align with Caldicott Principles. Access to patient data is granted based on role, care relationship, and explicit consent — not by default. This is governance by design, not governance by policy document.
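
In this model a grant is a small, auditable object rather than a paragraph in a policy document. The shape below is a plausible sketch; every field name is an illustrative assumption, not Other Me's schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ScopeGrant:
    user_id: str
    patient_id: str
    role: str         # e.g. "ward_nurse", "consultant"
    basis: str        # "care_relationship" or "explicit_consent", never default
    granted_on: date
    expires_on: date  # grants lapse and must be re-justified

def is_active(grant: ScopeGrant, today: date) -> bool:
    return grant.granted_on <= today < grant.expires_on
```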

DSPT alignment via audit trails and data governance framework. Every interaction is recorded in a tamper-evident audit log. Who queried what, when, which data was accessed, which model was used, and what the AI produced. This gives trusts the documentation they need for DSPT compliance and ICO accountability.
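
One common way to make a log tamper-evident is to hash-chain its entries, so that altering or deleting any record breaks every hash after it. The sketch below is a generic illustration of that technique, not the SCRS log format.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log: list[dict], event: dict) -> dict:
    """Append an event, chaining it to the previous entry's hash."""
    body = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prev_hash": log[-1]["entry_hash"] if log else "0" * 64,
        **event,  # who queried what, which data, which model, what was produced
    }
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edit or deletion breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev_hash"] != prev or expected != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```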

What this looks like in practice

A junior doctor needs to draft a discharge summary. They paste the clinical notes into Other Me. The system automatically detects and pseudonymises the patient's name, NHS number, date of birth, and GP details. The AI drafts a structured discharge summary using the clinical content — diagnosis, treatment, follow-up actions — without ever processing the patient's identity.

The doctor reviews the draft, and when they are satisfied, the system rehydrates the pseudonymised fields with the real patient data for the final document. The entire interaction is logged. The trust's Caldicott Guardian can audit exactly what happened. And the patient's identifiable data never left the trust's governed environment to reach a third-party AI server.
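
Rehydration is simply the inverse of the pseudonymisation sketched earlier, run inside the trust boundary with the mapping that never left it:

```python
def rehydrate(draft: str, mapping: dict[str, str]) -> str:
    """Swap placeholders back for real identifiers; runs only inside
    the trust's governed environment."""
    for placeholder, original in mapping.items():
        draft = draft.replace(placeholder, original)
    return draft

# e.g. rehydrate("Discharge summary for [NHS_NUMBER_1] ...",
#                {"[NHS_NUMBER_1]": "943 476 5919"})
```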

That is the difference between shadow AI and governed AI. Same productivity gain. No data protection breach.

Built for NHS trusts. Other Me provides enterprise deployment with SCRS auto-redaction, Caldicott-aligned access controls, and full DSPT audit trail support. See the healthcare solution or view pricing.

NHS staff are not the problem. They are doing what they need to do to deliver patient care under impossible pressure. The problem is the absence of tools that are both fast and safe. That gap is closeable — and closing it is no longer optional.

Pop Hasta Labs Ltd is registered at UK Companies House (No. 16742039). SCRS Dual-Gate architecture is the subject of UK Patent Application No. 2602911.6.

Abhishek Sharma

Founder & CEO of Pop Hasta Labs. Building Other Me — the governed AI platform with patent-pending security architecture. Based in London.

Try Other Me free for 7 days

AI assistants with governance built-in. No credit card required.

Start 7-day free trial