Architecture · 6 min read

What Is a Patent-Pending AI Data Firewall?


Founder & CEO, Pop Hasta Labs

From my perspective, the phrase “AI data firewall” is doing a lot of work, and most people using it don’t mean the same thing. I want to describe what we mean at Other Me, especially because our version is patent-pending (UK Patent Application 2602911.6) and the mechanics matter.

What a firewall is, in this context

A traditional firewall sits between two networks and allows or blocks traffic based on rules. An AI data firewall sits between your users and the AI model, and allows or blocks the data that’s about to be sent for inference. The critical word is “about to be.” The firewall intercepts before the data reaches the model, not after.

Most AI tools have the opposite architecture — the data goes to the model, and if something sensitive is returned, they try to redact it post-retrieval. This is like letting the cat out and then trying to catch it. Especially for regulated practices, pre-retrieval blocking is the only architecture that’s defensible.
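The difference between the two architectures can be sketched in a few lines. Everything here (the function and argument names, the shape of the documents) is illustrative, not Other Me’s actual API; the point is only where the filter sits relative to the model call.

```python
def pre_retrieval_query(user, query, documents, is_in_scope, call_model):
    """Pre-retrieval blocking: filter BEFORE anything is sent for
    inference, so out-of-scope documents never reach the model at all."""
    allowed = [d for d in documents if is_in_scope(user, d)]
    return call_model(query, allowed)


def post_retrieval_query(user, query, documents, redact, call_model):
    """The opposite (weaker) architecture: everything goes to the model,
    and sensitive material is scrubbed from the answer afterwards."""
    answer = call_model(query, documents)  # model already saw everything
    return redact(user, answer)
```

In the first function, an out-of-scope document simply never appears in the prompt; in the second, the model has already seen it and only the response is cleaned up.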

What our firewall actually does

Three structural properties:

1. Pre-retrieval scope enforcement — permissions are checked during search, so out-of-scope items never enter the candidate set.
2. Zero plaintext in the vector index — the search index contains mathematical representations of content, not readable text; sensitive content lives in a separate encrypted store released only on verified retrieval.
3. 100% fail-closed — any failure at any stage returns zero content, never a best-effort guess that might leak.
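The three properties fit together in one retrieval path, which a minimal sketch can show. All names here (`firewall_search`, the ACL dict, the encrypted store) are hypothetical stand-ins, not the patented implementation, and similarity ranking is omitted for brevity.

```python
def firewall_search(user, query_vector, index, acl, encrypted_store, decrypt):
    """Illustrative retrieval path with the three structural properties.

    index           -- doc_id -> embedding vector (no readable text)
    acl             -- user -> set of doc_ids that user may see
    encrypted_store -- doc_id -> ciphertext, separate from the index
    """
    try:
        # 1. Pre-retrieval scope enforcement: the ACL is applied during
        #    search, so out-of-scope items never enter the candidate set.
        #    (Ranking candidates by similarity to query_vector is omitted.)
        candidates = [doc_id for doc_id in index
                      if doc_id in acl.get(user, set())]

        # 2. Zero plaintext in the index: readable content is released
        #    only from the encrypted store, after the check above.
        return [decrypt(encrypted_store[doc_id]) for doc_id in candidates]
    except Exception:
        # 3. Fail-closed: any failure at any stage returns zero content.
        return []
    
```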

Beyond these, PII redaction happens before send — NINOs, dates of birth, names and addresses are replaced with placeholder tokens before the model sees the content. And a per-user kill switch means that when a staff member leaves, their historical prompts become undecryptable.
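Both mechanisms can be sketched briefly. The regex patterns and the in-memory key store below are assumptions for illustration, not the product’s own redaction rules or key management.

```python
import re

# Illustrative patterns only; real redaction covers far more identifiers.
NINO = re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b")  # UK National Insurance number
DOB = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")    # date of birth, dd/mm/yyyy

def redact(text):
    """Replace identifiers with placeholder tokens before send."""
    text = NINO.sub("[NINO]", text)
    return DOB.sub("[DOB]", text)

class KillSwitch:
    """Per-user kill switch via crypto-shredding: deleting a user's key
    leaves their encrypted history permanently undecryptable."""
    def __init__(self):
        self.keys = {}  # user -> encryption key

    def revoke(self, user):
        self.keys.pop(user, None)
```

The kill switch works because nothing readable is stored: once the key is gone, the ciphertext that remains cannot be opened by anyone, including the vendor.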

Why this matters for UK SMEs

Especially for regulated practices, the firewall is what makes an AI tool defensible under ICAEW, SRA, FCA, RICS, GDC and similar regulatory frameworks. Without it, client data travels to the AI model in readable form and may be used for training. With it, client data stays in your tenant and the AI works from redacted representations — the model never sees the client’s actual identifiers.

I believe this is the specific architectural choice that turns Ai from “possible compliance risk” into “confident compliance story.” A privacy policy says what the vendor intends. A firewall enforces what the vendor does.

Learn more

Technical detail is on the Security page, and specific workflows are on the Built for SMEs and vertical pages. A free 7-day trial (no credit card) lets you examine the firewall behaviour directly.


Abhishek Sharma

Founder & CEO of Pop Hasta Labs. Building Other Me — the governed AI platform with patent-pending security architecture. Based in London.

