From my perspective, bundle review is where AI saves the most time for a commercial or civil litigation firm. A 340-page bundle takes a decent junior half a day to summarise accurately; with AI, it takes two hours including the associate’s review. The gain is massive. The problem is that most AI tools are structurally wrong for this use case.
I believe the non-negotiable for any litigation firm is that bundle content stays inside the firm, and that the AI on Matter 2036-A4 cannot, structurally, retrieve anything from Matter 2036-B7. “Cannot” meaning the software will not let it, not “cannot” meaning we trust the clerk not to. Chinese walls in a policy memo are not the same as Chinese walls in the retrieval layer.
Why ChatGPT and Claude fail here
Consumer AI tools are architecturally flat. Every conversation in a user’s account can theoretically reach prior conversations. If the same associate works on Matter A and Matter B, the AI has no concept of matter isolation: it can draw on both. Worse, bundles uploaded to ChatGPT or Claude may be used to train the underlying model, which means privileged content has entered a third-party training set.
For solicitors, this matters twice: once for Principle 7 (confidentiality), and once for legal privilege. If a court ever needed to determine whether privilege had been waived by disclosure to a third-party AI, the argument is uncomfortably live.
How a governed platform handles it
In tools built specifically for solicitor firms, three controls make bundle review compliant. First, each matter has its own vault: encrypted storage isolated at the retrieval layer. Second, the AI operating on one matter literally cannot query another matter’s vault; the software enforces this, not a policy. Third, bundle content never trains the third-party model; it is processed in an inference-only mode where the provider contractually commits to zero-training use.
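To make “the software will not let it” concrete, here is a minimal sketch of matter-scoped retrieval. The `MatterVault` and `RetrievalLayer` names are hypothetical, invented for illustration, not taken from any real product; the point is simply that the query API accepts exactly one matter ID and contains no code path that joins or reaches across vaults:

```python
from dataclasses import dataclass, field


@dataclass
class MatterVault:
    """Encrypted-at-rest storage for a single matter (encryption omitted here)."""
    matter_id: str
    documents: dict[str, str] = field(default_factory=dict)


class RetrievalLayer:
    """Isolation is structural: retrieve() is scoped to one vault by design."""

    def __init__(self) -> None:
        self._vaults: dict[str, MatterVault] = {}

    def vault(self, matter_id: str) -> MatterVault:
        # Create the vault on first use; each matter gets its own store.
        return self._vaults.setdefault(matter_id, MatterVault(matter_id))

    def retrieve(self, matter_id: str, query: str) -> list[str]:
        # Only the named matter's vault is searchable. There is no method
        # that queries two vaults at once, so a Chinese wall breach is not
        # a policy violation but an impossible call.
        vault = self._vaults.get(matter_id)
        if vault is None:
            return []
        return [
            text for text in vault.documents.values()
            if query.lower() in text.lower()
        ]
```

The design choice worth noting: isolation enforced in the retrieval signature survives a careless prompt, whereas isolation enforced in a usage policy does not.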
Beyond isolation, every fee-earner interaction is logged to a tamper-evident audit chain: matter ID, fee-earner, prompt, retrieval sources, output, sign-off. If privilege is ever challenged, the chain provides a definitive record of where the bundle went and did not go.
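One common way to build a tamper-evident log like the one described above is a SHA-256 hash chain: each entry records the fields listed plus the hash of the previous entry, so altering any earlier record breaks verification from that point on. This is an illustrative sketch of the general technique, not Other Me’s actual mechanism:

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # placeholder hash before the first entry


class AuditChain:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = GENESIS

    def log(self, matter_id: str, fee_earner: str, prompt: str,
            sources: list[str], output: str, signed_off: bool) -> str:
        """Append one interaction; returns the new entry's hash."""
        entry = {
            "matter_id": matter_id,
            "fee_earner": fee_earner,
            "prompt": prompt,
            "sources": sources,
            "output": output,
            "signed_off": signed_off,
            "ts": datetime.now(timezone.utc).isoformat(),
            "prev": self._last_hash,  # links this entry to its predecessor
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self._entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute every hash; any edit to a past entry returns False."""
        prev = GENESIS
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if e["prev"] != prev or e["hash"] != hashlib.sha256(payload).hexdigest():
                return False
            prev = e["hash"]
        return True
```

Because each hash covers the previous one, a COLP can demonstrate not just what was logged but that nothing has been rewritten since.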
The bundle-review workflow
A typical session runs like this. The junior uploads the bundle to Matter A’s vault. The AI reads every page, produces a section-by-section summary, flags where witness evidence contradicts the pleadings, builds a chronology, and highlights weak links in the opponent’s case. Output goes to Word or PDF so the associate can review in their native tool.
The associate spends an hour reviewing the summary, spot-checking the original pages where the AI has flagged something important, then uses the AI to draft a letter before action based on the findings. The LBA comes out in the firm’s house style: the AI has seen prior LBAs from the matter vault and matches the tone. The associate edits; the partner signs.
COLP evidence for the SRA
For SRA-regulated firms especially, the audit chain makes COLP reviews easier. We’ve written separately on SRA guidance on AI for solicitors and what COLP evidence looks like. For bundle review specifically, the chain demonstrates three things: the bundle didn’t leave the matter vault, the associate signed off every AI-drafted output, and the chain hasn’t been altered since.
For Principle 7 specifically, the per-matter isolation proves the Chinese walls work structurally. If a client asks “how do you make sure the team on our matter can’t use AI trained on our opponent’s matter,” the architecture answers the question. A privacy policy does not.
Other Me for law firms
Other Me is built specifically for UK solicitor firms: per-matter vaults, Chinese walls enforced at the retrieval layer, a tamper-evident audit chain, and an SCRS kill switch for fee-earners who leave. The Law Firms solution page explains the full workflow, and a free 7-day trial (no credit card required) lets you run a real bundle through it before you commit.