Regulation · EU · 10 min read

EU AI Act Enforcement Starts August 2026 — A 90-Day Playbook for UK Practices Serving EU Clients


Founder & CEO, Pop Hasta Labs

The EU AI Act is the world's first comprehensive AI law. It was passed in 2024, entered into force in August 2024 with obligations applying in stages from 2025, and on 2 August 2026 the high-risk provisions become enforceable.

If you are a UK accountancy firm with EU subsidiaries as clients, a UK law firm advising EU companies, a UK consultancy with European customers, or a UK SaaS provider selling into the EU — this article is for you. You probably have around 90 days to be ready.

The good news: most UK SMEs are not building high-risk AI systems. The bad news: many are using them, on behalf of EU customers, without realising it.

What changes on 2 August 2026

Three things switch on. The first is the high-risk AI system regime — a list of AI uses the EU has decided are dangerous enough to require active oversight. These include AI used in employment decisions, creditworthiness assessment, education access decisions, essential service eligibility, and a handful of other "decisions that materially affect people".

The second is the obligation regime that comes with high-risk classification. From 2 August, anyone placing on the market, putting into service, or deploying a high-risk AI system in the EU has to meet a list of requirements: risk management, data quality, technical documentation, transparency, human oversight, accuracy and robustness, and post-market monitoring. There is also a Conformity Assessment process — usually self-assessment for the high-risk Annex III categories, but some categories require an external notified body.

The third is enforcement. National competent authorities in each EU member state get the power to investigate, demand evidence, and fine non-compliant providers. Penalties top out at 35 million euros or 7 percent of global turnover, whichever is higher. For most UK SMEs that is a death sentence.

"But we're in the UK — does this apply to us?"

This is the question we hear most. The honest answer is: it depends on whether your AI output reaches an EU person, an EU customer, or an EU regulator.

The Act has extraterritorial scope. It applies to providers who place AI systems on the EU market, importers and distributors operating in the EU, deployers using AI in the EU, and non-EU providers and deployers when "the output produced by the AI system is used in the Union".

That last clause is the catch. Practically, here are the situations that pull a UK practice into scope.

  • You provide AI-driven services to EU clients. A UK consultancy that uses an AI system to produce HR recommendations for a French client. The AI output is being used in the EU. You're in scope as a deployer.
  • You sell or distribute an AI tool into the EU. A UK SaaS company with EU customers. Even if you operate from London, you're in scope as a provider.
  • You use AI to produce output that affects people in the EU. A UK recruitment firm using AI to screen candidates for any role located in the EU.

The situations that don't pull you in:

  • You only operate in the UK and your customers are UK-only.
  • Your AI output never reaches the EU.
  • You use general-purpose AI for internal admin (drafting your own emails, summarising your own meetings).

If you're not sure which side of the line you're on, that uncertainty is itself an EU AI Act risk that needs to be addressed. Your auditor and your enterprise customers will ask, and "we're not sure" isn't an acceptable answer.

Who is on the hook

The Act distinguishes between five roles. The two that matter most for UK practices are:

  • Provider. The party that develops or has developed an AI system and places it on the EU market. If you build an AI tool and sell it to EU customers — that's you.
  • Deployer. The party that uses an AI system under their authority. If you use an AI tool internally to produce output for EU clients — that's you.

Most UK SMEs we speak with are deployers, not providers. They are using a third-party AI platform (ChatGPT, Microsoft Copilot, Other Me, an industry-specific tool) to do their actual job. The deployer obligations are lighter than the provider obligations, but they are not zero.

Deployer obligations under the high-risk regime (mostly Article 26, building on the oversight design requirements in Article 14) include: assigning human oversight to trained staff, monitoring the AI system's operation, keeping the automatically generated logs, informing affected individuals before AI-supported decisions are made about them, and ensuring the input data you provide is relevant and sufficiently representative.

If you are using a high-risk AI system as a deployer, you also need to verify the provider is meeting their obligations — that is, you need to see their technical documentation, their conformity assessment, and their CE marking (yes, AI gets CE marking under the Act).

The four documents you need ready

For a typical UK SME deployer of a high-risk AI system, four documents form the backbone of a defensible compliance posture.

1. AI Use Register

A list of every AI system you use, what it's used for, who the affected individuals are, and which Annex III category (if any) it falls under. This is an internal document. Auditors and EU customers will ask to see it. Most UK SMEs don't have one yet.

2. AI Impact Assessment per high-risk use

For each AI use that is high-risk under the Act, a written assessment of who could be harmed, how, and what you've done about it. This is similar to a UK GDPR Data Protection Impact Assessment but with an AI-specific lens. Some EU AI Act requirements (Article 27 fundamental-rights assessment for deployers in certain cases) are essentially this document under a different name.

3. Human oversight documentation

Article 14 requires that high-risk AI systems be designed for human oversight. As a deployer, you need to evidence which human in your organisation oversees each AI-driven decision, what their role is, and what they can do if they spot a problem. A short policy plus role assignments is usually enough.

4. Provider verification pack

Evidence that the AI vendor you use has met their own provider obligations — their technical documentation, their conformity assessment summary, their CE marking. Your AI vendor should provide this. If they cannot, that is a red flag and a change-of-vendor conversation.

The cost of getting it wrong

Penalties under the Act are severe and tiered.

  • Prohibited AI practices (e.g., social scoring): up to 35 million euros or 7% of global turnover.
  • Most provider obligations (data quality, technical documentation, conformity assessment): up to 15 million euros or 3% of global turnover.
  • Provision of incorrect or misleading information to authorities: up to 7.5 million euros or 1% of global turnover.
  • SME-specific rule: for SMEs (under the EU SME definition: fewer than 250 employees and annual turnover of 50 million euros or less) each fine is capped at the lower of the two figures, not the higher. That is the only place the Act gives SMEs a meaningful break.
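To make the tiering concrete, the cap arithmetic can be sketched in a few lines of Python. The function name and the percent-as-whole-number convention are ours, not the Act's; this is an illustration of the "higher for everyone else, lower for SMEs" rule described above, not legal advice.

```python
def fine_cap(turnover_eur, fixed_cap_eur, pct, is_sme):
    """Maximum fine for a given tier.

    Standard rule: whichever of the fixed cap and the turnover
    percentage is HIGHER. SME rule: whichever is LOWER.
    """
    pct_cap = turnover_eur * pct / 100
    if is_sme:
        return min(fixed_cap_eur, pct_cap)
    return max(fixed_cap_eur, pct_cap)

# A firm with 10m euros turnover facing the top tier (35m euros or 7%):
print(fine_cap(10_000_000, 35_000_000, 7, is_sme=True))   # → 700000.0
print(fine_cap(10_000_000, 35_000_000, 7, is_sme=False))  # → 35000000
```

For that hypothetical 10-million-euro firm, the SME rule caps exposure at 700,000 euros rather than 35 million: still serious, but survivable.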

Beyond the financial penalties, there is reputation risk and customer-loss risk. An EU customer who finds out their UK supplier is non-compliant with the Act will quietly stop renewing. A UK regulator (the ICO has signalled it will track AI Act alignment as part of UK GDPR enforcement) may investigate. A leaked enforcement letter is the kind of news that ends sales pipelines.

A 90-day plan that actually works

Days 1–14: scope

  1. List every AI system in active use in your business. Don't try to remember — ask your team. ChatGPT, Copilot, Other Me, vertical tools, custom integrations, AI features inside CRMs, scheduling assistants, the lot.
  2. For each, classify it. Internal-admin only? Output reaches an EU person? Output affects an employment, credit, or essential-service decision? Use the Annex III list as a checklist.
  3. Identify which of your customers are in the EU. Cross-reference with the AI use list above. The intersection is your in-scope set.
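If it helps to make the scoping step concrete, the intersection logic can be sketched in a few lines of Python. The field names, example systems, and categories below are illustrative placeholders, not terms from the Act:

```python
# Hypothetical AI Use Register entries. In practice this lives in a
# spreadsheet or governance tool; the fields here are our own invention.
ai_uses = [
    {"system": "ChatGPT", "purpose": "drafting internal emails",
     "output_reaches_eu": False, "annex_iii": None},
    {"system": "CV screener", "purpose": "shortlisting candidates",
     "output_reaches_eu": True, "annex_iii": "employment"},
]

def in_scope(use):
    """Flag a use for high-risk attention when its output reaches
    the EU AND it falls under an Annex III category."""
    return use["output_reaches_eu"] and use["annex_iii"] is not None

scoped = [u for u in ai_uses if in_scope(u)]
print([u["system"] for u in scoped])  # → ['CV screener']
```

The point of the sketch is the AND: internal-only uses and non-Annex-III uses fall out, and whatever remains is the set the four documents need to cover first.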

Days 15–45: prepare the four documents

  1. Write the AI Use Register. A spreadsheet or a structured tool — both work. Update it whenever a new AI use appears.
  2. For each in-scope AI system, run an Impact Assessment. Affected individuals, possible harms, mitigations, residual risk, human oversight arrangements.
  3. Write a one-page Human Oversight policy. Who oversees what. What they can do.
  4. Email your AI vendors and ask for their EU AI Act compliance pack: technical docs, conformity assessment, CE marking. Vendors that don't have one yet will say so — that is itself useful information for your risk register.

Days 46–75: review + train

  1. Have a director or compliance lead review the four documents. They sign them.
  2. Brief the whole team on the AI Use Register and the Human Oversight policy. Most violations are caused by someone using AI for something the policy doesn't cover. Awareness prevents this.
  3. Set up a quarterly review cycle. The Use Register and the Impact Assessments need to be live documents, not one-time write-ups.

Days 76–90: customer-facing readiness

  1. Prepare a one-page summary you can send to EU customers when they ask. "Here is how we comply with the EU AI Act." Reference your four documents without disclosing them in full.
  2. Add EU AI Act alignment to your sub-processor register and your Privacy Notice (your customers' compliance teams will check both).
  3. Decide whether you want to publish a public-facing EU AI Act statement on your website. Many UK SMEs are choosing to do this proactively in 2026 because it answers procurement questions before they're asked.

Total time investment: 30 to 60 hours for a typical small UK SME, spread across the 90 days. The single biggest time-saver is using a governance tool that auto-generates the documents from a wizard rather than writing them in Word from scratch. We built one — your mileage may vary.

What not to do

Three failure modes we see often in UK practices.

  • Wait until August. The 90 days starts now. Even 30 hours of work, scattered across other priorities, takes roughly three months to fit in. There is no honest way to compress it into a fortnight.
  • Assume Brexit means it doesn't apply. Brexit took the UK out of the EU's legislative process, not out of the reach of EU law. The EU AI Act follows the same extraterritorial pattern as the GDPR — if your AI output is used in the Union, you are in scope, regardless of where you're based.
  • Buy a £20,000 consultancy package. Some UK consultancies are pitching EU AI Act readiness for £15,000 to £30,000. For most SMEs that is dramatic overkill. The Act is not that complicated for a typical deployer. The cost should be measured in tens of hours of internal time plus a small subscription to governance tooling, not five figures.

How we help

Other Me's Governance Add-on from £6 a month auto-generates the four documents above (AI Policy that covers the Use Register, AI Impact Assessments per system, and a Privacy Notice that includes the EU AI Act statement). The Impact Assessment wizard is specifically structured around the Annex III high-risk categories — you answer five questions and get back a compliance-grade draft.

If you'd like to see the full mapping of how Other Me's architecture aligns with the EU AI Act's high-risk article requirements, that's published openly on our Trust Center. EU AI Act mapping ships in v2 of our public scorecard (alongside ISO 27701 and NIST AI RMF), but the underlying governance tooling — Impact Assessments, audit logging, sub-processor register, privacy notice generator — already supports the work the Act will require.

The 90 days starts now. A free 7-day trial — card at signup, no charge for 7 days — is enough to see whether the tooling fits your workflow before you commit.


Abhishek Sharma

Founder & CEO of Pop Hasta Labs. Building Other Me — the governed AI platform with patent-pending security architecture. Based in London.
