Compliance · UK SMEs · 9 min read

ISO 42001 Explained for UK SMEs — Why Procurement Teams Are Asking About It

Abhishek Sharma

Founder & CEO, Pop Hasta Labs

If you sell B2B in 2026, you have probably already had this email: "Please send us your AI policy and confirm your alignment with ISO/IEC 42001."

Three years ago that question did not exist. ISO 42001 was published in December 2023 as the first international standard for AI management systems. By 2025 it had appeared on a few enterprise procurement RFPs. By 2026 it is on most of them. Microsoft, GitHub, and several other major AI providers have already certified to it, which means buyers now ask their other vendors to match.

For a UK SME or regulated practice with one or two large customers, this becomes a problem fast. You don't have a chief AI officer. You don't have a compliance team. You have a real job to do. And now your largest customer's procurement team wants paperwork.

This article does three things. First, it explains ISO 42001 in plain English — no acronyms, no clause numbers in the prose. Second, it tells you honestly what your AI platform already does for you (more than you think). Third, it shows you how to answer the procurement email tomorrow.

What is ISO 42001, in plain English

ISO 42001 is a quality stamp for how a company uses AI. Just as ISO 27001 is the stamp for "we handle data properly" and ISO 9001 is the stamp for "we run a quality business", ISO 42001 is the stamp for "we use AI responsibly".

It is voluntary. Nobody can force you to certify. But like ISO 27001 before it, the standard is becoming the default vocabulary that big buyers use to describe what they want from their AI vendors. Procurement teams use it because it gives them a checklist. Auditors use it because it gives them something to inspect. SaaS providers like Microsoft use it because it lets them tick a box that opens enterprise sales.

The standard has two halves. The first is a set of processes: write down how your organisation thinks about AI, who is responsible, how risks are assessed, how you review your decisions. The second is a list of controls: 38 specific things an AI-using business should be doing, grouped into nine categories like "AI policies", "data for AI systems", "AI system lifecycle", and "third-party relationships".

An auditor checks both halves. They want to see your written processes (the document side) and evidence those processes are working (the data side — audit logs, completed assessments, etc.). When both check out, they sign a certificate.

Why are people asking now

Three things came together in 2026.

  • Major AI vendors certified. Microsoft 365 Copilot, GitHub Copilot, and Microsoft Foundry got their ISO 42001 certificates in late 2025 and early 2026. Once the largest providers had it, every other AI vendor in the procurement pipeline got asked to match. The certificate became a procurement default.
  • The EU AI Act enforcement window opened. Article 26 of the EU AI Act starts being enforced for high-risk AI systems in August 2026. Procurement teams at large UK businesses with EU customers want to see ISO 42001 alignment as evidence their suppliers will not become a compliance problem when enforcement starts.
  • The Information Commissioner's Office signalled. The ICO's 2025 guidance on automated decision-making and AI explicitly references ISO 42001 as a recognised framework for demonstrating accountability under UK GDPR Article 24.

None of these on their own would have moved the needle. Together they have made the question — "what's your alignment with ISO 42001" — appear in roughly 80 percent of enterprise SaaS procurement RFPs we see in 2026.

What it actually requires

The 38 controls in the standard cover nine areas. Here is each area in one sentence.

  1. Have an AI policy. A written document approved by leadership, reviewed annually, that says how your organisation uses AI responsibly.
  2. Assign clear roles. Someone is accountable for AI decisions. Someone else can raise concerns without being punished.
  3. Have the right resources. The right people, the right tools, the right computing capacity, the right data.
  4. Run impact assessments. For each significant AI use, evaluate who could be harmed and how, and document it.
  5. Manage the AI lifecycle. From design through retirement, control how AI systems are built, deployed, monitored, and turned off.
  6. Govern data for AI. Where it comes from, how it's quality-checked, how it's prepared, how its origin is tracked.
  7. Be transparent. Tell your users, customers, and other affected people that AI is being used, and what to do if they object.
  8. Monitor responsible use. Set goals for how AI is used, define what intended use means, monitor whether reality matches.
  9. Manage third parties. The AI vendors you use, the customers you sell AI to, the contracts that govern the relationships.

If you read that list and thought "that's just sensible governance," you're right. The standard is not radical. It's a structured way of writing down what a careful organisation should be doing anyway.
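To make area 4 concrete: an impact assessment does not need to be a long document. Here is a minimal sketch of what the record could look like, in Python purely for illustration; the field names are ours, not the standard's.

```python
# Illustrative only: a minimal record for area 4 (impact assessments).
# The fields are our own naming, not taken from the standard's text.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIImpactAssessment:
    use_case: str                  # what the AI is used for
    affected_groups: list[str]     # who could be harmed
    potential_harms: list[str]     # how they could be harmed
    mitigations: list[str]         # what you do about it
    owner: str                     # the accountable person (area 2)
    assessed_on: date = field(default_factory=date.today)
    review_due: date | None = None # next scheduled review

assessment = AIImpactAssessment(
    use_case="Drafting first-pass client emails",
    affected_groups=["clients", "staff"],
    potential_harms=["inaccurate advice sent without review"],
    mitigations=["human review before sending", "audit log of all drafts"],
    owner="Practice manager",
)
```

A spreadsheet row with the same columns would satisfy the same control; the point is that every assessment names the harms, the mitigations, and an owner.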

What your AI platform probably already does for you

Here is the honest part most people don't realise: if you use a properly built AI platform, you already meet 60–80 percent of the controls without writing a single document yourself.

The data-related controls (area 6 above) are the largest cluster, five separate controls, and they are exactly what a governed AI platform handles by design: data quality, data origin tracking, data preparation, where the data is sourced from. If your AI vendor has per-client data separation, encrypted storage with proof of origin, and audit logging, those five controls are met by the architecture, not by your paperwork.

The lifecycle controls (area 5) are similar. Controlled deployment, monitoring, technical documentation, event logging — these are properties of a well-built platform, not of your written process. If your AI vendor publishes a security architecture page, has a tamper-evident audit chain, and ships incident communication tools, those controls are technically satisfied before you start.
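A quick aside on "tamper-evident audit chain", since the phrase does real work here. It means each log entry carries a cryptographic hash of the entry before it, so any after-the-fact edit breaks the chain and is detectable. A toy sketch, not any vendor's actual implementation:

```python
# Toy sketch of a tamper-evident audit chain: each entry hashes the
# previous entry, so retroactive edits break verification.
import hashlib
import json

def append_entry(chain: list[dict], event: str) -> None:
    # Hash covers the event and the previous entry's hash.
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"event": event, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    prev = "genesis"
    for entry in chain:
        body = {"event": entry["event"], "prev": entry["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "model prompt sent")
append_entry(log, "output reviewed by staff")
assert verify(log)            # intact chain passes
log[0]["event"] = "edited!"   # tamper with history
assert not verify(log)        # the chain detects it
```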

The third-party relationship controls (area 9) are partly handled too. Your AI vendor's data processing agreement, sub-processor register, and customer contract clauses are evidence you can point to when an auditor asks.

What this means in practice: you do not need to start from zero. You need to identify what your AI platform already gives you, and then close the gap on the remaining controls — which are the document-and-process controls that only you can do.

We publish exactly this mapping for Other Me on our Trust Center page. The customer-side scorecard shows which ISO 42001 controls (alongside ISO 27001, UK GDPR, and Caldicott) are technically satisfied by the platform, which are partial, and which are pure paperwork. Publishing the mapping openly is itself one of the ISO 42001 controls (transparency to interested parties).
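If you want to run the same gap analysis for your own stack, the shape of a scorecard is simple: one row per control area, a status, and a pointer to the evidence you would hand an auditor. A minimal sketch, with made-up statuses for illustration (this is not our actual mapping):

```python
# Hypothetical gap-analysis scorecard: map each control area to who
# satisfies it and the evidence you would show an auditor.
scorecard = {
    "data for AI systems":       {"status": "platform",  "evidence": "vendor security architecture page"},
    "AI system lifecycle":       {"status": "platform",  "evidence": "audit logs, deployment records"},
    "third-party relationships": {"status": "partial",   "evidence": "vendor DPA plus your contract clauses"},
    "AI policies":               {"status": "paperwork", "evidence": "your board-approved AI policy"},
    "impact assessments":        {"status": "paperwork", "evidence": "completed assessment records"},
}

# Anything not fully covered by the platform is your gap to close.
gaps = [area for area, c in scorecard.items() if c["status"] != "platform"]
print(f"{len(gaps)} areas still need your input: {', '.join(gaps)}")
```

The rows marked "platform" are closed by your vendor's architecture; everything else is the gap you close yourself, which is the subject of the next section.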

What only you can do

About 11 of the 38 controls cannot be delivered by any vendor. They are the organisation-level controls that require human decisions in your business.

  • A board-approved AI policy. A document, in your company's name, that says how you use AI. The platform vendor cannot write this for you, because it is your governance document. (That said, auto-generating a draft in plain language tailored to your industry is exactly the kind of thing the better governance tools offer in 2026.)
  • Clear roles and responsibilities. Who in your team is accountable for AI decisions. Who handles concerns. This is people stuff, not platform stuff.
  • An AI risk assessment process. A repeatable methodology your team uses for new AI use cases. Pick one, document it, follow it.
  • Staff training and awareness. A short training the team completes annually on responsible AI use.
  • An internal review cycle. Once a year, the leadership team sits down, looks at what is and isn't working with the AI policy, and writes minutes.
  • A continual improvement log. A list of changes and improvements to your governance over time.

None of these are individually large. The total time investment for a small organisation, once you have the templates, is typically two to four hours a quarter. The first time you do it, expect a long afternoon to write the AI policy and run the first impact assessments. After that, it's maintenance.

What it costs (the honest version)

If you want the actual ISO 42001 certificate (not just alignment), the costs for a small UK organisation break down roughly as follows; a worked total follows the list.

  • External certification body fees — between £8,000 and £15,000 in the first year for the Stage 1 + Stage 2 audits with a UKAS-accredited body like BSI. Then around £2,500 a year for surveillance audits. Recertification at year three is similar to the first audit.
  • Internal time — for a founder-led SME, expect 30 to 60 hours spread across two months to set up the management system, run the first internal audit, and prepare for the external auditor.
  • Consultants (optional) — UK consultancies charge between £8,000 and £30,000 to implement the management system on your behalf. Usually unnecessary for SMEs that take the structured-template approach.
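To put the first year in one number: using the ranges above, skipping consultants, and assuming a notional £100 an hour for internal time (our assumption for illustration, not a market figure), the arithmetic looks like this:

```python
# Rough first-year totals from the ranges above. The £100/hour rate
# for internal time is an assumption, not a quoted figure.
audit_fees   = (8_000, 15_000)   # Stage 1 + Stage 2, UKAS-accredited body
internal_hrs = (30, 60)          # founder/staff time to set up and prepare
hourly_rate  = 100               # assumed internal cost per hour

low  = audit_fees[0] + internal_hrs[0] * hourly_rate
high = audit_fees[1] + internal_hrs[1] * hourly_rate
print(f"First year, no consultants: £{low:,}–£{high:,}")  # £11,000–£21,000
print("Each surveillance year after: ~£2,500 plus your time")
```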

If you don't want the certificate yet but want to be ready when a customer asks — that's much cheaper. The right governance tooling can give you the technical evidence and the document drafts for under £200 a year. Our Trust Center page walks through what you'd actually need.

How to answer procurement when they ask

The honest, defensible answer for most UK SMEs in 2026 is some version of this:

"We are not yet ISO 42001 certified, but our AI usage is aligned with the standard's controls. Our underlying platform [X] handles approximately 70 percent of the technical controls by design — see attached scorecard. Our internal AI policy, risk assessment methodology, and impact assessments cover the remaining controls. We can provide a complete audit pack on request."

That answer is true, defensible, and exactly what most procurement teams expect from a sub-£10m supplier. They are not expecting you to be Microsoft. They are checking whether you have thought about it and can produce evidence. Saying "yes, here's our scorecard and our audit pack" is enough to clear the gate.

What you cannot say truthfully, and should never claim, is "we are ISO 42001 certified." The word certified means an accredited certification body (UKAS-accredited, in the UK) has audited you and issued a certificate. Nothing else. Aligned, compliant, and working towards are all defensible. Certified is binary, and lying about it is contractually catastrophic.

What we built so customers can answer this question

Other Me is a governed AI platform for UK regulated practices and SMEs. We built it because we were on the receiving end of these procurement emails ourselves. Our security architecture handles roughly 70 percent of ISO 42001's technical controls by design, and we publish the per-control mapping openly on our Trust Center. Our customers can also enable a Governance Add-on from £6 a month that auto-generates the policy documents, runs the data subject and breach workflows, and produces the audit pack you hand to procurement.

We're not yet ISO 42001 certified ourselves; that's a 9-month engagement we'll undertake when the volume of customer requests warrants it. We are certification-ready, openly mapped, and honest about which controls we satisfy and which we don't. Customers can read the full scorecard and start a free 7-day trial (card at signup, no charge for 7 days).

If you'd like to see how the Add-on handles your specific industry's procurement questions, start with our pricing page: it's a pure add-on at £12 a month on the Business tier, and it includes everything we've discussed in this article.


Abhishek Sharma

Founder & CEO of Pop Hasta Labs. Building Other Me — the governed AI platform with patent-pending security architecture. Based in London.
