Enterprise Guide · 6 min read

The 5-Minute AI Policy Every Small Business Needs


Founder & CEO, Pop Hasta Labs

Why you need an AI policy right now

From my perspective, the conversation about whether your team is using AI is already over, because they are. This is not a distant prediction or a trend that might arrive next year; it is happening right now in businesses of every size. Someone on your team has already pasted company data into a free chatbot to draft an email, summarise a document, or put together a client proposal. The productivity gains are genuinely impressive, but the risks that come with them are just as real.

75% of small and mid-sized businesses have no formal AI usage policy in place (ISACA, 2025)

Three quarters of businesses are operating without any AI governance at all. Their employees are making daily decisions about what data to share with AI tools, which tools to trust, and how to handle whatever the AI generates, all without any guidance from the company. Every one of those decisions is a potential data protection incident waiting to happen, and that statistic alone should be enough to make anyone take notice.

I believe the good news is that creating an AI policy does not require a legal team, a six-month project, or a 40-page document. You can have a working policy in place in five minutes, and this article gives you the template to do exactly that.

Why most businesses do not have one

If AI policies are so important, why do most small businesses not have one? From my perspective, the reasons are predictable and understandable, even if they are not good excuses.

It feels too early

Many business owners still think of AI as something that is coming rather than something that is already here, and they plan to address it eventually, once things settle down. But AI adoption among employees is not waiting for anyone's planning cycle, and by the time you get around to it, months of ungoverned usage have already happened, which makes the whole exercise harder than it needed to be.

It feels too complicated

AI governance sounds like something that requires specialist knowledge, legal review, and technical expertise. For a business with 10 or 50 employees that feels disproportionate, and the result is paralysis: nobody does anything because everyone assumes it needs to be perfect before it can exist, which is simply not true.

Nobody owns it

In larger companies AI governance falls to the CISO or the compliance team, but in a small business there is no obvious owner. The IT person is busy keeping systems running, the managing director is focused on revenue, and AI policy falls into the gap between roles where it never gets picked up.

The biggest risk is not getting your Ai policy wrong. It is not having one at all.

The 5-minute AI policy template

Here is a ready-to-use AI policy that you can adapt for your business right now. Copy it, fill in the specifics for your company, and share it with your team today. I have kept it deliberately short because a policy that people actually read is far more effective than a long document that sits untouched in a shared drive, and I believe information sticks best when it is concise and clear.

[Your Company Name] — AI Usage Policy

Effective date: [Date]   |   Owner: [Name/Role]   |   Next review: [Date + 90 days]


1. Approved Tools

The following AI tools are approved for business use: [List your approved tools here, e.g. Other Me, Microsoft Copilot]. All other AI tools, including free-tier chatbots and browser extensions with AI features, are not approved for use with any company or client data. If you want to use a tool not on this list, speak to [Name/Role] first.


2. Data Classification

Before using any AI tool, classify the data you intend to share:

Green — Public data. Marketing copy, published content, general knowledge questions. Safe to use with approved AI tools.

Amber — Internal data. Internal reports, meeting notes, strategy documents, financial summaries. May only be used with approved tools that have been configured for your company.

Red — Restricted data. Client personal data, employee records, contracts, legal documents, health information, financial account details. Must never be entered into any AI tool unless the tool has been specifically approved for restricted data by [Name/Role].
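For teams that want to make the traffic-light check concrete, the logic can be sketched in a few lines of Python. This is a minimal illustration only, not part of the policy itself, and the keyword lists are hypothetical placeholders you would tailor to the data your business actually handles:

```python
# Minimal sketch of the traffic-light classification check.
# The keyword sets below are illustrative placeholders only --
# adapt them to the data categories your business actually handles.

RED_KEYWORDS = {"client", "contract", "salary", "passport", "iban"}
AMBER_KEYWORDS = {"internal", "strategy", "forecast", "meeting notes"}

def classify(text: str) -> str:
    """Return 'red', 'amber', or 'green' for a piece of text."""
    lowered = text.lower()
    if any(word in lowered for word in RED_KEYWORDS):
        return "red"    # must never enter an AI tool without explicit approval
    if any(word in lowered for word in AMBER_KEYWORDS):
        return "amber"  # approved, company-configured tools only
    return "green"      # safe for any approved AI tool

print(classify("Draft a tweet about our product launch"))       # green
print(classify("Summarise these meeting notes on Q3 strategy")) # amber
print(classify("Review this client contract for renewal"))      # red
```

A keyword check like this will never catch everything, which is exactly why the policy puts the final responsibility on the person doing the pasting; the code is a prompt for the habit, not a substitute for it.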


3. Client Data Rules

Client data must never be pasted into a free-tier AI tool under any circumstances. When using approved tools with client data, remove or anonymise names and identifying details wherever possible. If you are unsure whether data counts as client data, treat it as restricted. When in doubt, ask before you paste.
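The "remove or anonymise identifying details" step can be partly automated. The sketch below runs a simple pattern-based redaction pass before text is shared with an AI tool; the patterns are deliberately basic examples (personal data takes many more forms than emails and phone numbers), so treat it as a starting point rather than a guarantee:

```python
import re

# Illustrative redaction pass to run before sharing text with an AI tool.
# These patterns are simple examples only -- they do not catch names,
# addresses, or most other personal data, so human review still applies.

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b0\d{4}\s?\d{6}\b"),  # rough UK number shape
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Contact Jane at jane.doe@client.co.uk or 07700 900123."))
# The email and number are masked; the name "Jane" is not, which is
# precisely why anonymisation cannot be left to regexes alone.
```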


4. Accountability

You are responsible for everything you share with an AI tool and everything you produce using one. AI-generated output must be reviewed for accuracy before it is sent to clients, published, or used in decision-making. If an AI tool produces something incorrect and you share it without checking, that is your responsibility, not the tool's.


5. Review Schedule

This policy will be reviewed every 90 days by [Name/Role]. AI tools and risks are evolving rapidly. If something changes before the next review, raise it with [Name/Role] immediately. Suggested changes are welcome at any time.

That is it. Five sections, one page, a policy that covers the essentials without overwhelming your team. Print it out, pin it in the kitchen, send it round on Slack. The goal is not perfection; the goal is a baseline that everyone understands and follows, one you can build on over time.

Making it stick

Writing a policy is the easy part; making sure your team actually follows it is where most businesses stumble. Here are a few practical ways to make your AI policy stick.

Announce it properly

Do not bury the policy in an email that gets lost in someone's inbox. Dedicate five minutes in your next team meeting to walk through it, and focus especially on explaining the why, not just the what. From my perspective, when people understand that client data pasted into a free chatbot could lead to a data breach and a regulatory fine, they take the policy seriously. When they just receive a document full of rules, they skim it and forget it, which defeats the entire purpose.

Make it visible

Put the data classification table somewhere people will see it every day: a poster in the office, a pinned message in your team chat, a bookmark on the company intranet. The traffic-light system of green, amber, and red is deliberately simple so that it becomes second nature. If someone is about to paste something into an AI tool, they should instinctively ask themselves whether it is green, amber, or red, and that instinct only develops through repeated exposure.

Lead by example

If the managing director or team lead is openly using unapproved AI tools, nobody will take the policy seriously. Leadership needs to model the behaviour it expects: use the approved tools, follow the data classification, and talk openly about AI usage in a way that shows governance and productivity are not in conflict. I believe this is especially important because people pay far more attention to what their leaders do than to what any document says.

Review and adapt

The 90-day review cycle is not just a formality. AI tools change rapidly: new capabilities appear, new risks emerge, and new tools show up constantly. Your policy needs to keep pace, and each review should ask three simple questions: are the approved tools still the right ones, has anyone encountered a situation the policy does not cover, and do the data classification rules still make sense?

What happens without a policy

If the positive case for an AI policy does not motivate you, consider what happens when things go wrong and you have no policy in place.

+£530k additional cost per data breach when shadow AI is involved (IBM Cost of a Data Breach Report, 2025)

When shadow AI contributes to a data breach, the breach costs over half a million pounds more than average, and for a small business that is simply not a survivable cost. And the financial damage is only part of the picture.

From a regulatory standpoint, under UK GDPR your company is the data controller, and if an employee shares personal data with an unapproved AI tool, the ICO will not accept "we had no policy" as a defence. In fact, the absence of a policy makes the situation worse, because it demonstrates a lack of the appropriate technical and organisational measures that the regulation requires.

Client trust, once broken, is nearly impossible to rebuild. If a client discovers that their confidential information was pasted into a free AI chatbot by one of your employees, the relationship is damaged in a way no apology can repair. For professional services firms especially, this is an existential risk.

Then there is the insurance angle, which most people do not think about until it is too late. Cyber insurance policies increasingly require evidence of data governance measures, and if you make a claim related to AI data exposure but cannot demonstrate that you had a policy in place, your insurer may reduce or deny the claim entirely.

Finally, prospective clients and employees increasingly ask about data governance during due diligence. Having no AI policy signals a lack of maturity that can cost you business before you even know you lost it, a silent kind of damage that never shows up on a report but proves very real over time.

I believe the cost of not having a policy is not hypothetical: it is measurable, it is growing, and it is entirely avoidable.

Tools that enforce the policy automatically

A written policy is a good start, but policies rely on people remembering to follow them, and people are fallible. From my perspective, the most effective approach combines a clear policy with technology that enforces it automatically, which is where governed AI platforms make a real difference.

Instead of relying on every employee to correctly classify data and choose the right tool, a governed platform handles this at the infrastructure level. Other Me, for example, enforces data governance through its patent-pending SCRS (Secure Context Retrieval System) architecture, and when your team uses Other Me instead of free-tier chatbots, several things happen automatically that give you confidence your data is actually protected.

Data never leaves your governance boundary, because conversations are processed through a Dual-Gate system that checks permissions before any data is accessed or returned; there is no copy-paste into an external tool, because the tool itself is governed. Different team members see different data based on their role, so a junior employee cannot accidentally access restricted client information through an AI query: the system enforces access rules at the retrieval layer. Every interaction is logged, which means that if you ever need to demonstrate to a regulator or a client what data was accessed and by whom, the records exist; this is simply impossible with free-tier tools. Finally, your team gets access to powerful AI models through a single governed interface rather than scattered across a dozen unapproved free tools.

Policy plus platform: the strongest approach is a written policy that sets expectations, combined with a governed AI platform that enforces them. Other Me starts at £15 per user per month, significantly less than the cost of a single data incident. Learn more about Other Me.

Start today, not next quarter

From my perspective, the biggest mistake businesses make with AI governance is waiting: waiting for the perfect policy, waiting for regulatory clarity, waiting for someone else to go first. Meanwhile, every day without a policy is another day of ungoverned AI usage, another day of data exposure risk, and another day closer to an incident that could have been prevented.

You now have a working AI policy template that will take five minutes to adapt for your business. Do it today: send it to your team before the end of the week and set a calendar reminder for the 90-day review. You can refine and improve it over time, but I believe the most important version of your AI policy is the first one, because it replaces having nothing at all, and that alone will pay off in ways you might not immediately see.

Your employees are already using AI; the only question is whether they are doing it with guardrails or without them. Five minutes is all it takes to answer that question responsibly, and I would strongly encourage you to take those five minutes today.

Pop Hasta Labs Ltd is registered at UK Companies House (No. 16742039). SCRS Dual-Gate architecture is the subject of UK Patent Application No. 2602911.6.


Abhishek Sharma

Founder & CEO of Pop Hasta Labs. Building Other Me — the governed AI platform with patent-pending security architecture. Based in London.

Try Other Me free for 7 days

AI assistants with governance built-in. No credit card required.

Start 7-day free trial