Guide · Governance · 9 min read

Schools Need an AI Policy by 2026. Most Do Not Have One.

Abhishek Sharma

Founder & CEO, Pop Hasta Labs

Somewhere in your school right now, a pupil is using a generative AI tool to complete homework. A teacher down the corridor is wondering whether that counts as cheating. And the headteacher has no formal policy to help either of them navigate the situation.

This is not a hypothetical scenario. It is the reality in the majority of UK schools heading into the 2026-27 academic year. AI tools are already embedded in how pupils learn, communicate, and produce work. The question is whether schools are governing that usage or simply hoping for the best.

The problem no one is talking about

The conversation around AI in education has so far focused on two extremes. Either AI is a transformative learning tool that will revolutionise classrooms, or it is a threat to academic integrity that must be blocked. Neither position is helpful on its own, and neither addresses the practical governance gap that most schools are facing right now.

76% of teachers are not confident advising pupils on AI use (Teacher Tapp / NFER, 2025)

Three quarters of teachers do not feel equipped to guide their pupils on how to use AI appropriately. This is not a failure of individual teachers. It is a systemic failure to provide schools with clear frameworks, training, and policies that address how AI should be used in an educational setting.

1 in 5 teachers do not know whether their school permits pupils to use generative AI (Teacher Tapp / NFER, 2025)

Nearly one in five teachers are unsure whether their own school even allows pupils to use generative AI. That ambiguity is a problem. When staff do not know the rules, they cannot enforce them. When pupils do not know the boundaries, they make their own decisions about what is acceptable.

What the government is doing

The UK Education Select Committee launched a formal inquiry into AI and EdTech in February 2026. The inquiry is examining how AI is being used in schools, what risks it presents, and whether current guidance is adequate. It is gathering evidence from teachers, school leaders, parents, and technology providers.

This matters because select committee inquiries typically lead to formal recommendations that shape DfE policy. Schools that wait for the final report before acting will find themselves playing catch-up. Schools that build a policy now will be ahead of whatever guidance emerges.

At the same time, there are strong signals that Ofsted is paying attention. Inspectors are increasingly likely to ask about AI risk management as part of their assessments of school leadership and governance. A school without an AI policy will struggle to demonstrate that it is managing this risk effectively.

Key insight: The Education Select Committee inquiry signals that formal government guidance is coming. Schools that build their AI policy now will be positioned to adapt, not scramble, when that guidance arrives.

What an AI policy actually needs to include

An effective school AI policy does not need to be a 40-page document. It needs to be clear, practical, and specific enough that every teacher and pupil knows what is expected. Here are the six essential components.

The 6-point AI policy checklist for schools

1. Approved tools list. Name the specific AI tools that pupils and staff are permitted to use for school-related work. Anything not on the list is not approved. Review this list every term as new tools appear and existing ones change their terms of service.
2. Age-appropriate usage rules. Define different rules for different year groups. What is appropriate for a Year 12 student researching an EPQ is not appropriate for a Year 7 pupil completing maths homework. Set clear boundaries for each key stage.
3. Data protection requirements. Specify what information pupils and staff must never enter into AI tools. This includes names, addresses, photos, SEN information, safeguarding concerns, and any other personal data. Make this list explicit and unambiguous.
4. Academic integrity rules. Define precisely what constitutes acceptable AI use for assessed work, homework, and classroom activities. Distinguish between using AI as a research tool, a drafting aid, and a direct answer generator. Pupils need to understand where the line is.
5. Reporting and escalation procedures. Establish what happens when AI produces harmful, inappropriate, or inaccurate content. Staff and pupils should know exactly who to report to and what the response process looks like.
6. Review schedule. AI capabilities change rapidly. Commit to reviewing the policy at least once per term, with trigger reviews when significant new tools or guidance emerge. Name the person responsible for each review.

A policy that sits in a shared drive untouched is not a policy. It is a liability. The goal is a document short enough to read in a staff meeting and clear enough that an early career teacher can apply it on day one.
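For schools that want the policy to be checkable rather than purely aspirational, the six points translate naturally into a small data structure. The sketch below is illustrative only: the tool names, roles, and dates are placeholders, and it assumes nothing about any particular product.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUsageRule:
    """Age-appropriate rules (point 2) and integrity categories (point 4)."""
    key_stage: str           # e.g. "KS3", "KS5"
    research: bool           # AI as a research tool, with citation
    drafting: bool           # AI as a drafting aid, with disclosure
    answer_generation: bool  # AI as a direct answer generator

@dataclass
class SchoolAIPolicy:
    approved_tools: list[str]       # point 1: anything absent is not approved
    usage_rules: list[AIUsageRule]  # points 2 and 4
    never_enter: list[str]          # point 3: data banned from AI tools
    report_to: str                  # point 5: who receives escalations
    owner: str                      # point 6: named policy owner
    next_review: date               # point 6: first termly review date

    def is_tool_approved(self, tool: str) -> bool:
        # Point 1 as a rule: anything not on the list is not approved.
        return tool in self.approved_tools

# Placeholder values for illustration only.
policy = SchoolAIPolicy(
    approved_tools=["ExampleTutorBot"],
    usage_rules=[
        AIUsageRule("KS3", research=True, drafting=False, answer_generation=False),
        AIUsageRule("KS5", research=True, drafting=True, answer_generation=False),
    ],
    never_enter=["names", "addresses", "photos", "SEN information",
                 "safeguarding concerns"],
    report_to="Designated Safeguarding Lead",
    owner="Deputy Head",
    next_review=date(2026, 12, 18),
)

assert not policy.is_tool_approved("UnvettedChatApp")
```

A companion like this also makes the termly review concrete: the reviewer diffs the approved list, adjusts the usage rules, and updates the next review date.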

Safeguarding and data protection

For schools, AI governance is fundamentally a safeguarding issue. Generative AI tools can produce content that is inappropriate for children. They can be used to generate harmful material. And they can collect personal data from pupils in ways that breach UK GDPR and the Age Appropriate Design Code.

Keeping Children Safe in Education (KCSIE) already requires schools to have appropriate measures in place to manage online safety risks. AI tools fall squarely within that remit. A school that allows unfiltered access to generative AI without governance is failing in its safeguarding obligations in the same way it would fail if it provided unfiltered internet access.

Data protection specifics

Schools are data controllers under UK GDPR. When a pupil enters personal information into an AI tool, the school is responsible for ensuring that processing is lawful. Most free-tier AI tools do not meet the data protection standards required for processing children's data.

The ICO has been clear that organisations processing children's data must apply higher standards of protection. For schools, this means conducting a Data Protection Impact Assessment (DPIA) for any AI tool used by pupils, establishing a lawful basis for processing, and being able to demonstrate compliance if questioned by the ICO or by parents.

Assessment integrity

The question of AI and assessment integrity is one that every school must address directly. Pupils are already using AI tools to help with homework, coursework, and revision. Pretending otherwise does not serve anyone.

The practical approach is to define categories of AI use and make them explicit. For example: AI as a research tool (acceptable with citation), AI as a drafting assistant (acceptable for certain tasks with disclosure), and AI as a direct answer generator (not acceptable for assessed work). These categories will vary by subject, year group, and assessment type.

Exam boards have already published guidance on AI use in coursework and non-examined assessment, with JCQ's guidance on AI use in assessments the key reference, and it is updated regularly. Your policy should reference this guidance and commit to updating as exam board positions evolve. The schools that handle this well will be the ones that treat AI literacy as a skill to be taught, not a threat to be feared.

Key insight: Banning AI outright does not work. Pupils will use it regardless. The schools that build assessment integrity into their AI policy, rather than relying on detection and punishment, will produce pupils better prepared for a workplace where AI is standard.

How to get started today

Building an AI policy does not require external consultants or months of committee work. It requires a senior leader to own it, a focused conversation with staff, and a willingness to start with something imperfect rather than waiting for something perfect.

Step 1: Assign ownership

Designate a member of SLT as the AI policy owner. This person does not need to be a technology expert. They need to be someone with the authority to set expectations and the time to keep the policy current.

Step 2: Audit current usage

Spend one week asking staff and pupils what AI tools they are currently using. The answers will almost certainly reveal more usage than you expected. This audit gives you the baseline from which to build your policy.

Step 3: Draft using the 6-point checklist

Use the six components outlined above as your framework. Keep the language simple. Aim for a document that fits on two sides of A4. Share the draft with staff for feedback before finalising.

Step 4: Communicate and train

Dedicate a staff meeting to walking through the policy. Focus on the why, not just the what. Follow up with a brief assembly or form-time discussion so that pupils understand the expectations too.

Step 5: Implement and review

Set a date for the first termly review before you even publish the policy. AI tools evolve quickly, and your policy needs to keep pace. Each review should ask three questions: are the approved tools still appropriate, have new risks emerged, and is the policy being followed in practice?

How Other Me supports schools

Building a policy is the first step. Enforcing it is where technology can help. Other Me is built with education-specific governance features that align directly with the requirements outlined in this guide.

Content filter levels can be configured by age group, ensuring that younger pupils receive stricter filtering than sixth-form students. Schools can define an approved assistants whitelist so that pupils only access AI capabilities that have been vetted and approved by the school. Session time limits prevent excessive use and encourage pupils to engage with AI as a tool rather than a crutch.
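Pieced together, those three controls look something like the configuration sketch below. The field names are hypothetical, chosen to make the idea concrete; they are not Other Me's actual settings schema.

```python
# Hypothetical field names for illustration; not Other Me's actual schema.
governance_config = {
    "content_filter": {        # stricter filtering for younger pupils
        "KS2": "strict",
        "KS3": "strict",
        "KS4": "moderate",
        "KS5": "standard",     # sixth form
    },
    "approved_assistants": [   # whitelist: anything not listed is blocked
        "homework-helper",
        "revision-coach",
    ],
    "session_limits": {
        "max_minutes_per_day": 45,
        "cooldown_minutes": 15,  # encourages AI as a tool, not a crutch
    },
}
```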

Engagement scoring measures the quality of learning interactions rather than simply tracking time spent. This gives teachers visibility into whether pupils are using AI to think more deeply about a subject or simply extracting answers. It is the difference between a pupil who uses AI to explore a topic through guided questioning and one who pastes in a homework question and copies the output.
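As a crude illustration of the signal being measured, consider a toy heuristic like the one below. It is not the product's scoring model; it only shows why interaction quality and time spent are different measurements.

```python
# Toy heuristic for illustration; not Other Me's actual scoring model.
def engagement_score(pupil_messages: list[str]) -> float:
    """Fraction of a pupil's messages that are follow-up questions."""
    if not pupil_messages:
        return 0.0
    questions = sum(1 for m in pupil_messages if m.rstrip().endswith("?"))
    return round(questions / len(pupil_messages), 2)

# Guided questioning scores high; pasting a homework question scores zero.
print(engagement_score(["Why does photosynthesis need light?",
                        "So what happens at night?"]))          # 1.0
print(engagement_score(["Answer question 4 on the worksheet"])) # 0.0
```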

Underneath all of this, Other Me's patent-pending SCRS (Secure Context Retrieval System) provides the data governance foundation that schools need when handling student data. Personal information is protected through scope-constrained retrieval, meaning the system enforces access boundaries before any data is searched or returned. For schools managing sensitive pupil data, including SEN records, safeguarding notes, and pastoral information, this architecture enforces data protection at the infrastructure level rather than relying on individual behaviour.
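The sketch below shows the general principle of scope-constrained retrieval. It is not the SCRS implementation (those details are not public); it only demonstrates that when the permitted scope is applied before the search runs, out-of-scope records can never appear in results, whatever the query says.

```python
# Principle only: scope is enforced before retrieval, not filtered afterwards.
RECORDS = [
    {"id": 1, "scope": "pastoral",     "text": "attendance note"},
    {"id": 2, "scope": "sen",          "text": "reading support plan"},
    {"id": 3, "scope": "safeguarding", "text": "referral note"},
]

def retrieve(query: str, allowed_scopes: set[str]) -> list[dict]:
    # Constrain the searchable set to the caller's permitted scopes first...
    searchable = [r for r in RECORDS if r["scope"] in allowed_scopes]
    # ...and only then search, so record 3 is invisible to anyone without
    # safeguarding access, however the query is phrased.
    return [r for r in searchable if query.lower() in r["text"].lower()]

# A form tutor with pastoral access only sees the pastoral match.
print(retrieve("note", allowed_scopes={"pastoral"}))  # record 1 only
```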

Built for schools: Other Me provides age-appropriate content filtering, approved tool whitelists, session time limits, and engagement scoring, all underpinned by SCRS data governance designed for student data protection. Learn more about Other Me for schools.

The Education Select Committee inquiry will produce recommendations. Ofsted will incorporate AI governance into its inspection framework. The schools that act now, rather than waiting for external pressure, will be the ones best positioned to demonstrate responsible AI leadership. Your pupils are already using AI. Give them the framework to use it well.

Pop Hasta Labs Ltd is registered at UK Companies House (No. 16742039). SCRS Dual-Gate architecture is the subject of UK Patent Application No. 2602911.6.


Abhishek Sharma

Founder & CEO of Pop Hasta Labs. Building Other Me — the governed AI platform with patent-pending security architecture. Based in London.

Try Other Me free for 7 days

AI assistants with governance built-in. No credit card required.
