# AI, GDPR & Safeguarding: What School and MAT Leaders Must Get Right in 2026
- Mar 9
A practical, human-centred guide for schools and multi-academy trusts navigating artificial intelligence, data protection and pupil safety.

Artificial intelligence has arrived in schools far faster than most leadership teams expected. Teachers are already using tools like ChatGPT to draft reports, write lesson plans and reduce admin workload. Students are experimenting with AI homework helpers on their own devices. And in some cases, staff are unknowingly entering worksheets, behaviour notes or even pupil names into free online platforms without realising they may be putting the school at risk.
As someone who works closely with schools every day, I see this pattern repeatedly. The issue isn't just technological. It's about governance, safeguarding and compliance — and school leaders ultimately carry the responsibility.
This guide breaks down what MAT leaders and SLT must know to build a confident, lawful AI policy for schools that protects pupils, supports staff and keeps you on the right side of the ICO.
Navigating this alone is tough. The Tech Shepherd works with MAT leaders and SLT as an IT consultancy and fractional IT leader — helping trusts build practical AI governance frameworks without the jargon. Get in touch for a free initial conversation.
What Happens When Staff Upload Pupil Data Into AI Tools?
When a member of staff types a pupil's name, SEN information or behaviour notes into an online AI tool, several invisible things can happen behind the scenes.
Many free AI services retain inputs to improve their models. That means sensitive pupil data could be stored on servers outside the UK — and in rare cases, could even reappear in someone else's output. This is the scenario no school wants to end up answering to the ICO about.
Under UK GDPR, transferring personal data outside the UK requires appropriate safeguards: either an adequacy decision or an Article 46 transfer mechanism such as standard contractual clauses. Check that your AI vendors have these in place. For children's data, the bar is higher still. SEN information will often count as special category (health) data, which needs an additional Article 9 condition, and legitimate interests will rarely be a defensible lawful basis for processing it.
Remember: schools are the data controller. If a teacher uploads pupil data into a tool without a Data Processing Agreement (DPA), the school is liable — not the teacher.
Practical action: create a clear AI safeguarding policy for schools across your trust — no pupil-identifiable information may be entered into any AI tool unless it is on your approved list and covered by a signed DPA. This is best owned at trust level by your DPO, rather than left to individual schools to interpret.
Data Processing Agreements for AI Tools in Schools
A DPA is not a tick-box exercise — it's your legal safety net. And for AI tools specifically, many standard DPAs simply don't go far enough.
Your DPA must clearly state:
- whether your data is used to train or fine-tune the vendor's models;
- how long data is stored and how it can be deleted;
- where data is processed and whether sub-processors are used;
- breach notification timelines (the 72-hour rule applies here too);
- how Subject Access Requests will be handled; and
- what security measures are in place, such as encryption and access controls.
Some vendors quietly allow model training in their general Terms of Service, even when their DPA suggests otherwise. Always review both documents together and speak with your DPO if anything looks unclear.
Cloud vs On-Premises AI Tools: A Guide for Schools
Not all AI tools carry the same level of risk.
- Cloud AI tools send data outside your network, require a DPA, and may use inputs for training unless that is contractually forbidden.
- On-premises tools keep data entirely on-site, avoid external model training, and don't require a DPA — though they do require more budget and local IT capacity.
For most trusts, a hybrid approach works well. Use cloud AI for anonymised admin work, and reserve local or enterprise tools for anything involving identifiable pupil data. This keeps risk low while still unlocking the productivity benefits AI can offer.
Age Restrictions & Parental Transparency
Most major AI platforms set a minimum user age of 13 or 16, depending on the region and the tool. Yet in many schools, AI tools are casually recommended to pupils far younger.
The UK's Age Appropriate Design Code (the Children's Code) requires online services likely to be accessed by children to build in strong privacy protections, so if pupils are asked to use a service, the school should be able to show it has checked the service meets that standard. From a trust perspective, this is both a safeguarding and a reputational risk: if a parent raises a concern and you can't demonstrate that age-appropriate checks were in place, that's a difficult conversation for any CEO or headteacher to have.
Your school's privacy notice should clearly list the AI tools you use and explain how pupil data is handled, processed and protected. Parents must not be surprised by AI use in your school — transparency builds trust.
Preventing Misuse & Exposure to Inappropriate Content
AI tools are powerful — but not always predictable. Pupils can deliberately or accidentally bypass filters. Common risks include "jailbreak" prompts that produce explicit or violent content, custom AI personas that allow inappropriate role play, AI image tools being misused to generate fake or harmful images, and pupils accepting hallucinated AI answers as factual.
Your Acceptable Use Policy should include clear guidance on AI use, with defined reporting routes for safeguarding concerns.
Updating Your Safeguarding Policy for AI
Although KCSIE doesn't yet include AI-specific guidance, the underlying principles still apply. A robust AI safeguarding policy for schools should include:
- a definition of AI tools within your online safety parameters;
- clarity on who approves AI tools at trust level (DPO, DSL or a named SLT lead);
- clear rules for staff on prohibited AI uses;
- a process for reporting AI-related safeguarding concerns;
- updated references in your Online Safety curriculum; and
- an annual review trigger aligned with your AUP.
This ensures AI becomes part of your existing safeguarding culture rather than a bolt-on afterthought.
6 Key Questions Before Approving Any AI Tool
Before adding any AI tool to your school or trust's toolkit, ask:
1. Do we have a DPA? If the vendor won't sign one, the conversation ends there.
2. Is staff or pupil data used to train AI models? This must be explicitly prohibited in the contract.
3. Where is data stored? Don't assume; verify.
4. What is the minimum user age? If the tool will be used by under-18s, you must have documented risk assessments in place.
5. Have staff tested the content safeguards? Never rely solely on vendor assurances.
6. What problem does this tool actually solve, and is it worth the risk? AI should address a clear priority, not become another shiny distraction.
Building a Culture of Safe, Confident, Lawful AI Use in Your Trust
The trusts that will get the most from AI are those that move quickly but govern well. That means embedding your AI policy for schools within your Acceptable Use Policies, safeguarding framework, staff training, parental communications and school improvement strategy — not treating it as a standalone document that sits in a folder and gets reviewed once a year.
For multi-academy trusts, a single AI approval framework rolled out consistently across all schools is far more effective than leaving individual headteachers to navigate this alone. It protects everyone, creates clarity for staff, and means you can move faster when a genuinely useful tool comes along. This is exactly where MAT IT leadership makes a measurable difference — bringing strategic oversight to decisions that would otherwise fall through the gaps.
With the right controls in place, AI is a powerful asset — one that supports your staff, enhances learning and protects the rights of every pupil in your care.
Ready to Get This Right Across Your Trust?
If your trust is ready to use AI confidently and compliantly, The Tech Shepherd can help. Whether you need a fractional IT leader to lead this work internally, or an external IT consultancy to review your current position and build out your AI policy framework, we work alongside school leaders to make it practical and straightforward — no unnecessary complexity, no jargon.
Book a free call with The Tech Shepherd today.