04 Mar 2025

Bridging the Gaps: AI in Education and the Urgent Call for Inclusion

AI is transforming education at a breathtaking pace. At this year’s BETT expo, the Secretary of State’s keynote made it clear: AI is no longer a futuristic concept; it is actively reshaping classrooms today. Yet, as this rapid transformation unfolds, critical gaps remain—particularly for students with special educational needs and disabilities (SEND), assistive technology users, and those who require accessible learning tools.

At techUK, we’ve been exploring these issues through our Education & EdTech Programme and Digital Ethics Programme. During BETT week, we brought together leading voices in AI ethics, education, and accessibility to discuss the challenges and opportunities AI presents in SEND education. The consensus was clear: if AI adoption does not prioritise accessibility and inclusion, it risks deepening educational inequalities rather than closing them.

AI Adoption: Fast but Uneven

The integration of AI in schools is growing rapidly, but it remains fragmented. AI tools are often introduced in silos—driven by enthusiastic teachers or individual initiatives—rather than as part of a cohesive, system-wide strategy. This lack of consistency creates uncertainty around regulation, data protection, and ethical use.

Meanwhile, government strategies for AI in education are still evolving. The UK Government’s AI Opportunities Plan outlines ambitions for AI in public services, but concrete implementation strategies for education remain unclear. While AI is often championed for its efficiency and innovation, discussions about inclusion, accessibility, and digital infrastructure are lagging behind.

Without a structured approach, we risk exacerbating existing educational inequalities. AI should not just be an innovation for well-resourced schools; it must be a tool that benefits all students, especially those in SEND and accessibility contexts.

AI and the Risk of Widening Educational Inequality

AI holds immense potential to bridge gaps in education, offering tools that personalise learning and improve accessibility. However, without deliberate action, it could also deepen existing inequalities. Research from the Digital Poverty Alliance highlights the stark disparities in access to AI tools, digital infrastructure, and teacher training:

  • 75% of teachers in well-funded schools report having enough devices for most students, compared to just 25% in disadvantaged schools.
  • In underprivileged areas, only 2% of teachers say students have adequate access to devices and connectivity for homework.

This digital divide raises an urgent question: will AI be a force for equity, or will it reinforce existing disparities? If we fail to embed accessibility and inclusion into AI from the outset, we risk leaving behind those who stand to benefit the most.

AI in SEND Education: A Game-Changer or a Missed Opportunity?

One of AI’s most promising applications is in assistive technology for SEND students. AI-driven tools such as text-to-speech, adaptive tutoring, and eye-tracking software are already enabling greater independence for students with diverse needs. However, our discussion at BETT highlighted key concerns:

  • AI systems are only as inclusive as the data they are trained on. Many AI learning models are developed using mainstream educational data, which often fails to reflect the needs of SEND students. This raises the risk of AI-driven learning platforms unintentionally excluding the very learners they claim to support.

  • Co-production is essential. AI solutions must be designed in collaboration with educators, SEND specialists, and students themselves. Too often, EdTech solutions are imposed on classrooms without proper input from those who understand their practical challenges.

  • Transparency is crucial. Many AI tools operate as opaque “black boxes,” making it difficult for teachers and students to understand how decisions—such as personalised learning recommendations or grading—are made. This lack of clarity is particularly concerning for SEND learners, where tailored support is essential.

AI, Autonomy, and Student Outcomes

AI-powered learning tools have the potential to engage students who struggle with traditional education models. However, our discussions raised important questions about the fine line between assistance and autonomy. AI should scaffold learning, enabling students to work independently rather than completing tasks on their behalf. If AI automates too much, it risks undermining student agency rather than enhancing it.

This is particularly relevant for AI’s role in assessment. Traditional exam-based systems often disadvantage SEND learners. AI-powered adaptive assessment tools could revolutionise this process by:

  • Adjusting the difficulty and format of tests to suit individual students.
  • Providing real-time feedback to support learning rather than just measuring performance retrospectively.
  • Offering alternative assessment methods, such as speech-based or project-based evaluations, which better accommodate neurodiverse learners.
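The adaptive idea behind the first bullet can be sketched very simply. Below is a minimal, purely illustrative "staircase" rule in Python that nudges difficulty up after a correct answer and down after an incorrect one; the function names are hypothetical, and real adaptive assessment engines use far richer models (such as item response theory) alongside accessibility settings.

```python
# Illustrative sketch: a simple "staircase" rule for adaptive difficulty.
# All names here are hypothetical, not drawn from any real product.

def next_difficulty(current: int, answered_correctly: bool,
                    lowest: int = 1, highest: int = 5) -> int:
    """Step difficulty up after a correct answer, down after an
    incorrect one, staying within the allowed range."""
    step = 1 if answered_correctly else -1
    return min(highest, max(lowest, current + step))

def run_assessment(responses: list[bool], start: int = 3) -> list[int]:
    """Return the difficulty level presented before each response."""
    levels = []
    level = start
    for correct in responses:
        levels.append(level)
        level = next_difficulty(level, correct)
    return levels

# A student who answers correctly twice, then stumbles:
print(run_assessment([True, True, False]))  # → [3, 4, 5]
```

Even this toy rule shows why inclusive design matters: the choice of starting level, step size, and bounds encodes assumptions about learners, and those assumptions must be tested with SEND students, not just mainstream cohorts.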

However, if these innovations are not designed with inclusion in mind, they could simply replicate existing biases rather than overcoming them.

Supporting Teachers in AI Adoption

Teachers are on the front line of AI adoption, yet many feel unprepared and uncertain about how to integrate AI into their classrooms. While some educators have embraced AI tools, uptake remains uneven due to gaps in training and policy clarity.

Surveys indicate that a third of teachers are already using AI in some capacity, but many hesitate due to concerns about educational integrity and a lack of formal guidance. The introduction of initiatives like the EdTech Evidence Board and AI training programmes for teachers is a step in the right direction. However, more structured support is needed to ensure that AI integration is ethical, effective, and accessible to all students.

From Conversation to Action

Our discussions at BETT reinforced a fundamental truth: without a structured approach to AI in education, we risk widening existing inequalities rather than closing them. AI must be implemented with clear policies, inclusive design principles, and robust teacher support.

Governments, EdTech companies, and educators must work together to ensure AI serves all students—especially those in SEND and under-resourced schools. The future of AI in education should not be one of unfulfilled potential, but of deliberate, inclusive innovation.

Want to learn more?

Read the full brief here.
For all things EdTech, reach out to Austin Earl ([email protected]). For all things ethics and assurance, contact Tess Buckley ([email protected]).