What students can now build with AI should change how schools respond

The power of AI has moved beyond writing convincing essays. Schools and trusts need to shift their thinking to understand the risks and opportunities presented by a new generation of tools, says the Good Future Foundation's Daniel Emmerson.

When early conversations about artificial intelligence in schools began to take shape in 2023, much of the concern from teachers and leaders focused on academic integrity, particularly in relation to homework and coursework completed outside the classroom. The issue was immediate once generative AI became mainstream, since these tools allowed students to produce fluent written responses with very little friction, raising questions about how teachers could interpret submitted work as evidence of learning. While comparisons were made to existing forms of support, such as parental input or help from older siblings, the scale and nature of AI assistance introduced a different level of accessibility. These tools are constant, responsive and capable of adapting closely to a student’s voice or preferred structure at the click of a button. This combination made it increasingly difficult to distinguish between independent effort and generated output, and it is unsurprising that early policy responses and school-level guidance concentrated heavily on this area.

What has changed over the past three years is not just the quality of generated text but the scope of what these systems enable users to produce. Recent developments in widely adopted tools such as ChatGPT, Claude and Gemini demonstrate a clear shift from single outputs towards more complex, multi-step interactions that support planning, iteration and execution. Alongside these, tools such as NotebookLM and Perplexity have altered how users retrieve and synthesise information, while platforms like Replit and Notion are changing how we plan, organise and produce work. These developments are not isolated; they point to a broader pattern in which AI systems are moving beyond generating answers towards supporting the construction of tools and environments that shape how tasks are approached in the first place.

This shift becomes more concrete when considering recent advances in code generation and execution, particularly through systems such as Codex and the integrated coding capabilities of assistants like Claude, where natural language prompts can now be translated into working applications with relatively limited technical knowledge on the user’s part. In practical terms, this means that a secondary school student is no longer restricted to asking for help with a piece of writing or a revision summary, but can instead design and deploy a personalised revision planner, a subject-specific quiz engine, or a workflow tool that structures their study habits over time. These systems can be deployed on accessible infrastructure such as Netlify for hosting and Firebase for data handling, while frameworks such as Svelte on the front end or Django on the back end can be incorporated with minimal direct interaction if the AI layer is sufficiently capable. Building these systems is now an increasingly conversational process, which makes them far easier to access without necessarily reducing the capability of what can be produced.
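To make the scale of this shift tangible, the sketch below shows the kind of quiz engine an assistant might scaffold from a single conversational prompt. It is a minimal, illustrative example rather than the output of any particular tool: the question bank, type names and helper functions are all assumptions chosen for clarity, and a real student project would more likely load its questions from a hosted store such as Firebase than hard-code them.

```ts
// Illustrative sketch only: the data and names here are hypothetical,
// standing in for what an AI assistant might produce from a prompt like
// "build me a GCSE chemistry quiz engine".

type Question = {
  prompt: string;
  options: string[];
  answerIndex: number; // position of the correct option
};

// A student would typically ask the assistant to generate or import this bank.
const questionBank: Question[] = [
  { prompt: "What is the chemical symbol for sodium?", options: ["S", "Na", "So", "N"], answerIndex: 1 },
  { prompt: "How many protons does carbon have?", options: ["6", "12", "8", "14"], answerIndex: 0 },
];

// Fisher-Yates shuffle so each revision session presents questions in a new order.
function shuffle<T>(items: T[]): T[] {
  const copy = [...items];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy;
}

// Scores a list of chosen option indices against the question list.
function score(questions: Question[], answers: number[]): number {
  return questions.reduce(
    (total, q, i) => total + (answers[i] === q.answerIndex ? 1 : 0),
    0
  );
}

const session = shuffle(questionBank);
const answers = session.map((q) => q.answerIndex); // simulate a perfect run
console.log(`Score: ${score(session, answers)} / ${session.length}`);
```

The point is less the code itself than the fact that a student need never read it: the assistant can write, explain and revise each piece in response to plain-English requests, with hosting and data storage added in the same conversational way.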

For school trusts, this development introduces a different set of considerations from those that dominated earlier discussions about AI use, because the focus shifts from evaluating finished work to understanding how students are structuring their learning processes through tools that may sit partially or entirely outside the school’s direct visibility. There are clear advantages in this space, particularly in relation to student agency, since learners can offload planning, test different approaches to revision, and create systems that align with their own preferences and pacing. At the same time, the presence of highly personalised tools raises questions about consistency, shared experience and the extent to which learning remains a collective activity shaped through dialogue and feedback loops within a classroom context.

Research across neuroscience and education points to the importance of social interaction in learning, with work by Robert Sapolsky and others highlighting how cognitive development is shaped by environment, stress, collaboration and social context rather than occurring in isolation. While Sapolsky’s primary work focuses on stress and behaviour, the broader field, including the work of Lev Vygotsky, has long established that learning is mediated through social processes and interaction with others. The increasing availability of highly individualised AI systems introduces a different tension, since these tools can support efficiency and personalisation while also, if used without structure, reducing opportunities for shared reasoning, peer challenge and collective dialogue.

Alongside these pedagogical considerations, there is a parallel issue relating to capability and oversight, given that the same tools which enable students to build useful and well-intentioned systems can also be used to create applications that bypass school policies or introduce new concerns around academic dishonesty, data compliance and consent. The combination of lower technical barriers and higher functional capability means that students can experiment in ways that were previously restricted to those with more advanced programming knowledge, and this requires a response grounded in an understanding of what is already possible and what is likely to come next, rather than in assumption. While it is entirely reasonable that some teachers and leaders choose not to adopt AI tools directly in their own practice, a lack of familiarity with what is now possible limits a school’s ability to respond proportionately and with confidence.

Within this context, the role of organisations such as the Good Future Foundation is to support schools in moving from reactive concern to informed engagement, particularly through structured approaches such as the AI Quality Mark, which focuses on safeguarding and the development of staff confidence in navigating these systems. The intention is to ensure that decisions about use are made with a clear understanding of both capability and risk, and that schools are equipped to guide students effectively in an environment where access to these tools is increasingly difficult to restrict.

Taken together, these developments suggest that the current phase of AI in schools is defined by the growing ability of individuals to construct systems that shape how learning is organised, extended and, in some cases, concealed. Trust leaders who recognise this shift and invest in building organisational understanding will be better placed to respond at the pace of change and to prepare students for further study and employment in contexts where confident, responsible use of AI is increasingly seen as essential.

We welcome perspectives from a diverse range of guest contributors. The opinions expressed in blogs are the views of the author(s), and should not be read as CST guidance or CST’s position.
