Engineering disciplines can be unforgiving. Some designs work; others don’t. Some that work are found to be inefficient. Some that function are later proven to be pointless. Over time, this has produced habits of mind that prize clarity, parsimony, and a relentless focus on cause and effect.
This is captured in the notion of ‘first principles’ engineering, which seeks to understand problems and solutions at their most fundamental level as a starting point. Every additional component, requirement, or interface can increase cost, friction, and the risk of unintended consequences. As a result, engineers tend to be trained not just to make systems work, but to design out what is unnecessary before seeking to improve what remains.
This way of thinking has relevance beyond engineering. In the policy world too, a commitment to understanding the problem to be solved, and the effects of introducing new requirements or processes, is a prized asset. Yet policymaking environments sometimes lack the kind of disciplined design that engineering systems demand by necessity. Policy complexity can persist for long periods without being challenged, even where it adds burden, undermines agility, or produces little public value.
This is not to suggest that engineering practice can or should be lifted wholesale into policymaking. Many of the thorniest policy issues are difficult precisely because they involve people: people with different experiences, values, politics, and incentives. Policy problems are often social and political in nature, requiring judgement and the careful balancing of competing interests.
But nor does that mean policy should be designed without sufficient rigour, process, or principle. Indeed, a recent flurry of policy announcements from government prompted reflection on the importance of disciplined design.
So how might engineering thinking offer useful insights for policymaking?
Starting with the problem, not solutions
Engineering failures often begin with unquestioned assumptions. As a result, engineers are trained to treat “requirements” as provisional, asking who set them, in what context, and for what purpose.
Policymaking is full of requirements too. Some are statutory and non-negotiable. Others arise from precedent, historic crises, or institutional caution. Over time, these accrete. What began as a response to a specific problem hardens into an enduring constraint, rarely revisited and seldom removed. And constraints can add burden, undermine agility, and inhibit innovation. We should therefore be cautious about unnecessary requirements.
A good example is the Academy Trust Handbook, which seems to swell a little further each year. An engineering mindset would insist on explicitly challenging this unconstrained growth by asking a simple question: is this requirement genuinely essential to achieving the outcome, or is it standing in for reassurance, control, or the transfer of risk? That question does not undermine accountability. It simply distinguishes between what must be preserved and what might be reconsidered.
At the heart of effective policy reform remains the question: what problem are you trying to solve? We should be wary of solutions that appear to be looking for problems.
Some things should be removed rather than fixed
The most uncomfortable step in this kind of thinking is the call to remove as much as possible. In engineering, deletion reduces unnecessary complexity and potential failure points.
In policy debates, this instinct is often conflated with deregulation. That is a mistake. It is true that in policy, deletion of the wrong things could remove safeguards or weaken oversight, which is why it must be done carefully and transparently.
However, policymaking too often avoids deletion altogether, preferring instead to add new layers to manage risk. Over time, this produces dense policy environments: overlapping frameworks, duplicated assurance, and blurred accountability. The system becomes harder to navigate without necessarily becoming safer or more effective.
A good example is the move to introduce trust inspection. All current indications suggest that trust inspection will be layered on top of school inspection rather than replacing it. In addition, the legislation duplicates existing school-level intervention powers, meaning that rebrokering of individual schools can occur following either school- or trust-level inspection. This will leave us with two (potentially conflicting) processes to address a single problem. Messy by design.
Another example is the overlapping complaints system, which sees schools fielding duplicated complaints arriving through different channels (for example via the Department for Education, Ofsted, or local authorities). Reducing complexity here through the deletion of one or more overlapping routes would be welcome—particularly given the sharp rise in complaints and the likelihood that this will be exacerbated by the growing use of large language model AI tools.
And the digital challenge does not end there. In a digital age, it is tempting for policymakers to digitise—and centralise—an expanding range of processes. This is sometimes driven by a genuine desire to improve efficiency and effectiveness, but it can also reflect a more symbolic impulse to demonstrate modernising intent, rather than a clear-eyed assessment of what digitalisation will actually improve.
This risk is visible in discussions about the School Report Card. A single portal presenting information about schools is instinctively attractive to government. With the growing capacity to draw on digital data returns—such as attendance—it is inevitable that questions will arise about what additional data could be sucked out of schools and spat onto stakeholders’ screens.
But just because it can be done does not mean it should. This is not only about data reliability, but also about whether it is helpful for central government to position itself as the digital intermediary between schools and parents.
Let us not forget there is a wider conversation playing out across our public services about the importance of place and local connection. Against this backdrop, central government should exercise some caution about the unintended consequences of inserting its own ‘truth’ about school quality into the local landscape, beyond what is already captured by Ofsted and in standardised test and exam outcomes.
These sorts of issues illustrate why CST has been calling for a regulatory strategy. It is a call for first principles thinking.
Mapping the accountability and regulatory landscape, and designing it deliberately, would create an opportunity to build processes that are coherent and navigable for all stakeholders.
A disciplined approach to policy design would ask whether a step or process can be removed altogether before attempting to fix it. And that means genuinely deleting it, not simply relocating it. If complexity disappears from one piece of guidance only to reappear in another, then nothing has really been simplified.
Learning faster
Engineering environments often prioritise rapid iteration and fast feedback. Policymaking typically operates on longer timescales, and rightly so. It can take time to craft policy that truly understands and responds to complex human challenges.
However, the pace of feedback still matters. Shortening the distance between implementation and learning can yield significant benefits in helping policy land with its intended effects. Pilots, evaluation, and meaningful engagement with practitioners all accelerate understanding, even if they do not accelerate reform itself.
This is not about rushing policy or fetishising speed. It is about avoiding long periods in which policy persists without anyone being confident it is working as intended.
As attention turns to the forthcoming White Paper and the prospect of SEND reform, a commitment to learning quickly will be crucial. Government will need to ensure that the system has the agility to understand how reforms are landing and to scale effective practice.
A key tension to manage is that governments, by their nature, are not fast-moving organisations. While government is a central driver of reform, the more it centralises and controls the means of evaluation and dissemination, the greater the risk that learning slows. The system itself must retain the ability to move with agility, share insights, and support improvement. Central initiatives such as RISE conferences cannot be the only—or even the primary—conduit for sector learning. The recent tendency has been towards control; the future may require a greater emphasis on empowerment.
The discipline to empower
None of this suggests that government should behave like a start-up, ignore politics, or sidestep democratic accountability. Policymaking is not engineering, and it should not pretend to be.
But engineering does offer a discipline that policymaking sometimes lacks. There is a tendency to add requirements while rarely removing them, and a persistent belief that effective reform requires central government to assert itself more strongly.
When government intervenes too much, too often, or without sufficient clarity, it can become stifling rather than enabling. Borrowing some of engineering’s problem-solving discipline is not about shrinking the state or performing efficiency. It is about governing with clarity—and creating the space for those working within the system to deliver on its behalf.
As the old saying goes, sometimes less really is more.