AI is rewriting the social contract between citizens and the state

The social contract is a simple idea that hides a lot of complexity. Citizens give states power, resources and some degree of obedience. In return, they expect protection, basic services and a say in how rules are written and applied. For most of modern history that bargain has been mediated by human institutions. Civil servants, teachers, judges and frontline workers interpret law, exercise discretion and can be challenged when they get things wrong. As governments fold AI systems into public services, that mediation layer is changing in ways that most people never really see.
AI is already embedded in public decision making. Welfare agencies experiment with risk scoring for fraud detection. Immigration systems use automated triage to decide which applications deserve more scrutiny. Police departments test predictive tools to decide where to send patrols. Municipalities lean on algorithmic systems to allocate housing or child protection resources (Wirtz et al., 2020). On paper, the promise is efficiency and consistency. Overstretched public bodies get help, case backlogs shrink and decisions become more data-driven. But in practice, these systems reshape how power flows between citizen and state, and who is accountable when a model gets it wrong.
One way to see that shift is through the language of the social contract. An article in Data and Policy looks directly at “citizens’ stances on AI in public services” using a social contract frame (Cambridge, 2024). The basic question is not just whether systems are accurate, but whether their use can still be squared with ideas of consent, reciprocity and fairness. If you live in a democracy, you have at least a thin expectation that important decisions about your life can be understood, questioned and appealed. When an opaque model ranks you as high risk, or marks your application as low priority, that expectation is put under strain.
The first pressure point is opacity. Even when models are not literally black boxes, the technical logic of a neural network or a complex ensemble is not something a typical citizen, or even a typical caseworker, can easily interrogate. Classic administrative law assumes decisions can be explained in terms of rules or reasons that make sense in ordinary language. Algorithmic systems often cannot do that without translation. Scholars of “algorithmic governance” warn that delegating more decisions to such systems can hollow out accountability, because officials start to rely on model outputs they do not fully understand (Livermore, 2025). The result is a state that still presents a human frontline, while real power sits upstream in technical and procurement choices.
The second pressure point is contestability. Democracies depend on friction. Appeals, public consultations and slow hearings are the places where disagreement is surfaced and sometimes resolved. If AI systems are deployed mainly to streamline those frictions away, as instruments of administrative optimisation, the space for citizens to push back gets narrower. Work on “democratic algorithmic institutions” argues that AI systems that govern people ought to be embedded in new oversight structures that look more like independent commissions or regulatory bodies, not just internal IT projects (Tech Policy Press, 2026). Without that, it becomes too easy to frame every objection as resistance to progress rather than as a legitimate demand for voice.
The third pressure point is asymmetry. The state already holds more data and coercive power than any individual citizen. AI allows that data to be mined and acted on at new scales and speeds. The same infrastructure that lets a city spot traffic patterns in real time can also support pervasive location tracking. The same analytics that help target social support more efficiently can also be used to intensify surveillance of groups seen as risky. A “social contract for the AI age” only makes sense if there are credible guarantees about how far that power can reach and which uses are off the table (Social Contract for the AI Age, 2020).
Importantly, people are not uniformly hostile to AI in the public sector. Survey research on attitudes to algorithmic decision making finds support when systems are seen as tools that assist humans, reduce bias and remain subject to clear safeguards (Oxford Academic, 2025). The problem is that these conditions are often described at a policy level, while real deployments look different. Notice and consent frameworks are vague, explanation rights are hard to use in practice, and appeal routes are not designed for algorithmic cases. That gap between formal promises and lived experience is exactly where trust in the social contract erodes.
If governments want AI to sit inside the existing social contract rather than rewrite it, some baselines are needed. Citizens should be told, in plain language, when AI is materially involved in decisions about them. They should have accessible ways to challenge those decisions and escalate to a human reviewer with real authority to override the system. There should be democratically debated red lines, areas where AI is simply not appropriate, regardless of potential efficiency gains. And communities most likely to be affected, including those with histories of over-policing or punitive welfare treatment, should be involved early in the design and evaluation of systems, not just consulted after the fact.
The alternative is drift. AI systems move from pilots to permanent infrastructure. Procedural friction is gradually stripped away in the name of modernisation. Citizens still vote, still receive letters from agencies, still see human faces at service counters, but the important decisions are shaped upstream by models they never chose. That may sound like a science fiction scenario, but it is a plausible trajectory for the next decade of digital government. The question is whether we treat AI as another tool that must answer to the social contract, or as an excuse to let that contract fade.

