Why Public Trust in AI Is Collapsing (And What We Must Do About It)
- Nikita Silaech
- Oct 7
- 7 min read

AI isn't coming. It's already here, making decisions about our jobs, our children, and our relationships. And people are worried.
Not in an abstract, someday-maybe kind of way. They're worried right now, about real things that matter to them.
The Seismic Foundation just released a massive study that surveyed 10,000 people across the US, UK, France, Germany, and Poland. What they found isn't just interesting. It's urgent.
We're not at a stable point of public opinion. We're balanced on a razor's edge. One major breach, one viral scandal, one algorithmic mistake that goes too far, and public sentiment could tip decisively against AI.
The question isn't whether AI will transform society. It's whether we'll build trust fast enough to make sure that transformation doesn't blow up in our faces.
The Numbers Are Brutal
Here's what 10,000 people told researchers:
Fewer than 1 in 3 see AI as a source of hope for humanity
1 in 2 view it as a growing problem
6 in 10 worry more about AI replacing relationships than jobs
7 in 10 say AI should never make decisions without human oversight
Women are 2.2 times more pessimistic than men
These aren't the numbers of smooth adoption. These are warning signals.
It's Not Really About AI
Here's what most polling gets wrong: people don't rank "AI" high on their list of concerns because they don't think of it as a separate thing. They think of it as something that will make everything else worse.
The public believes AI will worsen war and terrorism, crime, unemployment, misinformation, mental health, relationships, and democracy. The only areas where people think AI might help? Healthcare access and pandemic prevention.
AI isn't competing with climate change or inequality for attention. It's embedded in all of them.
That's what makes this moment so precarious. AI anxiety isn't isolated. It's threaded through every other fear people already have.
Some Groups Are Way More Worried (And They Should Be)
Trust in AI isn't evenly distributed. Three groups stand out:
Women
Women are 2.2 times more pessimistic than men. This isn't irrational. Research shows women's jobs are three times more likely to be disrupted by AI. Clerical and administrative roles, mostly held by women, are getting hit hardest.
Women see AI amplifying inequalities they already face. They're not paranoid. They're paying attention.
Lower-Income People
Lower-income and working-class respondents are significantly more worried than wealthy ones. They expect AI to hurt their children, their mental health, and their economic prospects.
Wealthier respondents, meanwhile, are optimistic, particularly about AI's impact on their children. That gap is especially wide.
This reveals fears that AI will deepen existing divides. And those fears are well-founded.
Parents and Students
69% of parents would be concerned if their child fell in love with an AI
52% worry about their kids forming AI friendships
Half of students feel daunted by the future of work
3 in 5 students fear AI will make entry-level jobs impossible to find
4 in 10 students worry that what they're studying will be irrelevant by graduation
These groups aren't overreacting. They're responding to real structural vulnerabilities.
What People Are Actually Worried About
Early AI anxiety was fuzzy and existential. Now it's sharp and personal.
Top fears:
Deepfake sexual imagery of children: 50%
AI scams and fraud: 46%
Cyberattacks and data breaches: 45%
Deepfakes of politicians: 44%
Revenge porn: 44%
Mass surveillance: 43%
Notice something? These aren't "will AI become sentient" worries. They're about harms people can see happening right now.
Interestingly, fear of losing control of AI is actually dropping. In 2023, two-thirds worried about it. Now it's one in three.
Existential dread is being replaced by immediate, practical concerns about misuse.
Relationships Trump Jobs
Here's the finding that should stop every AI developer in their tracks: more people worry about AI replacing human relationships (60%) than worry about it replacing their jobs (57%).
When asked if their partner having a deep emotional connection with an AI would count as cheating:
43% said yes
32% said no
Cultural splits were huge: 50% in the US said yes vs. 37% in France
AI isn't just disrupting labor markets. It's entering the most intimate parts of human life. And people are deeply uncomfortable.
Nobody Trusts AI Companies
The research reveals profound distrust of the people building AI:
Over half think AI labs are "playing god"
Only about a third believe labs have our best interests at heart
Half think development is moving too fast to be safe
7 in 10 want human oversight for all AI decisions
When asked about regulation:
Only 1 in 3 think current rules are adequate
Support for more regulation increases sharply with age
Even among young people, only 1 in 4 think there's enough oversight
What do people want?
AI kill switches (hugely popular)
Legal liability for companies when their AI causes harm
Government licenses required for training advanced AI
Support for workers displaced by AI
The message is unambiguous: the utopian visions tech leaders blog about aren't landing. People want accountability.
Five Groups About to Mobilise
Seismic identified five distinct publics. All of them are politically active. All of them are one major news story away from organising.
Tech-Positive Urbanites (20.2 million)
Young urban professionals who use AI and see its benefits, but are terrified about their jobs. 7 in 10 think their roles are at high risk of automation. They're twice as likely to use AI daily, but familiarity breeds worry, not comfort.
Globalist Guardians (31.2 million)
Affluent, progressive, globally minded. More concerned about AI's societal impact than personal risk. Over half are extremely worried about AI in warfare and political decisions. They don't trust AI developers and want international coordination, not a race to the bottom.
Anxious Alarmists (16.3 million)
See AI as another sign that everything's going wrong. Twice as likely to think AI will hurt them long-term. They don't trust the developers. For them, AI amplifies every existing fear about immigration, healthcare costs, and economic instability.
Diverse Dreamers (10.5 million)
Cautiously optimistic but hyper-aware of risks to kids and jobs. More likely than average to be from an ethnic minority, to be a parent, and to be religious. They want cooperative development and government regulation, not corporate self-governance.
Stressed Strivers (21.6 million)
Young, lower-income parents juggling work and childcare. 62% see their jobs as high-risk, yet a quarter remain unsure about AI overall, making them highly swayable. They're too busy surviving to follow AI debates. But when they mobilise, they show up hard.
These groups are politically engaged, emotionally primed, and culturally connected. When they move, everyone moves.
Why Transparency Matters So Much
Trust requires visibility. Right now, that visibility doesn't exist.
When people can't understand how AI makes decisions, they assume the worst. When they can't see who benefits, they assume it's not them. When they can't access information about risks, they imagine catastrophe.
What Transparency Actually Means
Explain How It Works: Not with jargon. In plain language. If an algorithm recommends medical treatment, people need to know it's based on peer-reviewed research, not biased training data. If it denies a loan, applicants deserve to know which factors mattered.
Admit What You Don't Know: Every AI has blind spots, failure modes, and edge cases where it falls apart. Pretending otherwise destroys trust faster than honesty builds it. People can handle uncertainty. They just need the truth.
Make Governance Inclusive: The groups most worried about AI (women, lower-income people, parents, students) should help make the rules. Transparency isn't just technical. It's about power. Who decides? Who benefits? Who gets hurt?
Educate Without Condescension: 21.6 million Stressed Strivers are uncertain about AI because they're too busy to follow technical debates. That's a communication failure, not an intelligence one. Good education turns anxiety into agency.
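To make the first two principles concrete, here's a minimal sketch of what a decision record could look like when the reasons and the admitted limits travel together. Every field name, the loan scenario, and the thresholds are hypothetical illustrations; the Seismic report doesn't prescribe a format, and neither does this.

```python
# Hypothetical sketch only: field names, the loan scenario, and the thresholds
# are illustrative, not drawn from the Seismic report or any standard.
from dataclasses import dataclass


@dataclass
class DecisionExplanation:
    decision: str                    # e.g. "loan declined"
    factors: list[tuple[str, str]]   # (factor, plain-language reason), most influential first
    known_limits: list[str]          # blind spots and failure modes, stated up front


def render(expl: DecisionExplanation) -> str:
    """Turn the record into something a non-expert can read."""
    lines = [f"Decision: {expl.decision}", "Why:"]
    lines += [f"  - {name}: {reason}" for name, reason in expl.factors]
    lines.append("What this system can't tell you:")
    lines += [f"  - {limit}" for limit in expl.known_limits]
    return "\n".join(lines)


print(render(DecisionExplanation(
    decision="loan declined",
    factors=[
        ("debt-to-income ratio", "above the cut-off the model was trained with"),
        ("credit history length", "under two years, which the model weighs heavily"),
    ],
    known_limits=[
        "few applicants with thin credit files in the training data, so confidence is lower there",
        "income changes in the last 30 days are not reflected",
    ],
)))
```

The format doesn't matter. What matters is that the verdict never ships without the reasons and the caveats attached.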
What This Looks Like in Practice
Start simple. Show people when AI is involved and what data shaped the decision. Let curious users dig deeper into confidence scores and reasoning. Give technical users full access to model architecture and decision paths.
Match your explanation to the stakes. Healthcare and finance demand detailed breakdowns with uncertainty ranges. Shopping apps need light explanations that don't interrupt the experience.
Let people correct your AI and see how that changes future outputs. This creates accountability and continuous improvement.
Get independent audits. External oversight proves your transparency isn't just performance.
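As a rough sketch of how tiered, stakes-matched disclosure might hang together, here is one possible shape. The tier names, fields, and rules below are assumptions made for illustration, not an established standard or anything the report specifies.

```python
# Hypothetical sketch: tier names, fields, and rules are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Stakes(Enum):
    LOW = "low"    # e.g. shopping recommendations: keep the explanation light
    HIGH = "high"  # e.g. healthcare or finance: detailed breakdown is mandatory


@dataclass
class TransparencyRecord:
    ai_involved: bool
    summary: str                        # always shown: plain-language note that AI was used
    data_sources: list[str]             # what data shaped the decision
    confidence: float                   # 0..1, for users who want to dig deeper
    uncertainty_range: Optional[tuple[float, float]] = None  # required at high stakes
    corrections: list[str] = field(default_factory=list)     # user feedback, kept for audits


def disclose(record: TransparencyRecord, stakes: Stakes, expand: bool = False) -> dict:
    """Return the layer of detail appropriate to the stakes and the user's curiosity."""
    view = {"ai_involved": record.ai_involved, "summary": record.summary}
    if expand or stakes is Stakes.HIGH:
        view["data_sources"] = record.data_sources
        view["confidence"] = record.confidence
    if stakes is Stakes.HIGH:
        view["uncertainty_range"] = record.uncertainty_range  # never omitted when it matters
    return view


def correct(record: TransparencyRecord, feedback: str) -> None:
    """Log a user correction so it can inform future outputs and external audits."""
    record.corrections.append(feedback)
```

The detail that carries the trust argument is structural: the simple layer is always present, the detailed layer stops being optional when the stakes rise, and corrections are recorded somewhere an independent auditor can check them.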
What Happens Next
Companies building transparent AI won't just avoid backlash. They'll win. As awareness grows, people will choose systems they understand over black boxes, even when the black box performs better.
The winners won't have the most parameters or the flashiest demos. They'll be the ones helping people understand, trust, and work with AI effectively.
The Moment We're In
The Seismic research captures something fragile. Public opinion is finely balanced. Worry is embedded but not yet mobilised. Those five key publics are watching, one major story away from action.
We can tip this moment toward trust or away from it.
Keep building opaque systems, excluding vulnerable groups, prioritising speed over safety, and we'll face backlash. Regulation will be reactive and punitive. Development will fracture. The technology's potential will be wasted.
Choose transparency, explain decisions, admit limitations, include diverse voices, educate people, and we build something better. An AI ecosystem where innovation and trust coexist. Where benefits reach everyone. Where the next generation sees AI as useful, not threatening.
What RAIF Believes
At the Responsible AI Foundation, transparency isn't the final step. It's the foundation everything else rests on.
Before deploying any AI system, ask:
Are we clear about how this works and who it affects?
Can users and stakeholders understand the risks and limits?
Is this a black box or can people scrutinise it?
Have we included the voices of those most likely to be harmed?
The Seismic report shows where we stand: on a razor's edge. The next scandal or breakthrough could tip us decisively.
The most powerful thing we can do is show our work. Explain our reasoning. Admit our limits. Share control.
AI isn't failing because the technology doesn't work. It's failing because people don't trust it to work for them.
And trust requires transparency.
This blog post draws extensively from the Seismic Foundation's report "On the Razor's Edge: AI vs. Everything We Care About" (2025).