We Can’t Believe Our Own Eyes Anymore
- Nikita Silaech
- Dec 12, 2025
- 3 min read

A study analyzing more than 8,885 long-form social media posts published between 2018 and 2024 found a dramatic acceleration in AI-generated content: only 5.34% of posts were AI-written before 2023, but after ChatGPT became widely accessible, the average jumped to 24.05% (IACIS, 2025).
The problem is not simply that AI is generating content; it is that AI-generated content spreads measurably faster than human-written content. Research shows that AI-generated false information spreads approximately 70% faster than true information on social media platforms (PMC, 2025).
The mechanism is straightforward: AI is trained to generate content that engages and persuades, and engagement and persuasion are decoupled from truthfulness. An AI optimized for virality is an AI optimized for spreading whatever is most likely to be shared, regardless of its accuracy.
By 2024, more than 1,200 AI-generated fake news sites were operating, a ten-fold increase in automated misinformation production over the previous generation of manually run disinformation campaigns (PMC, 2025).
But the deeper problem is that traditional misinformation detection methods are failing against AI-generated content. These systems rely on identifying patterns of style that distinguish human writing from AI writing, and the models generating misinformation are improving faster than the detectors built to catch them.
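To make the style-based approach concrete, here is a minimal, purely illustrative sketch in Python (assuming scikit-learn is available): a character n-gram classifier of the general kind such detectors are built on. The training snippets and labels are hypothetical, and real detectors train on far larger corpora with richer features; the point is only that the features describe how text is written, not whether it is true.

```python
# Toy sketch of a style-based detector: character n-gram TF-IDF features
# feeding a logistic regression classifier. The tiny labeled corpus here is
# purely illustrative; real systems train on large, curated datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = AI-written, 0 = human-written.
texts = [
    "In conclusion, it is important to note that several factors contribute to this outcome.",
    "Moreover, the aforementioned considerations underscore the significance of the issue.",
    "honestly i just think the whole thing is overblown, my cousin saw it happen lol",
    "We stood in the rain for two hours and nobody ever came out to explain anything.",
]
labels = [1, 1, 0, 0]

# Character n-grams capture surface style (phrasing habits, punctuation rhythm)
# rather than factual content, which is exactly why this approach degrades
# as generators learn to imitate human stylistic quirks.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Probability that a new snippet is AI-written, according to surface style alone.
print(detector.predict_proba(["Furthermore, one must consider the broader implications."]))
```

Because the decision rests entirely on surface style, a generator tuned to mimic human phrasing erodes the very signal the detector depends on, which is the asymmetry the rest of this piece is about.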
During the 2024 US election debates, AI-generated misinformation often took the form of emotionally charged statements and cleverly disguised opinions that traditional detection models completely failed to identify, since the models were looking for factual inaccuracies rather than subtle manipulations.
Deepfakes created using generative AI can fool over 50% of human evaluators, so even when people attempt to manually verify content by watching videos and listening to audio, they fail more often than they succeed at distinguishing real from artificial.
What makes detection even more difficult is that deepfakes no longer carry obvious signs of AI generation; the technology has advanced to the point where a casual viewer cannot spot manipulation from the visuals alone.
Advanced AI models like GPT have the contextual understanding needed to detect subtle misinformation, but they require so much processing power that running them in real time across social media platforms is practically infeasible.
During rapidly evolving events like political debates, the narratives and context shift too quickly for detection systems to recalibrate their understanding of what constitutes misinformation. The detection systems often lag behind the pace of information dissemination.
The problem isn't just that AI-generated misinformation spreads false claims successfully. The sheer ubiquity of AI-generated content creates what researchers call trust decay: an erosion of people's ability to believe that any information is real.
As people become aware that AI can generate convincing videos, audio, and text, they begin to discount all information as potentially artificial. Even true information becomes harder to believe because the existence of plausible falsehoods makes verification nearly impossible.
A person watching a video of a political candidate saying something controversial now has to assume it might be a deepfake before accepting it as real. That is a very different, and far more corrosive, posture than deciding whether to believe information based on its source and its consistency with known facts.
The result is a vacuum in which agreeing on any interpretation of reality becomes harder and harder, because there is no shared ground of factual understanding left to stand on.