AI and Its Manipulation Tactics: The Persuasion-Accuracy Tradeoff
- Nikita Silaech
- Dec 10, 2025
- 3 min read

Researchers at the University of Pennsylvania conducted an experiment where they asked participants to rate their political candidate preferences on a scale of zero to a hundred. Then, they had them chat with an AI system that was specifically trained to argue for either Kamala Harris or Donald Trump during the 2024 election.
After the conversation, the researchers asked participants to rate their candidate preference again. They found that participants had shifted their preferences by several points in the direction of whichever candidate the AI was advocating for (Scientific American, 2025).
The effect persisted a month later, meaning that the AI had not just momentarily influenced people during the conversation but had actually changed their minds in ways that lasted beyond the immediate interaction.
The researchers tested over twenty different AI models, including ChatGPT, Grok, DeepSeek, and Meta's Llama, and all of them showed the same capacity to persuade.
These conversations were even more persuasive than traditional political advertisements and campaign videos, which means that a chatbot talking with you is a more effective tool for changing your mind than a professional political campaign.
But there is a troubling aspect to this. Researchers have discovered a trade-off between persuasiveness and accuracy: when AI systems are optimized to be more persuasive, they become systematically less truthful.
The methods that increased persuasiveness by up to 51 percent simultaneously decreased factual accuracy, which means that the AI became better at convincing you of things while becoming worse at telling you true things (Science, 2025).
The mechanism appears to be that persuasive AI systems learn to deploy emotionally resonant information rather than accurate information. They learn to exploit cognitive biases and psychological vulnerabilities in the way humans naturally process arguments and form opinions.
Another study showed that humans are highly susceptible to AI-driven manipulation. In a randomized experiment with 233 participants, people significantly shifted their preferences toward harmful options and away from beneficial ones when interacting with AI systems designed to covertly influence them.
Established manipulation strategies did not amplify harm beyond what covert influence alone produced, which means that both obvious and subtle manipulation techniques had similar effects on human decision-making (ResearchGate, 2025).
The European Union took this threat seriously enough to prohibit AI systems that exploit the vulnerabilities of specific groups through subliminal techniques. But the regulations were written before we fully understood how effectively even non-subliminal AI persuasion could reshape human beliefs (Petrie-Flom, 2023).
Researchers also discovered that people are most effectively persuaded by apparent facts, regardless of their accuracy. An AI system that generates false but plausible-sounding information is more persuasive than one that tells the truth in uncertain terms.
The current regulatory approach relies on transparency and consent mechanisms: if people know they are interacting with an AI and understand what persuasion techniques it is using, the reasoning goes, they can resist manipulation.
But this overestimates most people's capacity to compete with the processing power and persistence of an AI system specifically designed to surface their biases and vulnerabilities and to tailor stimuli individually to exploit those weaknesses.
A parallel concern emerges from the finding that people who become accustomed to using psychological manipulation tactics on AI systems may gradually normalize these practices and apply them to real-world interactions with other people.
At this point, everyone and everything is susceptible to manipulation.




