You Are Not Arguing With the Internet


If arguments on the internet feel different lately, faster, sharper, and more circular, you are not imagining it. AI is no longer just another participant in these arguments; increasingly, it is the stage manager.


Most online arguments now start long before you type anything. They start when a recommender system decides what shows up in your feed.  


A 2024 PNAS Nexus study calls recommendation algorithms a force that “profoundly shape users’ attention and information consumption,” and shows that even small tweaks to how news is recommended can shift what people see and how much news they consume at all (Chan et al., 2024). A 2025 survey of AI-based recommenders goes further, arguing that recommendation systems across social media, retail, and mapping already influence “most actions of our day-to-day lives” by nudging what we click and how we move through platforms (Pedreschi, 2024).


When algorithms narrow your "interest profile" into a stable pattern (this kind of story, that kind of creator, this tone, that side), your arguments inherit those constraints. You think you are reacting to "the internet." You are actually reacting to the slice of the internet your recommender decided you should see.


AI-powered personalization is also about emotional temperature.


A 2025 study on AI-driven personalization in social media marketing found that AI-personalized content increased trust and perceived usefulness, even when it did not obviously increase engagement itself (Teepapal, 2025). Another 2025 analysis of consumer digital behavior notes that more than half of surveyed consumers (58%) say they now use generative AI tools instead of traditional search engines to find product and service information, and that people are getting accustomed to instant, tailored answers (Mishra, 2025).


Put those together and you get a simple dynamic: the more a feed or chatbot feels "for you," the more seriously you take what it shows you, and the less patience you have for friction or slowness. Arguments start from a more personalized baseline: you see content that matches your existing concerns, phrased in ways that feel familiar, and anything that cuts across that pattern feels more intrusive and more deserving of a sharp response.


AI does not just decide what you see; it is also helping people decide what to say.


Users are already turning to generative AI for advice, drafts, and instant summaries instead of browsing through links themselves (Mishra, 2025). A 2025 ethnographic study of science YouTube production shows how creators explicitly adjust scripts, pacing, and thumbnails to please recommendation algorithms, not just audiences, because they know algorithmic approval is the gate to visibility (Milzner et al., 2025). And a 2025 paper on streaming services notes that recommender systems now actively shape users’ “aesthetic choices,” not just help them navigate options (Chapman, 2025).


Those studies describe a pattern: people who make content, whether posts, videos, or even comments, are already adapting their style to what algorithms reward. In practice, that often means more immediacy, clearer emotional cues, and less ambiguity. The same pressures bleed into how we argue: sharper hooks, cleaner one-liners, fewer hedges, and more performance.


Generative tools then sit on top of this ecosystem as accelerants. If it is easier to ask a chatbot, "Write a firm but polite reply pushing back against this," than to think through your own response, more arguments will arrive pre-formatted: fluent, structured, and slightly detached. The surface looks more adult. The underlying disagreement can remain just as stuck.


Policy and design work is starting to acknowledge that recommender systems need deliberate guardrails. A 2025 policy report on “better feeds” argues that many platforms still optimize for short-term engagement rather than long-term user well-being, and urges designers to treat recommender choices as public-interest decisions, not just product tuning (Georgetown KGI, 2025).


That is the context for how we argue now. AI systems rank and route what we see, nudge what we feel, and increasingly help write what we say. None of this removes responsibility from human users. But it does mean that online arguments are no longer just clashes between people. They are also artifacts of how our feeds are built.  


If we want better arguments, we cannot only tell people to be kinder or more informed. We also have to ask a harder question: what kinds of fights do our systems make most likely, and who benefits from the way we currently disagree?
