What Happens When Algorithms Stop Showing You Things They Think You Disagree With
- Nikita Silaech
- Nov 13, 2025
- 5 min read

Recommendation systems exist because there is too much content and not enough attention, so they make choices about what surfaces on your feed or appears in your search results based on what they think will hold your interest (Wisvora, 2024). The logic sounds straightforward enough until you recognize that engagement and truth are not the same thing, and algorithms have no reason to prefer one over the other unless someone explicitly builds that preference into how they measure success.
When Netflix or YouTube or Instagram optimizes for watch time or clicks, it is implicitly choosing to show you more of what you already interact with, and this becomes a problem when you realize that content designed to trigger emotional reactions, such as outrage, fear, or certainty, tends to keep people engaged longer than content that complicates their existing views (Wisvora, 2024; BBC, 2020). The system learns this pattern from user behavior and amplifies it, not because the engineers intended to radicalize anyone, but because engagement metrics reward that outcome and nothing in the design pushes back against it.
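To make that loop concrete, here is a minimal sketch in Python. The names and numbers are invented, not any platform's actual code, and real ranking systems are vastly more complicated, but the shape of the incentive is the same: the only signal is predicted engagement, so whatever held attention yesterday gets ranked higher today.

```python
# Minimal sketch of an engagement-driven ranking loop (hypothetical names,
# not any platform's actual system). The only signal is predicted
# engagement, so whatever held attention yesterday ranks higher today.

from collections import defaultdict

# Learned engagement scores per content category, updated from behavior.
engagement_scores = defaultdict(lambda: 1.0)

def rank_feed(candidate_posts):
    """Order posts purely by predicted engagement for their category."""
    return sorted(candidate_posts,
                  key=lambda post: engagement_scores[post["category"]],
                  reverse=True)

def record_interaction(post, watch_seconds):
    """Feedback step: longer watch time raises the category's future score."""
    engagement_scores[post["category"]] += watch_seconds / 60.0

# Nothing in this loop measures accuracy, diversity, or harm.
posts = [{"id": 1, "category": "news"}, {"id": 2, "category": "outrage"}]
record_interaction(posts[1], watch_seconds=300)   # outrage held attention
print([p["id"] for p in rank_feed(posts)])        # [2, 1]
```

If outrage-bait holds attention longest, it climbs the ranking, gets shown more, collects more watch time, and climbs further. Nothing here was programmed to polarize; the loop simply has no other value to optimize.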
There is a distinction that research keeps trying to draw between filter bubbles and echo chambers, and it matters because the two problems require different solutions, even though they often get tangled together in conversations about polarization. A filter bubble is what happens when an algorithm narrows your information diet because it learned your preferences from what you clicked on in the past, so it keeps serving similar content, and you gradually stop seeing anything that contradicts what you already believe (PMC NIH, 2022).
An echo chamber is something different; it describes the social choice people make to surround themselves only with others who think like them and to dismiss or ridicule anyone who disagrees, which algorithms may reinforce but do not necessarily create (PMC NIH, 2022; BBC, 2020). The research suggests that people are good at building their own echo chambers, and algorithms are good at making those chambers seem permanent and inevitable, which means the algorithm is not the root cause, but it is still a powerful accelerant.
The reason platforms tend to blur these categories is that acknowledging the difference would require admitting something uncomfortable: their recommendation systems were deliberately designed to maximize engagement, engagement happens to correlate with polarization, and they have known this for years without changing the underlying metric they optimize for (Wisvora, 2024).
A recent case study of Instagram showed that the platform's algorithm amplifies selective exposure and confirmation bias through the feed, the explore page, and content recommendations, increasing polarization in measurable ways and reducing exposure to diverse viewpoints, as the company's own research documented (Sage Journals, 2024). The platform can see in real time whether a user is being shown an ideological mix or the same worldview repeatedly, and the data shows that users who see less diversity tend to spend more time on the app, which is why the algorithm keeps working that way even after researchers publish papers about what it does.
Education is now adopting the same pattern, and the consequences are sharper because the decisions are not about what content appears on a feed but about who gets admitted, who gets flagged as at-risk, and who gets tracked into remedial versus advanced coursework.
Algorithmic bias in these systems works through multiple paths. The training data reflects historical discrimination, so the algorithm learns to replicate it; the loss function the designers chose optimizes for efficiency or cost savings rather than fairness; and the feedback loop means that students labeled as at-risk receive weaker interventions and therefore perform worse, confirming the original prediction (Journal with Advanced Research and Reviews, 2025).
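The third path, the feedback loop, is the easiest to miss, so here is a toy simulation of it. Every number is invented and no real early-warning system is this simple; the point is only to show how a prediction can manufacture its own evidence.

```python
# Toy simulation of the self-confirming feedback loop (invented numbers,
# not from any real early-warning system). A student flagged as "at-risk"
# is routed to a weaker intervention, performs worse, and the next model
# retrained on those outcomes sees its own prediction "confirmed".

def simulate_year(flagged, baseline_score=70.0):
    # Assumption for illustration: flagged students receive a cheaper,
    # less effective intervention (+2 points) than unflagged ones (+8).
    support_boost = 2.0 if flagged else 8.0
    return baseline_score + support_boost

# Two students with identical baselines; only the flag differs.
flagged_outcome = simulate_year(flagged=True)     # 72.0
unflagged_outcome = simulate_year(flagged=False)  # 78.0

# The retraining data now records that flagged students scored lower,
# so the next model learns the flag was "right" and keeps assigning it.
print(flagged_outcome, unflagged_outcome)
```

The disparity in outcomes was created entirely by the response to the label, yet it lands in the training data looking like proof that the label was accurate.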
When researchers tested models used for early-warning systems, they found that the algorithm often flagged Black students as at-risk while leaving similarly positioned white students alone, not because anyone programmed racial discrimination into the code but because the algorithm learned patterns from historical data where students of different races received different resources and opportunities (Journal with Advanced Research and Reviews, 2025). The algorithm has no concept of fairness. It just recognizes patterns and reproduces them.
The regulatory response has leaned heavily on transparency, based on the idea that if you can see how an algorithm works and understand what data it trained on and what its performance looks like across different groups, then bad outcomes become the user's responsibility rather than the builder's problem (GDPR Local, 2025; Anecdotes AI, 2025).
This sounds appealing and it sounds fair, but it runs into a practical wall the moment you realize that knowing how an algorithm works does not mean you know whether it should exist at all. If an algorithm explicitly uses GPA and test scores to make admissions decisions and you know it does that, and you know that GPA and test scores correlate with school funding and family wealth, then you have perfect transparency into why certain groups are disadvantaged, but you have not solved the problem unless you choose a different target or accept that the system is working as intended (Journal with Advanced Research and Reviews, 2025; AI Multiple, 2025). Explainability is important but it can’t be a substitute for choosing whether to use the system in the first place.
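Here is what that looks like in miniature. The rule below is completely interpretable, with invented weights and invented applicants, and transparency still does nothing to close the gap, because the gap lives in the inputs.

```python
# Sketch of the transparency problem: a fully interpretable admissions rule
# (invented weights and data) that is easy to explain and still reproduces
# a resource gap, because the inputs themselves carry it.

def admit(gpa, test_score):
    """Completely transparent rule: every weight is visible."""
    return 0.6 * (gpa / 4.0) + 0.4 * (test_score / 1600) >= 0.90

# Hypothetical applicants: same ability, different school funding.
well_funded = {"gpa": 3.9, "test_score": 1450}   # test prep, small classes
under_funded = {"gpa": 3.6, "test_score": 1250}  # same ability, fewer resources

print(admit(**well_funded))   # True
print(admit(**under_funded))  # False
# Perfect transparency into *why* the second applicant was rejected,
# and the disadvantage is untouched unless the target itself changes.
```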
Organizations that take responsible AI seriously tend to separate the problem into two parts, and this distinction is where the real work happens. The first part is finding where bias enters the pipeline, testing the model across different demographic groups to see if it treats people differently, and actually changing the model when you find disparate impact instead of just documenting it in a report (AI Multiple, 2025; SuperAGI, 2025).
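The technical half of that first part can be as unglamorous as the sketch below: compare outcomes across groups and compute a ratio. The data and group labels are invented, and the selection-rate ratio (sometimes called the four-fifths rule heuristic) is only one of several disparate-impact checks, but it shows how little machinery the measurement step actually needs.

```python
# Sketch of a basic disparate-impact check across two groups.
# Decisions and group labels are invented; the selection-rate ratio
# ("four-fifths rule" heuristic) is one common screening metric.

def selection_rate(decisions):
    """Fraction of a group that received the favorable outcome."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a_decisions, group_b_decisions):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a = selection_rate(group_a_decisions)
    rate_b = selection_rate(group_b_decisions)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions (1 = not flagged as at-risk, 0 = flagged).
group_a = [1, 1, 1, 0, 1, 1, 1, 1]  # 87.5% favorable
group_b = [1, 0, 1, 0, 0, 1, 1, 0]  # 50.0% favorable

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.57, below the 0.8 heuristic
```

Finding the disparity is the easy half. The hard half is what the article describes next: being willing to change the model, and the target, when the number comes back low.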
The second part is institutional, and it is harder. It is deciding what your system should actually optimize for, whether that target serves the people using the system or the people building it, and whether you are willing to accept lower efficiency or lower profit if that is what fairness requires (Journal with Advanced Research and Reviews, 2025; GDPR Local, 2025).
Most organizations get stuck on this second part because the answer usually involves admitting that the current target - engagement, efficiency, cost - was never justified beyond "it makes money" or "it scales the business," and changing it requires sustained commitment rather than a one-time audit.
If you design a system to maximize engagement and you get polarization, that is not a side effect; it is the expected output of that objective function. If you design an admissions algorithm to flag at-risk students efficiently, you will get a system that sorts students into tiers based on what the historical data tells it, and that historical data includes centuries of discrimination.
The algorithms themselves are not evil or broken; they are doing exactly what they were built to do. The question is whether what they were built to do serves the people they affect or whether it just serves the organizations that deployed them (Journal with Advanced Research and Reviews, 2025; Sage Journals, 2024). Once that question is asked, the technical and policy solutions start to come into focus, but getting there requires admitting that the problem is not algorithmic at all; it is a choice about what gets optimized for and who that optimization serves (Wisvora, 2024).




