
The Hiring Paradox: When AI promises fairness, but delivers bias instead

  • Writer: Nikita Silaech
  • Nov 12, 2025
  • 3 min read
Image on Unsplash

We've heard the pitch a thousand times: artificial intelligence removes human prejudice from decision-making. No more unconscious bias. No more gut feelings clouding judgment. Just objective algorithms evaluating candidates on merit. Except it's not working out that way. In fact, AI is doing something far more troubling: it's codifying bias at scale, making discrimination look mathematical.


The evidence is stark. A recent study of five leading large language models (LLMs) found that when evaluating identical resumes, AI systematically favors female candidates while disadvantaging Black male applicants (Van Deursen, 2025). The bias isn't random or marginal; it persists across job types, geographic contexts, and models from different developers.


What began as a solution to human bias has become a delivery vehicle for it. In 2015, Amazon discovered that its experimental recruiting engine, trained on a decade of resumes predominantly from male engineers, had taught itself to prefer men. It penalised resumes containing "women's," as in "women's chess club captain." It downgraded graduates from two all-women's colleges. It favored candidates using verbs like "executed" and "captured" -- language more commonly found on male engineers' resumes (Amazon scraps secret AI recruiting tool, 2018; ACLU, 2023). Amazon scrapped the project, but other companies forged ahead anyway. Today, over half of US companies are investing in AI-based recruiting tools.


The tragedy isn't just that Amazon failed. It's that the failure didn't stop the industry.


What makes this worse is how people interact with these systems. A groundbreaking University of Washington study gave participants AI recommendations while they screened candidates. When the AI was moderately biased, people mirrored those biases 80% of the time. When the bias was severe, people followed the AI's recommendations roughly 90% of the time, even though they had the authority to override them (Wilson, 2025).


The researchers called this "automation bias": a cognitive tendency to trust AI decisions over human judgment, particularly when the bias isn't blindingly obvious. We've built a system where bad recommendations get amplified through human deference to algorithms. The upside? Bias fell by 13% when hiring managers completed an implicit association test beforehand. Small but measurable, and it suggests the problem isn't inevitable; it's architectural.


This is where responsibility enters the picture. Bias in hiring doesn't just harm individuals; it reshapes labor markets. It filters out talent systematically based on identity. It codifies existing inequities into future hiring decisions. It's discrimination with a gloss of objectivity. An intersectional lens matters here. Black men face unique disadvantages compared to Black women or white women, not because any single "race" or "gender" variable explains it, but because AI systems have internalised specific, compounded stereotypes (Van Deursen, 2025). 


Current regulatory frameworks treat gender and race as separate categories.

Discrimination doesn't work that way.

Reality is messier.

Organisations need to conduct impact assessments before deployment, analysing outcomes across intersectional groups. They need human oversight -- not token reviews, but genuine decision-making authority retained by humans for candidates flagged by AI. They need diverse training datasets, transparent algorithms, and regular audits to catch proxy discrimination (the algorithm learning to disadvantage women through seemingly "neutral" signals like the university attended). And critically, they need to stop treating AI as objective. A 2025 study found that debiased AI, built with intentional fairness constraints, actually delivers both higher diversity and higher-quality hires (Impress AI, 2025). It's not a trade-off. You don't sacrifice merit for fairness.
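What might such an impact assessment look like in practice? Here's a minimal sketch, not a description of any vendor's tooling: take the organisation's own screening outcomes, compute selection rates per intersectional group, and flag any group that falls below the widely used four-fifths threshold. The records and field names below are hypothetical.

```python
from collections import defaultdict

# Hypothetical screening records: (gender, race, advanced_to_interview).
# In a real audit these would come from the organisation's own applicant data.
records = [
    ("female", "white", True), ("male", "black", False),
    ("female", "black", True), ("male", "white", True),
    ("male", "black", False), ("female", "white", True),
    ("male", "white", True), ("female", "black", False),
]

# Count outcomes per intersectional group (gender x race),
# not per single protected attribute in isolation.
passed = defaultdict(int)
total = defaultdict(int)
for gender, race, advanced in records:
    group = (gender, race)
    total[group] += 1
    passed[group] += int(advanced)

rates = {group: passed[group] / total[group] for group in total}
best_rate = max(rates.values())

# Four-fifths rule of thumb: flag any group whose selection rate is below
# 80% of the most-favoured group's rate. A flag is a signal to investigate,
# not a legal finding on its own.
for group, rate in sorted(rates.items()):
    ratio = rate / best_rate if best_rate else 0.0
    status = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{status}]")
```

The same comparison can be repeated at each stage of the funnel (screening, interview, offer) and re-run with suspect proxy features, such as university names, removed, to see whether the disparity shifts.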


Here's the nagging part: we knew this would happen. Amazon knew. Researchers knew. Policymakers knew. And yet the industry scaled anyway, betting that "this time will be different" or "we'll figure it out later." We didn't. We won't, unless hiring systems are treated as high-risk AI requiring mandatory audits, transparent reporting, and human accountability.


The irony is thick. AI promised to remove bias from hiring. Instead, it's given us a tool to scale discrimination while maintaining plausible deniability. "The algorithm decided," companies can now claim, sidestepping responsibility. But algorithms don't decide; people do. And people who outsource hiring to unexamined AI, knowing the risks, are choosing bias. 


The path forward isn't to abandon AI in hiring. It's to demand that before any system touches a candidate pool, it's been tested for disparate impact, audited for bias, and built with human oversight embedded from the start. It requires transparency about how candidates are ranked and explainability about why they're rejected. It's to treat hiring AI not as a nice-to-have innovation, but as a high-risk system that shapes lives. Until we do that, AI in hiring remains what it's been since Amazon: a solution in search of a fairness problem it keeps creating.
