The Lack Of Explainability In AI
- Nikita Silaech
- Dec 26, 2025
- 3 min read

The EU AI Act demands explainability for high-risk systems. ISO 42001 requires documented, repeatable controls over AI decision-making. NIST's AI Risk Management Framework expects you to explain how and why your systems make decisions. The requirement appears in every governance framework published in the last three years.
But there is a huge gap between what these frameworks require and what companies actually deploy. Companies are shipping AI systems into production without building explanation mechanisms. They are deploying algorithms that score people on creditworthiness, hirability, and risk without being able to tell those people why they were denied (Bismart, 2025).
The problem starts with the definition itself. Explainability sounds simple until you try to operationalize it. Does it mean you can explain how the model works mathematically to a data scientist? Does it mean a customer can understand why their loan was rejected? Does it mean regulators can audit whether the system was working as intended? These are three completely different problems requiring three different solutions. Most organizations pick the easiest one and call it done.
A survey found that 61 percent of organizations claim to be at the strategic or embedded stage of Responsible AI maturity (PwC, 2025). But only 56 percent say their first-line teams actively lead Responsible AI efforts. Twenty-one percent are still in the training stage, building governance frameworks, and 18 percent have not moved beyond foundational policies. Yet the regulatory deadline for EU AI Act high-risk compliance is 2026 to 2027. The gap between where companies claim to be and where they actually are is not closing fast enough.
Explainability is expensive since it requires technical infrastructure. You need tools like LIME or SHAP to generate local explanations for individual decisions. You need dashboards that show users why a particular outcome occurred. You need audit trails that document every decision and its rationale. You need versioning systems that track what changed in the model between last month and this month. Most importantly, you need people who understand both the mathematics and the business context well enough to communicate it (ApproveIt, 2025).
That costs money. For a bank implementing explainability across a credit scoring model that processes thousands of applications per day, the infrastructure, tooling, and personnel requirements exceed budgets allocated for model monitoring.
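To make the tooling point concrete, here is a minimal sketch of what a per-decision explanation looks like in practice, assuming the open-source shap library and a scikit-learn model. The feature names and data are illustrative, not drawn from any real credit system.

```python
# Minimal sketch: a local (per-decision) explanation for one credit application.
# Assumes scikit-learn and shap are installed; data and feature names are illustrative.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-ins for applicant features.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 1_000),
    "debt_ratio": rng.uniform(0, 1, 1_000),
    "late_payments": rng.integers(0, 5, 1_000),
})
# Toy label: 1 means the application is denied.
y = (X["debt_ratio"] + 0.2 * X["late_payments"] > 0.9).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes the model's raw score for one applicant to each feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.3f}")
# Positive contributions push toward denial, negative toward approval.
# This per-decision breakdown is the kind of record an audit trail would log.
```

The sketch is the easy part. The cost lives in everything around it: running this for every production decision, storing the outputs, surfacing them to applicants in plain language, and keeping the explanations valid as the model is retrained.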
The second problem is that explainability conflicts with performance. Simpler, more interpretable models like logistic regression or decision trees can generate clear explanations, but they often perform worse than black-box models like neural networks. Organizations face a choice between accuracy and explainability. They choose accuracy and then pretend they will retrofit explainability later, which rarely happens (Bismart, 2025).
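A quick sketch of what the interpretable side of that trade-off buys you, assuming scikit-learn and synthetic, illustrative data: a logistic regression's coefficients read directly as per-feature effects on the odds of an outcome, which is the kind of built-in explanation a neural network does not offer.

```python
# Sketch of why simple models are easier to explain: each logistic regression
# coefficient maps directly to a per-feature effect on the odds of the outcome.
# Assumes scikit-learn; data and feature names are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "income": rng.normal(0, 1, 500),        # standardized features
    "debt_ratio": rng.normal(0, 1, 500),
    "late_payments": rng.normal(0, 1, 500),
})
# Toy label: 1 means denied.
y = (0.5 * X["debt_ratio"] - 0.8 * X["income"] + rng.normal(0, 1, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient is a global, human-readable statement:
# a one-unit increase in the feature multiplies the odds of denial by exp(coef).
for feature, coef in zip(X.columns, model.coef_[0]):
    print(f"{feature}: odds multiplier per unit = {np.exp(coef):.2f}")
```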
Research on explainable AI found that global consumer trust in AI has declined from 61 percent to 53 percent over the past five years (Bismart, 2025). Sixty percent of companies using AI admit to having trust issues with their algorithmic models. These companies are not worried about explainability for its own sake. They are worried because they know their systems are biased, unreliable, or make decisions they cannot justify. Explainability would expose those problems. So they avoid it.
The regulatory framework says you need to document your AI system, explain how it makes decisions, and demonstrate that it is safe. The actual practice is that companies document what they intended the system to do, explain how it works in principle, and demonstrate that it works well on benchmark data. What they do not do is explain what their system actually does in production, or why its real-world decisions sometimes harm people.
Amazon's hiring algorithm is a telling example. The company trained a model on historical resumes, which were dominated by male engineers, and the model learned to prefer male candidates. When Amazon discovered the bias, it tried to remove it. But the model kept finding new proxies to discriminate on, and Amazon eventually scrapped the entire project. The strange part is that even after the bias was identified, Amazon could not explain why the model was making the decisions it did (ApproveIt, 2025).
If you cannot explain a system, you cannot control it. If you cannot control it, you cannot make it fair. If you cannot make it fair, you should not deploy it in situations where it can harm people.