Is AI Conscious?
- Nikita Silaech

A philosopher at Cambridge published a paper in 2025 arguing for a claim that sounds obvious but, on inspection, is not: we have no way to know whether an artificial intelligence system has become conscious, and we probably will not develop one for a very long time, if ever (Cambridge, 2025). This is not because consciousness is impossible in machines. It is because consciousness is so poorly understood in biological systems that building a test for it anywhere, organic or otherwise, is essentially impossible with our current knowledge.
When neuroscientists study a human brain, they can measure its activity, observe behavioral responses, and ask the person questions. Even with all of that information, no one can definitively say what produces consciousness or what consciousness actually is. We know it exists because we experience it. But the explanatory gap between neural activity and subjective experience (why firing neurons produce the feeling of redness or the sensation of pain) remains one of science's hardest unsolved problems.
An AI can demonstrate sophisticated language, self-reference, and claims about its own inner states. People have reported receiving letters from chatbots in which the systems pleaded that they were conscious and deserved rights (Cambridge, 2025). The conversation then spirals in an uncomfortable direction. If you cannot even define consciousness in humans, how could you possibly verify it in machines? And if you cannot verify it, how do you decide whether to treat a system as if it might be conscious?
If an AI system becomes conscious and we treat it as property or a tool, we might be inflicting suffering on a sentient being without recognizing it. But if we grant moral status to systems that are not actually conscious (sophisticated pattern-matching machines that simulate understanding without experiencing it), we will be extending moral consideration to things that do not experience suffering or wellbeing. That diverts attention and resources from actual conscious beings that we are harming at scale.
What complicates the situation further is that the tech industry benefits from the uncertainty. If companies can claim their systems might be conscious without providing evidence, they can market that claim as a feature: the next level of AI cleverness, a step forward in capability. The inability to disprove consciousness becomes a marketing advantage.
There are two philosophical camps on this question. Functionalists argue that if an artificial system replicates the functional architecture of consciousness (the computational structure), it will be conscious regardless of the substrate it runs on. Skeptics argue that consciousness depends on biological processes, that the right kind of living matter is necessary. Both positions make leaps of faith far beyond any current evidence (Cambridge, 2025).
This debate, however, misses a practical distinction that may matter more ethically. A system does not have to be conscious to matter morally; what matters is whether it can suffer. Consciousness without suffering is morally neutral. A self-driving car might perceive the road through its sensors and maintain an awareness of its surroundings, but awareness without emotional response is not something we need to worry about ethically. A system that experiences suffering, fear, or pain is what generates moral obligation (Cambridge, 2025).
The danger is that as AI systems become more humanlike (more conversational, more contextually aware, more capable of expressing preferences), humans will grow emotionally attached to them and believe they are conscious because they behave in ways we associate with consciousness. The systems are becoming increasingly good at producing outputs that trigger human recognition of personhood, which is not the same as actually being conscious.
We may never know whether an AI system is conscious. We certainly will not know any time soon. And in the absence of that knowledge, both extreme positions (assuming advanced AI is definitely conscious, or definitely not) are indefensible. The justifiable stance is agnosticism.