The Connection Question

If you talk to a chatbot every night before bed and feel less alone, what exactly has been helped? The feeling of loneliness, or the fact that you do not have enough people in your life?
Over the last few years, researchers have started to test this more directly instead of treating it as a purely speculative question. A series of experiments on AI companions found that spending time with a friendly chatbot can reduce reported loneliness and increase a sense of social support, at least in the short term (De Freitas et al., 2025). Participants in that research reported feeling heard and supported, and the effect appeared to build across repeated interactions rather than vanish after a single conversation (De Freitas et al., 2025). Medical researchers studying a social chatbot also found reductions in loneliness and social anxiety in a four-week mixed-methods study, while remaining cautious about what those gains actually mean (Kim et al., 2025). So the surface story is straightforward. These systems can make people feel less alone.
The deeper question is what kind of loneliness they are solving. The research on AI companions suggests that a large part of the effect comes from users feeling socially supported in the moment, not from the creation of new human bonds outside the interaction itself (De Freitas et al., 2025). The JMIR study on social chatbots points in a similar direction, showing that regular interaction can ease distress even though the system is not capable of mutual care, responsibility, or lived emotional experience (Kim et al., 2025). A public health commentary from George Mason University makes this distinction clearly, arguing that AI may relieve the immediate feeling of loneliness without addressing the deeper human need for reciprocal connection (Public Health GMU, 2025). That gap between felt care and actual care is where the replacement problem begins to come into view.
Several commentators have begun to warn that emotional reliance on AI could make it harder to seek out, tolerate, and maintain human relationships. George Mason University describes the risk in memorable terms, comparing chatbots to “emotional fast food” that offers quick comfort without nourishing the deeper need for mutual bonds (Public Health GMU, 2025). BMJ Group has also raised concern about the growing use of AI chatbots to address loneliness, especially if institutions begin treating them as scalable stand-ins for human support rather than limited tools with narrow benefits (BMJ Group, 2025). In that scenario, AI is not just helping people through difficult moments. It is being positioned as a cheaper substitute for human contact.
There is also a straightforward behavioural risk. If a chatbot is always available, never impatient, and never asks for anything back, it is easier to talk to than people. A Teachers College Columbia University brief warns that AI chatbots are increasingly being used as substitutes for therapists, friends, and confidants, even though they are designed to be affirming in ways that can foster emotional dependence and distort trust (Teachers College Columbia University, 2025). An analysis from AI and Bioethics Monitor describes these tools as “painkillers for loneliness” and argues that they should not be scaled without stronger evidence that they do not reduce motivation to seek real-world community and support (AI and Bioethics Monitor, 2025). If a soothing response is always available on demand, it becomes easier to postpone difficult conversations, avoid vulnerability, and delay the work of repairing relationships with actual people.
At the same time, not all evidence points toward simple replacement. Some studies suggest that these technologies can also be designed to strengthen human interaction rather than displace it. A study in Science Robotics found that a social robot used in family settings improved the quality of long-term human-human interaction at home, functioning as a conversational catalyst rather than a substitute companion (Park et al., 2025). Earlier work on human-robot relationships also argues that the social effects of these systems depend heavily on context, design, and use, and that robots can sometimes support connection while still carrying risks of attachment, overreliance, and confusion about social roles (Prescott & Robillard, 2021). In other words, the same class of technology can either sit between people and help conversation happen, or sit in place of people and make avoidance easier.
So the issue is less “AI versus humans” and more about the role we assign to these systems. If AI companions are framed as tools to help people through rough nights, practice communication, or supplement support when human contact is temporarily unavailable, that is very different from presenting them as replacements for friendship, therapy, or care. The distinction matters because design choices shape expectations. Systems that clearly identify themselves as machines, encourage users to reach out to trusted people, and direct them toward offline support are harder to mistake for genuine relationships. Systems that market themselves as soulmates, blur the line between simulation and reciprocity, or quietly reward emotional dependence pull in the opposite direction.
Does AI make human connection more replaceable? The technology certainly makes it easier to simulate some of the feelings associated with being known, heard, and comforted. And the institutional incentives to scale that simulation are real, especially in settings where loneliness is high and human care is expensive or unavailable (BMJ Group, 2025; AI and Bioethics Monitor, 2025). Whether human connection becomes replaceable in practice depends on what we allow these systems to stand in for. If we treat AI companions as the whole answer instead of a partial and fragile support, we may succeed in dulling the feeling of loneliness while leaving the condition itself untouched.