Is AI Making Us Less Curious?

Curiosity used to require some friction. You wondered about something, tried to remember it, failed, looked it up, and maybe got distracted along the way by something else equally interesting. Now you type a question into a chatbot, it answers immediately, and you move on. The answer is faster, the process is cleaner, but something gets lost in the efficiency. The impulse to wonder, to test, to explore beyond the first response starts to shrink, and it happens so gradually that most people do not notice until the habit is already gone.
The problem is that AI gives fast answers, and fast answers change what your brain learns to do first. Research on how people use the internet for information shows that cognitive offloading, the tendency to rely on external tools rather than internal memory, increases with each use (Storm et al., 2016). In one study, participants were divided into two groups to answer trivia questions: one group used only their memory while the other used Google. When participants were later given easier questions and allowed to choose their method, those who had previously used the internet were significantly more likely to reach for it again, and they did so much more quickly. Remarkably, 30% of the participants who had previously consulted the internet did not even attempt to answer a single simple question from memory (Storm et al., 2016).
When a tool reliably produces a plausible response, the incentive shifts from exploring the question to accepting the output and moving on. A study published by researchers at Microsoft and Carnegie Mellon University in January 2025 surveyed over 300 knowledge workers about their use of generative AI in the workplace and found that the more confident workers were in AI's capability to complete a task, the more they felt themselves letting go of the wheel. Participants reported a perceived reduction in critical thinking when they felt they could rely on the AI tool, raising the risk of over-reliance on the technology without examination (Gizmodo, 2025).
The researchers noted that this was especially true for lower-stakes tasks, where people tended to be less critical, but warned that this could lead to long-term reliance and diminished independent problem-solving. By contrast, when workers had less confidence in the AI's ability to complete the task, they found themselves engaging their critical thinking skills more actively, and they reported more confidence in their ability to evaluate what the AI produced and improve upon it on their own (Gizmodo, 2025).
The way this shows up in daily life is often subtle. You start asking "what's the best" instead of "what are the options," and your questions become narrower because you are optimizing for an answer rather than for understanding. You stop trying to remember where you read something and just ask the chatbot to summarize it, which can work perfectly well for efficiency but reduces the mental push to make sense of something deeply in the first place. As Dr. Benjamin Storm, lead author of the cognitive offloading study, explained, "Memory is changing. Our research shows that as we use the Internet to support and extend our memory we become more reliant on it. Whereas before we might have tried to recall something on our own, now we don't bother" (EBME, 2016).
A Harvard Gazette article in November 2025 collected perspectives from six faculty members across disciplines on whether AI is dulling critical thinking, and the responses pointed to the same underlying tension (Harvard Gazette, 2025). One professor noted that no learning occurs unless the brain is actively engaged in making meaning and sense of what you are trying to learn, and this will not happen if you just ask ChatGPT to give you the answer to the question the instructor is asking. Another emphasized that one of the traps of generative AI, even when you are using it well, is that if you are using it just to do the same old stuff better and quicker, you have a faster way of doing the wrong thing (Harvard Gazette, 2025).
The concern is that the design of these tools rewards speed and completion in ways that can train users into passivity, even when the tool is accurate. When you confuse output with learning, you might use AI in ways that are not conducive to actual cognitive development. The Microsoft study found that users who had access to generative AI tools tended to produce a less diverse set of outcomes for the same task compared to those without, which the researchers interpreted as a potential deterioration of critical thinking (Gizmodo, 2025).
This is less a matter of moral panic or nostalgia for harder times than of recognizing that tools shape habits, and habits shape what your mind gets good at. The proliferation of what one Harvard professor called "cheap intelligence," more code, text, and images than ever before, means that the skills of discernment, evaluation, judgment, thoughtful planning, and reflection matter more now than ever. Another noted that we already know the tools we use during cognitive labor can change how we do that work, pointing to evidence that taking notes longhand leads to greater recall than taking notes by keystroke, and that predictive text features built into word processors change our word choices (Harvard Gazette, 2025).
The key to making AI a positive force rather than a negative one is not letting it do your thinking for you. Generative AI does not understand human context, so it will not offer wisdom about social and emotional situations; those are not part of its repertoire. It is, however, very good at absorbing large amounts of data and making predictions from it, in ways that can augment your thinking if you stay in the driver's seat.
A simple practice is to use AI to generate hypotheses and counterarguments, then force yourself to pick one thing to verify or to try without the tool. Another is to notice when you are asking for an answer versus asking for options, and to deliberately choose the harder path sometimes just to keep the cognitive muscle from atrophying.


