As large language models take on an expanding range of cognitive tasks, researchers are beginning to question the hidden cost of outsourcing thinking to artificial intelligence.
Research scientist Nataliya Kosmyna noticed something unusual while reviewing internship applications. Many of the cover letters she received were strikingly similar—well-structured, highly polished, and often starting with generic introductions before quickly shifting into vague or loosely connected references to her research.
To her, it became clear that many applicants were using large language models (LLMs)—the technology behind tools like ChatGPT, Google Gemini, and Claude—to generate their writing.
Around the same time, during her teaching work at the Massachusetts Institute of Technology, Kosmyna began noticing another pattern. Students seemed to be retaining less information than in previous years and forgetting course material more quickly.
As someone who studies human-computer interaction, she began to suspect a possible link: that increasing dependence on LLMs might be subtly affecting how students process, store, and recall information.
This observation led her to begin exploring whether growing reliance on AI tools could be changing not just how people work—but how they think.
Researchers like Kosmyna worry that if we become too reliant on AI, it could affect the language we use and even our ability to carry out basic cognitive tasks. There is now a growing body of research suggesting that this “cognitive offloading” to AI can have a corrosive effect on our mental abilities. The consequences could be alarming and may even contribute to cognitive decline.
It’s well known that the tools we use can change how we think. With the advent of the internet, for instance, answers that once required deep research could be found by plugging a simple query into a search box. As the use of search engines increased, research found we became less likely to remember details, something dubbed “the Google effect”. (Some argue, however, that the internet also serves as an external memory system that frees up our brains for other tasks.)
But there is now growing alarm that as we offload even more of our thinking to LLMs and other forms of AI, the effects on our memories and ability to solve problems could get worse. Artificial intelligence tools can write convincing poetry, give financial advice and provide companionship. Students are increasingly outsourcing their own work to AI tools as well.
Studies have already shown that young people might be particularly vulnerable to the negative effects that using AI can have on key cognitive skills like critical thinking. Kosmyna, however, wanted to dig deeper into the potential effects.
Reduced mental effort
She and her colleagues at MIT Media Lab recruited 54 students to write short essays and split them into three groups. One was instructed to use ChatGPT. A second could use Google search, with AI-generated summaries turned off. The third didn’t use technology. Each student’s brainwaves were measured while they worked.
The essay topics were deliberately open-ended, meaning little research was needed for the task, with prompts including questions around loyalty, happiness or our daily life choices.
The results haven’t been published in a scientific journal yet, but they were nonetheless eye-opening, according to Kosmyna. Those who used their own minds had a brain that was “on fire”, showing widespread activity across many parts of the brain, she says. The search engine-only group still showed strong activity in the visual parts of the brain, but the ChatGPT group showed notably less brain activity – it was reduced by up to 55%.
“The brain didn’t fall asleep, but there was much less activation in the areas corresponding to creativity and to processing information,” says Kosmyna.
ChatGPT also affected people’s memories. After submitting their essays, people in the AI group were unable to quote from what they had written, and several felt they had no ownership over the work. Other studies have also shown that people become less able to retain and recall information when they use AI tools such as ChatGPT.
While the findings are still undergoing peer review, they echo those from other studies. One study by researchers at the University of Pennsylvania suggests that some people undergo something they term “cognitive surrender” when using generative AI chatbots. This means they tend to accept what the AI tells them with minimal scrutiny and even allow it to override their own intuition.
Similar effects can be found outside the world of AI chatbots too – even in life-or-death situations. A recent multinational study found that medical professionals who used an AI tool to screen for colon cancer for three months were subsequently worse at spotting tumours without it.
Outsourcing work to AI also risks losing much of the creativity that produces original work, warns Kosmyna. The essays that students in her study wrote with ChatGPT looked very similar and were described by the teachers marking them as “soulless”, lacking originality and depth, Kosmyna says. “One of the teachers asked if students were sitting next to each other because the essays were so similar.”
While studies such as these illustrate the short-term effects LLMs can have on the brain, the long-term impacts are far less clear. The study by Kosmyna and her colleagues provides a glimpse. Four months after the initial study they asked the students to write another essay, but this time those who had used ChatGPT were told to work without LLM support. The neural connectivity in their brains was lower than in those who switched the other way, perhaps indicating that they had not engaged with the topics properly in the first place.
Cognitive decline
Yet LLMs can be a positive tool to aid thinking, provided we don’t outsource our mental tasks to them entirely, says computational neuroscientist Vivienne Ming, author of Robot Proof. She’s concerned, though, that this is not how most people interact with the technology.
Her reasoning comes from research she conducted for her book, during which Ming asked a group of students at the University of California, Berkeley to predict real-world outcomes, such as the price of oil. She found that the majority of participants simply asked AI and copied the answer.
She measured their brains’ gamma wave activity – a marker of cognitive effort – and found very little activation. Again, her research is yet to be published, but Ming worries that if her findings are borne out in further studies, they could have long-term implications. Other research, for example, has linked weak gamma wave activity to cognitive decline later in life.
“That’s really worrying,” Ming says. “If that is a natural mode for people to interact with these systems – and these are smart kids – that’s bad.” Deep thinking, she says, is our superpower. “If we don’t use it, the long-term implications for cognitive health are pretty strong.”
That’s because relying on LLMs requires very little cognitive effort, Ming adds, when sustained cognitive effort is exactly what a healthy brain needs.
A small subset of participants, though – fewer than 10% – worked differently, using AI as a tool to gather data that they then analysed themselves. These individuals made more accurate predictions than other participants and showed stronger brain activation too.
Almost two decades ago, Ming predicted that within 20 to 30 years we would see a statistically meaningful rise in dementia rates directly related to our overreliance on Google Maps. “I meant it to be provocative,” Ming says. “If you don’t have to think about navigating then there’ll be some detectable effect.”
While we don’t have data on this exact prediction, the increased use of GPS has been linked to worse spatial memory over time, according to one study of 13 people conducted over three years. And poor spatial navigation may be a potential predictor of Alzheimer’s disease, according to another study.
It’s clear that the more active our brain is, the more protected it is from cognitive decline. LLMs, then, Ming says, could not only reduce creativity but also harm cognition and potentially increase the risk of dementia.
As AI tool use increases, we need to work with it in a way that benefits us rather than harms us. Ming suggests that ultimately, the goal could be a form of “hybrid intelligence” where humans and machines “do the hard stuff” together. By this she means we need to think first and use tools to challenge us later, rather than simply letting them answer questions for us. Kosmyna agrees, suggesting that we learn subjects without AI tools first to build a foundation, and only then consider using LLMs.
Ming recommends using what she calls the “nemesis prompt” to challenge your own thinking. It works by prompting an AI to act as a “lifelong enemy” or nemesis, then asking it to explain in detail why your ideas are wrong and how you can fix them, forcing you to defend and refine your arguments rather than simply accepting the answers it provides.
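For readers who want to wire this habit into a script rather than a chat window, a rough sketch of the idea might look like the following Python snippet, which uses the OpenAI Python client. The wording of the system prompt, the model name and the helper function are illustrative assumptions, not Ming’s own recipe.

# A minimal sketch of a "nemesis prompt", assuming the openai Python package (v1+)
# is installed and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

NEMESIS_SYSTEM_PROMPT = (
    "Act as my lifelong intellectual nemesis. Whatever idea I present, "
    "explain in detail why it is wrong, point out weak assumptions, "
    "and suggest how I could fix or strengthen it. Never simply agree with me."
)

def challenge(idea: str) -> str:
    """Ask the model to attack an idea rather than answer on my behalf."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; this choice is an assumption
        messages=[
            {"role": "system", "content": NEMESIS_SYSTEM_PROMPT},
            {"role": "user", "content": idea},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(challenge("Remote teams are always more productive than co-located ones."))

The point of the design is that the user still has to form an idea first; the model’s only job is to push back on it.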
Another technique she suggests is prioritising “productive friction”: asking the AI only to provide context and pose questions, rather than supplying answers. When she tested this by fine-tuning an AI bot not to give answers, she found that individuals were more engaged.
Ultimately, we should all be wary of cognitive shortcuts, which is something “our brains love”, Kosmyna says. Clearly, for long-term brain health we need to continue to challenge ourselves. Our minds, creativity and cognitive health will benefit in the process.

