2 Sources
[1]
AI may blunt our thinking skills - here's what you can do about it
There is growing evidence that our reliance on generative AI tools is reducing our ability to think clearly and critically, but it doesn't have to be that way.

Socrates wasn't the greatest fan of the written word. Famous for leaving no texts to posterity, the great philosopher is said to have believed that a reliance on writing destroys the memory and weakens the mind. Some 2400 years later, Socrates's fears seem misplaced - particularly in light of evidence that writing things down improves memory formation. But his broader mistrust of cognitive technologies lives on. A growing number of psychologists, neuroscientists and philosophers worry that ChatGPT and similar generative AI tools will chip away at our powers of information recall and blunt our capacity for clear reasoning.

What's more, while Socrates relied on clever rhetoric to make his argument, these researchers are grounding theirs in empirical data. Their studies have uncovered evidence that even trained professionals disengage their critical thinking skills when using generative AI, and revealed that an over-reliance on these AI tools during the learning process reduces brain connectivity and renders information less memorable. Little wonder, then, that when I asked Google's Gemini chatbot whether AI tools are turning our brains to jelly and our memories to sieves, it admitted they might be. At least, I think it did: I can't quite remember now.

But all is not lost. Many researchers suspect we can flip the narrative, turning generative AI into a tool that improves our cognitive performance and augments our intelligence. "AI is not necessarily making us stupid, but we may be interacting with it stupidly," says Lauren Richmond at Stony Brook University, New York. So, where are we going wrong with generative AI tools? And how can we change our habits to make better use of the technology?

In recent years, generative AI has become deeply embedded in our lives. Therapists use it to look for patterns in their notes. Students rely on it for essay writing. It has even been welcomed by some media organisations: financial news website Business Insider, for example, reportedly now permits its journalists to use AI when drafting stories.

In one sense, all of these AI users are following a millennia-old tradition of "cognitive offloading" - using a tool or physical action to reduce mental burden. Many of us use this strategy in our daily lives. Every time we write a shopping list instead of memorising which items to buy, we are employing cognitive offloading. Used in this way, cognitive offloading can help us improve our accuracy and efficiency, while simultaneously freeing up brain space to handle more complex cognitive tasks such as problem-solving, says Richmond.

But in a review of the behaviour that Richmond published earlier this year with her Stony Brook colleague Ryan Taylor, she found it has negative effects on our cognition too. "When you've offloaded something, you almost kind of mentally delete it," says Richmond. "Imagine you make that grocery list, but then you don't take it with you. You're actually worse off than if you just planned on remembering the items that you needed to buy at the store."

Research backs this up. To take one example, a study published in 2018 revealed that when we take photos of objects we see during a visit to a museum, we are worse at remembering what was on display afterwards: we have subconsciously given our phones the task of memorising the objects on show.
This can create a spiral whereby the more we offload, the less we use our brains, which in turn makes us offload even more. "Offloading begets offloading - it can happen," says Andy Clark, a philosopher at the University of Sussex, UK. In 1998, Clark and his colleague David Chalmers - now at New York University - proposed the extended mind thesis, which argues that our minds extend into the physical world through objects such as shopping lists and photo albums. Clark doesn't view that as inherently good or bad - although he is concerned that as we extend into cyberspace with generative AI and other online services, we are making ourselves vulnerable if those services ever become unavailable because of power cuts or cyberattacks.

Cognitive offloading could also make our memory more vulnerable to manipulation. In a 2019 study, researchers at the University of Waterloo, Canada, presented volunteers with a list of words to memorise and allowed them to type out the words to help remember them. The researchers found that when they surreptitiously added a rogue word to the typed list, the volunteers were highly confident that the rogue word had actually been on the list all along.

As we have seen, concerns about the harms of cognitive offloading go back at least as far as Socrates. But generative AI has supercharged them. In a study posted online this year, Shiri Melumad and Jin Ho Yun at the University of Pennsylvania asked 1100 volunteers to write a short essay offering advice on planting a vegetable garden after researching the topic either using a standard web search or ChatGPT. The resulting essays tended to be shorter and contained fewer references to facts if they were written by volunteers who used ChatGPT, which the researchers interpreted as evidence that the AI tool had made the learning process more passive - and the resulting understanding more superficial. Melumad and Yun argued that this is because the AIs synthesise information for us. In other words, we cognitively offload our opportunity to explore and make discoveries about a subject for ourselves.

The latest neuroscience is adding weight to these fears. In experiments detailed in a paper pending peer review, which was released this summer, Nataliya Kos'myna at the Massachusetts Institute of Technology and her colleagues used EEG head caps to measure the brain activity of 54 volunteers as they wrote essays on subjects such as "Does true loyalty require unconditional support?" and "Is having too many choices a problem?". Some of the participants wrote their essays using just their own knowledge and experience, those in a second group were allowed to use the Google search engine to explore the essay subject, and a third group could use ChatGPT.

The team discovered that the group using ChatGPT had lower brain connectivity during the task, while the group relying simply on their own knowledge had the highest. The browser group, meanwhile, was somewhere in between. "There is definitely a danger of getting into the comfort of this tool that can do almost everything. And that can have a cognitive cost," says Kos'myna.

Critics may argue that a reduction in brain activity needn't indicate a lack of cognitive involvement in an activity, which Kos'myna accepts. "But it is also important to look at behavioural measures," she says. For example, when quizzing the volunteers later, she and her colleagues discovered that the ChatGPT users found it harder to quote their essays, suggesting they hadn't been as invested in the writing process.
There is also emerging - if tentative - evidence of a link between heavy generative AI use and poorer critical thinking. For instance, Michael Gerlich at the SBS Swiss Business School published a study earlier this year assessing the AI habits and critical thinking skills of 666 people from diverse backgrounds. Gerlich used structured questionnaires and in-depth interviews to quantify the participants' critical thinking skills, which revealed that those aged between 17 and 25 had critical thinking scores roughly 45 per cent lower than those of participants over 46 years old. "These [younger] people also reported that they depend more and more on AI," says Gerlich: they were between 40 and 45 per cent more likely to say they relied on AI tools than older participants. In combination, Gerlich thinks the two findings hint that over-reliance on AI reduces critical thinking skills.

Others stress that it is too early to draw any firm conclusions, particularly since Gerlich's study showed correlation rather than causation - and given that some research suggests critical thinking skills are inherently underdeveloped in adolescents. "We don't have the evidence yet," says Aaron French at Kennesaw State University in Georgia.

But other research suggests the link between generative AI tools and critical thinking may be real. In a study published earlier this year by a team at Microsoft and Carnegie Mellon University in Pennsylvania, 319 "knowledge workers" (scientists, software developers, managers and consultants) were asked about their experiences with generative AI. The researchers found that people who expressed higher confidence in the technology freely admitted to engaging in less critical thinking while using it. This fits with Gerlich's suspicion that an over-reliance on AI tools instils a degree of "cognitive laziness" in people.

Perhaps most worrying of all is that generative AI tools may even influence the behaviour of people who don't use the tools heavily. In a study published earlier this year, Zachary Wojtowicz and Simon DeDeo - who were both at Carnegie Mellon University at the time, though Wojtowicz has since moved to MIT - argued that we have learned to value the effort that goes into certain behaviours, like crafting a thoughtful and sincere apology in order to repair social relationships. If we can't escape the suspicion that someone has offloaded these cognitively tricky tasks onto an AI - having the technology draft an apology on their behalf, say - we may be less inclined to believe that they are being genuine.

One way to avoid all of these problems is to reset our relationship with generative AI tools, using them in a way that enhances rather than undermines cognitive engagement. That isn't as easy as it sounds. In a new study, Gerlich found that even volunteers who pride themselves on their critical thinking skills have a tendency to slip into lazy cognitive habits when using generative AI tools. "As soon as they were using generative AI without guidance, most of them directly offloaded," says Gerlich.

When there is guidance, however, it is a different story. Supplemental work by Kos'myna and her colleagues provides a good example. They asked the volunteers who had written an essay using only their own knowledge to work on a second version of the same essay, this time using ChatGPT to help them. The EEG data showed that these volunteers maintained high brain connectivity even as they used the AI tool. Clark argues that this is important.
"If people think about [a given subject] on their own before using AI, it makes a huge difference to the interest, originality and structure of their subsequent essays," he says. French sees the benefit in this approach too. In a paper he published last year with his colleague, the late J.P. Shim, he argued that the right way to think about generative AI is as a tool to enhance your existing understanding of a given subject. The wrong way, meanwhile, is to view the tool as a convenient shortcut that replaces the need for you to develop or maintain any understanding. So what are the secrets to using AI the right way? Clark suggests we should begin by being a bit less trusting: "Treat it like a colleague that sometimes has great ideas, but sometimes is entirely off the rails," he says. He also believes that the more thinking you do before using a generative AI tool, the better what he dubs your "hybrid cognition" will be. That being said, Clark says there are times when it is "safe" to be a bit cognitively lazy. If you need to bring together a lot of publicly available information, you can probably trust an AI to do that, although you should still double-check its results. Gerlich agrees there are good ways to use AI. He says it is important to be aware of the "anchoring effect" - a cognitive bias that makes us rely heavily on the first piece of information we get when making decisions. "The information you first receive has a huge impact on your thoughts," he says. This means that even if you think you are using AI in the right way - critically evaluating the answers it produces for you - you are still likely to be guided by what the AI told you in the first place, which can serve as an obstacle to truly original thinking. But there are strategies you can use to avoid this problem too, says Gerlich. If you are writing an essay about the French Revolution's negative impacts on society, don't ask the AI for examples of those negative consequences. "Ask it to tell you facts about the French Revolution and other revolutions. Then look for the negatives and make your own interpretation," he says. A final stage might involve sharing your interpretation with the AI and asking it to identify any gaps in your understanding, or to suggest what a counter-argument might look like. This may be easier or harder depending on who you are. To use AI most fruitfully, you should know your strengths and weaknesses. For example, if you are experiencing cognitive decline, then offloading may offer benefits, says Richmond. Personality could also play a role. If you enjoy thinking, it is a good idea to use AI to challenge your understanding of a subject instead of asking it to spoon-feed you facts. Some of this advice may seem like common sense. But Clark says it is important that as many people as possible are aware of it for a simple reason: if more of us use generative AI in a considered way, we may actually help to keep those tools sharp. If we expect generative AI to provide us with all the answers, he says, then we will end up producing less original content ourselves. Ultimately, this means that the large language models (LLMs) that power these tools - which are trained using human-generated data - will start to decline in capacity. "You begin to get the danger of what some people call model collapse," he says: the LLMs are forced into feedback loops where they are trained on their own content, and their ability to provide creative, high-quality answers deteriorates. 
"We've got a real vested interest in making sure that we continue to write new and interesting things," says Clark. In other words, the incorrect use of generative AI might be a two-way street. Emerging research suggests there is some substance to the fears that AI is making us stupid - but it is also possible that the practice of overusing it is making AI tools stupid, too.
[2]
How AI and social media contribute to 'brain rot'
Studies show that relying on AI tools and social media may harm learning and memory. Students using ChatGPT remembered little of what they wrote, while heavy social media use is linked to lower reading scores. Experts suggest using AI for small tasks, limiting screen time, and practising mindful, active learning.

Last spring, Shiri Melumad, a professor at the Wharton School of the University of Pennsylvania, gave a group of 250 people a simple writing assignment: share advice with a friend on how to lead a healthier lifestyle. To come up with tips, some were allowed to use a traditional Google search, while others could rely only on summaries of information generated automatically with Google's artificial intelligence.

The people using AI-generated summaries wrote advice that was generic, obvious and largely unhelpful - eat healthy foods, stay hydrated and get lots of sleep! The people who found information with a traditional Google web search shared more nuanced advice about focusing on the various pillars of wellness, including physical, mental and emotional health.

The tech industry tells us that chatbots and new AI search tools will supercharge the way we learn and thrive, and that anyone who ignores the technology risks being left behind. But Melumad's experiment, like other academic studies published so far on AI's effects on the brain, found that people who rely heavily on chatbots and AI search tools for tasks like writing essays and research generally perform worse than people who don't use them. "I'm pretty frightened, to be frank," Melumad said. "I'm worried about younger folks not knowing how to conduct a traditional Google search."

Welcome to the era of "brain rot", the slang term for a deteriorated mental state caused by engaging with low-quality internet content. When Oxford University Press, the publisher of the Oxford English Dictionary, named brain rot the word of the year in 2024, the definition referred to how social media apps like TikTok and Instagram had people hooked on short videos, turning their brains into mush.

Whether technology makes people dumber is a question as old as technology itself. Socrates faulted the invention of writing for weakening human memory. As recently as 2008, many years before the arrival of AI-generated web summaries, The Atlantic published an essay titled "Is Google Making Us Stupid?" Those concerns turned out to be overblown.

But the growing wariness in academia about the impact of AI on learning (on top of older concerns about the distracting nature of social media apps) is troubling news for a country whose performance in reading comprehension is already in steep decline. This year, reading scores among children, including eighth graders and high school seniors, hit new lows. The results, gathered from the National Assessment of Educational Progress, long regarded as the nation's most reliable, gold-standard exam, were the first of their kind to be published since the COVID-19 pandemic disrupted education and drove up screen time among youths.

Researchers worry that evidence is mounting of a potent link between lower cognitive performance and AI and social media. In addition to recent studies that found a correlation between the use of AI tools and cognitive decline, a new study led by pediatricians found that social media use was associated with poorer performance among children taking reading, memory and language tests.
Here's a summary of the research so far, and how to use AI in a way that boosts - rather than rots - the brain.

When we write with ChatGPT, are we even writing?

The most high-profile study this year about AI's effects on the brain came out of the Massachusetts Institute of Technology, where researchers sought to understand how tools like OpenAI's ChatGPT could affect how people write. The study, which involved 54 college students, had a small sample size, but the results raised important questions about whether AI could stifle people's ability to learn. (The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems. The two companies have denied those claims.)

For part of the study, students were asked to write an essay ranging from 500 to 1,000 words, and they were divided into groups: one group could write with the help of ChatGPT, a second group could look up information only with a traditional Google search, and a third group could rely only on their brains to compose their assignment. The students wore sensors that measured electrical activity in their brains.

The ChatGPT users showed the lowest brain activity, which was unsurprising since they were letting the AI chatbot do the work. But the most striking revelation arose after the students finished the writing exercise. One minute after completing their essays, the students were asked to quote any part of their essay. The vast majority of ChatGPT users (83%) could not recall a single sentence. In contrast, the students using Google's search engine could quote some parts, and the students who relied on no tech could recite lots of lines, with some even quoting almost the entirety of their essays verbatim.

"It has been one minute, and you really cannot say anything?" said Nataliya Kosmyna, the research scientist at MIT Media Lab who led the study, about the ChatGPT users. "If you don't remember what you wrote, you don't feel ownership. Do you even care?"

Though the study focused on essay writing, Kosmyna said she worried about the implications for people using AI chatbots in fields where retention is essential, like a pilot studying to get a license. More research urgently needs to be done, she said, on how AI affects people's ability to hold on to information.

Social media may be linked to lower reading scores

Over the last two years, schools in states like New York, Indiana, Louisiana and Florida have raced to ban cellphones from classrooms, citing concerns that students were distracting themselves with social media apps like TikTok and Instagram. Lending credence to the bans, a study published last month found a potent link between social media use and poorer cognitive performance.

Last month, the medical journal JAMA published a study conducted by the University of California, San Francisco. Dr. Jason Nagata, a pediatrician who led the study, and his colleagues looked at data from ABCD, for Adolescent Brain Cognitive Development, a research project that followed more than 6,500 youths ages 9 to 13 from 2016 to 2018. All the children were surveyed once a year on how much time they spent on social media. Every other year, they took several tests. For example, a visual vocabulary test involved correctly matching pictures to words they heard.
The data showed that children who reported using social media - anywhere from a low amount (one hour a day) to a high amount (at least three hours a day) - scored significantly lower on reading, memory and vocabulary tests than children who reported using no social media. As for why social media apps like TikTok and Instagram would harm test scores, the only safe conclusion is that every hour a child spends scrolling through the apps takes time away from more enriching activities like reading and sleeping, Nagata said.

What are some healthier ways to use social media and AI?

Despite findings of a correlation between social media use and cognitive decline, it would be difficult to recommend an ideal amount of screen time for youths, because lots of children spend time in front of screens doing things unrelated to social media, like watching TV shows, Nagata said. Instead, he suggested that parents enforce screen-free zones, prohibiting phone use in areas like the bedroom and dinner table so that children can stay focused on their studies, sleep and mealtimes.

Meta did not respond to a request for comment. A TikTok spokesperson referred to a webpage with instructions for setting up Time Away, a tool that lets parents create schedules for when their teenagers are allowed to use TikTok.

As for AI chatbots, an interesting wrinkle in the MIT study suggested how people could best use chatbots to learn and write. Eventually, the groups in that study swapped roles: the people who had relied only on their brains to write got to use ChatGPT, and the people who had relied on ChatGPT could use only their brains. All the students wrote essays on the same topics they had chosen before.

The students who had originally relied only on their brains recorded the highest brain activity once they were allowed to use ChatGPT. The students who had initially used ChatGPT, on the other hand, were never on a par with the former group when they were restricted to using their brains, Kosmyna said. That suggests people who are eager to use chatbots for writing and learning should consider starting the process on their own and turning to the AI tools only later, for revisions - similar to math students using calculators to solve problems only after they have used pencil and paper to learn the formulas and equations.

Both Google and OpenAI declined to comment. Melumad said the problem with those tools was that they transformed what was once an active process in the brain - perusing links and clicking on a credible source to read - into a passive one by automating all of it. So perhaps the key to using AI in a healthier way, she said, is to be more mindful about how we use these tools. Rather than ask a chatbot to do all the research on a broad topic, Melumad said, use it as part of your research process to answer small questions, such as looking up historical dates. But for deeper learning of a subject, consider reading a book.

This article originally appeared in The New York Times.
Growing research suggests that heavy reliance on AI tools like ChatGPT may erode critical thinking and impair memory formation. Studies show users exhibit lower brain activity and poorer recall, but experts say mindful usage strategies can turn AI into a cognitive enhancement tool.
A growing body of research is raising serious concerns about the impact of generative AI tools on human cognitive abilities. Studies conducted at prestigious institutions like MIT and the University of Pennsylvania are revealing that our increasing reliance on ChatGPT and similar AI systems may be fundamentally altering how our brains process and retain information [1][2].
The most striking evidence comes from MIT researchers who studied 54 college students performing writing tasks. When students used ChatGPT to compose essays, brain sensors revealed significantly lower electrical activity compared to those using traditional research methods. More alarmingly, 83% of ChatGPT users could not recall any part of their essay just one minute after completion, suggesting a profound disconnect between AI-assisted work and memory formation [2].

Researchers like Lauren Richmond at Stony Brook University explain this phenomenon through the concept of "cognitive offloading" - using external tools to reduce mental burden. While this strategy has been employed for millennia, from shopping lists to photo albums, AI represents an unprecedented escalation [1].

"When you've offloaded something, you almost kind of mentally delete it," Richmond explains. This creates a dangerous spiral where increased offloading leads to reduced brain usage, which in turn encourages even more offloading [1].

The University of Pennsylvania's Shiri Melumad conducted a revealing experiment with 250 participants writing health advice. Those using AI-generated summaries produced generic, unhelpful content, while traditional Google searchers provided nuanced, comprehensive guidance. "I'm pretty frightened, to be frank," Melumad admits, expressing particular concern about younger people losing basic research skills [2].

Oxford University Press's decision to name "brain rot" the word of the year for 2024 reflects growing societal awareness of technology's cognitive impact. Originally describing the mental deterioration caused by consuming low-quality social media content, the term now encompasses AI's effects on learning and memory [2].

This concern coincides with alarming educational trends. Reading scores among American children, including eighth graders and high school seniors, have reached new lows according to the National Assessment of Educational Progress. Pediatric studies now show correlations between social media use and poorer performance on reading, memory and language tests [2].
Despite these concerning findings, researchers believe the narrative can be reversed. "AI is not necessarily making us stupid, but we may be interacting with it stupidly," argues Richmond [1].

Philosopher Andy Clark from the University of Sussex, co-author of the influential "extended mind thesis", suggests that while our minds naturally extend into physical tools, we must be mindful of our vulnerability when these digital extensions become unavailable [1].

Experts recommend several strategies: using AI for small, specific tasks rather than comprehensive work; limiting screen time; practising active, mindful learning; and maintaining traditional research skills alongside AI tools. The goal is transforming AI from a cognitive crutch into an intelligence amplifier that enhances rather than replaces human thinking capabilities [1](https://www.newscientist.com/article/2501634-ai-may-blunt-our-thinking-skills-heres-what-you-can-do-about-it/) [2].