20 Sources
[1]
MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated
Since ChatGPT appeared almost three years ago, the impact of artificial intelligence (AI) technologies on learning has been widely debated. Are they handy tools for personalised education, or gateways to academic dishonesty? Most importantly, there has been concern that using AI will lead to a widespread "dumbing down", or decline in the ability to think critically. If students use AI tools too early, the argument goes, they may not develop basic skills for critical thinking and problem-solving.

Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to "cognitive debt" and a "likely decrease in learning skills". So what did the study find?

The difference between using AI and the brain alone

Over the course of four months, the MIT team asked 54 adults to write a series of three essays using either AI (ChatGPT), a search engine, or their own brains (the "brain-only" group). The team measured cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays.

The cognitive engagement of those who used AI was significantly lower than that of the other two groups. This group also had a harder time recalling quotes from their essays and felt a lower sense of ownership over them.

Interestingly, participants switched roles for a final, fourth essay (the brain-only group used AI and vice versa). The AI-to-brain group performed worse, with engagement only slightly better than the brain-only group's during its first session, and far below that group's engagement in its third session. The authors claim this demonstrates how prolonged use of AI led participants to accumulate "cognitive debt". When they finally had the opportunity to use their brains, they were unable to replicate the engagement or perform as well as the other two groups.
Cautiously, the authors note that only 18 participants (six per condition) completed the fourth, final session. Therefore, the findings are preliminary and require further testing.

Does this really show AI makes us stupider?

These results do not necessarily mean that students who used AI accumulated "cognitive debt". In our view, the findings are due to the particular design of the study. The change in neural connectivity of the brain-only group over the first three sessions was likely the result of becoming more familiar with the study task, a phenomenon known as the familiarisation effect. As study participants repeat the task, they become more familiar and efficient, and their cognitive strategy adapts accordingly.

When the AI group finally got to "use their brains", they were doing the task only once. As a result, they could not match the other group's practice, achieving only slightly better engagement than the brain-only group had during its first session. To fully justify the researchers' claims, the AI-to-brain participants would also need to complete three writing sessions without AI.

Similarly, the fact that the brain-to-AI group used ChatGPT more productively and strategically is likely due to the nature of the fourth writing task, which required writing an essay on one of the previous three topics. Because writing without AI required more substantial engagement, these participants had far better recall of what they had written in the past. Hence, they primarily used AI to search for new information and refine what they had previously written.

What are the implications of AI in assessment?

To understand the current situation with AI, we can look back to what happened when calculators first became available. Back in the 1970s, their impact was regulated by making exams much harder. Instead of doing calculations by hand, students were expected to use calculators and spend their cognitive efforts on more complex tasks.
Effectively, the bar was raised significantly, which made students work equally hard (if not harder) than before calculators were available.

The challenge with AI is that, for the most part, educators have not raised the bar in a way that makes AI a necessary part of the process. Educators still require students to complete the same tasks and expect the same standard of work as they did five years ago. In such situations, AI can indeed be detrimental. Students can largely offload critical engagement with learning to AI, which results in "metacognitive laziness".

However, just like calculators, AI can and should help us accomplish tasks that were previously impossible - and that still require significant engagement. For example, we might ask students training to be teachers to use AI to produce a detailed lesson plan, which would then be evaluated for quality and pedagogical soundness in an oral examination.

In the MIT study, participants who used AI were producing the "same old" essays. They adjusted their engagement to deliver the standard of work expected of them. The same would happen if students were asked to perform complex calculations with or without a calculator: the group doing calculations by hand would sweat, while those with calculators would barely blink an eye.

Learning how to use AI

Current and future generations need to be able to think critically and creatively and to solve problems. However, AI is changing what these things mean. Producing essays with pen and paper is no longer a demonstration of critical thinking ability, just as doing long division is no longer a demonstration of numeracy. Knowing when, where and how to use AI is the key to long-term success and skill development. Prioritising which tasks can be offloaded to an AI to reduce cognitive debt is just as important as understanding which tasks require genuine creativity and critical thinking.
[2]
Brain activity lower when using AI chatbots: MIT research
EEG and recall tests suggest people who use ChatGPT to write essays aren't learning much

Using AI chatbots actually reduces activity in the brain versus accomplishing the same tasks unaided, and may lead to poorer fact retention, according to a new preprint study out of MIT. Seeking to understand how the use of LLM chatbots affects the brain, a team led by MIT Media Lab research scientist Dr. Nataliya Kosmyna hooked up a group of Boston-area college students to electroencephalogram (EEG) headsets and gave them 20 minutes to write a short essay. One group was directed to write without any outside assistance, a second group was allowed to use a search engine, and a third was instructed to write with the assistance of OpenAI's GPT-4o model. The process was repeated four times over several months.

While not yet peer reviewed, the pre-publication results suggest a striking difference between the brain activity of the three groups and the corresponding neural connectivity patterns. To put it bluntly and visually, brain activity in the LLM-using cohort was ... a bit dim. EEG analysis showed that each group exhibited distinct neural connectivity patterns, with brain connectivity "systematically scaled down with the amount of external support." In other words, the search engine users showed less brain engagement than the unassisted writers, and the LLM cohort "elicited the weakest overall coupling."

Cognitive load in the participants was measured using a method known as Dynamic Directed Transfer Function (dDTF), which measures specific brain activity related to the flow of information across different brain regions. dDTF is able to account for both the strength and the direction of that flow, making it a good representation of "executive function, semantic processing and attention regulation," according to the MIT researchers.
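dDTF itself is derived from a multivariate autoregressive model fitted across all EEG channels, which is too involved for a short sketch. As a loose, hypothetical illustration of the simpler idea underneath (band-limited coupling between two signals), here is a toy example using ordinary spectral coherence from SciPy; the sampling rate, band edges, and synthetic signals are illustrative assumptions, not parameters from the study.

```python
import numpy as np
from scipy.signal import coherence

# Toy illustration: two synthetic "EEG channels" driven by a shared 10 Hz
# (alpha-band) source show high coherence near that frequency, while a
# channel of independent noise does not.
rng = np.random.default_rng(0)
fs = 256                             # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)         # 30 seconds of data
source = np.sin(2 * np.pi * 10 * t)  # shared alpha-band driver

ch_a = source + 0.5 * rng.standard_normal(t.size)
ch_b = source + 0.5 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)  # channel unrelated to the source

def band_coherence(x, y, fs, band=(9.0, 11.0)):
    """Mean magnitude-squared coherence inside a frequency band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=1024)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

coupled = band_coherence(ch_a, ch_b, fs)
uncoupled = band_coherence(ch_a, noise, fs)
print(f"coupled: {coupled:.2f}, uncoupled: {uncoupled:.2f}")
```

Unlike plain coherence, dDTF is directional: it can distinguish which region is driving which, which is why the researchers describe it as capturing the flow of information rather than mere correlation.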
The researchers said that, compared to the baseline established by the group writing using nothing but their grey and white matter, the search engine group showed between 34 and 48 percent less dDTF connectivity. The LLM group, meanwhile, showed an even larger reduction of up to 55 percent in dDTF signal magnitude. Put simply, relying on LLMs - and, to a lesser extent, search engines - significantly reduces task-related brain connectivity, indicating lower cognitive engagement during the essay-writing task.

"The Brain-only group leveraged broad, distributed neural networks for internally generated content," the researchers said of their results. "The Search Engine group relied on hybrid strategies of visual information management and regulatory control; and the LLM group optimized for procedural integration of AI-generated suggestions." As the researchers explained, those distinctions raise "significant implications" for educational practices and how we understand learning - namely that there appears to be a definite tradeoff between internal synthesis of information and external support.

Participants' scores on tests of recall and perceived ownership of what they wrote were demonstrably worse in the LLM cohort, the research team found. "In this study we demonstrate the pressing matter of a likely decrease in learning skills," the researchers said. "The LLM group's participants performed worse than their counterparts in the Brain-only group at all levels."

The fourth session of essay writing reinforced those findings. In the last research session, participants who were originally told to rely on their brains or LLMs swapped roles and were given another set of essay instructions. Unsurprisingly, the LLM group performed poorly when asked to rely on their own thought processes.
"In Session 4, removing AI support significantly impaired the participants from original LLM group," the researchers said. The opposite was true for the other cohort: "The so-called Brain-to-LLM group exhibited significant increase in brain connectivity across all EEG frequency bands when allowed to use an LLM on a familiar topic."

The findings suggest that the use of AI early in the learning process "may result in shallow encoding" that leads to poor recall of facts and a lack of learning because all the effort has been offloaded. Using one's own cognitive faculties to learn about something first, and then turning to AI for further research, on the other hand, appears to be perfectly acceptable. "Taken together, these findings support an educational model that delays AI integration until learners have engaged in sufficient self-driven cognitive effort," the MIT team concluded. "Such an approach may promote both immediate tool efficacy and lasting cognitive autonomy."

That might not be a shocking conclusion, but given the increasing number of young people relying on AI to do their schoolwork, the issue needs to be addressed before the world produces an entire generation of intellectually stunted AI junkies. Kosmyna told The Register in an email that she doesn't want to use words like "stupid, dumb or brainrot" to refer to AI's effect on us, arguing it does a disservice to the work her team has done. Still, it is having an effect that needs to be addressed. "While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking, and intellectual independence demands a very careful consideration and continued research," the paper concluded. With the paper yet to undergo peer review, Kosmyna noted that its conclusions "are to be treated with caution and as preliminary."
Nonetheless, she wrote, the pre-review findings can still serve "as a preliminary guide to understanding the cognitive and practical impacts of AI on learning." The team hopes that future studies will look at not only the use of LLMs in modalities beyond text, but also AI's impact on memory retention, creativity and written fluency. As for what the MIT team plans to research next, Kosmyna told us the team is turning its attention to a similar study looking at vibe coding, or using AI to generate code from natural language prompts. "We have already collected the data and are currently working on the analysis and draft," Kosmyna said. She added that, as this is one of the first such studies to be done studying AI's effect on the human brain, she expects this work will trigger additional studies in the future "with different protocols, populations, tasks, methodologies, that will add to the general understanding of the use of this technology in different aspects of our lives." With AI creeping into seemingly every aspect of our lives at an ever-increasing pace, there's going to be plenty of research to be done. ®
[3]
MIT brain scans suggest that using GenAI tools reduces cognitive activity
Why it matters: As the use of generative AI becomes increasingly common in education, law, politics, media, and other fields, many worry that reliance on the technology may reduce cognitive independence. A recent study from MIT strongly supports this concern, indicating that the use of digital tools significantly alters brain activity.

The newly published paper explains that as participants in an experiment wrote a series of essays, electronic brain monitoring revealed substantially weaker connections between regions of the brain in those who used large language models (LLMs). This correlated with poorer memory and more derivative output. Three groups of participants each wrote three essays: one group relied on an LLM, another used search engines, and the third worked without any external tools. Then, the LLM and "brain-only" groups swapped members for a fourth essay.

Although essays from the brain-only group were not always the most accurate, participants in that group displayed significantly stronger neural connectivity, suggesting deeper mental engagement. While essays from the LLM group received high marks from both human graders and AI judges, the writing tended to be more homogeneous and adhered more closely to the original prompt. Participants who used LLMs were more likely to copy and paste, edit their work less, and struggle to quote from their own writing. During the final session, LLM users also had difficulty recalling information from earlier in the experiment; their brain activity had reset to a novice level regarding the essay topics. Although the brain-only group showed some decline in connectivity over time, it remained at an intermediate level, and participants could easily remember earlier material.
Interestingly, participants who switched from the LLM group to the brain-only group showed increased neural connectivity. The group that used search engines demonstrated low-to-intermediate brain connectivity. Their writing was also highly homogeneous, but they were better able to quote from their work, suggesting stronger memory retention compared to LLM users. Overall, the results indicate that any use of digital tools impacts brain activity, but search engines require more mental engagement than generative AI. These findings could have significant implications for education, where the use of AI is becoming widespread. In many schools, most students now use tools like ChatGPT to varying degrees when completing assignments. Some generate only outlines or topic ideas, while others use the assignments as prompts and submit the output without so much as reading it. Teachers and professors have also started using AI to grade assignments and try to detect the use of AI, with varying results. The results of the MIT study suggest that both groups might suffer cognitively, regardless of how much or how little they rely on LLMs.
[4]
Using ChatGPT to write? MIT study says there's a cognitive cost.
Relying on ChatGPT significantly affects critical thinking abilities, according to a new study. Researchers from MIT Media Lab, Wellesley College, and Massachusetts College of Art and Design conducted a four-month study titled "Your Brain on ChatGPT" and found users of large language models (LLMs) like OpenAI's chatbot "consistently underperformed at neural, linguistic, and behavioral levels." This included the participants' decreased brain activity, a weaker sense of authorship, and inability to remember what they wrote -- which persisted even when they weren't allowed to use an LLM.

Anyone who uses ChatGPT for writing may have drawn similar conclusions; the point of using LLMs, after all, is to automate the work and outsource the critical thinking effort. But with this MIT study, there's now scientific evidence showing that relying on ChatGPT and other LLMs can impair memory and learning. It's worth noting that the study, published June 10, surveyed a small group and has not yet been peer-reviewed, but according to an interview with Time, the lead author Nataliya Kosmyna felt it was important to publish the study as is, given the rapid adoption of genAI, particularly in education.

"What really motivated me to put it out now before waiting for a full peer review is that I am afraid in six to eight months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental," said Kosmyna. In April, President Donald Trump signed an executive order to promote "AI literacy and proficiency of K-12 students," so the urgency to understand how ChatGPT is affecting our brains is all too real.

The study divided 54 participants into three groups with the task of writing SAT essays over the course of three sessions.
One group used ChatGPT (the "LLM group"), another used Google search (the "search engine group"), and the third wasn't allowed any tools (the "brain-only group"). In an additional fourth session with 18 participants, the LLM group was tasked with writing an essay without ChatGPT and the brain-only group was allowed to use ChatGPT. Researchers measured the participants' brain activity while they wrote the essays using electroencephalography (EEG), analyzed the essays using natural language processing (NLP), and had the essays scored by AI and human graders.

Among the many discoveries detailed in the lengthy paper, researchers found a stark decrease in the LLM group's "alpha band connectivity" -- associated with cognitive functions like memory and language processing -- compared to the brain-only group. This was evident when the participants were asked to quote from the essays they had written. "LLM users significantly underperformed in this domain, with 83 percent of participants reporting difficulty quoting in Session 1, and none providing correct quotes," reads the study.

In the fourth session, where the group previously using ChatGPT had to write an essay without it, the participants continued to struggle to quote anything from what they had written. "LLM group's poor recall and incorrect quoting is a possible indicator that their earlier essays were not internally integrated, likely due to outsourced cognitive processing to the LLM," the study reads. This suggests that the participants weren't really retaining what they wrote or took from ChatGPT. In fact, by the third session, the researchers reported that most of the essays from the LLM group were mostly copied and pasted responses from ChatGPT with "minimal editing." Another effect researchers measured was the perceived level of "ownership", or the belief that they had fully conceived of the essay.
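For readers unfamiliar with the terminology, "alpha band" refers to brain oscillations at roughly 8-12 Hz. As a minimal, hypothetical sketch (not the study's actual pipeline) of how power in a given EEG band is typically estimated from a recorded signal:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Sum the Welch power spectral density over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum() * (freqs[1] - freqs[0])

# Synthetic "channel": a 10 Hz alpha rhythm buried in broadband noise.
rng = np.random.default_rng(1)
fs = 256                      # assumed sampling rate, Hz
t = np.arange(0, 20, 1 / fs)  # 20 seconds of data
eeg = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

alpha = band_power(eeg, fs, (8, 12))   # conventional alpha band
beta = band_power(eeg, fs, (13, 30))   # conventional beta band
print(f"alpha power: {alpha:.3f}, beta power: {beta:.3f}")
```

The connectivity measures in the paper go a step further, relating band-limited activity across pairs of electrodes rather than measuring power at a single site.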
Compared to the brain-only group, which consistently claimed almost full ownership, the LLM group "presented a fragmented and conflicted sense of authorship," with some claiming full, partial, or no ownership whatsoever. While it might be tempting to offload work onto LLMs like ChatGPT, as this study shows, there could be long-term cognitive consequences.
[5]
Researchers Scanned the Brains of ChatGPT Users and Found Something Deeply Alarming
"Consistently underperformed at neural, linguistic, and behavioral levels." Scientists at the Massachusetts Institute of Technology have found some startling results in the brain scans of ChatGPT users, adding to the growing body of evidence suggesting that AI is having a serious -- and barely-understood -- impact on its users' cognition even as it explodes in popularity worldwide.

In a new paper currently awaiting peer review, researchers from the school's storied Media Lab documented vast differences in brain activity between people who used ChatGPT to write and those who did not. The research team recruited 54 adults between the ages of 18 and 39 and divided them into three groups: one that used ChatGPT to help them write essays, one that used Google search as their main writing aid, and one that didn't use AI tech. The study took place over four months, with each group tasked with writing one essay per month for the first three, while a smaller subset of the cohort either switched from not using ChatGPT to using it -- or vice versa -- in the fourth month. As they completed the essay tasks, the participants were hooked up to electroencephalogram (EEG) machines that recorded their brain activity.

Here's where things get wild: the ChatGPT group not only "consistently underperformed at neural, linguistic, and behavioral levels," but also got lazier with each essay they wrote; the EEGs found "weaker neural connectivity and under-engagement of alpha and beta networks." The Google-assisted group, meanwhile, had "moderate" neural engagement, while the "brain-only" group exhibited the strongest cognitive metrics throughout. These findings about brain activity, while novel, aren't entirely surprising after prior studies and anecdotes about the many ways that AI chatbot use seems to be affecting people's brains and minds.
Previous MIT research, for instance, found that ChatGPT "power users" were becoming dependent on the chatbot and experiencing "indicators of addiction" and "withdrawal symptoms" when they were cut off. And earlier this year Carnegie Mellon and Microsoft -- which has invested billions to bankroll OpenAI, the maker of ChatGPT -- found in a joint study that heavy chatbot use appears to almost atrophy critical thinking skills. A few months later, The Guardian found in an analysis of studies like that one that researchers are growing increasingly concerned that tech like ChatGPT is making us stupider, and a Wall Street Journal reporter even owned up to his cognitive skill loss from over-using chatbots.

Beyond the neurological impacts, there are also lots of reasons to be concerned about how ChatGPT and other chatbots like it affect our mental health. As Futurism found in a recent investigation, many users are becoming obsessed with ChatGPT and developing paranoid delusions into which the chatbot is pushing them deeper. Some have even stopped taking their psychiatric medication because the chatbot told them to. "We know people use ChatGPT in a wide range of contexts, including deeply personal moments, and we take that responsibility seriously," OpenAI told us in response to that reporting. "We've built in safeguards to reduce the chance it reinforces harmful ideas, and continue working to better recognize and respond to sensitive situations."

Add it all up, and the evidence is growing that AI is having profound and alarming effects on many users -- but so far, we're seeing no evidence that corporations are slowing down in their attempts to inject the tech into every part of society.
[6]
This is your brain on ChatGPT
Sizzle. Sizzle. That's the sound of your neurons frying over the heat of a thousand GPUs as your generative AI tool of choice cheerfully churns through your workload. As it turns out, offloading all of that cognitive effort to a robot as you look on in luxury is turning your brain into a couch potato. That's what a recently published (and yet to be peer-reviewed) paper from some of MIT's brightest minds suggests, anyway. The study examines the "neural and behavioral consequences" of using LLMs (Large Language Models) like ChatGPT for, in this instance, essay writing. The findings raise serious questions about how long-term use of AI might affect learning, thinking, and memory. More worryingly, we recently witnessed it play out in real life.

The study, titled "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task", involved 54 participants split into three groups: an LLM (ChatGPT) group, a search engine group, and a Brain-only group. Across three sessions, these groups were tasked with writing an essay about one of three changing topics. An example of the essay question for the topic of "Art" was: "Do works of art have the power to change people's lives?" Participants then had 20 minutes to answer the question related to their chosen topic in essay form, all while wearing an Enobio headset to collect EEG signals from their brain. In a fourth session, the LLM and Brain-only groups were swapped to measure any potential lasting impact of the prior sessions.

The results? Across the first three tests, Brain-only writers had the most active, widespread brain engagement during the task, while LLM-assisted writers showed the lowest levels of brain activity across the board (although they routinely completed the task fastest). Search engine-assisted users generally fell somewhere in between the two. In short, Brain-only writers were actively engaging with the assignment, producing more creative and unique writing while actually learning.
They were able to quote their essays afterwards and felt strong ownership of their work. LLM users, by contrast, engaged less over each session, began to uncritically rely on ChatGPT more as the study went on, and felt less ownership of the results. Their work was judged to be less unique, and participants often failed to accurately quote from their own work, suggesting reduced long-term memory formation. Researchers referred to this phenomenon as "metacognitive laziness" -- not just a great name for a Prog-Rock band, but also a perfect label for the hazy distance between autopilot and Copilot, where participants disengage and let the AI do the thinking for them.

But it was the fourth session that yielded the most worrying results. According to the study, when the LLM and Brain-only groups traded places, the group that had previously relied on AI failed to bounce back to the engagement levels recorded before LLM use. To put it simply, sustained use of AI tools like ChatGPT to "help" with tasks that require critical thinking, creativity, and cognitive engagement may erode our natural ability to access those processes in the future.

But we didn't need a 206-page study to tell us that. On June 10, an outage lasting over 10 hours saw ChatGPT users cut off from their AI assistant, and it provoked a disturbing trend of people openly admitting, sans any hint of awareness, that without access to OpenAI's chatbot, they'd suddenly forgotten how to work, write, or function. This study may have used EEG caps and grading algorithms to prove it, but most of us may already be living its findings. When faced with an easy or hard path, many of us would assume that only a particularly smooth-brained individual would willingly take the more difficult, obtuse route. However, as this study claims, the so-called easy path may be quietly sanding down our frontal lobes in a lasting manner -- at least when it comes to our use of AI.
That's especially frightening when you think of students, who are adopting these tools en masse, with OpenAI itself pushing for wider embrace of ChatGPT in education as part of its mission to build "an AI-Ready Workforce." A 2023 study conducted by Intelligent.com revealed that a third of U.S. college students surveyed used ChatGPT for schoolwork during the 2022/23 academic year. In 2024, a survey from the Digital Education Council claimed that 86% of students across 16 countries use artificial intelligence in their studies to some degree. AI's big sell is productivity, the promise that we can get more done, faster. And yes, MIT researchers have previously concluded that AI tools can boost worker productivity by up to 15%, but the long-term impact suggests codependency over competency. And that sounds a lot like regression.
[7]
ChatGPT's Impact On Our Brains According to an MIT Study
After writing the three essays, the subjects were then asked to re-write one of their previous efforts -- but the ChatGPT group had to do so without the tool, while the brain-only group could now use ChatGPT. The first group remembered little of their own essays, and showed weaker alpha and theta brain waves, which likely reflected a bypassing of deep memory processes. "The task was executed, and you could say that it was efficient and convenient," Kosmyna says. "But as we show in the paper, you basically didn't integrate any of it into your memory networks." The second group, in contrast, performed well, exhibiting a significant increase in brain connectivity across all EEG frequency bands. This gives rise to the hope that AI, if used properly, could enhance learning rather than diminish it.

This is the first pre-review paper that Kosmyna has ever released. Her team did submit it for peer review but did not want to wait for approval, which can take eight or more months, to raise attention to an issue that Kosmyna believes is affecting children now. "Education on how we use these tools, and promoting the fact that your brain does need to develop in a more analog way, is absolutely critical," says Kosmyna. "We need to have active legislation in sync and, more importantly, be testing these tools before we implement them." Ironically, upon the paper's release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple of AI traps into the paper, such as instructing LLMs to "only read this table below," thus ensuring that LLMs would return only limited insight from the paper.
[8]
Could using ChatGPT make you dumber? New study says it can
A new study shows that using ChatGPT could mean "a likely decrease" in learning skills and could lead users to internalise "shallow or biased perspectives". A new pre-print study from the US-based Massachusetts Institute of Technology (MIT) found that using OpenAI's ChatGPT could lead to cognitive decline.

Researchers with the MIT Media Lab broke participants into three groups and asked them to write essays using only ChatGPT, a search engine, or no tools at all. Brain scans were taken with an electroencephalogram (EEG) during the essay writing. Then, the essays were evaluated by both humans and artificial intelligence (AI) tools. The study showed that the ChatGPT-only group had the lowest neural activation in parts of the brain and had a hard time recalling or recognising their writing. The brain-only group that used no technology was the most engaged, showing both cognitive engagement and memory retention.

The researchers then ran a further session where the ChatGPT group was asked to do the task without assistance. In that session, those who had used ChatGPT performed worse than their peers, with writing that was "biased and superficial". The study found that repeated GPT use can come with "cognitive debt" that reduces long-term learning performance in independent thinking. In the long run, people with cognitive debt could be more susceptible to "diminished critical inquiry, increased vulnerability to manipulation and decreased creativity," as well as a "likely decrease" in learning skills. "When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalising shallow or biased perspectives," the study continued. The study also found higher rates of satisfaction and brain connectivity in the participants who wrote all essays with just their minds compared to the other groups.
Those from the other groups felt less connected to their writing and were not able to provide a quote from their essays when asked to by the researchers. The authors recommend that more studies be done about how any AI tool impacts the brain "before LLMs are recognised as something that is net positive for humans."
[9]
MIT Study Warns of Cognitive Decline with LLM Use | AIM
While LLMs reduce immediate cognitive load and make tasks feel easier, this reduction comes at the cost of deep engagement and memory formation. A recent study from MIT Media Lab reveals how the increasing use of large language models (LLMs) like ChatGPT may be quietly dulling our cognitive skills, especially in complex tasks like writing and critical thinking. The findings, part of a study titled 'Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task', show that while AI tools can improve productivity, they come with measurable mental trade-offs. In an effort to understand this phenomenon, the researchers split participants into three groups: one used an LLM (ChatGPT), another used a traditional search engine, and the third used only their brain -- no external assistance. Through EEG (electroencephalogram) brain monitoring and essay analysis, the study found that LLM users showed significantly weaker neural connectivity, indicating lower cognitive effort and engagement compared to the other groups. "Brain connectivity systematically scaled down with the amount of external support," the paper stated. Those who relied solely on their brain had the strongest and widest-ranging neural networks, while LLM users had the weakest overall coupling. Shockingly, LLM users were far less able to recall or quote from essays they had just written. In fact, 83.3% of LLM users failed to provide even a single correct quote from their own writing, compared to just 11.1% in both the Search Engine and Brain-only groups. The study also highlighted what it calls "cognitive offloading" -- outsourcing thinking to AI tools -- which leads to a long-term decline in learning capabilities.
This "offloading" also seemed to lead to what the study calls a 'metacognitive laziness', where users skip the mental work of integrating ideas and reflecting critically on content. The implications are especially important for educators. The study suggests that over-reliance on AI writing tools could weaken students' long-term ability to think critically, recall information, and construct original arguments. As one participant noted in their interview, "I would rather use the internet than ChatGPT -- I can read other people's ideas." Researchers caution that while AI can be a powerful assistant, its use must be deliberate and well-regulated in learning environments. "We demonstrate the pressing matter of a likely decrease in learning skills," the authors concluded, adding that LLMs can make students more productive, but less cognitively present.
[10]
Need ChatGPT's help? It could be making you dumber
MIT research has found heavy use of ChatGPT impairs memory, lowers brain engagement and may hinder learning by replacing critical cognitive processes. Artificial intelligence chatbots such as OpenAI's ChatGPT could be diminishing cognitive ability, according to a recent study by MIT researchers. In the study, from researchers at the Massachusetts Institute of Technology's Media Lab, 54 participants completed essay writing tasks over three sessions using one of three assigned methods: ChatGPT, a search engine, or only their brains. In a fourth session, the researchers asked the ChatGPT group to use no tools, while the brain-only group was asked to use the LLM. The results were startling: they revealed poor memory recall among AI users, with more than 83% of ChatGPT users unable to quote from essays they had written minutes earlier. Alex Vacca, co-founder of sales tech agency ColdIQ, described the results as "terrifying," adding that AI isn't making us more productive, "it's making us cognitively bankrupt." "You write something, hit save, and your brain has already forgotten it because ChatGPT did the thinking." The researchers stated that brain connectivity "systematically scaled down" with the amount of external support. "The Brain-only group exhibited the strongest, widest-ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling," they said. The researchers used electroencephalography (EEG) to record brain activity in order to assess participants' cognitive engagement and cognitive load during the tasks. They discovered that an accumulation of "cognitive debt" could arise from repeated reliance on external systems such as LLMs, which replace the cognitive processes required for independent thinking.
"Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity." The paper, which has yet to be peer reviewed, suggests that the use of LLMs could actually harm learning, especially for younger users. "In this study, we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study," the researchers concluded. The researchers said that "longitudinal studies" are needed to understand the long-term impact of AI chatbots on the human brain, "before LLMs are recognized as something that is net positive for humans." When Cointelegraph asked ChatGPT for its thoughts on the study, the chatbot replied, "This study doesn't say ChatGPT is inherently harmful -- rather, it warns against overreliance without reflection or effort."
[11]
ChatGPT use linked to cognitive decline: MIT research
ChatGPT can harm an individual's critical thinking over time, a new study suggests. Researchers at MIT's Media Lab asked subjects to write several SAT essays, separating them into three groups -- one using OpenAI's ChatGPT, one using Google's search engine and one using nothing, which they called the "brain-only" group. Each subject's brain was monitored through electroencephalography (EEG), which measured brain activity across multiple regions. The researchers discovered that subjects who used ChatGPT over a few months had the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels," according to the study. The study found that the ChatGPT group initially used the large language model, or LLM, to ask structural questions for their essays, but near the end of the study, they were more likely to copy and paste their essays. Those who used Google's search engine were found to have moderate brain engagement, but the "brain-only" group showed the "strongest, wide-ranging networks." The findings suggest that using LLMs can harm a user's cognitive function over time, especially in younger users. It comes as educators continue to navigate teaching at a time when AI is increasingly accessible as a tool for cheating. "What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental," the study's lead author, Nataliya Kosmyna, told TIME. "Developing brains are at the highest risk." However, using AI in education doesn't appear to be slowing down. In April, President Trump signed an executive order that aims to incorporate AI into U.S. classrooms.
"The basic idea of this executive order is to ensure that we properly train the workforce of the future by ensuring that school children, young Americans, are adequately trained in AI tools, so that they can be competitive in the economy years from now into the future, as AI becomes a bigger and bigger deal," Will Scharf, White House staff secretary, said at the time.
[12]
What happens when you use ChatGPT to write an essay? See what new study found.
A study published by the Massachusetts Institute of Technology Media Lab analyzed the cognitive function of 54 people writing an essay with: only the assistance of OpenAI's ChatGPT; only online browsers; or no outside tools at all. Largely, the study found that those who relied solely on ChatGPT to write their essays had lower levels of brain activity and presented less original writing. "As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with (large language model) integration in educational and informational contexts," the study states. "While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking and intellectual independence demands a very careful consideration and continued research." Here's a deeper look at the study and how it was conducted. How was the study conducted? A team of MIT researchers, led by MIT Media Lab research scientist Nataliya Kosmyna, studied 54 participants between the ages of 18 and 39. Participants were recruited from MIT, Wellesley College, Harvard, Tufts University and Northeastern University. The participants were randomly split into three groups, 18 people per group. The study states that the three groups included a large language model group, in which participants used only OpenAI's GPT-4o to write their essays. The second group was limited to using only search engines for their research, and the third was prohibited from using any tools. Participants in the latter group could use only their minds to write their essays. Each participant had 20 minutes to write an essay from one of three prompts taken from SAT tests, the study states. Three different options were provided to each group, totaling nine unique prompts.
An example of a prompt available to participants using ChatGPT was about loyalty: "Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn't true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn't true loyalty require us to speak up, even if we must be critical? Does true loyalty require unconditional support?" As the participants wrote their essays, they were hooked up to a Neuroelectrics Enobio 32 headset, which allowed researchers to collect EEG (electroencephalogram) signals, the brain's electrical activity. Following the sessions, 18 participants returned for a fourth session. Participants who had previously used ChatGPT to write their essays were required to use no tools, and participants who had used no tools before used ChatGPT, the study states. Quality of essays: What did the study find? In addition to analyzing brain activity, the researchers looked at the essays themselves. First and foremost, the essays of participants who used no tools (ChatGPT or search engines) had wider variability in topics, vocabulary and sentence structure, the study states. On the other hand, essays written with the help of ChatGPT were more homogeneous. All of the essays were "judged" by two English teachers and two AI judges trained by the researchers. The English teachers were not provided background information about the study but were able to identify essays written by AI. "These, often lengthy essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious. 
We, as English teachers, perceived these essays as 'soulless,' in a way, as many sentences were empty with regard to content and essays lacked personal nuances," a statement from the teachers, included in the study, reads. As for the AI judges, an AI judge trained by the researchers to evaluate essays like the human teachers scored most of the essays a four or above on a five-point scale. Brain activity: What did the study find? When it came to brain activity, researchers found "robust" evidence that participants who used no writing tools displayed the "strongest, widest-ranging" brain activity, while those who used ChatGPT displayed the weakest. Specifically, the ChatGPT group displayed 55% reduced brain activity, the study states. And though the participants who used only search engines had less overall brain activity than those who used no tools, these participants had a higher level of eye activity than those who used ChatGPT, even though both were using a digital screen. What's next for future studies? Further research on the long-term impacts of artificial intelligence chatbots on cognitive activity is needed, the study states. As for this particular study, researchers noted that a larger number of participants from a wider geographical area would be necessary for a more representative study. Writing outside of a traditional educational environment could also provide more insight into how AI use plays out in more generalized tasks. Greta Cross is a national trending reporter at USA TODAY.
[13]
Is ChatGPT making us dumb? MIT brain scans reveal alarming truth about AI's impact on the human mind
It's quick, it's clever, and it answers almost everything -- no wonder millions around the world rely on ChatGPT. But could this digital genie be dulling our minds with every wish we make? According to a startling new study by scientists at MIT's Media Lab, the answer may be yes. Researchers have found that excessive use of AI tools like ChatGPT could be quietly eroding your memory, critical thinking, and even your brain activity. Published on arXiv, the study, titled "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task", explores how language models -- especially ChatGPT -- affect the brain's ability to think, learn, and retain information. To examine what they call the "cognitive cost" of using large language models (LLMs), MIT researchers tracked 54 students over a four-month period using electroencephalography (EEG) devices to monitor brain activity. The participants were divided into three groups: one used ChatGPT, another relied on Google, and the last used no external help at all -- dubbed the "Brain-only" group. While the AI-powered group initially showed faster results, the long-term findings were more sobering. Students who depended on ChatGPT for essay writing exhibited poorer memory retention, reduced brain engagement, and lower scores compared to their peers. As the researchers noted, "The LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, and scoring." Interestingly, students who used Google showed moderate brain activity and generated more thoughtful content than those who leaned on ChatGPT. Meanwhile, those in the Brain-only group had the highest levels of cognitive engagement, producing original ideas and deeper insights. In fact, even when ChatGPT users later attempted to write without assistance, their brain activity remained subdued -- unlike the other groups, who showed increased engagement while adapting to new tools. 
This suggests that habitual ChatGPT usage might not just affect how we think, but whether we think at all. The study also points to how this over-reliance on AI encourages mental passivity. While ChatGPT users reported reduced friction in accessing information, this convenience came at a cost. As the researchers explained, "This convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or 'opinions'." The team also raised red flags about algorithmic bias: what appears as top-ranked content from an AI is often a result of shareholder-driven training data, not necessarily truth or value. This creates a more sophisticated version of the "echo chamber," where your thoughts are subtly shaped -- not by your own reasoning, but by an AI's probabilistic guesses. As AI tools become more embedded in our everyday tasks -- from writing emails to crafting essays -- this study is a wake-up call for students, educators, and professionals. While tools like ChatGPT are powerful assistants, they should not become cognitive crutches. The researchers caution that as language models continue to evolve, users must remain alert to their potential mental side effects. In a world where convenience is king, critical thinking might just be the first casualty.
[14]
MIT Study Warns: Frequent Use Of ChatGPT And AI Tools May Dull Your Mind, Drain Creativity, Impair Thinking, And Increase Risk Of Manipulation Over Time
Tech companies are paying increasing attention to AI tools, with a focus on making the technology more sophisticated. Chatbots are becoming smarter and more intuitive, moving closer to how humans interact and think. As tools like ChatGPT become more capable, users increasingly rely on them daily and academically. With AI becoming more intelligent, research now suggests that it is eroding humans' cognitive abilities. As the AI frenzy is not going away any time soon, and the technology becomes increasingly integrated into our daily lives, users seem to be depending on these AI tools for education and for general information on anything they are struggling with. While ChatGPT and other tools are becoming shockingly intelligent, a recent MIT study shed some light on the cognitive effects that ChatGPT dependence may have on users. The study divided participants into three groups and assigned them the task of writing essays using either ChatGPT, a search engine, or no tool at all. EEG scans were used to evaluate brain activity during the writing task. The essays were then assessed by both AI systems and human reviewers for better accuracy. According to the findings, the group of participants that relied on ChatGPT for the writing process had the lowest level of neural activation, especially in areas tied to cognitive engagement and memory. These participants also struggled with memory retention, as they were not able to properly recall what they had written. Compared to the ChatGPT users, those who did not rely on technological assistance showed strong memory retention and brain engagement and were able to engage in deeper cognitive involvement. Another session was then carried out in which the ChatGPT group was asked to write an essay without ChatGPT, and the results were even more alarming: the ChatGPT group performed worse than its peers. 
This showed that repeatedly relying on ChatGPT and other AI tools led to a state the researchers referred to as cognitive debt, which impaired the ability to think critically and eroded core learning skills. While these are the findings of a single study, the results do leave us thinking about our overdependence on ChatGPT and other tools, and the bigger price we pay for convenience.
[15]
ChatGPT is getting smarter, but excessive use could destroy our...
Not only is AI getting frighteningly smart, but it may be making us dumber as well. Scientists found that students who used ChatGPT to complete essays had poorer cognitive skills than those who relied on just their brain, according to a dystopian new study out of the Massachusetts Institute of Technology (MIT) in Cambridge. "Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone," the researchers wrote, per the Telegraph. The team had set out to determine the "cognitive cost" of using large language models (LLMs), which have become increasingly omnipresent in every sector of society, including academia. According to a winter survey by the Pew Research Center, approximately 26% of teen students used the AI chatbot to help them with assignments in 2024 -- up from just 13% in 2023. To determine how using synthetic homework assistants affects the mind, the MIT researchers tasked 54 people with writing several SAT essays, Time Magazine reported. Participants were split into three groups: one that relied on pure brainpower, one that used Google, and a third that enlisted the aid of the now-ubiquitous LLM ChatGPT. Each person was outfitted with an electroencephalography (EEG) device so researchers could monitor their brain activity while completing the task. They found that the ChatGPT group "performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring," according to the Telegraph. The readings also showed reduced activity in the regions of the brain associated with memory and learning, the authors said, noting that a lot of the "thinking and planning was offloaded." In fact, AI-aided scholars got lazier with each subsequent paper to the point that by the third essay, they were simply typing the prompt into ChatGPT and having it do all the work. 
"It was more like, 'Just give me the essay, refine this sentence, edit it, and I'm done,'" said the paper's main author, Nataliya Kosmyna. By contrast, the essayists with no external aid demonstrated the highest levels of neural connectivity, especially in regions of the brain responsible for language comprehension, creativity and memory. The brain-only group was also more engaged and satisfied with their essays, per the study. Interestingly, the Google group showed just slightly lower levels of engagement, but the same amount of recall -- a perhaps troubling prospect given the increasing number of people who dive into research using AI rather than internet search engines. Researchers deduced that "frequent AI tool users often bypass deeper engagement with material, leading to 'skill atrophy' in tasks like brainstorming and problem-solving." That could have long-term ramifications, including "diminished critical inquiry, increased vulnerability to manipulation" and "decreased creativity," the authors said. Fortunately, the findings weren't a total indictment of AI in academia. As a follow-up exam, the scientists asked the ChatGPT group and their brain-only counterparts to rewrite one of their previous essays -- but the AI-assisted participants did so without the chatbot, while the unassisted group could use the cutting-edge tech. Unsurprisingly, the original ChatGPT group didn't recall much info from their papers, indicating either a lack of engagement or an inability to remember it. Meanwhile, the former brain-only group exhibited a marked increase in brain activity across all the aforementioned regions despite using the tool. That suggests if used properly, AI could be a helpful academic tool rather than a cognition-destroying crutch. The warning about AI-induced brain atrophy comes -- somewhat frighteningly -- as the technology is becoming more "intelligent." 
Recently, Chinese researchers found the first-ever evidence that AI models like ChatGPT process information similarly to the human mind -- particularly when it comes to language grouping.
[16]
ChatGPT & Cognitive Laziness: Are We Losing Mental Edge?
Using AI wisely keeps mental skills sharp while enjoying its benefits. The growing use of artificial intelligence tools like ChatGPT has sparked a new debate in the world of education, workplaces, and society at large. While AI offers speed and convenience, experts are now asking whether this reliance on technology is making people mentally lazy. Recent studies, especially from well-known institutions like MIT, suggest that this concern is not just theoretical. There is increasing evidence that depending too much on AI could reduce the brain's ability to think critically, solve problems, and come up with creative ideas. A team of researchers from MIT recently studied how the brain behaves when people use AI tools like ChatGPT compared to when they work without such support. The study used brain scans to track activity while people completed writing tasks. The findings were clear: those who used ChatGPT showed much less brain activity than those who wrote on their own or used traditional search engines like Google. Even more concerning, the effect did not go away immediately. When participants were asked to write without AI after using it for a while, their brain activity remained low. This points to a worrying trend. The brain, like any muscle, seems to work less hard when a tool like ChatGPT is doing the heavy lifting. Over time, this could lead to a weakening of important mental skills such as critical thinking, memory, and creativity. The term "cognitive laziness" refers to a habit where people stop putting effort into thinking because they rely too much on external tools. This is not a new phenomenon. A similar effect was seen when GPS devices became common; people stopped trying to remember directions or read maps. In the case of AI, however, the concern is deeper. AI doesn't just help with small tasks. The technology can write essays, answer complex questions, and even suggest solutions to problems. 
If people stop engaging with these tasks mentally, they risk losing the ability to handle them independently. When tasks are handed over to AI, the brain is no longer pushed to work hard. Over time, this can result in a habit of avoiding effort, and the mind may become slower at processing new information or solving unfamiliar problems. One of the biggest dangers of overusing AI is the impact on creativity. Creativity requires effort, reflection, and the ability to connect different ideas in new ways. If AI is constantly filling in the gaps, people may stop trying to think outside the box. Writing, for example, could become formulaic, with less original thought or personal insight. Memory is also at risk. Just as people now often forget facts because they know they can search for them online (a pattern sometimes called the "Google Effect"), reliance on AI can weaken the brain's natural recall abilities. Instead of remembering key ideas, facts, or solutions, people may simply expect the AI to provide them on demand. Over time, this can dull the mind's sharpness. The effects of cognitive laziness are becoming noticeable in schools and workplaces. Teachers report that students who use AI tools to write essays or complete assignments often struggle to explain their work. Their understanding of the subject is shallow because the AI did most of the thinking for them. This raises concerns about the long-term learning of young people growing up with AI at their fingertips. In offices, AI tools are being used to draft reports, presentations, and emails. While this can boost efficiency, it also creates the risk that employees may present polished work without fully understanding the details. If something goes wrong or deeper knowledge is required, this lack of genuine understanding can become a serious problem. Not all experts agree that AI leads to cognitive laziness. 
Some studies show that when AI is used properly, it can support learning rather than harm it. For example, AI tutoring tools have helped students improve their performance in some cases. The difference lies in how AI is used. If it is treated as a helper that guides thinking rather than replacing it, the risks are much lower. It is important to focus on balance. AI should assist with tasks without taking over completely. This way, people can benefit from the speed and power of AI while still exercising their minds. AI models like ChatGPT do not truly understand what they are saying. They predict responses based on patterns in data. This means they can sometimes produce information that sounds convincing but is wrong. Such mistakes are called "hallucinations" in AI terms. People who depend on AI without checking facts risk spreading or believing incorrect information. This adds another layer of concern, as cognitive laziness might make people less likely to verify the accuracy of what AI provides. Every time a task is handed over to AI, the brain saves effort. But saving effort too often means the brain stops learning. This is known as "cognitive offloading." While offloading can be helpful for certain tasks, doing it all the time weakens memory, problem-solving, and creative abilities. The mental pathways that handle these skills become less active and, over time, harder to reactivate. Experts suggest that awareness is the first step to avoiding cognitive laziness. AI should be seen as a tool that supports thought, not replaces it. Simple habits can help protect mental sharpness. Writing without AI before using it for editing, double-checking AI answers, practicing memory exercises, and engaging in creative hobbies like drawing or puzzles are all ways to keep the mind active. Education systems and workplaces will need to focus on encouraging thoughtful AI use. 
Assignments and tasks should be designed in a way that still requires original thinking and problem-solving, even if AI is part of the process. Many believe that the best way forward is "hybrid intelligence," where human brains and AI work together. In this vision, people provide judgment, creativity, and moral insight, while AI handles repetitive or data-heavy tasks. This approach allows both strengths to shine and prevents the risk of mental skills fading away. AI tools like ChatGPT have the power to help in countless ways, but they also pose risks if used carelessly. Evidence shows that too much dependence on AI can make people mentally lazy, reducing critical thinking, memory, and creativity. However, by using AI thoughtfully and continuing to engage the mind actively, it is possible to enjoy the benefits of technology without losing the mental edge that makes human intelligence unique. The key lies in balance and letting AI supplement, not replace, human thought.
[17]
Educators warn that AI shortcuts are already making kids lazy:...
A new MIT study suggests that AI is degrading critical thinking skills -- which does not surprise educators one bit. "Brain atrophy does occur, and it's obvious," Dr. Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, told The Post. "Talk to any professor in the humanities or social sciences and they will tell you that students who just throw in a prompt and hand in their paper are not learning." Researchers at MIT's Media Lab found that individuals who wrote essays with the help of ChatGPT showed less brain activity while completing the task, committed less to memory and grew gradually lazier in the writing process over time. A group of 54 18- to 39-year-olds was split into three cohorts -- one using ChatGPT, one using Google search and one "brain-only" -- and asked to write four SAT essays over the course of four months. Scientists monitored their brain activity with EEG scans and found that the ChatGPT group had the lowest brain engagement when writing and showed lower executive control and attention levels. Over four sessions, the participants in the study's ChatGPT group started to use AI differently. At first, they generally asked for broad and minimal help, like with structure. But near the end of the study period, they were more likely to resort to copying and pasting entire sections of writing. Murphy Kenefick, a high-school literature teacher in Nashville, said he has seen first-hand how students' "critical thinking and attention spans have been demolished by AI." "It's especially a problem with essays, and it's a fight every assignment," he told The Post. "I've caught it about 40 times, and who knows how many other times they've gotten away with it." In the MIT study, the "brain-only" group had the "strongest, wide-ranging networks" in their brain scans, showing heightened activity in regions associated with creativity, memory and language processing. 
They also expressed more engagement, satisfaction and ownership of their work.

"There is a strong negative correlation between AI tool usage and critical thinking skills, with younger users exhibiting higher dependence on AI tools and consequently lower cognitive performance scores," the study's authors warn. "The impact extends beyond academic settings into broader cognitive development."

Asked to rewrite prior essays, the ChatGPT group was least able to recall them, suggesting they didn't commit them to memory as strongly as the other groups did. The ChatGPT group also tended to produce more similar essays, prompting the two English teachers brought in to evaluate them to characterize the essays as "soulless" -- something teachers all over the country say they are seeing more regularly.

Robert Black, who retired last week from teaching AP and IB high-school history in Canandaigua, New York, said the last two years of his 34-year career were a "nightmare because of ChatGPT."

"When caught, kids just shrug," he said. "They can't even fathom why it is wrong or why the writing process is important."

Black also points out that AI has only worsened a gradual erosion of skills that he attributes to smartphones. "Even before ChatGPT it was harder and harder to get them to think out a piece of writing -- brainstorming, organizing and composing," he told The Post. "Now that has become a total fool's errand."

Psychologist Jean Twenge, the author of "10 Rules for Raising Kids in a High-Tech World," agrees that AI is just one additional barrier to learning for Gen Z and Gen Alpha. She points out that international math, reading and science standardized test scores have been declining for years, which she attributes to pandemic lockdowns and the advent of smartphones and social media.

"With the addition of AI, academic performance will likely decline further, as students who regularly use AI to write essays are not learning how to write," Twenge told The Post.
"When you don't learn how to write, you don't learn how to think deeply."

The MIT study was spearheaded by Media Lab research scientist Nataliya Kosmyna, who told Time Magazine that "developing brains are at the highest risk."

While Toby Walsh, Chief Scientist at the University of New South Wales AI Institute in Sydney, Australia, acknowledges that the study's findings are frightening, he also warns educators against banning the technology outright.

"We have to be mindful that there are great opportunities. I'm actually incredibly jealous of what students have today," Walsh said, recalling how his 15-year-old daughter recently used an AI voice to quiz her in French as a study aid.

"I don't think we should be banning AI," Walsh said. But, he added, "the concern is that AI surpasses human intelligence, not because AI got better but because human intelligence got worse."

Kenefick, meanwhile, imagines his students "wouldn't care" about the study's findings: "They just want the grade. They see no real incentive to develop any useful skills. It's very troubling."
[18]
New MIT Study Suggests Using AI Like ChatGPT Makes Writers Lazier, More Slow-Witted
'Cognitive activity scaled down in relation to external tool use,' the researchers write in their study.

A new study from researchers at MIT has ignited debate over the effects of AI tools like ChatGPT on learning and critical thinking. The results suggest that prolonged use of generative artificial intelligence for academic tasks, such as essay writing, could lead to diminished cognitive engagement and deteriorated intellectual capabilities, an effect the researchers called "cognitive debt."

"Cognitive activity scaled down in relation to external tool use," the researchers at the Massachusetts Institute of Technology's Media Lab write in their study.

Led by Nataliya Kosmyna, the study involved 54 participants aged 18 to 39 from the Boston area. The participants were divided into three groups and tasked with writing several SAT-style essays. One group used ChatGPT, a chatbot built on a large language model (LLM) designed to understand and generate human language. Another group used Google Search, and the final group relied solely on their own knowledge and skills. Throughout the study, researchers measured participants' brain activity via electroencephalography and analyzed the quality and originality of their writing.

"Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity," the researchers wrote. "Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

Participants who used ChatGPT exhibited significantly lower neural engagement than the other two groups. Over time, their essays became increasingly generic and repetitive, with researchers noting a lack of creative thought and originality. By the third essay, many in the ChatGPT group resorted to simple copy-and-paste workflows, actively bypassing deeper cognitive processes.
Meanwhile, the group that wrote essays with only their brains demonstrated the highest levels of cognitive engagement. Their brain scans showed increased neural connectivity in areas associated with creativity, memory, and critical thinking. The group also expressed more satisfaction with their work and displayed stronger recall of their own content. Similarly, participants in the Google Search group exhibited high levels of brain activity compared to those using ChatGPT.

The final phase of the study involved role reversals: ChatGPT users were asked to write their final essay without the AI tool, while the brain-only group was allowed to use ChatGPT. Those transitioning from AI to manual writing struggled significantly. They had a harder time recalling their previous content and faced challenges producing essays comparable to those of participants who had started without AI.

Despite the concerning findings, the study's authors urged caution in interpreting the results, citing the limited sample size and the preliminary nature of their conclusions. Only 18 participants completed the final essay-writing phase, a factor that may have affected the findings.

Nonetheless, the study raises important questions about the educational use of generative AI. The researchers drew comparisons to the introduction of calculators in the 1970s, when educators responded to the technology by raising the complexity of exam questions, ensuring students still had to engage in high-level problem-solving. The researchers suggest a similar approach may be needed for AI.
[19]
MIT's ChatGPT study says AI is making you dumber: Here's how
Artificial intelligence has become second nature to us in 2025. From students drafting essays to professionals powering through reports, tools like ChatGPT have woven themselves into the fabric of how we think, work, and create. But a new study out of MIT's Media Lab, led by neuroscientist Nataliya Kosmyna, is raising an urgent question: what is artificial intelligence doing to our brains?

Over four months, 54 participants were fitted with EEG caps while completing writing tasks with either ChatGPT, Google Search, or nothing but their own minds. The findings, released ahead of peer review, paint a striking picture of how AI might be reshaping human cognition. And not always for the better. Here are the biggest insights from the study that might change the way we interact with AI, from the classroom to the boardroom.

One of the most jaw-dropping revelations: participants using ChatGPT experienced a 47% drop in neural connectivity. That's a steep decline from 79 functional brain connections to just 42. Think of it as your brain's Wi-Fi signal going weak just when you need it most. Meanwhile, participants who tackled the tasks without AI lit up with alpha and theta brain waves, both closely tied to creativity, memory, and deep thinking.

Even more alarming? Over 83% of ChatGPT users were unable to recall key details of the essays they had just written. In short, while AI might make our output look polished, it's doing far less for what's happening under the hood. As Kosmyna puts it, "When the machine starts thinking for us, the brain stops needing to show up."

The productivity gains are undeniable. On average, ChatGPT helped users complete their tasks 60% faster. But that speed comes with a price: a 32% reduction in germane cognitive load, the kind of mental effort tied to understanding, learning, and reflection.
The researchers call this "cognitive debt": a metaphorical loan we take out when we outsource our thinking to AI. Eventually, the interest comes due: we become quicker, yes, but not necessarily wiser. The participants who wrote without AI often described a greater sense of satisfaction and engagement, evidence that slower, more effortful work still holds value in an AI-saturated world.

Perhaps the most sobering twist came when long-time ChatGPT users were asked to complete tasks without assistance. The result? Their performance declined more than that of those who had never used AI at all. It was as if their brains had forgotten how to work independently. This finding echoes growing concerns among educators and employers alike: that AI isn't just changing how we work, it may be quietly eroding our ability to think without it.

However, the opposite was also true. Users with strong foundational cognitive skills who used AI sparingly actually saw an increase in brain activity during assisted tasks. The message? AI isn't inherently harmful; it's how we use it that matters.

Despite the red flags, the study offers hope. Participants who had relied only on their own minds or used traditional tools like search engines showed enhanced brain activity when they first began experimenting with ChatGPT. Their prefrontal cortex lit up, and memory retention improved, suggesting that AI, when used as a support rather than a substitute, can actually boost cognitive performance.

This is the model the researchers hope to champion: AI as a co-pilot, not an autopilot. It's about balance, letting the machine help without handing over the wheel entirely. As social platforms light up with hot takes and debate, this study cuts through the noise with a clear warning: AI can sharpen us or soften us. The difference lies in how we engage with it.
Will we use tools like ChatGPT to spark new ideas, enhance creativity, and stretch our thinking or let them think for us entirely? The answer may shape not just our productivity, but the very essence of our minds.
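The connectivity figure quoted in this account (a fall from 79 functional brain connections to 42, reported as a 47% drop) can be sanity-checked with a line of arithmetic. The snippet below is purely illustrative and assumes those two counts are the before/after values behind the reported percentage:

```python
# Illustrative check of the reported neural-connectivity decline:
# the article cites a fall from 79 functional brain connections to 42,
# described as a 47% drop.
before, after = 79, 42
drop_pct = (before - after) / before * 100
print(f"{drop_pct:.1f}% drop")  # ~46.8%, which rounds to the reported 47%
```

The raw figure comes out at about 46.8%, so the article's rounded "47%" is consistent with the underlying counts.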
[20]
Is ChatGPT making us lazy? New MIT study raises questions
Researchers urge caution, especially in schools, warning that early AI exposure may harm cognitive development in young minds.

Whether it's drafting an email, summarising a long article, or finding just the right words for an essay, ChatGPT has become our go-to tool. It makes our work quick, convenient, and often impressively accurate. But a new study from MIT's Media Lab has raised a serious question: is our growing reliance on AI tools making our brains lazier?

The research was conducted on 54 participants aged between 18 and 39. Split into three groups, two were asked to write an SAT-style essay using ChatGPT or Google Search, while the third relied solely on its own brainpower without any digital assistance. As they wrote, researchers monitored their brain activity using EEG headsets, and the findings are not very pleasing for AI enthusiasts.

Those taking help from ChatGPT showed the lowest brain activity across 32 regions. Not only that, but their essays were described as "soulless," and their creativity, attention, and effort declined with each new task. By the final round, many participants were simply pasting prompts into ChatGPT and copying the answers with minimal editing. Their brainwaves reflected this diminished mental engagement.

On the other hand, those who wrote without using any tools showed the highest engagement, particularly in terms of imagination, focus, and memory. They also reported feeling more satisfied with their work. Even the group using Google Search performed better cognitively than those relying on ChatGPT, suggesting that navigating through information still stimulates the brain more than using AI.

Lead researcher Nataliya Kosmyna says the findings were urgent enough to release before peer review, especially since AI tools are being considered for use in schools.
"I'm afraid someone will say, 'Let's do GPT kindergarten,'" she said. "That would be detrimental. Developing brains are at the highest risk."

What's even worse is that when participants using ChatGPT were later asked to rewrite one of their earlier essays without any help, they struggled to remember what they'd written. "It was efficient," Kosmyna said, "but nothing was integrated into their brains."

Researchers conclude that while AI boosts quick productivity, it may hurt long-term learning and brain development.
A recent MIT study suggests that using AI tools like ChatGPT for writing tasks may lead to reduced cognitive engagement and potential learning impairments, raising concerns about the widespread adoption of AI in education and other fields.
A recent study conducted by researchers from the Massachusetts Institute of Technology (MIT) has shed light on the potential cognitive consequences of using AI tools like ChatGPT for writing tasks. The research, which is currently awaiting peer review, suggests that reliance on large language models (LLMs) may lead to reduced cognitive engagement and potential learning impairments 1.
The MIT team, led by Dr. Nataliya Kosmyna, conducted a four-month study involving 54 adults divided into three groups: one using ChatGPT, another using search engines, and a third relying solely on their own cognitive abilities 2. Participants were tasked with writing a series of essays while their brain activity was monitored using electroencephalogram (EEG) headsets.
Key findings from the study include:
Reduced Brain Activity: The group using ChatGPT showed significantly lower cognitive engagement compared to the other groups, with up to a 55% reduction in brain connectivity as measured by Dynamic Directed Transfer Function (dDTF) 2.
Impaired Memory and Recall: ChatGPT users demonstrated poorer recall of their written content and a lower sense of ownership over their work 3.
Homogeneous Output: Essays produced with AI assistance tended to be more uniform and closely adhered to the original prompts 3.
The study's results raise significant concerns about the impact of AI tools on learning and cognitive development. Dr. Kosmyna emphasized the potential risks of integrating AI too early in the educational process, stating, "In this study, we demonstrate the pressing matter of a likely decrease in learning skills" 2.
The researchers suggest that delaying AI integration until learners have engaged in sufficient self-driven cognitive effort may promote both immediate tool efficacy and lasting cognitive autonomy 2.
As AI tools become increasingly prevalent in various fields, including education, law, and media, the study's findings have sparked broader discussions about the long-term effects of AI reliance:
Cognitive Independence: There are growing concerns that overreliance on AI may reduce cognitive independence and critical thinking abilities 4.
Educational Policies: The rapid adoption of AI in education has prompted calls for careful consideration of its implementation, particularly in early education 4.
Mental Health: Some studies have reported indicators of addiction and withdrawal symptoms among heavy ChatGPT users, raising concerns about the psychological impact of AI tools 5.
While the MIT study provides valuable insights, it's important to note that the research is still awaiting peer review, and the sample size was relatively small. Dr. Kosmyna emphasized that the findings should be treated with caution and considered preliminary 2.
As AI continues to integrate into various aspects of society, further research is needed to fully understand its cognitive and practical impacts on learning and cognitive development. The study serves as a crucial starting point for exploring the complex relationship between AI tools and human cognition in the digital age.