15 Sources
[1]
Brain activity lower when using AI chatbots: MIT research
EEG and recall tests suggest people who use ChatGPT to write essays aren't learning much.

Using AI chatbots actually reduces activity in the brain versus accomplishing the same tasks unaided, and may lead to poorer fact retention, according to a new preprint study out of MIT. Seeking to understand how the use of LLM chatbots affects the brain, a team led by MIT Media Lab research scientist Dr. Nataliya Kosmyna hooked up a group of Boston-area college students to electroencephalogram (EEG) headsets and gave them 20 minutes to write a short essay. One group was directed to write without any outside assistance, a second group was allowed to use a search engine, and a third was instructed to write with the assistance of OpenAI's GPT-4o model. The process was repeated four times over several months.

While not yet peer reviewed, the pre-publication research results suggest a striking difference between the brain activity of the three groups and the corresponding creation of neural connectivity patterns. To put it bluntly and visually, brain activity in the LLM-using cohort was ... a bit dim.

EEG analysis showed that each group exhibited distinct neural connectivity patterns, with brain connectivity "systematically scaled down with the amount of external support." In other words, the search engine users showed less brain engagement, and the LLM cohort "elicited the weakest overall coupling."

Cognitive load in the participants was measured using a method known as Dynamic Directed Transfer Function (dDTF), which measures specific brain activity related to the flow of information across different brain regions. dDTF is able to account for the strength and direction of flow, making it a good representation of "executive function, semantic processing and attention regulation," according to the MIT researchers.
The researchers said that, compared to the baseline established by the group writing using nothing but their grey and white matter, the search engine group showed between 34 and 48 percent less dDTF connectivity. The LLM group, meanwhile, showed an even larger reduction of up to 55 percent in dDTF signal magnitude. Put simply, relying on LLMs - and, to a lesser extent, search engines - significantly reduces task-related brain connectivity, indicating lower cognitive engagement during the essay-writing task.

"The Brain-only group leveraged broad, distributed neural networks for internally generated content," the researchers said of their results. "The Search Engine group relied on hybrid strategies of visual information management and regulatory control; and the LLM group optimized for procedural integration of AI-generated suggestions."

As the researchers explained, those distinctions raise "significant implications" for educational practices and how we understand learning - namely in that there appears to be a definite tradeoff between internal synthesis of information and external support.

Participants' performance on tests of recall and perceived ownership of what they wrote was demonstrably worse in the LLM cohort, the research team found. "In this study we demonstrate the pressing matter of a likely decrease in learning skills," the researchers said. "The LLM group's participants performed worse than their counterparts in the Brain-only group at all levels."

The fourth session of essay writing reinforced those findings. In the last research session, participants who were originally told to rely on their brains or LLMs swapped roles and were given another set of essay instructions. Unsurprisingly, the LLM group performed poorly when asked to rely on their own thought processes.
"In Session 4, removing AI support significantly impaired the participants from original LLM group," the researchers said. The opposite was true for the other cohort. "The so-called Brain-to-LLM group exhibited significant increase in brain connectivity across all EEG frequency bands when allowed to use an LLM on a familiar topic."

The findings suggest that the use of AI early in the learning process "may result in shallow encoding" that leads to poor recall of facts and a lack of learning because all the effort has been offloaded. Learning about something with one's own cognitive faculties first, and only then using AI to dig deeper, on the other hand, appears to be fine.

"Taken together, these findings support an educational model that delays AI integration until learners have engaged in sufficient self-driven cognitive effort," the MIT team concluded. "Such an approach may promote both immediate tool efficacy and lasting cognitive autonomy."

That might not be a shocking conclusion, but given the increasing number of young people relying on AI to do their schoolwork, the issue needs to be addressed before the world produces an entire generation of intellectually stunted AI junkies. Kosmyna told The Register in an email that she doesn't want to use words like "stupid, dumb or brainrot" to refer to AI's effect on us, arguing that doing so does a disservice to the work her team has done. Still - it is having an effect that needs to be addressed.

"While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking, and intellectual independence demands a very careful consideration and continued research," the paper concluded. With the paper yet to undergo peer review, Kosmyna noted that its conclusions "are to be treated with caution and as preliminary."
Nonetheless, she wrote, the pre-review findings can still serve "as a preliminary guide to understanding the cognitive and practical impacts of AI on learning." The team hopes that future studies will look not only at the use of LLMs in modalities beyond text, but also at AI's impact on memory retention, creativity and written fluency.

As for what the MIT team plans to research next, Kosmyna told us the team is turning its attention to a similar study looking at vibe coding, or using AI to generate code from natural language prompts. "We have already collected the data and are currently working on the analysis and draft," Kosmyna said.

She added that, as this is one of the first such studies of AI's effect on the human brain, she expects this work will trigger additional studies in the future "with different protocols, populations, tasks, methodologies, that will add to the general understanding of the use of this technology in different aspects of our lives." With AI creeping into seemingly every aspect of our lives at an ever-increasing pace, there's going to be plenty of research to be done. ®
[2]
MIT brain scans suggest that using GenAI tools reduces cognitive activity
Why it matters: As the use of generative AI becomes increasingly common in education, law, politics, media, and other fields, many worry that reliance on the technology may reduce cognitive independence. A recent study from MIT strongly supports this concern, indicating that the use of digital tools significantly alters brain activity.

The newly published paper explains that as participants in an experiment wrote a series of essays, electronic brain monitoring revealed substantially weaker connections between regions of the brain in those who used large language models (LLMs). This correlated with poorer memory and more derivative output.

Three groups of participants each wrote three essays: one group relied on an LLM, another used search engines, and the third worked without any external tools. Then, the LLM and "brain-only" groups swapped roles for a fourth essay.

Although essays from the brain-only group were not always the most accurate, participants in that group displayed significantly stronger neural connectivity, suggesting deeper mental engagement. While essays from the LLM group received high marks from both human graders and AI judges, the writing tended to be more homogeneous and adhered more closely to the original prompt. Participants who used LLMs were more likely to copy and paste, edit their work less, and struggle to quote from their own writing.

During the final session, LLM users also had difficulty recalling information from earlier in the experiment. Their brain activity had reset to a novice level regarding the essay topics. Although the brain-only group showed some decline in connectivity over time, it remained at an intermediate level, and participants could easily remember earlier material.
Interestingly, participants who switched from the LLM group to the brain-only group showed increased neural connectivity. The group that used search engines demonstrated low-to-intermediate brain connectivity. Their writing was also highly homogeneous, but they were better able to quote from their work, suggesting stronger memory retention compared to LLM users. Overall, the results indicate that any use of digital tools impacts brain activity, but search engines require more mental engagement than generative AI. These findings could have significant implications for education, where the use of AI is becoming widespread. In many schools, most students now use tools like ChatGPT to varying degrees when completing assignments. Some generate only outlines or topic ideas, while others use the assignments as prompts and submit the output without so much as reading it. Teachers and professors have also started using AI to grade assignments and try to detect the use of AI, with varying results. The results of the MIT study suggest that both groups might suffer cognitively, regardless of how much or how little they rely on LLMs.
[3]
Using ChatGPT to write? MIT study says there's a cognitive cost.
Relying on ChatGPT significantly affects critical thinking abilities, according to a new study. Researchers from MIT Media Lab, Wellesley College, and Massachusetts College of Art and Design conducted a four-month study titled "Your Brain on ChatGPT" and found users of large language models (LLMs) like OpenAI's chatbot "consistently underperformed at neural, linguistic, and behavioral levels." This included the participants' decreased brain activity, a weaker sense of authorship, and an inability to remember what they wrote -- which persisted even when they weren't allowed to use an LLM.

Anyone who uses ChatGPT for writing may have drawn similar conclusions; the point of using LLMs, after all, is to automate the work and outsource the critical thinking effort. But with this MIT study, there's now scientific evidence showing that relying on ChatGPT and other LLMs can impair memory and learning. It's worth noting that the study, published June 10, involved a small group of participants and has not yet been peer-reviewed, but according to an interview with Time, the lead author Nataliya Kosmyna felt it was important to publish the study as is, given the rapid adoption of genAI, particularly in education.

"What really motivated me to put it out now before waiting for a full peer review is that I am afraid in six to eight months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental," said Kosmyna. In April, President Donald Trump signed an executive order to promote "AI literacy and proficiency of K-12 students," so the urgency to understand how ChatGPT is affecting our brains is all too real.

The study divided 54 participants into three groups with the task of writing SAT essays over the course of three sessions.
One group used ChatGPT ("LLM group"), another group used Google search ("search engine group"), and the third group wasn't allowed any tools ("brain-only"). In an additional fourth session with 18 participants, the LLM group was tasked with writing an essay without ChatGPT and the brain-only group was allowed to use ChatGPT. Researchers measured the participants' brain activity while they wrote the essays using electroencephalography (EEG), analyzed the essays using Natural Language Processing (NLP), and had the essays scored by AI and human graders.

Among the many discoveries detailed in the lengthy paper, researchers found a stark decrease in the LLM group's "alpha band connectivity," which reflects cognitive functions like memory and language processing, compared to the brain-only group. This was evident when the participants were asked to quote from the essays they had written. "LLM users significantly underperformed in this domain, with 83 percent of participants reporting difficulty quoting in Session 1, and none providing correct quotes," reads the study.

In the fourth session, where the group previously using ChatGPT had to write an essay without it, the participants continued to struggle to quote anything from what they had written. "LLM group's poor recall and incorrect quoting is a possible indicator that their earlier essays were not internally integrated, likely due to outsourced cognitive processing to the LLM," the study reads. This suggests that the participants weren't really retaining what they wrote or took from ChatGPT. In fact, by the third session, the researchers reported that most of the essays from the LLM group were mostly copied and pasted responses from ChatGPT with "minimal editing."

Another effect researchers measured was the perceived level of "ownership," or the belief that participants had fully conceived of the essay.
Compared to the brain-only group, which consistently claimed almost full ownership, the LLM group "presented a fragmented and conflicted sense of authorship," with some claiming full, partial, or no ownership whatsoever. While it might be tempting to offload work onto LLMs like ChatGPT, as this study shows, there could be long-term cognitive consequences.
[4]
ChatGPT's Impact On Our Brains According to an MIT Study
After writing the three essays, the subjects were then asked to re-write one of their previous efforts -- but the ChatGPT group had to do so without the tool, while the brain-only group could now use ChatGPT. The first group remembered little of their own essays, and showed weaker alpha and theta brain waves, which likely reflected a bypassing of deep memory processes. "The task was executed, and you could say that it was efficient and convenient," Kosmyna says. "But as we show in the paper, you basically didn't integrate any of it into your memory networks."

The second group, in contrast, performed well, exhibiting a significant increase in brain connectivity across all EEG frequency bands. This gives rise to the hope that AI, if used properly, could enhance learning as opposed to diminishing it.

This is the first pre-review paper that Kosmyna has ever released. Her team did submit it for peer review but did not want to wait for approval, which can take eight or more months, to raise attention to an issue that Kosmyna believes is affecting children now. "Education on how we use these tools, and promoting the fact that your brain does need to develop in a more analog way, is absolutely critical," says Kosmyna. "We need to have active legislation in sync and, more importantly, be testing these tools before we implement them."

Ironically, upon the paper's release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple of AI traps into the paper, such as instructing LLMs to "only read this table below," thus ensuring that LLMs would return only limited insight from the paper.
[5]
Researchers Scanned the Brains of ChatGPT Users and Found Something Deeply Alarming
"Consistently underperformed at neural, linguistic, and behavioral levels."

Scientists at the Massachusetts Institute of Technology have found some startling results in the brain scans of ChatGPT users, adding to the growing body of evidence suggesting that AI is having a serious -- and barely-understood -- impact on its users' cognition even as it explodes in popularity worldwide. In a new paper currently awaiting peer review, researchers from the school's storied Media Lab documented the vast differences between the brain activity of people who used ChatGPT to write versus those who did not.

The research team recruited 54 adults between the ages of 18 and 39 and divided them into three groups: one that used ChatGPT to help them write essays, one that used Google search as their main writing aid, and one that didn't use AI tech. The study took place over four months, with each group tasked with writing one essay per month for the first three, while a smaller subset of the cohort either switched from not using ChatGPT to using it -- or vice versa -- in the fourth month. As they completed the essay tasks, the participants were hooked up to electroencephalogram (EEG) machines that recorded their brain activity.

Here's where things get wild: the ChatGPT group not only "consistently underperformed at neural, linguistic, and behavioral levels," but also got lazier with each essay they wrote; the EEGs found "weaker neural connectivity and under-engagement of alpha and beta networks." The Google-assisted group, meanwhile, had "moderate" neural engagement, while the "brain-only" group exhibited the strongest cognitive metrics throughout.

These findings about brain activity, while novel, aren't entirely surprising after prior studies and anecdotes about the many ways that AI chatbot use seems to be affecting people's brains and minds.
Previous MIT research, for instance, found that ChatGPT "power users" were becoming dependent on the chatbot and experiencing "indicators of addiction" and "withdrawal symptoms" when they were cut off. And earlier this year Carnegie Mellon and Microsoft -- which has invested billions to bankroll OpenAI, the maker of ChatGPT -- found in a joint study that heavy chatbot use appears to almost atrophy critical thinking skills. A few months later, The Guardian found in an analysis of studies like that one that researchers are growing increasingly concerned that tech like ChatGPT is making us stupider, and a Wall Street Journal reporter even owned up to his cognitive skill loss from over-using chatbots.

Beyond the neurological impacts, there are also lots of reasons to be concerned about how ChatGPT and other chatbots like it affect our mental health. As Futurism found in a recent investigation, many users are becoming obsessed with ChatGPT and developing paranoid delusions into which the chatbot is pushing them deeper. Some have even stopped taking their psychiatric medication because the chatbot told them to.

"We know people use ChatGPT in a wide range of contexts, including deeply personal moments, and we take that responsibility seriously," OpenAI told us in response to that reporting. "We've built in safeguards to reduce the chance it reinforces harmful ideas, and continue working to better recognize and respond to sensitive situations."

Add it all up, and the evidence is growing that AI is having profound and alarming effects on many users -- but so far, we're seeing no evidence that corporations are slowing down in their attempts to inject the tech into every part of society.
[6]
This is your brain on ChatGPT
Sizzle. Sizzle. That's the sound of your neurons frying over the heat of a thousand GPUs as your generative AI tool of choice cheerfully churns through your workload. As it turns out, offloading all of that cognitive effort to a robot as you look on in luxury is turning your brain into a couch potato. That's what a recently published (and yet to be peer-reviewed) paper from some of MIT's brightest minds suggests, anyway.

The study examines the "neural and behavioral consequences" of using LLMs (Large Language Models) like ChatGPT for, in this instance, essay writing. The findings raise serious questions about how long-term use of AI might affect learning, thinking, and memory. More worryingly, we recently witnessed it play out in real life.

The study, titled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, involved 54 participants split into three groups: one assisted by an LLM (ChatGPT), one assisted by a search engine, and one working with no tools at all. Across three sessions, these groups were tasked with writing an essay about one of three changing topics. An example of the essay question for the topic of "Art" was: "Do works of art have the power to change people's lives?" Participants then had 20 minutes to answer the question related to their chosen topic in essay form, all while wearing an Enobio headset to collect EEG signals from their brain. In a fourth session, the LLM and Brain-only groups were swapped to measure any potential lasting impact of the prior sessions.

The results? Across the first three tests, Brain-only writers had the most active, widespread brain engagement during the task, while LLM-assisted writers showed the lowest levels of brain activity across the board (though they routinely completed the task fastest). Search engine-assisted users generally fell somewhere in between the two. In short, Brain-only writers were actively engaging with the assignment, producing more creative and unique writing while actually learning.
They were able to quote their essays afterwards and felt strong ownership of their work. By contrast, LLM users engaged less with each session, came to rely on ChatGPT more uncritically as the study went on, and felt less ownership of the results. Their work was judged to be less unique, and participants often failed to accurately quote from their own work, suggesting reduced long-term memory formation. Researchers referred to this phenomenon as "metacognitive laziness" -- not just a great name for a Prog-Rock band, but also a perfect label for the hazy distance between autopilot and Copilot, where participants disengage and let the AI do the thinking for them.

But it was the fourth session that yielded the most worrying results. According to the study, when the LLM and Brain-only groups traded places, the group that previously relied on AI failed to bounce back to the engagement levels shown by those who had been writing unaided. To put it simply, sustained use of AI tools like ChatGPT to "help" with tasks that require critical thinking, creativity, and cognitive engagement may erode our natural ability to access those processes in the future.

But we didn't need a 206-page study to tell us that. On June 10, an outage lasting over 10 hours saw ChatGPT users cut off from their AI assistant, and it provoked a disturbing trend of people openly admitting, sans any hint of awareness, that without access to OpenAI's chatbot, they'd suddenly forgotten how to work, write, or function. This study may have used EEG caps and grading algorithms to prove it, but most of us may already be living its findings.

When faced with an easy or hard path, many of us would assume that only a particularly smooth-brained individual would willingly take the more difficult, obtuse route. However, as this study claims, the so-called easy path may be quietly sanding down our frontal lobes in a lasting manner -- at least when it comes to our use of AI.
That's especially frightening when you think of students, who are adopting these tools en masse, with OpenAI itself pushing for wider embrace of ChatGPT in education as part of its mission to build "an AI-Ready Workforce." A 2023 study conducted by Intelligent.com revealed that a third of U.S. college students surveyed used ChatGPT for schoolwork during the 2022/23 academic year. In 2024, a survey from the Digital Education Council claimed that 86% of students across 16 countries use artificial intelligence in their studies to some degree. AI's big sell is productivity, the promise that we can get more done, faster. And yes, MIT researchers have previously concluded that AI tools can boost worker productivity by up to 15%, but the long-term impact suggests codependency over competency. And that sounds a lot like regression.
[7]
Could using ChatGPT make you dumber? New study says it can
A new study shows that using ChatGPT could mean "a likely decrease" in learning skills and could lead users to internalise "shallow or biased perspectives." A new pre-print study from the US-based Massachusetts Institute of Technology (MIT) found that using OpenAI's ChatGPT could lead to cognitive decline.

Researchers with the MIT Media Lab broke participants into three groups and asked them to write essays using only ChatGPT, a search engine, or no tools at all. Brain activity was recorded with an electroencephalogram (EEG) during the task. Then, the essays were evaluated by both humans and artificial intelligence (AI) tools.

The study showed that the ChatGPT-only group had the lowest neural activation in parts of the brain and had a hard time recalling or recognising their writing. The brain-only group that used no technology was the most engaged, showing both cognitive engagement and memory retention.

The researchers then ran a later session in which the ChatGPT group was asked to do the task without assistance. In that session, those from the original ChatGPT group performed worse than their peers, with writing that was "biased and superficial."

The study found that repeated GPT use can come with "cognitive debt" that reduces long-term learning performance in independent thinking. In the long run, people with cognitive debt could be more susceptible to "diminished critical inquiry, increased vulnerability to manipulation and decreased creativity," as well as a "likely decrease" in learning skills. "When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalising shallow or biased perspectives," the study continued.

The study also found higher rates of satisfaction and brain connectivity in the participants who wrote all essays with just their minds compared to the other groups.
Those from the other groups felt less connected to their writing and were not able to provide a quote from their essays when asked to by the researchers. The authors recommend that more studies be done about how any AI tool impacts the brain "before LLMs are recognised as something that is net positive for humans."
[8]
MIT Study Warns of Cognitive Decline with LLM Use | AIM
While LLMs reduce immediate cognitive load and make tasks feel easier, this reduction comes at the cost of deep engagement and memory formation.

A recent study from MIT Media Lab reveals how the increasing use of large language models (LLMs) like ChatGPT may be quietly dulling our cognitive skills, especially in complex tasks like writing and critical thinking. The findings, part of a study titled 'Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task', show that while AI tools can improve productivity, they come with measurable mental trade-offs.

In an effort to understand this phenomenon, the researchers split participants into three groups: one used an LLM (ChatGPT), another used a traditional search engine, and the third used only their brain -- no external assistance. Through EEG (electroencephalogram) brain monitoring and essay analysis, the study found that LLM users showed significantly weaker neural connectivity, indicating lower cognitive effort and engagement compared to others. "Brain connectivity systematically scaled down with the amount of external support," the paper stated. Those who relied solely on their brain had the strongest and widest-ranging neural networks, while LLM users had the weakest overall coupling.

Shockingly, LLM users were far less able to recall or quote from essays they had just written. In fact, 83.3% of LLM users failed to provide even a single correct quote from their own writing, compared to just 11.1% in both the Search Engine and Brain-only groups.

The study also highlighted what it calls "cognitive offloading" -- outsourcing thinking to AI tools -- which leads to a long-term decline in learning capabilities. While LLMs reduce immediate cognitive load and make tasks feel easier, this reduction comes at the cost of deep engagement and memory formation.
This "offloading" also seemed to lead to what the study calls a 'metacognitive laziness', where users skip the mental work of integrating ideas and reflecting critically on content. The implications are especially important for educators. The study suggests that over-reliance on AI writing tools could weaken students' long-term ability to think critically, recall information, and construct original arguments. As one participant noted in their interview, "I would rather use the internet than ChatGPT -- I can read other people's ideas." Researchers caution that while AI can be a powerful assistant, its use must be deliberate and well-regulated in learning environments. "We demonstrate the pressing matter of a likely decrease in learning skills," the authors concluded, adding that LLMs can make students more productive, but less cognitively present.
[9]
Need ChatGPT's help? It could be making you dumber
MIT research has found heavy use of ChatGPT impairs memory, lowers brain engagement and may hinder learning by replacing critical cognitive processes. Artificial intelligence chatbots such as OpenAI's ChatGPT could be diminishing cognitive ability, according to a recent study by MIT researchers.

In a study by researchers from the Massachusetts Institute of Technology's Media Lab, 54 participants completed essay writing tasks over three sessions using one of three assigned methods and tools: ChatGPT, search engines, or only their brains. In a fourth session, the researchers asked the ChatGPT group to use no tools, and the brain-only group participants were asked to use the LLM.

The results were startling, as they revealed poor memory recall among AI users, with more than 83% of ChatGPT users unable to quote from essays they had written minutes earlier. Co-founder of sales tech agency ColdIQ, Alex Vacca, described the results as "terrifying," adding that AI isn't making us more productive, "it's making us cognitively bankrupt." "You write something, hit save, and your brain has already forgotten it because ChatGPT did the thinking."

The researchers stated that brain connectivity "systematically scaled down" with the amount of external support. "The Brain-only group exhibited the strongest, widest-ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling," they said. The researchers used electroencephalography (EEG) to record brain activity in order to assess participants' cognitive engagement and cognitive load during the tasks.

They discovered that an accumulation of "cognitive debt" could arise from repeated reliance on external systems such as LLMs, which replace the cognitive processes required for independent thinking.
"Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity." The paper, which has yet to be peer reviewed, suggests that the use of LLMs could actually harm learning, especially for younger users. "In this study, we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study," the researchers concluded.

The researchers said that "longitudinal studies" are needed to understand the long-term impact of AI chatbots on the human brain, "before LLMs are recognized as something that is net positive for humans." When Cointelegraph asked ChatGPT for its thoughts on the study, the chatbot replied, "This study doesn't say ChatGPT is inherently harmful -- rather, it warns against overreliance without reflection or effort."
[10]
ChatGPT use linked to cognitive decline: MIT research
ChatGPT can harm an individual's critical thinking over time, a new study suggests. Researchers at MIT's Media Lab asked subjects to write several SAT essays and separated them into three groups -- one using OpenAI's ChatGPT, one using Google's search engine, and one using nothing, which they called the "brain-only" group. Each subject's brain was monitored through electroencephalography (EEG), which measured brain activity across multiple regions.

They discovered that subjects who used ChatGPT over a few months had the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels," according to the study. The study found that the ChatGPT group initially used the large language model, or LLM, to ask structural questions for their essays, but near the end of the study they were more likely to simply copy and paste their essays. Those who used Google's search engine were found to have moderate brain engagement, while the "brain-only" group showed the "strongest, wide-ranging networks."

The findings suggest that using LLMs can harm a user's cognitive function over time, especially in younger users. They come as educators continue to navigate teaching at a time when AI is increasingly accessible for cheating. "What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental," the study's main author, Nataliya Kosmyna, told TIME. "Developing brains are at the highest risk."

However, the use of AI in education doesn't appear to be slowing down. In April, President Trump signed an executive order that aims to incorporate AI into U.S. classrooms.
"The basic idea of this executive order is to ensure that we properly train the workforce of the future by ensuring that school children, young Americans, are adequately trained in AI tools, so that they can be competitive in the economy years from now into the future, as AI becomes a bigger and bigger deal," Will Scharf, White House staff secretary, said at the time.
[11]
What happens when you use ChatGPT to write an essay? See what new study found.
A study published by the Massachusetts Institute of Technology Media Lab analyzed the cognitive function of 54 people writing an essay with only the assistance of OpenAI's ChatGPT, only online search engines, or no outside tools at all. Largely, the study found that those who relied solely on ChatGPT to write their essays had lower levels of brain activity and produced less original writing.

"As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with (large language model) integration in educational and informational contexts," the study states. "While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking and intellectual independence demands a very careful consideration and continued research."

Here's a deeper look at the study and how it was conducted.

How was the study conducted?

A team of MIT researchers, led by MIT Media Lab research scientist Nataliya Kosmyna, studied 54 participants between the ages of 18 and 39. Participants were recruited from MIT, Wellesley College, Harvard, Tufts University and Northeastern University, and were randomly split into three groups of 18 people each.

The first was a large language model (LLM) group, in which participants used only OpenAI's ChatGPT-4o to write their essays. The second group was limited to using only search engines for their research, and the third was prohibited from using any tools; participants in the latter group could only use their minds to write their essays.

Each participant had 20 minutes to write an essay from one of three prompts taken from SAT tests, the study states. Three different options were provided to each group, totaling nine unique prompts.
An example of a prompt available to participants using ChatGPT was about loyalty: "Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn't true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn't true loyalty require us to speak up, even if we must be critical? Does true loyalty require unconditional support?"

As the participants wrote their essays, they were hooked up to a Neuroelectrics Enobio 32 headset, which allowed researchers to collect EEG (electroencephalogram) signals, the brain's electrical activity. Following the sessions, 18 participants returned for a fourth session, in which participants who had previously used ChatGPT to write their essays were required to use no tools, and participants who had used no tools before used ChatGPT, the study states.

Quality of essays: What did the study find?

In addition to analyzing brain activity, the researchers looked at the essays themselves. First and foremost, the essays of participants who used no tools (neither ChatGPT nor search engines) had wider variability in topics, words and sentence structure, the study states. On the other hand, essays written with the help of ChatGPT were more homogeneous.

All of the essays were judged by two English teachers and two AI judges trained by the researchers. The English teachers were not provided background information about the study but were able to identify essays written by AI. "These, often lengthy essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious.
We, as English teachers, perceived these essays as 'soulless,' in a way, as many sentences were empty with regard to content and essays lacked personal nuances," a statement from the teachers, included in the study, reads. As for the AI judges, a judge trained by the researchers to evaluate like the human teachers scored most of the essays a four or above on a scale of five.

Brain activity: What did the study find?

When it came to brain activity, researchers found "robust" evidence that participants who used no writing tools displayed the "strongest, widest-ranging" brain activity, while those who used ChatGPT displayed the weakest. Specifically, the ChatGPT group displayed 55% reduced brain activity, the study states. And though the participants who used only search engines had less overall brain activity than those who used no tools, these participants had a higher level of eye activity than those who used ChatGPT, even though both groups were working from a digital screen.

What's next for future studies?

Further research on the long-term impacts of artificial intelligence chatbots on cognitive activity is needed, the study states. As for this particular study, researchers noted that a larger number of participants from a wider geographical area would be necessary for a more robust study. Writing outside of a traditional educational environment could also provide more insight into how AI affects more generalized tasks.

Greta Cross is a national trending reporter at USA TODAY.
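For readers curious what "brain activity" means in EEG terms: engagement comparisons like these are commonly summarized as power in frequency bands such as theta (4-8 Hz) and alpha (8-12 Hz). The sketch below is a hypothetical illustration of band-power estimation on a synthetic signal, not the study's actual analysis pipeline (which used directed connectivity measures), and all values in it are made up.

```python
import numpy as np

# Hypothetical illustration of EEG band-power estimation -- not the
# study's actual pipeline. A synthetic 10 Hz sine wave stands in for
# one channel of raw EEG data.
fs = 256                              # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)          # 10 seconds of samples
signal = np.sin(2 * np.pi * 10 * t)   # fake alpha-band (8-12 Hz) activity

# Power spectrum via the real FFT
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size

def band_power(lo_hz, hi_hz):
    """Total spectral power between lo_hz and hi_hz."""
    mask = (freqs >= lo_hz) & (freqs < hi_hz)
    return power[mask].sum()

alpha = band_power(8, 12)   # band often linked to internally directed focus
theta = band_power(4, 8)    # band often linked to memory and engagement
print(alpha > theta)        # the synthetic alpha tone dominates
```

A real pipeline would filter artifacts, average over epochs and channels, and (as in this study) go further into directed connectivity between regions, but the band-power idea above is the usual starting point.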
[12]
Is ChatGPT making us dumb? MIT brain scans reveal alarming truth about AI's impact on the human mind
It's quick, it's clever, and it answers almost everything -- no wonder millions around the world rely on ChatGPT. But could this digital genie be dulling our minds with every wish we make? According to a startling new study by scientists at MIT's Media Lab, the answer may be yes. Researchers have found that excessive use of AI tools like ChatGPT could be quietly eroding your memory, critical thinking, and even your brain activity.

Published on arXiv, the study, titled "The Cognitive Cost of Using LLMs," explores how language models -- especially ChatGPT -- affect the brain's ability to think, learn, and retain information. To examine what they call the "cognitive cost" of using large language models (LLMs), MIT researchers tracked 54 students over a four-month period using electroencephalography (EEG) devices to monitor brain activity. The participants were divided into three groups: one used ChatGPT, another relied on Google, and the last used no external help at all -- dubbed the "Brain-only" group.

While the AI-powered group initially showed faster results, the long-term findings were more sobering. Students who depended on ChatGPT for essay writing exhibited poorer memory retention, reduced brain engagement, and lower scores compared to their peers. As the researchers noted, "The LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, and scoring."

Interestingly, students who used Google showed moderate brain activity and generated more thoughtful content than those who leaned on ChatGPT. Meanwhile, those in the Brain-only group had the highest levels of cognitive engagement, producing original ideas and deeper insights. In fact, even when ChatGPT users later attempted to write without assistance, their brain activity remained subdued -- unlike the other groups, who showed increased engagement while adapting to new tools.
This suggests that habitual ChatGPT usage might not just affect how we think, but whether we think at all. The study also points to how this over-reliance on AI encourages mental passivity. While ChatGPT users reported reduced friction in accessing information, this convenience came at a cost. As the researchers explained, "This convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or 'opinions'."

The team also raised red flags about algorithmic bias: what appears as top-ranked content from an AI is often a result of shareholder-driven training data, not necessarily truth or value. This creates a more sophisticated version of the "echo chamber," where your thoughts are subtly shaped -- not by your own reasoning, but by an AI's probabilistic guesses.

As AI tools become more embedded in our everyday tasks -- from writing emails to crafting essays -- this study is a wake-up call for students, educators, and professionals. While tools like ChatGPT are powerful assistants, they should not become cognitive crutches. The researchers caution that as language models continue to evolve, users must remain alert to their potential mental side effects. In a world where convenience is king, critical thinking might just be the first casualty.
[13]
ChatGPT is getting smarter, but excessive use could destroy our...
Not only is AI getting frighteningly smart, but it may be making us dumber as well. Scientists found that students who used ChatGPT to complete essays had poorer cognitive skills than those who relied on just their brains, according to a dystopian new study out of the Massachusetts Institute of Technology (MIT) in Cambridge. "Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone," the researchers wrote, per the Telegraph.

The team had set out to determine the "cognitive cost" of using large language models (LLMs), which have become increasingly omnipresent in every sector of society, including academia. According to a winter survey by the Pew Research Center, approximately 26% of teen students used the AI chatbot to help them with assignments in 2024 -- up from just 13% in 2023.

To determine how using synthetic homework assistants affects the mind, the MIT researchers tasked 54 people with writing several SAT essays, Time Magazine reported. Participants were split into three groups: one that relied on pure brainpower, one that used Google, and a third that enlisted the aid of the now-ubiquitous LLM ChatGPT. Each person was outfitted with an electroencephalography (EEG) device so researchers could monitor their brain activity while completing the task.

They found that the ChatGPT group "performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring," according to the Telegraph. The readings also showed reduced activity in the regions of the brain associated with memory and learning, the authors said, noting that a lot of the "thinking and planning was offloaded." In fact, AI-aided scholars got lazier with each subsequent paper, to the point that by the third essay they were simply typing the prompt into ChatGPT and having it do all the work.
"It was more like, 'Just give me the essay, refine this sentence, edit it, and I'm done,'" said the paper's main author, Nataliya Kosmyna. By contrast, the essayists with no external aid demonstrated the highest levels of neural connectivity, especially in regions of the brain responsible for language comprehension, creativity and memory. The brain-only group was also more engaged and satisfied with their essays, per the study. Interestingly, the Google group showed just slightly lower levels of engagement, but the same amount of recall -- a perhaps troubling prospect given the increasing number of people who dive into research using AI rather than internet search engines. Researchers deduced that "frequent AI tool users often bypass deeper engagement with material, leading to 'skill atrophy' in tasks like brainstorming and problem-solving." That could have long-term ramifications, including "diminished critical inquiry, increased vulnerability to manipulation" and "decreased creativity," the authors said. Fortunately, the findings weren't a total indictment of AI in academia. As a follow-up exam, the scientists asked the ChatGPT group and their brain-only counterparts to rewrite one of their previous essays -- but the AI-assisted participants did so without the chatbot, while the unassisted group could use the cutting-edge tech. Unsurprisingly, the original ChatGPT group didn't recall much info from their papers, indicating either a lack of engagement or an inability to remember it. Meanwhile, the former brain-only group exhibited a marked increase in brain activity across all the aforementioned regions despite using the tool. That suggests if used properly, AI could be a helpful academic tool rather than a cognition-destroying crutch. The warning about AI-induced brain atrophy comes -- somewhat frighteningly -- as the technology is becoming more "intelligent." 
Recently, Chinese researchers found the first-ever evidence that AI models like ChatGPT process information similarly to the human mind -- particularly when it comes to language grouping.
[14]
MIT's ChatGPT study says AI is making you dumber: Here's how
Artificial intelligence has become second nature to us in 2025. From students drafting essays to professionals powering through reports, tools like ChatGPT have woven themselves into the fabric of how we think, work, and create. But a new study out of MIT's Media Lab, led by research scientist Nataliya Kosmyna, is raising an urgent question: what is artificial intelligence doing to our brains?

Over four months, 54 participants were fitted with EEG caps while completing writing tasks with either ChatGPT, Google Search, or nothing but their own minds. The findings, released as a preprint that has yet to undergo peer review, paint a striking picture of how AI might be reshaping human cognition -- and not always for the better. Here are the biggest insights from the study that might change the way we interact with AI, from the classroom to the boardroom.

One of the most jaw-dropping revelations: participants using ChatGPT experienced a 47% drop in neural connectivity -- a steep decline from 79 functional brain connections to just 42. Think of it as your brain's Wi-Fi signal going weak just when you need it most. Meanwhile, participants who tackled the tasks without AI lit up with alpha and theta brain waves, both closely tied to creativity, memory, and deep thinking.

Even more alarming? Over 83% of ChatGPT users were unable to recall key details of the essays they had just written. In short, while AI might make our output look polished, it's doing far less for what's happening under the hood. As Kosmyna puts it, "When the machine starts thinking for us, the brain stops needing to show up."

The productivity gains are undeniable. On average, ChatGPT helped users complete their tasks 60% faster. But that speed comes with a price: a 32% reduction in germane cognitive load, the kind of mental effort tied to understanding, learning, and reflection.
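The connectivity figures quoted above are at least internally consistent: a drop from 79 functional connections to 42 works out to roughly 47%. A quick arithmetic check:

```python
# Sanity check on the figures cited above: a drop from 79 functional
# brain connections to 42 is about a 47% reduction.
before, after = 79, 42
drop = (before - after) / before
print(f"{drop:.1%}")  # prints "46.8%"
```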
The researchers call this "cognitive debt": a metaphorical loan we take out when we outsource our thinking to AI. Eventually, the interest comes due: we become quicker, yes, but not necessarily wiser. The participants who wrote without AI often described a greater sense of satisfaction and engagement, evidence that slower, more effortful work still holds value in an AI-saturated world.

Perhaps the most sobering twist came when long-time ChatGPT users were asked to complete tasks without assistance. The result? Their performance declined more than that of those who had never used AI at all. It was as if their brains had forgotten how to work independently. This finding echoes growing concerns among educators and employers alike: that AI isn't just changing how we work, it may be quietly eroding our ability to think without it.

However, the opposite was also true. Users with strong foundational cognitive skills who used AI sparingly actually saw an increase in brain activity during assisted tasks. The message? AI isn't inherently harmful; it's how we use it that matters.

Despite the red flags, the study offers hope. Participants who had relied only on their own minds or used traditional tools like search engines showed enhanced brain activity when they first began experimenting with ChatGPT. Their prefrontal cortex lit up, and memory retention improved, suggesting that AI, when used as a support, not a substitute, can actually boost cognitive performance.

This is the model the researchers hope to champion: AI as a co-pilot, not an autopilot. It's about balance, letting the machine help without handing over the wheel entirely. As social platforms light up with hot takes and debate, this study cuts through the noise with a clear warning: AI can sharpen us or soften us. The difference lies in how we engage with it.
Will we use tools like ChatGPT to spark new ideas, enhance creativity, and stretch our thinking, or let them think for us entirely? The answer may shape not just our productivity, but the very essence of our minds.
[15]
Is ChatGPT making us lazy? New MIT study raises questions
Researchers urge caution, especially in schools, warning that early AI exposure may harm cognitive development in young minds. Whether it's drafting an email, summarising a long article, or finding just the right words for an essay, ChatGPT has become our go-to tool. It makes our work quick, convenient, and often impressively accurate. But a new study from MIT's Media Lab has raised a serious question: is our growing reliance on AI tools making our brains lazier?

The research was conducted on 54 participants aged between 18 and 39. Split into three groups, one was asked to write an SAT-style essay using ChatGPT, another using Google Search, while a third relied solely on its own brainpower without any digital assistance. As they were writing, researchers monitored their brain activity using EEG headsets, and the findings are not very pleasing for AI enthusiasts.

The group taking help from ChatGPT showed the lowest brain activity across the 32 regions measured. Not only that, but their essays were described as "soulless," and their creativity, attention, and effort declined with each new task. By the final round, many participants were simply pasting prompts into ChatGPT and copying the answers with minimal editing. Their brainwaves reflected this disengagement.

On the other hand, those who wrote without using any tools showed the highest engagement, particularly in terms of imagination, focus, and memory. They also reported feeling more satisfied with their work. Even the group using Google Search performed better cognitively than those relying on ChatGPT, which suggests that navigating through information still stimulates the brain more than using AI.

Lead researcher Nataliya Kosmyna says the findings were urgent enough to release before peer review, especially since AI tools are being considered for use in schools.
"I'm afraid someone will say, 'Let's do GPT kindergarten,'" she said. "That would be detrimental. Developing brains are at the highest risk." What's even worse is that when participants using ChatGPT were later asked to rewrite one of their earlier essays without any help, they struggled to remember what they'd written. "It was efficient," Kosmyna said, "but nothing was integrated into their brains." Researchers conclude that while AI boosts quick productivity, it may hurt long-term learning and brain development.
A new MIT study shows that using AI chatbots like ChatGPT for writing tasks significantly reduces brain activity and cognitive engagement, raising concerns about the impact of AI on learning and critical thinking skills.
A groundbreaking study from the Massachusetts Institute of Technology (MIT) has revealed alarming findings about the cognitive effects of using AI chatbots like ChatGPT for writing tasks. The research, led by Dr. Nataliya Kosmyna from the MIT Media Lab, suggests that relying on large language models (LLMs) significantly reduces brain activity and may lead to poorer fact retention [1].
The study involved 54 Boston-area college students divided into three groups: one using ChatGPT, another using search engines, and a third relying solely on their own cognitive abilities. Participants were connected to electroencephalogram (EEG) headsets while writing essays over four months [2].
Key findings include:

- Brain connectivity "systematically scaled down" with the amount of external support, with the LLM group showing the weakest overall coupling.
- The ChatGPT group showed up to a 55% reduction in measured brain connectivity relative to the brain-only group.
- More than 83% of ChatGPT users were unable to quote from essays they had written minutes earlier.
- Essays written with ChatGPT's help were more homogeneous and were described by English teachers as "soulless."
The study raises significant concerns about the impact of AI tools on learning and critical thinking skills. Dr. Kosmyna emphasized the potential risks of integrating AI too early in the educational process, stating, "I am afraid in six to eight months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental" [3].
The research suggests that relying on AI chatbots may have long-term cognitive consequences, including diminished critical inquiry, increased vulnerability to manipulation, and decreased creativity.
The study's findings contribute to a growing body of evidence suggesting that AI is having a serious and poorly understood impact on users' cognition. Previous research has indicated potential addiction-like behaviors in heavy ChatGPT users and a possible atrophy of critical thinking skills [5].
Dr. Kosmyna and her team suggest an educational model that delays AI integration until learners have engaged in sufficient self-driven cognitive effort. They emphasize the need for active legislation and thorough testing of AI tools before implementation in educational settings [1][4].
As AI continues to permeate various aspects of society, including education, these findings underscore the urgent need for further research and careful consideration of the cognitive impacts of AI tools on users, particularly in learning environments.