8 Sources
[1]
Trading Speed for Depth: Does Using AI for Work Affect Our Confidence?
Be careful delegating your work to that chatbot. A new peer-reviewed study published Thursday by the American Psychological Association found that people who heavily rely on AI tools for work tasks reported feeling less confident in their abilities and less ownership over their work. There has been growing research on how our brains function when we use AI tools. A landmark study from MIT in 2025 found that our brains don't retain as much information or employ necessary critical thinking skills when writing tasks are outsourced to AI chatbots. This new study aimed to understand how human behavior, specifically executive functions -- like strategic planning and decision making -- can change when AI is part of the process.

Sarah Baldeo, the study's author and a Ph.D. candidate in AI and neuroscience at Middlesex University in England, noted in the paper that these findings do not show that AI is harming or causing cognitive decline. Rather, they "highlight variability in how users distribute effort between themselves and AI systems under conditions of convenience and competence." In other words, people who use AI are making conscious trade-offs, and their confidence fluctuates as a result.

The study encouraged nearly 2,000 adults to use AI for a variety of workplace tasks, like prioritizing projects based on deadlines, explaining a strategy and developing plans with incomplete information. It then asked them to self-report their levels of confidence, ownership and AI reliance, including whether they significantly altered the AI-generated outputs.

Overall, confidence varied with AI use. Greater reliance on AI was associated with lower confidence in participants' ability to reason independently. Participants also reported relatively few modifications, meaning they often did not tweak or put their own stamp on what the AI spit out. But those who modified the AI's work reported feeling more confident and more like the author. Men reported higher reliance on AI than women.
The trade-off between speed and depth was one of the main themes participants reported. "I got an answer faster, but I don't think I thought as deeply as I normally would," one of the participants said. This reflects one of the biggest caveats of using AI tools. Chatbots, for example, can produce text quickly, but they don't always have the level of subject matter expertise you need. AI tools can also hallucinate, or make up facts, so AI-generated output needs to be verified before it's used.

The office is one of the main places where people use AI tools. We're moving beyond just chatbots, with agents that can autonomously handle tasks that would've otherwise required a human. But these tools aren't necessarily making our work lives better; one study found they made workdays longer and more unpleasant. As AI becomes increasingly embedded in our work lives, it's important to understand how it's shaping our mental attitudes. Qualities like confidence and ownership of our work are important factors in determining the quality of our work life.
[2]
There's yet another study about how bad AI is for our brains
A group of researchers from across the US and the UK looked into what AI does to our brains, and the results are, in a word, grim. These results were published in a paper called "AI assistance reduces persistence and hurts independent performance," which kind of tells you everything you need to know. "We find that AI assistance improves immediate performance, but it comes at a heavy cognitive cost," the study declares. The researchers went on to state that just ten minutes of using AI made people dependent on the technology, which led to worsening performance and burnout once the tools were removed.

The study followed people who use AI for "reasoning-intensive" cognitive labor. This refers to stuff like writing, coding and brainstorming new ideas, which are some of the most common use cases. The researchers recruited 350 Americans, who were asked to complete some fraction-based equations. Half of the participants were randomly granted access to a specialized chatbot built on OpenAI's GPT-5 for help, and the others had to go it alone. Halfway through the exam, the AI group had their access cut off. This led to a steep decline in correct answers by the AI group and many instances of people simply giving up. This result, in which performance and perseverance both dropped, was repeated in a larger experiment with 670 people. Finally, the scientists performed one last experiment with reading comprehension questions instead of math. The results were more of the same.

"Once the AI is taken away from people, it's not that people are just giving wrong answers. They're also not willing to try without AI," said Rachit Dubey, an assistant professor at the University of California, Los Angeles, and coauthor of the study. "People's persistence drops." Dubey went on to warn that rapid deployment of AI in the education sector could lead to a "generation of learners and people who will not know what they're capable of, and then that will really dilute human innovation and creativity."
The study likens using the technology to the "boiling frog" effect, in which "sustained AI use erodes the motivation and persistence that drive long-term learning." These effects accumulate, and "by the time they are visible, they will be difficult to reverse." There are two caveats here. The study has yet to be peer-reviewed. Also, researchers found one tiny bright spot regarding the use of AI. People who used AI tools for hints and clarification had a much easier time once the chatbot was removed when compared to those who used the bot to essentially prompt the answers.

This is just the latest study trying to get to the bottom of what AI is doing to our collective noggins. Some have dubbed the effect on people who rely heavily on the tools "AI brain fry." To that end, researchers discovered that employees who use AI can actually end up with longer, more unpleasant workdays than those old-fashioned luddites. The results are even starker in the world of education, where studies have examined the effects of AI use in school and on kids who rely on chatbots.
[3]
Concern Grows That AI Is Damaging Users' Cognitive Abilities
Last year, a team of researchers led by MIT research scientist Nataliya Kosmyna used electroencephalograms to monitor the brains of students while they were writing short, deliberately open-ended essays. They split the 54 participants into three groups: one was told to use ChatGPT, one could search for information on Google (minus AI-generated summaries), and another had to rely on their own knowledge. As detailed in a resulting yet-to-be-peer-reviewed paper, each group was tasked with writing one essay per month for three months, while a subset of each group was asked to switch to or away from using ChatGPT for a fourth month. The researchers' EEG findings were ominous: the students using ChatGPT "consistently underperformed at neural, linguistic, and behavioral levels," they found, and even got lazier with each consecutive essay. "The brain didn't fall asleep, but there was much less activation in the areas corresponding to creativity and to processing information," Kosmyna told the BBC in an interview this week. Participants using ChatGPT also struggled to quote their own essays, dovetailing with other research that has found information recall could be negatively affected by the use of AI. There's also the question of originality: a teacher who was involved in the study even asked Kosmyna whether students who were using ChatGPT "were sitting next to each other because the essays were so similar."

The results were an early warning of an alarming phenomenon that researchers are only starting to explore: that the widespread use of AI chatbots could be allowing us to offload much of our thinking, slowly deteriorating our cognitive skills. Case in point, another recent study claimed to have found the first causal evidence that leaning on these tools can impair our intellectual abilities, in what the research team called a "boiling frog" effect.
Another recent paper by researchers at the University of Pennsylvania found that participants who were asked to answer a variety of reasoning and knowledge-based questions, and were given the option to use ChatGPT, predominantly chose to use the chatbot to answer the questions -- in what the scientists termed "cognitive surrender." AI users are also sounding the alarm, anecdotally complaining that AI tools are starting to erode their creativity or ability to articulate nuanced ideas. Many teens feel that the tech is addictive and eating away at their cognitive abilities. Add it all up, and there's still a ton we don't understand about AI's effects on the mind. But it's an urgent area of inquiry, and there are plenty of reasons to be wary of cognitive shortcuts -- something that, as Kosmyna told the BBC, "our brains love."
[4]
Letting AI Do Your Work Erodes Your Confidence, According to a New Study
But that pattern wasn't inevitable. Participants who pushed back -- editing, questioning, or rejecting AI-generated suggestions -- reported the opposite: greater confidence and a stronger sense that the final output was truly theirs. The findings suggest AI isn't inherently undermining our abilities. Instead, it may be subtly reshaping how we experience our own thinking. "Generative AI can lead to cognitive decline or cognitive evolution -- it depends on your interaction style," says study author Sarah Baldeo, a PhD candidate in AI and neuroscience at Middlesex University in England and author of 100 Ways to Future-Proof Your Brain in The Age of AI. "When we look at brain activity contingent on how people choose to use the tool, we can see increases or decreases. It really doesn't have to do with the tool itself."
[5]
Study Finds AI Use Eats Away at Users' Confidence in Their Own Brains
Researchers are growing increasingly suspicious that outsourcing intellectual tasks to AI is causing a range of cognitive deficits. A new study adds a wrinkle worth paying attention to: use of the bots can eat away at users' faith in their own abilities, resulting in lower confidence in their own independent reasoning. The new peer-reviewed study, published in the journal Technology, Mind and Behavior and flagged by TIME, found that those who showed a heavy reliance on AI were more likely to admit chatbots were "thinking" for them -- and seemed to show decreased confidence in their own ideas. By the same token, participants who took control over their AI's output by editing, questioning, or scrapping it showed greater confidence and a stronger sense of ownership over the final output, even though the tools used were the same.

As the study's author Sarah Baldeo -- a PhD candidate in AI and neuroscience at Middlesex University -- told TIME, the ultimate cognitive effect "depends on your interaction style." "When we look at brain activity contingent on how people choose to use the tool, we can see increases or decreases," Baldeo said. "It really doesn't have to do with the tool itself."

That finding lines up with the "boiling frog" study we covered earlier this week. That yet-to-be-peer-reviewed paper, by researchers at MIT and Carnegie Mellon, claimed to have found the first causal evidence that AI can rapidly degrade users' intellectual abilities, particularly when we use the tech for "reasoning-intensive" work. During that study, participants in the experimental group were given AI access to complete a series of equations. The researchers then pulled the plug in the middle of their work, forcing them to continue without it. Those who had been abandoned by their AI chatbots saw rapid declines in reasoning ability, as well as a swift drop in willingness to follow through and complete the math tasks.
Both studies converge on the same core mechanism: how you use AI is an important factor in determining whether it harms your cognitive ability. Basically, they argue that offloading all of your work to the machine degrades your capacity for independent reasoning, while using AI as a supplement may help to preserve it. Whether you're measuring confidence or raw reasoning, both pieces of research seem to pose an important question: are you using AI to help you think, or is AI doing the thinking for you?
[6]
AI Use Appears to Have a "Boiling Frog" Effect on Human Cognition, New Study Warns
In a new study, researchers claim to provide the first causal evidence that leaning on AI to assist with "reasoning-intensive" cognitive labor -- mental tasks ranging from writing to studying to coding to simply brainstorming new ideas -- can rapidly impair users' intellectual ability and willingness to persist despite difficulty. "We find that AI assistance improves immediate performance, but it comes at a heavy cognitive cost," the study declares of its findings. "After just [about] 10 minutes of AI-assisted problem-solving, people who lost access to the AI performed worse and gave up more frequently than those who never used it." The study, which was conducted by a multidisciplinary cohort of scientists from across the United States and United Kingdom, has yet to be peer-reviewed. But it builds on a growing body of research suggesting that extensive AI use can distort and dampen users' thinking and independence, and as experts work to understand the impacts of widely used chatbots on people as they unfold in real time, they're warning that outsourcing cognitive tasks to AI tools could put humans in a "boiling frog" conundrum -- in which an unwitting, bit-by-bit erosion of our cognitive "muscles" leads to formidable challenges in the long term. "If sustained AI use erodes the motivation and persistence that drive long-term learning, these effects will accumulate over years, and by the time they are visible, they will be difficult to reverse," the study urges. "This is analogous to the 'boiling frog' effect, where each incremental act feels costless, until the cumulative effect becomes overwhelming to address." To conduct the study, the researchers recruited a cohort of about 350 Americans, who were asked to try to complete a brief series of fraction equations.
A little more than half of participants were randomly granted access to a chatbot -- a specialized bot built on OpenAI's GPT-5 and provided with the specific answers for each question on the brief exam -- for help. Everyone else was funneled into an AI-free control group. At first, the results revealed, the chatbot proved expedient in helping AI-aided participants breeze through the test. But halfway through the short exam, access to the AI was suddenly cut off -- at which point participants' ability to work through the reasoning questions without AI assistance quickly declined, as did their will to keep working at a problem when the going got tough. For a follow-up experiment, the researchers recruited another, larger group of nearly 670 participants. They were once again split into two roughly equal halves and asked to complete a brief mathematical reasoning test, with one group given access to a chatbot assistant -- only to once again be suddenly abandoned by their AI companion, leaving them to cognitively fend for themselves. The results were pretty much the same: performance dropped, as did perseverance. These same outcomes persisted once again in a final experiment, in which about 200 more participants were asked to complete a brief series of reading comprehension questions, showing that such results aren't simply limited to math problems. "People's persistence drops," said University of California, Los Angeles assistant professor Rachit Dubey, a computational cognitive scientist who coauthored the study alongside peers from the Massachusetts Institute of Technology, Carnegie Mellon University, and the University of Oxford, in an interview with Futurism. "Once the AI is taken away from people, it's not that people are just giving wrong answers. They're also not willing to try without AI." One bright spot: how participants used AI appeared to make a difference for individual outcomes, according to the research.
Those who self-reported that they essentially prompted the chatbot to cough up the answers unsurprisingly had a worse time once the AI rug was pulled. Participants who instead said that they asked the chatbot for hints or clarification -- as opposed to outright cheating -- appeared to be better off sans AI assistance. Dubey is concerned that leaning too heavily on chatbots to replace cognitive labor could cause people to become more impatient, and even create the conditions for over-reliance on AI to function like an addiction. Most of all, though, he says he worries about how AI reliance will transform individuals' sense of confidence and worth as they struggle to think through problems independently. "The most important thing I learned in college is the value of hard work... if I work hard, I'm capable of doing a lot of things," Dubey reflected, noting that schools and communities should think very carefully about "blindly" integrating chatbots into educational programs. "These are very important core human elements that we learned throughout our childhood, in high school and college years." "If we're offloading to AI at scale for everything and anything, what will it do to our own beliefs about our own selves?" Dubey continued, adding that "practice makes you better in many domains, and that's what AI will take away from you... that's what I'm most worried about. We will have a generation of learners and people who will not know what they're capable of, and then that will really dilute human innovation and creativity." And as the researchers seek to expand their research into longer-term experiments, they're challenging folks across industries to "think about optimizing not just what people can do with AI," as they write in the study, "but what they can do without it."
[7]
Studies Suggest AI Use Leads to Dishonest Behaviour, Reduces Cognitive Capabilities
Both these research efforts need to be taken into account as part of AI guardrails while formulating policies at the global level and across countries. Two separate studies conducted by a research institute and a group of academic institutions have suggested that people using AI for daily work tend to lie and cheat more, while using it for "reasoning-intensive cognitive labour" can impair users' intellectual ability and lead them to give up easily on tasks. "Using AI creates a convenient moral distance between people and their actions -- it can induce them to request behaviours they wouldn't necessarily engage in themselves, nor potentially request from other humans," says Zoe Rahwan, who co-authored one of the studies at the Max Planck Institute for Human Development in Berlin, Germany. The other study, titled "AI Assistance Reduces Persistence and Hurts Independent Performance" and conducted by researchers from Carnegie Mellon University, MIT, the University of Oxford, and the University of California, Los Angeles, says AI assistance improves immediate performance but at a heavy cognitive cost. "We find that AI assistance improves immediate performance, but it comes at a heavy cognitive cost," the study declares of its findings. "After just [about] 10 minutes of AI-assisted problem-solving, people who lost access to the AI performed worse and gave up more frequently than those who never used it," the researchers say in the report. Of course, AI users being dishonest is not news, given the instances of students using it to cheat in examinations and lawyers turning in fake AI-generated citations, cases that have been widely reported by the media. In a statement, the German team says "research on behavioural science shows that people tend to act dishonestly when they distance themselves from consequences." In this research, scientists conducted 13 tests on 8,000 participants to measure the honesty levels of people when they instruct AI to perform an action.
It was observed that participants were honest in a game of rolling dice when AI wasn't involved. From 95% of participants behaving honestly, the figure dropped to 75% when people used AI models to report dice numbers. Things got worse when the participants were given the opportunity to manipulate an AI model with different data sets that either reported dice numbers accurately every time, some of the time, or simply gave out the maximum numbers each time the dice was rolled. The report from the study has since been published by Nature. "Our results establish that people are more likely to request unethical behaviour from machines than to engage in the same unethical behaviour themselves," the paper said, while raising three key observations after the experiment.

* Delegation to AI can induce dishonesty: When people delegated tasks to machine agents, whether voluntarily or in a forced manner, they were more likely to cheat. Dishonesty varied with the way in which they gave instructions, with lower rates seen for rule-setting and higher rates for goal-setting.
* Machines follow unethical commands more often: Compliance with fully unethical instructions is another, novel, risk the researchers identified for AI delegation. In experiments with large language models, namely GPT-4, GPT-4o, Claude 3.5 Sonnet, and Llama 3.3, machines more frequently complied with such unethical instructions (58%-98%) than humans did (25%-40%).
* Technical safeguards are inadequate: Pre-existing LLM safeguards were largely ineffective at deterring unethical behaviour. The researchers tried a range of guardrail strategies and found that prohibitions on dishonesty must be highly specific to be effective. These, however, may not be practicable. Scalable, reliable safeguards and clear legal and societal frameworks are still lacking.

"These studies make a key contribution to the debate on AI ethics, especially in light of increasing automation in everyday life and the workplace.
It highlights the importance of consciously designing delegation interfaces -- and building adequate safeguards in the age of Agentic AI," the statement concluded. The second study, conducted by a multidisciplinary cohort of scientists from across the US and the UK, is yet to be peer-reviewed. However, it builds on earlier research suggesting that growing AI use results in cognitive atrophy, distorting and dampening a user's thinking, curtailing their independence, and potentially lowering self-esteem. "If sustained AI use erodes the motivation and persistence that drive long-term learning, these effects will accumulate over years, and by the time they are visible, they will be difficult to reverse. This is analogous to the 'boiling frog' effect, where each incremental act feels costless, until the cumulative effect becomes overwhelming to address," says the study. In this case, researchers recruited about 350 people who were asked to complete a series of fraction equations. A little more than half of them were randomly given access to a chatbot built on OpenAI's GPT-5 for help, while the rest were part of an AI-free group. The chatbot helped the participants breeze through the first half of the test, after which their access to AI was cut off. At this point, participants' ability to work through the reasoning questions without assistance declined sharply, and they soon gave up when problems got tougher. Details of the experiments are available on GitHub. As a follow-up, the researchers recruited a larger group of 670 participants and once again split them into two groups before giving them another mathematical reasoning test. Once again, the group that began with AI support and was left without it midway saw performance drop and perseverance decline very quickly.
The authors of the study noted that people's persistence drops once AI is taken away. It is not that they are giving the wrong answers; they are just not willing to try without AI. Which should be good news for AI companies like OpenAI, Anthropic, Perplexity and Google, which are collectively hoping to reap the rewards of human lethargy at some future date. However, there was one bright spot in what was otherwise a cause for real concern. Participants who self-reported that they prompted the chatbot for outright answers had the worst time when support was denied. Those who sought hints and clarifications from the chatbot, as against outright answers, seemed to be better off when the AI assistance was removed. The researchers note in conclusion that relying very heavily on chatbots to replace cognitive labour might result in people becoming impatient, or even create conditions for over-reliance on AI to function like an addiction. Of course, the loss of mental wellness resulting from a loss of confidence without AI support could be a major factor if unbridled AI use continues. The report says the rapid rise of AI chatbots promises immediate and effective help with reasoning-intensive tasks such as studying, writing, coding, and brainstorming. But what happens to users' own abilities when the AI is not available is the question that needs to be asked. If AI is being relied on at scale for everything and anything, what will it do to our beliefs about ourselves? The entire premise of practice making humans better is being taken away from people's lives. How the next generation of learners will tackle this, and whether they will ever realise their true potential or their capacity to handle challenges, is the scary part. What's more, how will the future journey of human innovation and creativity pan out?
And as the researchers seek to expand their research into longer-term experiments, they're challenging folks across industries to "think about optimizing not just what people can do with AI," as they write in the study, "but what they can do without it," the report said.
[8]
Heavy AI usage may quietly weaken your thinking, says report
Experts say relying too much on AI can reduce focus and confidence. AI has become something many of us use on a daily basis, whether it be to do research, generate images, or create content. While there are many upsides to using AI in your everyday life, a new study has raised concerns about how artificial intelligence may be shaping the way people think, learn, and solve problems. The study finds that even short periods of AI use can affect how people perform once the tool is no longer available. Experts suggest that what appears to be harmless assistance could be quietly reducing your ability to think independently and stay engaged with challenging tasks over time.

It is easy to rely on AI when you're short on time, as it delivers fast answers. However, over time you may notice yourself reaching for an AI assistant for everything, even for simple tasks like how to turn off your laptop. Before these AI tools existed, you would have searched the internet or watched a YouTube tutorial, but now the first instinct is to open ChatGPT or another AI assistant. While using an AI tool can feel more efficient in the moment, you may be subtly losing your patience, and this becomes clear when you have to solve something on your own without the help of AI. That slight hesitation, or the urge to give up sooner, reflects the kind of shift the study is pointing to, making the concern feel less abstract and more personal.

Researchers call this "cognitive offloading": a situation where people depend on machines to do their thinking. The study, carried out in the United States and the United Kingdom, looked at how people solve tasks that need careful thinking, like maths problems and reading passages, with and without AI.
In controlled experiments, people did better when they used an AI assistant. But when the AI help was taken away, their performance dropped sharply. Many people made more mistakes and skipped questions more quickly, showing not only weaker performance but also less effort to keep trying. In one experiment with about 350 people solving fraction problems, participants did better when they had external help from AI tools. However, as soon as the AI was taken away, about halfway through the experiment, their performance got worse. Another experiment with around 670 people showed the same result. Nor is this limited to maths: when people who had used AI tools extensively were asked to perform reading tasks, they found it harder to continue on their own. Experts say that while AI can help us work faster and more efficiently, using it too much may reduce our focus and confidence, and can also affect our ability to solve problems in our individual capacity. It is important to note that the study has not yet been peer-reviewed, but it still shows the need to balance AI use with keeping our thinking skills strong.
Recent research suggests that heavy reliance on AI tools is associated with lower confidence in independent reasoning and may degrade cognitive abilities. Studies involving nearly 2,000 participants found that people who heavily depend on AI for work tasks report lower confidence and less ownership over their output, while those who actively edit AI-generated content report greater confidence and a stronger sense of authorship.
A peer-reviewed study published by the American Psychological Association reveals troubling patterns about how AI use affects cognitive abilities and user confidence. The research, led by Sarah Baldeo, a Ph.D. candidate in AI and neuroscience at Middlesex University in England, examined nearly 2,000 adults who used AI tools for workplace tasks like strategic planning, prioritizing projects, and developing plans with incomplete information [1]. Participants who showed heavy reliance on AI reported lower confidence in their ability to reason independently and felt diminished ownership over their work. The psychological impact of using AI extends beyond simple task completion -- it fundamentally alters how people perceive their own capabilities.
The study asked participants to self-report their levels of confidence, ownership, and AI reliance, including whether they significantly modified AI-generated outputs. A striking pattern emerged: those who relied heavily on AI tools made relatively few modifications to what chatbots produced, essentially accepting the output without putting their own stamp on it [1]. Men reported higher reliance on AI than women, suggesting gender differences in how people interact with these technologies.

Separate research from scientists across the US and UK documented an even more alarming cognitive impact of AI. Their study, titled "AI assistance reduces persistence and hurts independent performance," found that just ten minutes of using AI made people dependent on the technology, leading to diminished performance and burnout once the tools were removed [2]. The researchers recruited 350 Americans to complete fraction-based equations, with half randomly granted access to a specialized chatbot built on OpenAI's GPT-5. When AI access was cut off halfway through, the AI group experienced a steep decline in correct answers, and many simply gave up.

"Once the AI is taken away from people, it's not that people are just giving wrong answers. They're also not willing to try without AI," explained Rachit Dubey, an assistant professor at the University of California, Los Angeles, and coauthor of the study [2]. The pattern repeated in a larger experiment with 670 people and again with reading comprehension questions, suggesting the effect transcends specific task types.

Research from MIT adds another dimension to understanding how offloading thinking to AI impacts our brains. MIT research scientist Nataliya Kosmyna used electroencephalograms to monitor 54 students' brains while they wrote essays. Students were divided into three groups: one using ChatGPT, one using Google search, and one relying solely on their own knowledge [3]. The findings were stark -- students using ChatGPT "consistently underperformed at neural, linguistic, and behavioral levels" and became lazier with each consecutive essay.
"The brain didn't fall asleep, but there was much less activation in the areas corresponding to creativity and to processing information," Kosmyna told the BBC [3]. Students using ChatGPT also struggled to quote their own essays, indicating impaired information recall. The research team likened sustained AI use to the Google effect, warning that these effects accumulate and "by the time they are visible, they will be difficult to reverse" [2].

The findings aren't entirely pessimistic. Baldeo's research revealed that the cognitive impact of AI depends critically on interaction style. Participants who actively engaged with AI-generated content, editing, questioning, or rejecting its suggestions, reported the opposite effect: greater confidence and a stronger sense that the final output was truly theirs [4]. "Generative AI can lead to cognitive decline or cognitive evolution—it depends on your interaction style," Baldeo explained [4].

This distinction matters for understanding dependency and cognitive surrender. Research from the University of Pennsylvania found that when participants were given the option to use ChatGPT for reasoning and knowledge-based questions, they predominantly chose the chatbot in what scientists termed "cognitive surrender" [3]. The key variable appears to be whether AI is helping you think or doing the thinking for you [5].
One of the main themes emerging from Baldeo's research centers on the trade-off participants reported between speed and depth. "I got an answer faster, but I don't think I thought as deeply as I normally would," one participant noted [1]. This reflects a fundamental caveat of AI tools: chatbots can produce text quickly, but the output doesn't always reflect the required subject matter expertise and can hallucinate or fabricate facts.

As AI becomes increasingly embedded in workplace environments, moving beyond chatbots to autonomous agents handling tasks that previously required human judgment, understanding these cognitive shifts becomes critical. Studies have found that AI tools can make workdays longer and more unpleasant rather than improving work life [1]. Qualities like confidence and ownership over work are essential factors in determining work quality and employee wellbeing.

Dubey warned that rapid deployment of AI in education could produce "a generation of learners and people who will not know what they're capable of, and then that will really dilute human innovation and creativity" [2]. Anecdotal reports from AI users support these concerns, with many complaining that AI tools erode their creativity and their ability to articulate nuanced ideas. Teens in particular report feeling that the technology is addictive and eating away at their cognitive abilities [3].

Baldeo's findings emphasize that AI isn't inherently harmful or causing cognitive decline. Rather, the research "highlights variability in how users distribute effort between themselves and AI systems under conditions of convenience and competence," meaning people using AI make conscious trade-offs, and their confidence fluctuates as a result [1]. The challenge ahead involves developing interaction patterns that preserve independent reasoning while leveraging AI's capabilities, a balance that will shape how well we maintain our cognitive abilities in an AI-saturated future.
Summarized by Navi • 10 Nov 2025 • Science and Research