Curated by THEOUTPOST
On Tue, 11 Feb, 12:08 AM UTC
17 Sources
[1]
AI: The Silent Thief of Our Critical Thinking Skills?
As technology continues to evolve at an extraordinary pace, AI-powered tools are revolutionizing modern workspaces by enhancing efficiency and accelerating tasks. But amid this transformation, concerns grow about whether we are compromising our critical thinking abilities. A recent study by Microsoft and Carnegie Mellon University raises a red flag about how AI dependence is reshaping problem-solving skills.

The study, which surveyed 319 knowledge workers, reveals a striking pattern. When workers trust AI, they scrutinize results less. They assume AI-generated insights are accurate without questioning them. This habit, over time, can weaken independent thinking and analytical abilities. Confidence in AI often translates to less effort in verification. On the other hand, professionals who believe in their expertise tend to challenge AI-generated information, leading to better decision-making. The paradox? AI can boost efficiency, but unchecked reliance may dilute cognitive engagement.
[2]
Is AI making us stupider? Maybe, according to one of the world's biggest AI companies
A team of researchers from Microsoft and Carnegie Mellon University in the United States say that not doing much thinking, and instead relying on devices like mobiles, calculators or even artificial intelligence (AI), could be making us humans "dumber".

There is only so much thinking most of us can do in our heads. Try dividing 16,951 by 67 without reaching for a pen and paper. Or a calculator. Try doing the weekly shopping without a list on the back of last week's receipt. Or on your phone. By relying on these devices to help make our lives easier, are we making ourselves smarter or dumber? Have we traded efficiency gains for inching ever closer to idiocy as a species?

This question is especially important to consider with regard to generative artificial intelligence (AI) technology such as ChatGPT, an AI chatbot owned by tech company OpenAI, which at the time of writing is used by 300 million people each week. According to a recent paper by a team of researchers from Microsoft and Carnegie Mellon University in the United States, the answer might be yes. But there's more to the story.

Thinking well

The researchers assessed how users perceive the effect of generative AI on their own critical thinking. Generally speaking, critical thinking has to do with thinking well. One way we do this is by judging our own thinking processes against established norms and methods of good reasoning. These norms include values such as precision, clarity, accuracy, breadth, depth, relevance, significance and cogency of arguments. Other factors that can affect quality of thinking include the influence of our existing world views, cognitive biases, and reliance on incomplete or inaccurate mental models.

The authors of the recent study adopt a definition of critical thinking developed by American educational psychologist Benjamin Bloom and colleagues in 1956. It's not really a definition at all.
Rather it's a hierarchical way to categorise cognitive skills, including recall of information, comprehension, application, analysis, synthesis and evaluation. The authors state they prefer this categorisation, also known as a "taxonomy", because it's simple and easy to apply. However, since it was devised it has fallen out of favour and has been discredited by Robert Marzano and indeed by Bloom himself.

In particular, it assumes there is a hierarchy of cognitive skills in which so-called "higher-order" skills are built upon "lower-order" skills. This does not hold on logical or evidence-based grounds. For example, evaluation, usually seen as a culminating or higher-order process, can be the beginning of inquiry or very easy to perform in some contexts. It is more the context than the cognition that determines the sophistication of thinking.

An issue with using this taxonomy in the study is that many generative AI products also seem to use it to guide their own output. So you could interpret this study as testing whether generative AI, by the way it's designed, is effective at framing how users think about critical thinking. Also missing from Bloom's taxonomy is a fundamental aspect of critical thinking: the fact that the critical thinker not only performs these and many other cognitive skills, but performs them well. They do this because they have an overarching concern for the truth, which is something AI systems do not have.

Higher confidence in AI equals less critical thinking

Research published earlier this year revealed "a significant negative correlation between frequent AI tool usage and critical thinking abilities". The new study further explores this idea. It surveyed 319 knowledge workers such as healthcare practitioners, educators and engineers who discussed 936 tasks they conducted with the help of generative AI.
Interestingly, the study found users consider themselves to use critical thinking less in the execution of a task than in providing oversight at the verification and editing stages. In high-stakes work environments, the desire to produce high-quality work combined with fear of reprisals serves as a powerful motivator for users to engage their critical thinking in reviewing the outputs of AI. But overall, participants believe the increases in efficiency more than compensate for the effort expended in providing such oversight.

The study found people who had higher confidence in AI generally displayed less critical thinking, while people with higher confidence in themselves tended to display more critical thinking. This suggests generative AI does not harm one's critical thinking - provided one has it to begin with. Problematically, the study relied too much on self-reporting, which can be subject to a range of biases and interpretation issues.

Putting this aside, critical thinking was defined by users as "setting clear goals, refining prompts, and assessing generated content to meet specific criteria and standards". "Criteria and standards" here refer more to the purposes of the task than to the purposes of critical thinking. For example, an output meets the criteria if it "complies with their queries", and the standards if the "generated artefact is functional" for the workplace. This raises the question of whether the study was really measuring critical thinking at all.

Becoming a critical thinker

Implicit in the new study is the idea that exercising critical thinking at the oversight stage is at least better than an unreflective over-reliance on generative AI. The authors recommend generative AI developers add features to trigger users' critical oversight. But is this enough? Critical thinking is needed at every stage before and while using AI - when formulating questions and hypotheses to be tested, and when interrogating outputs for bias and accuracy.
The only way to ensure generative AI does not harm your critical thinking is to become a critical thinker before you use it. Becoming a critical thinker requires identifying and challenging unstated assumptions behind claims and evaluating diverse perspectives. It also requires practising systematic and methodical reasoning and reasoning collaboratively to test your ideas and thinking with others. Chalk and chalkboards made us better at mathematics. Can generative AI make us better at critical thinking? Maybe - if we are careful, we might be able to use generative AI to challenge ourselves and augment our critical thinking. But in the meantime, there are always steps we can, and should, take to improve our critical thinking instead of letting an AI do the thinking for us.
[3]
Are we losing our critical thinking skills to AI? New Microsoft study raises red flags
As more workplaces embrace emerging technology, research warns that prolonged use of AI can erode our cognitive abilities. Could the widespread adoption of generative AI (Gen AI) tools in the workplace erode our cognitive abilities? New research suggests that worrying reality is a possibility. A paper from Microsoft and Carnegie Mellon University researchers investigated the use of Gen AI tools by knowledge workers and explored how automation contributes to the deterioration of cognitive faculties.

The researchers sampled 319 knowledge workers across professions in computer science, education, entertainment/media, administration, and financial and business services. The participants shared three real-world examples of using Gen AI in their work. Participants were also asked to provide examples for each task type (creation, information, and advice) to increase the variety of examples collected. In total, 936 first-hand examples of using Gen AI at work were collected.

Participants also completed a survey assessing their confidence in Gen AI-assisted work tasks compared to their perceived confidence in finishing the same responsibilities without Gen AI. According to the researchers, "higher confidence in Gen AI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking." Furthermore, the researchers argued that mechanizing routine tasks via AI and "leaving exception-handling to the human user" deprives workers of practicing their judgment. Ultimately, this process weakens their cognitive musculature, "leaving them atrophied and unprepared when the exceptions do arise."
Unsurprisingly, OpenAI's ChatGPT and Microsoft's Copilot were the most-used tools. Some of the tasks cited in the paper include a trader using ChatGPT to "generate recommendations for new resources and strategies" to hone their skills, a lawyer using ChatGPT to find relevant laws for a particular case, a teacher using DALL-E to create images for a presentation about washing hands at school, and a nurse who "verified a ChatGPT-generated educational pamphlet for newly diagnosed diabetic patients."

The researchers found that "knowledge workers view critical thinking as ensuring the objectives and quality of their work." When workers partake in critical thinking, "their confidence in themselves doing and evaluating the task, and their general tendency to reflect on work strongly correlated with their perceived enaction of critical thinking." Generally, workers said they had more confidence in Gen AI-assisted tasks when they believed they enacted critical thinking. When users had less confidence in AI-assisted responses, they used critical thinking to evaluate and improve the quality of the AI's output. About a third (36%) of surveyees used critical-thinking skills to mitigate potential adverse outcomes from using AI.

However, while "it is possible that fostering worker's domain expertise and associated self-confidence may result in improved critical thinking when using Gen AI," the researchers wrote, users with access to Gen AI tools were also more likely to produce "a less diverse set of outcomes for the same task, compared to those without." The research highlights a shift in knowledge workers' cognitive effort. Increasingly, professionals only intervene when AI outputs are not up to par.
The researchers suggested that an overreliance on Gen AI in knowledge work is shifting AI use cases "from task execution to oversight." The researchers said: "Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving."
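The study's headline finding - higher confidence in Gen AI going together with less self-reported critical thinking - is, statistically, a negative correlation between two sets of survey scores. As a minimal sketch of what such an association looks like (the `pearson_r` helper and all the numbers below are invented for illustration; they are not the study's data):

```python
# Hypothetical illustration only - not the study's data or method.
# Pearson correlation between self-reported confidence in Gen AI and
# self-reported critical thinking, on a made-up 1-7 survey scale.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented scores for six workers: as confidence in the AI rises,
# self-reported critical thinking falls.
confidence_in_ai  = [2, 3, 4, 5, 6, 7]
critical_thinking = [6, 6, 5, 4, 3, 2]

r = pearson_r(confidence_in_ai, critical_thinking)
print(round(r, 3))  # strongly negative: near -1
```

A correlation like this is exactly what the articles below caution about over-interpreting: it says the two self-reports move in opposite directions across workers, not that AI use causes the decline.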
[4]
Microsoft Study Finds AI Makes Human Cognition "Atrophied and Unprepared"
Researchers find that the more people use AI at their job, the less critical thinking they use. A new paper from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can "result in the deterioration of cognitive faculties that ought to be preserved." "[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise," the researchers wrote.

The researchers recruited 319 knowledge workers for the study, who self-reported 936 first-hand examples of using generative AI in their job, and asked them to complete a survey about how they use generative AI (including what tools and prompts), how confident they are in the generative AI tools' ability to do the specific work task, how confident they are in evaluating the AI's output, and how confident they are in their ability to complete the same work task without the AI tool.

Some tasks cited in the paper include a teacher using the AI image generator DALL-E to create images for a presentation about hand washing at school, a commodities trader using ChatGPT to "generate recommendations for new resources and strategies to explore to hone my trading skills," and a nurse who "verified a ChatGPT-generated educational pamphlet for newly diagnosed diabetic patients."

Overall, these workers self-reported that the more confidence they had in AI doing the task, the less they engaged in "their perceived enaction of critical thinking." When users had less confidence in the AI's output, they used more critical thinking and had more confidence in their ability to evaluate and improve the quality of the AI's output and mitigate the consequences of AI responses.
"The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI," the researchers wrote. "Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving." The researchers also found that "users with access to GenAI tools produce a less diverse set of outcomes for the same task, compared to those without. This tendency for convergence reflects a lack of personal, contextualised, critical and reflective judgement of AI output and thus can be interpreted as a deterioration of critical thinking." The researchers also noted some unsurprising conditions that make workers use more or less critical thinking and pay attention to the quality of the AI outputs. For example, workers who felt crunched for time used less critical thinking, while workers in "high-stakes scenarios and workplaces" who were worried about harm caused by faulty outputs used more critical thinking. So, does this mean AI is making us dumb, is inherently bad, and should be abolished to save humanity's collective intelligence from being atrophied? That's an understandable response to evidence suggesting that AI tools are reducing critical thinking among nurses, teachers, and commodity traders, but the researchers' perspective is not that simple. As they correctly point out, humanity has a long history of "offloading" cognitive tasks to new technologies as they emerge and that people are always worried these technologies will destroy human intelligence. "Generative AI tools [...] 
are the latest in a long line of technologies that raise questions about their impact on the quality of human thought, a line that includes writing (objected to by Socrates), printing (objected to by Trithemius), calculators (objected to by teachers of arithmetic), and the Internet," the researchers wrote. "Such consternation is not unfounded. Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved."

I, for example, am old enough to remember a time when I memorized the phone numbers of many friends and family members. The only number I remember now that all those contacts are saved on my phone is my own. I also remember when I first moved to San Francisco for college I bought a little pocket map and eventually learned to navigate the city and which Muni buses to take where. There are very few places I can get to today without Google Maps. I don't feel particularly dumb for outsourcing my brain's phonebook to a digital contacts list, but the same kind of outsourcing could be dangerous in a critical job where someone is over-relying on AI tools, stops using critical thinking, and incorporates bad outputs into their work.

As one of the biggest tech companies in the world, and the biggest investor in OpenAI, Microsoft is pot committed to the rapid development of generative AI tools, so unsurprisingly the researchers here have some thoughts about how to develop AI tools without making us all incredibly dumb. To avoid that situation, the researchers suggest developing AI tools with this problem in mind and designing them so they motivate users to use critical thinking. "GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques," the researchers wrote.
"The tool could help develop specific critical thinking skills, such as analysing arguments, or cross-referencing facts against authoritative sources. This would align with the motivation enhancing approach of positioning AI as a partner in skill development."
[5]
New Research Suggests Overreliance on AI Could Hinder Critical Thinking
Overreliance on AI systems may hinder our critical thinking potential as people offload synthesis to machines, according to researchers at Microsoft and Carnegie Mellon University. The study is set to be presented at the CHI conference on Human Factors in Computing Systems in Yokohama, Japan, in April.

In the study, researchers define critical thinking as a hierarchical pyramid, with knowledge at the base, followed by an understanding of ideas, putting ideas into practice, analyzing against related ideas, synthesizing or combining those ideas, and evaluating ideas against set criteria. Based on surveys of 319 knowledge workers - jobs that can generally be categorized as white collar - the study found that while generative AI can improve efficiency, "it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving."

Researchers found that workers like to use AI to double-check their work and do so by comparing it against other external sources to meet certain criteria. While this certainly requires critical analysis, researchers note that workers' use of AI to automate routine or lower-stakes tasks raises concerns about "long-term reliance and diminished independent problem-solving." Interestingly, when workers have higher confidence in AI responses, it "appear[s] to reduce the perceived effort required for critical thinking tasks." However, workers who really believe in their own expertise end up putting greater effort into evaluating AI responses. So while AI can help workers retrieve information faster, they may end up spending more time trying to verify all that information as accurate and not a hallucination.
"As workers shift from task execution to AI oversight, they trade hands-on engagement for the challenge of verifying and editing AI outputs, revealing both the efficiency gains and the risks of diminished critical reflection," the study said.

Researchers don't want people to come to definitive conclusions about AI use and weakened critical thinking, however. They admit that correlations don't indicate causation. It's impossible to see inside the human mind and know exactly what thoughts are bouncing around when a person reads an AI-generated answer. Still, the data did lead to some recommendations from the researchers. The study says as workers shift from information-gathering tasks to more information verification, they should be trained on the importance of cross-referencing AI outputs and assessing their relevancy.

The study comes as AI is proliferating across all sectors, with particularly major effects for businesses: 41% of employers expect to reduce their workforce as AI automates tasks, according to a survey by the World Economic Forum. Already, Big Tech CEOs admit that they've been offloading more tasks to AI, leading to layoffs and fewer job opportunities. The CEO of Klarna told the BBC he's already reduced his workforce from 5,000 to 3,800 and plans to bring it down even further to 2,000, but admits that remaining employees will get paid more. A series of AI safety-related executive orders by former President Joe Biden were overturned by President Donald Trump, giving Big Tech fewer guardrails. Last week, Google lifted its ban on AI being used for the development of weapons and surveillance tools. All of these changes make the results of this study more relevant as workers get access to more AI tools and are tasked with overseeing more AI-generated information.

Researchers did point out that with any new technological innovation, concerns of declining human cognition are commonplace.
For example, they note that Socrates objected to writing, Trithemius objected to printing and educators have long been wary of calculator and internet usage. But they also point out, "A key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise."
[7]
Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
Researchers from Microsoft and Carnegie Mellon University warn that the more you use AI, the more your cognitive abilities deteriorate. Artificial intelligence may one day make humans obsolete - just not in the way that you're thinking. Instead of AI getting so good at completing tasks that it takes the place of a person, we may just become so reliant on imperfect tools that our own abilities atrophy. A new study published by researchers at Microsoft and Carnegie Mellon University found that the more humans lean on AI tools to complete their tasks, the less critical thinking they do, making it more difficult to call upon the skills when they are needed.

The researchers tapped 319 knowledge workers - people whose jobs involve handling data or information - and asked them to self-report details of how they use generative AI tools in the workplace. The participants were asked to report tasks that they were asked to do, how they used AI tools to complete them, how confident they were in the AI's ability to do the task, their ability to evaluate that output, and how confident they were in their own ability to complete the same task without any AI assistance.

Over the course of the study, a pattern revealed itself: the more confident the worker was in the AI's capability to complete the task, the more often they could feel themselves taking their hands off the wheel. The participants reported less "perceived enaction of critical thinking" when they felt like they could rely on the AI tool, presenting the potential for over-reliance on the technology without examination. This was especially true for lower-stakes tasks, the study found, as people tended to be less critical.
While it's very human to have your eyes glaze over for a simple task, the researchers warned that this could portend concerns about "long-term reliance and diminished independent problem-solving." By contrast, the less confidence the workers had in the ability of AI to complete the assigned task, the more they found themselves engaging their critical thinking skills. In turn, they typically reported more confidence in their ability to evaluate what the AI produced and improve upon it on their own.

Another noteworthy finding of the study: users who had access to generative AI tools tended to produce "a less diverse set of outcomes for the same task" compared to those without. That passes the sniff test. If you're using an AI tool to complete a task, you're going to be limited to what that tool can generate based on its training data. These tools aren't infinite idea machines; they can only work with what they have, so it checks out that their outputs would be more homogenous. Researchers wrote that this lack of diverse outcomes could be interpreted as a "deterioration of critical thinking" for workers.

The study does not dispute the idea that there are situations in which AI tools may improve efficiency, but it does raise warning flags about the cost. By leaning on AI, workers start to lose the muscle memory they've developed from completing certain tasks on their own. They start outsourcing not just the work itself, but their critical engagement with it, assuming that the machine has it handled. So if you're worried about getting replaced by AI and you're using it uncritically for your work, you just might create a self-fulfilling prophecy.
[8]
Microsoft study warns AI reliance may erode critical thinking skills at work
Extended use of generative AI leads to decreased critical thinking ability among employees, study finds. A recent study conducted by Microsoft and Carnegie Mellon University warns that excessive dependence on generative AI technology at work can lead to a deterioration of critical thinking skills and cognitive abilities that should be preserved, according to stuff.co.za. The study surveyed 319 knowledge workers who reported using generative AI at least once a week in their professional tasks, as reported by Forbes [https://www.forbes.com/sites/dimitarmixmihov/2025/02/11/ai-is-making-you-dumber-microsoft-researchers-say/]. Participants shared examples of their AI use that fell into three main categories: creation, information, and advice. An example for the creation category is writing a formulaic email to a colleague; the information category includes researching a topic or summarizing a long article; and the advice category involves asking for guidance or making a chart from existing data, according to TechCrunch. Participants self-reported on the effect generative AI technology is having on their cognitive functions. The findings indicated that the more humans use AI, the more their cognitive abilities deteriorate, leading to a concerning 'atrophy' in critical thinking abilities. "Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved," the researchers stated. "A key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise," stated the study.
When workers rely on generative AI, their focus shifts towards verifying the quality of AI-generated answers instead of engaging in higher-order critical thinking skills like creating, evaluating, and analyzing information. The study found that the more employees trust AI tools to perform their tasks, the lower their capacity for critical thinking and independent evaluation, which can lead to impaired independent problem-solving and critical reflection in the long term. Moreover, the researchers detected that users with access to generative AI tools produce a less diverse set of outcomes for the same task compared to those without, reflecting a deterioration of critical thinking, as it indicates a lack of personal, contextualized, critical, and reflective judgment of AI output. The study found that reduced critical thinking makes it more difficult for humans to call upon their skills when they are needed. One participant noted that she used ChatGPT to write a performance review but double-checked the result, fearing she might accidentally provide a document that could lead to her being removed from her job. Another respondent reported that he had to edit AI-generated emails he was about to send to his boss -- whose culture places more emphasis on hierarchy and age -- so that he wouldn't commit a faux pas. In many cases, participants checked AI-generated answers using regular internet search queries, utilizing resources such as YouTube and Wikipedia. About 36% of participants reported using critical thinking skills to mitigate potential negative consequences of using AI. However, not all participants were familiar with the limitations of AI. The researchers suggest that to compensate for the shortcomings of generative AI, workers need to understand how these shortcomings arise. The article was written with the assistance of a news analysis system.
[9]
Study Finds That People Who Entrust Tasks to AI Are Losing Critical Thinking Skills
Trusting artificial intelligence over the real thing seems, per a new study, to be atrophying folks' critical thinking skills. As flagged by the folks at 404 Media, new research from Carnegie Mellon and Microsoft -- yes, the same company that invested nearly $14 billion into OpenAI and is essentially subsidizing the ChatGPT maker -- suggests that the more people use AI, the less critical thinking they do. "Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved," the researchers wrote in the paper. "A key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise." The research team surveyed 319 "knowledge workers" -- basically, folks who solve problems for work, though definitions vary -- about their experiences using generative AI products in the workplace. From social workers to people who write code for a living, the professionals surveyed were all asked to share three real-life examples of when they used AI tools at work and how much critical thinking they did when executing those tasks. In total, more than 900 examples of AI use at work were shared with the researchers. The findings from those examples were striking: overall, those who trusted the accuracy of the AI tools found themselves thinking less critically, while those who trusted the tech less used more critical thought when going back over AI outputs. "The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI," the researchers wrote.
"Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving." This isn't enormously surprising. Something we've observed in many domains, from self-driving vehicles to scrutinizing news articles produced by AI, is that humans quickly go on autopilot when they're supposed to be overseeing an automated system, often allowing mistakes to slip past. The use of AI also appeared to hinder creativity, the researchers found, with workers using AI tools producing a "less diverse set of outcomes for the same task" compared to people relying on their own cognitive abilities. As the scientists behind the study point out, there have long been concerns about "offloading" cognitive tasks to new technologies. From calculators to GPS maps, many new technologies have aided everyday tasks by taking out the legwork -- but this time, if the boosters are to be believed, we just might end up replacing ourselves entirely.
[10]
Is AI making us dumb? | TechCrunch
Researchers from Microsoft and Carnegie Mellon University recently published a study looking at how using generative AI at work affects critical thinking skills. "Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved," the paper states. When people rely on generative AI at work, their effort shifts toward verifying that an AI's response is good enough to use, instead of using higher-order critical thinking skills like creating, evaluating, and analyzing information. If humans only intervene when AI responses are insufficient, the paper says, then workers are deprived of "routine opportunities to practice their judgment and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise." In other words, when we rely too much on AI to think for us, we get worse at solving problems ourselves when AI fails. In this study of 319 people, who reported using generative AI at least once a week at work, respondents were asked to share three examples of how they use generative AI at work, which fall into three main categories: creation (writing a formulaic email to a colleague, for example); information (researching a topic or summarizing a long article); and advice (asking for guidance or making a chart from existing data). Then, they were asked if they practice critical thinking skills when doing the task, and if using generative AI makes them use more or less effort to think critically. For each task that respondents mentioned, they were also asked to share how confident they were in themselves, in generative AI, and in their ability to evaluate AI outputs. About 36% of participants reported that they used critical thinking skills to mitigate potential negative outcomes from using AI. 
One participant said she used ChatGPT to write a performance review, but double-checked the AI output for fear that she could accidentally submit something that would get her suspended. Another respondent reported that he had to edit AI-generated emails that he would send to his boss -- whose culture places more emphasis on hierarchy and age -- so that he wouldn't commit a faux pas. And in many cases, participants verified AI-generated responses with more general web searches from resources like YouTube and Wikipedia, possibly defeating the purpose of using AI in the first place. In order for workers to compensate for the shortcomings of generative AI, they need to understand how those shortcomings happen. But not all participants were familiar with the limits of AI. "Potential downstream harms of GenAI responses can motivate critical thinking, but only if the user is consciously aware of such harms," the paper reads. In fact, the study found that participants who reported confidence in AI used less critical thinking effort than those who reported having confidence in their own abilities. While the researchers hedge against saying that generative AI tools make you dumber, the study shows that overreliance on generative AI tools can weaken our capacity for independent problem-solving.
[11]
Microsoft's latest study confirms it: AI is making us dumber
TL;DR: A recent report by Microsoft and Carnegie Mellon researchers highlights that generative AI use may impair critical thinking. The study of 319 knowledge workers found over-reliance on AI weakens cognitive abilities, leading to "mechanised convergence" and "digital amnesia." A recent report published by researchers from Microsoft and Carnegie Mellon has revealed that the use of generative AI may be linked to a deterioration in critical thinking. The logic of the conclusion is clear: the more you use AI tools, the less critical thinking you're likely to do. However, the research - conducted on 319 knowledge workers - very clearly puts things in perspective. The researchers reached these conclusions through self-reporting - having the participants answer questions about how they used AI, when they applied critical thinking, and how much effort was required. One of the findings was that an over-reliance on AI can weaken cognitive abilities. The researchers described how respondents would often mistake "copy-pasting with minor tweaks" for critical thought - accepting AI output without challenging its assertions. This tendency is concerning, especially given how confidently wrong AI models tend to be. Look no further than Google's AI summaries, and some of their hilarious responses, as an example. Another finding was that AI can lead to 'mechanised convergence', which basically refers to less diversity of thought when approaching a given task. Instead of adopting creative and diverse solutions, AI would push people towards very generic and formulaic outputs: "Users with access to GenAI tools produce a less diverse set of outcomes for the same task, compared to those without." "Digital amnesia" was another phenomenon described in the study, which essentially describes how relying on AI means we remember less, think less, and lose the ability to process complex information ourselves.
"While GenAI and conversational search engines can streamline tasks like literature reviews, some fear that outsourcing this work could harm our ability to learn and remember, in what is sometimes referred to as 'digital amnesia'," the researchers wrote. Instinctively, we probably all saw this coming. Having endless information in the palm of your hand is bound to create some level of reliance and mental atrophy. Yet, if there's anything to take away from this, it's not that AI is bad. It's that we need to be active in creating a healthy relationship with it.
[12]
Overreliance on A.I. Erodes 'the Quality of Human Thought,' Microsoft Study Finds
A.I. deprives humans of "routine opportunities to practice their judgement and strengthen their cognitive musculature." Humans have always been wary of how technological advances could deteriorate our thinking skills. The printing press, for example, led some to worry that scribes would become lazy; calculators spooked math teachers; and the internet spawned widespread anxiety over its cognitive impacts. When it comes to generative A.I., however, there may be real cause for concern, according to a new study from researchers at Microsoft (MSFT) and Carnegie Mellon University. Like printing, calculators and the internet, A.I. tools "are the latest in a long line of technologies that raise questions about their impact on the quality of human thought," said the survey's authors, who found that individuals with higher confidence in generative A.I. tools rely less on critical thinking skills. According to the researchers, this correlation could lead to widespread obstacles for workers, as improperly used technologies "can and do result in the deterioration of cognitive faculties that ought to be preserved." The study surveyed 319 "knowledge workers," defined as professionals who handle information, and examined 936 examples of how they use generative A.I. at work. Some of the cited tasks included a lawyer using ChatGPT to find relevant laws for a legal case, a teacher using DALL-E to generate an image for a presentation on hand washing at school, and a commodities trader using ChatGPT to seek recommendations for improving their trading skills. Besides asking participants to self-report A.I.
work tasks, the survey evaluated their confidence in generative A.I.'s abilities and their confidence in evaluating A.I. outputs and completing the same tasks without the technology. The results showed that those with less confidence in such tools used critical thinking skills to improve the quality of their work, while a reliance on A.I. tools often diminished "independent problem-solving," according to the study, which noted that knowledge workers are increasingly trading "hands-on engagement for the challenge of verifying and editing A.I. outputs." Participants linked upticks in critical thinking to a desire to avoid potential negative outcomes of their A.I. use, ranging from outdated information to wrong outcomes or false mathematical formulas. This was especially evident in high-stakes scenarios, such as work assignments that could impact one's employment or events that could cause social conflict, like communications to coworkers with different cultural backgrounds. Could A.I. dull cognitive abilities over time? While the use of generative A.I. for seemingly low-stakes tasks, like grammar-checking, might appear less worrying, the study's authors warn that such overreliance could inadvertently cause negative outcomes to occur more frequently when more important tasks appear. "Without regular practice in common and/or low-stakes scenarios, cognitive abilities can deteriorate over time, and thus create risks if high-stakes scenarios are the only opportunities available for exercising such abilities," they said. Sure, generative A.I. tools can lighten grunt work by automating tasks for workers. But, according to the study, a "key irony" of such automation is that it deprives humans of "routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise."
[13]
Using ChatGPT? AI Could Damage Your Critical Thinking Skills, According to a Microsoft Study
The researchers cautioned that while AI could make workplaces more efficient, it could also lead to less critical engagement, long-term reliance, and reduced independent thinking. What if the most pressing danger of AI is not its ability to replace jobs, as more than one in five U.S. workers fear, but its potential to cause cognitive decline? Researchers at Microsoft and Carnegie Mellon University published a new study last month that claims to be the first to examine the effects of AI tools on critical thinking. The researchers found that the more confident human beings were in AI's abilities to get a task done, the fewer critical-thinking skills they used. Humans confident in AI left critical thinking to ChatGPT instead of doing it themselves and strengthening their cognitive abilities. "Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved," the researchers wrote, adding that "a key irony of automation is that by mechanizing routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgment and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise." The researchers surveyed 319 knowledge workers, or workers who handle data or information, to find out how confident they were in AI's capabilities and how much critical thinking they employed when using AI to complete tasks. Critical thinking was defined as falling under one of six categories: knowledge (remembering ideas), comprehension (understanding ideas), application (putting ideas to work in the real world), analysis (contrasting and relating ideas), synthesis (combining ideas), and evaluation (judging ideas).
The surveyed knowledge workers used AI like ChatGPT at least once a week and gave 936 examples of how they used AI at work, ranging from looking up facts to summarizing a text. They mainly used critical thinking to set clear prompts, refine prompts, and verify AI responses against external sources. Six out of the seven researchers listed are associated with Microsoft Research, the research subsidiary of Microsoft created in 1991. Microsoft has deep interests in AI, with its investment in ChatGPT-maker OpenAI totaling close to $14 billion and its plans to spend $80 billion on AI data centers in the fiscal year ending in June. The researchers caution that while AI can make workplaces more efficient, it could "also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving." In other words, AI has a hidden cost: It could lead workers to lose muscle memory for more routine tasks.
[14]
Some workers are already outsourcing their brains to AI
Dammit, that was our job here at The Reg. Now if you get a task you don't understand, you may assume AI has the answers Some knowledge workers risk becoming over-reliant on generative AI and their problem-solving skills may decline as a result, according to a study penned by researchers from Microsoft Research and Carnegie Mellon University. In a paper titled "The Impact of Generative AI on Critical Thinking", the seven researchers report and analyze a survey in which they asked 319 knowledge workers who use generative AI at least weekly how and if they apply critical thinking when using tools such as Copilot and ChatGPT. The research found that workers who are confident tackling a task are more likely to apply critical thinking to the output of a generative AI service, while those less comfortable with a task often assume generative AI produced adequate answers and don't bother to think about what the brainbox delivered. The researchers suggest their findings point to a need for a re-think about the design of enterprise AI tools. "Confidence in AI is associated with reduced critical thinking effort, while self-confidence is associated with increased critical thinking," the paper states, adding "This duality indicates that design strategies should focus on balancing these aspects." AI tools, the team suggests, should incorporate mechanisms to support long-term skill development and encourage users to engage in reflective thinking when interacting with AI-generated outputs. "This aligns with the goals of explainable AI," the researchers said, referring to the practice of having AI outline how it delivered its output. The call for AI to show its workings is good news for the latest chain-of-thought AI models from DeepSeek and OpenAI - but merely explaining AI's reasoning isn't enough.
Good AI tools should foster critical thinking through proactive design strategies that encourage user reflection and provide assistance when necessary, the researchers wrote. That might seem like criticism of current AI tools, but the paper doesn't go there. The authors also stop short of recommending that knowledge workers reduce AI use to avoid "cognitive offload" and the potential "deterioration of cognitive faculties that ought to be preserved." The authors didn't respond to questions from The Register. The paper concludes that we should adapt to an AI-infused world by applying critical thinking to verify AI outputs and how they can be used in daily work. Which may be what one would expect, given that six of seven authors work at the company that sells Copilot. Yes, the researchers admit, knowledge workers should be taught to "maintain foundational skills in information gathering and problem-solving [to] avoid becoming over-reliant on AI," just not too much. Those working with systems like ChatGPT, Copilot, and other generative AI tools should be trained "on developing skills in information verification, response integration and task stewardship." This isn't the only study to conclude that more reliance on AI is having a negative impact on critical thinking skills, but previous work has concluded that we need to preserve our current critical thinking skills - not offload them to AI and change the way we engage those crucial faculties to simply validate and integrate AI output. "When using genAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship," the authors conclude. The paper will be presented at the 2025 Conference on Human Factors in Computing Systems, which starts in late April. ®
[15]
Shut it all down? Microsoft research suggests AI usage is making us feel dumber - but you don't need to panic yet
The survey-based study opens the door to deeper research into AI side effects Brain rot is usually associated with endless doom scrolling of low-quality online content, but a new Microsoft Research study suggests that overusing AI might be causing "the deterioration of cognitive faculties." Oh no. Performed in collaboration with Carnegie Mellon University, the study - titled The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers - asked 319 participants who use "GenAI tools at work at least once per week" (such as Gemini, ChatGPT, Copilot) to judge how AI usage at work was affecting their critical thinking skills. It found that "while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving." The study added that people's jobs are becoming less about solving problems and more about making sure the AI has correctly solved the problem - morphing our work from "task execution to task stewardship" - which led people to feel like their ability to think critically isn't as sharp as it was before they started using AI. Seeing the headlines and reading the study, it can feel like it's time to pull the big AI lever and shut it all down for good in order to save our brains from being irreparably damaged by a tool that may be hurting us more than it's helping us. But while the study certainly highlights challenges we desperately need to tackle in the AI space - ultimately, the kind of relationship between humans and AI we should be cultivating - it's not quite as dire as it seems. That's because the study focuses on how AI users perceive AI usage is affecting their critical thinking.
While the study uses a survey to quantify these feelings, it comes down to how people feel and, more importantly, it doesn't create a true quantitative comparison between frequent AI users and people who never or rarely use AI. While people who use AI a lot may feel they're less capable of critical thinking, it could be that, because they're not as distracted by less difficult tasks, they can more easily tackle challenging ones thanks to AI's help - or it could be that there's been no change in their critical thinking skills. To measure actual changes in critical thinking, we'd need a study with quantitative tests that compare different groups' ability to complete tasks (with groups decided based on their AI usage) - including a control group that never uses AI for work. Until we have that kind of study, it's impossible to judge if AI really is making us dumber. However, that doesn't mean we should just dismiss this Microsoft research. Firstly, using this study as inspiration for a more quantitative study like the one I've described would be a fantastic next step. Secondly, while the participants' feelings may or may not represent factual reality, they represent reality for the people in this study - and they shouldn't be dismissed. If people feel like they're simply AI stewards rather than performing meaningful work, that could lead to dissatisfaction in their job. Some psychologists believe dissatisfaction with how we spend our time scrolling is the root cause of the social media brain rot experience (via the BBC). A similar sense of meaninglessness could be causing AI users' similar feelings of brain deterioration at work. With AI becoming more prevalent, we'll need to work out how best to fight off feelings of deterioration, but a good first step will always be some self-care. Be that finding a new job that stimulates us, taking up a hobby that we can devote our freed-up brain power towards, or some other strategy that works best for you.
[16]
Using AI tools like ChatGPT can reduce critical thinking skills
A survey of workers who used generative AI to complete tasks found that they used critical thinking less when they trusted the AI to do the task accurately Using generative AI can limit its users' critical thinking when doing tasks. People using generative AI also think less critically when they trust the AI to do a task, such as developing an argument for a paper or presentation. The researchers behind the findings say the solution is to adapt the technology, rather than to limit its use. Len Tankelevitch at Microsoft Research and his colleagues asked 319 workers to take part in a survey. The workers,...
[17]
Microsoft Says Workers are Already Using AI to Lighten Workloads, at Risk to Their Own Brainpower
The findings were stark. Survey respondents said that when they had high confidence that an AI tool would do well at a particular task, they felt less need to apply their own critical thinking. On the other hand, when a worker had high confidence in their own skills and less in the AI's, they felt good about putting effort into evaluating the output the AI gave them and then improving it. AI is redefining how we see work It all boils down to the fact that when knowledge workers use AI tools, it shifts the way they think about performing activities like analysis, synthesis, and evaluation of information. The AI moves a worker's focus from information gathering to information verification when using an AI to help try to understand something, and when using an AI for problem solving, the shift is away from carrying out the actual solving process to task stewardship. Think of it like this: when aircraft didn't have an autopilot, pilots had to concentrate the whole time on operating the airplane -- navigating, controlling, reacting to technical challenges, and feeling the way the wind was blowing. Modern-day jetliner pilots have a very different job. They have to be able to fly the plane manually in case of unexpected problems, but minute to minute, what they're often doing is monitoring the aircraft as it automatically flies itself to make sure it's doing the right thing.
A recent study by Microsoft and Carnegie Mellon University researchers suggests that overreliance on AI tools in the workplace may be eroding critical thinking skills, raising concerns about long-term cognitive impacts.
A recent study conducted by researchers from Microsoft and Carnegie Mellon University has raised concerns about the potential negative impact of artificial intelligence (AI) on human critical thinking skills in the workplace 1. The research, which surveyed 319 knowledge workers across various professions, suggests that increased reliance on AI tools may be leading to a deterioration of cognitive abilities 2.
The study revealed several important insights:
Higher confidence in AI correlates with less critical thinking, while higher self-confidence is associated with more critical thinking 3.
Workers are increasingly shifting from task execution to oversight when using generative AI, potentially reducing critical engagement with their work 4.
The use of AI tools may lead to a less diverse set of outcomes for the same task, reflecting a lack of personal, contextualized, and reflective judgment 4.
Researchers warn that by mechanizing routine tasks and leaving exception-handling to human users, AI tools may be depriving workers of opportunities to practice judgment and strengthen their "cognitive musculature" 4. This could leave workers "atrophied and unprepared" when exceptions arise, potentially impacting long-term problem-solving abilities 5.
The study identified several factors that influence the use of critical thinking when working with AI:
Confidence in the AI's ability to perform a task, which is associated with reduced critical thinking effort.
Confidence in one's own expertise, which is associated with increased critical thinking.
The stakes of the task, with lower-stakes tasks tending to receive less scrutiny.
Awareness of the potential downstream harms of AI responses, which can motivate critical thinking.
To address these concerns, the researchers suggest developing AI tools that motivate users to engage in critical thinking. Some recommendations include:
Incorporating mechanisms that support long-term skill development.
Encouraging users to engage in reflective thinking when interacting with AI-generated outputs.
Having AI explain how it arrived at its output, in line with the goals of explainable AI.
Training workers in information verification, response integration, and task stewardship.
The researchers note that concerns about new technologies impacting human cognition are not new, citing historical objections to writing, printing, and calculators 5. However, they emphasize the unique challenges posed by AI in potentially diminishing independent problem-solving skills 3.
As AI continues to proliferate across various sectors, with significant impacts on businesses and workforce dynamics, the findings of this study become increasingly relevant 5. The research underscores the need for careful consideration of how AI tools are integrated into workplace processes to preserve and enhance human cognitive abilities.
Reference
[1]
[2]
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved