Curated by THEOUTPOST
On Wed, 19 Feb, 12:03 AM UTC
4 Sources
[1]
Is AI making us dumber?
If you don't need to think about easy questions, will you be able to answer complex questions?

Opinion I don't want to sound like an aging boomer, but when I see junior programmers relying on AI tools like Copilot, Claude, or GPT for simple coding tasks, I wonder if they're doing themselves more harm than good. These tools have made simple jobs much easier, but it's only by learning the skills that minor tasks demand that you master the abilities major jobs require.

I'm not the only one who worries about that. Namanyay Goel, an independent developer, recently wrote a blog post - with more than a million hits - that clearly struck a nerve. I agree with him.

I'm not saying you need to learn the skills I picked up in the '70s and '80s with IBM 360 Assembler and Job Control Language (JCL). That would be foolish. But by working with such tools, I grokked how computers worked at a very low level, which, in turn, helped me pick up C and Bash. From there, I wrote some moderately complex programs. I can't say I was ever a great developer. I wasn't. But I knew enough to turn in good work.

Will today's neophyte programmers be able to say the same? I wonder. I really do. As Goel said: "AI gives you answers, but the knowledge you gain is shallow. With StackOverflow, you had to read multiple expert discussions to get the full picture. It was slower, but you came out understanding not just what worked but why it worked."

Exactly so. In my day, it was Usenet and the comp newsgroups - yes, I'm old - but at its best, the experience was the same. The newsgroups were made up of people eager not just to address how to solve a particular problem but to understand the nature of the problem.

This isn't just two people spouting off. A recent Microsoft Research study, The Impact of Generative AI on Critical Thinking, found that, among knowledge workers, "higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking." Further, "used improperly, technologies can and do result in the deterioration of cognitive faculties." Another study, by Michael Gerlich at SBS Swiss Business School in Zurich, Switzerland, also found "a negative correlation between frequent AI use and critical thinking abilities."

Grant Blashki, a professor at the University of Melbourne, agrees: "It's a simple case of 'use it or lose it.' When we outsource a cognitive task to technology, our brains adapt by shifting resources elsewhere - or just going idle. Convenience comes with a cost. If AI takes over too much of our cognitive workload, we may find ourselves less capable of deep thinking when it really matters."

That's bad. It's especially bad when people are still learning how to think in their field. Sure, we get faster answers, but as Blashki noted: "It's the difference between climbing a mountain and taking a helicopter to the top. Sure, you get the view either way, but one experience builds strength, resilience, and pride - the other is just a free ride."

Besides, as much as you may want to turn over all your work to an AI so you can get back to watching Severance or The Night Agent, you still can't trust AI. AI chatbots have been getting better at not hallucinating, but even the best of them still do it. Even with programming, my ZDNet colleague David Gewirtz, who has been testing chatbots' development skills for two years, observed: "AIs can't write entire apps or programs. But they excel at writing a few lines and are not bad at fixing code."

That's nice, but it won't help you when you need to write a complex application. So, what should you do? Here's my list:

1. Don't treat AI as a magic answer box. Trust, but verify its answers (a short sketch of what that looks like follows this article).
2. Use AI results as a starting point. For programming, work out how it's solving your problem, and consider whether there's a better way.
3. Look for sites where the smart people are talking about your field of expertise. Ask questions there, answer questions there, and study how others are dealing with their problems.
4. Get involved in your colleagues' professional conversations.
5. When you do code reviews, don't stop when the code works. Dig deeper until you understand the process.
6. Last but not least, try coding, writing, or whatever it is you do from scratch now and then. Stretch your mental muscles.

Blashki said it best: "The goal isn't to reject AI - it's to create a balanced relationship where AI enhances human intelligence rather than replacing it." ®
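To make the "trust, but verify" advice concrete, here's a minimal Python sketch. The helper and its bug are invented for illustration - a stand-in for the kind of plausible-looking snippet a coding assistant might produce, not real output from Copilot, Claude, or GPT:

```python
# Hypothetical AI-suggested helper: looks right, works on the happy path.
def average(numbers):
    """Return the mean of a list of numbers."""
    return sum(numbers) / len(numbers)

# Verifying instead of pasting-and-moving-on is what catches the edge case.
assert average([2, 4, 6]) == 4  # the case you probably tried

try:
    average([])  # ...the case you probably didn't
except ZeroDivisionError:
    print("Empty input crashes - the 'working' answer was incomplete.")

# Understanding *why* it fails (len([]) == 0) is the shallow-vs-deep
# knowledge gap Goel describes. Once you see it, the fix is easy:
def average_fixed(numbers):
    """Return the mean, or None for an empty list (a deliberate choice)."""
    return sum(numbers) / len(numbers) if numbers else None
```

The point isn't this particular bug; it's the habit of asking what generated code does at its boundaries before you ship it.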
[2]
As AI gets smarter, are we getting dumber?
Artificial intelligence (AI) is no longer just a futuristic fantasy -- it's here, and it's thinking for us. From generating essays and diagnosing diseases to analyzing legal documents and coding software, AI is increasingly taking over tasks that once required years of human expertise. But here's the twist: as AI gets smarter, are we getting dumber?

There's growing concern that an over-reliance on AI could lead to a kind of cognitive atrophy -- let's call it AI apathy. If machines can handle the tough thinking, will humans stop flexing their own mental muscles? Will students, professionals and everyday knowledge workers slowly lose the ability (or motivation) to problem-solve, analyze and create on their own?

The signs are already here. Research on GPS navigation suggests that when we rely too heavily on automated directions, our spatial memory declines. Studies on pilots show that those who rely on autopilot lose critical situational-awareness skills. And psychologists have documented the so-called "Google Effect" -- the tendency to forget information because we know we can just look it up online again. So, what happens when AI isn't just giving us directions or retrieving facts but actually doing our thinking?

Cognitive load theory tells us that our brains need a certain level of difficulty to process information deeply. If something is too easy -- like, say, getting AI to write an essay for you -- your brain doesn't engage enough to form lasting knowledge. Psychologists call this the productive struggle: the idea that working through difficult problems builds resilience, confidence and deep understanding. Canadian-American psychologist Albert Bandura, a pioneer in motivation research, called it mastery experience -- the boost in confidence that comes from figuring something out on your own.

The problem? AI can short-circuit this process. When students or professionals rely too much on AI, they miss out on the intellectual workout that strengthens their analytical skills.

Then there's self-determination theory, a well-established motivation framework, which highlights competence as a key driver of engagement. People feel motivated when they overcome challenges -- not when answers are handed to them on a silver platter. If AI makes tasks too easy, it risks stripping away the satisfaction of learning, reducing motivation in the process. It's the difference between climbing a mountain and taking a helicopter to the top. Sure, you get the view either way, but one experience builds strength, resilience, and pride -- the other is just a free ride.

The GPS effect: How automation can dull our skills

If you've ever used a GPS or satnav and realized later that you have no idea how you got to your destination, you've experienced automation-induced cognitive decline in action. Studies have found that people who frequently use GPS show poorer spatial memory and a weaker ability to navigate without assistance. In one experiment, people who relied heavily on GPS performed worse on navigation tasks that required them to remember routes. Even more concerning, frequent GPS users showed declines in hippocampal function -- the brain region responsible for spatial memory.

It's a simple case of "use it or lose it." When we outsource a cognitive task to technology, our brains adapt by shifting resources elsewhere -- or just going idle. Convenience comes with a cost. If AI takes over too much of our cognitive workload, we may find ourselves less capable of deep thinking when it really matters.

AI and knowledge work: The rise of intellectual sluggishness

Now, let's bring this back to AI. If navigation apps weaken our spatial awareness and autopilot dulls a pilot's situational awareness, what happens when AI starts handling intellectual tasks -- writing reports, solving math problems, synthesizing research?

The risk is intellectual complacency. If AI is always there to generate answers, will students and professionals stop pushing themselves to think critically? Will they trust AI blindly, without questioning its logic? Early evidence suggests this is already happening. Education researchers report that tertiary students who used generative AI tools for their essays ultimately performed worse in their examinations, raising questions about the role of AI in education. In some creative fields, studies paint a mixed picture, with generative AI increasing creativity but reducing the number of new ideas. In one study, readers found AI-assisted stories more enjoyable, but those stories were also more similar to one another than stories written by humans alone.

Even beyond individual skills, there's a bigger issue: what happens to innovation if AI users become passive consumers of machine-generated knowledge instead of active thinkers? Great scientific discoveries, philosophical insights and artistic breakthroughs don't come from taking shortcuts -- they come from wrestling with tough ideas. If we let AI do all the heavy lifting, we risk stagnation in human creativity and critical thought.

How to avoid AI apathy

So should we just abandon AI and go back to pen and paper? That's not happening. AI is an incredibly powerful tool, and the key is learning how to use it without losing ourselves in it. Here's how:

1. Use AI as a thinking partner, not a crutch. Instead of letting AI think for you, use it to enhance your thinking. For example, students can use AI to brainstorm ideas but still write their own essays. Professionals can use AI for research but critically evaluate those findings rather than blindly accepting them.

2. Prioritize process over easy answers. Schools and workplaces can emphasize how you arrived at a conclusion, not just the final answer. Requiring explanations, alternative solutions and independent reasoning helps maintain cognitive engagement.

3. Practice 'unplugged' thinking. Just as pilots need manual flying refreshers, knowledge workers might benefit from AI-free exercises. Writing an essay without AI, doing mental math or brainstorming without digital help keeps the brain active and adaptable.

4. Let AI guide learning rather than replace effort. AI tutors, for example, can give hints instead of direct answers, helping students reach solutions independently (a code sketch of this pattern follows the article).

The goal isn't to reject AI -- it's to create a balanced relationship where AI enhances human intelligence rather than replacing it.

The future of thinking in an AI world

AI is here to stay, and its ability to outperform humans in certain cognitive tasks is only growing. But that doesn't mean we should surrender our intellectual abilities to it. The risk of AI apathy is real: if we rely too much on AI, our own analytical and creative skills could wither from disuse. But if we consciously design education, work and daily life to keep human thinking in the loop, we can maintain our cognitive edge.

In a world where AI is getting smarter, our challenge is to make sure we don't get dumber. Because, at the end of the day, AI might be able to think for us -- but it's up to us to make sure we keep thinking for ourselves.
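The fourth step's tutoring idea - AI that gives hints instead of answers - is mostly a constraint on the prompt. Here is a minimal Python sketch of the pattern; `ask_model()` is a hypothetical stand-in for whatever chat-model call you actually use, and the prompt wording is an illustrative assumption, not any product's documented API:

```python
# The "hints, not answers" tutoring pattern: the system prompt forbids
# final answers so the student keeps the productive struggle.
HINT_ONLY_SYSTEM_PROMPT = (
    "You are a tutor. Never state the final answer or write the solution. "
    "Reply with one guiding question or a pointer to the relevant concept, "
    "so the student works out the solution themselves."
)

def ask_model(system: str, user: str) -> str:
    # Placeholder: swap in a real chat-completion call here (OpenAI,
    # Claude, a local LLM, ...). This stub just echoes so the file runs.
    return f"[model reply, constrained by system prompt: {system[:40]}...]"

def tutor_hint(student_question: str) -> str:
    """Return a hint - never a worked solution - for the question."""
    return ask_model(HINT_ONLY_SYSTEM_PROMPT, student_question)

print(tutor_hint("Why does my loop never terminate?"))
```

The design choice is putting the restraint in one place (the system prompt) rather than trusting each student to ask for hints; a real deployment would also need to handle users who ask the model to ignore its instructions.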
[3]
Microsoft Says AI Kills Critical Thinking -- Here's What That Means for You
If you're letting AI do your thinking and evaluating, of course your skills drop. AI is really good at what it's good at, and terrible at other things. A lot of its strengths lie in its ability to do very basic tasks quickly. Simple writing assignments are something it excels at.

My big concern, though, is that the more you use AI, the less you use your brain. The less you use your brain, the dumber you are. So when you run across a problem AI can't solve, will you have the skills to solve it yourself? It's something I've worried about on a societal level, more than just my ability to make appointments for my cats.

What the HR experts think

Last fall, at Unleash, I sat down for a one-on-one with HR analyst and data guru Josh Bersin. I asked him this very question: "Are you concerned that AI will make people dumber?" He gave an optimistic answer to my pessimism and said three important things:

1. If we have AI doing basic tasks, that frees us up to do higher-level tasks, which will add value.
2. There are plenty of tasks that people used to do and we don't have to do anymore -- for instance, I don't have to know how to weave fabric or butcher a cow in order to wear clothes and eat a hamburger. Sure, someone has to do those things, but in earlier times, far more people needed that knowledge.
[4]
Use it or lose it: AI will rot our brains
Technology can physically change our brains as it becomes an integral part of daily life - but every time we outsource a function, we risk letting our ability atrophy away. What happens when that ability is critical thinking itself?

As a tail-end Gen-Xer, I've had the remarkable experience of going from handwritten rolodex entries and curly-cord rotary phones through to today's cloudborne contact lists, which let you reach people in any of a dozen ways within seconds, whatever sort of phone or device you're holding. My generation's ability to remember phone numbers is a bit of a coccyx - the vestigial remainder of a structure no longer required. And there are plenty of those in the age of the smartphone.

The prime example is probably navigation. Reading a map, integrating it into your spatial mental plan of an area, remembering key landmarks, highway numbers and street names as navigation points, and then thinking creatively to find ways around traffic jams and blockages is a pain in the butt, especially when your phone can do it all for you, taking traffic, speed cameras and current roadworks into account to optimize the route on the fly.

But if you don't use it, you lose it; the brain can be like a muscle in that regard. Outsourcing your spatial abilities to Apple or Google has real consequences - studies have now shown in an "emphatic" manner that increased GPS use correlates with a steeper decline in spatial memory. And spatial memory appears to be so important to cognition that another research project was able to predict which suburbs are likely to have a higher proportion of Alzheimer's patients with nearly 84% accuracy, just by ranking how navigationally "complex" each area was.

The "use it or lose it" idea becomes particularly scary in 2025 when we look at generative Large Language Model (LLM) AIs like ChatGPT, Gemini, Llama, Grok, DeepSeek, and hundreds of others that are improving and proliferating at astonishing rates. Among a thousand other uses, these AIs more or less allow you to start outsourcing thinking itself - the concept of cognitive offloading, pushed to an absurd but suddenly logical extreme.

They've only been widely used for a couple of years at this point, during which they've exhibited an explosive rate of improvement, but many people already find LLMs an indispensable part of daily life. They're the ultimate low-cost or no-cost assistant, making encyclopedic (if unreliable) knowledge available in a usable format, at speeds well beyond human thought. AI adoption rates are off the charts; according to some estimates, humanity is jumping on the AI bandwagon considerably faster than it got on board with the internet itself.

But what are the brain effects we can expect as the global population continues to outsource more and more of its cognitive function? Does AI accelerate humanity towards Idiocracy faster than Mike Judge could've imagined?

A team of Microsoft researchers has made an attempt to get some early information onto the table and answer these kinds of questions. Specifically, the study attempted to assess the effect of generative AIs on critical thinking. Without long-term data at hand, or objective metrics to go by, the team surveyed a group of 319 "knowledge workers," who were asked to self-assess their mental processes over a total of 936 tasks. Participants were asked when they engaged critical thinking during these tasks, how they enacted it, whether generative AI affected the effort of critical thinking, and to what extent. They were also asked to rate their confidence in their own ability to do these tasks, and their confidence in the AI's ability.

The results were unsurprising: the more the participants believed in the AI's capability, the less critical thinking they reported. Interestingly, the more the participants trusted their own expertise, the more critical thinking they reported - but the nature of the critical thinking itself changed. People weren't solving problems themselves so much as checking the accuracy of the AI's work and "aligning the outputs with specific needs and quality standards."

Does this point us toward a future as a species of supervisors over the coming decades? I doubt it; supervision itself strikes me as the sort of thing that'll soon be easy enough to automate at scale. And that's the new problem here: cognitive offloading was supposed to get our minds off the small stuff so we could engage with the bigger stuff. But I suspect AIs won't find our "bigger stuff" much more challenging than our smaller stuff. Humanity becomes a God of the Gaps, and those gaps are already shrinking fast.

Perhaps WALL-E got it wrong; it's not the deterioration of our bodies we need to watch out for as the age of automation dawns, but the deterioration of our brains. There's no hover chair for that - but at least there's TikTok?

Let's give the last word to DeepSeek R1 today - that seems appropriate, and frankly I'm not sure I've got any printable words to follow this. "I am what happens," writes the Chinese AI, "when you try to carve God from the wood of your own hunger."
As AI becomes increasingly integrated into daily life and work, concerns arise about its impact on human cognitive abilities, particularly critical thinking skills.
As artificial intelligence (AI) becomes increasingly integrated into our daily lives and work processes, concerns are growing about its potential impact on human cognitive abilities, particularly critical thinking skills. This phenomenon, dubbed "AI apathy," is raising questions about the long-term effects of over-reliance on AI technologies [1][2][3][4].
AI tools have undoubtedly made many tasks more efficient and accessible. However, this convenience may come at a cost. As Professor Grant Blashki from the University of Melbourne notes, "It's a simple case of 'use it or lose it.' When we outsource a cognitive task to technology, our brains adapt by shifting resources elsewhere - or just going idle" [1]. This adaptation could lead to a decline in critical thinking abilities and problem-solving skills.
Recent studies have begun to shed light on this issue:
A Microsoft Research study found that higher confidence in generative AI is associated with less critical thinking, while higher self-confidence is linked to more critical thinking [1].
Research by Michael Gerlich at SBS Swiss Business School in Zurich, Switzerland, discovered a negative correlation between frequent AI use and critical thinking abilities [1].
Studies on GPS use have shown that over-reliance on automated navigation can lead to a decline in spatial memory and navigation skills [2][4].
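For a sense of what a "negative correlation" amounts to in survey studies like these, here is a toy Python illustration. The numbers are entirely fabricated for demonstration; only the direction of the statistic mirrors the reported findings:

```python
# Toy illustration of a negative correlation on made-up survey data.
# These are NOT the Microsoft or Gerlich datasets - just invented scores.
from statistics import mean, stdev

confidence_in_ai  = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # self-rated, 1-10
critical_thinking = [9, 9, 8, 7, 7, 5, 4, 4, 2, 1]   # self-reported score

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(f"r = {pearson_r(confidence_in_ai, critical_thinking):.2f}")
# Prints a value near -1: in this fabricated sample, higher AI confidence
# pairs with lower self-reported critical thinking.
```

A raw correlation like this shows association, not causation - which is also all the surveys above claim.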
Psychologists emphasize the importance of "productive struggle" in learning and skill development. This concept suggests that working through difficult problems builds resilience, confidence, and deep understanding [2]. AI's ability to provide quick answers may short-circuit this crucial process, potentially impacting motivation and engagement in learning and problem-solving tasks.
The impact of AI on education and professional development is a growing concern. Some researchers have reported that tertiary students who used generative AI tools for their essays ultimately performed worse in their examinations [2]. In the professional world, there are worries about junior programmers becoming overly reliant on AI coding assistants, potentially hindering their ability to tackle complex programming challenges independently [1].
Experts suggest several strategies to mitigate the potential negative effects of AI on critical thinking:
Use AI as a thinking partner rather than a crutch, enhancing human intelligence instead of replacing it [1][2].
Prioritize understanding the process of problem-solving over simply obtaining quick answers [2].
Engage in activities that require active thinking and creativity without AI assistance [1].
In educational and professional settings, emphasize the importance of understanding the underlying principles and methodologies, not just the final outputs [3].
As AI continues to evolve, the relationship between human cognition and artificial intelligence will likely become more complex. Josh Bersin, an HR analyst and data guru, offers an optimistic perspective, suggesting that AI could free humans to focus on higher-level tasks, potentially adding more value [3]. However, this optimism is tempered by concerns about the potential for widespread cognitive atrophy if critical thinking skills are not actively maintained and developed.
As we navigate this new landscape, the challenge will be to harness the benefits of AI while preserving and enhancing human cognitive abilities. The goal, as Professor Blashki puts it, is "to create a balanced relationship where AI enhances human intelligence rather than replacing it" [1].