On Wed, 5 Mar, 12:01 AM UTC
2 Sources
[1]
Students must learn to be more than mindless 'machine-minders'
University students have taken to artificial intelligence in the same way that an anxious new driver with a crumpled road map might take to satnav -- that is to say, hungrily and understandably. A survey of UK undergraduates by the Higher Education Policy Institute think-tank shows 92 per cent of them are using generative AI in some form this year compared with 66 per cent last year, while 88 per cent have used it in assessments, up from 53 per cent last year.

What should universities do? My instinct would be to lean in. Tell your students you will be giving the same essay question to a tool such as ChatGPT. They will be marked on how much better their version is than the machine's: how much more original, creative, perceptive or accurate. Or give them the AI version and tell them to improve upon it, as well as to identify and correct its hallucinations. After all, your students' prospects in the world of work are going to depend on how much value they can add, over and above what a machine can spit out.

What's more, studies of AI use at work suggest these editing and supervising tasks will become increasingly common. A Microsoft study published this year on knowledge workers' use of generative AI found the tool had changed "the nature of critical thinking" from "information gathering to information verification", from "problem-solving to AI response integration" and from "task execution to task stewardship".

But like many pleasingly neat solutions to complex problems, mine turns out to be a terrible idea. Maria Abreu, a professor of economic geography at Cambridge University, told me her department had experimented along these lines. But when they gave undergraduates an AI text and asked them to improve it, the results were disappointing. "The improvements were very cosmetic, they didn't change the structure of the arguments," she said. Master's students did better, perhaps because they had already honed the ability to think critically and structure arguments. "The worry is, if we don't train them to do their own thinking, are they going to then not develop that ability?" After the pandemic prompted a shift to assessments in which students had access to the internet, Abreu's department is now going back to closed exam conditions.

Michael Veale, an associate professor at University College London's law faculty, told me his department had returned to using more traditional exams, too. Veale, who is an expert on technology policy, sees AI as a "threat to the learning process" because it offers an alluring short-cut to students who are pressed for time and anxious to get good marks. "We're worried. Our role is to warn them of these short-cuts -- short-cuts that limit their potential. We want them to be using the best tools for the job in the workplace when the time comes, but there's a time for that, and that time isn't always at the beginning," he says.

This concern doesn't just apply to essay-based subjects. A study of novice programmers published in the ACM Digital Library found that students with better grades used generative AI tools smartly to "accelerate towards a solution". Others did poorly and probably gained misconceptions, but maintained "an unwarranted illusion of competence" thanks to the AI.

We might soon see the same patterns in work. The Microsoft study of knowledge workers (and Microsoft is making a huge push to get AI into workplaces) found generative AI tools "reduce the perceived effort of critical thinking while also encouraging over-reliance on AI". Of course, this is nothing new.
In 1983, Lisanne Bainbridge put her finger on the problem in a famous paper called "Ironies of Automation". She argued that humans asked to be "'machine-minding' operators" would find that their skills and knowledge atrophied through lack of regular use, making it harder for them to intervene when they needed to.

In many cases, that has been fine. People embraced satnav and forgot how to navigate properly. The world didn't end. But it won't be fine for everyone to uncritically swallow often-faulty AI output across a vast range of work tasks.

How to avoid this future? As with the programming students, it appears the answer is to know your stuff: the Microsoft study found that people with higher self-confidence -- who knew they could perform the task without AI if they wanted to -- applied more critical thought. The researchers concluded that "a focus on maintaining foundational skills in information gathering and problem-solving would help workers avoid becoming overreliant on AI". In other words, to use the short-cut effectively rather than mindlessly, you need to know how to do it without the short-cut. Universities -- and students -- take note.
[2]
Humanities teaching will have to adapt to AI | Letter
I agree with Prof Andrew Moran and Dr Ben Wilkinson (Letters, 2 March) that cheap and easy-to-use AI tools create problems for universities, but the reactions of many academics to these new developments remind me of the way some people responded to the arrival of cheap pocket calculators in the 1970s. Reports of the imminent death of maths teaching in schools proved exaggerated. Maths teachers had to adapt, not least to teach students the longstanding rule "garbage in, garbage out"; if students had no idea of the fundamental principles and ideas behind maths, they would not realise their answer was meaningless.

Today's humanities teachers are going to have to adapt in similar ways. Our students need to recognise, for example, when AI has harvested such poor-quality information that its responses are inaccurate. But, more importantly, they need to learn how to make their work genuinely stand out in a sea of increasingly generic AI-generated essays, not least because they will need to make their job application letters, reports and written work stand out. AI has been defined as teaching computers to do badly what people do well, and despite recent breakthroughs, AI hasn't changed that much. Genuinely good writing exhibits genuinely human qualities -- such as individuality and empathy. True intelligence is still something people do best.

Jim Endersby
Professor of the history of science, University of Sussex
As AI tools become increasingly prevalent in universities, educators grapple with maintaining academic integrity and fostering critical thinking skills among students.
Recent surveys indicate a significant increase in AI usage among university students. According to the Higher Education Policy Institute, 92% of UK undergraduates are now using generative AI in some form, up from 66% last year. More strikingly, 88% have employed AI in assessments, compared with 53% the previous year [1].
Universities now face the challenge of adapting their teaching and assessment methods. Initial attempts to incorporate AI into coursework have yielded mixed results. Professor Maria Abreu of Cambridge University reported disappointing outcomes when students were asked to improve AI-generated texts, noting that the "improvements were very cosmetic, they didn't change the structure of the arguments" [1].
Educators express concern that overreliance on AI tools may hinder the development of crucial critical thinking skills. Michael Veale, an associate professor at University College London, views AI as a "threat to the learning process" due to its allure as a shortcut for time-pressed students [1].
The impact of AI extends beyond essay-based subjects. A study of novice programmers published in the ACM Digital Library found that high-performing students used AI tools effectively to accelerate problem-solving, while others developed misconceptions and an "unwarranted illusion of competence" [1].
The challenges faced in education mirror potential issues in the workplace. A Microsoft study on knowledge workers' use of generative AI found that these tools "reduce the perceived effort of critical thinking while also encouraging over-reliance on AI" [1].
Professor Jim Endersby of the University of Sussex draws parallels between the current AI situation and the introduction of pocket calculators in the 1970s. He argues that, like maths teachers then, today's humanities educators must adapt their methods to ensure students can critically evaluate AI-generated content and produce work that stands out [2].
Experts emphasize the importance of maintaining foundational skills in information gathering and problem-solving to avoid overreliance on AI. The ability to perform tasks without AI assistance is crucial for effective and critical use of these tools. As Endersby notes, "Genuinely good writing exhibits genuinely human qualities -- such as individuality and empathy. True intelligence is still something people do best" [2].
References
[1] Students must learn to be more than mindless 'machine-minders'
[2] Humanities teaching will have to adapt to AI | Letter