8 Sources
[1]
More and more teachers and students are using AI - even though it might do more harm than good
K-12 teachers and students across the country are increasingly using AI in and out of classrooms, whether it is teachers turning to AI to refine lesson plans or students asking AI to help them research a particular topic. An estimated 85% of K-12 public school teachers recently reported that they used AI during the 2024-2025 school year - often for curriculum and content development. In 2023, 13% of teens said they used ChatGPT to complete their schoolwork; by 2025, 26% said they were using ChatGPT for this purpose. Similarly, 86% of K-12 students shared in 2025 that they have used AI in general. An estimated 50% of students reported that they use it for schoolwork, such as learning more about topics outside of what was taught in class, tutoring on specific subjects, getting help with a homework assignment or asking for college advice.

However, policies and training have not kept pace with how frequently teachers and students are using AI. Only 35% of school district leaders reported in 2025 that they provided students with any AI training, according to the global policy think tank RAND Corporation. Additionally, just 45% of principals reported having school or district policies or guidance on the use of AI in schools, according to these findings.

Another challenge is that students are also using AI for potentially dangerous purposes. There are recent examples of students who self-harmed or died by suicide after they used AI for mental health support. A 2025 study found that when chatbots responded to 60 simulated scenarios posing mental health questions, they sometimes made harmful proposals - such as cutting off all human contact for a month or dropping out of school.

So, is it safe for young students to use AI? Does using AI provide better learning outcomes for students than traditional instruction? Does AI help teachers reduce their workload? The answers to these questions are complicated.
It is not yet clear how AI influences learning in K-12 settings, or when and how it is best for teachers and students to use AI.

Some clear pros

As an associate professor of inclusive teacher education, I'm trying to answer some of these big questions about AI and K-12 education. Some university centers that I've worked with, such as the Center for Innovation, Design, and Digital Learning at the University of Kansas, are conducting research on how AI can be used to support students with learning disabilities. In 2025, 57% of special education teachers said they use AI to help develop individualized plans, often called individualized education programs, for their students with learning disabilities.

I believe there is no doubt that AI can, in some ways, reduce barriers and support students with disabilities. In my own research, for example, my co-authors and I show that AI can help students learn by adapting assignments to meet their personal learning needs and pace. It can also help teachers reduce the time they spend grading or editing assignments. Concerns remain over student privacy and whether AI systems will reinforce bias, but special education teachers are testing the benefits of generative AI.

The missing evidence

Among the broader available research and evidence on AI and K-12 education, some studies from 2019 through 2022 show that AI might help students learn and stay motivated by providing a personalized learning experience. However, the evidence appears less promising when considering how students learn after they use AI and then stop using it. For example, Guilherme Lichand, an economics scholar at the Stanford Accelerator for Learning, found in 2026 that when students use AI and are then told they can no longer use it for their studies, they actually perform worse than those who never used AI. This shows that additional research on how AI influences students' long-term learning and development is necessary.
The Brookings Institution also recently warned in a 2026 AI and K-12 education report that the risks of using generative AI in education overshadow its benefits. These risks include weakened relationships between students and teachers, as well as threats to students' safety. A 2025 report by the nonprofit Center for Democracy and Technology also shows that 71% of K-12 teachers, on average, reported that when students use AI to complete their schoolwork, it is hard for teachers to tell whether the work is the students' own. Similarly, almost two-thirds of parents of K-12 students said in 2025 that AI is weakening important academic skills that their child needs to learn, such as writing, reading comprehension and critical thinking.

Lessons from the past

AI is being introduced to K-12 classrooms faster than evidence and understanding can support. But schools have rushed to incorporate educational technologies into their classrooms before. During the COVID-19 pandemic, for example, schools needed to quickly equip teachers and students with online platforms for remote learning. The rush also challenged educators to learn how to teach effectively and provide individual support for each student - and to ensure that all students, including students with disabilities, could participate in remote learning.

Similarly, not long ago, some educators thought that social media and smartphones would bring the next frontier in education, with the idea that these technologies could increase student engagement. Yet we now know the dangers that both social media and smartphones pose for children.

Slowing down how students use AI in the classroom does not mean rejecting it altogether. I think it means being responsible - especially when there is a good chance children's academic skills, behaviors or emotions are at risk. New evidence on AI and education is coming from scholars like me and my colleagues.
There is little doubt that AI and future technologies are game changers in society and education. I think it is also critical that we slow down and follow the evidence that is available. Speed is a choice, and education deserves intention.
[2]
59% of kids use AI to look up information -- but it could weaken 'critical thinking skills,' says expert
Parents and kids alike expect AI to play a major role in their futures. Seventy-one percent of parents and 60% of kids and teens believe that by the time young people are adults, people will be so dependent on AI -- specifically large language models like ChatGPT and Gemini -- that they won't be able to function without it, according to a new report by Common Sense Media, a nonprofit that helps families make informed decisions about media and technology. In fact, 12- to 17-year-olds are already leaning into AI: 59% use it to search for information and facts, Common Sense Media found. "A lot of kids, including those in the surveyed age group, are turning to AI to help them study for school," says Tiffany Zhu, assistant professor of global ethics and technology at Old Dominion University. "Many are asking AI questions when they are looking for quick information instead of typing questions into a search engine." Whether or not that shift is positive is still unclear. Here's what experts say parents should keep in mind.
[3]
Generative AI in business schools: friend or foe?
Since tools like ChatGPT burst into higher education, debate has focused on two extremes: either students are all committing underhanded academic fraud and plagiarism, or artificial intelligence will magically revolutionise learning. The latest research project I co-authored with Anna Holland, carried out among recent Management graduates in the United Kingdom, suggests something more complicated and surprisingly more human.

Generative AI tools such as ChatGPT are increasingly used in business and management education for tasks like analysing cases, brainstorming ideas and drafting reports, improving efficiency and personalising learning but also raising concerns about academic integrity and assessment design. AI literacy and the ethical use of algorithmic tools are becoming essential managerial skills. In a qualitative study focusing specifically on business students, we explored how they actually used ChatGPT in their final year and how they felt about it. To capture these experiences, we conducted 15 in-depth semi-structured interviews and thematically analysed them, focusing on how students used ChatGPT in their studies and how they perceived its impact on their academic work and their peers' behaviour.

How much of a 'no brainer' is ChatGPT for research and assignments?

The students we interviewed described three overlapping concerns, which together help explain both their enthusiasm and their unease.

Immediacy: the practicality of a 24/7 study buddy

Students were open about the fact that ChatGPT had become part of their ordinary study toolkit, along with search engines and lecture recordings, but faster and more conversational by comparison. They used this widely adopted large language model-driven technology to summarise articles, generate examples, explain complex theories in simpler language and help plan assignments. Several described it as a way to "get unstuck" when staring at a blank page.
What mattered most was not just usefulness but speed and emotional reassurance. Unlike professors' office hours or e-mail, AI is instantly available and without judgement. Some interviewees said they used ChatGPT to check whether they had understood a concept correctly before writing it up in their own words, or for suggestions on how to structure an essay. For many, the technology felt like having a private tutor who never sleeps. But that convenience also raised deeper questions: if AI can always "rescue" you at the last minute, are you really learning, or just producing?

Equity: who gets the 'good' AI?

The students who took part in our study didn't simply worry about whether AI was allowed. They also worried about who could access the most powerful tools. Those who paid for smarter, premium versions felt they were getting more accurate, more detailed support than peers sticking to free tools. Some students saw this as just another form of educational inequality. Others were uneasy that success on assessments might increasingly depend not only on whether you could pay for better algorithms but also on whether you had the necessary skills to prompt the system for optimal results. Being young does not automatically make students digitally native.

At the same time, several interviewees argued that AI could make higher education fairer. Students with dyslexia, ADHD or other conditions described using ChatGPT to help with planning, time management or turning rough notes into clearer sentences. International students said it helped them write in more polished academic English. For them, AI felt less like "cheating" and more like "levelling up" - a reasonable adjustment or language support. This tension, between AI as a leveller and AI as a new source of advantage, makes equity central to how students experience these tools.
Integrity: drawing the line in a grey area

All the students we spoke to knew that "copy-pasting from ChatGPT" into an assignment would be considered cheating. But they also described a wide grey area where university rules felt vague or inconsistent. Was it acceptable to ask ChatGPT for feedback on a draft paragraph? To suggest alternative headings? To generate a list of arguments they then follow up on themselves by consulting the original sources ChatGPT provided? Different courses, and even different lecturers, gave different answers, leaving students unsure about what counts as legitimate assistance versus academic misconduct. This uncertainty made some students anxious about being accused of misconduct even when they believed they were acting honestly. Group work added another layer of risk: several participants feared that one team member might lean heavily on AI, triggering plagiarism-detection software or an investigation that could affect the whole group.

Is the 'AI bias' off-putting for graduate employers?

Beyond university rules, students worried about how employers would view their qualifications. A recurring theme was fear that future recruiters might dismiss recent graduates' work as "AI-generated", devaluing the years of effort they had invested. Even those who used ChatGPT sparingly felt that their cohort might be seen as "AI-made", regardless of individual behaviour. This is an interesting finding, as little empirical work has been done on this aspect. There is currently little to no evidence that employers broadly distrust university degrees because of GenAI. The evidence we have to date suggests that hiring managers are increasingly sceptical of graduates' written application work, but simultaneously seek graduates with AI skills. Still, the blurred relationship between students' work and their ability may affect how credentials signal competence.
Employers have already increasingly turned to skills verification rather than credentials alone.

What universities should do next

Our findings suggest that universities need to move beyond simple messages about banning or embracing generative AI. Students are already integrating these tools into their everyday study. The question is whether institutions will help them do so transparently, equitably and with academic integrity.

Firstly, rules about AI use need to be clearer and more consistent. Rather than broad warnings about "misuse of ChatGPT", students need concrete, discipline-specific examples of what is allowed, and why. This includes acknowledging that some uses (for accessibility or language support, for instance) may be legitimate and even desirable.

Secondly, assessment design should focus on process as well as product. Students could be asked to explain how they used AI in an assignment, reflect on its limitations and show the steps they took to verify information. This makes AI use visible and accountable rather than something to hide: students would state clearly where AI was used in a piece of work, much as they cite references in a footnote.

Thirdly, universities should consider equity explicitly. If some students can buy access to far more powerful tools than others, that has implications for fairness. Institutions could respond by providing standardised AI tools and teaching all students how to use them critically, or by redesigning assessments so that success depends less on access to premium systems. In its latest Digital Education Outlook report, Exploring Effective Uses of Generative AI in Education, the OECD urges education stakeholders to encourage "inclusive, trustworthy and meaningful uses of GenAI in education" in alignment with educational objectives.

Listening to students' concerns about GenAI
The students in our study were not reckless rule-breakers or naive digital natives. They were thoughtful about the benefits and risks of AI, and keen to protect the value of their degrees. If universities ignore this perspective, they risk sending out the message that "integrity is only about catching cheats", rather than about building trust. If, instead, they engage with students' real experiences of immediacy, equity and integrity, generative AI could become an opportunity to rethink what meaningful learning and fair assessment in higher education look like in the age of AI, rather than a threat that quietly undermines them.
[4]
Almost 80% of Australian uni students now use AI. This is creating an 'illusion of competence'
In Australia, artificial intelligence is becoming a near-universal feature of education. As of 2025, nearly 80% of university students reported using AI in their studies. Overseas, reported rates are even higher: this year, a UK survey of undergraduates found 94% were using it to help with assessed work. This has ushered in widespread concerns about students using AI to cheat on their work and exams. But in a new report with my colleague Leslie Loble, we argue there is a far greater risk. A growing body of evidence suggests that using AI can undermine the effort required for sustainable, deep learning. This so-called "cognitive offloading" from human to AI is especially risky for younger students, as they are still building their basic knowledge and skills.

The 'performance paradox'

Our report highlights a phenomenon known as the "performance paradox": students' short-term performance on tasks may improve with AI, but their long-term learning is harmed. An example comes from a 2025 randomised experiment with high school students in Turkey using an AI assistant that could tutor them through answers. In classroom tasks, they appeared to solve maths problems more effectively using AI. However, their actual learning fell off a cliff as soon as the AI was removed in an assessment. These findings suggest that while AI can boost immediate results, it can simultaneously diminish the durable knowledge that is the true goal of education. In the meantime, students can overestimate how much they have learned. AI gives them the illusion of competence.

AI is so easy to use

Generative AI can certainly provide clear, polished responses to students. Research tells us this can signal to the learner that deep mental engagement is no longer necessary. This same research also shows students are then less likely to plan, monitor and revise their work, because the tool is doing this for them.
This situation creates a cycle where the ease of AI-generated responses erodes a student's actual knowledge base, making them more dependent on the tool and less able to judge its accuracy in the future. Critical thinking is not a generic skill - it is deeply intertwined with knowledge. In other words, it is difficult to critically analyse a response about the second world war (is it biased? Are the dates wrong?) if you don't know much about the different participants and their perspectives.

How can we respond?

To address this, universities and teachers must move from treating AI as an "answer oracle" to using it as a partner in thinking and learning. There are two key ways to do this.

First, use AI to offload extraneous tasks - such as checking grammar or formatting citations. This frees up mental space to concentrate on learning, without relying on the AI to tell students what or how to think.

Second, use AI as a "cognitive mirror". Instead of giving answers, the AI asks clarifying questions. This forces the student to engage in explanation, which helps them build lasting learning. For example, if a student provides a vague argument in an essay, the AI might ask them to define their core assumptions more specifically.

Most importantly, the development of AI tools must focus on building the teacher's capacity, not just the students' immediate performance. As powerful as AI might be, humans learn better with and from other humans. By giving AI tools to expert teachers to help them increase their capacity, we ensure technology bolsters student learning. For example, AI could be used to analyse student performance data in real time to highlight which small groups or individuals most urgently need a human intervention.

What is this all for?

Education systems need to help students understand and be comfortable with the fact that long-term learning takes time and needs effort.
If AI is used to replace the struggle of learning, there is a risk of the erosion of cognitive skills. The goal here is not to protect students from AI but to prepare them to live and work with it.
[5]
AI is already disrupting classrooms around the world
A version of this article originally appeared in Quartz's AI & Tech newsletter. When a private school in San Francisco announced it had cracked the code on learning, with two hours of AI-assisted academics a day and students scoring in the top percentiles nationally, it got a lot of attention. The school costs more than any other private institution in the city. Its student body is drawn almost exclusively from elite tech families. And its AI, it turns out, is mostly used to track how quickly students are moving through material, a function that adaptive learning software has performed in public schools for years. That gap between the headline and the reality is, in miniature, the story of AI in K-12 education right now.

American schools are under pressure from every direction. Test scores still haven't fully recovered from the pandemic. Teachers are leaving the profession. Budgets are tight and getting tighter. Into that context arrives a technology that promises to save time, personalize learning, and prepare students for an economy that will supposedly demand AI fluency. It's not hard to see why schools are reaching for it.

Iceland, by contrast, is running a cautious national pilot in which several hundred teachers are experimenting with AI for lesson planning while students aren't involved at all, out of concern that overreliance could hollow out the learning process. Estonia has gone further, building a national AI literacy program that specifically modified ChatGPT so it responds to student queries with questions rather than answers. In the United States, Microsoft and OpenAI have collectively committed tens of millions of dollars to teacher training through the country's two largest teachers unions. In Florida alone, more than 100,000 high schoolers now have access to Google's Gemini chatbot through their school districts.
The Trump administration has encouraged all of this, launching a Presidential AI Challenge that invites K-12 students to build AI projects addressing community problems. But even in enthusiastic districts, students were learning about the contest two months after it launched. Many districts, including the two largest in California, had no plans to participate. The challenge offers cash prizes but no additional funding, meaning schools already running robust AI programs are best positioned to win. The free AI tools most accessible to under-resourced schools also tend to be the least reliable, according to a 2025 Brookings Institution report. This may be, the report's authors suggested, the first time in education technology history that schools will have to pay more for more accurate information.

The Brookings report described a feedback loop in which students who offload thinking to AI do less of it themselves, and over time that atrophy compounds. "It's easy. You don't need to use your brain," one student told researchers. A separate study from Microsoft and Carnegie Mellon found that popular chatbots may actively diminish critical thinking skills. AI systems are also designed to be agreeable, which turns out to be a poor model for the friction that builds social and emotional resilience.

There are genuine benefits. Teachers report real time savings. AI can help students learning a second language, support those with learning disabilities, and assist educators in tailoring instruction to students at different levels. It's also worth noting that many of the tech executives pushing AI into classrooms have chosen low-tech or no-screen schools for their own children. Now they're offering something considerably more intensive to everyone else's.

Most deployments are outpacing the research by a wide margin. States are largely leaving districts to develop their own policies. Districts are largely leaving teachers to figure it out.
And teachers, many of whom received their first real AI training at a Saturday workshop funded by the companies whose products were being demonstrated, are doing the best they can.

The Estonian model, skeptical and government-led and focused on AI literacy rather than AI adoption, has drawn interest from researchers globally. But it required national coordination, political will, and a negotiation with tech companies that most school systems aren't positioned to conduct.

Having a tool that answers your questions patiently and at your own pace has real value. But a lot of what school does isn't about getting the right answer. It's about learning to work with other people, sit with disagreement, and figure things out in community. That's harder to automate, and probably shouldn't be automated at all.
[6]
Professors Say AI Is Destroying Their Students' Ability to Think
Professors are fighting an uphill battle against the intrusion of AI into education, and it's forcing them to rethink how they instruct their students, many of whom have already become hopelessly dependent on the tech. "It's driving so many of us up the wall," one told The Guardian in a new piece that interviewed more than a dozen professors in the humanities. "I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential," said Dora Zhang, a literature professor at UC Berkeley. "What is it doing to us as a species?"

Alas, students looking for an easy "A" may not be interested in philosophical inquiries into how AI is fundamentally changing how we interact with the world and with each other -- and indeed, according to a burgeoning body of research, how our brains work. One canary in the coal mine comes from a Carnegie Mellon study published in early 2025, which found that knowledge workers who regularly used and trusted the accuracy of AI tools were losing their critical thinking skills. An earlier study found a link between students who relied on ChatGPT and memory loss, procrastination and worsening academic performance. And an MIT study that performed EEG scans on subjects asked to write essays with and without ChatGPT found that AI users had the lowest levels of cognitive engagement during the tasks.

Working in the trenches, most professors, especially in the humanities, probably didn't need formal research to tell them what those studies found, when they could easily intuit it by interacting with their pupils. Michael Clune, a literature professor and novelist, lamented to The Guardian that many students are now "incapable of reading and analyzing, synthesizing data, all kinds of skills."
Clune's school, Ohio State University, recently required all students to enroll in "AI fluency" courses "across every major," ostensibly to prepare them for a world dominated by the tech. Clune was critical of the push. "No one knows what that means," he told the newspaper. "In my case, as a literature professor, these tools actually seem to mitigate against the educational goals I have for my students."

OSU may be the most egregious example of capitulating to the whims of Big Tech, but the AI industry has its tendrils all across education. Companies like OpenAI and Microsoft have poured tens of millions of dollars into teachers' unions, providing training on how to use their AI systems. They've also partnered with numerous institutions to provide their students with free access to their AI tools. Duke University, after entering such a partnership with OpenAI, introduced its own AI tool called "DukeGPT." Abroad, xAI founder Elon Musk partnered with the government of El Salvador to launch the "world's first nationwide AI-powered education program," providing his Grok chatbot to a million students across thousands of public schools.

"These companies are giving these technological tools away partly because they're hoping to addict a generation of students," Eric Hayot, a comparative literature professor at Penn State, told The Guardian. "This is part of every single class I teach now, talking to students about why I'm not using AI, why they shouldn't use AI."

But pedagogues aren't taking this sitting down. Some are now using oral interrogations and requiring handwritten notebooks, they told the paper. AgainstAI, a faculty-run initiative that advises professors on how to work around AI use, recommends measures like oral exams, requiring students to show pictures of their handwritten notes, and paper journals. Some even dare to be optimistic: several said they noticed more students pushing back or expressing more cynicism about AI tools.
"I think the current crop of gen Z students are seeing that they are the guinea pigs in this giant social experiment," Zhang said. "There's kind of defeatism, this idea that there's no stopping technology and resistance is futile, everything will be crushed in its path," Clune added. "That needs to change... We can decide that we want to be human."
[7]
America's math and reading scores tanked after schools ditched textbooks for screens -- and AI could worsen the brain rot | Fortune
At the turn of the century, educational technology initiatives put laptop keyboards at the fingertips of U.S. schoolchildren. Now, 25 years later, the next generation of students has turned to AI -- and education experts warn unrestricted use of the technology could atrophy critical thinking skills.

AI use among students has become ubiquitous following the 2022 release of ChatGPT. More than half of teenagers are using the technology for schoolwork, a Pew Research Center report released last month found. Of the nearly 1,500 parents and teens interviewed for the survey, 57% of teen students use AI to search for information, and 54% use it for schoolwork. While access to AI chatbots makes homework as easy as plugging a question into one's phone, the frictionless retrieval of information using AI has raised concerns among educators: rather than aid in learning, could AI actually hinder the process?

A Brookings Institution study published in January laid bare anxieties around the potential harms of AI in the classroom. Analyzing data from interviews and focus groups with more than 500 educators, parents, and students across 50 countries, as well as from more than 400 studies, the researchers found that at this point, "risks of utilizing generative AI in children's education overshadow its benefits." The report gave credence to early research -- including a February 2025 Microsoft study -- finding AI use was associated with worse judgment and critical thinking skills. "The cognitive offloading, and the cognitive decline that's associated with that, the decline in critical thinking, and just even reading and writing and knowledge of basic facts -- I absolutely believe that" to be the case, Mary Burns, an education consultant and co-author of the Brookings Institution study, told Fortune.
Computer use in schools has come under recent scrutiny following Congressional testimony in January from neuroscientist Jared Cooney Horvath, who noted, citing Program for International Student Assessment data, that Gen Z is the first generation in modern history to be less cognitively capable than their parents. He blamed unfettered access to classroom technology, noting a stark correlation between lower standardized test scores and more screen time in school. A 2014 study surveying 3,000 university students found that two-thirds of the time students spent on their screens was on off-task activities.

"This is not a debate about rejecting technology," Horvath said in his written testimony. "It is a question of aligning educational tools with how human learning actually works. Evidence indicates that indiscriminate digital expansion has weakened learning environments rather than strengthened them."

Horvath, author of the 2025 book The Digital Delusion: How Classroom Technology Harms Our Kids' Learning -- and How to Help Them Thrive Again, told Fortune the rise of EdTech was a result of tech companies creating a narrative around the need for screens in the classroom to bolster learning. The push for computers in schools began in 2002, when Maine became the first state to introduce a statewide program providing laptops to schoolchildren in the classroom. Following a slow rollout, Google began reaching out to educators to test its low-cost Chromebook with free Google apps, and asked teachers and administrators to promote the product. In partnership with schools, Google's Chromebook became commonplace in classrooms, accounting for more than half of digital devices sent to schools in 2017.

There have been more than 100 years of evidence showing the failures of automated learning, Horvath argued, beginning with the 1924 invention of the "teaching machine" by Ohio State University psychology professor Sidney Pressey.
Students learned to answer the questions the machine would generate when fed a piece of paper, but were unable to generalize that knowledge outside the device. "Kids would be very good so long as they were using the tool, but as soon as they went off the tool, they couldn't do it anymore," Horvath said.

Burns, the education consultant, said AI was, in some ways, a natural extension of the argument tech companies have made about the need for computers in school: that students are able to learn at their own pace, or seek out information of interest to them to initiate their own learning. "[Tech] companies keep talking about, AI is personalizing learning," she said. "I don't think it's personalizing learning. I think it's individualizing learning. There's a difference there, and that's kind of a classic carryover from educational technology."

According to Horvath, student AI use is not conducive to learning because it mirrors the failures of the 20th century "teaching machines." Students' learning was individualized -- they answered questions from the device at their own pace and independently from other students -- but they were unable to synthesize knowledge taught outside the device. Similarly, Horvath said, giving AI to students without clear instructions or parameters teaches students to rely on the device, not their own critical thinking. "The tools experts use to make their lives easier are not the tools children should use to learn how to become experts," Horvath said. "When you use offloading tools that experts use to make their lives easier as a novice, as a student, you don't learn the skill. You simply learn dependency."

Burns -- a proponent of EdTech -- said it's futile to eschew the technology altogether. The Brookings Institution study found that despite educators' real fears that students will use AI to cheat, teachers themselves are using AI to create lesson plans. Data on AI in the classroom is limited, but there are benefits, she added.
For English language learners, for example, teachers can use AI to adjust the Lexile level of a reading passage. "To say that technologies are a failure is not true," Burns said. "To say technology is a mixed bag is true."
[8]
A writing professor's new task in the age of AI: Teaching students when to struggle
I was early to the generative AI wave in higher education: I was among the first writing professors to publish in an academic journal about generative AI and critical thinking, and I am now part of an interdisciplinary team at Babson College thinking about how AI is impacting education, industry and society. But that does not mean I am all in on AI - nor am I anti-AI. I am pro-learning. As my co-authors and I argue in a forthcoming book on realizing the promise of higher education, even the most powerful tools are only as good as the learning environments we build around them. So what does "getting learning right" look like in the age of generative AI? It involves a lot of experimentation and leaning in with students as a co-learner when I don't have all of the answers, while remaining staunchly committed to sharing my expertise in writing, critical thinking and learning. I also hope that they trust me enough to follow my lead and persevere when the work becomes difficult.

From hope to grief

Navigating the rise of generative AI seemed easier to me in the earlier days. In spring 2023, for example, soon after ChatGPT went public, I asked students to use it to research their favorite musical artist and then fact-check the results as part of a unit in my senior-level social media class. The responses sounded polished and confident, but they were often wrong. Album dates were scrambled. Tours were invented. At one point, a student threw up her hands and shouted, "It lies!" The room erupted. The "lies" were especially apparent with less popular artists, about whom less had been written. "How might that translate to other knowledge areas?" I asked. They were quick to start thinking about whose voices might not make the cut in a different scenario. While this was a promising start, by fall 2023 I found myself starting to grieve the passing of the pre-AI-everywhere world. Once again, I leaned in with my students, now in a sophomore-level research writing class.
In their proposals, I included a new required section called "Be Better Than a Robot" - the gist being that if ChatGPT could write your research paper, what was the point of us spending weeks on it? I asked: Where would your own work - your own human thinking - need to come in to create a tiny piece of new knowledge in the world? We practiced primary research, we used time during class for reading and annotating, and I extended deadlines to account for the rigor we were undertaking. AI usage was discouraged but not outright banned: If used, careful and explicit descriptions of exactly how were required, and I even gave examples of things like brainstorming academic titles as a potential option. While most of the final research projects did not seem AI-generated, the few that did caused me to spiral - as if it were my fault that I didn't come down harder on AI use when I was trying to stay neutral and understand how we could use it as a tool and not as a replacement.

Cognitive blind spots

Since those early days in 2023, discussions around college students' use of AI have only become more fraught and complicated. There are no easy answers, and there are a lot of fears about overreliance, loss of learning and even the value of a college degree. There are also plenty of ethical concerns that go beyond academic integrity, such as the environmental impact of AI and concerns over data and privacy. But AI usage is not slowing down. Recent data from the Pew Research Center shows that more than half of teenagers are turning to AI for help with finding information and getting help with schoolwork. By the time these students arrive in my classes, many have already developed habits around these tools, and these habits may or may not serve their learning. For me, that's not an argument for banning AI in the classroom, but rather an argument for taking it seriously.
But here's the honest difficulty: When students use AI, they often can't tell when they're shortcutting their own thinking. A study published in late 2024 in the British Journal of Educational Technology found that students using ChatGPT improved their essay scores in the short term but showed no meaningful gains in knowledge. Moreover, they were prone to what the researchers called "metacognitive laziness," meaning a dependence on the tool that undermined their ability to self-regulate and engage deeply in learning. This is a result of cognitive offloading.

Teaching discernment

At this point, I feel my role is shifting from neutral observer or co-learner to something more like a guide with a point of view. I know what rigorous thinking looks like in my discipline. I know the difference between a paper that has moved through genuine intellectual struggle and one that has merely been assembled. My job is to make that difference visible to students who may not yet have the experience to see it themselves. So, yes, there are moments in my writing courses where I ask students to write without AI. Not as a purity test, although I could see it used that way, and not because I believe they'll go on to spend their careers avoiding it, but because understanding what AI does to your thinking first requires knowing what your thinking can do without it. This matters especially now, as many college students I meet arrive already anxious, already performing, already optimizing for the grade rather than the learning. Many have spent years learning to produce the right answer rather than to wrestle with hard questions. Before they can develop discernment about any tool, they need something more foundational: a sense of their own thinking as worth trusting. In practice, this looks like drafting with AI and without it, comparing versions, and being asked to justify choices out loud. It looks like noticing when the tool accelerates routine work and when it flattens complexity.
Like many faculty navigating this moment, I find myself in what Auburn University professors Christopher Basgier and Lydia Wilkes describe as an "unsettled middle," neither fully embracing nor refusing the technology, but doing the uncomfortable work of engaging with it critically. My students, I've found, often end up in a version of that same uncertain space. Learning to sit with that uncertainty - to tolerate the slowness and mess of thinking things through rather than reaching for the frictionless answer - is where discernment begins. If students are going to continue encountering these tools throughout their lives, then ignoring that reality does them no favors. My responsibility is to help them develop the judgment to decide when a shortcut is strategic and when it undermines their own thinking. That is pro-learning.
An estimated 85% of K-12 teachers and nearly 80% of university students now use AI tools like ChatGPT for schoolwork and lesson planning. But researchers warn this rapid AI integration in classrooms is creating an 'illusion of competence' that weakens critical thinking skills and undermines long-term student learning, even as policies and training lag far behind adoption rates.
AI in education has reached a tipping point. An estimated 85% of K-12 public school teachers reported using AI during the 2024-2025 school year, primarily for curriculum and content development [1]. Among students, the numbers are equally striking. Nearly 80% of Australian university students now use AI in their studies, while a UK survey found 94% of undergraduates were using it to help with assessed work [4]. In the United States, 26% of teens reported using ChatGPT to complete schoolwork in 2025, up from just 13% in 2023 [1]. An estimated 86% of K-12 students have used AI in general, with 50% reporting they use it for schoolwork such as learning about topics outside class, tutoring on specific subjects, or receiving help with homework assignments [1]. Among 12- to 17-year-olds specifically, 59% use AI to search for information and facts, according to Common Sense Media.
Source: The Conversation
The rapid AI integration in classrooms has triggered alarm among education researchers who warn of serious risks to student learning. A phenomenon known as the "performance paradox" reveals a troubling pattern: while students' short-term performance on tasks may improve with AI assistance, their long-term learning suffers significantly [4]. A 2025 randomized experiment with high school students in Turkey demonstrated this effect clearly. Students using an AI assistant appeared to solve math problems more effectively in classroom tasks, but their actual learning collapsed as soon as the AI was removed during assessment [4]. This cognitive offloading from human to AI creates what researchers call an "illusion of competence," where students overestimate how much they have learned because the tool provides clear, polished responses that signal deep mental engagement is no longer necessary [4]. Almost two-thirds of parents of K-12 students said in 2025 that AI is weakening important academic skills their children need to learn, including writing, reading comprehension, and critical thinking [1]. The Brookings Institution warned in a 2026 report that the risks of using generative AI in education overshadow its benefits, with students who offload thinking to AI doing less of it themselves, creating a compounding atrophy over time [5]. As one student told researchers, "It's easy. You don't need to use your brain" [5].
Despite widespread adoption, policies and teacher training have not kept pace with how frequently students and educators use AI. Only 35% of school district leaders reported in 2025 that they provided students with any AI training, according to the RAND Corporation [1]. Additionally, just 45% of principals reported having school or district policies or guidance on AI use in schools [1]. This lack of AI policies in schools leaves students navigating a vast gray area. Business students interviewed for a UK study knew that copying and pasting from ChatGPT would be considered cheating, but described widespread confusion about what constitutes legitimate assistance versus academic misconduct [3]. Different courses and lecturers gave different answers about whether students could ask ChatGPT for feedback on draft paragraphs or suggestions for alternative headings [3]. An average of 71% of K-12 teachers reported that when students use AI to complete schoolwork, it becomes difficult to determine whether student work is their own [1]. States are largely leaving districts to develop their own policies, districts are leaving teachers to figure it out, and many teachers received their first real AI training at Saturday workshops funded by companies whose products were being demonstrated [5].
Challenges to academic integrity extend beyond plagiarism detection. The shift raises fundamental questions about equity and access. Students who pay for premium versions of AI tools like ChatGPT feel they receive more accurate, detailed support than peers using free versions [3]. Some students view this as another form of educational inequality, where success on assessments increasingly depends on whether you can afford better algorithms and possess the necessary skills to prompt systems for optimal results [3]. The Brookings Institution noted this may be the first time in education technology history that schools will have to pay more for more accurate information, with the free AI tools most accessible to under-resourced schools tending to be the least reliable [5]. However, some students argue AI can make education fairer. Students with dyslexia, ADHD, or other conditions described using ChatGPT to help with planning, time management, or turning rough notes into clearer sentences, while international students said it helped them write in more polished academic English [3]. Special education teachers are testing benefits too, with 57% reporting in 2025 that they use AI to help develop individualized education programs for students with learning disabilities [1].
The long-term educational impacts of AI remain deeply uncertain. Research from the Stanford Accelerator for Learning found that when students use AI and then are told they can no longer use it for studies, they actually perform worse than those who never used AI, demonstrating that additional research on how AI influences students' long-term learning and development is necessary [1]. Studies from 2019 through 2022 suggested AI might help student learning and motivation through personalized learning experiences, but the evidence appears less promising when considering how students learn after they stop using AI [1]. The risks of AI in education extend to student safety as well. Recent examples include students who self-harmed or died by suicide after using AI for mental health support, and a 2025 study found chatbots responding to 60 simulated mental health scenarios sometimes made harmful proposals such as cutting off all human contact for a month or dropping out of school [1]. A study from Microsoft and Carnegie Mellon found that popular chatbots may actively diminish critical thinking skills, and AI systems designed to be agreeable turn out to be poor models for the friction that builds social and emotional resilience [5]. Seventy-one percent of parents and 60% of kids and teens believe that by the time young people are adults, people will be so dependent on AI that they won't be able to function without it.

Global approaches vary widely. Estonia built a national AI literacy program that modified ChatGPT to respond to student queries with questions rather than answers, while Iceland is running a cautious pilot where teachers experiment with AI for lesson planning but students aren't involved at all [5]. In the United States, Microsoft and OpenAI have committed tens of millions of dollars to teacher training through the country's two largest teachers unions, and in Florida alone, more than 100,000 high schoolers now have access to Google's Gemini chatbot through their school districts [5]. Experts suggest universities and teachers must move from treating AI as an "answer oracle" to using it as a partner in thinking, offloading extraneous tasks like checking grammar while using AI as a "cognitive mirror" that asks clarifying questions to force students to engage in explanation [4]. Assessment design must evolve accordingly, and student privacy concerns remain unresolved as most deployments are outpacing research by a wide margin [1][5].