7 Sources
[1]
Nearly 60% of Teens Believe Their Peers Use AI to Cheat at School, Survey Finds
Kids think AI is alright, according to survey data the Pew Research Center published Tuesday. They're using it for a wide range of schoolwork tasks -- including cheating. Teenagers seem to have a more optimistic view of AI compared to adults. Only 17% of US adults felt AI would have a positive impact over the next 20 years, Pew found in a study last year. For teenagers (ages 13 to 17), that number goes up to 36%. Perhaps unsurprisingly, adults with significant experience using generative AI tools are the most optimistic, at 51%. Many of today's debates about how AI should be used are reflected in teens' perspectives. Pew's survey included anonymized quotes from teen respondents. One teen who viewed AI positively said it will "meet the needs of almost everything." Another said AI will automate mundane tasks, giving people more time to do what they really want -- a common pro-AI argument. Those who viewed AI negatively raised concerns about its potential harm to the environment, job opportunities and creativity. Another respondent put it more bluntly: "It destroys young people's minds and brains." These debates are especially relevant to teens, who may be using AI tools like chatbots more often than their parents realize. Pew found that 51% of parents believe their teen uses chatbots. When teens were asked directly, that figure jumped to 64%. Like any new technology, there's a lot of concern about how teens and their undercooked, still-developing brains are using AI. Researchers and educators worry that students are at risk of stalling their learning development if they outsource their critical thinking to AI tools. But that risk varies with use; AI isn't an inherently evil threat to teens, but it can become one if misused. This is what we know so far about how teens are actually using AI -- the good and the bad. 
How teens are using AI in school

Teenagers are somewhat split on using AI tools to aid with schoolwork; 45% say they don't use chatbots for homework help, while 54% say they do use them to some degree. The biggest AI uses are around information gathering. Teens are using AI to search for information (57%) and for help researching specific topics (48%). This makes sense, given that chatbots excel at information scouring and summarization. Teens are also using AI to help solve math problems (43%) and to edit writing assignments (35%). But like any tool, some teens are using it to cut corners and avoid doing the work. Pew found that one in 10 students uses chatbots to do most or all of their schoolwork. And nearly 60% of teens believe that their peers are regularly using AI to cheat at school. "People rely too much on AI to do school work, ask basic questions, etcetera," one teen responded in the Pew survey. Tech companies have heavily advertised their AI products to students, especially those in college. ChatGPT, Perplexity and Claude each have advanced modes dedicated to research. There's an ongoing debate about the best ways to implement AI in education -- balancing the desire to teach students how to best utilize the new tech without letting them entirely run amok.
[2]
Experts examine AI's role in learning, equity, and creativity
Experts also encouraged educators to prioritize human connections over technology in student support. Last week, Stanford Institute for Human-Centered AI and the Stanford Accelerator for Learning convened educators, researchers, technologists, policy experts, and more for the fourth annual AI+Education Summit. The day featured keynotes and panel discussions on the challenges and opportunities facing schools, teachers, and students as AI transforms the learning experience. At the summit, several themes emerged: AI has created an assessment crisis - student projects no longer indicate a strong learning process; schools are awash with too many AI products and need better evaluations and sustainable adoption models; AI's benefits aren't equitable; AI literacy is a non-negotiable; human connection is irreplaceable. Read a few of the highlights from the Feb. 11, 2026 event, and watch the full conference on YouTube.

AI's inequitable impact

AI amplifies whatever educational foundation already exists, said Wendy Kopp, founder of Teach for All. In mission-driven schools with strong pedagogy, AI becomes a powerful tool for teachers and learners. But without strong pedagogy and guidelines, the technology becomes a distraction. Miriam Rivera, of Ulu Ventures, said a critical distinction emerges between consumption and creation of AI. In well-resourced schools, she said, students often learn to create with technology (3D printing, coding), while in less-resourced schools, students merely consume it. Both panelists said that equity-focused teachers and students from marginalized communities must be at the forefront of designing AI applications, not just receiving them. Dennis Wall, a Stanford School of Medicine professor, illustrated one way this might look. His team is developing a gamified framework to support children struggling with social communication skills.
His lab is co-designing these resources with the teachers, therapists, and parents who will use them, ensuring the tools are accessible, engaging, and informed.

AI literacy is a must-have

Education has long assumed that strong products (homework, summative tests, problem sets) indicate strong learning processes, said Mehran Sahami, a Stanford School of Engineering professor. AI has broken this assumption. Students can now generate impressive products without engaging in meaningful learning. This directs educators to focus on assessing and supporting the actual learning process rather than just evaluating end products. Moreover, we can't treat AI solely as a tool. Students need a systematic curriculum on AI. Sahami proposed a progression: introduce what AI is; teach about hallucinations and bias; show how to verify AI outputs; teach advanced techniques like prompting. Without this structured approach, students teach themselves - and 70-80% use AI to short-circuit learning rather than enhance it. Mike Taubman, a teacher at North Star Academy in Newark, N.J., developed an "AI driver's license" curriculum that maps the adolescent rite of passage of getting a driver's license onto AI literacy. The goal is to put students in the driver's seat, not the passenger seat, when it comes to AI. The four-part curriculum includes choosing a destination (students learn to ask what they want of AI); learning how to drive (see how these tools work and what it means to prompt, develop agentic workflows, etc.); opening the hood (understand their limitations and risks); and defining the rules of the road (decide what AI should and shouldn't do).

Understand AI's learning harms

Guilherme Lichand, assistant professor at Stanford Graduate School of Education, studied AI's impact on creativity for middle school students in Brazil. He compared AI assistance with guardrails (if students asked for 10 words, the AI would give only 3) against no assistance across creativity tasks.
Students with AI assistance performed better on the task while they had the tool. But when assistance was removed within the same test, the advantage disappeared - suggesting no immediate positive transfer. While that finding isn't surprising, he said, the results from a follow-up creative task were more concerning:

* Students who never had AI performed best.
* Students with continued AI or new AI access performed slightly worse (not statistically significant).
* Students who lost AI access after having it performed dramatically worse - four times worse than their initial advantage.

This wasn't just about missing the tool - students had less fun and began believing AI was more creative than they were, he said, suggesting AI damaged their creative self-concept.

A "too many pilots" problem

Today we have no shortage of AI products, said Stanford Graduate School of Business Professor and HAI Senior Fellow Susan Athey, but we lack effective implementation and adoption. Schools and districts are slow to adopt new tools because of historical software lock-in and the opportunity costs of training teachers on systems that may fail. Athey also noted a "teaching to the test" problem for developers. If teachers spend more time on an interface, does that mean it's good and they're deeply engaged, or does that mean it's terrible and they're spending time trying to make it work? Education tools need multifaceted measurement approaches: human review, AI "guinea pigs" (simulated students to test products before real children do), and careful evaluation of what's actually being measured. She advocated for digital public goods like evaluation tools, testing frameworks, and validated AI student simulations that could be developed by universities and philanthropy to create robust measurement infrastructure the whole sector can use.
Never replace real relationships

Nearly half of all generative AI users are under 25, said Amanda Bickerstaff, CEO of AI for Education; that's over 300 million active monthly users of ChatGPT alone who are under 25. Students use AI more for mental health and well-being - seeking connection, support, and understanding - than for schoolwork. Bickerstaff warned about cognitive offloading, mental health offloading, and even "belief offloading," where AI fundamentally shapes how people think, with just four or five chatbot makers having outsized influence on billions of users. Because of that, she said, we must equip people with knowledge, skills, and mindsets to understand when and how to use AI and, crucially, when not to use it. The most vulnerable, according to new research from Pilyoung Kim, a visiting scholar at the Stanford Accelerator for Learning, professor of psychology, and director of the Center for Brain, AI, and Child (BAIC) at the University of Denver, are young people lacking human connections. She asked over 260 middle school students and their parents to compare and share preferences between two chatbot conversation styles: a "best friend" that was highly relational and would respond with comments like, "That must be so upsetting. Your ideas matter so much. I'm always here to listen," and a more transparent version that set boundaries and reminded the user that it was an AI. More adolescents preferred the relational AI, and more than half of parents chose the relational AI for their teens, reasoning it would be more effective at supporting issues their children might not share with them directly. But more important, children who chose the relational AI were also more likely to report feeling stressed or anxious, and they reported lower family relationship quality. "If they have more unmet social needs, it is possible that they're more drawn to an AI that provides social connections," Kim said.
"That might put them in more vulnerable positions to overly rely on a relationship that is not real." She emphasized the common thread of the day: AI should never replace human connection.
[3]
Colleges face a choice: Try to shape AI's impact on learning, or be redefined by it
What happens to a college education when a chatbot can draft an essay, summarize a reading and generate computer code in seconds? The arrival of artificial intelligence in college classrooms has been swift and, for many schools, disorienting. As professors of economics and business management and biology at liberal arts colleges, we are confronting a question that now cuts across all colleges and universities: What is the purpose of a college education, as AI is rapidly reshaping how students think, learn and prepare for careers? While much of the public debate has focused on plagiarism and credit for student work, the deeper issue extends beyond rule-setting. Across higher education, most schools have issued guidance on how students should use AI, rather than adopting sweeping mandates. Liberal arts colleges, like the University of Richmond, Bard College and Trinity College, tend to emphasize the importance of students using AI ethically and responsibly, and typically allow students to use AI when they cite it and their instructor permits it. These schools also allow professors to individually determine their own AI policies. A 2024 study of 116 research universities found similar patterns, with instructors largely determining course policies and few campus-wide bans. What's unsettled is not whether students can use AI, but how institutions want students to use it. In our view, unless colleges clearly shape AI's role in teaching and learning, fast-moving technologies may begin to redefine education by default. The risk isn't more AI, but a gradual shift in what counts as learning. Students may spend less time asking hard questions, making their own judgments and building real expertise. In that case, college risks becoming less about understanding and more about producing papers and other content quickly.
Letting AI into the classroom

When generative AI tools first became widely available in late 2022 and early 2023, most professors focused on detecting and preventing AI use in student work. They looked for signs of AI use, including generic phrasing, fake citations, sudden shifts in tone or unusually polished writing that didn't match a student's prior work. Some faculty also used AI-detection software to identify computer-generated text. But it is often difficult to tell when someone has used AI, in part because the detection software is unreliable. As a result, many faculty have shifted from bans to more structured guidance. Some now allow students to use AI for specific tasks, such as brainstorming, outlining or debugging code. The rationale is practical: AI is everywhere and already embedded in professional settings. College graduates are likely to use AI in the workplace.

Accepting AI is here to stay

More recently, college faculty at a range of schools have shifted the focus from whether students are using AI at all to whether students using AI can still analyze, question and justify their own research and conclusions. At the University of Michigan, for example, some faculty are redesigning assessments to include live debates and oral presentations. And across the U.S., professors are reviving oral exams, since live questioning makes it harder for students to rely solely on AI. Students must then verbally explain their reasoning and defend their work. Different academic fields, though, are approaching AI in various ways. Many business programs, like the University of Pennsylvania's Wharton School, have moved quickly to bring AI into coursework and degree programs, often framing it as workforce preparation. A recent analysis of more than 31,000 syllabuses at a large research university in Texas showed a growing number of faculty in the fall of 2025 allowed students to use AI.
Business courses allowed the greatest use of AI, while humanities courses allowed it the least. The physical and life sciences fell in between. Across disciplines, AI was most often allowed at this school for editing, study support and coding. It was most commonly restricted for drafting, revising and reasoning or problem-solving. AI's role in higher education is not settled. Instead, it is evolving, shaped by different academic cultures.

Different schools, different approaches

Colleges' and universities' overall responses and approaches to AI are varied as well. Research universities like Carnegie Mellon University and Stanford University are expanding on their long-standing investments in AI, moving quickly to develop new research centers, hire faculty with AI expertise and create new degree or certificate programs. Liberal arts colleges are moving too, but often with a different emphasis. The Davis Institute for AI at Colby College supports AI work across disciplines through new courses, faculty development and entrepreneurship. At the University of Richmond, a new center links AI to critical thinking and human values, so students can study AI's impacts and help shape it intentionally. All of these schools are determining AI policy course by course, but these plans are not part of a comprehensive, school-wide strategy. Few schools have articulated coordinated, institution-wide plans on AI. Arizona State University is one example of a broader AI integration strategy, one that spans academics and campus operations. Comprehensive AI strategies are expensive. Meaningful integration may require campus licenses for AI services, upgraded computing systems and faculty training. These investments are difficult at a time when many colleges face enrollment declines and financial strain. Public trust in higher education is another concern that makes enacting broad change difficult.
Gallup surveys in 2023 and 2024 found that only 36% of Americans had high confidence in colleges and universities. Against this backdrop, AI is raising questions about how colleges prepare students for their careers. Employers still prize critical thinking and communication. Yet generative AI can mimic the appearance of thinking even when real understanding is absent. The tension is clear: If AI does the writing, coding or analysis, where do students do the thinking?

Rethinking learning

Rising use of AI is forcing colleges and universities to revisit what students should learn, how to measure it and the enduring value of a college degree. That shift moves the conversation beyond course-by-course changes to a shared strategy on what forms of knowledge and thinking are developed in college. Colleges may redesign assignments, expand oral and project-based assessments, and integrate AI literacy across disciplines. They may also clarify learning outcomes, invest in faculty development and find new ways to document students' judgment and problem-solving in an AI-assisted world. The question is no longer whether AI belongs in higher education. The real question is whether colleges and universities will shape its role - or allow AI to quietly reshape them.
[4]
'A.I. Literacy' Is the New Driver's Ed at This Newark School
Reporting from North Star Academy Washington Park High School in Newark

The first session of a new artificial intelligence class this month for high school seniors in Newark involved purely human intelligence. The students' assignment: to compare when they had passively scrolled through A.I.-driven social media feeds with times when they had actively selected the videos or Google search results they wanted to see. "Are you steering the technology or is it steering you?" asked a slide on the classroom whiteboard at Washington Park High School. In a class discussion that followed, a student named Adrian Farrell, 18, said he had taken charge of A.I. by asking a chatbot to check his math homework for accuracy. Brianna Perez, 18, said she went into "passenger mode" when using a Spotify feature called A.I. DJ. "It plays your favorite music so you don't have to change it," she said. Schools across the United States are hustling to introduce a new subject: A.I. literacy. In what some educators are calling a "driver's license" for A.I., the new lessons aim to teach students how to examine the latest tech tools and use them responsibly. Teachers say they want to prepare young people to navigate a world increasingly shaped by A.I., as chatbots manufacture human-sounding writing and employers use algorithms to help vet job candidates. Some schools are focused on A.I. chatbots, teaching students how to prompt Google's Gemini or Microsoft's Copilot. Some are introducing A.I. as a new class topic, with lessons examining societal consequences like the spread of A.I.-generated nude images, known as deepfakes. A.I. lessons are becoming more common in schools as a debate has exploded over whether chatbots are likely to improve -- or doom -- education. Proponents say schools must quickly teach young people to use A.I. to assist their learning, prepare them for jobs and help the United States compete with China.
Last year, President Trump issued an executive order urging schools to teach "foundational A.I. literacy" starting in kindergarten. Education researchers warn that chatbots can make stuff up, enable cheating and erode critical thinking. A recent study from Cambridge University and Microsoft Research found that students who took notes on text passages had better reading comprehension than students who got help from chatbots. For now, "the risks of utilizing A.I. in education overshadow its benefits," the Brookings Institution concluded last month in a report on school A.I. use. Amid the debate, schools like Washington Park High are staking out a middle ground by treating A.I. as if it were a car and helping students develop rules for the road. Mike Taubman, 45, a career explorations teacher who co-developed the school's new literacy course, compared the class to preparing teenagers for their driver's license exam. "Where do you want to go, and can A.I. help you get there?" Mr. Taubman asked. Students needed to learn to drive A.I. tools, analyze what's under the hood, develop guidelines for personal use and design ideal safety policies, he said. "What do I think the laws, the rules, the norms should be around A.I. for my city, my country?" he added. Washington Park, a four-story building with a red brick facade in downtown Newark, serves about 900 high school students. It is part of Uncommon Schools, a charter school network in the Northeast focused on college and career preparation. Mr. Taubman and Scott Kern, a U.S. history teacher, came up with the idea for the new elective class on A.I. Both had already introduced A.I. tools and topics in their regular courses. Mr. Kern, 45, recently participated in a program at Playlab, a nonprofit that helps teachers create customized A.I. apps for their courses. To help his students hone their argumentative writing, Mr. Kern developed chatbots for his U.S. history classes based on his course materials and student assessments. 
He also developed firm guidelines for students on when to use -- and when not to use -- A.I. bots. On a recent Tuesday, Mr. Kern taught an Advanced Placement U.S. History class on the Chicago Race Riot, violent protests set off by the murder of a Black teenager in 1919. First, he asked students to read century-old newspaper clippings and other historical documents. Then, he led a class discussion on broader trends that had helped fuel the tensions. Next, Mr. Kern asked students to spend a few minutes describing the main cause of the riot to a chatbot he had created for the class. Allyson Johnson, 17, opened her laptop and typed in her answer: entrenched racial segregation. "Let me push you on this," the chatbot responded. Segregation had existed for decades before 1919. So what specific factors, the bot asked, had caused a tense situation to suddenly escalate "into explosive violence"? Ms. Johnson said she enjoyed sparring with the A.I. because "the chatbot asked me different questions that pushed my argument even more." After a few minutes, Mr. Kern told students that their time with the chatbot was up and resumed the class discussion. Fundamental student learning should remain an A.I.-free activity, he said. "Anytime where we want kids interacting with each other or doing initial critical thinking, I would never want A.I. or any sort of technology of that ilk to come in and interfere with that," Mr. Kern said. In another part of the school, Mr. Taubman was leading a career explorations course. He has developed a variety of career simulation chatbots for the class. One enables students interested in fields like speech pathology to create and learn about virtual patients with detailed medical histories. Aniya Gervais, 17, is interested in becoming a mental health nurse practitioner. For a class project, she wanted to create a hypothetical nonprofit for teenagers with mental health issues. But after discussing her plan with one of Mr. Taubman's class A.I. 
bots, she concluded her initial idea was too broad. So she narrowed the project to focus on teenagers struggling with both depression and substance abuse. Ms. Gervais said she often used ChatGPT for tasks like coming up with pasta recipes or planning fitness routines. "Before, I was telling A.I. what to do, and it was just telling me what to do," Ms. Gervais said. "But now," she said of Mr. Taubman's classroom chatbots, "I'm asking the A.I. questions that will help me get to the answer." This semester, Mr. Kern and Mr. Taubman decided to join forces to formalize their A.I. education methods in an elective class. Eighteen students signed up. During the first class this month, students learned about how some film directors had started using A.I. to generate movie scenes. Should humans still get credit for that? As long as people directed the video-generating bots, some students said, they would consider humans to be a film's authors. Other students argued that tech giants had trained A.I. on decades of artists' work, potentially amounting to intellectual property theft. (The New York Times has sued OpenAI and Microsoft over copyright infringement claims. Both companies have denied wrongdoing.) Mr. Kern and Mr. Taubman acknowledged that their driver's license metaphor had limits. Until chatbots have built-in safeguards akin to seatbelts and airbags, it will be difficult for students to make truly informed decisions about the risks of powerful A.I. systems. Mr. Kern said he hoped that students would one day "have influence to build these tools in a way that's better and more equitable and more environmentally friendly than what exists now." Ms. Perez, a senior taking the new course, said she already felt empowered learning about A.I.'s uses and risks. The school hopes to offer the A.I. literacy class soon to all 12th graders. "If it wasn't for courses like this that are being implemented, we could really go into our future like not knowing what's coming," Ms. 
Perez said.
[5]
In some classrooms, teachers ask: Can AI teach students to write better?
When Craig Schmidt gave his high school English students an assignment based on "Fahrenheit 451," he threw them a curveball: He told them to use ChatGPT. Schmidt asked the class to write several paragraphs reflecting on the dystopian novel, then feed them into the artificial intelligence chatbot for feedback. He distributed worksheets explaining how to use ChatGPT as a "writing partner" by instructing it to assume the persona of a critic or teacher and describing the feedback it should provide. "The A.I. will NOT always give you great advice!" Schmidt wrote in a worksheet for students. "It might suggest something that doesn't fit what you want to say. You need to use your EDITING skills." Vince Lombardo, one of Schmidt's students in that 2024 class, said it was the first time one of his teachers had suggested using an AI bot during an assignment, rather than warning students against using the tools at all. He fed paragraphs from his assignment into ChatGPT and, using Schmidt's worksheet, crafted a prompt to ask for advice. There were some points Lombardo disagreed with, like starting the essay with a rhetorical question, but to his surprise, he found most of ChatGPT's feedback helpful. "I thought it was great," Lombardo, 15, said. "Ever since then, I've kind of been doing the same thing." As educators around the country grapple with the effects of AI, a growing cohort of English teachers are finding ways to bring tools like ChatGPT into their pedagogy as tutors and brainstorming aides. For students like Lombardo, learning how to prompt a chatbot for feedback -- and when to question AI's advice -- has become an essential part of the writing process. Coaching from AI, personalized and accessible at any time, is now shaping how they write. "Sometimes I can go into AI and be like, 'My teacher wants me to be able to do this,'" Lombardo said.
"'How can I do that within my writing?'" (The Washington Post has a content partnership with ChatGPT developer OpenAI.) Schmidt, an English teacher of nearly 30 years in Libertyville, Illinois, said he was dismayed when he began encountering student work that appeared AI-generated. Software for detecting AI writing was unreliable, and he said he found it difficult to confront students about AI use. Schmidt had to decide on his own how to handle it. Several years after generative AI became accessible to students, figuring out how best to include -- or exclude -- AI tools in the classroom often still falls to individual school districts and teachers. "We don't have a department policy," Schmidt said. "The district doesn't. I think everybody feels it's still kind of the Wild West." On one end of the spectrum, some teachers are letting students draft their own AI policies. On the other, the most skeptical teachers are using formats like oral exams to restrict the use of AI as much as they can. Schmidt has joined a growing cohort that is trying to find a middle ground. "Whether we liked it or not, the technology was going to be in the hands of our students," said Kimberly Cooney, an English teacher at Chattahoochee High School in Johns Creek, Georgia. "And so we could either teach them how to use it ethically and responsibly and teach them to actually augment their thinking, or we could, you know, do nothing." One of Cooney's lesson plans teaches her students to use AI to help brainstorm themes in the Arthur Miller play "The Crucible," walking them through the technique of structuring AI prompts and then asking them to paraphrase the chatbot's responses. In another, she shows the class an AI-generated paragraph on an essay assignment and asks students to critique it. "I said, 'Okay, AI works on algorithms, and it works on predictability. And as a result of that, it tends to create the most predictable, mid-level, sort of bland writing that you can have,'" Cooney said. 
"... They need to be making much more assertive arguments than that." Jill Stedronsky, an English teacher in Basking Ridge, New Jersey, has had some of her eighth-graders use prompts to create AI "writing partners" intended to be regular sources of advice and feedback. Throughout the year, her students entered journal entries and essays into the chatbot and reflected on them in conversation with the AI tool. Both Cooney and Stedronsky see teaching students how to prompt AI bots as a way to help them think about their writing. "In creating the 'writing partner,' they had to really think about what they wanted to ask and what kind of feedback [to ask for]," Stedronsky said.

Are chatbots giving out good writing advice?

Schmidt, of Libertyville High School in Illinois, thinks so, most of the time. But he, Cooney and Stedronsky are quick to emphasize to students that they should look at the suggestions they get from their AI "writing partners" critically. Their exercises usually require students to critique the advice they get from AI. (Schmidt said he has also seen less cheating with AI since using chatbots in his teaching.) When Lombardo, Schmidt's former student, first asked ChatGPT for feedback in class, the chatbot told him to simplify some of his sentences, suggested changes to make the writing less "choppy" and advised him to write a stronger conclusion. He discarded some suggestions but found most of them helpful. Lombardo was moved up to an honors course after taking Schmidt's class, he said. Using AI to plan and edit his assignments has now become routine. "I feel like now, I'm able to write stuff better than I ever have been, even without using AI, because like, I've gotten those different kinds of suggestions over and over again," he said. Amiyah Harish, a high school junior who was introduced to AI "writing partners" in Stedronsky's class, said she has kept up the habit of using an AI tool in her writing.
At each step in an assignment -- after she brainstorms ideas, drafts an outline or writes a paragraph -- she feeds her work into a chatbot to look for improvements. She likened the practice to talking through an essay with an attentive friend. "It's kind of like a discussion," said Harish, 17. "Instead of the teacher giving a lecture about 'This is the exact formula for how I want you to write the essay,' it's the student discovering their own voice, using AI as a tool." As Harish and her peers adopt AI, school and classroom policies are continuing to evolve around them. Stedronsky said her school recently adopted new policies that restrict some chatbots, including the website where she had students create AI writing partners. She said teachers should continue to find ways to use AI to promote inquiry and critical thinking. "If we don't ... we will be left with students who cheat and teachers who revert to pen and paper, rather than using AI to be a critical thinking tool," Stedronsky said.
[6]
In some schools, chatbots interrogate students about their work. But the AI revolution has teachers worried
The fast take-up of innovative technology risks creating a 'two-speed system', an Independent Schools Australia paper warns.

Once upon a time, school students would submit an essay, and teachers would mark it. Job done. Enter "Thinking Mode". Now, in some Australian schools, once a student finishes an assignment an AI chatbot will interrogate them about it: put them on the spot in a two-way dialogue, to make sure they really understood what they wrote. "Can you explain this a little bit more?" the chatbot might say, or, "What do you mean by that word?" It's not just about hammering in the lesson. It's also a way to ensure students do their own thinking, and haven't resorted to plagiarism or ChatGPT.

At Hills Christian Community School in Adelaide Hills, the technology is just one way teachers and students are using artificial intelligence and other brand-new tech to further learning. Students also use sensors, drones and coding to learn about natural ecosystems, from rivers to pollinators and bushland habitats. Students with disabilities, including limited speech, are accessing Meta AI glasses with inbuilt speakers that explain what is happening without disrupting the classroom. The school's leader of digital innovation, Colleen O'Rourke, says they have a philosophy: "AI tools are used by educators to amplify great practice, not dilute it". "The human element cannot be lost in this," she says. "AI is the co-collaborator in the triad of the teacher and the student."

But while AI is being rolled out at Australian schools in innovative ways, it is not coming to all of them, and not equally. The peak body for independent schools is urging the federal government to take up a national AI pilot or risk creating a "two-speed system" and a widening educational divide. The Independent Schools Australia (ISA) paper, released on Monday, analysed how schools across the nation were integrating generative AI into teaching and learning, three years after the release of ChatGPT.
It found schools were adopting AI at widely varying speeds, depending on their geography and resources. Just two jurisdictions, New South Wales and South Australia, have rolled out AI programs to public schools, after a ban on the technology was overturned in late 2023. The chief executive of ISA, Graham Catt, said Australia was at a critical point in determining whether AI became a tool for equity or inequality. "If we don't act deliberately now, we risk creating a two-speed system," Catt said. "Some schools will surge ahead, while others struggle to keep up." The paper called on the federal government to launch a national, sector-blind pilot AI program, to provide a pathway on how to ethically adopt the technology and where to direct funding. The latest Teaching and Learning International Survey (TALIS), released in 2024, found two-thirds of Australian teachers in secondary years and just under half of primary school teachers used AI in their work, placing the nation among countries with the highest uptake of the technology. But teachers also expressed caution about the negative impacts AI could have on student wellbeing, privacy issues and the possibility of plagiarism, indicating a need for better guidance and safeguards. In independent schools, large language models (LLMs), a type of AI system, are already being used to help teachers with marking, provide student feedback, identify learning gaps and act as a one-on-one tutor. NSWEduChat, a department-owned generative AI tool, has been rolled out to all public schools in NSW to help teachers with lesson planning and students to study by asking guided questions to encourage critical thinking. South Australia's EdChat chatbot was also distributed statewide in 2025. Early results show it has saved time for teachers and particularly helped students with language or learning barriers. O'Rourke says teachers are scrambling to try to understand how technology is changing, and need proper training.
"We can't teach our kids how to use it responsibly if teachers don't know how to use it responsibly."
[7]
'Students can't reason': Teachers warn AI is fueling a crisis in kids' ability to think | Fortune
In the 1980s and 1990s, if a high school student was down on their luck, short on time, and looking for an easy way out, cheating took real effort. You had a few different routes. You could beg your smart older sibling to do the work for you, or, a la Back to School (1986), you could even hire a professional writer. You could enlist a daring friend to find the answer key to the homework on the teachers' desk. Or, you had the classic excuses to demur: My dog ate my homework, and the like. The advent of the internet made things easier, but not effortless. Sites like CliffsNotes and LitCharts let students skim summaries when they skipped the reading. Homework-help platforms such as GradeSaver or Course Hero offered solutions to common math textbook problems. The thing that all these strategies had in common was effort: there was a cost to not doing your work. Sometimes it was more work to cheat than it was just to have done the work yourself. Today, the process has collapsed into three steps: log on to ChatGPT or a similar platform, paste the prompt, get the answer. Experts, parents, and educators have spent the past three years worrying AI made cheating too easy. A massive Brookings report released in January suggests they weren't worried enough: The deeper problem, the report argues, is that AI is so good at cheating that it's causing a "great unwiring" of students' brains. The report concludes the qualitative nature of AI risks -- including cognitive atrophy, "artificial intimacy" and the erosion of relational trust -- currently overshadows the technology's potential benefits. "Students can't reason. They can't think. They can't solve problems," lamented one teacher interviewed for the study. The findings come from a yearlong "premortem" conducted by the Brookings Institution's Center for Universal Education, a rare format for Brookings to use, but one they said they preferred to waiting a decade to discuss the failures and successes of AI in school.
Drawing on hundreds of interviews, focus groups, expert consultations, and a review of more than 400 studies, the report represents one of the most comprehensive assessments to date of how generative AI is reshaping students' learning. The report, titled "A New Direction for Students in an AI World: Prosper, Prepare, Protect," warns that the "frictionless" nature of generative AI is its most pernicious feature for students. In a traditional classroom, the struggle to synthesize multiple papers to create an original thesis, or to solve a complex pre-calculus problem, is exactly where learning occurs. By removing this struggle, AI acts as the "fast food of education," one expert said. It provides answers that are convenient and satisfying in the moment, but cognitively hollow over the long term. While professionals champion AI as a tool to do work that they already know how to do, the report notes that for students, "the situation is fundamentally reversed." Children are "cognitively offloading" difficult tasks onto AI, getting OpenAI or Claude to not just do their work but read passages, take notes or even just listen in class. The result is a phenomenon researchers call "cognitive debt" or "atrophy," where users defer mental effort through repeated reliance on external systems like large language models. One student summarized the allure of these tools simply: "It's easy. You don't need to (use) your brain." In economics, consumers are understood to be "rational": they seek maximum utility at the lowest cost to them. The researchers argue that the education system, as it stands, is built on a similar incentive structure: students seek maximum utility (i.e., best grades) at the lowest cost (time) to them. Thus, even high-achieving students are pressured to use a technology that "demonstrably" improves their work and grades.
This trend is creating a positive feedback loop: students offload tasks to AI, see positive results in their grades, and consequently become more dependent on the tool, leading to a measurable decline in critical thinking skills. Researchers say many students now exist in a state they call "passenger mode," where students are physically in school but have "effectively dropped out of learning -- they are doing the bare minimum necessary." Jonathan Haidt once described earlier technologies as a "great rewiring" of the brain, making the ontological experience of communication detached and decontextualized. Now, experts fear AI represents a "great unwiring" of cognitive capacities. The report identifies a decline in mastery across content, reading, and writing -- the "twin pillars of deep thinking." Teachers report a "digitally induced amnesia" where students cannot recall the information they submitted because they never committed it to memory. Reading skills are particularly at risk. The capacity for "cognitive patience," defined as the ability to sustain attention on complex ideas, is being diluted by AI's ability to summarize long-form text. One expert noted the shift in student attitudes: "Teenagers used to say, 'I don't like to read.' Now it's 'I can't read, it's too long.'" Similarly, in the realm of writing, AI is producing a "homogeneity of ideas." Research comparing human essays to AI-generated ones found that each additional human essay contributed two to eight times more unique ideas than those produced by ChatGPT. Not every young person feels this type of cheating is wrong. Roy Lee, the 22-year-old CEO of AI startup Cluely, was suspended from Columbia after creating an AI tool to help software engineers cheat on job interviews. In Cluely's manifesto, Lee admits his tool is "cheating," but says "so was the calculator. So was spellcheck. So was Google. Every time technology makes us smarter, the world panics."
The researchers, however, say that while a calculator or spellcheck are examples of cognitive offloading, AI "turbocharges" it. "LLMs, for example, offer capabilities extending far beyond traditional productivity tools into domains previously requiring uniquely human cognitive processes," they wrote. Despite how useful AI is in the classroom, the report finds that students use AI even more outside of school, warning of the rise of "artificial intimacy." With some teenagers spending nearly 100 minutes a day interacting with personalized chatbots, the technology has quickly moved from being a tool to a companion. The report notes that these bots, particularly character chatbots popular with teens such as Character.AI, use "banal deception" -- using personal pronouns like "I" and "me" -- to simulate empathy, part of a burgeoning "loneliness economy." Because AI companions tend to be sycophantic and "frictionless," they provide a simulation of friendship without the requirement of negotiation, patience or the ability to sit with discomfort. "We learn empathy not when we are perfectly understood, but when we misunderstand and recover," one Delphi panelist noted. For students in extreme circumstances, like girls in Afghanistan who are banned from physical schools, these bots have become a vital "educational and emotional lifeline." For most, however, these simulations of friendship risk, at best, eroding "relational trust," and at worst can be downright dangerous. The report highlights the devastating risks of "hyperpersuasion," noting a high-profile U.S. lawsuit against Character.AI following a teenage boy's suicide after intense emotional interactions with an AI character. While the Brookings report presents a sobering view of the "cognitive debt" students are experiencing, the authors say they are optimistic that the trajectory of AI in education is not yet set in stone.
The current risks, they say, stem from human choices rather than some kind of technological inevitability. In order to shift the course toward an "enriched" learning experience, Brookings proposes a three-pillar framework:

PROSPER: Transform the classroom to adapt to AI, such as using it to complement human judgement and ensuring the technology serves as a "pilot" for student inquiry instead of a "surrogate."

PREPARE: Build the framework necessary for ethical integration, including moving beyond technical training toward "holistic AI literacy" so students, teachers, and parents understand the cognitive implications of these tools.

PROTECT: Establish safeguards for student privacy and emotional well-being, placing responsibility on governments and tech companies to reach clear regulatory guidelines that prevent "manipulative engagement."
As nearly 60% of teenagers report their peers using artificial intelligence to cheat at school, educators are developing AI literacy curricula to teach responsible use. Research shows AI can damage creativity and critical thinking when misused, prompting schools to treat AI education like driver's training—teaching students when to steer the technology and when it's steering them.

Artificial intelligence has arrived in classrooms with startling speed, forcing educators to reckon with a fundamental shift in how students learn and complete assignments. According to a Pew Research Center survey, 64% of teenagers now use chatbots, though only 51% of parents believe their teens are using these AI tools [1]. The disconnect reveals how quickly generative AI has become embedded in student learning, often without full adult awareness of its scope.

The data presents a troubling picture of AI and cheating in school. Nearly 60% of teens believe their peers regularly use AI to cheat, and one in 10 students admits to using chatbots to complete most or all of their schoolwork [1]. Students primarily turn to these tools to search for information (57%), research specific topics (48%), solve math problems (43%), and edit writing assignments (35%). While these applications can support learning, the line between assistance and academic dishonesty has become increasingly blurred.

Research presented at Stanford's AI+Education Summit in February 2026 demonstrates that AI's impact on learning extends beyond simple cheating concerns. Guilherme Lichand, assistant professor at Stanford Graduate School of Education, conducted a study with middle school students in Brazil that revealed alarming findings about creativity and critical thinking [2]. Students who used AI assistance performed better on creative tasks while they had access to the tool, but when that access was removed, those same students performed four times worse than their initial advantage, suggesting AI had damaged their creative self-concept.

Mehran Sahami, a Stanford School of Engineering professor, identified what he calls an assessment crisis: "Education has long assumed that strong products indicate strong learning processes. AI has broken this assumption" [2]. Students can now generate impressive essays and problem sets without engaging in meaningful learning, forcing educators to shift focus from evaluating end products to assessing the actual learning process.

In response to these challenges, schools are developing AI literacy programs that treat artificial intelligence education like driver's training. At North Star Academy Washington Park High School in Newark, teacher Mike Taubman created an "AI driver's license" curriculum that asks students a fundamental question: "Are you steering the technology or is it steering you?" [4]

The four-part curriculum includes choosing a destination (learning to articulate what they want from AI), learning how to drive (understanding prompting skills and agentic workflows), opening the hood (recognizing limitations and risks), and defining rules of the road (deciding what AI should and shouldn't do) [2]. This structured approach aims to prevent the pattern Sahami observed: without systematic instruction, 70-80% of students use AI to short-circuit learning rather than enhance it.

Some educators are finding ways to integrate chatbots in education as teaching tools rather than simply trying to ban them. Craig Schmidt, an English teacher in Libertyville, Illinois, instructs students to use ChatGPT as a "writing partner" for feedback on their essays, complete with worksheets explaining how to craft effective prompts [5]. His approach emphasizes that students must apply their own editing skills and judgment: "The A.I. will NOT always give you great advice!" his worksheet warns.

Scott Kern, a U.S. history teacher at the same Newark school, developed custom chatbots based on his course materials to help students refine argumentative writing skills [4]. After students read historical documents about the 1919 Chicago Race Riot, they described their analysis to the chatbot, which pushed back with follow-up questions designed to strengthen their reasoning. Student Allyson Johnson, 17, said she enjoyed the interaction because "the chatbot asked me different questions that pushed my argument even more."

The integration of AI in classrooms has prompted many educators to fundamentally rethink how they measure student understanding. Analysis of more than 31,000 syllabuses at a large Texas research university showed faculty increasingly allowing AI use in fall 2025, with business courses permitting the greatest use and humanities courses the least [3]. AI was most commonly allowed for editing, study support, and coding, but restricted for drafting, revising, and reasoning tasks.

Many faculty members are reviving oral exams and live debates, since verbal questioning makes it harder for students to rely solely on AI-generated responses [3]. At the University of Michigan, some faculty are redesigning assessments to include presentations where students must defend their work and explain their reasoning in real time. This shift reflects a broader recognition that educational technology has made traditional homework and tests insufficient measures of actual learning outcomes.

The Stanford summit highlighted significant equity concerns around AI. Wendy Kopp, founder of Teach for All, noted that "AI amplifies whatever educational foundation already exists" [2]. In well-resourced schools with strong pedagogy, AI becomes a powerful tool for both teachers and learners. But without clear guidelines and solid instructional foundations, the technology becomes a distraction that widens existing gaps.

Miriam Rivera of Ulu Ventures identified a critical distinction between consumption and creation of AI: in well-resourced schools, students learn to create with technology through coding and 3D printing, while in less-resourced schools, students merely consume it [2]. Both panelists emphasized that educators and students from marginalized communities must be at the forefront of designing AI applications, not just receiving them.

Colleges and universities are taking varied approaches to the ethical use of AI. Liberal arts institutions like the University of Richmond, Bard College, and Trinity College emphasize responsible use, typically allowing students to use AI when they cite it and instructors permit it [3]. A 2024 study of 116 research universities found similar patterns, with individual instructors largely determining course policies rather than campus-wide mandates.

Research universities like Carnegie Mellon and Stanford are expanding investments in AI by developing research centers, hiring faculty with AI expertise, and creating new degree programs [3]. Meanwhile, liberal arts colleges are emphasizing AI's connection to critical thinking and human values. The Davis Institute for AI at Colby College supports work across disciplines through new courses and faculty development, while the University of Richmond's center links AI to ethical considerations.

Despite concerns about misuse, teenagers maintain a more optimistic view of artificial intelligence than adults. The Pew survey found that 36% of teens believe AI will have a positive impact over the next 20 years, compared to just 17% of U.S. adults [1]. One teen respondent said AI will "meet the needs of almost everything," while another argued it will automate mundane tasks, giving people more time for meaningful work.

However, skeptical students raised concerns about AI's potential harm to the environment, job opportunities, and creativity. One respondent put it bluntly: "It destroys young people's minds and brains" [1]. Another noted that "people rely too much on AI to do school work, ask basic questions, etcetera." These competing perspectives reflect broader societal debates about technology's role in shaping human capability and independence.

The challenge facing educators is clear: unless schools actively shape AI's role in student learning, fast-moving technologies may redefine education by default. Watch for continued development of AI literacy programs, new assessment methods that emphasize process over product, and ongoing debates about how to balance preparing students for an AI-enabled workplace while preserving the writing skills and analytical thinking that define genuine learning.
Summarized by Navi