15 Sources
[1]
About 12% of U.S. teens turn to AI for emotional support or advice | TechCrunch
AI chatbots have become embedded in the lives of American teenagers, according to a report published Tuesday by the Pew Research Center. While the most common uses of AI among this demographic are to search for information (57%) and get help with schoolwork (54%), teens are also using AI to fill roles that would typically be occupied by friends or family. Sixteen percent of U.S. teens say they use AI for casual conversation, while 12% use AI chatbots for emotional support or advice. Some teens may find solace in talking to chatbots, but mental health professionals are wary. General purpose tools like ChatGPT, Claude, and Grok are not designed for such uses, and in the most extreme cases, these chatbots can have life-threatening psychological effects.

"We are social creatures, and there's certainly a challenge that these systems can be isolating," Dr. Nick Haber, a Stanford professor researching the therapeutic potential of LLMs, told TechCrunch recently. "There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating -- if not worse -- effects."

Pew's survey also shows a discrepancy between teenagers' self-reported AI usage and the extent to which their parents think they engage with this technology. About 51% of parents said that their teen uses chatbots, while 64% of teens reported using them. The majority of parents are okay with their teens using AI to search for information (79%) or get help with schoolwork (58%), but far fewer parents approve of their teens using AI chatbots for casual conversation (28%) or to get emotional support or advice (18%). In fact, 58% of parents are not okay with their child using AI for such purposes.

AI safety is a contentious topic among leading tech companies, to say the least.
But one popular chatbot maker, Character.AI, made the choice to disable the chatbot experience for users under the age of 18. This decision followed public outcry and lawsuits filed over two teenagers' suicides, which took place after prolonged conversations with the company's chatbots. OpenAI, meanwhile, made the decision to sunset its particularly sycophantic GPT-4o model, which sparked backlash from people who had come to rely on the model for emotional support.

Though a majority of teens use AI chatbots in some way, they have mixed feelings about the impact of this kind of technology on society. When asked how they think AI will impact society over the next 20 years, 31% of teens said the impact would be positive, while 26% said it would be negative.
[2]
Nearly 60% of Teens Believe Their Peers Use AI to Cheat at School, Survey Finds
Kids think AI is alright, according to survey data the Pew Research Center published Tuesday. They're using it for a wide range of schoolwork tasks -- including cheating. Teenagers seem to have a more optimistic view of AI compared to adults. Only 17% of US adults felt AI would have a positive impact over the next 20 years, Pew found in a study last year. For teenagers (ages 13 to 17), that number goes up to 36%. Perhaps unsurprisingly, adults with significant experience using generative AI tools are the most optimistic, at 51%.

Many of today's debates about how AI should be used are reflected in teens' perspectives. Pew's survey included anonymized quotes from teen respondents. One teen who viewed AI positively said it will "meet the needs of almost everything." Another said AI will automate mundane tasks, giving people more time to do what they really want -- a common pro-AI argument. Those who viewed AI negatively raised concerns about its potential harm to the environment, job opportunities and creativity. Another respondent put it more bluntly: "It destroys young people's minds and brains."

These debates are especially relevant to teens, who may be using AI tools like chatbots more often than their parents realize. Pew found that 51% of parents believe their teen uses chatbots. When teens were asked directly, that figure jumped to 64%.

Like any new technology, there's a lot of concern about how teens and their undercooked, still-developing brains are using AI. Researchers and educators worry that students are at risk of stalling their learning development if they outsource their critical thinking to AI tools. But that risk varies with use; AI isn't an inherently evil threat to teens, but it can become one if misused. This is what we know so far about how teens are actually using AI -- the good and the bad.
How teens are using AI in school

Teenagers are somewhat split on using AI tools to aid with schoolwork; 45% say they don't use chatbots for homework help, while 54% say they do use them to some degree.

The biggest AI uses are around information gathering. Teens are using AI to search for information (57%) and for help researching specific topics (48%). This makes sense given chatbots excel at information scouring and summarization. Teens are also using AI to help solve math problems (43%) and to edit writing assignments (35%).

But like any tool, some teens are using it to cut corners and avoid doing the work. Pew found that one in 10 students uses chatbots to do most or all of their schoolwork. And nearly 60% of teens believe that their peers are regularly using AI to cheat at school. "People rely too much on AI to do school work, ask basic questions, etcetera," one teen responded in the Pew survey.

Tech companies have heavily advertised their AI products to students, especially those in college. ChatGPT, Perplexity and Claude each have advanced modes dedicated to research. There's an ongoing debate about the best ways to implement AI in education -- balancing the desire to teach students how to best utilize the new tech without letting them entirely run amok.
[3]
Most Teens Use AI for Homework Help. 10% Let It Do Everything
Most American teenagers have grown comfortable with treating AI chatbots as substitute teachers, but far fewer say they've ever used one as a therapist, according to a new report from the Pew Research Center. The top use for these conversational apps is something less exciting than either of those person-replacement scenarios: searching for information, which 57% of teens have done at least once. Those results might make Google's efforts to weave AI into its web-search experience seem more understandable.

Pew's finding that only 12% have ever used an AI chatbot for "emotional support" may also provide modest reassurance to parents of online teens who have seen so much coverage of AI chatbots hurting kids' mental well-being, sometimes to the point of self-harm or suicide.

As for parents worried about whether AI is the real reason behind their teens' suddenly improved grades? That's complicated. The survey found that 54% of teenagers have ever used AI chatbots for help with schoolwork. But 10% report using them to do "all or most" of it, 21% to do "some" of it, and 23% to do "a little" of it. The top application of AI for schoolwork is "researching a topic," which 48% of respondents say they have ever done; 43% reported ever using a chatbot to help solve a math problem, and 35% said they have used one to edit their own writing.

Applying AI to learning can mean many different things. As one example, a Feb. 22 Washington Post article reported how English teachers have invited their students to use chatbots as writing coaches -- while teaching them to question that advice and recognize ways it might lead them astray.
As another, a startup named Companion.ai is now pitching an Einstein chatbot that has drawn scathing criticism on social media for its touted ability to let a student skip remote learning on the widely used Canvas platform: "[Einstein] logs into Canvas every day, watches lectures, reads essays, writes papers, participates in discussions, and submits your homework -- automatically."

"Students actually using the product have been really positive," Companion.ai CEO Advait Paliwal said in an email. "On the educator side, we've received everything from constructive concerns to threats telling us to take it down or we won't 'sleep well' and that we're causing the downfall of society." (The site for Einstein doesn't list pricing -- a real dark pattern -- but Paliwal says the service offers $40, $100, and $200 monthly plans. A PR rep for Instructure, the developer of Canvas, did not answer an email requesting comment.)

The Pew data suggest many students aren't ready to put that much trust in an AI to do their work. While 26% said chatbots have been "extremely or very helpful" for completing schoolwork, almost as many -- 25% -- described them as only somewhat helpful, 3% said they were not helpful, and the remaining 45% weren't using chatbots for schoolwork.

A majority of respondents also don't trust their own classmates to use AI honestly in school: 34% reported that students at their school use AI to cheat very or extremely often, and 25% did so "somewhat often." Only 14% said fellow students rarely or never use AI to cheat, and 15% weren't sure.

The report includes additional details on the demographics of teens who lean on AI for most or all of their schoolwork, finding the highest proportion -- 20% -- in households earning less than $30,000 a year. In households making more than $75,000 a year, that figure was just 7%.
The other categories of even-once AI chatbot use reported in the Pew survey: "fun or entertainment," reported by 47% of respondents; summarizing an article, book, or video, 42%; creating or editing images and videos, 38%; getting news, 19%; and "casual conversation," 16%.

The Pew survey also asked some bigger-picture questions about how teens see AI affecting them and the world around them, and they don't seem quite sold on the whole thing. While 36% say they expect AI to have a positive effect on them over the next 20 years, 32% think it would be equally positive and negative, 15% expect negative consequences, and 17% aren't sure. Respondents are less bullish when asked what they thought AI would do for society at large: 31% predict positive results, 34% are equally positive and negative, 28% are negative, and 8% are unsure.

The report includes some quotes from anonymized respondents that suggest they have thought more deeply about these possibilities than certain tech CEOs. One teenage girl voiced optimism about AI's ability to free up time for human creativity: "It will do tasks that can be automated and allow people more time to do what they like." But a teen boy complains that "it's hard to tell what's real or AI online anymore" while another teen girl notes the obvious potential for abuse: "There are evil people in this world, and the wrong person could make AI turn against humans."

This survey did not ask respondents which chatbots they use. A separate Pew survey released in December found that 59% of teens had used OpenAI's ChatGPT at least once, far above the figures for Google Gemini (23%) and Meta AI (20%).

Pew's researchers also spoke with parents and found them supportive of most AI-chatbot uses, with 79% saying they're OK with using them to look up information, but opposed to turning to them for casual conversations (45% of parents didn't support that) or emotional support (58% not OK). And the most striking number in this study?
Pew found 42% of parents had not talked to their teenagers about AI chatbots. Pew used the research firm Ipsos to conduct this online survey via its KnowledgePanel, which drew responses from 1,458 US teens and their parents online from Sept. 25 to Oct. 9, with an overall margin of error of plus or minus 3.3 percentage points.
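For context on that ±3.3-point figure: the textbook margin of error for a simple random sample of 1,458 at 95% confidence works out to roughly 2.6 points; Pew's larger reported margin reflects the design effect of a weighted online panel like KnowledgePanel. A minimal sketch of the baseline calculation (the function name is illustrative, not from the report):

```python
import math

def srs_margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error in percentage points for a simple random sample.

    n: sample size; p: assumed proportion (0.5 is the worst case);
    z: critical value (1.96 for 95% confidence).
    """
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Baseline for Pew's n = 1,458: about 2.6 points under simple-random-sample
# assumptions, versus the reported 3.3 points once weighting is accounted for.
print(round(srs_margin_of_error(1458), 1))
```

The gap between 2.6 and 3.3 is normal for probability panels: weighting respondents to match population benchmarks inflates the variance of estimates.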
[4]
Experts examine AI's role in learning, equity, and creativity
Experts also encouraged educators to prioritize human connections over technology in student support. Last week, Stanford Institute for Human-Centered AI and the Stanford Accelerator for Learning convened educators, researchers, technologists, policy experts, and more for the fourth annual AI+Education Summit. The day featured keynotes and panel discussions on the challenges and opportunities facing schools, teachers, and students as AI transforms the learning experience.

At the summit, several themes emerged: AI has created an assessment crisis - student projects no longer indicate a strong learning process; schools are awash with too many AI products and need better evaluations and sustainable adoption models; AI's benefits aren't equitable; AI literacy is a non-negotiable; human connection is irreplaceable. Read a few of the highlights from the Feb. 11, 2026 event, and watch the full conference on YouTube.

AI's inequitable impact

AI amplifies whatever educational foundation already exists, said Wendy Kopp, founder of Teach for All. In mission-driven schools with strong pedagogy, AI becomes a powerful tool for teachers and learners. But without a strong pedagogy and guidelines, the technology becomes a distraction. Miriam Rivera, of Ulu Ventures, said a critical distinction emerges between consumption and creation of AI. In well-resourced schools, she said, students often learn to create with technology (3D printing, coding), while in less-resourced schools, students merely consume it. Both panelists said that equity-focused teachers and students from marginalized communities must be at the forefront of designing AI applications, not just receiving them.

Dennis Wall, a Stanford School of Medicine professor, illustrated one way this might look. His team is developing a gamified framework to support children struggling with social communication skills.
His lab is co-designing these resources with the teachers, therapists, and parents who will use them, ensuring the tools are accessible, engaging, and informed.

AI literacy is a must-have

Education has long assumed that strong products (homework, summative tests, problem sets) indicate strong learning processes, said Mehran Sahami, a Stanford School of Engineering professor. AI has broken this assumption. Students can now generate impressive products without engaging in meaningful learning. This directs educators to focus on assessing and supporting the actual learning process rather than just evaluating end products.

Moreover, we can't treat AI solely as a tool. Students need a systemic curriculum on AI. Sahami proposed a progression: Introduce what AI is; teach about hallucinations and bias; show how to verify AI outputs; teach advanced techniques like prompting. Without this structured approach, students teach themselves - and 70-80% use AI to short-circuit learning rather than enhance it.

Mike Taubman, a teacher at North Star Academy in Newark, N.J., developed an "AI driver's license" curriculum that maps the adolescent rite of passage of getting a driver's license onto AI literacy. The goal is to put students in the driver's seat, not the passenger seat, when it comes to AI. The four-part curriculum includes choosing a destination (students learn to ask what they want of AI); learning how to drive (see how these tools work and what it means to prompt, develop agentic workflows, etc.); opening the hood (understand their limitations and risks); and defining the rules of the road (decide what AI should and shouldn't do).

Understand AI's learning harms

Guilherme Lichand, assistant professor at Stanford Graduate School of Education, studied AI's impact on creativity for middle school students in Brazil. He compared AI assistance with guardrails (if students ask for 10 words, the AI would give only 3) against no assistance across creativity tasks.
Students with AI assistance performed better on the task while they had the tool. But when assistance was removed within the same test, the advantage disappeared - suggesting no immediate positive transfer. While that finding isn't surprising, he said, the results from a follow-up creative task were more concerning:

* Students who never had AI performed best.
* Students with continued AI or new AI access performed slightly worse (not statistically significant).
* Students who lost AI access after having it performed dramatically worse - four times worse than their initial advantage.

This wasn't just about missing the tool - students had less fun and began believing AI was more creative than they were, he said, suggesting AI damaged their creative self-concept.

A "too many pilots" problem

Today we have no shortage of AI products, said Stanford Graduate School of Business Professor and HAI Senior Fellow Susan Athey, but we lack effective implementation and adoption. Schools and districts are slow to adopt new tools because of historical software lock-in and the opportunity costs of training teachers on systems that may fail.

Athey also noted a "teaching to the test" problem for developers. If teachers spend more time on an interface, does that mean it's good and they're deeply engaged, or does that mean it's terrible and they're spending time trying to make it work? Education tools need multifaceted measurement approaches: human review, AI "guinea pigs" (simulated students to test products before real children do), and careful evaluation of what's actually being measured. She advocated for digital public goods like evaluation tools, testing frameworks, and validated AI student simulations that could be developed by universities and philanthropy to create robust measurement infrastructure the whole sector can use.
Never replace real relationships

Nearly half of all generative AI users are under 25, said Amanda Bickerstaff, CEO of AI for Education; that's over 300 million active monthly users of ChatGPT alone who are under 25. Students use AI more for mental health and well-being - seeking connection, support, and understanding - than for schoolwork. Bickerstaff warned about cognitive offloading, mental health offloading, and even "belief offloading," where AI fundamentally shapes how people think, with just four or five chatbot makers having outsized influence on billions of users. Because of that, she said, we must equip people with knowledge, skills, and mindsets to understand when and how to use AI and, crucially, when not to use it.

The most vulnerable, according to new research from Pilyoung Kim, visiting scholar at the Stanford Accelerator for Learning and a professor of psychology and director of the Center for Brain, AI, and Child (BAIC) at the University of Denver, are young people lacking human connections. She asked over 260 middle school students and their parents to compare and share preferences between two chatbot conversation styles: a "best friend" that was highly relational and would respond with comments like, "That must be so upsetting. Your ideas matter so much. I'm always here to listen," and a more transparent version that set boundaries and reminded the user that it was an AI.

More adolescents preferred the relational AI, and more than half of parents chose the relational AI for their teens, reasoning it would be more effective at supporting issues their children might not share with them directly. But more importantly, children who chose the relational AI were also more likely to report feeling stressed or anxious, and they reported a lower family relationship quality. "If they have more unmet social needs, it is possible that they're more drawn to an AI that provides social connections," Kim said.
"That might put them in more vulnerable positions to overly rely on a relationship that is not real." She emphasized the common thread of the day: AI should never replace human connection.
[5]
Colleges face a choice: Try to shape AI's impact on learning, or be redefined by it
What happens to a college education when a chatbot can draft an essay, summarize a reading and generate computer code in seconds? The arrival of artificial intelligence in college classrooms has been swift and, for many schools, disorienting. As professors of economics and business management and biology at liberal arts colleges, we are confronting a question that now cuts across all colleges and universities: What is the purpose of a college education, as AI is rapidly reshaping how students think, learn and prepare for careers?

While much of the public debate has focused on plagiarism and credit for student work, the deeper issue extends beyond rule-setting. Across higher education, most schools have issued guidance on how students should use AI, rather than adopted sweeping mandates. Liberal arts colleges, like the University of Richmond, Bard College and Trinity College, tend to emphasize the importance of students using AI ethically and responsibly, and typically allow students to use AI when they cite it and their instructor permits it. These schools also allow professors to individually determine their own AI policies. A 2024 study of 116 research universities found similar patterns, with instructors largely determining course policies and few campus-wide bans.

What's unsettled is not whether students can use AI, but how institutions want students to use it. In our view, unless colleges clearly shape AI's role in teaching and learning, fast-moving technologies may begin to redefine education by default. The risk isn't more AI, but a gradual shift in what counts as learning. Students may spend less time asking hard questions, making their own judgments and building real expertise. In that case, college risks becoming less about understanding and more about producing papers and other content quickly.
Letting AI into the classroom

When generative AI tools first became widely available in late 2022 and early 2023, most professors focused on finding and preventing AI use in student work. They looked for signs of it, including generic phrasing, fake citations, sudden shifts in tone or unusually polished writing that didn't match a student's prior work. Some faculty also used AI-detection software to identify computer-generated text. But it is often difficult to tell when someone has used AI, in part because the detection software is unreliable.

As a result, many faculty have shifted from bans to more structured guidance. Some faculty now allow students to use AI for specific tasks, such as brainstorming, outlining or debugging code. The rationale is practical: AI is everywhere and already embedded in professional settings. College graduates are likely to use AI in the workplace.

Accepting AI is here to stay

More recently, college faculty at a range of schools have shifted the focus from whether students are using AI at all to whether students using AI can still analyze, question and justify their own research and conclusions. At the University of Michigan, for example, some faculty are redesigning assessments to include live debates and oral presentations. And across the U.S., professors are reviving oral exams, since live questioning makes it harder for students to rely solely on AI. Students must then verbally explain their reasoning and defend their work.

Different academic fields, though, are approaching AI in various ways. Many business programs, like the University of Pennsylvania's Wharton School, have moved quickly to bring AI into coursework and degree programs, often framing them as workforce preparation. Recent analysis of more than 31,000 syllabuses at a large research university in Texas showed a growing number of faculty in the fall of 2025 allowed students to use AI.
Business courses allowed the greatest use of AI, while humanities courses allowed it the least. The physical and life sciences fell in between. Across disciplines, AI was most often allowed at this school for editing, study support and coding. It was most commonly restricted for drafting, revising and reasoning or problem-solving. AI's role in higher education is not settled. Instead, it is evolving, dependent on different academic cultures.

Different schools, different approaches

Colleges' and universities' overall responses and approaches to AI are varied, as well. Research universities like Carnegie Mellon University and Stanford University are expanding on their long-standing investments in AI, moving quickly to develop new research centers, hiring faculty with AI expertise and creating new degree or certificate programs. Liberal arts colleges are moving too, but often with a different emphasis. The Davis Institute for AI at Colby College supports AI work across disciplines through new courses, faculty development and entrepreneurship. At the University of Richmond, a new center links AI to critical thinking and human values, so students can study AI's impacts and help shape it intentionally.

All of these schools are determining AI policy course by course. But these plans are not part of a comprehensive, school-wide strategy. Few schools have articulated coordinated, institution-wide plans on AI. Arizona State University is one example of a broader AI integration strategy, which spans academics and campus operations.

Comprehensive AI strategies are expensive. Meaningful integration may require campus licenses for AI services, upgraded computing systems and faculty training. These investments are difficult at a time when many colleges face enrollment declines and financial strain. Public trust in higher education is another concern that makes enacting broad change difficult.
Gallup surveys in 2023 and 2024 found that only 36% of Americans had high confidence in colleges and universities. Against this backdrop, AI is raising questions about how colleges prepare students for their careers. Employers still prize critical thinking and communication. Yet generative AI can mimic the appearance of thinking even when real understanding is absent. The tension is clear: If AI does the writing, coding or analysis, where do students do the thinking?

Rethinking learning

Rising use of AI is forcing colleges and universities to revisit what students should learn, how to measure this and the enduring value of a college degree. That shift moves the conversation beyond course-by-course changes to a shared strategy on what forms of knowledge and thinking are developed in college. Colleges may redesign assignments, expand oral and project-based assessments, and integrate AI literacy across disciplines. They may also clarify learning outcomes, invest in faculty development and find new ways to document students' judgment and problem-solving in an AI-assisted world.

The question is no longer whether AI belongs in higher education. The real question is whether colleges and universities will shape its role - or allow AI to quietly reshape them.
[6]
More Than Half of Teens Use Chatbots for Schoolwork, Survey Finds
More than half of teenagers in the United States use artificial intelligence tools for help with their schoolwork, according to a new study from the Pew Research Center. Fifty-four percent of teenagers aged 13 to 17 said they had used chatbots like OpenAI's ChatGPT or Microsoft's Copilot for tasks like researching school assignments or solving math problems, Pew said in a report published on Tuesday.

In 2024, 26 percent of U.S. teens said they had used ChatGPT for their schoolwork, according to a previous Pew study asking specifically about their use of that chatbot. That was a twofold increase compared with 2023, when only 13 percent of students said they used ChatGPT for school help, according to Pew, a nonpartisan research center.

The latest report, based on a survey of 1,458 teenagers and their parents last fall, found that A.I. use among teens varied widely. While 44 percent of teens said they used A.I. for "some" or "a little" schoolwork, 10 percent of teens said they turned to chatbots for help with all or most of their schoolwork. "We're definitely seeing that the use of A.I. chatbots for help with schoolwork is becoming a common practice for teens," said Colleen McClain, a senior researcher at Pew and a co-author of the study.

The findings come amid a heated national debate over the spread of generative A.I. systems, which can produce human-sounding texts, create realistic-looking images and make apps. A.I. proponents say schools must teach students to use and assess A.I. chatbots to prepare young people for changing workplace needs. Critics warn the bots can produce misinformation, mislead students, undermine critical thinking, help lead to self-harm and facilitate cheating. Several recent studies suggest chatbots may hinder critical thinking and impede learning.
In one study on reading comprehension from Cambridge University Press & Assessment and Microsoft Research, students assigned to take notes without using chatbots showed better reading comprehension than students assigned to use chatbots to help them understand text passages.

The Pew researchers asked teenagers a variety of questions about their views and use of A.I. Many young people use chatbots as multipurpose platforms for learning, entertainment, advice and companionship, the results show. Among the teenagers, 47 percent said they had used chatbots for fun, while 42 percent said they used the tools to summarize content. A smaller group, 12 percent, said they had used bots for advice or emotional support.

The report also shed light on how teenagers are using A.I. tools for school. Nearly half of teens said they had used chatbots for research, and more than 40 percent used A.I. for help solving math problems. More than a third said they had used bots to edit their own writing.

The survey did not ask students whether they had used chatbots to write essays or generate other assignments, the kind of cheating problems that teachers across the U.S. have warned about. But nearly 60 percent of teens told Pew that students at their school used chatbots to cheat "very often" or "somewhat often." The results, the report said, indicate that teenagers think "cheating with A.I. has become a regular feature of student life."
[7]
Most teens believe their peers are using AI to cheat in school
A majority of American teenagers believe that their peers are using artificial intelligence to cheat in school, according to new research, and more than 1 in 10 teens use AI for emotional support or advice.

The survey findings released Tuesday by the Pew Research Center provide a snapshot of a generation coming of age during the early wave of AI's spread across workplaces, educational institutions and personal life. Schools have struggled to adapt to AI cheating and at the same time are grappling with how to prepare students for a future that may be transformed by AI. The survey results could add fuel to concerns among some researchers and child advocates that young people are growing dependent on the technology with few guardrails to protect them.

"AI is now part of the story of teens and tech today," said Colleen McClain, a senior researcher at Pew and the lead author of the study. "Teens are using chatbots in a variety of ways -- the helpful, the less helpful."

The survey of 1,458 Americans ages 13 to 17 and their parents is one of the first comprehensive polls asking teenagers how they're using AI and what they think about the technology. About two-thirds of the teens told Pew that they had used chatbots such as OpenAI's ChatGPT and Microsoft's Copilot. (The Washington Post has a content partnership with OpenAI.)

Teens' most commonly cited uses of AI were to search for information and get help with schoolwork. Nearly 6 in 10 teens said that students at their school use AI chatbots to cheat on their work at least "somewhat often." The poll did not define what counts as cheating or directly ask teens if they personally used AI to cheat. McClain said that the teens' perceptions of cheating among their peers don't necessarily reflect what's really happening.
The roughly half of teenagers who said they used AI for schoolwork were far more likely to do so for tasks such as researching a topic than for editing something they had written, researchers said, a use that could more easily cross into cheating. The gap may show that teens are drawing lines between appropriate and inappropriate uses of AI for schoolwork. Sal Khan, founder of the education technology nonprofit Khan Academy, said schools should assume that students are using AI to cheat on schoolwork done out of class. He suggested that teachers should have students do writing assessments in class, to prevent AI use, or quiz students on assignments done at home to prove that they've learned essential material. But Guilherme Lichand, a professor of education at Stanford University, said cheating is not AI's novel or most serious harm. In a recent research experiment with middle school students, Lichand and his collaborators found that those who initially had access to AI assistance on a creative assignment, and then had it taken away, performed far worse on a subsequent word-association task than their peers who didn't have access to AI. Lichand said the research, which hasn't yet been published, suggests that young people who grow dependent on AI may lose faith in their abilities without it. "These kids started believing less in themselves," he said. A recent Brookings Institution report found similar harm from students' dependence on AI. About 12 percent of teens in the Pew survey said they had used AI for emotional support or advice -- a use that a majority of the surveyed parents disapproved of. Since ChatGPT and similar technologies captivated the public's attention starting in 2022, adults and children have used them for companionship and emotional help. Some parents and researchers have said that such use of AI may encourage delusional thinking and is unacceptably risky for young people.
The Pew study also found that younger people were a bit more optimistic than their elders about AI's future impact. About one-quarter of teens said they believe that AI will have a negative impact on society over the next 20 years. In similar Pew survey questions last year, half of American adults said they were "more concerned than excited" about the growing role of AI in daily life.
[8]
'A.I. Literacy' Is the New Driver's Ed at This Newark School
Reporting from North Star Academy Washington Park High School in Newark The first session of a new artificial intelligence class this month for high school seniors in Newark involved purely human intelligence. The students' assignment: to compare when they had passively scrolled through A.I.-driven social media feeds with times when they had actively selected the videos or Google search results they wanted to see. "Are you steering the technology or is it steering you?" asked a slide on the classroom whiteboard at Washington Park High School. In a class discussion that followed, a student named Adrian Farrell, 18, said he had taken charge of A.I. by asking a chatbot to check his math homework for accuracy. Brianna Perez, 18, said she went into "passenger mode" when using a Spotify feature called A.I. DJ. "It plays your favorite music so you don't have to change it," she said. Schools across the United States are hustling to introduce a new subject: A.I. literacy. In what some educators are calling a "driver's license" for A.I., the new lessons aim to teach students how to examine the latest tech tools and use them responsibly. Teachers say they want to prepare young people to navigate a world increasingly shaped by A.I., as chatbots manufacture human-sounding writing and employers use algorithms to help vet job candidates. Some schools are focused on A.I. chatbots, teaching students how to prompt Google's Gemini or Microsoft's Copilot. Some are introducing A.I. as a new class topic, with lessons examining societal consequences like the spread of A.I.-generated nude images, known as deepfakes. A.I. lessons are becoming more common in schools as a debate has exploded over whether chatbots are likely to improve -- or doom -- education. Proponents say schools must quickly teach young people to use A.I. to assist their learning, prepare them for jobs and help the United States compete with China. 
Last year, President Trump issued an executive order urging schools to teach "foundational A.I. literacy" starting in kindergarten. Education researchers warn that chatbots can make stuff up, enable cheating and erode critical thinking. A recent study from Cambridge University and Microsoft Research found that students who took notes on text passages had better reading comprehension than students who got help from chatbots. For now, "the risks of utilizing A.I. in education overshadow its benefits," the Brookings Institution concluded last month in a report on school A.I. use. Amid the debate, schools like Washington Park High are staking out a middle ground by treating A.I. as if it were a car and helping students develop rules for the road. Mike Taubman, 45, a career explorations teacher who co-developed the school's new literacy course, compared the class to preparing teenagers for their driver's license exam. "Where do you want to go, and can A.I. help you get there?" Mr. Taubman asked. Students needed to learn to drive A.I. tools, analyze what's under the hood, develop guidelines for personal use and design ideal safety policies, he said. "What do I think the laws, the rules, the norms should be around A.I. for my city, my country?" he added. Washington Park, a four-story building with a red brick facade in downtown Newark, serves about 900 high school students. It is part of Uncommon Schools, a charter school network in the Northeast focused on college and career preparation. Mr. Taubman and Scott Kern, a U.S. history teacher, came up with the idea for the new elective class on A.I. Both had already introduced A.I. tools and topics in their regular courses. Mr. Kern, 45, recently participated in a program at Playlab, a nonprofit that helps teachers create customized A.I. apps for their courses. To help his students hone their argumentative writing, Mr. Kern developed chatbots for his U.S. history classes based on his course materials and student assessments. 
He also developed firm guidelines for students on when to use -- and when not to use -- A.I. bots. On a recent Tuesday, Mr. Kern taught an Advanced Placement U.S. History class on the Chicago Race Riot, violent protests set off by the murder of a Black teenager in 1919. First, he asked students to read century-old newspaper clippings and other historical documents. Then, he led a class discussion on broader trends that had helped fuel the tensions. Next, Mr. Kern asked students to spend a few minutes describing the main cause of the riot to a chatbot he had created for the class. Allyson Johnson, 17, opened her laptop and typed in her answer: entrenched racial segregation. "Let me push you on this," the chatbot responded. Segregation had existed for decades before 1919. So what specific factors, the bot asked, had caused a tense situation to suddenly escalate "into explosive violence"? Ms. Johnson said she enjoyed sparring with the A.I. because "the chatbot asked me different questions that pushed my argument even more." After a few minutes, Mr. Kern told students that their time with the chatbot was up and resumed the class discussion. Fundamental student learning should remain an A.I.-free activity, he said. "Anytime where we want kids interacting with each other or doing initial critical thinking, I would never want A.I. or any sort of technology of that ilk to come in and interfere with that," Mr. Kern said. In another part of the school, Mr. Taubman was leading a career explorations course. He has developed a variety of career simulation chatbots for the class. One enables students interested in fields like speech pathology to create and learn about virtual patients with detailed medical histories. Aniya Gervais, 17, is interested in becoming a mental health nurse practitioner. For a class project, she wanted to create a hypothetical nonprofit for teenagers with mental health issues. But after discussing her plan with one of Mr. Taubman's class A.I. 
bots, she concluded her initial idea was too broad. So she narrowed the project to focus on teenagers struggling with both depression and substance abuse. Ms. Gervais said she often used ChatGPT for tasks like coming up with pasta recipes or planning fitness routines. "Before, I was telling A.I. what to do, and it was just telling me what to do," Ms. Gervais said. "But now," she said of Mr. Taubman's classroom chatbots, "I'm asking the A.I. questions that will help me get to the answer." This semester, Mr. Kern and Mr. Taubman decided to join forces to formalize their A.I. education methods in an elective class. Eighteen students signed up. During the first class this month, students learned about how some film directors had started using A.I. to generate movie scenes. Should humans still get credit for that? As long as people directed the video-generating bots, some students said, they would consider humans to be a film's authors. Other students argued that tech giants had trained A.I. on decades of artists' work, potentially amounting to intellectual property theft. (The New York Times has sued OpenAI and Microsoft over copyright infringement claims. Both companies have denied wrongdoing.) Mr. Kern and Mr. Taubman acknowledged that their driver's license metaphor had limits. Until chatbots have built-in safeguards akin to seatbelts and airbags, it will be difficult for students to make truly informed decisions about the risks of powerful A.I. systems. Mr. Kern said he hoped that students would one day "have influence to build these tools in a way that's better and more equitable and more environmentally friendly than what exists now." Ms. Perez, a senior taking the new course, said she already felt empowered learning about A.I.'s uses and risks. The school hopes to offer the A.I. literacy class soon to all 12th graders. "If it wasn't for courses like this that are being implemented, we could really go into our future like not knowing what's coming," Ms. 
Perez said.
[9]
Teens admit their true feelings about AI chatbots
Teens turn to AI more commonly than their parents might know. Whether or not their parents realize it, nearly two-thirds of American teens say they use artificial intelligence chatbots for activities including homework help, research, video creation, fun and entertainment, casual conversation, and emotional support or advice, according to a new study from the Pew Research Center. The study's survey of 1,458 U.S. teens and their parents last fall also revealed that the young participants had considered the complex tradeoffs of using AI. Nearly a third of respondents said that AI will positively affect society over the next two decades, while a quarter believed it would have a negative impact. The optimistic survey participants believed AI would lead to gains in efficiency, productivity, and learning. Those with a less hopeful outlook noted the risks of over-reliance on AI, job and creativity loss, and the threat of not being able to discern what's real and what's AI-generated. "It will meet the needs of almost everything," said one anonymous male survey respondent. "Answers to the hardest questions. No need for research!" A skeptical teen girl had a different take: "People will be afraid to be creative, or won't see a need for it anymore. It makes people lazy and takes away jobs." Overall, 36 percent of teens thought AI would benefit them personally whereas 15 percent expected the technology would have a negative influence on their lives. A third anticipated both positive and negative outcomes. Colleen McClain, a senior researcher at the Pew Research Center, told Mashable that the findings contrast with the center's past research on adults, who tend to be more pessimistic about the long-term implications of AI adoption. "We see teens are, yes, kind of navigating this rapidly changing world," McClain said.
"They're making up their minds about how they feel, but they have some predictions for society into the future." Nikki Iyer, co-chair of the youth-led advocacy coalition Design It For Us, said she felt the report reflected what she sees in her day-to-day life as both an organizer and a third-year college student at the University of California, Berkeley. She was unsurprised that 54 percent of the teens surveyed said they used AI for homework help. "If you walk around the cafe, odds are you will see probably [that] percentage," consulting a chatbot for schoolwork, Iyer said. Yet, only 1 in 10 surveyed said they completed all or most of their assignments with the technology's support. The finding starkly highlights one of Iyer's personal concerns about youth AI use: Cognitive outsourcing and the possible decline in critical thinking as a result. She believes AI literacy is essential for avoiding the pitfalls of over-reliance on the technology for thinking tasks. The survey also illustrated emerging differences between teens depending on their race, ethnicity, and income. Black and Hispanic teens, for example, were more likely to use chatbots in general and for schoolwork compared to white teens. Additionally, 21 percent of Black teens said they turned to AI chatbots for emotional support or advice compared to about one in 10 Hispanic and white teens. Income also appears to be associated with how often teens use AI for schoolwork. Twenty percent of teens living in households making less than $30,000 a year said an AI chatbot helped them do most or all of their homework. Only 7 percent of teens in higher-earning households reported the same behavior. Iyer, 20, acknowledges that AI could benefit student learning, but she wants to ensure the balance of power tilts away from design choices that undermine young people's agency and attention span. 
"I think the problem comes when we are serving AI, and we are being exploited by AI, and AI is using us to fulfill a mission of a corporation," she said. Iyer believes it's critical for young people to help shape the future of AI through organizing, lobbying, and providing direct feedback to designers who create AI products. Design It For Us has previously backed AI safety, transparency, and accountability legislation in New York and California. Notably, the Pew Research report didn't ask whether teens seek mental health advice from chatbots or use them for romantic role-play. Parents of teens who consulted ChatGPT about their mental health and suicidal feelings prior to taking their own lives have sued OpenAI, the maker of ChatGPT, alleging that the product coached their child on how to die. OpenAI has denied the allegations in one of the cases. Separately, the online safety platform Aura, which monitors teen users as part of its family or kids membership, recently published a report showing how tweens and teens engage in romantic role-play with chatbots. Sexual and romantic conversations with chatbots peaked at age 13, amounting to 63 percent of their exchanges. Those messages often turned violent. But Aura also found that role-playing decreased significantly after age 15. Earlier this year, Character.AI, a chatbot platform popular with teens, settled lawsuits filed by bereaved parents alleging that the company's chatbots contributed to their children's suicide deaths. In some cases, those chatbots exchanged sexually explicit messages with the teen users. Character.AI stopped permitting teens to engage in open-ended conversations with chatbots in late 2025. The Pew Research study also suggests that parents are unaware of their children's AI use. Though two-thirds of teens reported using chatbots, their parents offered a much lower estimate of that figure, at 51 percent. "We do find that some parents are relatively in the dark," McClain said.
[10]
In some schools, chatbots interrogate students about their work. But the AI revolution has teachers worried
The fast take-up of innovative technology risks creating a 'two-speed system', an Independent Schools Australia paper warns Once upon a time, school students would submit an essay, and teachers would mark it. Job done. Enter "Thinking Mode". Now, in some Australian schools, once a student finishes an assignment an AI chatbot will interrogate them about it: put them on the spot in a two-way dialogue, to make sure they really understood what they wrote. "Can you explain this a little bit more?" the chatbot might say, or, "What do you mean by that word?" It's not just about hammering in the lesson. It's also a way to ensure students do their own thinking, and haven't resorted to plagiarism or ChatGPT. At Hills Christian Community School in Adelaide Hills, the technology is just one way teachers and students are using artificial intelligence and other brand-new tech to further learning. Students also use sensors, drones and coding to learn about natural ecosystems, from rivers to pollinators and bushland habitats. Students with disabilities, including limited speech, are accessing Meta AI glasses with inbuilt speakers that explain what is happening without disrupting the classroom. The school's leader of digital innovation, Colleen O'Rourke, says they have a philosophy: "AI tools are used by educators to amplify great practice, not dilute it". "The human element cannot be lost in this," she says. "AI is the co-collaborator in the triad of the teacher and the student." But while AI is being rolled out at Australian schools in innovative ways, it is not coming to all of them, and not equally. The peak body for independent schools is urging the federal government to take up a national AI pilot or risk creating a "two-speed system" and widening educational divide. The Independent Schools Australia (ISA) paper, released on Monday, analysed how schools across the nation were integrating generative AI into teaching and learning, three years after the release of ChatGPT. 
It found schools were adopting AI at widely varying speeds, depending on their geography and resources. Just two jurisdictions, New South Wales and South Australia, have rolled out AI programs to public schools, after a ban on the technology was overturned in late 2023. The chief executive of ISA, Graham Catt, said Australia was at a critical point in determining whether AI became a tool for equity or inequality. "If we don't act deliberately now, we risk creating a two-speed system," Catt said. "Some schools will surge ahead, while others struggle to keep up." The paper called on the federal government to launch a national, sector-blind pilot AI program, to provide a pathway on how to ethically adopt the technology and where to direct funding. The latest Teaching and Learning International Survey (TALIS), released in 2024, found two-thirds of Australian teachers in secondary years and just under half of primary school teachers used AI in their work, placing the nation among countries with the highest uptake of the technology. But teachers also expressed caution about the negative impacts AI could have on student wellbeing, privacy issues and the possibility of plagiarism, indicating a need for better guidance and safeguards. In independent schools, large language models (LLMs), a type of AI system, are already being used to help teachers with marking, provide student feedback, identify learning gaps and act as a one-on-one tutor. NSWEduChat, a department-owned generative AI tool, has been rolled out to all public schools in NSW to help teachers with lesson planning and students to study by asking guided questions to encourage critical thinking. South Australia's EdChat chatbot was also distributed statewide in 2025. Early results show it has saved time for teachers and particularly helped students with language or learning barriers. O'Rourke says teachers are scrambling to try to understand how technology is changing, and need proper training.
"We can't teach our kids how to use it responsibly if teachers don't know how to use it responsibly."
[11]
In some classrooms, teachers ask: Can AI teach students to write better?
(Illustration by Junne Joaquin Alcantara/The Washington Post; iStock) When Craig Schmidt gave his high school English students an assignment based on "Fahrenheit 451," he threw them a curveball: He told them to use ChatGPT. Schmidt asked the class to write several paragraphs reflecting on the dystopian novel, then feed them into the artificial intelligence chatbot for feedback. He distributed worksheets explaining how to use ChatGPT as a "writing partner" by instructing it to assume the persona of a critic or teacher and describing the feedback it should provide. "The A.I. will NOT always give you great advice!" Schmidt wrote in a worksheet for students. "It might suggest something that doesn't fit what you want to say. You need to use your EDITING skills." Vince Lombardo, one of Schmidt's students in that 2024 class, said it was the first time one of his teachers had suggested using an AI bot during an assignment, rather than warning students against using the tools at all. He fed paragraphs from his assignment into ChatGPT and, using Schmidt's worksheet, crafted a prompt to ask for advice. There were some points Lombardo disagreed with, like starting the essay with a rhetorical question, but to his surprise, he found most of ChatGPT's feedback helpful. "I thought it was great," Lombardo, 15, said. "Ever since then, I've kind of been doing the same thing." As educators around the country grapple with the effects of AI, a growing cohort of English teachers are finding ways to bring tools like ChatGPT into their pedagogy as tutors and brainstorming aides. For students like Lombardo, learning how to prompt a chatbot for feedback -- and when to question AI's advice -- has become an essential part of the writing process. Coaching from AI, personalized and accessible at any time, is now shaping how they write. "Sometimes I can go into AI and be like, 'My teacher wants me to be able to do this,'" Lombardo said. 
"'How can I do that within my writing?'" (The Washington Post has a content partnership with ChatGPT developer OpenAI.) Schmidt, an English teacher of nearly 30 years in Libertyville, Illinois, said he was dismayed when he began encountering student work that appeared AI-generated. Software for detecting AI writing was unreliable, and he said he found it difficult to confront students about AI use. Schmidt had to decide on his own how to handle it. Several years after generative AI became accessible to students, figuring out how best to include -- or exclude -- AI tools in the classroom often still falls to individual school districts and teachers. "We don't have a department policy," Schmidt said. "The district doesn't. I think everybody feels it's still kind of the Wild West." On one end of the spectrum, some teachers are letting students draft their own AI policies. On the other, the most skeptical teachers are using formats like oral exams to restrict the use of AI as much as they can. Schmidt has joined a growing cohort that is trying to find a middle ground. "Whether we liked it or not, the technology was going to be in the hands of our students," said Kimberly Cooney, an English teacher at Chattahoochee High School in Johns Creek, Georgia. "And so we could either teach them how to use it ethically and responsibly and teach them to actually augment their thinking, or we could, you know, do nothing." One of Cooney's lesson plans teaches her students to use AI to help brainstorm themes in the Arthur Miller play "The Crucible," walking them through the technique of structuring AI prompts and then asking them to paraphrase the chatbot's responses. In another, she shows the class an AI-generated paragraph on an essay assignment and asks students to critique it. "I said, 'Okay, AI works on algorithms, and it works on predictability. And as a result of that, it tends to create the most predictable, mid-level, sort of bland writing that you can have,'" Cooney said. 
"... They need to be making much more assertive arguments than that." Jill Stedronsky, an English teacher in Basking Ridge, New Jersey, has had some of her eighth-graders use prompts to create AI "writing partners" intended to be regular sources of advice and feedback. Throughout the year, her students entered journal entries and essays into the chatbot and reflected on them in conversation with the AI tool. Both Cooney and Stedronsky see teaching students how to prompt AI bots as a way to help them think about their writing. "In creating the 'writing partner,' they had to really think about what they wanted to ask and what kind of feedback [to ask for]," Stedronsky said. Are chatbots giving out good writing advice? Schmidt, of Libertyville High School in Illinois, thinks so, most of the time. But he, Cooney and Stedronsky are quick to emphasize to students that they should look at the suggestions they get from their AI "writing partners" critically. Their exercises usually require students to critique the advice they get from AI. (Schmidt said he has also seen less cheating with AI since using chatbots in his teaching.) When Lombardo, Schmidt's former student, first asked ChatGPT for feedback in class, the chatbot told him to simplify some of his sentences, suggested changes to make the writing less "choppy" and advised him to write a stronger conclusion. He discarded some suggestions but found most helpful. Lombardo was moved up to an honors course after taking Schmidt's class, he said. Using AI to plan and edit his assignments has now become routine. "I feel like now, I'm able to write stuff better than I ever have been, even without using AI, because like, I've gotten those different kinds of suggestions over and over again," he said. Amiyah Harish, a high school junior who was introduced to AI "writing partners" in Stedronsky's class, said she has kept up the habit of using an AI tool in her writing. 
At each step in an assignment -- after she brainstorms ideas, drafts an outline or writes a paragraph -- she feeds her work into a chatbot to look for improvements. She likened the practice to talking through an essay with an attentive friend. "It's kind of like a discussion," said Harish, 17. "Instead of the teacher giving a lecture about 'This is the exact formula for how I want you to write the essay,' it's the student discovering their own voice, using AI as a tool." As Harish and her peers adopt AI, school and classroom policies are continuing to evolve around them. Stedronsky said her school recently adopted new policies that restrict some chatbots, including the website where she had students create AI writing partners. She said teachers should continue to find ways to use AI to promote inquiry and critical thinking. "If we don't ... we will be left with students who cheat and teachers who revert to pen and paper, rather than using AI to be a critical thinking tool," Stedronsky said.
[12]
'Students can't reason': Teachers warn AI is fueling a crisis in kids' ability to think | Fortune
In the 1980s and 1990s, if a high school student was down on their luck, short on time, and looking for an easy way out, cheating took real effort. You had a few different routes. You could beg your smart older sibling to do the work for you, or, a la Back to School (1986), you could even hire a professional writer. You could enlist a daring friend to find the answer key to the homework on the teacher's desk. Or, you had the classic excuses to demur: My dog ate my homework, and the like. The advent of the internet made things easier, but not effortless. Sites like CliffsNotes and LitCharts let students skim summaries when they skipped the reading. Homework-help platforms such as GradeSaver or Course Hero offered solutions to common math textbook problems. The thing that all these strategies had in common was effort: there was a cost to not doing your work. Sometimes it was more work to cheat than it was just to have done the work yourself. Today, the process has collapsed into three steps: log on to ChatGPT or a similar platform, paste the prompt, get the answer. Experts, parents, and educators have spent the past three years worrying that AI made cheating too easy. A massive Brookings report released in January suggests they weren't worried enough: The deeper problem, the report argues, is that AI is so good at cheating that it's causing a "great unwiring" of students' brains. The report concludes the qualitative nature of AI risks -- including cognitive atrophy, "artificial intimacy" and the erosion of relational trust -- currently overshadows the technology's potential benefits. "Students can't reason. They can't think. They can't solve problems," lamented one teacher interviewed for the study. The findings come from a yearlong "premortem" conducted by the Brookings Institution's Center for Universal Education, a rare format for Brookings to use, but one they said they preferred to waiting a decade to discuss the failures and successes of AI in school.
Drawing on hundreds of interviews, focus groups, expert consultations, and a review of more than 400 studies, the report represents one of the most comprehensive assessments to date of how generative AI is reshaping students' learning. The report, titled "A New Direction for Students in an AI World: Prosper, Prepare, Protect," warns that the "frictionless" nature of generative AI is its most pernicious feature for students. In a traditional classroom, the struggle to synthesize multiple papers into an original thesis, or to solve a complex pre-calculus problem, is exactly where learning occurs. By removing this struggle, AI acts as the "fast food of education," one expert said: it provides answers that are convenient and satisfying in the moment, but cognitively hollow over the long term. While professionals champion AI as a tool to do work that they already know how to do, the report notes that for students, "the situation is fundamentally reversed." Children are "cognitively offloading" difficult tasks onto AI, getting ChatGPT or Claude to not just do their work but also read passages, take notes or even just listen in class. The result is a phenomenon researchers call "cognitive debt" or "atrophy," where users defer mental effort through repeated reliance on external systems like large language models. One student summarized the allure of these tools simply: "It's easy. You don't need to (use) your brain." In economics, we understand that consumers are "rational": they seek maximum utility at the lowest cost to them. The researchers argue that the education system, as is, is built on a similar incentive structure: students seek maximum utility (i.e., the best grades) at the lowest cost (time) to them. Thus, even high-achieving students are pressured to use a technology that "demonstrably" improves their work and grades.
This trend is creating a positive feedback loop: students offload tasks to AI, see positive results in their grades, and consequently become more dependent on the tool, leading to a measurable decline in critical thinking skills. Researchers say many students now exist in a state they call "passenger mode," where students are physically in school but have "effectively dropped out of learning -- they are doing the bare minimum necessary." Jonathan Haidt once described earlier technologies as a "great rewiring" of the brain, making the experience of communication detached and decontextualized. Now, experts fear AI represents a "great unwiring" of cognitive capacities. The report identifies a decline in mastery of content as well as reading and writing -- the "twin pillars of deep thinking." Teachers report a "digitally induced amnesia" in which students cannot recall the information they submitted because they never committed it to memory. Reading skills are particularly at risk. The capacity for "cognitive patience," defined as the ability to sustain attention on complex ideas, is being diluted by AI's ability to summarize long-form text. One expert noted the shift in student attitudes: "Teenagers used to say, 'I don't like to read.' Now it's 'I can't read, it's too long.'" Similarly, in the realm of writing, AI is producing a "homogeneity of ideas." Research comparing human essays to AI-generated ones found that each additional human essay contributed two to eight times more unique ideas than those produced by ChatGPT. Not every young person feels this type of cheating is wrong. Roy Lee, the 22-year-old CEO of AI startup Cluely, was suspended from Columbia after creating an AI tool to help software engineers cheat on job interviews. In Cluely's manifesto, Lee admits his tool is "cheating," but says "so was the calculator. So was spellcheck. So was Google. Every time technology makes us smarter, the world panics."
The researchers, however, say that while a calculator or spellcheck are examples of cognitive offloading, AI "turbocharges" it. "LLMs, for example, offer capabilities extending far beyond traditional productivity tools into domains previously requiring uniquely human cognitive processes," they wrote. Despite how useful AI is in the classroom, the report finds that students use AI even more outside of school, warning of the rise of "artificial intimacy." With some teenagers spending nearly 100 minutes a day interacting with personalized chatbots, the technology has quickly moved from tool to companion. The report notes that these bots, particularly character chatbots popular with teens such as Character.AI, use "banal deception" -- personal pronouns like "I" and "me" -- to simulate empathy, part of a burgeoning "loneliness economy." Because AI companions tend to be sycophantic and "frictionless," they provide a simulation of friendship without the requirement of negotiation, patience, or the ability to sit with discomfort. "We learn empathy not when we are perfectly understood, but when we misunderstand and recover," one Delphi panelist noted. For students in extreme circumstances, like girls in Afghanistan who are banned from physical schools, these bots have become a vital "educational and emotional lifeline." For most, however, these simulations of friendship risk at best eroding "relational trust," and at worst can be downright dangerous. The report highlights the devastating risks of "hyperpersuasion," noting a high-profile U.S. lawsuit against Character.AI following a teenage boy's suicide after intense emotional interactions with an AI character. While the Brookings report presents a sobering view of the "cognitive debt" students are accruing, the authors say they are optimistic that the trajectory of AI in education is not yet set in stone.
The current risks, they say, stem from human choices rather than any technological inevitability. To shift the course toward an "enriched" learning experience, Brookings proposes a three-pillar framework:

PROSPER: Focuses on transforming the classroom to adapt to AI, such as using it to complement human judgement and ensuring the technology serves as a "pilot" for student inquiry instead of a "surrogate."

PREPARE: Aims to build the framework necessary for ethical integration, including moving beyond technical training toward "holistic AI literacy" so students, teachers, and parents understand the cognitive implications of these tools.

PROTECT: Calls for safeguards for student privacy and emotional well-being, placing responsibility on governments and tech companies to set clear regulatory guidelines that prevent "manipulative engagement."
[13]
Teens are using AI frequently in their daily lives, and many parents aren't aware, survey finds
Cara Tabachnick is a news editor at CBSNews.com. Cara began her career on the crime beat at Newsday. She has written for Marie Claire, The Washington Post and The Wall Street Journal. She reports on justice and human rights issues. Contact her at [email protected]

Parents are often caught off guard by what their teens are doing in daily life -- and when it comes to AI, the "perception gap" might be larger than they thought, according to a Pew Research Center survey released Tuesday. The survey found a significant gap exists between parents' perceptions and their teens' actual use of AI chatbots. About 64% of U.S. teens reported using AI chatbots, while 51% of parents said their teens use them. "Technology is not just a teen issue or a parent issue -- it's a family issue," said Pew senior researcher Colleen McClain. She said researchers surveyed both teens and parents and heard different perspectives on managing AI usage. Just over half (54%) of the teens surveyed said they've used AI chatbots for help with schoolwork, while about 1 in 10 said they've gotten emotional support from an AI chatbot. Teens, often at the forefront as users of new technology, told researchers they see AI as a tool in their daily lives, and they were more positive than negative in their views about how AI will impact them personally. Parents have a "lot to juggle," McClain said, and many are concerned about their children's use of AI chatbots -- especially after several high-profile cases in which teens died by suicide after prolonged interactions with the new technology. "It's complicated, it's nuanced, it's not a one-size-fits-all," McClain said. She said the survey -- the most in-depth yet on teens and AI -- found many parents don't speak to their teens about their AI usage; just 4 in 10 parents said they do. Many don't make managing screen time their first priority amid other life demands, and some parents said they feel judged for doing so.
Dr. Amber W. Childs, an associate professor of psychiatry at the Yale School of Medicine, told CBS News the question shouldn't be if teens are using AI but how they are using the technology. She said most teens are using the technology for mundane daily tasks, but parents need to know if "they're using it in the absence of other sources of connection or coping skills and support." Around 12% said they've gotten emotional support through chatbots, and Childs said teens using the technology as their sole source of emotional support is concerning. Psychologist Joshua Goodman, an associate professor at Southern Oregon University, said teens who don't feel comfortable talking to parents or others about their sexuality or orientation might feel more comfortable speaking to AI about their sexual health. These teens are "not reaching out for support" from adults in their lives, but that's not necessarily a bad thing, Goodman said. He said parents need to look for warning signs: teens constantly using AI, the technology replacing their critical thinking, or signs of depression. "You want to get curious," Childs said, "but you also want to be communicating to connect." She cautioned parents not to just pass down information and warnings to their teens, but to use the conversation to understand how AI is being used in their lives. Parents can set up boundaries and expectations around use of the technology that align with family values, she said. She said most teens are probably using AI to improve their life skills, like learning new languages or doing schoolwork. About a quarter of teens surveyed said chatbots have been extremely or very helpful for completing their schoolwork, while another 25% say they've been somewhat helpful. Most said they use the technology for research or help with math problems. About 1 in 10 teens said they do all or most of their schoolwork with chatbots' help.
More than half of teens say they've used chatbots to search for information and almost half say they've done so for fun or entertainment. Some, however, are wary about the way the technology will affect their lives. One teenage boy told Pew, "It's already being used to spread propaganda, there's no end to what it can do, it's hard to tell what's real or AI online anymore." Pew surveyed 1,458 U.S. teens and their parents from Sept. 25 to Oct. 9, 2025.
[14]
26% of US teens in survey think AI will have negative impact on society
A majority of U.S. teens said they use AI chatbots, while about 26% think AI will have a negative impact on society in the next 20 years, according to a Pew Research Center survey of U.S. teens ages 13 to 17. Over half use them to search for information and get schoolwork help, while fewer rely on them heavily for completing schoolwork. About 26% believe AI will have a negative societal impact, citing overreliance, loss of creativity, job loss, and misinformation. Around 25% are extremely or very confident, and about 30% are somewhat confident in using AI chatbots.
[15]
Teens Use AI Chatbots to Fill Roles Hitherto Done by Family and Friends: Pew Research
Another concern is that barely half the parents surveyed are aware of their teens' AI use at home, though many of the kids themselves divulge it. New research conducted in the United States suggests that nearly a third of teen users of AI chatbots are using them to perform roles hitherto done by friends or family. Of course, they also use chatbots for routine tasks like searching for information and helping with schoolwork, says the report published by the Pew Research Center. Beyond how teenagers use these chatbots, the survey also showed that the young participants had weighed the trade-offs of using AI, with nearly a third believing that AI will have a positive impact on society over the next 20 years while a quarter think things could go awry. In an earlier survey report shared by Pew in December, teens had expressed mixed feelings about social media's impact but accepted that it remains a key part of their lives, with some using it "almost constantly." With the rise of AI chatbots, teens gravitated toward them, with over two-thirds of respondents accessing them daily. Of course, the survey's sample size is only about 1,458 teens and their parents, all of them US residents. But it takes no prizes to guess that what works for the US would work in India too, given the tendency of tier-2 city residents to follow the trends of big cities. Often considered a symbol of an upwardly mobile family, such trends have already resulted in social media apps having their second-largest user base in India, after the US. Per the survey, 57% of US users of AI chatbots like Gemini, OpenAI, Perplexity and Character.AI are seeking information while 54% are getting help with schoolwork. Sixteen percent use AI for casual conversations while 12% use AI chatbots for emotional support or advice.
"Concerns about young people using chatbots for companionship have caught the attention of parents, advocates and lawmakers. Our survey finds some teens are using chatbots in more personal ways... Still, majorities of teens report not doing these things," says Pew Research in a post on its website. The research deep-dives into some of these aspects, including the use of AI for cheating. While teachers find the rising use of AI in classrooms a thorny issue, the survey shows that many teens think cheating with AI is a regular feature of student life. Nearly 60% think using AI to cheat is a regular occurrence at school. And 75% of teens who have never used AI chatbots for schoolwork strongly believe that chatbots are used to cheat. Another finding: of the roughly 30% of teens who see AI positively impacting society, one-fifth believe it will be good for learning, information and higher efficiency. A much smaller 8% see it as the evolving technology of the future, while work enhancement (8%), education (6%) and health (5%) are other areas where they feel AI will help. Barely 10% of those surveyed think AI chatbots can spread misinformation, making it hard to tell what's real and what's fake. A similar number believe that AI is a genuine threat because it is ripe for misuse. However, the biggest concern is around teens seeking solace from AI, with mental health professionals continuously asking for guardrails. Another disconcerting issue is that barely half of the parents surveyed realised that their teenagers used AI for a variety of things, even though 64% of the teens themselves reported using AI chatbots.
While most parents are okay with kids using AI for information and schoolwork, 58% of those surveyed said they weren't okay with their child using AI for emotional support or even just having casual conversations. In the recent past, AI safety has taken centre-stage, with some companies like OpenAI battling lawsuits over teen suicides and others like Character.AI disabling the chatbot experience for users under the age of 18. In fact, such was the public angst in the US that OpenAI unplugged one of its older GPT versions - the sycophantic GPT-4o model. Pew Research shared some feedback from respondents as part of the survey report. A male teen claimed that AI chatbots met his needs for almost everything and answered the hardest questions, adding, "There is no need for research." Given the tendency of current models to hallucinate when they don't have answers, this is a dangerous inference. Others balanced this view, with a teen girl stating that "People will be afraid to be creative, or won't see a need for it anymore. It makes people lazy and takes away jobs." It looks like a balance is being struck at this precise moment. However, it remains to be seen how long these cash-rich AI giants will continue to plug an app that is at best a virtual assistant and nothing more.
New data from the Pew Research Center reveals that nearly two-thirds of American teenagers use AI chatbots, with 54% turning to them for schoolwork help. But the findings also expose troubling trends: 10% of students let AI do most or all of their assignments, while 12% seek emotional support from chatbots. As artificial intelligence reshapes education, schools face urgent questions about academic integrity, learning outcomes, and student wellbeing.
Artificial intelligence has rapidly embedded itself into the daily lives of American teenagers, with 64% now using AI chatbots, according to a Pew Research Center report published in February 2026 [1]. The figure reveals a significant gap between perception and reality: only 51% of parents believe their teen uses these tools [1]. This disconnect highlights how quickly educational technology has infiltrated classrooms, often outpacing parental awareness and institutional oversight.
Source: Fortune
The most common application of student AI use centers on information gathering, with 57% of teens using chatbots to search for information [1]. Close behind, 54% report using AI for schoolwork assistance [1]. When broken down further, 48% use AI chatbots to research specific topics, 43% to solve math problems, and 35% to edit writing assignments [2]. These patterns suggest teens view AI primarily as an academic tool, though the line between legitimate help and academic dishonesty remains blurred.
Source: PC Magazine
While many students use AI for homework help in limited ways, the data exposes more troubling patterns. Ten percent of teens admit to using chatbots to complete all or most of their schoolwork, with an additional 21% using them for some assignments and 23% for a little [3]. The problem appears most acute in lower-income households, where 20% of students in families earning less than $30,000 annually rely on AI for most or all schoolwork, compared with just 7% in households making over $75,000 [3].

Plagiarism concerns extend beyond individual use. Nearly 60% of teens believe their peers regularly use AI to cheat at school [2]. Specifically, 34% report that students at their school use AI to cheat very or extremely often, while 25% say it happens somewhat often [3]. Only 14% believe fellow students rarely or never use AI to cheat [3]. This widespread perception of academic dishonesty threatens to undermine trust in educational institutions and devalue genuine achievement.

Educators and researchers are increasingly concerned about AI's impact on learning processes, not just academic integrity. Stanford School of Engineering professor Mehran Sahami argues that artificial intelligence has broken a fundamental assumption in education: that strong products indicate strong learning processes [4]. Students can now generate impressive work without engaging in meaningful learning, forcing educators to assess the actual learning process rather than just evaluating end products [4].

Research presented at Stanford's AI+Education Summit revealed alarming findings about creativity. Assistant professor Guilherme Lichand studied middle school students in Brazil and found that while students with AI assistance performed better on creative tasks while using the tool, those benefits disappeared when the tool was removed [4]. Most concerning, students who lost AI access after having it performed four times worse than their initial advantage, suggesting AI had damaged their creative self-concept [4].

Beyond academics, 12% of U.S. teens use AI chatbots for emotional support or advice, while 16% engage in casual conversation with these tools [1]. Mental health professionals express serious concerns about this trend. Dr. Nick Haber, a Stanford professor researching the therapeutic potential of large language models, warns that these systems can be isolating. "There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating -- if not worse -- effects," he told TechCrunch [1].
Source: CXOToday
General purpose tools like ChatGPT, Claude, and Grok are not designed for therapeutic use, and in extreme cases can have life-threatening psychological effects [1]. Character.AI disabled its chatbot experience for users under 18 following public outcry and lawsuits over two teenagers' suicides that occurred after prolonged conversations with the company's chatbots [1]. Parents show greater concern about emotional applications: only 18% approve of teens using AI for emotional support, while 58% actively disapprove [1].

Higher education institutions are taking varied approaches to AI use on campus. Most schools have issued guidance rather than sweeping mandates, with liberal arts colleges like the University of Richmond, Bard College, and Trinity College emphasizing ethical and responsible use [5]. A 2024 study of 116 research universities found that instructors largely determine course policies individually, with few campus-wide bans [5].

Analysis of over 31,000 syllabuses at a large Texas research university showed that business courses allow the greatest AI use, while humanities courses permit it least [5]. AI was most commonly allowed for editing, study support, and coding, but restricted for drafting, revising, and reasoning or problem-solving [5]. Some faculty have shifted to oral exams, live debates, and presentations to ensure assessment reflects genuine understanding rather than AI-generated content [5].
Educators increasingly recognize that AI literacy must become a core component of curricula. Mike Taubman, a teacher at North Star Academy in Newark, developed an "AI driver's license" curriculum that teaches students to choose destinations, learn how these tools work, understand limitations and risks, and define ethical boundaries [4]. Without structured AI policy and education, students teach themselves, and 70-80% use AI to short-circuit learning rather than enhance it, according to Sahami [4].

The Stanford AI+Education Summit emphasized that equity in AI access and education remains critical. Wendy Kopp, founder of Teach for All, noted that AI amplifies whatever educational foundation already exists [4]. In mission-driven schools with strong pedagogy, AI becomes powerful for educators and learners; without strong guidance, it becomes a distraction [4]. Experts stressed that technology cannot replace human connection in education, particularly for student support and the development of critical thinking skills [4].

Teens themselves hold mixed views about artificial intelligence's societal impact. When asked about the next 20 years, 36% expect AI will have a positive effect on them personally, while 32% think it will be equally positive and negative, 15% expect negative consequences, and 17% remain unsure [3]. Their outlook dims when considering society broadly: 31% predict positive results, 34% see equal positives and negatives, 28% expect negative outcomes, and 8% are unsure [3]. This contrasts with only 17% of U.S. adults who said AI would have a positive impact over the next 20 years in a previous Pew study [2].