6 Sources
[1]
As AI tools reshape education, schools struggle with how to draw the line on cheating
The book report is now a thing of the past. Take-home tests and essays are becoming obsolete. High school and college educators around the country say student use of artificial intelligence has become so prevalent that to assign writing outside of the classroom is like asking students to cheat.

"The cheating is off the charts. It's the worst I've seen in my entire career," says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. "Anything you send home, you have to assume is being AI'ed."

The question now is how schools can adapt, because many of the teaching and assessment tools that have been used for generations are no longer effective. As AI technology rapidly improves and becomes more entwined with daily life, it is transforming how students learn and study, how teachers teach, and it's creating new confusion over what constitutes academic dishonesty.

"We have to ask ourselves, what is cheating?" says Cuny, a 2024 recipient of California's Teacher of the Year award. "Because I think the lines are getting blurred."

Cuny's students at Valencia High School in southern California now do most writing in class. He monitors student laptop screens from his desktop, using software that lets him "lock down" their screens or block access to certain sites. He's also integrating AI into his lessons and teaching students how to use AI as a study aid "to get kids learning with AI instead of cheating with AI."

In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She is also incorporating more verbal assessments to have students talk through their understanding of assigned reading. "I used to give a writing prompt and say, 'In two weeks I want a five-paragraph essay,'" says Gibson. "These days, I can't do that. That's almost begging teenagers to cheat."

Take, for example, a once typical high school English assignment: Write an essay that explains the relevance of social class in "The Great Gatsby." Many students say their first instinct is now to ask ChatGPT for help "brainstorming." Within seconds, ChatGPT yields a list of essay ideas, plus examples and quotes to back them up. The chatbot ends by asking if it can do more: "Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!"

Students are uncertain when AI usage is out of bounds

Students say they often turn to AI with good intentions for things like research, editing or help reading difficult texts. But AI offers unprecedented temptation, and it's sometimes hard to know where to draw the line.

College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles putting the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading "felt like a different language" until she read AI summaries of the texts.

"Sometimes I feel bad using ChatGPT to summarize reading, because I wonder is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?"

Her class syllabi say things like: "Don't use AI to write essays and to form thoughts," she says, but that leaves a lot of grey area. Students say they often shy away from asking teachers for clarity because admitting to any AI use could flag them as a cheater.

Schools tend to leave AI policies to teachers, which often means that rules vary widely within the same school. Some educators, for example, welcome the use of Grammarly.com, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences.

"Whether you can use AI or not, depends on each classroom. That can get confusing," says Valencia 11th grader Jolie Lahey, who credits Cuny with teaching her sophomore English class a variety of AI skills, like how to upload study guides to ChatGPT and have the chatbot quiz them and then explain problems they got wrong. But this year, her teachers have strict "No AI" policies. "It's such a helpful tool. And if we're not allowed to use it that just doesn't make sense," Lahey says. "It feels outdated."

Schools are introducing guidelines, gradually

Many schools initially banned use of AI after ChatGPT launched in late 2022. But views on the role of artificial intelligence in education have shifted dramatically. The term "AI literacy" has become a buzzword of the back-to-school season, with a focus on how to balance the strengths of AI with its risks and challenges. Over the summer, several colleges and universities convened their AI task forces to draft more detailed guidelines or provide faculty with new instructions.

The University of California, Berkeley emailed all faculty new AI guidance that instructs them to "include a clear statement on their syllabus about course expectations" around AI use. The guidance offered language for three sample syllabus statements -- for courses that require AI, ban AI in and out of class, or allow some AI use. "In the absence of such a statement, students may be more likely to use these technologies inappropriately," the email said, stressing that AI is "creating new confusion about what might constitute legitimate methods for completing student work."

At Carnegie Mellon University there has been a huge uptick in academic responsibility violations due to AI, but often students aren't aware they've done anything wrong, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at the university's Heinz College of Information Systems and Public Policy. For example, one English language learner wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to translate his work to English, but didn't realize the platform also altered his language, which was flagged by an AI detector.

Enforcing academic integrity policies has been complicated by AI, which is hard to detect and even harder to prove, said Fitzsimmons. Faculty are allowed flexibility when they believe a student has unintentionally crossed a line but are now more hesitant to point out violations because they don't want to accuse students unfairly, and students are worried that if they are falsely accused there is no way to prove their innocence.

Over the summer, Fitzsimmons helped draft detailed new guidelines for students and faculty that strive to create more clarity. Faculty have been told that a blanket ban on AI "is not a viable policy" unless instructors make changes to the way they teach and assess students. A lot of faculty are doing away with take-home exams. Some have returned to pen and paper tests in class, she said, and others have moved to "flipped classrooms," where homework is done in class.

Emily DeJeu, who teaches communication courses at Carnegie Mellon's business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in "a lockdown browser" that blocks students from leaving the quiz screen. "To expect an 18-year-old to exercise great discipline is unreasonable, that's why it's up to instructors to put up guardrails."

___

The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
[2]
Universities Can Abdicate to AI. Or They Can Fight.
Too many school leaders have been reluctant to impose harsh penalties for unauthorized chatbot use.

Since the release of ChatGPT, in 2022, colleges and universities have been engaged in an experiment to discover whether artificially intelligent chatbots and the liberal-arts tradition can coexist. Notwithstanding a few exceptions, by now the answer is clear: They cannot. AI-enabled cheating is pretty much everywhere. As a May New York magazine essay put it, "students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education."

This rampant, unauthorized AI use degrades the educational experience of individual students who overly rely on the technology, and of those who wish to avoid using it. When students ask ChatGPT to write papers, complete problem sets, or formulate discussion queries, they rob themselves of the opportunity to learn how to think, study, and answer complex questions. These students also undermine their non-AI-using peers. Recently, a professor friend of mine told me that several students had confessed to him that they felt their classmates' constant AI use was ruining their own college years.

Widespread AI use also subverts the institutional goals of colleges and universities. Large language models routinely fabricate information, and even when they do create factually accurate work, they frequently depend on intellectual-property theft. So when an educational institution as a whole produces large amounts of AI-generated scholarship, it fails to create new ideas and add to the storehouse of human wisdom. AI also takes a prodigious ecological toll and relies on labor exploitation, which is impossible to square with many colleges' and universities' professed commitment to protecting the environment and fighting economic inequality.

Some schools have nonetheless responded to the AI crisis by waving the white flag: The Ohio State University recently pledged that students in every major will learn to use AI so they can become "bilingual" in the tech; the California State University system, which has nearly half a million students, said it aims to be "the nation's first and largest A.I.-empowered university system." Teaching students how to use AI tools in fields where they are genuinely necessary is one thing. But infusing the college experience with the technology is deeply misguided.

Even schools that have not bent the knee by "integrating" AI into campus life are mostly failing to come up with workable answers to the various problems presented by AI. At too many colleges, leaders have been reluctant to impose strict rules or harsh penalties for chatbot use, passing the buck to professors to come up with their own policies. In a recent cri de coeur, Megan Fritts, a philosophy professor at the University of Arkansas at Little Rock, detailed how her own institution has not articulated clear, campus-wide guidance on AI use. She argued that if the humanities are to survive, "universities will need to embrace a much more radical response to AI than has so far been contemplated." She called for these classrooms to ban large language models, which she described as "tools for offloading the task of genuine expression," then went a step further, saying that their use should be shunned, "seen as a faux pas of the deeply different norms of a deeply different space."

Yet to my mind, the "radical" policy Fritts proposes -- which is radical, when you consider how many universities are encouraging their students to use AI -- is not nearly radical enough. Shunning AI use in classrooms is a good start, but schools need to think bigger than that. All institutions of higher education in the United States should be animated by the same basic question: What are the most effective things -- even if they sound extreme -- that we can do to limit, and ideally abolish, the unauthorized use of AI on campus? Once the schools have an answer, their leaders should do everything in their power to make these things happen.

The answers will be different for different kinds of schools, rich or poor, public or private, big or small. At the type of place where I taught until recently -- a small, selective, private liberal-arts college -- administrators can go quite far in limiting AI use, if they have the guts to do so. They should commit to a ruthless de-teching not just of classrooms but of their entire institution. Get rid of Wi-Fi and return to Ethernet, which would allow schools greater control over where and when students use digital technologies. To that end, smartphones and laptops should also be banned on campus. If students want to type notes in class or papers in the library, they can use digital typewriters, which have word processing but nothing else. Work and research requiring students to use the internet or a computer can take place in designated labs. This lab-based computer work can and should include learning to use AI, a technology that is likely here to stay and about which ignorance represents neither wisdom nor virtue.

These measures may sound draconian, but they are necessary to make the barrier to cheating prohibitively high. Tech bans would also benefit campus intellectual culture and social life. This is something that many undergraduates seem to recognize themselves, as antitech "Luddite clubs" with slogans promising human connection sprout up at colleges around the country, and the ranks of students carrying flip phones grow. Nixing screens for everyone on campus, and not just those who self-select into antitech organizations, could change campus communities for the better -- we've already seen the transformative impact of initiatives like these at the high-school level. My hope is that the quad could once again be a place where students (and faculty) talk to one another, rather than one where everyone walks zombified about the green with their nose down and their eyes on their devices.

Colleges that are especially committed to maintaining this tech-free environment could require students to live on campus, so they can't use AI tools at home undetected. Many schools, including those with a high number of students who have children or other familial responsibilities, might not be able to do this. But some could, and they should. (And they should of course provide whatever financial aid is necessary to enable students to live in the dorms.)

Restrictions also must be applied without exceptions, even for students with disabilities or learning differences. I realize this may be a controversial position to take, but if done right, a full tech ban can benefit everyone. Although laptops and AI transcription services can be helpful for students with special needs, they are rarely essential. Instead of allowing a disability exception, colleges with tech bans should provide peer tutors, teaching assistants, and writing centers to help students who require extra assistance -- low-tech strategies that decades of pedagogical research show to be effective in making education more accessible. This support may be more expensive than a tech product, but it would give students the tools they really need to succeed academically. The idea that the only way to create an inclusive classroom is through gadgets and software is little more than ed-tech-industry propaganda. Investing in human specialists, however, would be good for students of all abilities. Last year I visited my undergraduate alma mater, Haverford College, which has a well-staffed writing center, and one student said something that's stuck with me: "The writing center is more useful than ChatGPT anyway. If I need help, I go there."

Another reason that a no-exceptions policy is important: If students with disabilities are permitted to use laptops and AI, a significant percentage of other students will most likely find a way to get the same allowances, rendering the ban useless. I witnessed this time and again when I was a professor -- students without disabilities finding ways to use disability accommodations for their own benefit. Professors I know who are still in the classroom have told me that this remains a serious problem.

Universities with tens of thousands of students might have trouble enforcing a campus smartphone-and-laptop ban, and might not have the capacity to require everyone to live on campus. But they can still take meaningful steps toward creating a culture that prioritizes learning and creativity, and that cultivates the attention spans necessary for sustained intellectual engagement. Schools that don't already have an honor code can develop one. They can require students to sign a pledge vowing not to engage in unauthorized AI use, and levy consequences, including expulsion, for those who don't comply. They can ban websites such as ChatGPT from their campus networks. Where possible, they can offer more small, discussion-based courses. And they can require students to write essays in class, proctored by professors and teaching assistants, and to take end-of-semester written tests or oral exams that require extensive knowledge of course readings. Many professors are already taking these steps themselves, but few schools have adopted such policies institution-wide.

Some will object that limiting AI use so aggressively will not prepare students for the "real world," where large language models seem omnipresent. But colleges have never mimicked the real world, which is why so many people romanticize them. Undergraduate institutions have long promised America's young people opportunities to learn in cloistered conditions that are deliberately curated, anachronistic, and unrepresentative of work and life outside the quad. Why should that change? Indeed, one imagines that plenty of students (and parents) might eagerly apply to institutions offering an alternative to the AI-dominated college education offered elsewhere. If this turns out not to be true -- if America does not have enough students interested in reading, writing, and learning on their own to fill its colleges and universities -- then society has a far bigger problem on its hands, and one might reasonably ask why all of these institutions continue to exist.

Taking drastic measures against AI in higher education is not about embracing Luddism, which is generally a losing proposition. It is about creating the conditions necessary for young people to learn to read, write, and think, which is to say, the conditions necessary for modern civilization to continue to reproduce itself. Institutions of higher learning can abandon their centuries-long educational project. Or they can resist.
[3]
As AI tools reshape education, schools struggle with how to draw the line on cheating
[4]
As AI tools reshape education, schools struggle with how to draw the line on cheating
High school and college educators say that student use of artificial intelligence has become so widespread that they need to rethink how to assign and assess students.
[5]
As AI Tools Reshape Education, Schools Struggle With How to Draw the Line on Cheating
[6]
As AI tools reshape education, schools struggle with how to draw the line on cheating - The Economic Times
As AI tools become ubiquitous in education, schools and universities grapple with defining cheating and adapting teaching methods. The widespread use of AI for schoolwork is forcing educators to rethink traditional assessment techniques and policies.
The rapid integration of artificial intelligence (AI) into education has sparked a significant transformation in teaching and learning methodologies. As AI tools become increasingly sophisticated and accessible, educators are grappling with the challenge of maintaining academic integrity while adapting to this new technological landscape [1][3].

The prevalence of AI-assisted schoolwork has reached unprecedented levels, with educators like Casey Cuny, a California Teacher of the Year recipient, describing the situation as "cheating off the charts" [1]. This widespread use of AI has rendered traditional take-home assignments and essays nearly obsolete, as teachers now assume that any work completed outside the classroom may involve AI assistance [4].

In response to this challenge, educators are reimagining their teaching and assessment strategies. Many have shifted to in-class writing assignments and increased use of verbal assessments to ensure original student work [3]. Some teachers, like Cuny, are employing screen-monitoring software to control student access to certain websites during class [1].

The integration of AI in education has blurred the lines between acceptable assistance and cheating. Students like Lily Brown, a college sophomore, express uncertainty about when AI usage crosses ethical boundaries [1]. This ambiguity is further complicated by inconsistent policies across different classrooms and institutions [5].

Educational institutions are gradually developing more comprehensive guidelines for AI use. Universities like UC Berkeley have begun providing faculty with new AI guidance, emphasizing the importance of clear syllabus statements regarding AI usage expectations [1]. However, some critics argue that these measures are insufficient, with calls for more radical responses to preserve the integrity of humanities education [2].

As the education sector continues to navigate this AI-driven landscape, the concept of "AI literacy" has emerged as a focal point [5]. Many institutions are shifting from outright bans on AI to more nuanced approaches that aim to harness its potential while mitigating risks. This evolving perspective reflects a growing recognition of AI's inevitable role in future learning environments and the workforce [1][3].
Summarized by Navi