Curated by THEOUTPOST
On Thu, 15 May, 12:03 AM UTC
3 Sources
[1]
College Professors Are Using ChatGPT. Some Students Aren't Happy.
In February, Ella Stapleton, then a senior at Northeastern University, was reviewing lecture notes from her organizational behavior class when she noticed something odd. Was that a query to ChatGPT from her professor?

Halfway through the document, which her business professor had made for a lesson on models of leadership, was an instruction to ChatGPT to "expand on all areas. Be more detailed and specific." It was followed by a list of positive and negative leadership traits, each with a prosaic definition and a bullet-pointed example.

Ms. Stapleton texted a friend in the class. "Did you see the notes he put on Canvas?" she wrote, referring to the university's software platform for hosting course materials. "He made it with ChatGPT." "OMG Stop," the classmate responded. "What the hell?"

Ms. Stapleton decided to do some digging. She reviewed her professor's slide presentations and discovered other telltale signs of A.I.: distorted text, photos of office workers with extraneous body parts and egregious misspellings. She was not happy. Given the school's cost and reputation, she expected a top-tier education. This course was required for her business minor; its syllabus forbade "academically dishonest activities," including the unauthorized use of artificial intelligence or chatbots.

"He's telling us not to use it, and then he's using it himself," she said.

Ms. Stapleton filed a formal complaint with Northeastern's business school, citing the undisclosed use of A.I. as well as other issues she had with his teaching style, and requested reimbursement of tuition for that class. As a quarter of the total bill for the semester, that would be more than $8,000.

When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it, while others deployed A.I. detection services despite concerns about their accuracy.

But, oh, how the tables have turned. Now students are complaining on sites like Rate My Professors about their instructors' overreliance on A.I. and scrutinizing course materials for words ChatGPT tends to overuse, like "crucial" and "delve." In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.

For their part, professors said they used A.I. chatbots as a tool to provide a better education. Instructors interviewed by The New York Times said chatbots saved time, helped them with overwhelming workloads and served as automated teaching assistants.

Their numbers are growing. In a national survey of more than 1,800 higher-education instructors last year, 18 percent described themselves as frequent users of generative A.I. tools; in a repeat survey this year, that percentage nearly doubled, according to Tyton Partners, the consulting group that conducted the research.

The A.I. industry wants to help, and to profit: The start-ups OpenAI and Anthropic recently created enterprise versions of their chatbots designed for universities. (The Times has sued OpenAI for copyright infringement over the use of news content without permission.)

Generative A.I. is clearly here to stay, but universities are struggling to keep up with the changing norms.
Now professors are the ones on the learning curve and, like Ms. Stapleton's teacher, muddling their way through the technology's pitfalls and their students' disdain.

Making the Grade

Last fall, Marie, 22, wrote a three-page essay for an online anthropology course at Southern New Hampshire University. She looked for her grade on the school's online platform and was happy to have received an A. But in a section for comments, her professor had accidentally posted a back-and-forth with ChatGPT. It included the grading rubric the professor had asked the chatbot to use and a request for some "really nice feedback" to give Marie.

"From my perspective, the professor didn't even read anything that I wrote," said Marie, who asked to use her middle name and requested that her professor's identity not be disclosed. She could understand the temptation to use A.I. Working at the school was a "third job" for many of her instructors, who might have hundreds of students, said Marie, and she did not want to embarrass her teacher.

Still, Marie felt wronged and confronted her professor during a Zoom meeting. The professor told Marie that she did read her students' essays but used ChatGPT as a guide, which the school permitted.

Robert MacAuslan, vice president of A.I. at Southern New Hampshire, said that the school believed "in the power of A.I. to transform education" and that there were guidelines for both faculty and students to "ensure that this technology enhances, rather than replaces, human creativity and oversight." A list of dos and don'ts for faculty forbids using tools, such as ChatGPT and Grammarly, "in place of authentic, human-centric feedback."

"These tools should never be used to 'do the work' for them," Dr. MacAuslan said. "Rather, they can be looked at as enhancements to their already established processes."

After a second professor appeared to use ChatGPT to give her feedback, Marie transferred to another university.

Paul Shovlin, an English professor at Ohio University in Athens, Ohio, said he could understand her frustration. "Not a big fan of that," Dr. Shovlin said after being told of Marie's experience. Dr. Shovlin is also an A.I. faculty fellow, whose role includes developing the right ways to incorporate A.I. into teaching and learning. "The value that we add as instructors is the feedback that we're able to give students," he said. "It's the human connections that we forge with students as human beings who are reading their words and who are being impacted by them."

Dr. Shovlin is a proponent of incorporating A.I. into teaching, but not simply to make an instructor's life easier. Students need to learn to use the technology responsibly and "develop an ethical compass with A.I.," he said, because they will almost certainly use it in the workplace. Failure to do so properly could have consequences. "If you screw up, you're going to be fired," Dr. Shovlin said.

One example he uses in his own classes: In 2023, officials at Vanderbilt University's education school responded to a mass shooting at another university by sending an email to students calling for community cohesion. The message, which described promoting a "culture of care" by "building strong relationships with one another," included a sentence at the end that revealed that ChatGPT had been used to write it. After students criticized the outsourcing of empathy to a machine, the officials involved temporarily stepped down.

Not all situations are so clear cut. Dr. Shovlin said it was tricky to come up with rules because reasonable A.I. use may vary depending on the subject. His department, the Center for Teaching, Learning and Assessment, instead has "principles" for A.I. integration, one of which eschews a "one-size-fits-all approach."

The Times contacted dozens of professors whose students had mentioned their A.I. use in online reviews. The professors said they had used ChatGPT to create computer science programming assignments and quizzes on required reading, even as students complained that the results didn't always make sense. They used it to organize their feedback to students, or to make it kinder. As experts in their fields, they said, they can recognize when it hallucinates, or gets facts wrong.

There was no consensus among them as to what was acceptable. Some acknowledged using ChatGPT to help grade students' work; others decried the practice. Some emphasized the importance of transparency with students when deploying generative A.I., while others said they didn't disclose its use because of students' skepticism about the technology.

Most, however, felt that Ms. Stapleton's experience at Northeastern, in which her professor appeared to use A.I. to generate class notes and slides, was perfectly fine. That was Dr. Shovlin's view, as long as the professor edited what ChatGPT spat out to reflect his expertise. Dr. Shovlin compared it to a longstanding practice in academia of using content, such as lesson plans and case studies, from third-party publishers. To say a professor is "some kind of monster" for using A.I. to generate slides "is, to me, ridiculous," he said.

The Calculator on Steroids

Shingirai Christopher Kwaramba, a business professor at Virginia Commonwealth University, described ChatGPT as a partner that saved time. Lesson plans that used to take days to develop now take hours, he said. He uses it, for example, to generate data sets for fictional chain stores, which students use in an exercise to understand various statistical concepts. "I see it as the age of the calculator on steroids," Dr. Kwaramba said.

Dr. Kwaramba said he now had more time for student office hours. Other professors, like David Malan at Harvard, said the use of A.I. meant fewer students were coming to office hours for remedial help. Dr. Malan, a computer science professor, has integrated a custom A.I. chatbot into a popular class he teaches on the fundamentals of computer programming. His hundreds of students can turn to it for help with their coding assignments. Dr. Malan has had to tinker with the chatbot to hone its pedagogical approach, so that it offers only guidance and not the full answers. The majority of the 500 students surveyed in 2023, the first year it was offered, said they found it helpful. Rather than spend time on "more mundane questions about introductory material" during office hours, he and his teaching assistants prioritize interactions with students at weekly lunches and hackathons, "more memorable moments and experiences," Dr. Malan said.

Katy Pearce, a communication professor at the University of Washington, developed a custom A.I. chatbot by training it on versions of old assignments that she had graded. It can now give students feedback on their writing that mimics her own at any time, day or night. It has been beneficial for students who are otherwise hesitant to ask for help, she said.

"Is there going to be a point in the foreseeable future that much of what graduate student teaching assistants do can be done by A.I.?" she said. "Yeah, absolutely."

What happens then to the pipeline of future professors who would come from the ranks of teaching assistants? "It will absolutely be an issue," Dr. Pearce said.

A Teachable Moment

After filing her complaint at Northeastern, Ms. Stapleton had a series of meetings with officials in the business school. In May, the day after her graduation ceremony, the officials told her that she was not getting her tuition money back.

Rick Arrowood, her professor, was contrite about the episode. Dr. Arrowood, an adjunct professor who has been teaching for nearly two decades, said he had uploaded his class files and documents to ChatGPT, the A.I. search engine Perplexity and an A.I. presentation generator called Gamma to "give them a fresh look." At a glance, he said, the notes and presentations they had generated looked great. "In hindsight, I wish I would have looked at it more closely," he said.

He put the materials online for students to review but emphasized that he did not use them in the classroom, because he prefers classes to be discussion-oriented. He realized the materials were flawed only when school officials questioned him about them.

The embarrassing situation made him realize, he said, that professors should approach A.I. with more caution and disclose to students when and how it is used. Northeastern issued a formal A.I. policy only recently; it requires attribution when A.I. systems are used and review of the output for "accuracy and appropriateness." A Northeastern spokeswoman said the school "embraces the use of artificial intelligence to enhance all aspects of its teaching, research and operations."

"I'm all about teaching," Dr. Arrowood said. "If my experience can be something people can learn from, then, OK, that's my happy spot."
[2]
Student Livid After Catching Her Professor Using ChatGPT, Asks For Her Money Back
Many students aren't allowed to use artificial intelligence to do their assignments, and when they catch their teachers doing so, they're often peeved.

In an interview with the New York Times, one such student, Northeastern's Ella Stapleton, was shocked earlier this year when she began to suspect that her business professor had generated lecture notes with ChatGPT. Combing through those notes, the senior noticed a citation to ChatGPT, obvious misspellings, and images with extraneous limbs and digits, all hallmarks of AI use.

"He's telling us not to use it," Stapleton said, "and then he's using it himself."

Alarmed, she brought up the professor's AI use with Northeastern's administration and demanded her tuition back. After a series of meetings that ran all the way up until her graduation earlier this month, the school gave its final verdict: she would not be getting her $8,000 in tuition back.

Most of the educators the NYT spoke to, who, like Stapleton's, had been caught by students using AI tools like ChatGPT, didn't think it was that big of a deal. To the mind of Paul Shovlin, an English professor and AI fellow at Ohio University, there is no "one-size-fits-all" approach to using the burgeoning tech in the classroom. Students making their AI-using professors out to be "some kind of monster," as he put it, is "ridiculous."

That take, which inflates the student's concerns to make her sound histrionic, dismisses another emerging consensus: that people view the use of AI at work as lazy and look down on those who use it. In a new study from Duke, business researchers found that people both anticipate and experience judgment from their colleagues for using AI at work. The study involved more than 4,400 people who, across a series of four experiments, provided ample "evidence of a social evaluation penalty for using AI."

"Our findings reveal a dilemma for people considering adopting AI tools," the researchers wrote. "Although AI can enhance productivity, its use carries social costs."

For Stapleton's professor, Rick Arrowood, the Northeastern lecture notes scandal really drove that point home. Arrowood told the NYT that he used various AI tools, including ChatGPT, the Perplexity AI search engine, and an AI presentation generator called Gamma, to give his lectures a "fresh look." Though he said he had reviewed the outputs, he didn't catch the telltale AI signs that Stapleton saw. "In hindsight," he told the newspaper, "I wish I would have looked at it more closely."

Arrowood said he's now convinced that professors should think harder about using AI and disclose to their students when and how it's used, a new stance indicating that the debacle was, for him, a teachable moment. "If my experience can be something people can learn from," he told the NYT, "then, OK, that's my happy spot."
[3]
College Professors Are Turning to ChatGPT to Generate Course Materials. One Student Noticed -- and Asked for a Refund.
Ella Stapleton noticed in February that the lecture notes for her organizational behavior class at Northeastern University appeared to have been generated by ChatGPT. Midway through the document was an instruction to "expand on all areas. Be more detailed and specific," which read like a prompt directed at the AI chatbot.

Stapleton looked at other course materials from the class, including slide presentations, and detected AI use in the form of photos of people with extra limbs and misspelled text. She was taken aback, especially because the course syllabus distributed by her professor, Rick Arrowood, prohibited students from using AI. "He's telling us not to use it and then he's using it himself," Stapleton told The New York Times in a report published on Wednesday.

Stapleton took the matter up with Northeastern's business school in a formal complaint, asking for her tuition for the class back. The total refund would be over $8,000 for the course. Northeastern denied Stapleton's request this month, the day after she graduated from the university.

Arrowood, an adjunct professor who has been an instructor at various colleges for over fifteen years, admitted to The New York Times that he had put his class files and documents through ChatGPT to refine them. He said that the situation made him approach AI more cautiously and tell students outright when he uses it.

Stapleton's situation highlights the growing use of AI in higher education. A survey conducted by the consulting group Tyton Partners in 2023 found that 22% of higher-education instructors said they frequently used generative AI. When the survey was repeated in 2024, that share had nearly doubled, to close to 40% of instructors, within the span of a year.

AI use is becoming more prevalent among students, too. OpenAI released a study in February showing that more than one-third of young adults in the U.S. ages 18 to 24 use ChatGPT, with 25% of their messages tied to learning and schoolwork. The top two use cases of ChatGPT among this demographic were tutoring and writing help.

Tyton's 2024 survey found that faculty who use AI are tapping the technology to create in-class activities, write syllabi, generate rubrics for grading student work, and churn out quizzes and tests. The same survey found that students are using AI to help answer homework questions, assist with writing assignments, and take lecture notes.

In response to student AI use, colleges have adapted and released guidelines for using ChatGPT and other generative AI. For example, Harvard University advises students to protect confidential data, such as non-public research, when using AI chatbots, and to ensure that AI-generated content is free from inaccuracies or hallucinations. NYU's policy mandates that students receive instructor approval before using ChatGPT. Universities are also using software to uncover AI use in written materials, like essays. However, New York Magazine reported earlier this month that college students are getting around AI detectors by sprinkling typos into their ChatGPT-written papers.

The trend of using AI in college could also lead to less critical thinking. Researchers at Microsoft and Carnegie Mellon University published a study earlier this year that found that humans who used AI and were confident in its abilities used fewer critical thinking skills. "Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved," the researchers wrote.
A growing trend of college professors using AI tools like ChatGPT for course materials has led to student complaints and raised questions about academic integrity and the value of higher education.
The use of artificial intelligence tools, particularly ChatGPT, has become increasingly prevalent in higher education. Surveys by the consulting group Tyton Partners found that the share of higher-education instructors who frequently use generative AI tools nearly doubled between 2023 and 2024, from roughly one in five to close to 40% 1 3. This rapid adoption has sparked controversy and raised questions about academic integrity and the value of education.
Ella Stapleton, a senior at Northeastern University, discovered her business professor, Rick Arrowood, had used ChatGPT to generate lecture notes and course materials 1. This revelation led to a formal complaint, with Stapleton requesting a tuition refund of over $8,000 for the class. Her main contention was the hypocrisy of professors using AI while prohibiting students from doing the same.
Similarly, a student named Marie at Southern New Hampshire University found that her anthropology professor had used ChatGPT to grade her essay and generate feedback 1. These incidents have led to growing student dissatisfaction, with many arguing that they are paying for human instruction, not AI-generated content.
Professors argue that AI tools help them manage overwhelming workloads and serve as automated teaching assistants. Rick Arrowood admitted to using various AI tools to give his lectures a "fresh look," but acknowledged the need for closer scrutiny and transparency in AI usage 2.
Paul Shovlin, an English professor and AI faculty fellow at Ohio University, emphasized that there is no "one-size-fits-all" approach to incorporating AI in the classroom 1. He stressed, though, that the value instructors add lies in the human feedback and connection they offer students, and cautioned against outsourcing that work entirely to a chatbot.
Universities are grappling with the rapid integration of AI in education. Some institutions, like Southern New Hampshire University, have developed guidelines for both faculty and students on the appropriate use of AI tools 1. Harvard University advises students to protect confidential data when using AI chatbots, while NYU requires instructor approval for ChatGPT usage 3.
Concerns have been raised about the potential negative impact of AI on students' critical thinking skills. A study by Microsoft and Carnegie Mellon University found that humans who relied heavily on AI and were confident in its abilities demonstrated fewer critical thinking skills 3. This finding underscores the importance of balancing AI usage with traditional learning methods.
As AI continues to permeate higher education, institutions face the challenge of adapting their policies and practices. The incidents highlighted in this report suggest a need for greater transparency, clearer guidelines, and ongoing discussions about the role of AI in academia. Balancing the benefits of AI with maintaining the integrity and value of human-led education remains a crucial challenge for universities moving forward.