6 Sources
[1]
Essay challenge: ChatGPT vs students
ChatGPT vs students: study reveals who writes better (and it's not the AI) AI-generated essays don't yet live up to the efforts of real students - according to new research from the University of East Anglia (UK). A new study published today compared the work of 145 real students with essays generated by ChatGPT. While the AI essays were found to be impressively coherent and grammatically sound, they fell short in one crucial area - they lacked a personal touch. As the line between human and machine writing continues to blur, the study underlines the importance of fostering critical literacy and ethical awareness in the digital age. It is hoped that the findings could help educators spot cheating in schools, colleges and universities worldwide by recognising machine-generated essays. Prof Ken Hyland, from UEA's School of Education and Lifelong Learning, said: "Since its public release, ChatGPT has created considerable anxiety among teachers worried that students will use it to write their assignments. "The fear is that ChatGPT and other AI writing tools potentially facilitate cheating and may weaken core literacy and critical thinking skills. This is especially the case as we don't yet have tools to reliably detect AI-created texts. "In response to these concerns, we wanted to see how closely AI can mimic human essay writing, particularly focusing on how writers engage with readers." The research team analysed 145 essays written by real university students and another 145 generated by ChatGPT. "We were particularly interested in looking at what we called 'engagement markers' like questions and personal commentary," said Prof Hyland. "We found that the essays written by real students consistently featured a rich array of engagement strategies, making them more interactive and persuasive. "They were full of rhetorical questions, personal asides, and direct appeals to the reader - all techniques that enhance clarity, connection, and produce a strong argument. 
"The ChatGPT essays, on the other hand, while linguistically fluent, were more impersonal. The AI essays mimicked academic writing conventions but they were unable to inject text with a personal touch or to demonstrate a clear stance. "They tended to avoid questions and limited personal commentary. Overall, they were less engaging, less persuasive, and there was no strong perspective on a topic. "This reflects the nature of its training data and statistical learning methods, which prioritise coherence over conversational nuance," he added. Despite its shortcomings, the study does not dismiss the role of AI in the classroom. Instead, the researchers say that tools like ChatGPT should be used as teaching aids rather than shortcuts. "When students come to school, college or university, we're not just teaching them how to write, we're teaching them how to think - and that's something no algorithm can replicate," added Prof Hyland. This study was led by UEA in collaboration with Prof Kevin Jiang of Jilin University, China. 'Does ChatGPT write like a student? Engagement markers in argumentative essays' is published in the journal Written Communication.
[2]
AI isn't replacing student writing - but it is reshaping it
I'm a writing professor who sees artificial intelligence as more of an opportunity for students, rather than a threat. That sets me apart from some of my colleagues, who fear that AI is accelerating a glut of superficial content, impeding critical thinking and hindering creative expression. They worry that students are simply using it out of sheer laziness or, worse, to cheat. Perhaps that's why so many students are afraid to admit that they use ChatGPT. In The New Yorker magazine, historian D. Graham Burnett recounts asking his undergraduate and graduate students at Princeton whether they'd ever used ChatGPT. No one raised their hand. "It's not that they're dishonest," he writes. "It's that they're paralyzed." Students seem to have internalized the belief that using AI for their coursework is somehow wrong. Yet, whether my colleagues like it or not, most college students are using it. A February 2025 report from the Higher Education Policy Institute in the U.K. found that 92% of university students are using AI in some form. As early as August 2023 - a mere nine months after ChatGPT's public release - more than half of first-year students at Kennesaw State University, the public research institution where I teach, reported that they believed that AI is the future of writing. It's clear that students aren't going to magically stop using AI. So I think it's important to point out some ways in which AI can actually be a useful tool that enhances, rather than hampers, the writing process. Helping with the busywork A February 2025 OpenAI report on ChatGPT use among college-aged users found that more than one-quarter of their ChatGPT conversations were education-related. The report also revealed that the top five uses for students were writing-centered: starting papers and projects (49%); summarizing long texts (48%); brainstorming creative projects (45%); exploring new topics (44%); and revising writing (44%). 
These figures challenge the assumption that students use AI merely to cheat or write entire papers. Instead, they suggest students are leveraging AI to free up more time to engage in deeper processes and metacognitive behaviors - deliberately organizing ideas, honing arguments and refining style. If AI allows students to automate routine cognitive tasks - like information retrieval or ensuring that verb tenses are consistent - it doesn't mean they're thinking less. It means their thinking is changing. Of course, students can misuse AI if they use the technology passively, reflexively accepting its outputs and ideas. And overreliance on ChatGPT can erode a student's unique voice or style. However, as long as students learn how to use AI intentionally, this shift can be seen as an opportunity, rather than a loss. Clarifying the creative vision It has also become clear that AI, when used responsibly, can augment human creativity. For example, science comedy writer Sarah Rose Siskind recently gave a talk to Harvard students about her creative process. She spoke about how she uses ChatGPT to brainstorm joke setups and explore various comedic scenarios, which allows her to focus on crafting punchlines and refining her comedic timing. Note how Siskind used AI in ways that didn't supplant the human touch. Instead of replacing her creativity, AI amplified it by providing structured and consistent feedback, giving her more time to polish her jokes. Another example is the Rhetorical Prompting Method, which I developed alongside fellow Kennesaw State University researchers. Designed for university students and adult learners, it's a framework for conversing with an AI chatbot, one that emphasizes the importance of agency in guiding AI outputs. When writers use precise language to prompt, critical thinking to reflect, and intentional revision to sculpt inputs and outputs, they direct AI to help them generate content that aligns with their vision. 
There's still a process The Rhetorical Prompting Method mirrors best practices in process writing, which encourages writers to revisit, refine and revise their drafts. When using ChatGPT, though, it's all about thoughtfully revisiting and revising prompts and outputs. For instance, say a student wants to create a compelling PSA for social media to encourage campus composting. She considers her audience. She prompts ChatGPT to draft a short, upbeat message in under 50 words that's geared to college students. Reading the first output, she notices it lacks urgency. So she revises the prompt to emphasize immediate impact. She also adds some additional specifics that are important to her message, such as the location of an information session. The final PSA reads: "Every scrap counts! Join campus composting today at the Commons. Your leftovers aren't trash - they're tomorrow's gardens. Help our university bloom brighter, one compost bin at a time." The Rhetorical Prompting Method isn't groundbreaking; it's riffing on a process that's been tested in the writing studies discipline for decades. But I've found that it works by directing writers how to intentionally prompt. I know this because we asked users about their experiences. In an ongoing study, my colleagues and I polled 133 people who used the Rhetorical Prompting Method for their academic and professional writing: 92% reported that it helped them evaluate writing choices before and during their process. 75% said that they were able to maintain their authentic voice while using AI assistance. 89% responded that it helped them think critically about their writing. The data suggests that learners take their writing seriously. Their responses reveal that they are thinking carefully about their writing styles and strategies. While this data is preliminary, we continue to gather responses in different courses, disciplines and learning environments. 
All of this is to say that, while there are divergent points of view over when and where it's appropriate to use AI, students are certainly using it. And being provided with a framework can help them think more deeply about their writing. AI, then, is not just a tool that's useful for trivial tasks. It can be an asset for creativity. If today's students - who are actively using AI to write, revise and explore ideas - see AI as a writing partner, I think it's a good idea for professors to start thinking about helping them learn the best ways to work with it.
[3]
ChatGPT vs. students: Study reveals who writes better
AI-generated essays don't yet live up to the efforts of real students, according to new research from the University of East Anglia (UK). A new study published in Written Communication compared the work of 145 real students with essays generated by ChatGPT. The paper is titled "Does ChatGPT write like a student? Engagement markers in argumentative essays." While the AI essays were found to be impressively coherent and grammatically sound, they fell short in one crucial area -- they lacked a personal touch. As the line between human and machine writing continues to blur, the study underlines the importance of fostering critical literacy and ethical awareness in the digital age. It is hoped that the findings could help educators spot cheating in schools, colleges and universities worldwide by recognizing machine-generated essays. Prof Ken Hyland, from UEA's School of Education and Lifelong Learning, said, "Since its public release, ChatGPT has created considerable anxiety among teachers worried that students will use it to write their assignments. "The fear is that ChatGPT and other AI writing tools potentially facilitate cheating and may weaken core literacy and critical thinking skills. This is especially the case as we don't yet have tools to reliably detect AI-created texts. "In response to these concerns, we wanted to see how closely AI can mimic human essay writing, particularly focusing on how writers engage with readers." The research team analyzed 145 essays written by real university students and another 145 generated by ChatGPT. "We were particularly interested in looking at what we called 'engagement markers' like questions and personal commentary," said Prof Hyland. "We found that the essays written by real students consistently featured a rich array of engagement strategies, making them more interactive and persuasive. 
"They were full of rhetorical questions, personal asides, and direct appeals to the reader -- all techniques that enhance clarity, connection, and produce a strong argument. "The ChatGPT essays, on the other hand, while linguistically fluent, were more impersonal. The AI essays mimicked academic writing conventions but they were unable to inject text with a personal touch or to demonstrate a clear stance. "They tended to avoid questions and limited personal commentary. Overall, they were less engaging, less persuasive, and there was no strong perspective on a topic. "This reflects the nature of its training data and statistical learning methods, which prioritize coherence over conversational nuance," he added. Despite its shortcomings, the study does not dismiss the role of AI in the classroom. Instead, the researchers say that tools like ChatGPT should be used as teaching aids rather than shortcuts. "When students come to school, college or university, we're not just teaching them how to write, we're teaching them how to think -- and that's something no algorithm can replicate," added Prof Hyland. This study was led by UEA in collaboration with Prof Kevin Jiang of Jilin University, China.
[4]
AI isn't replacing student writing, but it is reshaping it
I'm a writing professor who sees artificial intelligence as more of an opportunity for students, rather than a threat. That sets me apart from some of my colleagues, who fear that AI is accelerating a glut of superficial content, impeding critical thinking and hindering creative expression. They worry that students are simply using it out of sheer laziness or, worse, to cheat. Perhaps that's why so many students are afraid to admit that they use ChatGPT. In The New Yorker magazine, historian D. Graham Burnett recounts asking his undergraduate and graduate students at Princeton whether they'd ever used ChatGPT. No one raised their hand. "It's not that they're dishonest," he writes. "It's that they're paralyzed." Students seem to have internalized the belief that using AI for their coursework is somehow wrong. Yet, whether my colleagues like it or not, most college students are using it. A February 2025 report from the Higher Education Policy Institute in the U.K. found that 92% of university students are using AI in some form. As early as August 2023 -- a mere nine months after ChatGPT's public release -- more than half of first-year students at Kennesaw State University, the public research institution where I teach, reported that they believed that AI is the future of writing. It's clear that students aren't going to magically stop using AI. So I think it's important to point out some ways in which AI can actually be a useful tool that enhances, rather than hampers, the writing process. Helping with the busywork A February 2025 OpenAI report on ChatGPT use among college-aged users found that more than one-quarter of their ChatGPT conversations were education-related. The report also revealed that the top five uses for students were writing-centered: starting papers and projects (49%); summarizing long texts (48%); brainstorming creative projects (45%); exploring new topics (44%); and revising writing (44%). 
These figures challenge the assumption that students use AI merely to cheat or write entire papers. Instead, they suggest students are leveraging AI to free up more time to engage in deeper processes and metacognitive behaviors -- deliberately organizing ideas, honing arguments and refining style. If AI allows students to automate routine cognitive tasks -- like information retrieval or ensuring that verb tenses are consistent -- it doesn't mean they're thinking less. It means their thinking is changing. Of course, students can misuse AI if they use the technology passively, reflexively accepting its outputs and ideas. And overreliance on ChatGPT can erode a student's unique voice or style. However, as long as students learn how to use AI intentionally, this shift can be seen as an opportunity, rather than a loss. Clarifying the creative vision It has also become clear that AI, when used responsibly, can augment human creativity. For example, science comedy writer Sarah Rose Siskind recently gave a talk to Harvard students about her creative process. She spoke about how she uses ChatGPT to brainstorm joke setups and explore various comedic scenarios, which allows her to focus on crafting punchlines and refining her comedic timing. Note how Siskind used AI in ways that didn't supplant the human touch. Instead of replacing her creativity, AI amplified it by providing structured and consistent feedback, giving her more time to polish her jokes. Another example is the Rhetorical Prompting Method, which I developed alongside fellow Kennesaw State University researchers. Designed for university students and adult learners, it's a framework for conversing with an AI chatbot, one that emphasizes the importance of agency in guiding AI outputs. When writers use precise language to prompt, critical thinking to reflect, and intentional revision to sculpt inputs and outputs, they direct AI to help them generate content that aligns with their vision. 
There's still a process The Rhetorical Prompting Method mirrors best practices in process writing, which encourages writers to revisit, refine and revise their drafts. When using ChatGPT, though, it's all about thoughtfully revisiting and revising prompts and outputs. For instance, say a student wants to create a compelling PSA for social media to encourage campus composting. She considers her audience. She prompts ChatGPT to draft a short, upbeat message in under 50 words that's geared to college students. Reading the first output, she notices it lacks urgency. So she revises the prompt to emphasize immediate impact. She also adds some additional specifics that are important to her message, such as the location of an information session. The final PSA reads: "Every scrap counts! Join campus composting today at the Commons. Your leftovers aren't trash -- they're tomorrow's gardens. Help our university bloom brighter, one compost bin at a time." The Rhetorical Prompting Method isn't groundbreaking; it's riffing on a process that's been tested in the writing studies discipline for decades. But I've found that it works by directing writers how to intentionally prompt. I know this because we asked users about their experiences. In an ongoing study, my colleagues and I polled 133 people who used the Rhetorical Prompting Method for their academic and professional writing: 92% reported that it helped them evaluate writing choices before and during their process; 75% said that they were able to maintain their authentic voice while using AI assistance; and 89% responded that it helped them think critically about their writing. The data suggests that learners take their writing seriously. Their responses reveal that they are thinking carefully about their writing styles and strategies. While this data is preliminary, we continue to gather responses in different courses, disciplines and learning environments. All of this is to say that, while there are divergent points of view over when and where it's appropriate to use AI, students are certainly using it. And being provided with a framework can help them think more deeply about their writing. AI, then, is not just a tool that's useful for trivial tasks. 
It can be an asset for creativity. If today's students -- who are actively using AI to write, revise and explore ideas -- see AI as a writing partner, I think it's a good idea for professors to start thinking about helping them learn the best ways to work with it.
[5]
Man vs. machine: Why students still write better than AI - Earth.com
A new study shows that essays drafted by ChatGPT read smoothly, yet feel distant. AI essays lack the subtle devices that pull readers in. This gap may help teachers separate genuine coursework from machine writing. The finding urges stronger critical literacy in an age of rapid text generation. Researchers at the University of East Anglia, with a collaborator from Jilin University, worked on the comparison. They judged 145 student essays and the same number created by ChatGPT, scanning each for cues that invite a reader to engage. The scientists hope the results will aid teachers and exam boards worldwide in flagging suspicious assignments. Software detectors still generate false positives, so human judgment remains vital. By learning the discourse habits most students use, markers can notice when a script drops the personal edge that usually colors undergraduate prose. This knowledge supports fair grading and helps preserve academic integrity in classrooms already flooded with generative tools. Professor Ken Hyland of UEA's School of Education and Lifelong Learning voiced the worry driving the project. "Since its public release, ChatGPT has created considerable anxiety among teachers worried that students will use it to write their assignments," said Hyland, who warned of wider harm. "The fear is that ChatGPT and other AI writing tools potentially facilitate cheating and may weaken core literacy and critical thinking skills. This is especially the case as we don't yet have tools to reliably detect AI-created texts." The researchers searched for questions, personal asides, direct appeals, and other "engagement markers." "We were particularly interested in looking at what we called 'engagement markers' like questions and personal commentary," Hyland said. "We found that the essays written by real students consistently featured a rich array of engagement strategies, making them more interactive and persuasive." 
Human writers sprinkled their work with moments that addressed the reader, building a sense of dialogue. "They were full of rhetorical questions, personal asides, and direct appeals to the reader - all techniques that enhance clarity, connection, and produce a strong argument." "The ChatGPT essays, on the other hand, while linguistically fluent, were more impersonal. The AI essays mimicked academic writing conventions but they were unable to inject text with a personal touch or to demonstrate a clear stance," Hyland explained. Without questions or vivid commentary, the machine output felt flat. "They tended to avoid questions and limited personal commentary. Overall, they were less engaging, less persuasive, and there was no strong perspective on a topic." "This reflects the nature of its training data and statistical learning methods, which prioritize coherence over conversational nuance," noted Hyland. Even so, the authors do not reject AI tools. They see them as potential tutors when used openly. "When students come to school, college or university, we're not just teaching them how to write, we're teaching them how to think - and that's something no algorithm can replicate," said Hyland. The researchers urge teachers to design process-based tasks that require drafts and reflection - steps no chatbot can provide. Training students to spot engagement markers, they add, will sharpen both writing and detection skills. The study lands as detection software races to catch up with generative models. Commercial tools still stumble on hybrid texts that mix human and machine sentences. For now, stylistic clues give teachers a slim but useful edge, though the target may shift as AI absorbs more conversational cues. Coursework must remain proof of independent thought. If essays lose that role to AI, qualification systems wobble. By showing where AI still falters, the study offers data to shape new safeguards while keeping space for honest, imaginative student voices. 
The authors place the project within a wider push for digital literacy. Students meet machine text in news feeds, search results, and chat apps long before they step into lecture halls. Teaching them to ask who wrote a sentence and why it was written is now central to education. Writing with a visible stance trains that habit. It also makes learners less willing to accept anonymous prose at face value.
[6]
How AI is reshaping student writing
A new study from the University of East Anglia compares essays written by students and ChatGPT, finding that while AI-generated essays are coherent, they lack the personal touch and engagement strategies of human-written work.
A groundbreaking study from the University of East Anglia (UEA) has shed light on the ongoing debate between AI-generated and human-written essays. The research, published in the journal Written Communication, compared 145 essays written by university students with an equal number generated by ChatGPT [1].
While ChatGPT demonstrated impressive linguistic fluency and adherence to academic writing conventions, it fell short in a crucial aspect - personal engagement. Professor Ken Hyland from UEA's School of Education and Lifelong Learning noted that student essays consistently featured a rich array of engagement strategies, making them more interactive and persuasive [2].
The study focused on "engagement markers" such as rhetorical questions, personal asides, and direct appeals to the reader. These elements, abundant in student essays, were largely absent in AI-generated content. As a result, the ChatGPT essays were found to be less engaging, less persuasive, and lacking a strong perspective on the topic [3].
Despite these findings, the study does not dismiss the role of AI in education. Instead, it suggests that tools like ChatGPT should be used as teaching aids rather than shortcuts. This perspective aligns with some educators who see AI as an opportunity to enhance the writing process [4].
A February 2025 report from the Higher Education Policy Institute found that 92% of university students are using AI in some form. The top five uses for students were writing-centered, including starting papers, summarizing texts, and brainstorming creative projects [4].
Researchers at Kennesaw State University have developed the Rhetorical Prompting Method, a framework for students to interact with AI chatbots while maintaining their agency in the writing process. This method emphasizes using precise language, critical thinking, and intentional revision when working with AI [4].
The study's findings could help educators identify machine-generated essays, addressing concerns about academic cheating. By recognizing the lack of personal engagement in AI-written texts, teachers may be better equipped to spot suspicious assignments [5].
As AI continues to evolve, the line between human and machine writing may become increasingly blurred. This study underscores the importance of fostering critical literacy and ethical awareness in the digital age. It also highlights the need for education systems to adapt, focusing on teaching students not just how to write, but how to think - a skill that, as Professor Hyland notes, "no algorithm can replicate" [1].
Summarized by Navi