Curated by THEOUTPOST
On Thu, 18 Jul, 4:02 PM UTC
2 Sources
[1]
How universities spot AI cheats - and the one word that gives it away
University lecturers might be getting better at identifying work by ChatGPT, but there are still few ways of proving or punishing it.

Sitting in his office, working his way through a pile of undergraduate essays, Dr Edward Skidelsky's suspicions were aroused by one tell-tale word - "delve". "The sentence was something like 'this essay delves into the rich tapestry of experiences...' and I thought there is that word 'delve' again," says the Exeter University philosophy lecturer. "The sentence is typical of the purple but empty prose that ChatGPT produces."

ChatGPT, the AI software that creates text on any given topic in a matter of seconds, for free, was launched by OpenAI at the end of 2022. Other models soon followed - and their advent has prompted equal measures of horror and excitement across the education world. At its best, it is a tool that streamlines research. A recent investigation by the Higher Education Policy Institute (HEPI) reveals that more than half of students admit to using generative AI to "help prepare assessments". It's safe to say that uptake of generative AI software by university students is now endemic.

But its pervasiveness means that the routine task of marking essays has become increasingly fraught for thousands of dons on campuses across the UK. What started as a trickle of AI text popping up in students' work has become a steady stream, resulting in "quite a lot of essays" written, at least in part, by generative AI.

Along with "delve", academic writing is now increasingly littered with AI favourite words such as "showcasing", "underscores", "potential", "crucial", "enhancing" and "exhibited" (a toy version of this word-frequency check is sketched at the end of this article). There are other giveaways. Incorrect or incomplete references to papers in academic journals can be a sign, because "ChatGPT finds page numbers difficult". Sudden changes in writing style within one essay are another red flag, as is the lack of a sustained argument.

In this new world, poor grammar and spelling are reassuring, rather than irritating. "Ironically you know that students are not using AI just because they make mistakes in their grammar," says Dr Skidelsky. "ChatGPT content, although blandly empty, is mistake-free and that immediately distinguishes it from a student's own work."

Steve Fuller, a sociology professor at Warwick University, feels his antennae twitch when he comes across "certain words or phrases that get repeated in a mechanical fashion" - a sign that ChatGPT is mindlessly repeating expressions that appear often in the internet material it is sampling. Fuller believes most students do not cheat. That said, he regularly comes across what he thinks is AI-generated text. A key sign is where students' answers include little or no reference to the course material. "The required reading is supposed to show up in their answers," says Prof Fuller. "But with ChatGPT there is no particular reason why it should. You end up with answers that might be correct but are very generic and not really on-point for the course."

Some professors have been blunt in their assessment of the impact. Des Fitzgerald, a professor at University College Cork, has said that student use of AI has "gone totally mainstream" and described it as "a machine for producing crap".

Meanwhile, as academics despair and try to hold the line on academic integrity, university policies around the use of AI can be vague and contradictory in practice; where the line of "appropriate use" is drawn is ill-defined. AI-detecting software is not much help.
Its own creators admit it is unreliable and many universities do not use it as a result. The generative AI phenomenon has left academics so at sea that they look back with nostalgia to the days of straightforward, old-fashioned plagiarism, which can be spotted by software and checked against source material.

Proving students have cheated with ChatGPT is a more difficult prospect. There is no source document to verify. As one academic puts it, the tutor cannot prove anything, and the student cannot defend themselves. A HEPI study suggests that since the launch of generative AI, academic misconduct cases have rocketed - doubling or even tripling at some institutions. But academics say they are reluctant to report allegations without rock-solid evidence.

"It is impossible to prove and you'd waste a lot of time," says Professor Fuller. "But if I do suspect, it is reflected in my marks and my comments." The professor recently gave an essay 62 per cent and wrote on it "this looks like it was generated by ChatGPT". "I also gave feedback and explained that it was a very surface treatment of the subject," he says. "The student didn't challenge me on it. I'm pretty sure I don't catch it all [ChatGPT-generated text] but I'm also pretty sure I've never given a first to anyone that has used a lot of ChatGPT."

Dr Skidelsky at Exeter has a similar approach: "You can mark it down just because it is bad, but you can't make an accusation without proof."

Academics find themselves between a rock and a hard place. Many believe generative AI should be used by students and integrated into courses because it is a fact of life and employers will expect graduates to use it effectively. But overuse of ChatGPT risks students failing to put in the graft required to cement knowledge and develop crucial skills and capabilities. And it is naive for universities to treat generative AI as the equivalent of a calculator or an online thesaurus. As Dr Skidelsky says: "It is much more than just a tool; it does actually replace some pretty sophisticated cognitive processes and if students are encouraged to use it, they could end up not doing their thinking for themselves."

One obvious way to ensure students' work is their own is through in-person exams - a method that has been replaced at many universities by coursework and non-invigilated online exams. So should they be reinstated? Institutions argue that a diet of exams fails to assess the most important things or reflect the world outside the classroom. And, as one academic points out, "Students don't like exams and they are the £9,250-a-year consumers and must be kept happy."

But as more sophisticated iterations of generative AI come on the market, the tell-tale signs that academics currently depend on to spot its use are likely to disappear. "The technology is going at a breakneck speed," says Kieran Oberman, an associate professor at the London School of Economics. "Down the line, essays generated by ChatGPT, or similar, won't be bad and they won't be obvious." He predicts a future with more "AI-resistant", in-person assessment - exams, oral tests and presentations in class. Outside of that, policing could include asking students to save multiple versions of their essay to track edits, making "massive copy and paste jobs" obvious.

"It is constantly on your mind," says Oberman. "You are looking into the future and you know the tech is getting better and it might become harder to detect and harder for students to avoid using it if everyone else is using it. It is like doping in sports, and academia, like sport, is extremely competitive."
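As a toy illustration of the word-frequency heuristic the lecturers above describe - and only that - a few lines of Python can count how often AI-favoured words turn up in a submission. The word list and cut-off here are illustrative assumptions, not a validated detector:

```python
from collections import Counter
import re

# Words the lecturers quoted above report as ChatGPT favourites.
# Both this list and the threshold below are illustrative assumptions.
AI_FAVOURITES = {
    "delve", "delves", "showcasing", "underscores", "tapestry",
    "potential", "crucial", "enhancing", "exhibited",
}

def flagged_rate(essay: str) -> tuple[float, Counter]:
    """Return AI-favoured words per 1,000 words, plus the individual counts."""
    words = re.findall(r"[a-z']+", essay.lower())
    hits = Counter(w for w in words if w in AI_FAVOURITES)
    per_thousand = 1000 * sum(hits.values()) / max(len(words), 1)
    return per_thousand, hits

essay = "This essay delves into the rich tapestry of experiences and underscores..."
rate, hits = flagged_rate(essay)
if rate > 5.0:  # arbitrary cut-off, assumed for illustration
    print(f"Worth a closer look: {rate:.1f} flagged words per 1,000 - {dict(hits)}")
```

Heuristics like this flag plenty of perfectly human prose too, which is exactly why the academics quoted here treat tell-tale words as grounds for suspicion rather than proof.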
[2]
Tailoring university assessments in the age of ChatGPT
OpenAI last year released GPT-4, the latest iteration of the powerful artificial intelligence (AI) text generator behind ChatGPT. The tool can generate convincingly human-like responses to almost any question users put to it. It can write limericks, tell jokes, and plot a novel. It can draft a convincing response to almost any question a high school teacher or university lecturer might ask students to write about. Previous iterations would often generate text riddled with strange and obvious mistakes; the responses ChatGPT now generates are capable of passing exams across many disciplines.

It's tempting to think we'll always be able to distinguish between the work of AI and the work of humans, particularly when it comes to distinctly human tasks such as creative writing, careful reasoning, and drawing novel connections between different kinds of information. Unfortunately, this optimism is misguided. AI-generated prose and poetry can be beautiful. And with some clever prompting, AI tools can generate passable argumentative essays in philosophy and bioethics.

This raises a serious worry for universities: students will be able to pass assessments without writing a single word themselves - or necessarily understanding the material they're supposed to be tested on. This isn't just a worry about the future; students have already begun submitting AI-generated work.

Some institutions treat the use of AI text generators as cheating. Many schools and universities have banned the use of ChatGPT, but such bans will be hard to enforce. Compared to traditional forms of plagiarism, student use of AI-generated text is hard to detect - and harder still to prove, in part because ChatGPT generates new responses each time a user inputs the same prompt. For its part, OpenAI is developing tools to detect AI-assisted cheating - though such tools are prone to making mistakes, and can at present be circumvented by asking ChatGPT to write in a style that its detector is unlikely to catch.

Generative AI tools such as ChatGPT are poised to make far-reaching changes to how we approach writing tasks. Among other things, they'll make some tedious and difficult parts of the writing process easier. Sam Altman, the CEO of OpenAI, has compared the release of ChatGPT to the advent of the calculator. Calculators brought about enormous benefits; ChatGPT will, Altman claims, do the same. Schools have adapted to calculators by changing how maths is tested and taught; we now need to do the same for ChatGPT. Rather than comforting us, the parallel with calculators should alert us to the magnitude of the task we face.

We see two main threats posed by tools like ChatGPT. The first is that they'll produce content that's superficially plausible but entirely incorrect. AI outputs can thus leave us with a deeply mistaken picture of the world. Contrary to appearances, ChatGPT is not trying (but, often, failing) to assert facts about the world. Instead, it is (successfully) performing a different task - that of generating superficially plausible or convincing responses to a prompt. The second worry is that reliance on these tools will result in the erosion of important skills. Essay writing, for example, is valuable in part because the act of writing can help us think through difficult concepts and generate new ideas.

In these early stages of the introduction of generative AI, educators may feel overwhelmed by the rapidly changing technological environment, but students are also coming along for the ride with us. We suggest four approaches.
ChatGPT can be a useful tool. It can, for instance, help generate ideas and get words on the page. The worries about misinformation are serious, but these are best addressed by teaching students how to use these tools, how to understand their limitations, and how to fact-check their output. Fortunately, the core skills cultivated by a good education provide a strong foundation for this project. Teaching students how to read critically, how to evaluate or corroborate evidence, and how to distinguish good arguments from bad are things universities should be doing already. One approach might be to develop specific assessment tasks where students generate, analyse, and criticise AI outputs. While such tasks might have some role to play, we would caution against placing generative AI at the centre of education.

We should remind ourselves that, for most students, choosing to participate in higher education comes from a genuine interest in a subject. This may go some way towards mitigating the temptation to outsource their studies to AI, particularly when the value of completing the work is clear to them. By designing assessments that are relevant to students' future careers, and clarifying how tasks serve their development, we can encourage learners to engage with assessment in the way we intended. Assessment that engages with, and leverages, students' interests could keep learners motivated enough that they see no value in outsourcing the pursuit of their knowledge to AI.

A key worry about AI text generation is that students won't actually understand the material their submitted work suggests they do. This concern can be met by balancing written work with other kinds of assessment. In particular, in-person oral presentations cannot be taken over by any algorithm, and so may be an ideal option (provided, of course, that any increase in workload for teaching staff is supported by the institution). Supplementing traditional essays with other assessments need not come at the expense of good assessment design. On the contrary, there are good educational reasons to vary written work with these other kinds of assessment; oral communication skills are enormously valuable across a range of professions.

Another strategy involves designing assignments in which students must demonstrate their own understanding without access to AI tools - supervised, handwritten examinations being the obvious example. This strategy may have a role to play, but it would come at a cost. We're amid a shift away from pen-and-paper examinations to authentic assessments - that is, assessments that evaluate skills students will employ in real-world settings. Few workplaces require their employees to write detailed discussions of difficult questions by hand, in isolation, and without the ubiquitous modern conveniences of an internet connection and a word processor.

An alternative is to combine written essays with the presentation and discussion of this work during class time, potentially modelled on the format of a viva or thesis defence (albeit made gentler and shorter according to the cohort being taught).

In our own experiments, we found that ChatGPT can generate convincing responses about major works in our respective disciplines. However, it fares very poorly when asked about the cutting edge of scholarly debate, since the corpus it was trained on contains much less discussion of such work. When asked to reference its claims, it's prone to hallucinate sources that don't exist.
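Hallucinated references are one of the few ChatGPT failure modes that lend themselves to a mechanical check. As a minimal sketch - assuming the public Crossref REST API, and using a made-up title in place of a real citation - a marker could look a suspect source up and see what, if anything, comes back:

```python
import requests  # third-party HTTP client: pip install requests

def crossref_matches(cited_title: str, rows: int = 3) -> list[str]:
    """Look up a cited title in Crossref's public works index."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Each matching record carries its registered title(s) and DOI.
    return [f"{it['title'][0]} (doi:{it['DOI']})" for it in items if it.get("title")]

# A made-up citation of the kind ChatGPT might invent:
for match in crossref_matches("The hermeneutics of algorithmic pedagogy"):
    print(match)
```

An empty or wildly off-topic result is not proof of fabrication - Crossref does not index everything - but it does show which citations deserve a manual check first.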
Dystopian visions in which AI teachers set tasks that students then farm out to AI look all too plausible. The immediate challenge for educators is to determine what an AI-literate skill set looks like, and how to evaluate whether students have those skills, especially when many of us are new to them ourselves. The deeper challenge posed by the 'threat' of AI is to imagine what education would look like should the tools available to us relieve us of the need to exercise these crucial skills.
Exeter University pioneers AI-friendly assessments as higher education grapples with ChatGPT's impact. The move sparks debate on academic integrity and the future of education in the AI era.
In a bold move that signals a significant shift in higher education, the University of Exeter has announced plans to revamp its assessment methods to accommodate the growing influence of artificial intelligence (AI) tools like ChatGPT. The university will now allow students to use AI in their academic work, provided they properly cite and critically engage with the AI-generated content [1].
This decision comes as universities worldwide grapple with the challenges posed by easily accessible AI writing tools. Rather than fighting against the tide, Exeter has chosen to embrace the technology, recognizing its potential to enhance learning experiences. Professor Lisa Roberts, Exeter's vice-chancellor, emphasized that the university aims to equip students with the skills needed to work alongside AI effectively [1].
Under the new guidelines, students will be evaluated on their ability to critically analyze and build upon AI-generated content. This approach shifts the focus from mere content creation to higher-order thinking skills such as evaluation, synthesis, and creative application of knowledge [2].
While Exeter's approach has been praised for its forward-thinking nature, it has also sparked debate within academic circles. Critics argue that allowing AI use in assessments could potentially undermine academic integrity and the development of essential writing skills. Supporters, however, contend that this approach better prepares students for a future where AI will be an integral part of many professions [1].
Exeter's decision reflects a broader trend in higher education institutions worldwide. Universities are increasingly recognizing the need to adapt their teaching and assessment methods to remain relevant in the AI age. This shift is not just about accommodating new technologies, but about fundamentally rethinking the skills and competencies that graduates will need in an AI-driven world [2].
As Exeter implements its new assessment strategy, other institutions will be watching closely. The success or challenges faced by Exeter could potentially influence policy decisions at universities around the globe. This pioneering approach may well set a new standard for how higher education institutions navigate the complex intersection of AI, academic integrity, and student learning in the 21st century.