Colleges revive oral exams to combat generative AI as students submit perfect work but can't explain it

Universities across the U.S. are bringing back Ancient Greek-style oral exams as professors notice a troubling pattern: students submit flawless AI-generated assignments but struggle to explain their work. From Cornell to NYU, educators are combining traditional Socratic methods with AI-powered assessments to verify genuine learning and address declining critical thinking skills.

Colleges Revive Oral Exams as Generative AI Reshapes Academia

A crisis is unfolding across higher education as professors confront an unsettling reality: students are submitting perfect homework, yet when asked to explain their work face-to-face, they respond with blank stares. This troubling disconnect has prompted universities including Cornell University, the University of Pennsylvania, and NYU's Stern School of Business to resurrect a testing method as old as Socrates himself: oral exams [1][2].

Chris Schaffer, who teaches biomedical engineering at Cornell University, introduced what he calls an "oral defense" last semester: an assessment involving no laptop, no chatbot, and no technology of any kind. "You won't be able to AI your way through an oral exam," Schaffer explains, capturing the sentiment driving this pedagogical shift [1]. Educators are no longer wondering whether students will use generative AI for homework. The pressing question now is how to verify comprehension and determine what students are actually learning.

The Decline in Critical Thinking Skills Prompts Action

As generative AI tools have grown more sophisticated since ChatGPT's launch in 2022, college instructors across the U.S. are documenting troubling new trends. Take-home essays and written assignments arrive flawless, but students cannot defend the material when questioned. Emily Hammer, an associate professor of Middle Eastern Languages and Cultures at the University of Pennsylvania, now pairs oral exams with written papers in her seminar classes to address this gap [1].

"It comes across as if we're trying to prevent cheating," Hammer acknowledges. "That's not why we're doing this. We're doing this because students are actually losing skills, losing cognitive capacity and creativity." While she forbids AI use on all writing assignments, Hammer tells her class she knows enforcement is impossible. However, students who haven't written their papers themselves will find defending the material face-to-face "a very stressful situation" [2].

The long-term impact of AI use on critical thinking remains uncertain, but educators worry that students increasingly view the hard work of thinking as optional. Bruce Lenthall, executive director of Penn's Center for Teaching and Learning, describes Hammer's approach as part of "a massive shift toward in-person assessments," both written and oral, at the Ivy League institution [1].

Ancient Greek-Style Oral Exams Meet Modern Assessment Methods

In-person oral assessments are a departure from standard modern American undergraduate practice, though they remain common at some European universities, such as those in England using the Oxbridge tutorial system, where students meet faculty for weekly discussions [2]. Some U.S. colleges began moving toward oral exams during the COVID-19 pandemic to address concerns about online cheating, but interest has intensified dramatically since ChatGPT's emergence.

Engineering professor Huihui Qi at UC San Diego launched a three-year study during the pandemic on how to scale oral exams. Several universities have since invited her to lead faculty workshops or discuss her research, signaling growing institutional interest in adapting curricula to generative AI [1]. Penn is among a small but growing number of universities running faculty workshops specifically on oral exams.

At New York University, several types of oral assessment are proliferating. More faculty are requiring office-hours visits, assigning presentations, and cold-calling students in class. Clay Shirky, vice provost for AI and technology in education, notes that instructors are saying, "I need to look my students in the eye and ask, 'Do you know this material?'" [2].

Fighting Fire with Fire: AI-Powered Assessments Emerge

Panos Ipeirotis, a professor at NYU's Stern School of Business, has taken an unexpected approach to ensuring genuine learning: he's using AI-powered assessments to combat AI-generated assignments. Last semester, Ipeirotis unveiled an AI-powered oral exam as the final for his class on AI product management, calling it "fighting fire with fire" [1].

Students log in from home at any time that fits their schedule. A voice cloned from a business school professor greets them, asks for identification, and conducts the exam. The chatbot, built with ElevenLabs, a company whose generative AI voice agents also conduct job interviews, starts with questions about a final group project and drills into details based on each student's answers. If students stumble, the AI agent provides clues along with criticism and positive feedback. Ipeirotis grades the exams separately, also with AI assistance [2].

"We wanted to check: Do you know what your team did? Were you a free rider? Did you outsource everything to AI?" Ipeirotis explains. Students in the class this semester are redesigning the AI agent to smooth out some kinks, and Ipeirotis plans to use it in all his future classes. "I want oral exams everywhere now. I want to pair it with every single written assignment," he states. "I don't trust written assignments anymore to be the result of actual thinking" [1].

Feedback from students last semester was mixed. Business major Andrea Liu found the chatbot's voice surprisingly human, but the conversation felt choppy, with odd pauses. It asked multiple questions at once, which was confusing, and hearing a voice without seeing a person felt jarring [2].

What This Means for Problem-Solving Skills and Academic Cheating

The shift toward oral exams signals a fundamental rethinking of assessment methods in academia. Educators are grappling with how to preserve problem-solving skills and authentic learning in an era where AI can produce polished written work instantly. The Socratic method, once the foundation of classical education, is being rediscovered as a tool for verifying whether students possess genuine understanding or merely access to sophisticated AI tools.

As more institutions adopt these practices, students should expect increased scrutiny of their comprehension through face-to-face interactions. The trend suggests a future where written assignments alone may no longer suffice as proof of learning, requiring students to demonstrate mastery through verbal explanation and real-time questioning. Whether traditional in-person oral assessments or AI-powered variations become the norm, the message from educators is clear: authentic engagement with course material cannot be outsourced to algorithms.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited