Curated by THEOUTPOST
On Thu, 24 Apr, 4:01 PM UTC
3 Sources
[1]
As ChatGPT scores B- in engineering, courses face shake-up
Now that AI is invading classrooms and homework assignments, students need to learn reasoning more than ever.

Students are increasingly turning to AI to help them with coursework, leaving academics scrambling to adjust their teaching practices or debating how to ban it altogether. But one professor likens AI to the arrival of the calculator in the classroom, and thinks the trick is to focus on teaching students how to reason through different ways to solve problems, and to show them where AI may lead them astray.

Over the past two years, Melkior Ornik, assistant professor in the Department of Aerospace Engineering at the University of Illinois Urbana-Champaign, said both colleagues and students have fretted that many students are using AI models to complete homework assignments. So Ornik and PhD student Gokul Puthumanaillam came up with a plan to see if that was even possible.

Ornik explained, "What we said is, 'Okay, let's assume that indeed the students are, or at least some students are, trying to get an amazing grade or trying to get an A without any knowledge whatsoever. Could they do that?'"

The academics ran a pilot study in one of the courses Ornik was teaching - a third-year undergraduate course on the mathematics of autonomous systems - to assess how a generative AI model would fare on course assignments and exams. The results are documented in a preprint paper, "The Lazy Student's Dream: ChatGPT Passing an Engineering Course on Its Own."

"In line with our concept of modeling the behavior of the 'ultimate lazy student' who wants to pass the course without any effort, we used the simplest free version of ChatGPT," said Ornik. "Overall, it performed very well, receiving a low B in the course."

But the AI model's performance varied with the type of assignment. Ornik explained, "What I think was interesting in terms of thinking about how to adapt in the future is that while on average it did pretty decently - it got a B - there was still a significant disparity between the different types of problems that it could deal with or it couldn't deal with."

With closed-form problems, like multiple-choice questions or making a calculation, OpenAI's ChatGPT, specifically GPT-4, did well, scoring almost 100 percent on those sorts of questions. But when deeper thought was required, ChatGPT fared poorly.

"Questions that were more like 'hey, do something, try to think of how to solve this problem and then write about the possibilities for solving this problem and then show us some graphs that show whether your method works or doesn't,' it was significantly worse there," said Ornik. "And so in these what we call 'design projects' it got like a D-level grade."

As Ornik sees it, the results offer some guidance about how educators should adjust their pedagogy to account for the expected use of AI on coursework. The situation today, he argues, is analogous to the arrival of calculators in classrooms.

"Before calculators, people would do these trigonometric functions," Ornik explained. "They would have these books for logarithmic and trigonometric functions that would say, 'oh, if you're looking for the value of sine of 1.65, turn to page 600 and it'll tell you the number.' Then of course that kind of got out of fashion and people stopped teaching students how to use this tool because now a bigger beast came to town.
It was the calculator, and it was maybe not perfect but decently competent. So we said, 'okay, well, I guess we'll trust this machine.'"

"And so the real question that I want to deal with - and this is not a question that I can claim that I have any specific answer to - is what are things worth teaching? Is it that we should continue teaching the same stuff that we do now, even though it is solvable by AI, just because it is good for the students' cognitive health?

"Or is it that we should give up on some parts of this and we should instead focus on these high-level questions that might not be immediately solvable using AI? And I'm not sure that there's currently a consensus on that question."

Ornik said he's had discussions with colleagues from the University of Illinois' College of Education about why elementary school students are taught to do mental math and to memorize multiplication tables.

"The answer is, well, this is good for the development of their brain, even though we know that they will have phones and calculators," he said. "It is still good for them just in terms of their future learning capability and future cognitive capabilities to teach that.

"So I think that this is a conversation that we should have. What are we teaching and why are we teaching it in this kind of new era of wide AI availability?"

Ornik said he sees three strategies for dealing with the issue. One is to treat AI as an adversary and conduct classes in a way that attempts to preclude the use of AI. That would require measures like oral exams and assignments designed to be difficult to complete with AI. Another is to treat AI as a friend and simply teach students how to use AI.

"Then there's the third option, which is perhaps the option that I'm kind of closest to, which is AI as a fact," said Ornik. "So it's a thing that's out there that the students will use outside of the bounds of oral exams or whatever. In real life, when they get into employment, they will use it. So what should we do in order to make that use responsible? Can we teach them to critically think about AI instead of either being afraid of it or just swallowing whatever it produces kind of without thinking?

"There's a challenge there. Students tend to over-trust computational tools, and we should really be spending our time saying, 'hey, you should use AI when it makes sense, but you should also be sure that whatever it tells you is correct.'"

It might seem premature to take AI as a given in the absence of a business model that makes it sustainable - AI companies still spend more than they make. Ornik acknowledged as much, noting that he's not an economist and therefore can't predict how things might go. He said the present feels a lot like the dot-com bubble around the year 2000.

"That's certainly the feeling that we get now, where everything has AI," he said. "I was looking at barbecue grills - the barbecue is AI-powered. I don't know what that really means. From the best that I could see, it's the same technology that has existed for like 30 years. They just call it AI."

Ornik also pointed to unresolved concerns related to AI models, like data privacy and copyright. While those issues get sorted, Ornik and a handful of colleagues at the University of Illinois are planning to collect data from a larger number of engineering courses with the assumption that generative AI will be a reality for students.
"We are now planning a larger study covering multiple courses, but also an exploration of how to change the course materials with AI's existence in mind: what are the things still worth learning?" One of the goals, he explained, is "to develop a kind of critical thinking module, something that instructors could insert into their lectures that spends an hour or two telling students, 'hey there's this great thing it's called ChatGPT. Here are some of its capabilities, but also it can fail quite miserably. Here are some examples where it has failed quite miserably that are related to what we're doing in class.'" Another goal is to experiment with changes in student assessments and in course material to adapt to the presence of generative AI. "Quite likely there will be courses that need to be approached in different ways and sometimes the material will be worth saving but we'll just change the assignments," Ornik said. "And sometimes maybe the thinking is, 'hey, should we actually even be teaching this anymore?'" ®
[2]
Opinion: Leadership needed as AI and education at a crossroads
In a brand new Silicon Republic series, Jonathan McCrea looks beyond the headlines at how AI is transforming business, work, education and research. This week he takes on AI in higher education.

Last year in Dublin, I stood in a lecture theatre training a group of third-level educators on the use of AI in academia and research. As part of the session, I asked if any of them had received coursework from students that had been generated by an AI but passed off as their own. Nearly every single person in the room raised a hand. Some raised two.

It was, of course, by then common knowledge that education and AI were at a crossroads, but it was still quite startling to see all of those hands twitching in the air, given how seriously universities typically view plagiarism in student assessment. Interestingly, the tell-tale signs were not hallucinations or factual errors, but instead a dollop of overly grandiose prose, some American English spellings, or a sudden uptick in the clarity and quality of writing from students who - let's say - had not demonstrated such intellectual depth or eloquence earlier in the term. Many who spoke were concerned about the challenges AI posed to their ability to evaluate their students' comprehension.

It's less than a year later. I think it's fair to say that, given access to any of the current generation of large language models (LLMs), even a mediocre student today could produce a 25,000-word thesis - complete with citations - in less than a day. I think it's also probably fair to say that the final output of this single day's work could be very difficult to separate from the work of a diligent and brilliant young mind who had devoted years to the study of the subject.

The problem of hallucination

The problem here is self-evident: if students can produce PhD-level essays without understanding them - let alone being able to critique their accuracy - that's a big problem. Despite the excellence of newer AI models, they can still make serious mistakes. While hallucination is becoming rarer, it is far from extinct. More problematic is the fact that some models will happily take something written on the internet and present it as fact. Marketing copy, four-year-old Reddit reviews and published science can all sit happily side by side in the hierarchy of truth.

Of course, the value of education - the whole point of it - is not just in knowing the right answer to the questions on a final-term paper. The value is in learning how to learn and gaining a deep understanding of a subject. And most importantly, the value is in creating individuals who have critical skills and can think for themselves.

I spoke with Prof Alan Smeaton, who leads the Government of Ireland AI Advisory Council and has just published a report on AI and education. As one of the country's leading authorities on AI, he's a little biased of course, but he is surprisingly upbeat about the issue. The good news, he says, is that third-level institutions are set up to be nimble enough to adapt to this change. Colleges and lecturers have the freedom to design the students' learning experience however they see fit. To do that for the modern world, though, staff in our universities and other higher-education institutions (HEIs) need to embrace these tools rather than bury their heads in the sand, thinking this whole AI thing will blow over. It will not.

Guidance available

There are pockets of proactivity across the country.
University College Dublin has created a clear guide on how students should approach their use of any generative AI (GenAI) applications. Smeaton co-published a free course for Dublin City University staff on AI Literacy. The Centre for the Integration of Research, Teaching and Learning (CIRTL) at University College Cork issued excellent guidance on this problem in a 'Short Guide' for its faculty:

"The more transparency there is around the rationale behind assessment decisions and the lines around acceptable or forbidden AI use, the better students will be able to meet these expectations.

"Because, at the end of the day, AI is a tool and, as such, is neither inherently good nor inherently bad - what matters is how it is used. But to use it effectively - or to follow advice to avoid it completely - students need to understand what AI is, why it is or is not appropriate and, perhaps most importantly, to see the ways their own experience and expertise shape its use and effectiveness."

The guide encourages staff to talk about AI with students, and to explain when and how it should be used. It also suggests ways of adapting assessment to the new world order, such as placing greater emphasis on collaborative work, analytical thinking and personal reflection - or even requiring students to submit annotated drafts that show the steps they took to reach their conclusions.

Confusion remains

These are all great ideas, and similar guidance can be found on nearly every university website if you look closely enough, but the burden of work in learning how to use AI, determining an approach on how to integrate it and communicating all of this to students still seems to lie mostly on the shoulders of the individual instructor. This is tricky for students, who have to deal with variability from one lecturer to another, and the stakes can be high for the students who may get it wrong.

In March 2023, Trinity College Dublin issued guidance that, "If a student generates content from a GenAI tool and submits it as their own work, it is considered plagiarism, which is defined as academic misconduct in accordance with the College Academic Integrity Policy". Yet, as of just a couple of months ago, GenAI use is now permitted by default for Trinity students in all courses, unless otherwise specified. It's confusing, I would imagine, for students and staff alike.

Many of the educators I spoke to on the subject, across various disciplines and institutions, weren't clear on what their role was in this domain. Many don't even use the technology themselves, which makes policing its use an interesting challenge. It's clear to me that we desperately need to raise the AI literacy of educators and students alike so they know how to use AI tools in a reliable, secure, equitable and inclusive manner.

There are myriad ways AI can help learners and educators to excel. Not every college lecturer is inspiring, and not every learner takes in information the same way. AI can be wonderful for student comprehension, of course. With Claude or ChatGPT or Gemini, students can essentially build their own tutor for any subject and interact with it using pictures, voice and text. For example, NotebookLM is a tool that I find learners love because it allows them to upload sources and then create study notes, study plans and even a voice podcast, which is a real stand-out feature. With these new tools, each individual student can personalise how they learn, what they learn and at what pace.
This is possible today using free tools - it's just about knowing how to ask the right questions. Lecturers, armed with domain expertise and a rudimentary knowledge of how to leverage AI, can innovate to build better study materials, games or interactive content that align perfectly with their understanding of the subject matter and desired curriculum.

Beyond study, the power of GenAI to co-create new work is only in its infancy. In the near future, we will come to view agents as collaborators and teammates in research, education and beyond - a new breed of digital colleague.

Leadership needed

We also need leadership from Government and HEIs to recognise that AI is here to stay and that we all need to adapt more quickly to this new reality. The prevailing wind of the last two years has understandably been one of caution. That now needs to give way to pragmatism and seizing the opportunity in front of us. Preparing young adults to work side by side with AI is not just smart: it's vital to their success in the real world.

India Today recently reported that AI skills will be a mandatory part of coursework in India from the age of 8. For what it's worth, this terrifies me a little. I am not someone who blindly believes this wild enthusiasm for AI is a good thing. But it is a thing that is changing our very world. Put plainly, we need to understand GenAI now and wield it - or risk becoming its subjects.

What's needed now is a coordinated national strategy that provides frameworks, funding and training to help educators and students navigate this digital transformation. Without this leadership, we face an inconsistent patchwork of approaches that leaves some learners and institutions behind. The organisations leading the way have shown that adapting to AI doesn't mean abandoning academic rigour - it just means redefining it for a world where human-AI collaboration is the new literacy. The question is no longer whether to embrace these tools, but rather how quickly we can develop the wisdom to use them well.
[3]
How AI, Funding Cuts and Shifting Skills Are Redefining Education -- and What It Means for the Future of Work
In recent weeks, the academic world has been rocked by news that billions of dollars in federal funding have been frozen or withdrawn from some of the most well-known universities in the country. These shifts have disrupted research, derailed planning and shaken the foundation of institutions long dependent on what now feels like a relic of the past: steady, unquestioned government funding.

But this moment is not just about budgets. It's about readiness. The education system -- particularly higher education -- is being tested on all fronts: from declining enrollment to waning employer trust, from outdated course catalogs to rising student skepticism, and most powerfully, from the sudden and exponential rise of artificial intelligence.

As someone who's helped launch AI and emerging technology programs across the globe, established global innovation centers as hubs for learning and development, and worked with companies on enterprise-wide reskilling, I believe this is not the time to panic. This is the time to rebuild. We are at a historic inflection point. The old frameworks are fading. The future is already arriving -- and it won't wait for us to catch up.

A few weeks ago, my 10-year-old son Matthew asked me why he had to memorize historical dates when ChatGPT could give him the answers instantly. He wasn't complaining; he was confused. Why are we being taught to work around the very tools the real world expects us to use?

Then there's my five-year-old, Zachary. He doesn't "use" AI -- he absorbs it. He passively consumes answers from "Cha-Gi-PiPi" (that's what he calls ChatGPT) like it's a magical oracle. He taps the mic, asks it questions about trains or dinosaurs and trusts it completely. To him, this isn't technology -- it's just how knowledge flows. And that's the point: He doesn't question it, contextualize it or challenge it ... yet. He's growing up in a world where AI is normal, automatic and invisible. Which means we -- as educators, innovators and lifelong learners -- must teach the next generation not just how to use AI, but how to think with it.

U.S. undergraduate college enrollment has declined by more than two million students since 2010, according to the National Center for Education Statistics (NCES). In fall 2023 alone, enrollment dropped by another 0.6%, continuing a long-term downward trend. At the same time, employers are steadily shifting toward skills-based hiring and micro-credentials. Meanwhile, learners are turning to YouTube, AI tools, bootcamps and virtual programs that meet them where they are. This is not about convenience. It's about alignment. And while higher education has made major strides in response -- particularly in online learning, industry credentials and AI exploration -- many institutions are still operating within systems that were designed for a different era.

AI is not just another tool. It's a new mental model. Students can now access real-time tutoring, instant content generation, personalized feedback and creative prompts with the swipe of a screen. For them, it's not artificial -- it's ambient. And yet, most education systems are stuck debating whether to ban it, regulate it or ignore it. The risk is that we're preparing students for an analog world that no longer exists.
The World Economic Forum's Future of Jobs Report 2025 estimates that 39% of core job skills will change by 2030, identifying analytical thinking, AI literacy and creativity as critical capabilities. These are not just resume boosters -- they're survival skills. More importantly, the report makes one thing clear: We're still educating for a workforce that no longer exists. One defined by static roles, predictable ladders and siloed knowledge. That era is over, and education must move forward accordingly.

As we face a convergence of AI acceleration, funding disruption and societal change, here are five urgent actions for leaders in education and innovation:

1. Integrate AI thoughtfully and systematically

Yes, schools should be teaching students how to use and challenge AI. And many already are -- I'm fortunate to be collaborating with brilliant minds in academia who are actively piloting AI-powered tools, embedding them into classrooms and reframing what it means to learn. But let's not minimize it: This is hard work. It requires rethinking pedagogy, redesigning assessments and helping educators become co-learners. The institutions leading this change won't just teach AI -- they'll be transformed by it.

2. Redesign learning for exploration, not memorization

In a world where information is infinite, facts are just the beginning. The true value lies in asking better questions, connecting ideas and applying insight. We must shift away from rote memorization toward curricula that nurture curiosity, agility and original thought. And yes, that means assessments have to evolve, too.

3. Scale co-innovation and cross-sector partnerships

Higher education must move beyond internships and advisory boards toward true co-creation with industry. That means working side by side with companies to build relevant, modular, real-world-aligned learning tracks. These partnerships aren't new, but they've never been more necessary. The institutions that succeed will blur the line between campus and career.

4. Use AI to humanize education, not just automate it

AI can streamline grading, flag struggling students, optimize course design and deliver real-time feedback. But its real power lies in what it frees educators to do: mentor, inspire and connect. Let's use AI not to remove the teacher, but to elevate the teacher's role to its most human expression.

5. Champion innovation and entrepreneurship as core, not elective

Innovation and entrepreneurship aren't side projects. They're the engines of resilience. The students who can invent, adapt and build in uncertain conditions will lead every field -- from biotech to business. Every school should be a lab. Every campus, a studio. Because the future won't be handed to us -- we'll have to build it.

That's why I'm launching a new four-part series here on Entrepreneur.com called "The Great Rethink: How AI Is Forcing the Reinvention of Education."

We are not here to preserve what was. We are here to reimagine what's next. If you're a founder, educator, policymaker or learning and development professional, this is your moment. If you're building, exploring, experimenting -- reach out. Share your vision. Ask your big questions. Because who but us will reinvent education? Not with wishful thinking. Not with course catalogs. And certainly not with the kind of funding we once assumed would always be there.
As AI tools like ChatGPT enter classrooms, educators grapple with adapting teaching methods and assessments. The article explores the challenges and opportunities presented by AI in education, from homework assistance to rethinking curriculum design.
As artificial intelligence (AI) tools like ChatGPT become increasingly prevalent in classrooms, educators and institutions are grappling with the challenges and opportunities these technologies present. The integration of AI in education is reshaping traditional teaching methods, assessment strategies, and the very nature of learning itself [1].
A recent study conducted by Melkior Ornik and Gokul Puthumanaillam at the University of Illinois Urbana-Champaign sought to evaluate how well ChatGPT could perform in a third-year undergraduate engineering course. The results were surprising:

- Using only the simplest free version of ChatGPT, the model earned a low B in the course overall.
- It scored almost 100 percent on closed-form problems such as multiple-choice questions and calculations.
- On open-ended design projects that required deeper reasoning, it earned roughly a D.
The study's findings highlight the need for educators to adapt their teaching methods in response to AI's capabilities. Ornik suggests three potential strategies:

- Treat AI as an adversary: design courses around oral exams and assignments that are difficult to complete with AI.
- Treat AI as a friend: simply teach students how to use it.
- Treat AI as a fact: accept that students will use it in real life, and teach them to think critically about its output and use it responsibly.
The widespread use of AI in coursework has raised concerns about academic integrity. Many educators report receiving AI-generated assignments from students, making it difficult to evaluate genuine comprehension [2].
Universities are developing guidelines for AI use in education:

- University College Dublin has created a clear guide on how students should approach generative AI applications.
- Dublin City University offers a free AI Literacy course for staff, co-published by Prof Alan Smeaton.
- University College Cork's CIRTL issued a 'Short Guide' urging transparency around acceptable and forbidden AI use and suggesting AI-aware assessment designs.
- Trinity College Dublin, which in 2023 classed submitting GenAI output as plagiarism, now permits GenAI use by default in all courses unless otherwise specified.
The education system, particularly higher education, faces multiple challenges:

- Billions of dollars in federal funding frozen or withdrawn from prominent US universities
- US undergraduate enrollment down by more than two million students since 2010
- Waning employer trust, with a shift toward skills-based hiring and micro-credentials
- Learners turning to YouTube, AI tools, bootcamps and virtual programs instead of traditional courses
- The sudden and exponential rise of artificial intelligence
The World Economic Forum's Future of Jobs Report 2025 predicts that 39% of core job skills will change by 2030, emphasizing the need for analytical thinking, AI literacy, and creativity [3].
To address these challenges, education leaders should consider:

- Integrating AI thoughtfully and systematically into pedagogy and assessment
- Redesigning learning for exploration rather than memorization
- Scaling co-innovation and cross-sector partnerships with industry
- Using AI to humanize education rather than merely automate it
- Championing innovation and entrepreneurship as core curriculum, not electives
As AI continues to reshape the educational landscape, institutions must adapt quickly to prepare students for a rapidly changing workforce and technological environment. The future of education lies in embracing AI as a tool for enhancing learning while fostering critical thinking and creativity.
References

[1] The Register: "As ChatGPT scores B- in engineering, courses face shake-up"
[2] Silicon Republic: "Opinion: Leadership needed as AI and education at a crossroads"
[3] Entrepreneur: "How AI, Funding Cuts and Shifting Skills Are Redefining Education -- and What It Means for the Future of Work"