2 Sources
[1]
I got an AI to impersonate me and teach me my own course - here's what I learned about the future of education
University of Oxford provides funding as a member of The Conversation UK.

Imagine you had an unlimited budget for individual tutors offering hyper-personalised courses that maximised learners' productivity and skills development. This summer I previewed this idea - with a ridiculous and solipsistic test. I asked an AI tutor agent to play the role of me, an Oxford lecturer on media and AI, and teach me a personal master's course based entirely on my own work.

I set up the agent via an off-the-shelf ChatGPT tool hosted on the Azure-based Nebula One platform, with a prompt to research and impersonate me, then build personalised material based on what I already think. I didn't tell the large language model (LLM) what to read or do anything else to enhance its capabilities, such as giving it access to learning materials that aren't publicly available online.

The agent's course in media and AI was well structured - a term-long, original six-module journey into my own collected works that I had never devised, but admit I would have liked to. It was interactive and rapid-fire, demanding mental acuity via regular switches in format. It was intellectually challenging, like good Oxford tutorials should be.

The agent taught with rigour, giving instant responses to anything I asked. It had a powerful understanding of the fast-evolving landscape of AI and media through the same lens as me, but had done more homework. This was apparently fed by my entire multimedia output - books, speeches, articles, press interviews, even university lectures I had no idea had even been recorded, let alone used to train GPT-4 or GPT-5.

The course was a great learning experience, even though I supposedly knew it all already. So in the inevitable student survey, I gave the agentic version of me well-deserved, five-star feedback.

For instance, in a section discussing the ethics of non-player characters (NPCs) in computer games, it asked: "If NPCs are generated by AI, who decides their personalities, backgrounds or morals? Could this lead to bias or stereotyping?" And: "If an AI NPC can learn and adapt, does it blur the line between character and 'entity' [independent actor]?"

These are great, philosophical questions, which will probably come to the fore when and if Grand Theft Auto 6 comes out next May. I'm psyched that the agentic me came up with them, even if the real me didn't.

Agentic me also built on what real me does know. In film, it knew about bog-standard Adobe After Effects, which I had covered (it's used for creating motion graphics and visual effects). But it added Nuke, a professional tool used to combine and manipulate visual effects in Avengers, which (I'm embarrassed to say) I had never heard of.

The course reading list

So where did the agent's knowledge of me come from? My publisher Routledge did a training data deal with OpenAI, which I guess could cover my books on media, AI and live experience. Unlike some authors, I'm up for that. My books guide people through an amazing and fast-moving subject, and I want them in the global conversation, in every format and territory possible (Turkish already out, Korean this month).

That availability has to extend to what is now potentially the most discoverable "language" of all, the one spoken by AI models. The priority for any writer who agrees with this should be AI optimisation: making their work easy for LLMs to find, process and use - much like search engine optimisation, but for AI.
To build on this, I further tested my idea by getting an agent powered by China's DeepSeek to run a course on my materials. When I found myself less visible in its training corpus, it was hard not to take offence. There is no greater diss in the age of AI than a leading LLM deeming your book about AI irrelevant.

When I experimented with other AIs, they had issues getting their facts straight, which is very 2024. From Google's Gemini 2.5 Pro I learned hallucinatory biographical details about myself, like a role running the media company The Runaway Collective. When I asked Elon Musk's Grok what my best quote was, it said: "Whatever your question, the answer is AI." That's a great line, but Google DeepMind's Nobel-winning Demis Hassabis said it, not me.

Where we're heading

This whole, self-absorbed summer diversion was clearly absurd, though not entirely. Agentic self-learning projects are quite possibly what university teaching actually needs: interactive, analytical, insightful and personalised. And there is some emerging research around the value. A German-led study found that AI-generated tuition helped to motivate secondary school students and benefited their exam revision.

It won't be long before we start to see this kind of real-time AI layer formally incorporated into school and university teaching. Anyone lecturing undergraduates will know that AI is already there. Students use AI transcription to take notes. Lecture content is ripped in seconds from these transcriptions, and will have trained a dozen LLMs within the year. To assist with writing essays, ChatGPT, Claude, Gemini and DeepSeek/Qwen are the sine qua non of Gen Z projects.

But here's the kicker. As AI becomes ever more central to education, the human teacher becomes more important, not less. They will guide the learning experience, bringing published works to the conceptual framework of a course, and driving in-person student engagement and encouragement. They can extend their value as personal AI tutors - via agents - for each student, based on individual learning needs.

Where do younger teachers, who don't have a back catalogue to train LLMs, fit in? Well, the younger the teacher, the more AI-native they are likely to be. They can use AI to flesh out their own conceptual vision for a course by widening the research beyond their own work, prompting the agent on what should be included.

In AI, two alternate positions are often simultaneously true. AI is both emotionally intelligent and tone deaf. It is both a glorified text predictor and a highly creative partner. It is costing jobs, yet creating them. It is dumbing us down, but also powering us up. So too in teaching. AI threatens the learning space, yet can liberate powerful interaction. A prevailing wisdom is that it will make students dumber. But perhaps AI could actually be unlocking for students the next level of personalisation, challenge and motivation.
[2]
I got an AI to impersonate me and teach me my own course. Here's what I learned about the future of education
This article is republished from The Conversation under a Creative Commons license. Read the original article.
An Oxford lecturer on media and AI conducts an experiment using AI to teach himself his own course, revealing insights into the potential future of personalized education and the evolving role of human teachers in an AI-integrated learning environment.
An Oxford lecturer specializing in media and AI recently conducted an intriguing experiment to explore the potential of AI in education. The lecturer tasked an AI tutor agent with impersonating him and teaching a personal master's course based entirely on his own work [1][2]. This unique approach aimed to test the capabilities of AI in delivering hyper-personalized education.
The experiment utilized an off-the-shelf ChatGPT tool hosted on the Azure-based Nebula One platform. The AI was prompted to research and impersonate the lecturer, then build personalized material based on his existing knowledge and publications [1]. Importantly, the AI was not given access to any non-public learning materials, relying solely on publicly available information.
The AI-generated course proved to be well-structured, featuring a term-long, six-module journey through the lecturer's collected works. The content was interactive, intellectually challenging, and demanded mental acuity through regular format switches [2]. The AI tutor demonstrated a comprehensive understanding of the rapidly evolving landscape of AI and media, often surpassing the lecturer's own knowledge in certain areas.
The AI's knowledge appeared to be sourced from the lecturer's entire multimedia output, including books, speeches, articles, press interviews, and even previously unknown recorded university lectures [1]. This comprehensive data set allowed the AI to create a course that was both familiar and novel to the lecturer himself.
During the course, the AI raised thought-provoking ethical questions, particularly in a section discussing non-player characters (NPCs) in computer games. It posed questions about AI-generated NPCs, their personalities, and the potential for bias or stereotyping [2]. These philosophical inquiries demonstrated the AI's ability to engage in complex discussions beyond mere information regurgitation.
The lecturer extended his experiment by testing other AI models, including China's DeepSeek, Google's Gemini 2.5 Pro, and Elon Musk's Grok [1]. This comparison revealed varying levels of accuracy and knowledge about the lecturer's work across different AI platforms, highlighting the current limitations and inconsistencies in AI-generated content.
This experiment, while admittedly self-absorbed, offers valuable insights into the potential future of education. The AI-generated course exemplified the kind of interactive, analytical, and personalized learning experience that could benefit university teaching [2]. Recent research has shown that AI-generated tuition can motivate secondary school students and improve exam revision outcomes [1].
As AI becomes increasingly integrated into education, the role of human teachers is likely to evolve rather than diminish. Teachers will be crucial in guiding the learning experience, contextualizing course material, and fostering in-person student engagement [2]. They may also leverage AI to provide personalized tutoring experiences tailored to individual student needs.
While the experiment showcased the potential of AI in education, it also highlighted challenges such as data privacy, the need for AI optimization in academic writing, and the importance of fact-checking AI-generated content [1]. As education systems adapt to incorporate AI, addressing these issues will be crucial for maintaining the integrity and effectiveness of learning experiences.