For this school year, Google Chrome introduced "Homework Helper," a new artificial intelligence feature aimed at students. But if this tool was meant to help students learn, it deserves an "F."
Here's why: With "Homework Helper" turned on, when a student visits a class website or homework page, a button appears in the address bar, the text field at the top of the Chrome window that displays the URL or web address. Click that button and a text box pops up offering to "solve question number one" or "show steps to solve." The tool then provides step-by-step answers alongside Google search results, and it can answer all the questions on a page at once. In short, Google has introduced a cheat button.
Teachers complained about the button, and Google told The Washington Post it had temporarily paused the feature, without committing to keeping it off. Meanwhile, the same button was added to Google Lens, so it is now usable on any website a student visits. The tool's visual presentation also emphasizes the shortcut: the steps needed to solve a sample question are blurred out, while the answer appears in bold text.
That button, along with many other AI tools, seems to misunderstand what it means to be a student. Giving students the answers instead of teaching them how to get there short-circuits the educational process of struggling, learning and understanding.
We see a lot of potential upside for generative AI to improve education. One of us co-created a chatbot platform called PingPong that helps educators create customized learning experiences for students, and the other wrote a book on teaching effectively with ChatGPT. With our colleague Sharad Goel at Harvard, we have taught hundreds of students and professionals how AI works, how to use it and what it means for society. We constantly use generative AI in our own teaching, and we believe AI can be a tool for good in learning if used correctly.
But we also know that real learning happens in environments removed from the ease and efficiency that AI companies tend to promise. It happens when there is friction: the discomfort we experience when we don't understand something, and the frustration of doing something wrong 10 times before finally feeling the satisfaction of getting it right on the 11th try.
Generative AI has the potential to give students these experiences, especially in contexts where students cannot get the attention they deserve. Self-directed students can use AI as a thought partner that is available 24/7 and has infinite patience. For educators, AI can bring us closer to the elusive goal of personalized education. But the careless use of AI can grind learning to a halt, and a key challenge of the next decade will be ensuring that the good uses of AI prevail.
The tech giants behind popular AI tools, including Anthropic's Claude, OpenAI's ChatGPT and Google's Gemini, have introduced special study-mode features to make their chatbots act more like personalized tutors. But these are weak attempts to guide students away from shortcuts and toward real learning.
So what would the future look like if AI makers took the challenge of helping people learn seriously? We see two important steps tech companies should take.
First, AI companies must confront the reality that their tools can hinder learning. These companies regularly release reports, benchmarks and toolkits on the potential risks of their models to cybersecurity, biosecurity, public health and medicine, and election integrity. These efforts pair the potential of new AI capabilities with awareness of the threats they pose and strategies to mitigate them. AI creators must also recognize the serious risks their tools pose to how students learn.
Second, AI companies must look beyond the role of chatbot as tutor or teaching buddy and shift their focus to features that help teachers rigorously assess learning. As teachers panic about their students' use of AI, they are returning to offline ways of evaluating students' knowledge. They are even bringing back old-fashioned blue books to make students write exams and show their work longhand, rather than relying on digital tests that can be manipulated with access to AI tools.
Technology companies should collaborate with educators to design ways that AI can be used more effectively in learning, rather than forcing teachers to adopt all-or-nothing approaches to AI. We've had some initial success creating oral assessments facilitated by AI chatbots, where the instructor can grade the transcript after the fact. But we are limited by the capabilities of existing technology. AI companies should focus on programs that assist instructors in teaching better, rather than on agents that work directly with students.
A better future will require forward-thinking students and teachers, along with good-faith actors in the AI space committed to enabling, rather than undermining, transformative learning. We must collectively step up to meet the challenge.
Teddy Svoronos and Dan Levy are senior lecturers in public policy at the Harvard Kennedy School, where they teach courses in quantitative methods, decision-making and generative AI. They also co-founded Teachly, a free online platform that helps educators teach more effectively and inclusively.