4 Sources
[1]
This puzzle game shows kids how they're smarter than AI
While the current generation of artificial intelligence chatbots still flub basic facts, the systems answer with such confidence that they're often more persuasive than humans. Adults, even those such as lawyers with deep domain knowledge, still regularly fall for this. But spotting errors in text is especially difficult for children, since they often don't have the contextual knowledge to sniff out falsehoods.

University of Washington researchers developed the game AI Puzzlers to show kids an area where AI systems still typically and blatantly fail: solving certain reasoning puzzles. In the game, users get a chance to solve "ARC" puzzles (short for Abstraction and Reasoning Corpus) by completing patterns of colored blocks. They can then ask various AI chatbots to solve the puzzles and have the systems explain their solutions -- which they nearly always fail to do accurately.

The team tested the game with two groups of kids. They found the kids learned to think critically about AI responses and discovered ways to nudge the systems toward better answers. Researchers presented their findings on June 25 at the Interaction Design and Children 2025 conference in Reykjavik, Iceland. The paper appears in the Proceedings of the 24th Interaction Design and Children conference.

"Kids naturally loved ARC puzzles and they're not specific to any language or culture," said lead author Aayushi Dangol, a UW doctoral student in human centered design and engineering. "Because the puzzles rely solely on visual pattern recognition, even kids that can't read yet can play and learn. They get a lot of satisfaction in being able to solve the puzzles, and then in seeing AI -- which they might consider super smart -- fail at the puzzles that they thought were easy."

ARC puzzles were developed in 2019 to be difficult for computers but easy for humans because they demand abstraction: being able to look at a few examples of a pattern, then apply it to a new example. Current cutting-edge AI models have improved at ARC puzzles, but they've not caught up with humans.

Researchers built AI Puzzlers with 12 ARC puzzles that kids can solve. They can then compare their solutions to those from various AI chatbots; users can pick the model from a drop-down menu. An "Ask AI to Explain" button generates a text explanation of the system's solution attempt. Even if the system gets the puzzle right, its explanation of how is frequently inaccurate. An "Assist Mode" lets kids try to guide the AI system to a correct solution.

"Initially, kids were giving really broad hints," Dangol said. "Like, 'Oh, this pattern is like a doughnut.' An AI model might not understand that a kid means that there's a hole in the middle, so then the kid needs to iterate. Maybe they say, 'A white space surrounded by blue squares.'"

The researchers tested the system at the UW College of Engineering's Discovery Days last year with over 100 kids from grades 3 to 8. They also led two sessions with KidsTeam UW, a project that works with a group of kids to collaboratively design technologies. In these sessions, 21 children ages 6-11 played AI Puzzlers and worked with the researchers.

"The kids in KidsTeam are used to giving advice on how to make a piece of technology better," said co-senior author Jason Yip, a UW associate professor in the Information School and KidsTeam director. "We hadn't really thought about adding the Assist Mode feature, but during these co-design sessions, we were talking with the kids about how we might help AI solve the puzzles and the idea came from that."

Through the testing, the team found that kids were able to spot errors both in the puzzle solutions and in the text explanations from the AI models. They also recognized differences in how human brains think and how AI systems generate information. "This is the internet's mind," one kid said. "It's trying to solve it based only on the internet, but the human brain is creative."

The researchers also found that as kids worked in Assist Mode, they learned to use AI as a tool that needs guidance rather than as an answer machine.

"Kids are smart and capable," said co-senior author Julie Kientz, a UW professor and chair in human centered design and engineering. "We need to give them opportunities to make up their own minds about what AI is and isn't, because they're actually really capable of recognizing it. And they can be bigger skeptics than adults."
[2]
Kids Outsmart AI in Puzzle Game That Builds Critical Thinking - Neuroscience News
Summary: A new puzzle-based game helps children recognize where artificial intelligence still struggles. The game features ARC tasks -- visual logic puzzles that are easy for humans but hard for AI -- and allows kids to compare their answers with chatbot responses. Even when AI gets the right answer, its explanation is often wrong, teaching kids to question confidently stated misinformation. Through trial, error, and guidance, children learn to refine their instructions and see AI as a tool rather than an authority.

While the current generation of artificial intelligence chatbots still flub basic facts, the systems answer with such confidence that they're often more persuasive than humans. Adults, even those such as lawyers with deep domain knowledge, still regularly fall for this. But spotting errors in text is especially difficult for children, since they often don't have the contextual knowledge to sniff out falsehoods.

University of Washington researchers developed the game AI Puzzlers to show kids an area where AI systems still typically and blatantly fail: solving certain reasoning puzzles. In the game, users get a chance to solve "ARC" puzzles (short for Abstraction and Reasoning Corpus) by completing patterns of colored blocks. They can then ask various AI chatbots to solve the puzzles and have the systems explain their solutions -- which they nearly always fail to do accurately.

The team tested the game with two groups of kids. They found the kids learned to think critically about AI responses and discovered ways to nudge the systems toward better answers. Researchers presented their findings June 25 at the Interaction Design and Children 2025 conference in Reykjavik, Iceland.

"Kids naturally loved ARC puzzles and they're not specific to any language or culture," said lead author Aayushi Dangol, a UW doctoral student in human centered design and engineering. "Because the puzzles rely solely on visual pattern recognition, even kids that can't read yet can play and learn. They get a lot of satisfaction in being able to solve the puzzles, and then in seeing AI -- which they might consider super smart -- fail at the puzzles that they thought were easy."

ARC puzzles were developed in 2019 to be difficult for computers but easy for humans because they demand abstraction: being able to look at a few examples of a pattern, then apply it to a new example. Current cutting-edge AI models have improved at ARC puzzles, but they've not caught up with humans.

Researchers built AI Puzzlers with 12 ARC puzzles that kids can solve. They can then compare their solutions to those from various AI chatbots; users can pick the model from a drop-down menu. An "Ask AI to Explain" button generates a text explanation of the system's solution attempt. Even if the system gets the puzzle right, its explanation of how is frequently inaccurate. An "Assist Mode" lets kids try to guide the AI system to a correct solution.

"Initially, kids were giving really broad hints," Dangol said. "Like, 'Oh, this pattern is like a doughnut.' An AI model might not understand that a kid means that there's a hole in the middle, so then the kid needs to iterate. Maybe they say, 'A white space surrounded by blue squares.'"

The researchers tested the system at the UW College of Engineering's Discovery Days last year with over 100 kids from grades 3 to 8. They also led two sessions with KidsTeam UW, a project that works with a group of kids to collaboratively design technologies. In these sessions, 21 children ages 6-11 played AI Puzzlers and worked with the researchers.

"The kids in KidsTeam are used to giving advice on how to make a piece of technology better," said co-senior author Jason Yip, a UW associate professor in the Information School and KidsTeam director. "We hadn't really thought about adding the Assist Mode feature, but during these co-design sessions, we were talking with the kids about how we might help AI solve the puzzles and the idea came from that."

Through the testing, the team found that kids were able to spot errors both in the puzzle solutions and in the text explanations from the AI models. They also recognized differences in how human brains think and how AI systems generate information. "This is the internet's mind," one kid said. "It's trying to solve it based only on the internet, but the human brain is creative."

The researchers also found that as kids worked in Assist Mode, they learned to use AI as a tool that needs guidance rather than as an answer machine.

"Kids are smart and capable," said co-senior author Julie Kientz, a UW professor and chair in human centered design and engineering. "We need to give them opportunities to make up their own minds about what AI is and isn't, because they're actually really capable of recognizing it. And they can be bigger skeptics than adults."

Runhua Zhao and Robert Wolfe, both doctoral students in the Information School, and Trushaa Ramanan, a master's student in human centered design and engineering, are also co-authors on this paper.

Funding: This research was funded by the National Science Foundation, the Institute of Education Sciences and the Jacobs Foundation's CERES Network.
Author: Stefan Milne
Source: University of Washington
Contact: Stefan Milne - University of Washington
Original Research: The findings were presented at the Interaction Design and Children 2025 conference.
[3]
Can kids outsmart AI? Puzzle games put them to the test - Earth.com
Many adults trust AI systems that confidently give wrong answers. Children, who lack deep domain knowledge, often find it even harder to tell when AI is wrong. A new game developed by University of Washington researchers flips the script. It helps kids recognize AI's failures and think more critically about its logic.

AI Puzzlers draws its design from the Abstraction and Reasoning Corpus (ARC), a set of visual puzzles that are easy for humans and hard for machines. These puzzles don't require language. Instead, they ask users to spot a pattern and apply it to new inputs using color grids.

The game engages kids by asking them to solve puzzles first. Then, they test AI chatbots on the same puzzles and compare answers. Even if the AI sometimes guesses the right answer, its explanation rarely matches. This mismatch becomes a key moment of discovery. Kids learn that confidence doesn't equal correctness.

"Kids naturally loved ARC puzzles and they're not specific to any language or culture," said study lead author Aayushi Dangol. "Because the puzzles rely solely on visual pattern recognition, even kids that can't read yet can play and learn."

Kids begin by thinking AI is smart. They expect it to outperform them. But when AI repeatedly fails, the surprise sparks curiosity and laughter. "That is very very wrong," one child said after watching an AI completely miss a basic pattern.

Visual comparison helps kids instantly spot what the AI missed. This strengthens their own logic and boosts confidence. They begin to understand that being human brings advantages. Unlike AI, they can use creativity, context, and reasoning grounded in real experience. One child described the AI as having "the internet's mind," saying, "It's trying to solve it based only on the internet, but the human brain is creative."

AI Puzzlers includes a special Assist Mode where kids give the AI clues. This mode turns them into guides, not just players. The children move from broad statements like "Make a donut" to specific instructions like "Place white in the center, blue all around." As they experiment, they learn how to help the AI get closer to the correct logic.

The researchers found that this step-by-step refining deepened the kids' understanding. They weren't just pointing out errors. They were learning how AI misinterprets vague language and how precise input shapes better output. In one session, a child wrote, "Make a pattern of the colors and gray alternating and a background of white, red, light blue, green, yellow." The AI still got it wrong. The frustration was real. "I am so done with you, AI," the child said. But the effort showed critical thinking at work.

The game uses three main modes: Manual, AI, and Assist. In Manual Mode, kids build their answers from scratch. AI Mode lets them test the chatbot's performance and read its reasoning. Assist Mode invites them to guide the AI, learning what helps and what doesn't. This design is grounded in Mayer and Moreno's theory of multimedia learning. By using both visuals and text, the game lightens cognitive load and keeps kids engaged. Switching between modes lets them explore ideas, spot contradictions, and build layered understanding.

The researchers used a participatory design approach called Cooperative Inquiry. In two summer sessions, 21 children aged 6 to 11 collaborated with adult facilitators. These kids weren't just test subjects. They helped shape the tool. Children gave feedback, refined features, and even inspired the Assist Mode.
In group discussions, they evaluated AI logic, challenged explanations, and brainstormed ideas to improve AI understanding. One child noted: "AI is very scientific, given its scientific explanation, but sometimes it's better not to go super, duper scientific." The project showed that children, when given space and tools, are more than passive users. They become critics, testers, and co-creators.

As they kept solving puzzles, kids saw the difference between how they think and how AI thinks. AI, they noticed, often guesses randomly or repeats errors. "AI just keeps guessing," one child said. Another called it "stupid" and said it only "gets lucky."

The kids began to frame AI as limited. "Look at the references and think like a human being," one urged. They recognized that humans can draw from experience, emotion, and logic, while AI relies on patterns it has seen. This shift is crucial. Children stopped treating AI as infallible. They started viewing it as a tool that needs supervision, not praise.

The system is open source and works in any browser. The team hopes to extend it with more puzzle types and newer AI models. They also want to explore whether these critical thinking skills transfer to other settings like schoolwork or web searches. The researchers are also thinking about voice integration and better accessibility for colorblind users.

The long-term vision is to help kids build habits of questioning, experimenting, and reflecting. These are skills that apply far beyond the game. This work shows that critical AI thinking doesn't need lectures. It can begin with puzzles, color grids, and curiosity. By allowing kids to compare, question, and fix AI logic, AI Puzzlers gives them agency.

"Kids are smart and capable," said study co-senior author Julie Kientz. "We need to give them opportunities to make up their own minds about what AI is and isn't." The success of AI Puzzlers shows what's possible. When kids are given space to think critically, they don't just understand AI. They start to outthink it.

The study is published in the Proceedings of the 24th Interaction Design and Children.
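The Assist Mode described above suggests a simple control flow: re-ask the model with progressively sharper hints and check each attempt against the expected grid. Below is a minimal Python sketch of that loop. It is an assumption-laden illustration, not the game's actual implementation: `ask_model` is a hypothetical stand-in for whatever chatbot backend AI Puzzlers queries, and grids use the integer color encoding from the earlier sketch.

```python
# A minimal sketch of an Assist Mode-style loop. `ask_model` is a
# hypothetical stand-in for the chatbot backend (the articles do not
# document the game's internals); grids are lists of integer color codes.

Grid = list[list[int]]

def ask_model(prompt: str) -> Grid:
    """Hypothetical: send the puzzle plus accumulated hints to a chatbot
    and parse its reply back into a color grid."""
    raise NotImplementedError  # backend-specific

def assist_loop(puzzle_description: str, expected: Grid, hints: list[str]) -> bool:
    """Re-query the model with progressively more precise hints, mirroring
    how kids refined 'like a doughnut' into 'a white space surrounded by
    blue squares'. Returns True once a guided attempt matches the kid's answer."""
    prompt = puzzle_description
    for hint in hints:
        prompt += f"\nHint: {hint}"         # each round adds a sharper clue
        if ask_model(prompt) == expected:   # compare against the kid's solution
            return True
    return False  # the model never converged; kids see the failure directly
```

The design point the researchers emphasize survives even in this toy form: the child, not the model, holds the ground truth, and each iteration teaches the child how precise input shapes better output.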
[4]
This Puzzle Game Shows Kids How They're Smarter Than AI | Newswise
Newswise -- While the current generation of artificial intelligence chatbots still flub basic facts, the systems answer with such confidence that they're often more persuasive than humans. Adults, even those such as lawyers with deep domain knowledge, still regularly fall for this. But spotting errors in text is especially difficult for children, since they often don't have the contextual knowledge to sniff out falsehoods.

University of Washington researchers developed the game AI Puzzlers to show kids an area where AI systems still typically and blatantly fail: solving certain reasoning puzzles. In the game, users get a chance to solve "ARC" puzzles (short for Abstraction and Reasoning Corpus) by completing patterns of colored blocks. They can then ask various AI chatbots to solve the puzzles and have the systems explain their solutions -- which they nearly always fail to do accurately.

The team tested the game with two groups of kids. They found the kids learned to think critically about AI responses and discovered ways to nudge the systems toward better answers. Researchers presented their findings June 25 at the Interaction Design and Children 2025 conference in Reykjavik, Iceland.

"Kids naturally loved ARC puzzles and they're not specific to any language or culture," said lead author Aayushi Dangol, a UW doctoral student in human centered design and engineering. "Because the puzzles rely solely on visual pattern recognition, even kids that can't read yet can play and learn. They get a lot of satisfaction in being able to solve the puzzles, and then in seeing AI -- which they might consider super smart -- fail at the puzzles that they thought were easy."

ARC puzzles were developed in 2019 to be difficult for computers but easy for humans because they demand abstraction: being able to look at a few examples of a pattern, then apply it to a new example. Current cutting-edge AI models have improved at ARC puzzles, but they've not caught up with humans.

Researchers built AI Puzzlers with 12 ARC puzzles that kids can solve. They can then compare their solutions to those from various AI chatbots; users can pick the model from a drop-down menu. An "Ask AI to Explain" button generates a text explanation of the system's solution attempt. Even if the system gets the puzzle right, its explanation of how is frequently inaccurate. An "Assist Mode" lets kids try to guide the AI system to a correct solution.

"Initially, kids were giving really broad hints," Dangol said. "Like, 'Oh, this pattern is like a doughnut.' An AI model might not understand that a kid means that there's a hole in the middle, so then the kid needs to iterate. Maybe they say, 'A white space surrounded by blue squares.'"

The researchers tested the system at the UW College of Engineering's Discovery Days last year with over 100 kids from grades 3 to 8. They also led two sessions with KidsTeam UW, a project that works with a group of kids to collaboratively design technologies. In these sessions, 21 children ages 6-11 played AI Puzzlers and worked with the researchers.

"The kids in KidsTeam are used to giving advice on how to make a piece of technology better," said co-senior author Jason Yip, a UW associate professor in the Information School and KidsTeam director. "We hadn't really thought about adding the Assist Mode feature, but during these co-design sessions, we were talking with the kids about how we might help AI solve the puzzles and the idea came from that."

Through the testing, the team found that kids were able to spot errors both in the puzzle solutions and in the text explanations from the AI models. They also recognized differences in how human brains think and how AI systems generate information. "This is the internet's mind," one kid said. "It's trying to solve it based only on the internet, but the human brain is creative."

The researchers also found that as kids worked in Assist Mode, they learned to use AI as a tool that needs guidance rather than as an answer machine.

"Kids are smart and capable," said co-senior author Julie Kientz, a UW professor and chair in human centered design and engineering. "We need to give them opportunities to make up their own minds about what AI is and isn't, because they're actually really capable of recognizing it. And they can be bigger skeptics than adults."

Runhua Zhao and Robert Wolfe, both doctoral students in the Information School, and Trushaa Ramanan, a master's student in human centered design and engineering, are also co-authors on this paper. This research was funded by the National Science Foundation, the Institute of Education Sciences and the Jacobs Foundation's CERES Network.
University of Washington researchers develop a puzzle game that demonstrates to children how they can outperform AI in certain reasoning tasks, fostering critical thinking about AI capabilities and limitations.
Researchers at the University of Washington have developed a novel puzzle game called AI Puzzlers, designed to show children how they can outperform artificial intelligence in certain reasoning tasks. The game, which utilizes ARC (Abstraction and Reasoning Corpus) puzzles, aims to foster critical thinking about AI capabilities and limitations among young players [1][2].
While current AI chatbots often provide inaccurate information, they do so with a level of confidence that can be misleading. This phenomenon poses a particular challenge for children, who may lack the contextual knowledge to identify falsehoods in AI-generated text [1]. AI Puzzlers addresses this issue by demonstrating an area where AI systems consistently fail: solving specific types of reasoning puzzles.
The game features 12 ARC puzzles that require players to complete patterns using colored blocks. After solving a puzzle, users can challenge various AI chatbots to solve the same puzzle and explain their solutions. Importantly, the AI systems almost always fail to provide accurate solutions or explanations [1][3].
Key features of the game include:
- 12 ARC puzzles that players solve by completing patterns of colored blocks
- A drop-down menu for choosing which AI chatbot to test
- An "Ask AI to Explain" button that generates a text explanation of the AI's solution attempt
- An "Assist Mode" in which kids give hints to guide the AI toward a correct solution
The research team, led by Aayushi Dangol, tested AI Puzzlers with over 100 children from grades 3 to 8 and conducted sessions with 21 children aged 6-11 through the KidsTeam UW project [1][2]. Their findings, presented at the Interaction Design and Children 2025 conference, revealed several important outcomes:
- Kids learned to spot errors in both the AI's puzzle solutions and its text explanations
- Kids discovered ways to nudge the systems toward better answers
- Working in Assist Mode, kids learned to use AI as a tool that needs guidance rather than as an answer machine
Through playing AI Puzzlers, children developed a more nuanced understanding of AI capabilities:
- They recognized that confidence doesn't equal correctness: even a correct answer can come with a wrong explanation
- They noticed that AI often guesses randomly or repeats errors, while the human brain can draw on creativity, context, and real experience
- They stopped treating AI as infallible and began viewing it as a tool that needs supervision
The game's design, which allows for direct visual comparison between human and AI solutions, helps children instantly identify AI errors. This approach strengthens their logical thinking and boosts confidence in their own abilities [3].
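That side-by-side comparison amounts to a cell-wise grid diff. As a hedged illustration (the game's actual comparison and rendering code is not published in these articles), one might locate the mismatched cells like this, reusing the integer-grid encoding from the earlier sketches:

```python
# Illustrative only: a cell-wise diff between a kid's grid and an AI's grid,
# assuming both grids have the same shape. This is not the game's code;
# it just shows the idea behind the visual comparison.

Grid = list[list[int]]

def grid_diff(human: Grid, ai: Grid) -> list[tuple[int, int]]:
    """Return (row, col) positions where the AI's grid differs from the kid's."""
    return [
        (r, c)
        for r, row in enumerate(human)
        for c, value in enumerate(row)
        if ai[r][c] != value
    ]

kid_answer = [[0, 1], [1, 0]]
ai_answer = [[0, 1], [1, 1]]
print(grid_diff(kid_answer, ai_answer))  # [(1, 1)] -> one mismatched cell
```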
The researchers employed a participatory design approach, involving children in the game's development process. This collaboration led to features like the Assist Mode, where kids can provide hints to guide the AI [1][4].
The team plans to extend the game with more puzzle types and newer AI models, and explore how the critical thinking skills developed through AI Puzzlers might transfer to other settings, such as schoolwork or web searches [3].
As co-senior author Julie Kientz notes, "Kids are smart and capable. We need to give them opportunities to make up their own minds about what AI is and isn't, because they're actually really capable of recognizing it. And they can be bigger skeptics than adults" [1][2].
Summarized by Navi