2 Sources
[1]
Will Smith eating spaghetti was peak AI chaos in 2023 -- now it shows how fast the tech has evolved
The internet's strangest benchmark reveals just how quickly synthetic video grew up.

Few could have predicted that a warped, glitchy video of Will Smith attempting to eat spaghetti would become one of the most important before-and-after markers in modern AI history. The original 2023 clip, generated with ModelScope, was memorably, operatically bad. Smith's face warped between mismatched expressions, his hands morphed into rubbery appendages, and the noodles floated as if governed by their own strange gravitational code. 'Will Smith eating spaghetti' became a kind of shorthand for the unhinged early stage of AI video generation.

Three years later, the same idea showcases how quickly things have changed. A Reddit compilation titled '3 years of AI progress' charts the transformation through the meme. The chaos of early generative AI video is now the easiest way to show just how quickly the entire field has matured, to the point where most people can no longer separate AI videos from reality.

3 years of AI progress from r/OpenAI

The 2023 clip feels like a relic now, the sort of thing people show in documentaries about the dawn of a technology to illustrate its awkward adolescence. The AI couldn't keep Smith's identity stable from frame to frame, and the initial video revealed the real limits of early text-to-video systems. By early 2024, the meme had grown enough legs that Smith himself joined in on the joke, posting a TikTok in which he exaggerated every motion as he ate spaghetti in real life.

The most up-to-date version, made with Kling 3.0, features an entire scene of Smith eating spaghetti with a kid and even holding a conversation, all from a single prompt. The improvements arrive in rapid succession: the eyes stay aligned, the facial structure stabilizes, and the bowl stops teleporting between frames. By the time the compilation reaches its most recent models, the spaghetti actually behaves like a physical object. Even the lighting becomes coherent.

Early models were capable of producing frames that looked good in isolation, but they couldn't sustain a character, a motion pattern, or even a scene across time. Kling 3.0 maintains continuity throughout; the short piece of video feels like it belongs to the same physical reality from start to finish. It's a time-compressed demonstration of how entire research priorities shifted: first came anatomical consistency, then motion coherence, then higher resolutions, then realistic physics, then the ability for models to follow the emotional or narrative intent of a prompt.

Personality is what makes the spaghetti meme endure. And personality, of a sort, is what the newest models have begun to capture. In the early clips, nothing on screen behaves with intention. By the end, the AI-generated Smith really seems to be performing an action, as if guided by an internal logic rather than random frame-to-frame improvisation.

That shift signals something important for the broader field of AI video. Once a model can maintain a character through movement, it opens the door to rendering human action in a way that fits inside our expectations. The internet has spent years archiving its own absurdity, but this meme has matured into a kind of yardstick. If a model can do this convincingly, it's operating at a level that the earliest systems couldn't have imagined.
[2]
How AI Will Smith eats spaghetti in 2026
If you want a glimpse at how far AI video generation has come since 2023, look no further than the "Will Smith eating spaghetti" test, which has basically become the Hello World of generative AI. A post from a Reddit user on the r/OpenAI subreddit shows the evolution of the test, from its humble beginnings as a monstrous, pixelated mess to something far more cinematic, even if you can still tell it's AI.

This latest version was made using the Kling 3.0 video generator, developed by Chinese tech company Kuaishou Technology. In it, Will Smith is seen at a dinner table not just eating spaghetti, but actually talking with a younger man seated across from him. They discuss the capabilities of Kling AI to create videos like the one you're watching, making it pretty clear that this is an ad. Still, it offers a striking look at just how much generative video has matured in a remarkably short period of time. Three years isn't that long, though in AI terms, it kind of is.

If you recall, the very first version of AI Will Smith eating spaghetti was made with ModelScope and could barely keep the actor's face consistent from one frame to the next. By the following year, the video, and countless variations of it, had taken off as a meme, to the point that Smith himself poked fun at it, before later being caught using generative AI for a TikTok video of his own. Here's an example of the test in Veo 3.1 from last year.

Among today's major players in video generation, like Grok and OpenAI, passing the spaghetti test has become much harder. These companies have put extremely strict guardrails in place around third-party likenesses and copyrighted material, especially as Hollywood continues to crack down on AI models trained on its IP. Mashable attempted to recreate the test using OpenAI's Sora and Google Gemini's Veo 3.1, but both attempts were denied on copyright grounds.

For now, it seems that as more AI generators, particularly U.S.-based ones, pull back on the use of third-party likenesses, the spaghetti test may finally be nearing the end of the line.
A bizarre 2023 video of Will Smith eating spaghetti has become the internet's unofficial benchmark for AI video generation progress. What started as a glitchy, warped mess with ModelScope has evolved into cinematic, conversation-filled scenes with Kling 3.0. The transformation reveals how quickly AI video technology matured, though copyright restrictions may soon end this viral test.
The Will Smith eating spaghetti test has emerged as an unlikely yardstick for measuring the swift evolution of AI video generation technology. What began in 2023 as a memorably chaotic clip generated with ModelScope has transformed into a time-compressed demonstration of how rapidly synthetic video capabilities have matured [1]. The original video showcased AI at its most unhinged stage, with Smith's face warping between mismatched expressions, hands morphing into rubbery appendages, and noodles floating under their own strange gravitational code [1].
Source: Mashable
A recent Reddit compilation titled '3 years of AI progress' charts this dramatic transformation, revealing how viral AI-generated content has shifted from operatically bad to nearly indistinguishable from reality [1]. The meme gained enough traction that Smith himself joined the joke in early 2024, posting a TikTok where he exaggerated every motion while eating spaghetti in real life [1].
Source: TechRadar
The most recent version, using Kling 3.0, developed by Kuaishou Technology, demonstrates the dramatic leap in capabilities [2]. The AI-generated video now features an entire scene of Smith eating spaghetti with a younger man, complete with conversation, all from a single prompt [1]. Early models could produce frames that looked acceptable in isolation but couldn't sustain stable character identity, motion patterns, or even consistent scenes across time [1].

The improvements are striking when viewed sequentially. Eyes stay aligned, facial structure stabilizes, and the bowl stops teleporting between frames. By the time the compilation reaches recent models, the spaghetti behaves like an actual physical object with realistic physics, and even the lighting becomes coherent [1]. This progression illustrates how research priorities shifted from anatomical consistency to motion coherence, then higher resolutions, realistic physics, and finally the ability to follow emotional or narrative intent [1].
Despite showcasing AI evolution impressively, the Will Smith eating spaghetti test may be approaching its end. Major players in AI video generation, including OpenAI and Google, have implemented strict guardrails around celebrity likenesses and copyrighted material, particularly as Hollywood cracks down on AI models trained on its intellectual property [2]. Attempts to recreate the test using OpenAI's Sora and Google Gemini's Veo 3.1 were denied on copyright grounds [2].

As U.S.-based AI generators pull back on using third-party likenesses, this unofficial benchmark, which has served as the "Hello World" of generative AI, may finally reach the end of the line [2]. The shift signals broader implications for AI video generation: once a model can maintain a character through movement with intention rather than random frame-to-frame improvisation, it opens the door to rendering human action that fits inside viewer expectations [1]. Three years may not seem long in conventional terms, but in AI terms it represents a generational leap that has transformed the field from awkward adolescence to near-photorealistic capability.