2 Sources
[1]
Runway says its new text-to-video AI generator has 'unprecedented' accuracy
Runway claims its latest text-to-video model generates even more accurate visuals than its last. In a blog post on Monday, Runway says its Gen-4.5 model can produce "cinematic and highly realistic outputs," potentially making it even more difficult to distinguish between what's real and what's AI. "Gen-4.5 achieves unprecedented physical accuracy and visual precision," Runway's announcement says. It adds that the new AI model is better at adhering to prompts, allowing it to produce detailed scenes without compromising video quality. Runway says that AI-generated objects "move with realistic weight, momentum and force," while liquids "flow with proper dynamics." The Gen-4.5 model is rolling out to all users gradually and will offer the same speed and efficiency as its predecessor, according to Runway. There are still some limitations, though, as the model may experience issues with object permanence and causal reasoning, meaning effects may happen before the cause, such as a door opening before someone uses a handle. Along with Runway, OpenAI is ramping up efforts to make its AI-generated videos look more lifelike. OpenAI highlighted upgrades to physics with the release of its Sora 2 text-to-video model in September, with Sora head Bill Peebles saying, "You can accurately do backflips on top of a paddleboard on a body of water, and all of the fluid dynamics and buoyancy are accurately modeled." Runway says its Gen-4.5 model is better at handling different visual styles, too, allowing it to produce more consistent photorealistic, stylized, and cinematic visuals. The startup claims that photorealistic visuals created with Gen-4.5 can be "indistinguishable from real-world footage with lifelike detail and accuracy."
[2]
Runway rolls out new AI video model that beats Google, OpenAI in key benchmark
Gen 4.5 allows users to generate high-definition videos based on written prompts that describe the motion and action they want. Runway said the model is good at understanding physics, human motion, camera movements and cause and effect. The model holds the No. 1 spot on the Video Arena leaderboard, which is maintained by the independent AI benchmarking and analysis company Artificial Analysis. To determine the text-to-video model rankings, people compare two different model outputs and vote for their favorite without knowing which companies are behind them. Google's Veo 3 model holds second place on the leaderboard, and OpenAI's Sora 2 Pro model is in seventh place. "We managed to out-compete trillion-dollar companies with a team of 100 people," Runway CEO Cristóbal Valenzuela told CNBC in an interview. "You can get to frontiers just by being extremely focused and diligent."
Runway launches Gen-4.5, a new text-to-video AI model that the company says achieves unprecedented physical accuracy and that now tops the Video Arena leaderboard, beating Google's Veo 3 and OpenAI's Sora 2 Pro despite being developed by a team of about 100 people.

Runway has unveiled its latest text-to-video AI model, Gen-4.5, claiming it delivers "unprecedented physical accuracy and visual precision" in AI-generated video content [1]. The company announced that the new model can produce "cinematic and highly realistic outputs," raising concerns about the increasing difficulty in distinguishing between authentic and AI-generated content.

According to Runway's announcement, Gen-4.5 demonstrates significant improvements in prompt adherence, allowing users to generate detailed scenes without compromising video quality [1]. The model excels at simulating realistic physics, with AI-generated objects moving "with realistic weight, momentum and force," while liquids "flow with proper dynamics."

The Gen-4.5 model has achieved the top position on the Video Arena leaderboard, an independent ranking system maintained by Artificial Analysis [2]. This benchmark evaluates text-to-video models through blind comparisons, where users vote for their preferred outputs without knowing which companies created them. Google's Veo 3 model currently holds second place on the leaderboard, while OpenAI's Sora 2 Pro model ranks seventh [2].
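Neither article describes exactly how Artificial Analysis converts these blind votes into rankings. Purely as an illustration of how anonymous pairwise preference votes are commonly aggregated into a leaderboard, the Python sketch below uses Elo-style ratings; the function names, parameters, and sample votes are hypothetical and are not taken from Video Arena.

from collections import defaultdict

def elo_update(r_a, r_b, a_won, k=32.0):
    # Standard Elo update for a single blind pairwise vote.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_won else 0.0
    return (r_a + k * (score_a - expected_a),
            r_b + k * ((1.0 - score_a) - (1.0 - expected_a)))

def leaderboard(votes, start=1000.0):
    # votes: iterable of (model_a, model_b, winner) tuples from anonymous comparisons.
    ratings = defaultdict(lambda: start)
    for a, b, winner in votes:
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], winner == a)
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical votes for illustration only; not real Video Arena data.
sample_votes = [
    ("gen-4.5", "veo-3", "gen-4.5"),
    ("veo-3", "sora-2-pro", "veo-3"),
    ("gen-4.5", "sora-2-pro", "gen-4.5"),
]
print(leaderboard(sample_votes))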
This achievement is particularly notable given the significant resource disparity between the companies. Runway CEO Cristóbal Valenzuela emphasized the point, stating, "We managed to out-compete trillion-dollar companies with a team of 100 people" [2]. He attributed the success to being "extremely focused and diligent," arguing that smaller, specialized teams can compete at the frontier of AI development.

The Gen-4.5 model allows users to generate high-definition videos based on written prompts describing the desired motion and action [2]. The system demonstrates proficiency in understanding physics, human motion, camera movements, and cause-and-effect relationships.
Runway claims that Gen-4.5 excels at handling various visual styles, producing consistent photorealistic, stylized, and cinematic visuals [1]. The company asserts that photorealistic visuals created with Gen-4.5 can be "indistinguishable from real-world footage with lifelike detail and accuracy."

Despite these advances, the model still faces certain limitations. Runway acknowledges that Gen-4.5 may experience issues with object permanence and causal reasoning, potentially causing effects to occur before their causes, such as a door opening before someone uses the handle [1].
The release of Gen-4.5 occurs amid intensifying competition in the AI video generation space. OpenAI has been enhancing its Sora platform, with Sora 2 featuring improved physics simulation [1]. Sora head Bill Peebles highlighted the model's ability to "accurately do backflips on top of a paddleboard on a body of water, and all of the fluid dynamics and buoyancy are accurately modeled."

The Gen-4.5 model is being rolled out gradually to all Runway users and maintains the same speed and efficiency as its predecessor [1]. This gradual deployment suggests Runway is taking a cautious approach to managing server capacity and user experience during the transition.

Summarized by Navi