Curated by THEOUTPOST
On Fri, 17 Jan, 12:01 AM UTC
4 Sources
[1]
Luma AI Announces Ray2 AI Video Model Pre-trained with '10x Compute'
Luma Labs has introduced a new generative video model called Ray2, which the company claims creates realistic videos with natural and coherent motion. Ray2 is currently available on the Dream Machine platform for paid subscribers. "We are starting with text-to-video. Image-to-video, video-to-video, and editing capabilities will soon follow," the company said in a post on X. The model can generate videos of five or ten seconds in multiple aspect ratios. Paid plans for Dream Machine start at $6.99 a month and let users create 1080p videos.
"Today, we are introducing Ray2! Scaling pretraining by 10x (in comparison to Ray1) on a novel efficient architecture, we are able to unlock the next frontier in video generation -- fast natural coherent motion and physics," said Amit Jain, co-founder of Luma Labs, in a post on X. "This skyrockets the success rate of usable, production-ready generations and makes video storytelling accessible to many more people," he added. Jain also revealed that the company is considering releasing an API soon, calling it "essential to our goals."
The company also released a long list of example videos across themes such as natural motion, physics and simulation, photorealism, cinematic scenes, and people and expressions. That said, users ran into a few issues accessing the model on launch day. One user on X complained that the servers were overwhelmed and that they had been waiting 30 minutes for a result. Jain attributed this to the "launch day rush" and said he expected things to stabilise soon.
Users were quick to test the model. MattVidPro, an AI-focused tech YouTuber, ran a poll on X asking testers which model is more impressive: Ray2, Google's Veo 2, or Vidu 2 from Vidu AI. Over 80% of respondents picked Google's Veo 2.
X user Ben Nash said Veo is significantly ahead of its competitors and believes it surpasses Sora in quality, even though Sora offers a broader array of features. However, unlike OpenAI's Sora and Ray2, Google's Veo 2 is not yet available for public use. Google's internal testing indicates that Veo outperforms competitors (such as China's Kling, Meta's Movie Gen, and OpenAI's Sora) in both quality and prompt adherence.
[2]
I just put Luma's new Ray2 AI video generator to the test -- and it's better than Sora
Luma Labs has given its popular Dream Machine AI creativity platform a major upgrade, bringing the new Ray2 video model into the system. This is a huge step up from the previous Ray 1.6, offering better realism and more natural motion. Ray2 was announced last year as part of a new partnership with Amazon AWS, and it has finally been integrated into Dream Machine, where it is now the default option when you create a video.
The AI startup describes Ray2 as "a new frontier in video generative models." To achieve its level of visual and motion realism, Luma increased compute 10x compared to previous models, which it says has "unlocked new freedoms of creative expression and visual storytelling."
I've been testing Ray2 since launch. The video generations are very impressive, but demand has made it slow, with some clips refusing to generate or taking too long to be useful. These are the same teething problems any platform faces when launching a new model. Even so, I'd say Ray2 is certainly in the running to be one of the best AI video generators available.
Being built into Dream Machine already gives Ray2 a leg up on other video models, because the platform itself is so impressive to work with. It makes creating content with AI more collaborative and less about throwing a prompt into the wind and hoping for the best. Accessing Ray2 is as simple as starting a new board in Dream Machine, selecting Video from the prompt bar and typing your prompt. The AI handles the rest, showing you two videos and offering the usual adaptable interface where you can change elements of the prompt.
Because of the issues mentioned above, only about half of the prompts I tried actually generated, and the slowdowns meant I couldn't make use of the re-prompting and collaboration features that make Dream Machine so good. Despite that, I was still impressed. In one example I asked Ray2 to create a video of a knife slicing into an onion.
Knife work is something no video model -- with the exception of Google's Veo 2 -- has been able to do consistently well. While it wasn't perfect, the motion was spot on and the knife did slice.
Ray2 is also particularly good at animal motion. I asked it to generate several videos of dogs -- including one stretching and another catching butterflies -- and it did both very well. Some elements of the butterfly video were not perfect, but with Dream Machine that can be corrected relatively easily by replying to the original video and specifying what to change.
When videos did generate, they generated extremely fast. It doesn't look like Luma had to sacrifice the generation speed Dream Machine was famous for in order to improve quality; we seem to get both in a single model.
Overall, Ray2 does appear to be a significant step forward in generative video. The leap feels very similar to the jump we saw when Luma first launched Dream Machine, and its motion is slightly better than that of OpenAI's flagship Sora model. It is not perfect: there are still artifact issues, sometimes the motion doesn't make sense, and it's currently text-to-video only (although image-to-video is coming soon). However, these are problems that plague every other model I've tried, including Sora, Runway, Kling and Pika.
The biggest takeaway is just how fast AI video is evolving. Being able to generate ten seconds of high-resolution video nearly indistinguishable from camera footage would have been unthinkable two years ago. Today, it's commonplace and available from multiple companies.
[3]
Luma AI releases Ray2 generative video model with 'fast, natural' motion and better physics
Luma AI made waves with the launch of its Dream Machine generative AI video creation platform last summer. While that was only seven short months ago, the AI video space has advanced rapidly since then with the release of many new AI video creation models from rival startups in the U.S. and China, including Runway, Kling, Pika 2.0, OpenAI's Sora, Google's Veo 2, MiniMax's Hailuo, and open-source alternatives such as Hotshot and Genmo's Mochi 1, to name but a few. Even Luma itself updated its Dream Machine platform recently to include new still image generation and brainstorming boards, as well as debuting an iOS app.
But the updates continue: today, the San Francisco-based startup released Ray2, its newest AI video generation model, available now through its Dream Machine website and mobile apps for paying subscribers (to start). The model offers "fast, natural coherent motion and physics," according to co-founder and CEO Amit Jain on his X account, and was trained with 10 times more compute than the original Luma AI video model, Ray1. "This skyrockets the success rate of usable production-ready generations and makes video storytelling accessible to a lot more people," he added.
Luma's Dream Machine platform on the web offers a free tier with 720p generations capped at a variable number each month. Paid plans start at $6.99 per month for the "Lite" plan, which offers 1080p visuals, and increase through Plus ($20.99/month) and Unlimited ($66.49/month) to Enterprise ($1,672.92/year).
A leap forward in video gen
Right now, Ray2 is limited to text-to-video, allowing users to type in descriptions that are transformed into 5- or 10-second video clips. The model can generate new videos in a matter of seconds, though at the moment it can take minutes at a time due to a crush of demand from new users.
Examples shared by Luma and the early testers in its Creators program showcase the model's versatility, including a man running through an Antarctic snowstorm surrounded by explosions, and a ballerina performing on an ice floe in the Arctic. Impressively, all the motion in the example videos appears lifelike and fluid -- and often, with subjects moving much faster and more naturally than in videos from rival AI generators, which frequently appear to render in slow motion. It can even create realistic versions of surreal ideas, such as a giraffe surfing, as X user @JeffSynthesized demonstrated. "Ray 2 is the real deal," he wrote on X.
Other AI video creators who have tried the new model seem to largely agree, with Jerrod Lew posting on X: "Improved cinematography, lighting and realism has arrived and it's awesome." "...it's so good!" said AI video artist Heather Cooper.
My own tests were a mixed bag, with some more complex prompts producing unnatural and glitchy results. But when it did produce clips that resembled what I had in mind -- such as fencers crossing swords aboard a space station orbiting Jupiter -- it was undeniably impressive. Jain said Luma will also add image-to-video, video-to-video, and editing capabilities to Ray2 in the future, further expanding the tool's creative possibilities.
Promotional Campaigns: The Ray2 Awards
To celebrate the launch of Ray2, Luma Labs is hosting The Ray2 Awards, offering creators the chance to win up to $7,000 in prizes. Winners of both awards will be announced on January 27, 2025. Submissions can be uploaded via forms provided by Luma Labs, and creators are encouraged to use the hashtags #Ray2 and #DreamMachine when sharing their work online. Additionally, Luma Labs has launched an affiliate program, allowing participants to earn commissions by promoting its tools.
[4]
This AI video generator can make a banana typing look realistic - and might challenge Sora
Luma Labs has premiered a powerful new AI model for generating videos on its Dream Machine platform called Ray 2. The new model can produce an array of realistic video clips of up to 10 seconds, from recreating a bee pollinating flowers to more surreal ideas like the typing anthropomorphic banana seen above. The beauty of Ray 2 isn't just its ability to render these wild scenarios, but to do so with motion and physics that look shockingly natural. Unlike earlier video generation tools, which often struggled to produce anything faster than a leisurely stroll, you can see people really book it in a run.
Ray 2 is capable of this level of production in part because Luma trained it with ten times more computational power than its predecessor, Ray 1. That means more realistic characters, faster rendering, smoother motion, and far fewer glitches. Ray 2 is available through Luma's Dream Machine platform, which offers both free and paid subscription tiers. The free plan lets users dabble at 720p resolution, while paid plans unlock higher-quality 1080p visuals and, if you're willing to drop $66.49 a month, unlimited usage.
Luma plans to expand Ray 2's capabilities with image-to-video, video-to-video, and editing tools. That could mean letting you turn a vacation photo into a short video or remix a home movie into something cinematic. The company is also hosting the Ray 2 Awards, offering creators a chance to win up to $7,000 in prizes: a $5,000 prize for the most-viewed Ray 2 video on social media and a $3,000 raffle for anyone who shares their content and engages with Luma's promotional posts. If nothing else, it's a good excuse to finally bring your idea for "sloths competing in a high-stakes basketball game" to life.
Ray 2's limits mean it won't quite blow the competition away, however. The time limit makes it less capable in some ways than OpenAI's Sora, which focuses on creating longer-form, cohesive video narratives.
Then there's Runway's Gen-2, which gives users tools to tweak lighting, camera angles, and more, and Pika, which regularly drops new features like the picture-to-video ability that Luma is still developing. Still, Ray 2 has its charm and feels like a streamlined alternative for those who prioritize speed and ease of use. The real significance of Ray 2 lies in how it lowers the barrier to entry for anyone looking to make a video with AI, even if that's something as weird as a banana typing a note.
Luma AI has released Ray2, a new AI video generation model that promises improved realism, natural motion, and better physics. This update to their Dream Machine platform challenges competitors like OpenAI's Sora and Google's Veo 2.
Luma AI, a San Francisco-based startup, has unveiled Ray2, its latest AI video generation model, marking a significant advancement in the rapidly evolving field of AI-generated content. Ray2 is now available on the Dream Machine platform for paid subscribers, offering improved realism, natural motion, and enhanced physics in generated videos [1][2].
Ray2 boasts several notable improvements over its predecessor:
Increased Compute Power: The model was trained using 10 times more computational power than the original Ray1, resulting in more realistic characters, faster rendering, and smoother motion [3].
Natural and Coherent Motion: Ray2 can generate videos with fast, natural, and coherent motion, addressing a common limitation of previous AI video models [1][2].
Enhanced Physics Simulation: The model demonstrates improved physics, allowing for more realistic interactions between objects and characters in generated videos [1][3].
Video Duration and Quality: Ray2 can produce videos lasting 5 or 10 seconds in multiple aspect ratios, with paid plans offering 1080p resolution [1][4].
The release of Ray2 comes at a time of intense competition in the AI video generation space. While some users have reported that Google's Veo 2 outperforms Ray2 in terms of quality, Luma's offering has several advantages:
Public Availability: Unlike Google's Veo 2, Ray2 is immediately accessible to paying subscribers [1][2].
Speed: When not overwhelmed by demand, Ray2 can generate videos extremely quickly, maintaining Dream Machine's reputation for speed [2].
User Interface: Being integrated into the Dream Machine platform gives Ray2 an edge in terms of user experience and collaborative features [2].
Despite its advancements, Ray2 is not without limitations:
Current Functionality: Ray2 is currently limited to text-to-video generation, with image-to-video, video-to-video, and editing capabilities planned for future updates [1][3].
Generation Issues: Some users have reported problems with video generation, including long wait times and failed attempts, likely due to high demand following the launch [1][2].
Artifacts and Inconsistencies: Like other AI video models, Ray2 still occasionally produces artifacts or illogical motions in generated videos [2].
The release of Ray2 represents a significant step in making AI video generation more accessible to a broader audience. Luma AI offers various subscription tiers, starting at $6.99 per month for the "Lite" plan, with higher tiers providing unlimited usage and enterprise options [3][4].
To promote the new model, Luma Labs is hosting The Ray2 Awards, offering up to $7,000 in prizes for creators who use the platform to generate innovative content [3][4].
As the AI video generation landscape continues to evolve rapidly, Ray2's release underscores the fierce competition and rapid advancements in this field. While it may not definitively surpass all competitors, Ray2 represents a significant leap forward in AI-generated video technology, potentially challenging established players like OpenAI's Sora and lowering the barrier to entry for AI-assisted video creation [1][2][3][4].
Reference
[1] Luma AI Announces Ray2 AI Video Model Pre-trained with '10x Compute'
[2] I just put Luma's new Ray2 AI video generator to the test -- and it's better than Sora
[3] Luma AI releases Ray2 generative video model with 'fast, natural' motion and better physics
[4] This AI video generator can make a banana typing look realistic - and might challenge Sora