5 Sources
[1]
Luma AI's New Ray3 Video Generator Can 'Think' Before Creating
Reasoning models are not uncommon in the world of AI. Many companies have them, including OpenAI's o3 and Google's Gemini 2.5. But AI image and video company Luma AI just dropped its first reasoning video model, named Ray3, and it's available now.

A reasoning model is a kind of AI model that uses more computing time to process requests and can go back and check its answers. Typically, reasoning models give you better responses, whether that's more detail or a lower rate of errors. For Ray3, that reasoning power means you can create AI video clips with more complex action sequences. Typically, AI video clips are anywhere from 5 to 10 seconds long. (That's the sweet spot, at least -- longer clips tend to get wonky fast.) So stuffing your prompt with action sequences leaves a lot of room for error. Ray3's ability to spend more time working through prompts means it can better handle those more advanced scenes.

Luma AI CEO Amit Jain said reasoning models can do more than translate text to pixels. "It's able to evaluate and say, 'Oh, this is not good, or I need this to be better in this way,'" he said in an interview with CNET. As with reasoning models for chatbots, you can see the steps the model takes as it works. A new visual annotation tool shows you what the model is doing -- like marking characters to adjust and other areas to keep as is. You can also use this functionality to mark up frames and highlight changes you want made in successive prompts.

Other upgrades help produce better clips, including the ability to generate in 16-bit HDR, a high-dynamic-range format that gives clips finer detail and clarity in highlights and shadows. You can also take advantage of a new draft mode, which lets you test ideas quickly by generating shots in a lower-resolution format. You can generate clips in 20 seconds in draft mode, Jain explained, and then upscale them to high-fidelity resolution when you're ready, which takes about 2 to 5 minutes.

Video creation is becoming an increasingly common use of generative AI. Many companies have released AI video models over the past year, from Midjourney to Google's Veo 3. All of these models aim to enhance creation; recent improvements deliver higher-quality output, add audio (in Veo 3's case) and generally level up to entice professional creators as well as AI enthusiasts. Professionals have voiced a number of concerns about AI-generated media, though, particularly around the training and deployment of AI models. A number of class action lawsuits by artists have been filed against AI companies. Luma AI's privacy policy says it can use information you provide to improve its services.
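The draft-then-upscale workflow Jain describes (fast, low-resolution drafts in about 20 seconds, then a 2-to-5-minute high-fidelity render) maps naturally onto a two-stage submit-and-poll client loop. The sketch below is a minimal illustration in Python; the function names and job format are invented for this example, since the article doesn't document Luma's actual API.

```python
import itertools
import time

# Hypothetical in-memory stand-ins for a generation service. The names
# submit_generation/poll_status are invented for illustration; the article
# does not document Luma's actual Dream Machine API.
_jobs: dict[str, dict] = {}
_ids = itertools.count(1)

def submit_generation(prompt: str, mode: str) -> str:
    """Pretend to enqueue a generation job and return its id."""
    job_id = f"job-{next(_ids)}"
    _jobs[job_id] = {"state": "completed", "prompt": prompt, "mode": mode}
    return job_id

def poll_status(job_id: str) -> dict:
    """Pretend to ask the service how a job is doing."""
    return _jobs[job_id]

def wait_for(job_id: str, poll_every: float = 0.1) -> dict:
    """Block until a generation job finishes, then return its result."""
    while True:
        status = poll_status(job_id)
        if status["state"] in ("completed", "failed"):
            return status
        time.sleep(poll_every)

prompt = ("A figure turns toward a flickering light as it shifts "
          "from red to blue, then an explosion fills the frame.")

# Stage 1: fast, low-resolution draft (about 20 seconds per clip, per Jain).
draft = wait_for(submit_generation(prompt, mode="draft"))

# Stage 2: once a draft looks right, re-render at high fidelity
# (roughly 2 to 5 minutes, per the article).
if draft["state"] == "completed":
    final = wait_for(submit_generation(prompt, mode="hifi"))
    print(final["mode"], final["state"])
```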
[2]
Luma AI created an AI video model that 'reasons' - what it does differently
AI developers are pushing video models onto creative industries. Just a few years ago, AI-generated video clips were a laughingstock on the internet -- anyone remember the nightmarish video of AI-generated Will Smith wolfing down spaghetti? The technology has come a long way since then: Today, tech startups are competing to deliver generative AI tools that, at least in their vision of the future, aim to rival the quality of Hollywood production studios -- at a tiny fraction of the cost.

In the latest development in that competition, AI startup Luma AI announced its new video-generating model, Ray3, on Thursday. Its other product, Luma Dream Machine, lets users create videos from just their photos. The model is available now through Dream Machine. It's also accessible to paying customers of Adobe's Firefly and Creative Cloud Pro, who can generate unlimited videos through the model until Oct. 1.

You've heard of so-called reasoning models like OpenAI's o3, which are thought to consider a query, especially a complex one, for more time than standard generative AI models in order to return a more helpful and thorough answer. But thus far, those models haven't had video-generating capabilities. "Reasoning" is a vaguely defined and ontologically debatable term that's thrown around a lot these days in how AI systems are marketed, a bit like "understanding," "creativity," and "agency." In simple terms, it refers to a model's ability to break problems down into multiple steps, reflect on the quality of its outputs, and iteratively improve upon them over time.

Rather than just generating video from a text prompt, Ray3 breaks the production process down into multiple steps, just as a creative team would. It has multimodal reasoning capabilities, meaning it can generate text along with visual assets to help users sketch out the concepts for the final video. Filmmakers could, for example, prompt the system to annotate images or suggest camera angles for sequences of shots. The model is also the first of its kind, according to Luma AI, to deliver video outputs in 4K high dynamic range, which means it offers a much broader visual spectrum of light and shadow.

"The result is videos that feel more coherent, with characters that look consistent, scenes that unfold naturally over time, and physics that behave as they should," Luma AI wrote in a press release. Luma AI has not publicly disclosed any limits on the length of videos that Ray3 can generate, and the company did not immediately respond to ZDNET's request for comment on this subject.

Luma AI is positioning Ray3 as an automated creative partner for filmmakers, video game designers, and advertisers. A new "draft mode," for example, enables Ray3 to quickly generate a variety of test clips, each with subtle variations, providing creative teams with a range of options and saving them time on the ideation process. "This lets creators enter a state of flow, experimenting freely without worrying about time or compute costs," Luma AI wrote.

Other AI developers have been selling their own tools on the premise that they can serve as automated creative partners, onto which humans can offload time-intensive and routine tasks, saving them money in the process.
Earlier this week, for example, Amazon unveiled an AI agent that can help brands with virtually every step of the process of creating a short video ad.
[3]
Luma AI launches Ray3, a next-gen cinematic video generation model with built-in reasoning - SiliconANGLE
Artificial intelligence startup Luma AI Inc. today announced the launch of Ray3, a powerful text-to-video AI model with built-in reasoning, designed for high-quality cinematic visual production for professionals. Luma also revealed a partnership with Adobe Inc. to integrate the new model into Adobe's AI-enabled Firefly app, the company's all-in-one software for creative work.

"Ray3 is our first step toward building intelligence for creative work," said Amit Jain, co-founder and chief executive of Luma AI. "Creative work is one of the most intellectually challenging things humans do, yet until now, much of the AI available to creatives has lagged far behind what's possible in coding and analysis with language models."

Ray3's flagship capability is chain-of-thought reasoning, which allows it to "think" through scene descriptions and follow instructions from creative professionals. Jain said that until now, most generative video models on the market have been more like slot machines: exhibiting a lot of power but little intelligence. With reasoning, Ray3 can evaluate its own outputs and refine results to better preserve the artistic vision of the user. It can plan out complex scenes and judge whether its output makes sense before presenting it.

The model works in a manner similar to animators and filmmakers, sketching out a storyboard before generating a final product. During this drafting process, users can collaborate with the model to provide more precise instructions, such as annotating portions of the video. The model can then follow along with complex, multi-step ideation. It also understands visual annotations, such as lines drawn on video stills during drafting, enabling it to follow user instructions more precisely.

Ray3 represents a significant upgrade over Ray2, the company's previous-generation model, and is twice its size. It can generate true high dynamic range video using the professional ACES2065-1 EXR standard across 10-, 12- and 16-bit formats. In practical terms, this gives filmmakers and advertisers access to the same range of color exposure and lighting controls found in footage shot on high-end cameras. The model can also take standard dynamic range videos from virtually any source and convert them into HDR, providing richer color and more flexibility in editing. For example, Ray3's HDR transformation can brighten up an excessively dark scene without "washing out" its colors.

Users can generate video clips up to 10 seconds long from both text and images. By annotating images with text, users gain even greater control over initial outputs. Thanks to the powerful composition and visual understanding engine built into the model, stitching together multiple scenes is easier because it maintains consistency between generations better than before.

In addition to its partnership with Adobe, Luma AI said Ray3 is being adopted by Dentsu Digital Inc., one of Japan's largest integrated digital marketing firms. As a launch partner, Dentsu intends to use Ray3 in its production pipelines to give domestic brands greater control and capability in personalization and storytelling. Creative leaders including digital marketing firm Monks and advertising company StrawberryFrog LLC are also adopting Ray3 to scale their capabilities. In addition, Saudi Arabian AI company Humain said it plans to integrate Ray3 into its enterprise service for creative professionals.
"Ray3 isn't just an upgrade, it's a quantum leap," said Steve Plimsoll, chief strategy officer of Humain. "By giving AI the power to reason across words, images, and motion, we're not only supercharging the speed and fidelity of creative output, but we're also weaving in smarter guardrails. That means sharper ideas delivered faster, and safer content that respects ethics, compliance, and cultural context."
[4]
Luma AI and Adobe Partner to Distribute New Generative Video Model
The race to bring cinematic-level video to studios and filmmakers took another turn Thursday as AI video company Luma AI announced it was partnering with Adobe to release its new Ray3 model.

Starting today, the AI video company is making Ray3 available to customers of Adobe's generative-AI app Adobe Firefly. Paid Firefly customers will get unlimited AI-assisted video over the next two weeks, with a cost structure kicking in after that. Other clients, such as Hollywood studios and streamers, can order the tool separately for their filmmakers. With the deal, Luma hopes its tech tool will become a staple for the industry.

Backed by Amazon, a16z and others, Luma AI's tech aims to make AI-generated cinematic video more realistic. The company is in an arms race with Runway AI, Google Veo and others. Hollywood studios and filmmakers, the companies hope, will use the tools to generate video on-screen without requiring shoots, cutting down on costs. The quality of the AI-generated videos will be a key determining factor in the uptake of the new technology. To what extent AI video can, or should, replace physical production remains an open question.

Luma has been releasing a steady stream of models since it came on the scene in early 2024 with Dream Machine. The tool was a quantum leap forward in what short-form video could do with basic text prompts. Ray3 continues the trend of short 10-second videos without dialogue but with a higher level of realism.

In a call with THR, Luma AI CEO Amit Jain called Ray3 "the most intelligent video model on the market." He touted its ability to "reason," an at-times squishy AI term that essentially means the model can interrogate itself to improve on an existing task instead of requiring users to keep refining their prompts. "If coders get intelligent models, why shouldn't creators get intelligent models?" Jain said.

In a demonstration, he showed a complex prompt asking the model to depict characters turning toward a light as it changes colors, followed by an explosion. The six-part sequence, he said, could not be handled by most models. Ray3 also provides the ability to doodle on an image, drawing out the trajectory of a character on screen, with the tool then generating a video of the movement.

"With Ray3 now available in the Firefly app, Adobe customers are among the first to gain access to a powerful new video model that amplifies imagination and transforms workflows," Hannah Elsakr, Adobe's VP of new GenAI business ventures, said in a statement. "We can't wait to see how they use it to bring their ideas to life." Veo 3 remains a leader thanks to Google's reach, tech and ability to train on millions of YouTube videos.
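The doodle-on-an-image feature is easiest to picture as a drawn stroke being turned into per-frame positions a generator can condition on. The sketch below resamples a polyline annotation into one (x, y) target per output frame using arc-length interpolation; this encoding is hypothetical, since none of the articles describe how Ray3 represents annotations internally.

```python
import numpy as np

# A hand-drawn trajectory reduced to its essentials: a polyline of
# normalized (x, y) points over the image. This encoding is hypothetical;
# the articles do not say how Ray3 represents annotations internally.
stroke = np.array([[0.10, 0.80], [0.35, 0.55], [0.60, 0.50], [0.90, 0.20]])

def resample_trajectory(points: np.ndarray, n_frames: int) -> np.ndarray:
    """Resample a polyline to one (x, y) position per output frame."""
    # Arc-length parameterization keeps motion speed uniform along the stroke.
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()
    u = np.linspace(0.0, 1.0, n_frames)
    return np.stack([np.interp(u, t, points[:, 0]),
                     np.interp(u, t, points[:, 1])], axis=1)

# A 10-second clip at 24 fps yields 240 per-frame target positions a
# generator could condition the character's movement on.
path = resample_trajectory(stroke, n_frames=240)
print(path.shape)  # (240, 2)
```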
[5]
Adobe Creative Cloud has changed forever with world-first Luma AI Ray3 integration
Adobe's initial integration of generative AI was based around the creation of its own Adobe Firefly AI models, which were touted as being more ethical and commercially safe than those of rivals since they were trained only on licensed material. But the creative software giant today sealed a recent change in strategy with a world-first integration of Luma AI's new AI video generation model, Ray3. Does it turn out that commercially safe isn't commercially viable?

Until this year, a product announcement from Adobe almost always meant new proprietary features in its vast suite of creative software. Today's news that it's adding access to Ray3 before any platform other than Luma AI's own Dream Machine suggests it now sees the integration of third-party AI models as an equally big sell.

The change began earlier in the year. Last month, Firefly became one of the first platforms to add Google's Gemini 2.5 Flash Image, aka Nano Banana, just days after media went wild proclaiming the new AI model to be 'the end of Photoshop', and just as those viral 3D figurines began taking over the internet. Firefly now includes models from OpenAI, Ideogram, Pika, Black Forest Labs and Runway, with upcoming integrations planned with Moonvalley and Topaz Labs.

Creatives have always used multiple tools, but Adobe integrations tended to come in the form of plugins developed by third parties rather than integrations promoted by Adobe itself. Adobe says the aim behind the change in approach is to make Firefly the creative AI ecosystem of the future - an all-in-one platform for creative AI so that users don't need to go elsewhere.

Where things get murky is what this means for the "commercially safe" pitch that Adobe originally made for Firefly. The company stresses that Content Credentials are added to all AI-generated assets to allow users to keep track of how they were generated, and it notes that creatives may want to use alternative AI models for ideation, not necessarily for generating finished assets for commercial use. However, it will be users' responsibility to remain aware of how they generated each asset, whether it was with Adobe's own Firefly or AI models from "trusted partners".

"Our goal is clear: to make Firefly app the first place you turn to when new breakthrough creative AI models emerge," Adobe says. "Whether it's a commercially safe Firefly model from Adobe or the latest model from our expanding partner ecosystem, we're integrating the most in-demand AI models directly into the Firefly app - always with a focus on real creative workflows and the way you work today".

Ray3 is the latest AI video model from Luma AI, built on a new multimodal reasoning system. The idea is that instead of responding randomly to prompts, it can "think through" what a user is asking, plan complex scenes, and judge whether its own output makes sense. It does this by generating text and visual tokens, which Luma AI compares to a director sketching out a storyboard before filming. As a result, videos feel more coherent, with characters that look more consistent, scenes that unfold more naturally, and physics that behave more naturally, Luma says.

The company says Ray3 can generate cinematic, high-quality video footage of up to 10 seconds long. It's the first video AI model to support the native 10-, 12-, and 16-bit High Dynamic Range (HDR) ACES2065-1 EXR format for deeper shadows and brighter highlights. Luma says the model can even be used to convert footage filmed in SDR into HDR.
Users can animate still images, while keyframes provide control over timing and scene changes, and Extend allows users to lengthen a shot. Early users say that compared with other AI video generators, Ray3 is less prone to hallucinations. It also includes features designed for creative use, such as Draft Mode for faster iteration and native 1080p generation (the latter initially for select partners but rolling out generally). A neural upscaler can upscale output to 4K.

"Creative work is one of the most intellectually challenging things humans do, yet until now, much of the AI available to creatives has lagged far behind what's possible in coding and analysis with language models," Amit Jain, CEO and co-founder of Luma AI, says in the release announcement. "Many generative models today have been more like slot machines - powerful but not intelligent.

"Ray3 changes that in a big way. Its groundbreaking reasoning system can understand intent, evaluate its own outputs, and refine results, significantly improving the accuracy and quality of generated video. More than twice the size of Ray2, Ray3 delivers new levels of fidelity, instruction following, and temporal coherence."

Ray3 is available as a model option in the Adobe Firefly app, making Adobe the first third-party partner to launch it outside of Luma AI's Dream Machine platform. Adobe suggests that creatives use it via Firefly's Text to Video to quickly generate b-roll or background footage to fill gaps in videos, or to build dynamic transitions for social media posts. Ray3 also appears as a video generation option in the new Firefly Boards app, where Adobe suggests creatives can use it to explore visual directions for environments, shot compositions and camera perspectives before moving forward with a shoot.

Everything generated with Ray3 in Firefly can be synced to your Creative Cloud account, so you can bring it into apps like Premiere Pro for more editing and refinement. For two weeks, Ray3 will be available only in Adobe Firefly and on Luma AI's Dream Machine platform. Adobe is allowing unlimited free Ray3 generations for all customers on a paid Firefly plan or Creative Cloud Pro plan until 1 October.
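As a rough illustration of the first step in any SDR-to-HDR conversion like the one Luma describes, the snippet below decodes 8-bit sRGB code values into linear-light floats, the form a floating-point HDR container such as EXR stores. This is standard sRGB math, not Luma's method, which the articles don't detail.

```python
import numpy as np

def srgb_to_linear(u8: np.ndarray) -> np.ndarray:
    """Decode 8-bit sRGB code values to linear-light floats in [0, 1].

    This is the standard inverse sRGB transfer function, shown only to
    illustrate the first step of moving SDR footage into a linear HDR
    container such as EXR; the articles do not detail Luma's method.
    """
    c = u8.astype(np.float32) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# An 8-bit SDR frame has at most 256 code values per channel. A linear
# float container can also hold values above 1.0 (brighter-than-white
# highlights), which is the headroom HDR grading exploits.
sdr_frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
linear = srgb_to_linear(sdr_frame)
boosted_highlights = linear * 4.0  # impossible to store in 8-bit SDR
print(linear.dtype, float(boosted_highlights.max()))
```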
Luma AI launches Ray3, a groundbreaking AI video generation model with built-in reasoning capabilities, promising to revolutionize creative workflows in the film and advertising industries.
Luma AI has unveiled Ray3, a groundbreaking AI model set to revolutionize high-quality, cinematic video content. This innovation signifies a major leap in AI-generated media, offering enhanced capabilities to reshape creative workflows across various industries.
Ray3's core innovation is its built-in reasoning capability, distinguishing it from prior AI video models. Unlike predecessors with limited coherence, Ray3 can "think" through complex scene descriptions and intricate instructions.
This reasoning allows Ray3 to:
- Plan out complex, multi-step scenes before generating them
- Evaluate its own outputs and judge whether they make sense before presenting them
- Refine results to better preserve the user's artistic vision
- Interpret visual annotations, such as lines drawn on video stills during drafting
This results in a more intelligent approach to video generation, emulating human animators and filmmakers.
Ray3 features several technical improvements for enhanced output and versatility:
- Native 10-, 12- and 16-bit HDR generation in the professional ACES2065-1 EXR format
- Conversion of SDR footage into HDR, with richer color and more flexibility in editing
- Clips of up to 10 seconds generated from text or images
- A draft mode for fast, low-resolution iteration, with a neural upscaler that brings output up to 4K
- Keyframes and an Extend feature for controlling timing and lengthening shots
Ray3's launch is bolstered by significant industry collaborations:
- Adobe is integrating Ray3 into its Firefly app, with unlimited generations for paid Firefly and Creative Cloud Pro customers until Oct. 1
- Japanese digital marketing firm Dentsu Digital plans to use Ray3 in its production pipelines as a launch partner
- Creative companies Monks and StrawberryFrog are adopting Ray3 to scale their capabilities
- Saudi Arabian AI company Humain plans to integrate Ray3 into its enterprise service for creative professionals
Despite its advancements, Ray3 introduces considerations:
- Clips remain limited to roughly 10 seconds and do not include dialogue
- Artists have filed class action lawsuits against AI companies over training practices, and Luma AI's privacy policy says user-provided information can be used to improve its services
- Adobe's "commercially safe" pitch for Firefly becomes murkier with third-party models, leaving users responsible for tracking how each asset was generated
As AI evolves, tools like Ray3 are poised to significantly influence creative industries, redefining video production capabilities and fostering new artistic expressions.
Summarized by Navi