2 Sources
[1]
Luma releases a new AI model that lets users generate a video from a start and end frame | TechCrunch
Luma, the a16z-backed AI video and 3D model company, released a new model called Ray3 Modify that allows users to modify existing footage by providing character reference images that preserve the performance of the original footage. Users can also provide a start and an end frame to guide the model in generating transitional footage.

The company said Thursday that Ray3 Modify solves the problem of preserving human performance while editing or generating effects with AI for creative studios. The startup said the model follows the input footage more closely, allowing studios to use human actors for creative or brand footage. Luma said the new model retains the actor's original motion, timing, eye line, and emotional delivery while transforming the scene.

With Ray3 Modify, users can apply a character reference to the original footage and convert the human actor's appearance into that character. The reference also allows creators to retain details like costumes, likeness, and identity across the shoot. What's more, users can provide start and end reference frames to create a video with the new model. This is helpful for directing transitions or controlling character movements and behavior while maintaining continuity between scenes.

"Generative video models are incredibly expressive but also hard to control. Today, we are excited to introduce Ray3 Modify that blends the real-world with the expressivity of AI while giving full control to creatives. This means creative teams can capture performances with a camera and then immediately modify them to be in any location imaginable, change costumes, or even go back and reshoot the scene with AI, without recreating the physical shoot," Amit Jain, co-founder and CEO of Luma AI, said in a statement.

Luma said the new model is available to users through the company's Dream Machine platform.
The company, which competes with the likes of Runway and Kling, first released video modification capabilities in June 2025. The model release comes on the back of a fresh $900 million funding round for the startup, announced in November and led by Humain, an AI company owned by Saudi Arabia's Public Investment Fund. Existing investors a16z, Amplify Partners, and Matrix Partners also participated in the round. The startup also plans to build a 2GW AI cluster in Saudi Arabia together with Humain.
[2]
Luma AI brings character consistency to video with Ray3
Luma AI, an a16z-backed company specializing in AI video and 3D models, released Ray3 Modify, a new model that enables users to modify existing footage using character reference images while preserving the original performance. Users can also supply start and end frames to generate transitional footage, addressing control and continuity challenges for creative studios.

Ray3 Modify tackles the problem of maintaining human performance during AI-based editing or effects generation. The model adheres closely to the input footage, permitting studios to employ human actors in creative or brand productions. It specifically retains the actor's original motion, timing, eye line, and emotional delivery as it transforms the scene.

Users input a character reference to alter the human actor's appearance into the specified character within the footage. This process preserves details such as costumes, likeness, and identity throughout the entire shoot, ensuring consistent visual elements across sequences without additional filming. Creators can also supply start and end reference frames to produce video segments with Ray3 Modify. This feature assists in directing smooth transitions or specifying character movements and behaviors, keeping continuity between scenes intact and supporting complex narrative structures in video projects.

Amit Jain, co-founder and CEO of Luma AI, stated, "Generative video models are incredibly expressive but also hard to control. Today, we are excited to introduce Ray3 Modify that blends the real-world with the expressivity of AI while giving full control to creatives. This means creative teams can capture performances with a camera and then immediately modify them to be in any location imaginable, change costumes, or even go back and reshoot the scene with AI, without recreating the physical shoot."

The model integrates directly into Luma's Dream Machine platform, making it accessible to users immediately.
Luma positions itself against competitors including Runway and Kling in the AI video generation space. The company introduced initial video modification capabilities in June 2025. This release follows a $900 million funding round announced in November, led by Humain, an AI company owned by Saudi Arabia's Public Investment Fund. Existing investors a16z, Amplify Partners, and Matrix Partners joined the round, providing substantial capital for expansion.

Video: Luma

Luma plans to construct a 2GW AI cluster in Saudi Arabia in collaboration with Humain. This infrastructure initiative supports scaling computational resources for advanced AI video and 3D modeling operations.
Luma AI unveiled Ray3 Modify, a new model that lets creators modify existing video footage using character reference images while preserving the original human performance. The tool can generate a video from a start and end frame, addressing control challenges in AI video generation for creative studios.
Luma AI, the a16z-backed company specializing in AI video and 3D model technology, has released Ray3 Modify, a new model designed to modify existing video footage while maintaining the integrity of original performances [1]. The model addresses a persistent challenge in generative AI: balancing creative expression with precise control. By allowing users to provide character reference images and generate a video from a start and end frame, Ray3 Modify offers creative studios a way to transform footage without losing the nuances of human acting [2].

The core innovation of Ray3 Modify lies in its ability to preserve the original human performance during video editing and effects generation. The model retains critical elements including the actor's motion, timing, eye line, and emotional delivery while transforming the visual scene around them [1]. This capability allows creative teams to capture performances with a camera and immediately modify them to place actors in different locations, change costumes, or even reshoot scenes using AI without recreating the physical production [2]. For studios working on brand content or narrative projects, this means significant time and cost savings while maintaining artistic vision.
Source: TechCrunch
Creators can input character reference images to transform a human actor's appearance into a specified character within the footage. This process ensures character consistency by preserving details such as costumes, likeness, and identity throughout the entire shoot [2]. The feature proves particularly valuable for productions requiring visual continuity across multiple scenes or sequences, eliminating the need for additional filming sessions to maintain consistent character appearances.

Beyond character transformation, Ray3 Modify enables users to generate a video from a start and end frame, creating transitional footage that maintains narrative flow [1]. This functionality assists creators in directing smooth transitions, controlling character movements, and specifying behaviors while maintaining continuity between scenes [2]. The approach gives filmmakers and content creators unprecedented control over complex narrative structures without requiring extensive reshoots.
Amit Jain, co-founder and CEO of Luma AI, explained the model's purpose: "Generative video models are incredibly expressive but also hard to control. Today, we are excited to introduce Ray3 Modify that blends the real-world with the expressivity of AI while giving full control to creatives" [1]. The model is now available through Luma's Dream Machine platform, positioning the company against competitors including Runway and Kling in the increasingly competitive AI video space [2]. Luma first introduced video modification capabilities in June 2025, and Ray3 Modify represents a significant evolution of those initial features.

The model release follows a substantial $900 million funding round announced in November, led by Humain, an AI company owned by Saudi Arabia's Public Investment Fund [1]. Existing investors including a16z, Amplify Partners, and Matrix Partners participated in the round, providing capital for ambitious expansion plans [2]. The startup plans to construct a 2GW AI cluster in Saudi Arabia in collaboration with Humain, supporting the computational demands of advanced AI video and 3D modeling operations [1]. This infrastructure investment signals Luma's commitment to scaling its technology and competing at the highest levels of AI video generation.

Summarized by Navi