3 Sources
[1]
Runway Launches New Aleph Model That Promises Next-Level AI Video Editing
Runway, a pioneer in generative video, has unveiled its latest AI model, Runway Aleph, which aims to redefine how people create and edit video content. Aleph builds on Runway's research into General World Models and Simulation Models, giving users a conversational AI tool that can instantly make complex edits to video footage, whether generated or existing. Want to remove a car from a shot? Swap out a background? Restyle an entire scene? According to Runway, Aleph lets you do all of that with a simple prompt.

Unlike previous models that focused mostly on generating video from text, Aleph emphasizes "fluid editing." It can add or erase objects, tweak actions, change lighting and maintain continuity across frames, all challenges that have historically tripped up AI video tools. The company says Aleph's local and global editing capabilities keep scenes, characters and environments consistent, so creators don't have to fix glitches frame by frame. "Runway Aleph is more than a new model -- it's a new way of thinking about video altogether," Runway wrote in its announcement.

The launch comes as AI video creation heats up: big players including OpenAI, Google, Microsoft and Meta have all showcased AI video models this year. But Runway, which helped popularize AI video with its earlier Gen-1 and Gen-2 models, says Aleph pushes things further by combining high-fidelity generation with real-time, conversational editing, which could be significant for filmmakers, studios and advertisers who want faster workflows. Runway says Aleph is already being used by major studios, ad agencies, architecture firms, gaming companies and e-commerce teams.

The company is giving early access to enterprise customers and creative partners starting now, with broader availability rolling out in the coming days.
[2]
Runway's New AI Video Model Can Edit Input Videos With Text Prompts
The firm will first release Aleph to its enterprise and creative users.

Runway released a new artificial intelligence (AI) video generation model on Friday that can edit elements in input videos. Dubbed Aleph (after the first letter of the Hebrew alphabet), the video-to-video AI model can perform a wide range of edits, including adding, removing and transforming objects; generating new angles and next frames; changing the environment, season and time of day; and transforming the style. The New York City-based AI firm said the model will soon roll out to its enterprise and creative customers and the platform's users.

AI video generation models have come a long way: we have moved from generating a couple of seconds of an animated scene to producing full-fledged videos with narrative and even audio. Runway has been at the forefront of this innovation with its video generation tools, which are now used by production houses such as Netflix, Amazon and Walt Disney. In a post on X (formerly Twitter), the company called Aleph a state-of-the-art (SOTA) in-context video model that can transform an input video with simple descriptive prompts, and in a blog post it showcased some of the capabilities Aleph will offer when it becomes available.

Runway has said the model will first be provided to its enterprise and creative customers and then, in the coming weeks, released broadly to all its users. However, the phrasing does not clarify whether users on the free tier will also get access, or whether the model will be reserved for paid subscribers.

As for capabilities, Aleph can take an input video and generate new angles and views of the same scene. Users can request a reverse low shot, an extreme close-up from the right side, or a wide shot of the entire stage. It can also use the input video as a reference and generate the next frames based on prompts. One of the model's most impressive abilities is transforming environments, locations, seasons and the time of day: users can upload a video of a park on a sunny morning, and the model can add rain, a sandstorm or snowfall, or make the scene look like night-time, all while keeping the other elements of the video intact. Aleph can also add objects, remove things such as reflections and buildings, change objects and materials entirely, alter a character's appearance, and recolour objects. Additionally, Runway claims the model can take a particular motion from a video (think a flying drone sequence) and apply it to a different setting.

Runway has not yet shared technical details about the model, such as the supported length of input videos, supported aspect ratios, or application programming interface (API) charges. These will likely be shared when the company officially releases the model.
[3]
Runway Introduces Aleph: New AI Model That Edits & Transforms Video
Unlike previous AI tools, Aleph works directly on input footage. It allows creators to reshape scenes, swap objects, generate future frames, or recreate angles, all while keeping the original video context intact. Whether turning a sunny morning into a snowy evening or changing a city street into a desert landscape, the tool handles edits smoothly through natural-language input.

Aleph supports a range of features designed for professional post-production. From adding props to removing unwanted elements, it simplifies complex VFX tasks. It can shift lighting to mimic golden hour or convert scenes from day to night. The model also recreates motion, offering drone-like effects and perspective shifts with no manual animation.

With Aleph, camera control goes beyond the physical setup. Creators can generate wide, close-up, or reverse angles from existing scenes. The tool can even extend footage, continuing the story in motion and style without reshooting. Edits can include character transformations, such as aging, restyling, or mapping motion from one clip to another.

Runway promotes Aleph as a "state-of-the-art in-context video model." It understands the visual context of uploaded clips and can intelligently make changes based on short prompts, eliminating many tedious frame-by-frame edits and offering speed and creative freedom in a single process.

Aleph is currently in the hands of alpha testers, enterprise users and creative professionals; Runway plans an expanded rollout to a wider set of users but has not announced an official launch date. Information about pricing, supported video length and aspect ratios has not been released. Prominent industry names have already begun exploring the technology: Netflix and Disney use older Runway models in their content pipelines, and with Imax screening AI-generated films built on Runway tools, Aleph bolsters the company's position in the rapidly growing creative tech space.
Runway introduces Aleph, a cutting-edge AI model that promises to transform video editing with its ability to make complex edits through simple text prompts, offering new possibilities for filmmakers and content creators.
Runway, a pioneer in generative video technology, has unveiled its latest artificial intelligence (AI) model, Aleph, which promises to redefine video editing and creation [1]. This state-of-the-art in-context video model builds upon Runway's research into General World Models and Simulation Models, offering users a powerful tool that can make complex edits to video footage through simple text prompts [2].
Aleph's capabilities go far beyond traditional video editing tools, emphasizing what Runway calls "fluid editing." The model can perform a wide range of edits on input videos, including:

- Adding, removing and transforming objects
- Generating new camera angles and next frames of a scene
- Changing the environment, season and time of day
- Transforming the overall style of the footage
Unlike previous AI tools that focused primarily on video generation from text, Aleph works directly on input footage, allowing creators to reshape scenes while maintaining the original video context [3]. This approach enables users to make significant changes to their videos without the need for frame-by-frame manual editing.
Aleph supports a range of features designed for professional post-production work:

- Adding props and removing unwanted elements to simplify complex VFX tasks
- Shifting lighting, such as mimicking golden hour or converting scenes from day to night
- Recreating motion, including drone-like effects and perspective shifts
- Generating wide, close-up, or reverse angles from existing scenes, and extending footage without reshooting
- Character transformations, such as aging, restyling, or mapping motion from one clip to another
Runway claims that Aleph is already being used by major studios, ad agencies, architecture firms, gaming companies, and e-commerce teams [1]. The company's previous models, Gen-1 and Gen-2, have been utilized by prominent industry names such as Netflix and Disney in their content pipelines [3]. With Imax screening AI-generated films built on Runway tools, Aleph's introduction further solidifies the company's position in the rapidly growing creative tech space.
Runway is currently providing early access to Aleph for enterprise customers and creative partners, with plans for broader availability in the coming weeks [2]. While specific details about pricing, supported video lengths, and aspect ratios have not been disclosed, the company has emphasized that Aleph represents "a new way of thinking about video altogether" [1].
As AI video creation continues to evolve, with major players like OpenAI, Google, Microsoft, and Meta showcasing their own AI video models, Runway's Aleph stands out by combining high-fidelity generation with real-time, conversational editing capabilities [1]. This approach could significantly change the workflows of filmmakers, studios, and advertisers, potentially reshaping the post-production process in the film and video industry.