Curated by THEOUTPOST
On Wed, 7 May, 12:04 AM UTC
2 Sources
[1]
Lightricks just made AI video generation 30x faster -- and you won't need a $10,000 GPU
Lightricks, the company behind popular creative apps like Facetune and VideoLeap, announced today the release of its most powerful AI video generation model to date. The LTX Video 13-billion-parameter model (LTXV-13B) generates high-quality AI video up to 30 times faster than comparable models while running on consumer-grade hardware rather than expensive enterprise GPUs.

The model introduces "multiscale rendering," a novel technical approach that dramatically increases efficiency by generating video in progressive layers of detail. This enables creators to produce professional-quality AI videos on standard desktop computers and high-end laptops instead of requiring specialized enterprise equipment.

"The introduction of our 13B parameter LTX Video model marks a pivotal moment in AI video generation with the ability to generate fast, high-quality videos on consumer GPUs," said Zeev Farbman, co-founder and CEO of Lightricks, in an exclusive interview with VentureBeat. "Our users can now create content with more consistency, better quality, and tighter control."

How Lightricks democratizes AI video by solving the GPU memory problem

A major challenge for AI video generation has been its enormous computational requirements. Leading models from companies like Runway, Pika, and Luma typically run in the cloud on multiple enterprise-grade GPUs with 80GB or more of VRAM (video memory), making local deployment impractical for most users.

Farbman explained how LTXV-13B addresses this limitation: "The major dividing line between consumer and enterprise GPUs is the amount of VRAM. Nvidia positions their gaming hardware with strict memory limits -- the previous generation 3090 and 4090 GPUs maxed out at 24 gigabytes of VRAM, while the newest 5090 reaches 32 gigabytes. Enterprise hardware, by comparison, offers significantly more."
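The VRAM dividing line Farbman describes is easy to see with back-of-envelope arithmetic. The sketch below is illustrative only: the 13-billion parameter count comes from the article, while the precision options are generic assumptions about how such a model might be stored, not details Lightricks has disclosed.

```python
PARAMS = 13e9  # LTXV-13B parameter count, per the article

def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """VRAM needed just to hold the model weights, in gigabytes."""
    return num_params * bytes_per_param / 1e9

# Weights alone, before activations or the latent video tensor:
for precision, nbytes in [("fp32", 4), ("fp16/bf16", 2), ("fp8/int8", 1)]:
    print(f"{precision:>9}: {weight_memory_gb(PARAMS, nbytes):.1f} GB")
# fp32: 52.0 GB, fp16/bf16: 26.0 GB, fp8/int8: 13.0 GB
```

At 16-bit precision the weights alone (~26 GB) already exceed the 24 GB of a 3090 or 4090, which is why memory-frugal inference techniques matter so much on consumer cards.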
The new model is designed to operate effectively within these consumer hardware constraints. "The full model, without any quantization, without any approximation, you will be able to run on top consumer GPUs -- 3090, 4090, 5090, including their laptop versions," Farbman noted.

Inside 'multiscale rendering': The artist-inspired technique that makes AI video generation 30X faster

The core innovation behind LTXV-13B's efficiency is its multiscale rendering approach, which Farbman described as "the biggest technical breakthrough of this release."

"It allows the model to generate details gradually," he explained. "You're starting on the coarse grid, getting a rough approximation of the scene, of the motion of the objects moving, etc. And then the scene is kind of divided into tiles. And every tile is filled with progressively more details."

This process mirrors how artists approach complex scenes -- starting with rough sketches before adding progressively finer details. The advantage for AI is that "your peak amount of VRAM is limited by a tile size, not the final resolution," Farbman said.

The model also features a more compressed latent space, which requires less memory while maintaining quality. "With videos, you have a higher compression ratio that allows you, while you're in the latent space, to just take less VRAM," Farbman added.

Why Lightricks is betting on open source when AI markets are increasingly closed

While many leading AI models remain behind closed APIs, Lightricks has made LTXV-13B fully open source, available on both Hugging Face and GitHub. This decision comes during a period when open-source AI development has faced challenges from commercial competition.

"A year ago, things were closed, but things are kind of opening up. We're seeing really a lot of cool LLMs and diffusion models opening up," Farbman reflected. "I'm more optimistic now than I was half a year ago."

The open-source strategy also helps accelerate research and improvement.
"The main rationality for open-sourcing it is to reduce the cost of your R&D," Farbman explained. "There are a ton of people in academia that use the model, write papers, and you're starting to become this curator that understands where the real gold is." How Getty and Shutterstock partnerships help solve AI's copyright challenges As legal challenges mount against AI companies using scraped training data, Lightricks has secured partnerships with Getty Images and Shutterstock to access licensed content for model training. "Collecting data for training AI models is still a legal gray area," Farbman acknowledged. "We have big customers in our enterprise segment that care about this kind of stuff, so we need to make sure we can provide clean models for them." These partnerships allow Lightricks to offer a model with reduced legal risk for commercial applications, potentially giving it an advantage in enterprise markets concerned about copyright issues. The strategic gamble: Why Lightricks offers its advanced AI model free to startups In an unusual move for the AI industry, Lightricks is offering LTXV-13B free to license for enterprises with under $10 million in annual revenue. This approach aims to build a community of developers and companies who can demonstrate the model's value before monetization. "The thinking was that academia is off the hook. These guys can do whatever they want with the model," Farbman said. "With startups and industry, you want to create win-win situations. I don't think you can make a ton of money from a community of artists playing with AI stuff." For larger companies that find success with the model, Lightricks plans to negotiate licensing agreements similar to how game engines charge successful developers. "Once they hit ten million in revenue, we're going to come to talk with them about licensing," Farbman explained. 
Beyond Hollywood: Where AI video is making immediate impact in animation and production

Despite the advances represented by LTXV-13B, Farbman acknowledges that AI video generation still has limitations. "If we're honest with ourselves and look at the top models, we're still far away from Hollywood movies. They're not there yet," he said.

However, he sees immediate practical applications in areas like animation, where creative professionals can use AI to handle time-consuming aspects of production. "When you think about production costs of high-end animation, the real creative work, people thinking about key frames and the story, is a small percent of the budget. But key framing is a big resource thing," Farbman noted.

Looking ahead, Farbman predicts the next frontier will be multimodal video models that integrate different media types in a shared latent space. "It's going to be music, audio, video, etc. And then things like doing good lip sync will be easier. All these things will disappear. You're going to have this multimodal model that knows how to operate across all these different modalities."

LTXV-13B is available now as an open-source release and is being integrated into Lightricks' creative apps, including its flagship storytelling platform, LTX Studio.
[2]
Lightricks shakes up AI video creation with powerful open-source model - SiliconANGLE
Lightricks Ltd. is throwing down the gauntlet to artificial intelligence powerhouses OpenAI, Google LLC and others with the release of its latest open-source video generation model, LTX Video-13B.

The new version is said to be a significant upgrade to Lightricks' original LTXV model, boosting the number of parameters and enhancing its features to "dramatically" increase the quality of its video outputs, while maintaining its impressive speed. Available as part of Lightricks' flagship tool LTX Studio, the company says LTXV-13B can generate videos with "stunning detail, coherence and control," even when running on consumer-grade hardware.

The original LTXV model debuted in November, and it got a lot of attention as one of the most advanced video generation models around. With its lightweight architecture, the 2 billion-parameter model ran efficiently on laptops and personal computers powered by a single consumer-grade graphics processor, rapidly generating five seconds of slick-looking video with smooth and consistent motion.

However, it was the highly accessible nature of LTXV that really helped it stand out from the crowd. In a world where the most advanced models are typically "black boxes" locked behind pay-to-play application programming interfaces, LTXV was a breath of fresh air. The open-source model, its codebase and its weights were made freely available to the AI community, giving researchers and enthusiasts a rare opportunity to understand how it works and make it even better.

Lightricks made LTXV open source because it wants to encourage further innovation in the AI industry, and the only way to do that is by making the latest advances available to everyone, so anyone can build on them. It was a calculated move by the startup, which hoped that by getting its foundational model into the hands of as many developers as possible, it could entice more of them to use its paid platforms.
With LTXV-13B, the company is following the same approach, making it available to download on Hugging Face and GitHub, where it can be licensed freely by any organization with less than $10 million in annual revenue. That means users are free to tinker with it any way they want, fine-tune it, add new features and integrate it into third-party applications.

Users will also be able to get their hands on some compelling new features that have been designed to enhance video quality without affecting the model's efficiency. One of the biggest updates is a new multiscale rendering capability that enables creators to gradually add more detail and color to their videos in a step-by-step process. Think of an artist who starts off with a rough pencil sketch before pulling out a paintbrush and adding more intricate details and colors. Creators can employ the same "layered" approach and progressively enhance the individual elements within their videos, similar to the staged scene construction techniques used by professional filmmakers.

The advantage of doing this is twofold. On the one hand, it results in better-quality videos with more refined visual details, Lightricks said. It is also much faster, enabling the model to render high-resolution video as much as 30 times faster than competing models with a similar number of parameters.

Lightricks also revealed enhancements to existing features for camera motion control, keyframe editing, multishot sequencing, and character- and scene-level motion adjustment. In addition, the release integrates several contributions from the open-source community that improve the model's scene coherence and motion consistency while preserving its efficiency. For instance, Lightricks said it worked with researchers to integrate more advanced reference-to-video generation and video-to-video editing tools with LTXV-13B. And there are new upsampling controls, which help to eliminate the effects of background noise.
The open-source community also helped the company optimize LTXV-13B to ensure it can still run efficiently on consumer-grade GPUs, despite being much bulkier than the original model. This is enabled by the efficient Q8 kernel, which helps to scale the model's performance on devices with minimal compute resources, so developers can run the model locally on modest machines.

LTXV-13B also stands out as an "ethical" model, since it was trained on a curated dataset of visual assets provided by Getty Images Holdings Inc. and Shutterstock Inc. The high quality of its licensed training data ensures that the model's outputs are both visually compelling and safe to use commercially, with minimal risk of copyright infringement.

LTXV-13B is available now through LTX Studio, a premium platform that allows creators to outline their ideas using text-based prompts and progressively refine them to generate professional videos. With LTX Studio, creators can access advanced editing tools, enabling them to change camera angles, refine the appearance of individual characters, edit buildings and objects in the background, adapt the environment and more.

Co-founder and Chief Executive Zeev Farbman said the release is a "pivotal moment" for anyone interested in AI video generation. "Our users can now create content with more consistency, better quality, and tighter control," he promised. "This new version of LTX Video runs on consumer hardware, while staying true to what makes all our products different -- speed, creativity and usability."
Lightricks releases LTXV-13B, an open-source AI video generation model that is 30 times faster than competitors and runs on consumer-grade hardware, democratizing access to high-quality AI-generated videos.
Lightricks, the company behind popular creative apps like Facetune and VideoLeap, has announced the release of its most powerful AI video generation model to date: the LTX Video 13-billion-parameter model (LTXV-13B). This new model represents a significant leap forward in AI video generation technology, offering unprecedented speed and accessibility [1].
One of the most remarkable features of LTXV-13B is its ability to generate high-quality AI videos up to 30 times faster than comparable models while running on consumer-grade hardware. This breakthrough eliminates the need for expensive enterprise GPUs, democratizing access to advanced AI video generation capabilities [1].
Zeev Farbman, co-founder and CEO of Lightricks, emphasized the significance of this achievement: "The introduction of our 13B parameter LTX Video model marks a pivotal moment in AI video generation with the ability to generate fast, high-quality videos on consumer GPUs" [1].
The core innovation behind LTXV-13B's efficiency is its "multiscale rendering" approach. This novel technique generates video in progressive layers of detail, mirroring how artists approach complex scenes. The process starts with a coarse grid and rough approximation of the scene, then gradually adds more details to each tile [1].
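This coarse-to-fine, tile-by-tile process can be illustrated with a toy single-image sketch. Everything below is an assumption for illustration: the `denoise` function is a simple smoothing stand-in for a real diffusion denoising step, and the scale factor and tile size are arbitrary, not values Lightricks uses. The point is only that the refinement pass touches one tile-sized working set at a time, so peak memory is bounded by the tile, not the final resolution.

```python
import numpy as np

def denoise(patch: np.ndarray) -> np.ndarray:
    # Stand-in for a diffusion denoising step; here, just light smoothing.
    return (patch + np.roll(patch, 1, axis=0) + np.roll(patch, 1, axis=1)) / 3

def multiscale_render(height: int, width: int, tile: int = 64, rng=None) -> np.ndarray:
    if rng is None:
        rng = np.random.default_rng(0)
    # Pass 1: a coarse, low-resolution approximation of the whole scene.
    coarse = denoise(rng.standard_normal((height // 4, width // 4)))
    # Upscale the coarse result (nearest-neighbor via a Kronecker product).
    full = np.kron(coarse, np.ones((4, 4)))
    # Pass 2: refine one tile at a time, so the working set per step
    # is bounded by the tile size rather than the full frame.
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            full[y:y + tile, x:x + tile] = denoise(full[y:y + tile, x:x + tile])
    return full

frame = multiscale_render(256, 256)
print(frame.shape)  # (256, 256)
```

In the real model the same idea applies in a compressed latent space and across time as well as space, but the memory argument is identical: each refinement step only needs tile-sized buffers in VRAM.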
This approach not only increases efficiency but also enhances video quality. Users can employ a "layered" approach to progressively enhance individual elements within their videos, similar to the staged scene construction techniques used by professional filmmakers [2].
In a bold move, Lightricks has made LTXV-13B fully open source, available on both Hugging Face and GitHub. This decision comes at a time when many leading AI models remain behind closed APIs. The open-source strategy aims to accelerate research and improvement, leveraging contributions from the academic and developer communities [1][2].
To address potential legal challenges related to AI training data, Lightricks has secured partnerships with Getty Images and Shutterstock. These collaborations provide access to licensed content for model training, reducing legal risks for commercial applications and potentially giving Lightricks an advantage in enterprise markets concerned about copyright issues [1].
In an unusual move, Lightricks is offering LTXV-13B free to license for enterprises with under $10 million in annual revenue. This approach aims to build a community of developers and companies who can demonstrate the model's value before monetization. For larger companies that find success with the model, Lightricks plans to negotiate licensing agreements similar to how game engines charge successful developers [1][2].
Lightricks launches LTX Video (LTXV 0.9), an open-source AI model capable of generating high-quality video clips in near real-time, challenging proprietary AI systems and democratizing advanced video creation.
4 Sources
Shutterstock introduces a novel 'research license' model, partnering with Lightricks to provide access to high-quality video data for AI training, potentially reshaping the landscape of ethical AI development.
3 Sources
Moonvalley, a Los Angeles-based startup, has launched Marey, an AI video generation model trained exclusively on licensed content. This ethical approach aims to address copyright concerns in the rapidly evolving field of AI-generated videos.
5 Sources
Luma AI has launched Photon, a new AI image generation model, alongside an updated Dream Machine platform. The release introduces innovative features for creators, including consistent character generation and a user-friendly interface.
4 Sources
Runway introduces Gen-3 Alpha Turbo, an AI-powered tool that can turn selfies into action-packed videos. This advancement in AI technology promises faster and more cost-effective video generation for content creators.
2 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved