The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Sat, 23 Nov, 12:05 AM UTC
4 Sources
[1]
Lightricks launches open-source AI model for faster video generation
Lightricks, the tech company behind the iconic selfie-editing app Facetune, has unveiled LTX Video (LTXV 0.9), an open-source AI model poised to revolutionize video creation. While many might know Lightricks for transforming social media feeds with Facetune's photo-editing tools, the company has expanded its creative technology portfolio, now tackling real-time video generation with this innovative new release.

The LTXV model is designed to generate five-second video clips in just four seconds, providing an efficient solution for creators and researchers working with generative AI. It promises high-quality outputs, smoother motion consistency, and accessibility for smaller studios and independent creators using prosumer hardware like the RTX 4090. Lightricks CEO Zeev Farbman described the model as a step toward fostering innovation in video AI: "With many AI technologies becoming proprietary, we believe it's time for an open-sourced video model that the global academic and developer community can build on to shape the future of AI video."

Fast processing: Generates 121 frames of video (768×512 resolution) in just four seconds using 20 diffusion steps.
Motion consistency: Reduces visual distortions like object morphing across frames, ensuring smooth, coherent transitions.
Accessible hardware: Runs efficiently on consumer-level GPUs, lowering the barrier to entry for advanced video creation.

Available on GitHub and Hugging Face, LTXV invites developers to customize and build upon its capabilities. The release builds on Lightricks' commitment to collaboration, following earlier contributions like Long AnimateDiff, another open-source framework. With LTXV, Lightricks continues its evolution from shaping social media imagery with Facetune to pioneering AI video technology, setting new standards for creativity and innovation.
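The quoted figures imply the model renders frames faster than they play back. A quick sanity check of that arithmetic, using only the numbers reported above (121 frames in a five-second clip, generated in four seconds), confirms the "real-time" framing:

```python
# Sanity-check of the "five seconds of video in four seconds" claim,
# using only figures quoted in the article. No model code involved.
FRAMES = 121        # frames per clip, as reported
CLIP_SECONDS = 5.0  # playback length of the clip
GEN_SECONDS = 4.0   # reported generation time on an H100

playback_fps = FRAMES / CLIP_SECONDS   # frame rate the clip plays at
generation_fps = FRAMES / GEN_SECONDS  # frames produced per second of compute
realtime_factor = CLIP_SECONDS / GEN_SECONDS

print(f"playback: {playback_fps:.1f} fps")        # ~24.2 fps
print(f"generation: {generation_fps:.1f} fps")    # ~30.3 fps
print(f"real-time factor: {realtime_factor:.2f}x")
```

Because the generation rate (~30 frames per second of compute) exceeds the playback rate (~24 fps), output genuinely outpaces playback, which is what makes the interactive use cases discussed later plausible.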
[2]
Lightricks unveils new open-source AI video model -- an impressive focus on speed and motion
Lightricks, the app developer behind LTX Studio, Facetune and Videoleap, has released its first custom AI video model: LTX Video 0.9. It is open-source, and the company claims it's capable of generating five seconds of AI video in just four seconds. The company says the new model can generate clips with impressive degrees of motion consistency and realism, and do so with greater efficiency than other similar-sized alternatives.

LTX Video is being dubbed a "real-time video creator" that was built with feedback from users of LTX Studio, a platform that lets you create a multi-clip project from a single prompt. The five-seconds-of-video-in-four-seconds benchmark was achieved on an Nvidia H100 GPU at 768 x 512 resolution. The model will also run on a standard Nvidia RTX 4090, although it will take a fair bit longer than four seconds to create the video in that case.

Zeev Farbman, Co-founder and CEO of Lightricks, said: "With many AI technologies becoming proprietary, we believe it's time for an open-sourced video model that the global academic and developer community can build on and help shape the future of AI video."

LTX Video is open-source, much like Mochi-1, and according to Lightricks it is able to generate videos quickly while maintaining video quality and motion accuracy. "We built Lightricks with a vision to push the boundaries of what's possible in digital creativity to continue bridging that gap between imagination and creation -- ultimately leading to LTXV, which will allow us to develop better products that address the needs of so many industries taking advantage of AI's power," said Farbman.

Being able to run a model like this comfortably on a good gaming PC is a huge step up for AI video and gets us to a point where it could be integrated into games or video editing tools for real-time rendering and previews. The company promises "unmatched motion and structure consistency" from the LTX Video diffusion transformer architecture.
The model ensures coherent transitions between individual frames within the five-second video, creating smoother motion and reducing morphing. This will in turn make it easier to scale up to longer-form video production in the future, according to Yaron Inger, CTO of Lightricks, who said it will enable a wider range of use cases than are currently possible.

Inger said: "The ability to generate videos faster than playing them opens the possibility for applications beyond content creation, like gaming and interactive experiences for shopping, learning or socializing. We're excited to see how researchers and developers will build upon this foundational model."

I've been trying LTX Video in an early preview and was impressed with the motion quality as well as the overall visual output. It isn't as good as Kling or Runway Gen-3, but for an open-source model able to generate at speed, it is a serious contender. It is available in image-to-video and text-to-video modes. LTX Video is also adaptable to a range of video lengths and resolutions, which makes it useful in production scenarios.

Finally, because it is fully open source, with both codebase and model weights released, other developers can enhance or build on the base model. We've seen this with image models like Flux and Stable Diffusion, resulting in a wider range of capabilities than a single company might be able to develop alone.

All of the videos in this article were made using LTX Video and were as fast to generate as you'd expect. I haven't yet tried it offline, but it is available through ComfyUI if you have a good enough gaming PC available.
[3]
This Open-Source AI Video Model Can Generate Videos in Real Time
The LTX Video model can generate videos in 768 x 512 resolution
The AI model can run locally on an Nvidia RTX 4090 GPU
It uses a Diffusion Transformer with 2 billion parameters

Lightricks, a software company focused on image and video editing, released an open-source artificial intelligence (AI) video model in preview last week. Dubbed LTX Video, the AI model can generate medium-resolution videos in real time. While real-time video generation exists in a few proprietary models, this is the first such model to be open-sourced. The company also stated that once the full version is released, it will be free for both personal and commercial use and can be integrated into LTX Studio.

In a series of posts on X (formerly known as Twitter), Lightricks detailed its open-source AI model. LTX Video accepts both text and images as input and can generate five-second-long videos in 768 x 512 resolution. While the preview model caps video quality at medium resolution, it offers near real-time generation with four seconds of wait time. However, this generation time is only possible on devices equipped with the Nvidia H100 chip.

The company claims that the AI model can generate dynamic videos with high prompt adherence and does not require high-end resources to run. To run LTX Video locally, users will need a GPU on the level of an RTX 4090. Lightricks also highlighted that the model architecture is based on the Diffusion Transformer but uses only two billion parameters to keep its size small.

LTX Video is currently available to download on GitHub, Hugging Face, and ComfyUI. To test the model's capabilities before downloading it, users can visit the model page on Fal.ai. The AI model can also be integrated with a wide range of external editing tools to further fine-tune the generated videos.
The company plans to release the full version of the video model as open source for both personal and commercial use. The full version will also be integrated with LTX Studio, the company's AI-powered storyboard platform.
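The two-billion-parameter figure helps explain why a consumer GPU like the RTX 4090 suffices. As a rough back-of-the-envelope sketch (assuming, as is common for models of this kind but not stated in any of the articles, that weights are stored in 16-bit floating point), the parameter memory alone is well under the 4090's 24 GB:

```python
# Rough weight-memory estimate for a 2-billion-parameter model.
# Assumption (not confirmed by the source): 16-bit weights (fp16/bf16),
# i.e. 2 bytes per parameter. Activations and overhead come on top.
PARAMS = 2_000_000_000
BYTES_PER_PARAM = 2  # assumed fp16/bf16 storage
RTX_4090_VRAM_GB = 24

weight_gb = PARAMS * BYTES_PER_PARAM / 1024**3
print(f"~{weight_gb:.1f} GB of weights")  # prints ~3.7 GB
print(f"fits in {RTX_4090_VRAM_GB} GB: {weight_gb < RTX_4090_VRAM_GB}")
```

Under that assumption the weights occupy roughly 3.7 GB, leaving most of the card's memory for activations and intermediate video latents, which is consistent with the claim that the model runs locally on prosumer hardware.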
[4]
Exclusive: Lightricks bets on open-source AI video to challenge Big Tech
Lightricks, the Israeli company behind the viral photo-editing app Facetune, is launching an ambitious effort to shake up the generative AI landscape. The company announced today the release of LTX Video (LTXV), an open-source AI model capable of generating five seconds of high-quality video in just four seconds. By making its video model freely available, Lightricks is taking direct aim at the growing dominance of proprietary AI systems from tech giants like OpenAI, Adobe, and Google.

"We believe foundational models are going to be a commodity, and you can't build an actual business around foundational models," said Zeev Farbman, Co-founder and CEO of Lightricks, in an exclusive interview with VentureBeat. "If startups want to have a serious chance to compete, the technology needs to be open, and you want to make sure that people in the top universities across the world have access to your model and add capabilities on top of it."

With real-time processing, scalability for long-form video, and a compact architecture that runs efficiently even on consumer-grade hardware, LTXV is poised to make professional-grade generative video technology accessible to a broader audience -- an approach that could disrupt the industry's status quo.

How Lightricks weaponizes open source to challenge AI giants

Lightricks' decision to release LTXV as open source is a calculated gamble designed to differentiate the company in an increasingly crowded generative AI market. The model, with its two billion parameters, is designed to run efficiently on widely available GPUs, such as the NVIDIA RTX 4090, while maintaining high visual fidelity and motion consistency. This move comes at a time when many leading AI models -- from OpenAI's DALL-E to Google's Imagen -- are locked behind APIs, requiring developers to pay for access.
Lightricks, by contrast, is betting that openness will foster innovation and adoption. Farbman compared LTXV's launch to Meta's release of its open-source Llama language models, which quickly gained traction in the AI community and helped Meta establish itself in a space dominated by OpenAI's ChatGPT. "The business rationale is that if the community adopts it, if people in academia adopt it, we as a company are going to benefit a ton from it," Farbman said.

Unlike Meta, which controls the infrastructure its models run on, Lightricks is focusing solely on the model itself, working with platforms like Hugging Face to make it accessible. "We're not going to make any money out of this model at the moment," Farbman emphasized. "Some people are going to deploy it locally on their hardware, like a gaming PC. It's all about adoption."

Lightning-fast AI video: Breaking speed records on consumer hardware

LTXV's standout feature is its speed. The model can generate five seconds of video -- 121 frames at 768×512 resolution -- in just four seconds on NVIDIA's H100 GPUs. Even on consumer-grade hardware, such as the RTX 4090, LTXV delivers near-real-time performance, making it one of the fastest models of its kind.

This speed is achieved without compromising quality. The model's Diffusion Transformer architecture ensures smooth motion and structural consistency between frames, addressing a key limitation of earlier video-generation models. For smaller studios, independent creators, and researchers, the ability to iterate quickly and generate high-quality results on affordable hardware is a game-changer.

"When you're waiting a couple of minutes to get a result, it's a terrible user experience," Farbman said. "But once you're getting feedback quickly, you can experiment and iterate faster. You develop a mental model of what the system can do, and that unlocks creativity."
Lightricks has also designed LTXV to support longer-form video production, offering creators greater flexibility and control. This scalability, combined with its rapid processing times, opens up new possibilities for industries ranging from gaming to e-commerce. In gaming, for example, LTXV could be used to upscale graphics in older games, transforming them into visually stunning experiences. In e-commerce, the model's speed and efficiency could enable businesses to create thousands of ad variations for targeted A/B testing. "Imagine casting an actor -- real or virtual -- and tweaking the visuals in real time to find the best creative for a specific audience," Farbman said.

From photo app to AI powerhouse: Lightricks' bold market play

With LTXV, Lightricks is positioning itself as a disruptor in an industry increasingly dominated by a handful of tech giants. This is a bold move for a company that started as a mobile app maker and is best known for Facetune, a consumer photo-editing app that became a global hit. Lightricks has since expanded its offerings, acquiring the Chicago-based influencer marketing platform Popular Pays and launching LTX Studio, an AI-driven storytelling platform aimed at professional creators. The integration of LTXV into LTX Studio is expected to enhance the platform's capabilities, allowing users to generate longer, more dynamic videos with greater speed and precision.

But Lightricks faces significant challenges. Competing against industry heavyweights like Adobe and Autodesk, which have deeper pockets and established user bases, won't be easy. Adobe, for example, has already integrated generative AI into its Creative Cloud suite, giving it a natural advantage among professional users. Farbman acknowledges the risks but believes that open-source innovation is the only viable path forward for smaller players.
"If you want to have a fighting chance as a startup versus the giants, you need to ensure the technology is open and adopted by academia and the broader community," he said. Why open source could win the AI video generation race The release of LTXV also highlights a growing tension in the AI industry between open-source and proprietary approaches. While closed models offer companies tighter control and monetization opportunities, they risk alienating developers and researchers who lack access to cutting-edge tools. "Part of what's going on at the moment is that diffusion models are becoming an alternative paradigm to classical ways of doing things in computer graphics," Farbman explained. "But if you actually want to build alternatives, APIs are definitely not enough. You need to give people -- academia, industry, enthusiasts -- models to tinker with and create amazing new ideas." Lightricks plans to release LTXV on both GitHub and Hugging Face, with an initial "community preview" phase to allow for testing and feedback. The model will eventually be released under an OpenRAIL license, ensuring that derivatives remain open for academic and commercial use. For Lightricks, the stakes are high. The company is betting not only on the success of LTXV but also on the broader adoption of open AI models in a field increasingly dominated by closed ecosystems. "The future of open models is bright," Farbman said confidently. Whether that vision comes to fruition remains to be seen. But by making its most advanced technology freely available, Lightricks is sending a clear message: in the race to define the future of AI video, openness and collaboration may be the ultimate competitive advantage.
Lightricks launches LTX Video (LTXV 0.9), an open-source AI model capable of generating high-quality video clips in near real-time, challenging proprietary AI systems and democratizing advanced video creation.
Lightricks, the company behind the popular photo-editing app Facetune, has unveiled LTX Video (LTXV 0.9), an open-source AI model designed to revolutionize video creation [1]. This innovative technology promises to generate high-quality, five-second video clips in just four seconds, setting a new standard for efficiency in AI-powered video generation [2].
LTXV boasts several impressive features that set it apart in the AI video generation landscape:
Speed and Efficiency: The model can generate 121 frames of video at 768×512 resolution in just four seconds using 20 diffusion steps on high-end hardware like NVIDIA H100 GPUs [1][2].
Motion Consistency: LTXV addresses a common issue in AI-generated videos by reducing visual distortions and ensuring smooth, coherent transitions between frames [1][3].
Accessibility: The model is designed to run efficiently on consumer-level GPUs like the NVIDIA RTX 4090, making it accessible to smaller studios and independent creators [1][2].
Versatility: LTXV supports both image-to-video and text-to-video modes, offering flexibility for various creative applications [2].
Lightricks' decision to make LTXV open-source is a strategic move aimed at fostering innovation and challenging the dominance of proprietary AI systems:
Community Collaboration: By releasing LTXV on platforms like GitHub and Hugging Face, Lightricks invites developers to customize and build upon its capabilities [1][4].
Democratizing AI Video: The open-source nature of LTXV lowers barriers to entry for advanced video creation, potentially disrupting the industry status quo [4].
Competing with Tech Giants: This approach positions Lightricks to challenge larger companies like OpenAI, Adobe, and Google in the generative AI space [4].
The release of LTXV opens up exciting possibilities across various industries:
Content Creation: The model's speed and quality make it ideal for rapid prototyping and iteration in video production [2][4].
Gaming and Interactive Experiences: LTXV's real-time capabilities could be integrated into games or used for interactive shopping, learning, or socializing applications [2].
E-commerce and Advertising: The ability to quickly generate multiple video variations could revolutionize A/B testing and targeted advertising [4].
Zeev Farbman, Co-founder and CEO of Lightricks, emphasizes the importance of open-sourcing AI technologies: "With many AI technologies becoming proprietary, we believe it's time for an open-sourced video model that the global academic and developer community can build on to shape the future of AI video" [1][2].
Yaron Inger, CTO of Lightricks, highlights the model's potential for scaling up to longer-form video production and enabling a wider range of use cases [2].
As Lightricks continues to evolve from a photo-editing app developer to an AI technology pioneer, LTXV represents a significant step in democratizing advanced video creation tools and challenging the current AI landscape dominated by tech giants [4].