Curated by THEOUTPOST
On Thu, 1 Aug, 12:06 AM UTC
2 Sources
[1]
Turn your selfie into an action star with this new AI image-to-video feature
AI video generator Runway's Gen-3 quickly makes a still into a film

Artificial intelligence-powered video maker Runway has added the promised image-to-video feature to its Gen-3 model, released a few weeks ago, and it may be as impressive as promised. The feature addresses the biggest limitations of the Gen-2 model released early last year: the upgraded tool is far better at character consistency and hyperrealism, making it a more powerful option for creators looking to produce high-quality video content. Runway's Gen-3 model is still in alpha testing and available only to subscribers, who pay $12 per month per editor for the most basic package.

The new model had already attracted plenty of interest when it launched with only text-to-video capabilities. But no matter how good a text-to-video engine is, it has inherent limits, especially in making characters look the same across multiple prompts and appear to exist in the real world. Without visual continuity, it's hard to build any kind of narrative. In earlier iterations of Runway, users often struggled to keep characters and settings uniform across scenes when relying solely on text prompts. Reliable consistency in character and environmental design is no small thing, and using an initial image as a reference point helps maintain coherence across different shots.

In Gen-3, Runway's AI can create a 10-second video guided by additional motion or text prompts in the platform. The image-to-video feature doesn't just keep people and backgrounds consistent when seen from a distance: Gen-3 also incorporates Runway's lip-sync feature, so that someone speaking moves their mouth in a way that matches the words they are saying. A user can tell the AI model what they want their character to say, and the movement will be animated to match.
Combining synchronized dialogue with realistic character movement will interest many marketing and advertising teams looking for new and, ideally, cheaper ways to produce videos.

Runway isn't done adding to the Gen-3 platform, either. The next step is bringing the same enhancements to the video-to-video option, which keeps the same motion but renders it in a different style: a human running down a street becomes an animated anthropomorphic fox dashing through a forest, for instance. Runway will also bring its control features to Gen-3, such as Motion Brush, Advanced Camera Controls, and Director Mode.

AI video tools are still in the early stages of development, with most models excelling at short-form content but struggling with longer narratives. That puts Runway and its new features in a strong market position, but it is far from alone. Midjourney, Ideogram, Leonardo (now owned by Canva), and others are all racing to build the definitive AI video generator. Of course, they're all keeping a wary eye on OpenAI and its Sora video generator; OpenAI has advantages in name recognition, among other benefits. In fact, Toys"R"Us has already made a short film commercial using Sora and premiered it at the Cannes Lions Festival. Still, the film about AI video generators is only in its first act, and the triumphant winner cheering in slow motion at the end is far from decided.
[2]
Runway announces even faster, cheaper AI video model Gen-3 Alpha Turbo
It's been a while since we've had a fancy new AI model with the word "Turbo" to explore, but Runway is making sure it doesn't stay that way.

The New York City-based startup, which just earlier this week turned heads with the new image-to-video update for its strikingly realistic Gen-3 Alpha model, today announced on the social platform X (formerly Twitter) that it is debuting yet another, faster version of that model: Gen-3 Alpha Turbo, which will be "rolling out...with significantly lower pricing over the coming days."

In its post, Runway said the Turbo model was "7x faster than the original Gen-3 Alpha." Runway co-founder and CEO Cristóbal Valenzuela also posted on X that users could generate new videos with Gen-3 Alpha Turbo in "real-time," or at least close to it, producing a 10-second video in 11 seconds.

We previously tested the image-to-video feature on the prior version of Gen-3 Alpha and found it already quite fast, often generating videos from stills in less than a minute. But apparently Runway thought this wasn't good enough and sought to do much better. That makes sense if the company wants to maintain its lead in highly realistic, Hollywood-quality generative AI video models even as upstarts nip at its heels, including Pika Labs, Luma AI, Kling, and, of course, OpenAI's Sora, the last of which remains available only to a small group of handpicked testers despite being first shown off in February.

Valenzuela also told a questioner on X that Runway is working on an update to its mobile app to add support for image-to-video with Gen-3 Alpha.

More for less?

But why would Runway offer a newer, faster version of its latest model, with the same quality of AI video generations, at a lower price?
Aside from Turbo potentially being a simpler, less computationally heavy model to run on its servers (and therefore cheaper), the company may also be banking on faster generations leading to more overall usage, and thus more overall spending on its subscription plans or its à la carte generation "credits" model.

Right now, Runway offers a variety of monthly subscription plans, each of which comes with a set number of credits that must be exchanged for each generation of still images or video on its platform. Gen-3 Alpha, the most recent model, costs 10 credits for every second of video generated. The older Gen-2 model is priced at 5 credits per second, while, interestingly, the oldest Gen-1 model is the most expensive at 14 credits per second of AI video. It would make sense, then, for the company to offer Gen-3 Alpha Turbo at around 7 credits per second of video, or perhaps as low as 5.

Training questions persist

Last week, 404 Media obtained a spreadsheet, allegedly from a former Runway employee, showing the company's plans to scrape and train its AI models on popular YouTube channel videos, including copyrighted content from major motion pictures and TV shows that had been ripped and posted or clipped by other YouTube users. While the company faced criticism from some on the web for the tactic, Runway has not yet commented on the 404 Media report and spreadsheet. That said, it is already facing lawsuits, alongside other generative AI companies, from creators alleging copyright violations over their still images. Yet scraping, I maintain, has broadly been viewed as permissible ever since Google followed a similar tactic to build its search index and sell ads against it.
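To make the credit arithmetic described above concrete, here is a minimal Python sketch. The per-second rates for Gen-1 through Gen-3 Alpha are the prices reported in the article; the Turbo rate is the article's speculation, not a confirmed price.

```python
# Credits per second of generated video, per the article's figures.
# The Gen-3 Alpha Turbo rate is speculative (the article guesses 5-7).
CREDITS_PER_SECOND = {
    "Gen-1": 14,
    "Gen-2": 5,
    "Gen-3 Alpha": 10,
    "Gen-3 Alpha Turbo": 7,  # hypothetical
}

def clip_cost(model: str, seconds: int) -> int:
    """Credits consumed to generate `seconds` of video with `model`."""
    return CREDITS_PER_SECOND[model] * seconds

# A 10-second clip costs 100 credits on Gen-3 Alpha; at the speculated
# 7-credit Turbo rate the same clip would cost 70, a 30% saving.
print(clip_cost("Gen-3 Alpha", 10))        # 100
print(clip_cost("Gen-3 Alpha Turbo", 10))  # 70
```

If the speculated rate holds, Turbo would undercut Gen-3 Alpha while still costing more per second than Gen-2.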
Still, today one prominent critic of unauthorized generative AI scraping, former Stability AI exec turned nonprofit founder Ed Newton-Rex, who offers paid certification for ethically trained AI through his organization Fairly Trained, called out Runway again on X and asked the company to disclose its training data set. Most leading generative AI companies, even those behind open-source models such as Meta's Llama 3.1, have not fully disclosed the intricacies of their training data sets, and it is reasonable, to me, to conclude they view that data as a proprietary and competitive secret. But as these lawsuits make their way through the courts, we'll see whether discovery forces generative AI model providers such as Runway to disclose their training data, whether they are found to have violated any copyrights, and, if so, what the remediation or damages would be.
Runway introduces Gen-3 Alpha Turbo, a faster, cheaper version of its AI video model that can turn selfies into action-packed videos, promising quicker and more cost-effective video generation for content creators.
Runway, a leading AI company, has announced the release of Gen-3 Alpha Turbo, an AI video model that can transform static images into dynamic, action-packed videos. This technology marks a significant step forward in AI-generated content, offering content creators and filmmakers new possibilities for visual storytelling [1].
The Gen-3 Alpha Turbo's most captivating feature is its ability to turn a simple selfie into an exhilarating action sequence. Users can upload a photo of themselves and watch as the AI transforms it into a video depicting them in various high-octane scenarios, such as piloting a fighter jet or engaging in a car chase. This functionality opens up new avenues for personal content creation and entertainment [1].
Runway's latest offering boasts significant improvements in both speed and cost compared to its predecessors. The company says Gen-3 Alpha Turbo generates video seven times faster than the original Gen-3 Alpha; co-founder and CEO Cristóbal Valenzuela claims near real-time output, with a 10-second clip produced in about 11 seconds. The model will also roll out with significantly lower pricing, making it accessible to a wider range of users [2].
Gen-3 Alpha Turbo introduces new features that provide users with greater creative control over their generated content. The tool now offers more precise text prompts and the ability to use reference images as style guides. These enhancements allow for more nuanced and customized video outputs, catering to the specific vision of content creators [2].
The introduction of Gen-3 Alpha Turbo has far-reaching implications for the content creation industry. By significantly reducing the time and cost associated with video production, this technology democratizes access to high-quality, visually stunning content. It empowers individual creators, small businesses, and even larger production houses to produce engaging visual narratives with unprecedented ease [1][2].
As with any advanced AI technology, the release of Gen-3 Alpha Turbo raises important questions about the ethical implications of AI-generated content. Issues such as copyright, consent, and the potential for misuse in creating deepfakes will likely become more prominent as these tools become more widespread and sophisticated. Runway and other companies in this space will need to address these concerns as they continue to push the boundaries of AI-generated video technology [1][2].
© 2024 TheOutpost.AI All rights reserved