Curated by THEOUTPOST
On Wed, 23 Oct, 8:04 AM UTC
2 Sources
[1]
Runway's Act-One uses smartphone cameras to replicate facial expression motion capture - SiliconANGLE
Runway AI Inc., the generative artificial intelligence startup that builds tools for AI-generated video creation, has announced a new feature to help creators give their AI video characters more realistic facial expressions. Called Act-One, it lets users record themselves on something as simple as a smartphone camera, capture their facial expressions, and replicate them on an AI-generated video character.

Runway said in a blog post that the tool is rolling out to users starting today and can be accessed by anyone with a Runway account. That said, it's not entirely free to use: users will need enough credits on their account to access the startup's most advanced Gen-3 Alpha video generation model.

The Gen-3 Alpha model debuted earlier this year, introducing support for text-to-video, image-to-video and video-to-video modalities. Users can write a description of a scene, upload an image or a video, or use a combination of those inputs as prompts, and the model will create a video that tries to match the user's vision.

Although Gen-3 Alpha can create some impressive videos, one area where it has always been weak is facial animation, particularly in creating accurate facial expressions that match the mood of the scene. In the filmmaking industry, facial animation is an intricate and expensive task involving sophisticated motion capture technology, manual face rigging and heavy behind-the-scenes editing. Runway is trying to make advanced facial animation more accessible with Act-One: using the tool, creators can animate their video characters in almost any way they can imagine, without needing pricey motion capture equipment.
Instead, Act-One makes it possible to use your own videos and facial expressions as a reference, transposing them onto AI-generated characters. It's remarkably detailed, able to replicate everything from micro-expressions to eye-lines across many different characters. In a post on its official X account, Runway said Act-One can "translate the performance from a single input video across countless character designs and in many different styles". Although it has not yet rolled out to every Runway user, the company has already received positive feedback from creators.

Act-One can be used by a range of creative professionals, including animators, video game developers and indie filmmakers, enabling them to generate more unique characters whose emotions and expressions reflect their personality and actions. They'll be able to create much more realistic, cinema-like video characters and capture them at any camera angle or focal length, Runway said, unlocking the potential for richer, more detailed storytelling and artistic expression. By eliminating the technical barrier associated with character animation, the company hopes to inspire a new generation of creators to better express themselves. For instance, an indie film producer can use a single actor to play multiple animated characters with Hollywood-level realism, using only a consumer-grade camera.

In a post on X, Runway co-founder and Chief Executive Cristóbal Valenzuela said the filmmaking industry is becoming much more receptive to the potential of generative AI. Runway added that Act-One comes with a number of built-in safeguards to prevent misuse, including guardrails against generating content featuring public figures without their express authorization. It is also integrated with tools to verify voice usage rights, the company said.

In addition, Runway will continuously monitor the tool to ensure creators use it responsibly.
[2]
'This is a game changer': Runway releases new AI facial expression motion capture feature Act-One
AI video has come incredibly far since the first models debuted in late 2022, with gains in realism, resolution, fidelity, prompt adherence (how well the output matches the text description the user typed) and sheer number of models. But one area that remains a limitation for many AI video creators -- myself included -- is depicting realistic facial expressions in AI-generated characters. Most appear quite limited and difficult to control.

But no longer: today Runway, the New York City-headquartered AI startup backed by Google and others, announced a new feature, "Act-One," that allows users to record video of themselves or actors with any camera -- even the one on a smartphone -- and then transfers the subject's facial expressions to an AI-generated character with uncanny accuracy.

The tool is rolling out gradually to users starting today, according to Runway's blog post on the feature. While anyone with a Runway account can access it, it will be limited to those who have enough credits to generate new videos with the company's Gen-3 Alpha video generation model, introduced earlier this year, which supports text-to-video, image-to-video and video-to-video AI creation pipelines (the user can type in a scene description, upload an image or a video, or use a combination of these inputs, and Gen-3 Alpha will use what it's given to guide its generation of a new scene).

Despite limited availability at the time of this posting, the burgeoning scene of AI video creators online is already applauding the new feature. As Allen T. remarked on his X account: "This is a game changer!"
It also comes on the heels of Runway's move into Hollywood film production last month, when it announced a deal with Lionsgate, the studio behind the John Wick and Hunger Games franchises, to create a custom AI video generation model based on the studio's catalog of more than 20,000 titles.

Simplifying a traditionally complex and equipment-heavy creative process

Traditionally, facial animation requires extensive and often cumbersome processes, including motion capture equipment, manual face rigging and multiple takes of reference footage. Anyone interested in filmmaking has likely glimpsed the intricacy and difficulty of this process on set or in behind-the-scenes footage of effects-heavy, motion-capture films such as The Lord of the Rings series, Avatar, or Rise of the Planet of the Apes, in which actors are covered in ping-pong-ball markers, their faces dotted with ink and framed by head-mounted rigs. Accurately modeling intricate facial expressions is what led David Fincher and his production team on The Curious Case of Benjamin Button to develop whole new 3D modeling processes, ultimately winning them an Academy Award, as VentureBeat previously reported.

Yet in the last few years, new software and AI-based startups such as Move have sought to reduce the equipment necessary for accurate motion capture -- though that company has concentrated primarily on broader, full-body movement, whereas Runway's Act-One focuses on facial expressions.

With Act-One, Runway aims to make this complex process far more accessible. The new tool allows creators to animate characters in a variety of styles and designs, without motion-capture gear or character rigging.
Instead, users can rely on a simple driving video to transpose performances -- including eye-lines, micro-expressions and nuanced pacing -- onto a generated character, or even multiple characters in different styles. As Runway wrote on its X account: "Act-One is able to translate the performance from a single input video across countless different character designs and in many different styles." The feature is focused "mostly" on the face "for now," according to Cristóbal Valenzuela, co-founder and CEO of Runway, who responded to VentureBeat's questions via direct message on X.

Runway's approach offers significant advantages for animators, game developers and filmmakers alike. The model accurately captures the depth of an actor's performance while remaining versatile across different character designs and proportions, opening up possibilities for unique characters that express genuine emotion and personality.

Cinematic realism across camera angles

One of Act-One's key strengths is its ability to deliver cinematic-quality, realistic output from various camera angles and focal lengths. This flexibility enhances creators' ability to tell emotionally resonant stories through character performances that were previously hard to achieve without expensive equipment and multi-step workflows. The tool faithfully captures the emotional depth and performance style of an actor, even in complex scenes, allowing creators to bring their characters to life in new ways and unlocking the potential for richer storytelling across both live-action and animated formats.

While Runway previously supported video-to-video AI conversion, which allowed users to upload footage of themselves and have Gen-3 Alpha or earlier models such as Gen-2 "reskin" them with AI effects, the new Act-One feature is optimized for facial mapping and effects.
As Valenzuela told VentureBeat via DM on X: "The consistency and performance is unmatched with Act-One."

Enabling more expansive video storytelling

A single actor, using only a consumer-grade camera, can now perform multiple characters, with the model generating distinct outputs for each. This capability is poised to transform narrative content creation, particularly in indie film production and digital media, where high-end production resources are often limited. In a public post on X, Valenzuela noted a shift in how the industry approaches generative models: "We are now beyond the threshold of asking ourselves if generative models can generate consistent videos. A good model is now the new baseline. The difference lies in what you do with the model -- how you think about its applications and use cases, and what you ultimately build."

Safety and protection against public figure impersonation

As with all of Runway's releases, Act-One comes equipped with a comprehensive suite of safety measures. These include safeguards to detect and block attempts to generate content featuring public figures without authorization, as well as technical tools to verify voice usage rights. Continuous monitoring also aims to ensure the platform is used responsibly and to prevent misuse. Runway's commitment to ethical development aligns with its broader mission of expanding creative possibilities while maintaining a strong focus on safety and content moderation.

Looking ahead

As Act-One gradually rolls out, Runway is eager to see how artists, filmmakers and other creators harness the new tool to bring their ideas to life. With Act-One, complex animation techniques are now within reach for a broader audience of creators, enabling more people to explore new forms of storytelling and artistic expression.
By reducing the technical barriers traditionally associated with character animation, the company hopes to inspire new levels of creativity across the digital media landscape.
Runway AI Inc. launches Act-One, a groundbreaking feature that allows users to capture and replicate facial expressions on AI-generated video characters using smartphone cameras, potentially transforming the landscape of digital content creation.
Runway AI Inc., a prominent generative AI startup, has unveiled Act-One, a feature that promises to transform the creation of AI-generated video characters [1]. The tool allows users to capture and replicate facial expressions on AI-generated characters using nothing more than a smartphone camera, marking a significant advance in AI-driven content creation.
Traditionally, creating realistic facial expressions for digital characters has been a complex and expensive process, requiring sophisticated motion capture technologies and extensive manual editing. Act-One aims to democratize this process by making advanced facial animation accessible to a broader range of creators [2].
The tool works by allowing users to record themselves or actors using any video camera, including smartphone cameras. It then transfers the captured facial expressions to an AI-generated character with remarkable accuracy, replicating everything from micro-expressions to eye-lines across various character designs and styles [1].
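Neither article describes Runway's actual implementation, but the behavior they describe (expression offsets measured on a driving performance, re-applied to a differently proportioned character) can be illustrated with a toy sketch. The following is purely hypothetical 2D landmark arithmetic for intuition, not Runway's method or API:

```python
def transfer_performance(actor_neutral, actor_frames, char_neutral):
    """Map per-frame expression offsets from an actor onto a character.

    actor_neutral: list of (x, y) landmarks for the actor's neutral face.
    actor_frames:  list of frames, each a list of (x, y) actor landmarks.
    char_neutral:  list of (x, y) landmarks for the character's neutral face.
    Returns per-frame character landmarks.
    """
    # Scale offsets by the ratio of face widths so the motion fits the
    # character's proportions (width = span of landmark x-coordinates).
    def width(pts):
        xs = [p[0] for p in pts]
        return max(xs) - min(xs)

    scale = width(char_neutral) / width(actor_neutral)
    out = []
    for frame in actor_frames:
        # Offset of each landmark from the actor's neutral pose,
        # rescaled and added to the character's neutral pose.
        char_frame = [
            (cx + (fx - nx) * scale, cy + (fy - ny) * scale)
            for (nx, ny), (fx, fy), (cx, cy)
            in zip(actor_neutral, frame, char_neutral)
        ]
        out.append(char_frame)
    return out

# Tiny example: a two-landmark "mouth" that opens by 2 units on the actor
# opens by 1 unit on a character whose face is half as wide.
actor_neutral = [(0.0, 0.0), (4.0, 0.0)]
actor_frames = [[(0.0, 0.0), (4.0, 2.0)]]
char_neutral = [(0.0, 0.0), (2.0, 0.0)]
result = transfer_performance(actor_neutral, actor_frames, char_neutral)
```

A production system would of course operate on dense 3D face landmarks extracted per video frame and drive a learned generative model rather than raw coordinates; the sketch only captures the retargeting idea.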
Act-One is being integrated into Runway's existing Gen-3 Alpha video generation model, which supports text-to-video, image-to-video, and video-to-video modalities. While the feature is gradually rolling out to users, it requires sufficient credits to access the advanced Gen-3 Alpha model [1][2].
The introduction of Act-One could have far-reaching implications for various creative professionals:
Animators and Game Developers: The tool enables the creation of more unique and expressive characters without the need for expensive equipment [1].
Indie Filmmakers: Act-One allows for the generation of multiple animated characters with Hollywood-level realism using a single actor and consumer-grade camera [1].
Digital Content Creators: The feature opens up new possibilities for storytelling and artistic expression across both live-action and animated formats [2].
The initial response to Act-One has been overwhelmingly positive, with many creators hailing it as a game-changer in the field of AI-generated content [2]. This development comes at a time when the filmmaking industry is becoming increasingly receptive to the potential of generative AI, as noted by Runway co-founder and CEO Cristóbal Valenzuela [1].
Runway has implemented several safeguards to prevent misuse of the Act-One feature. These include measures to prevent the generation of content featuring public figures without authorization and tools to verify voice usage rights. The company has also committed to continuous monitoring to ensure responsible use by creators [1].
As AI continues to reshape the landscape of digital content creation, tools like Act-One are poised to play a crucial role in democratizing advanced animation techniques and expanding the creative possibilities for storytellers across various mediums.
Runway AI, a leader in AI-powered video generation, has launched an API for its advanced video model. This move aims to expand access to its technology, enabling developers and enterprises to integrate powerful video generation capabilities into their applications and products.
8 Sources
Runway introduces Gen-3 Alpha Turbo, an AI-powered tool that can turn selfies into action-packed videos. This advancement in AI technology promises faster and more cost-effective video generation for content creators.
2 Sources
Runway, a leading AI video generation company, has announced a $5 million fund to support up to 100 experimental films using its AI technology. This initiative aims to push the boundaries of filmmaking and explore new creative possibilities.
4 Sources
Meta introduces Movie Gen, an advanced AI model capable of generating and editing high-quality videos and audio from text prompts, potentially revolutionizing content creation for businesses and individuals.
46 Sources
A new AI-generated video featuring Tom Cruise has ignited a fierce debate about copyright and intellectual property in Hollywood, raising questions about the future of filmmaking and actor rights.
2 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2024 TheOutpost.AI All rights reserved