2 Sources
[1]
Rare look at how Hollywood is already harnessing AI
Studios are exploring how AI can replace CGI -- or hours in the makeup chair.

This is not your average vault. Instead of stacks of money or safety deposit boxes, there are cameras, lights and screens fastened together to create two giant orbs. It's called the CAAVault, and it's how one of the most powerful talent agencies in the country is seeking to preserve, protect and future-proof its clients in the age of artificial intelligence.

ABC News was given rare access to the world of how Hollywood is already harnessing AI, with the Creative Artists Agency providing a peek at its white-glove offering to digitally clone some of the biggest names in Hollywood and sports for the client to own and license for future projects.

"Think of it like a bank," CAA's head of strategic development, Alexandra Shannon, says. "If somebody now owns their digital likeness assets, anyone who chooses to do anything other than work with that individual, now there's a stronger case to show that they're infringing on their rights."

The CAAVault began in 2023, offering 3D and 4D capture of every client's movement, range of emotions and vocal inflection in a process that takes about three hours.

"Now somebody can show up in a Call of Duty game or in Fortnite or in any of these sorts of platforms," Shannon said. "You also look at what a company like Masterclass is doing with their on-call product. Sort of a virtual chatbot that is trained on a particular individual."

Chatbots made with an actor's likeness can be programmed to speak in multiple languages, increasing reach to a global fanbase, Shannon said. But don't expect licensing the digital double to come at a discount. "The value is in that person, in that individual ... It doesn't change the value of what that individual brings," Shannon says.

While tech companies are at the forefront of AI, respected creatives are finding ways to form their own ventures. In 2020, "South Park" co-creators Matt Stone and Trey Parker founded Deep Voodoo out of a desire to create a deepfake spoof of President Donald Trump. "They had developed this technology along with an incredible team of people and felt like it was too special not to share," Jennifer Howell, Deep Voodoo's chief creative officer, told ABC News.

In its own hands-on demonstration, ABC News saw a real-time preview of how AI can transform an actor's appearance in ways that would otherwise require elaborate makeup and prosthetics, eliminating hours in the makeup chair. Deep Voodoo said this refined, Snapchat-like filter would have been a huge relief for actors in films like "The Grinch" and "The Substance."

Deep Voodoo also showed off its de-aging technology, employed in real time for the Apple TV series "Before," starring Judith Light and Billy Crystal. Because actors can see the real-time preview, Deep Voodoo argues, the technology better informs acting choices.

The company's CEO, Afshin Beyzaee, said the difference between this technology and more expensive computer-generated imagery (CGI) is that with CGI, "it came off a little bit eerie, a little bit, some people call it uncanny, right? This notion that it looks almost right, but there's something that's off about it. And so the AI has been able to bridge the gap ... It looks natural, it looks realistic in a way that people can enjoy what they're watching."

Taking it a step further, this same technology could be used to make biopics appear more lifelike, merging the face of an actor with the person they're playing.
More attention came to the AI race in Hollywood recently when an AI avatar named Tilly Norwood made headlines as a potential "actress." The union SAG-AFTRA took issue with that designation, arguing acting is a uniquely human skill.

"We don't want it to be cheaper to use a synthetic performer. We want it to be the same," National Executive Director Duncan Crabtree-Ireland said. "And if it is the same, we believe -- I believe, certainly -- that human performers will win out because they bring something unique and special to those projects that can't be generated by an algorithm."

Crabtree-Ireland is already preparing for negotiations with the major studios again next year, with the 2023 strikes still fresh in people's minds. When asked whether studios were taking union concerns about how AI is used seriously, he said, "I think the companies are generally coming at this with a more careful and deliberate approach than I feared might happen."

ABC News asked Deep Voodoo whether using AI would lead to the loss of jobs like makeup artists, stunt doubles, storyboard artists and visual effects artists. The company argued it's just the opposite. "What this does is it allows stories and projects to happen that wouldn't otherwise happen," Beyzaee said. "Some of the people that we employ, those jobs didn't exist five years ago. AI artist jobs didn't exist."

The AI boom reached new heights with the release of Google's Veo 3 video generator and OpenAI's competing Sora app. Sora reached No. 1 on the U.S. Apple App Store as users initially flooded social media with fantastical images of copyrighted characters before the company added restrictions. In a hands-on demonstration at Google's headquarters in New York City, ABC News was shown how, with just a few sentences, Veo 3 could generate elaborate eight-second pieces of video and sound for far less than it would cost an army of graphic designers.

It was that very promise that drew "Black Swan" director Darren Aronofsky to partner with Google, creating AI production company Primordial Soup and enlisting director Eliza McNitt to helm their first short film, titled "Ancestra." The film, premiering at the Tribeca Film Festival, tells the story of McNitt's challenging birth; she was born with a hole in her heart.

"There are things that I wasn't able to achieve before, because I have a very big vision of creating the cosmos, and outer space, and having these images that are really difficult to achieve otherwise. And so AI was really integral to the process," McNitt told ABC News.

Using a special version of Veo 3, McNitt decided she would train the AI on images of herself as a child to eliminate the need for a real baby on set. "I felt it was actually quite unethical to shoot with a newborn baby," McNitt said. "We wanted to find a new innovative solution to that that felt very realistic. So in the movie, what you're seeing is actually me as a newborn baby."

The idea of ethics in AI has become a touchstone for the Asteria Film Company and Moonvalley, an ethical AI company co-created by actress Natasha Lyonne and her filmmaking boyfriend, Bryn Mooser.

"There's somewhat of this old adage that says things that are evil, magic and a tool when a new technology comes out," Mooser told ABC News. "It's evil to the people that are being disrupted. It's magic to those who own it. And in the end, it just becomes a tool. And so that's the system that we're seeing right now."

He says he tells filmmakers they can choose whether or not to use AI, but they must still learn about it.
Asteria, housed in a 110-year-old soundstage with a speakeasy in the basement, is home to what the company calls the Miray model, its own generative AI system, which is marketed as an ethical solution. The "commercially safe" model is trained to use only licensed material, so there's no fear of copyright infringement in what it creates. "It has all the controls, I think, that filmmakers were asking for and needs and certainly what our filmmakers here desired," Mooser says.

For now, your favorite actors aren't passing off their on-screen performances solely to their AI clones, but one strong indicator of eventual mainstream use came when Netflix confirmed the use of AI in its original series "El Eternauta."
[2]
Forget Sora and Veo, it's a little-known studio that could make AI movies a reality
Toy Story is the benchmark we're working towards, Utopai Studios says.

The biggest names in AI video belong to general-purpose generators like OpenAI's Sora 2 and Google's Veo. These text-to-video models are getting closer and closer to realism and prompt adherence, but they're still not much cop for movie making. With a decent prompt, they can generate convincing short clips, but achieving a coherent series of shots that tells a story is a different matter. A story needs continuity: characters, objects and scenes need to look the same from one shot to the next. Even more challenging, it needs to engage the audience, which requires direction and an absence of distracting artifacts.

A self-described "AI-native film and television studio" thinks it can achieve that. Utopai Studios gave me a sneak preview of the trailer for its upcoming AI movie Cortes, and, despite my doubts, what I saw blew me away.

Utopai Studios is a generative AI tech developer that's made the transition to content producer. Co-founder and CEO Cecilia Shen says the company realised that for AI video generation for TV and movies to take off, someone needed to prove that it could work, and that meant making the content itself. She compares the approach to that of Pixar with 3D animation. "When people see this impressive content, they will want to use it," Cecilia says. "That's a better way to drive the cinema industry."

I've only seen three minutes of a work-in-progress trailer for Cortes, a period epic based on a screenplay by Nicholas Kazan that follows the Spanish conquistador Hernán Cortés's invasion of the Aztec Empire. It was far from perfect. There were moments when characters' lips weren't in sync with what they were saying, which was distracting. But characters looked consistent across shots, transitions made sense, and the whole piece was more convincing than many shorter sequences of AI-generated video that I've seen.

The film is the result of Utopai's own AI workflow, which it says was trained specifically to understand the filmmaking process itself: reading scripts, interpreting story arcs, supporting directors with shot planning, and generating scenes that follow narrative intent. It says its models were trained on licensed cinematic works, emphasising story, performance, and continuity.

"The problem we're focusing on is storytelling," Cecilia says. "Veo and Sora are only focusing on one clip, so the clip is good, but the storytelling, when it comes to continuities, consistencies, good cinematic positions, is bad."

"Eyes are so important," she adds. "If the faces are changing and eyes aren't looking in the right place, we can't connect with a performance anymore. Our in-house model is on the way to solving that problem."

Utopai says it can't reveal much about its workflow at the moment, but the company was originally a 3D modeling company called Cybever, and it says it already has models that it's been building internally for years and has been able to repurpose. It also says its model has been trained only on licensed material chosen for its relevance to professional production.

For now, Utopai is only using the tech in house to make Cortes as well as a sci-fi TV series. Where these will eventually be shown is still up in the air. The company has also announced a multibillion-dollar joint venture with Stock Farm Road, a global innovation and investment venture co-founded by Brian Koo, grandson of the founder of LG Group.
By keeping the workflow exclusive, Utopai says it aims to show that AI can support professional TV and filmmaking responsibly without competing with the directors, writers, and crews it is built to work alongside. Restricting its use also ensures the technology is applied within licensed, controlled environments, which the company sees as a more ethical path than releasing unrestricted models into the market.

Cecilia says there were some challenges in moving from tech developer to filmmaker. "The entire team needs to respect tech and respect content," she says. "Usually these two groups don't understand each other. Silicon Valley is naturally more open minded than Hollywood."

When I asked how convincing, or how non-distracting, Utopai thinks the output needs to be for audiences to engage with an AI-generated movie, Cecilia again brought up Pixar as an example. "Our goal is to create something that feels emotionally and visually compelling enough for audiences to stay fully immersed in the story. A good comparison is Toy Story: when you watch it, you're not focused on what's real versus simulated; you're drawn in because you care about the characters and the world.

"That's the creative benchmark we're working toward, where intuition and emotion guide the experience as much as visual quality. The visuals exist to serve the narrative and sustain that emotional connection for the full length of a feature film."

She thinks there are still several months of work to go before Cortes reaches that point. The team goes back and adjusts the script so that the model can readjust and produce new frames. Speed is not necessarily the main priority. Although Cecilia thinks films and series will generally be faster to make, "there isn't a clear cut number like this film will take half as much time as it would without AI."

Utopai admits that it's unknown how audiences will react, and stresses that this makes it important to communicate what the goal is, which Cecilia says "isn't to necessarily replace anyone."

I still have doubts, both about how watchable a full AI-generated movie will be, and how desirable. Utopai says its aim is to make unmakable movies, but the fact that a movie hasn't been made doesn't necessarily mean it's unfilmable. Amazon planned to make a series on the story of Cortés and Moctezuma but cancelled it because of the Covid-19 pandemic and allegations against co-director Ciro Guerra. But Utopai Studios' Cortes trailer is the first example I've seen that makes me think we will see full-length AI-generated movies with coherence between shots within a few years rather than decades. It plans to screen the Cortes trailer publicly for the first time at AFM next week.
Major Hollywood studios and emerging companies are pioneering AI applications in filmmaking, from creating digital actor clones to producing entire AI-generated movies, while navigating union concerns and technical challenges.
Major Hollywood talent agencies are investing heavily in artificial intelligence to protect and monetize their clients' digital likenesses. The Creative Artists Agency (CAA) has developed the CAAVault, a sophisticated 3D and 4D capture system that preserves every aspect of a performer's appearance, movement, and vocal patterns in a three-hour process that began operations in 2023 [1]. "Think of it like a bank," explains CAA's head of strategic development, Alexandra Shannon. "If somebody now owns their digital likeness assets, anyone who chooses to do anything other than work with that individual, now there's a stronger case to show that they're infringing on their rights" [1]. These digital doubles can appear in video games like Call of Duty and Fortnite, or power multilingual chatbots that expand stars' global reach without compromising their market value.

Creative teams are developing AI solutions that dramatically reduce production time and costs. Deep Voodoo, founded in 2020 by "South Park" creators Matt Stone and Trey Parker, has pioneered real-time AI transformation technology that eliminates the need for extensive makeup and prosthetics [1]. The company's technology functions like an advanced Snapchat filter, providing instant character transformations that would have benefited productions like "The Grinch" and "The Substance." Deep Voodoo has already implemented its de-aging technology in Apple TV's series "Before," starring Judith Light and Billy Crystal, allowing actors to see real-time previews that inform their performance choices.

CEO Afshin Beyzaee emphasizes the improvement over traditional CGI: "It came off a little bit eerie, a little bit, some people call it uncanny, right? This notion that it looks almost right, but there's something that's off about it. And so the AI has been able to bridge the gap... It looks natural, it looks realistic in a way that people can enjoy what they're watching" [1].

While general-purpose AI video generators like OpenAI's Sora and Google's Veo can create impressive short clips, they struggle with the continuity required for feature-length storytelling. Utopai Studios, a self-described "AI-native film and television studio," is addressing this challenge by developing specialized AI workflows trained specifically for filmmaking [2]. Co-founder and CEO Cecilia Shen compares their approach to Pixar's pioneering work with 3D animation: "When people see this impressive content, they will want to use it. That's a better way to drive the cinema industry" [2]. Utopai's upcoming film "Cortes," a period epic following Spanish conquistador Hernán Cortés's invasion of the Aztec Empire, demonstrates their progress. Despite some synchronization issues, early footage shows consistent character appearances across shots and coherent narrative transitions. The company's models were trained exclusively on licensed cinematic works, emphasizing story, performance, and continuity rather than individual clip quality.
The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) has expressed concerns about AI's impact on human performers. When AI avatar "Tilly Norwood" was marketed as a potential "actress," the union argued that acting remains a uniquely human skill [1]. National Executive Director Duncan Crabtree-Ireland stated: "We don't want it to be cheaper to use a synthetic performer. We want it to be the same. And if it is the same, we believe -- I believe, certainly -- that human performers will win out because they bring something unique and special to those projects that can't be generated by an algorithm" [1].

However, AI companies argue their technology creates new opportunities rather than eliminating jobs. Deep Voodoo's Beyzaee contends: "What this does is it allows stories and projects to happen that wouldn't otherwise happen. Some of the people that we employ, those jobs didn't exist five years ago. AI artist jobs didn't exist" [1].
Summarized by Navi