AI resurrections of dead celebrities spark ethical debate over digital likeness control

Reviewed by Nidhi Govil


OpenAI's Sora tool has unleashed hyper-realistic AI-generated videos depicting deceased figures like Queen Elizabeth II and Martin Luther King Jr., triggering backlash from families and experts. While some clips amuse viewers, others raise serious concerns about consent, misinformation, and the erosion of trust in genuine news as synthetic content spreads unchecked across social media.

OpenAI's Sora Unleashes Wave of AI Resurrections

OpenAI's Sora, launched in September and quickly labeled a deepfake machine, has triggered an unprecedented wave of hyper-realistic AI-generated videos featuring dead celebrities and historical figures [1]. The easy-to-use tool, powered by OpenAI's Sora 2 model, has produced synthetic videos of figures ranging from Queen Elizabeth II and Winston Churchill to Michael Jackson and Elvis Presley. In one TikTok clip, the late queen arrives at a wrestling match on a scooter, climbs a fence, and leaps onto a wrestler; another Facebook video shows her praising "delightfully orange" cheese puffs in a supermarket aisle [2]. These AI resurrections have spread rapidly across social media platforms, creating a parallel reality in which deceased public figures perform actions they never took in life.

Source: Tech Xplore

Families Condemn Disrespectful Posthumous Depictions

Not all AI-generated content has prompted laughs. The children of Robin Williams, George Carlin, and Malcolm X have publicly condemned the use of Sora to create synthetic videos of their fathers [1]. Zelda Williams, daughter of the late actor, pleaded on Instagram for people to "stop sending me AI videos of dad," calling the content "maddening." The situation escalated in October when OpenAI blocked users from creating videos of Martin Luther King Jr. after his estate complained about disrespectful depictions, including clips showing the civil rights icon making monkey noises during his celebrated "I Have a Dream" speech [2]. These incidents illustrate how freely users can appropriate public figures' images, raising urgent questions about who controls the digital likenesses of the deceased.

Ethical Debate Surrounding AI Intensifies

The ethical concerns extend beyond celebrity estates to broader societal implications. "We're getting into the 'uncanny valley,'" said Constance de Saint Laurent, a professor at Ireland's Maynooth University, referring to the phenomenon in which artificial objects become so human-like that they trigger unease [1]. "If suddenly you started receiving videos of a deceased family member, this is traumatizing. These videos have real consequences," she told AFP. An OpenAI spokesman acknowledged that while there are "strong free speech interests in depicting historical figures," public figures and their families should have ultimate control over their likeness. For recently deceased figures, authorized representatives or estate owners can now request that their likeness not be used in Sora [2].

Safeguards for Deceased Figures Prove Inadequate

Experts argue that OpenAI's safeguards remain insufficient. "Despite what OpenAI says about wanting people to control their likeness, they have released a tool that decidedly does the opposite," said Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley [1]. "While they mostly stopped the creation of MLK Jr. videos, they are not stopping users from co-opting the identity of many other celebrities." Farid warned that even with protective measures for Martin Luther King Jr., other AI models will emerge without such restrictions, making the problem worse. That reality was underscored when AFP fact-checkers uncovered AI-generated clips using Hollywood director Rob Reiner's likeness that spread online after his alleged murder this month [2].

Widespread Misinformation Threatens Social Media Trust

As advanced AI tools proliferate, the vulnerability extends beyond public figures to deceased non-celebrities, whose names, likenesses, and words may be repurposed for synthetic manipulation [1]. Researchers warn that the unchecked spread of synthetic content, widely called "AI slop," could ultimately drive users away from social media. The erosion of trust in genuine news poses a more insidious threat than misinformation itself. "The issue with misinformation in general is not so much that people believe it. A lot of people don't," said Saint Laurent. "The issue is that they see real news and they don't trust it anymore. And this Sora is going to massively increase that" [2]. As AI capabilities advance, the line between authentic and synthetic content continues to blur, demanding urgent attention to likeness rights, consent frameworks, and platform accountability in the age of AI.
