40 Sources
[1]
OpenAI pauses Sora video generations of Martin Luther King Jr. | TechCrunch
OpenAI announced Thursday it paused the ability for users to generate videos resembling the late civil rights activist Martin Luther King Jr. using its AI video model, Sora. The company says it's adding this safeguard at the request of Dr. King's estate after some Sora users generated "disrespectful depictions" of his image. "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," OpenAI said in a post on X from its official newsroom account. "Authorized representatives or estate owners can request that their likeness not be used in Sora cameos." The restriction comes just a few weeks after OpenAI launched its social video platform, Sora, which allows users to create realistic AI-generated videos resembling historical figures, their friends, and users who elect to have their likeness recreated on the platform. The launch has stirred fervent public debate around the dangers of AI-generated videos, and how platforms should implement guardrails around the technology. Dr. Bernice King, Dr. King's daughter, posted on Instagram last week asking people to stop sending her AI videos resembling her father. She joined Robin Williams' daughter, who also asked Sora users to stop generating AI videos of her father. The Washington Post reported earlier this week that Sora users had created AI-generated videos of Dr. King making monkey noises and wrestling with another civil rights icon, Malcolm X. Scrolling through OpenAI's Sora app, it's easy to find crude videos resembling other historical figures, including artist Bob Ross, singer Whitney Houston, and former President John F. Kennedy. The licensor of Dr. King's estate did not immediately respond to TechCrunch's request for comment. 
Beyond how Sora represents humans, the launch has also raised a flurry of questions around how social media platforms should handle AI videos of copyrighted works. The Sora app is also full of videos depicting cartoons like SpongeBob, South Park, and Pokémon. OpenAI has added other restrictions to Sora in weeks since its launch. Earlier in October, the company said it planned to give copyright holders more granular control over the types of AI videos that can be generated with their likeness. That may have been a response to Hollywood's initial reaction to Sora, which was not great. As OpenAI adds restrictions to Sora, the company seems to be taking a more hands-off approach to moderating content in ChatGPT. OpenAI announced this week that it would allow adult users to have "erotic" chats with ChatGPT in the coming months. With Sora, it seems that OpenAI is grappling with the concerns that come along with AI video generation. Some OpenAI researchers publicly wrestled with questions about the company's first AI-powered social media platform in the days after its launch, and how such a product fits into the nonprofit's mission. OpenAI CEO Sam Altman said the company felt "trepidation" about Sora on launch day. Nick Turley, the head of ChatGPT, told me earlier this month that the best way to teach the world about a new technology is putting it out in the world. He said that's what the company learned with ChatGPT, and that's what OpenAI is finding with Sora, too. It seems the company is learning something about how to distribute this technology, as well.
[2]
Sora Changed the Deepfake Game. Can You Tell Whether a Video Is Real or AI?
If you're even a little bit online, the odds are you've seen an image or video that was AI-generated. I know I've been fooled before, like I was by that viral video of bunnies on a trampoline. But Sora is taking AI videos to a whole new level, making it more important than ever to know how to spot AI. Sora is the sister app of ChatGPT, made by the same company, OpenAI. It's named after its AI video generator, which launched in 2024. But it recently got a major overhaul with a new Sora 2 model, along with a brand-new social media app by the same name. The TikTok-like app went viral, with AI enthusiasts determined to hunt down invite codes. But it isn't like any other social media platform. Everything you see on Sora is fake; all the videos are AI-generated. Using Sora is an AI deepfake fever dream: innocuous at first glance, with dangerous risks lurking just beneath the surface. From a technical standpoint, Sora videos are impressive compared to competitors such as Midjourney's V1 and Google's Veo 3. Sora videos have high resolution, synchronized audio and surprising creativity. Sora's most popular feature, dubbed "cameo," lets you use other people's likenesses and insert them into nearly any AI-generated scene. It's an impressive tool, resulting in scarily realistic videos. That's why so many experts are concerned about Sora, which could make it easier than ever for anyone to create deepfakes, spread misinformation and blur the line between what's real and what's not. Public figures and celebrities are especially vulnerable to these potentially dangerous deepfakes, which is why unions like SAG-AFTRA pushed OpenAI to strengthen its guardrails. Identifying AI content is an ongoing challenge for tech companies, social media platforms and all of us who use them. But it's not totally hopeless. 
Here are some things to look out for to identify if a video was made using Sora. Every video made on the Sora iOS app includes a watermark when you download it. It's the white Sora logo -- a cloud icon -- that bounces around the edges of the video. It's similar to the way TikTok videos are watermarked. Watermarking content is one of the biggest ways AI companies can visually help us spot AI-generated content. Google's Gemini "nano banana" model, for example, automatically watermarks its images. Watermarks are great because they serve as a clear sign that the content was made with the help of AI. But watermarks aren't perfect. For one, if the watermark is static (not moving), it can easily be cropped out. Even for moving watermarks like Sora's, there are apps designed specifically to remove them, so watermarks alone can't be fully trusted. When OpenAI CEO Sam Altman was asked about this, he said society will have to adapt to a world where anyone can create fake videos of anyone. Of course, prior to OpenAI's Sora, there wasn't a popular, easily accessible, no-skill-needed way to make those videos. But his argument raises a valid point about the need to rely on other methods to verify authenticity. I know, you're probably thinking that there's no way you're going to check a video's metadata to determine if it's real. I understand where you're coming from; it's an extra step, and you might not know where to start. But it's a great way to determine if a video was made with Sora, and it's easier to do than you think. Metadata is a collection of information automatically attached to a piece of content when it's created. It gives you more insight into how an image or video was created. It can include the type of camera used to take a photo, the location, date and time a video was captured and the filename. Every photo and video has metadata, no matter whether it was human- or AI-created. 
And a lot of AI-created content will have content credentials that denote its AI origins, too. OpenAI is part of the Coalition for Content Provenance and Authenticity (C2PA), which means that Sora videos include C2PA metadata. You can use the Content Authenticity Initiative's verification tool to check a video, image or document's metadata. (The Content Authenticity Initiative is an Adobe-led group closely affiliated with the C2PA.) How to check a photo, video or document's metadata:
1. Navigate to this URL: https://verify.contentauthenticity.org/
2. Upload the file you want to check.
3. Click Open.
4. Check the information in the right-side panel. If it's AI-generated, it should include that in the content summary section.
When you run a Sora video through this tool, it'll say the video was "issued by OpenAI" and will include the fact that it's AI-generated. All Sora videos should contain these credentials, which let you confirm a video was created with Sora. This tool, like all AI detectors, isn't perfect. There are plenty of ways AI videos can avoid detection. Non-Sora videos may not contain the necessary signals in their metadata for the tool to determine whether they're AI-created; AI videos made with Midjourney, for example, don't get flagged, as I confirmed in my testing. And even if a video was created with Sora, running it through a third-party app (like a watermark remover) and redownloading it makes it less likely the tool will flag it as AI. If you're on one of Meta's social media platforms, like Instagram or Facebook, you may get a little help determining whether something is AI. Meta has internal systems in place to help flag AI content and label it as such. These systems aren't perfect, but you can clearly see the label for posts that have been flagged. TikTok and YouTube have similar policies for labeling AI content. The only truly reliable way to know if something is AI-generated is if the creator discloses it. 
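For the curious, here's roughly where those content credentials live. In MP4 files, C2PA manifests are carried in a top-level "uuid" box of the ISO-BMFF container, so the first step of any homegrown checker is simply walking the container's box structure. The sketch below (plain Python on synthetic bytes, not a real C2PA validator) shows that walk; actually verifying the signed manifest requires a full C2PA implementation such as the open-source c2pa tooling, and the helper names here are my own.

```python
import struct

def list_mp4_boxes(data: bytes):
    """Return the top-level box types in an ISO-BMFF (MP4) byte stream."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        # Each box starts with a 4-byte big-endian size and a 4-byte type.
        size, = struct.unpack(">I", data[offset:offset + 4])
        box_type = data[offset + 4:offset + 8].decode("ascii", errors="replace")
        if size == 1:  # 64-bit extended size stored after the type field
            if offset + 16 > len(data):
                break
            size, = struct.unpack(">Q", data[offset + 8:offset + 16])
        boxes.append(box_type)
        if size == 0:  # by convention, the box extends to the end of the file
            break
        if size < 8:   # malformed box header; stop scanning
            break
        offset += size
    return boxes

def may_carry_c2pa(data: bytes) -> bool:
    # C2PA manifests in MP4 live in a top-level 'uuid' box holding a JUMBF
    # payload, so the presence of a 'uuid' box is a hint worth inspecting.
    # (Checking the specific C2PA UUID and validating the signature needs a
    # real C2PA library; this is only a first-pass filter.)
    return "uuid" in list_mp4_boxes(data)
```

A stripped-down scanner like this only tells you a credential *might* be present; the Verify tool mentioned above does the real work of checking who issued it and whether the signature is intact.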
Many social media platforms now offer settings that let users label their posts as AI-generated. Even a simple credit or disclosure in your caption can go a long way to help everyone understand how something was created. You know while you're scrolling Sora that nothing is real. But once you leave the app and share AI-generated videos, it's our collective responsibility to disclose how a video was created. As AI models like Sora continue to blur the line between reality and AI, it's up to all of us to make it as clear as possible when something is real or AI. There's no one foolproof method to accurately tell from a single glance if a video is real or AI. The best thing you can do to prevent yourself from being duped is to not automatically, unquestioningly believe everything you see online. Follow your gut instinct -- if something feels unreal, it probably is. In these unprecedented, AI-slop-filled times, your best defense is to inspect the videos you're watching more closely. Don't just quickly glance and scroll away without thinking. Check for mangled text, disappearing objects and physics-defying motions. And don't beat yourself up if you get fooled occasionally; even experts get it wrong. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
[3]
Are Sora 2 and other AI video tools risky to use? Here's what a legal scholar says
Generative video could democratize art or destroy it entirely. OpenAI's Sora 2 generative AI video creator has been out for about two weeks, and already it's causing an uproar. You get the idea. This is the inevitable outcome when you give humans the opportunity to create anything they want with very little effort. We are twisted and easily amused people. Human nature is like that. First, slightly less mature individuals will start thinking, "Hmm. What can I do with that? Let's make something odd or weird to give me some LOLs." The inevitable result will be inappropriate themes or videos that are just so wrong on many levels. Then, the unscrupulous start to think: "Hmm. I think I can get some mileage out of that. I wonder what I can do with it?" These folks might generate an enormous amount of AI slop for profit, or use a known spokesperson to generate some sort of endorsement. This is the natural evolution of human nature. When a new capability is presented to a wide populace, it will be misused for amusement, profit, and perversity. No surprise there. Here, let me demonstrate: I found a video of OpenAI CEO Sam Altman on the Sora 2 Explore page. In the video, he's saying that "PAI3 gives you the AI experience that OpenAI cannot." PAI3 is a decentralized, privacy-oriented AI network company. So, I clicked the remix button right on the Sora site and created a new video. Here's a screenshot of both of them side-by-side. If you have a ChatGPT Plus account, you can watch these videos on Sora: Sam on left | Sam on right. To get Altman's endorsement, all I had to do was feed Sora 2 this prompt: This guy saying "My name is Sam and I need to tell you. ZDNET is the place to go for the latest AI news and analysis. I love those folks!" He's now wearing an electric green T-shirt and has bright blue hair. It took about five minutes, after which the CEO of OpenAI was singing ZDNET's praises. 
But let's be clear. This video is presented solely as an editorial example to showcase the technology's capability. We do not represent that Mr. Altman actually has blue hair or a green T-shirt. It's also not fair for us to mind-read about the man's fondness for ZDNET, although, hey, what's not to like? In this article, we'll examine three key issues surrounding Sora 2: legal and rights issues, the impact on creativity, and the newest challenge in distinguishing reality from deepfakes. Oh, and stay with us: We're concluding with a very interesting observation from OpenAI's rep that tells us what they really think about human creativity. When Sora 2 was first made available, there were no guardrails. Users could ask the AI to create anything. In less than five days, the app hit over a million downloads and soared to the top of the iPhone app store listings. Nearly everyone who downloaded Sora created instant videos, resulting in the branding and likeness Armageddon I discussed above. On September 29, The Wall Street Journal reported that OpenAI had started contacting Hollywood rights holders, informing them of the impending release of Sora 2 and letting them know they could opt out if they didn't want their IP represented in the program. As you might imagine, this did not go over well with brand owners. Altman responded to the dust-up with a blog post on October 3, stating, "We will give rights holders more granular control over generation of characters." Still, even after Altman's statement of contrition, rights holders were not satisfied. On October 6, for example, the Motion Picture Association (MPA) issued a brief but firm statement. 
According to Charles Rivkin, Chairman and CEO of the MPA, "Since Sora 2's release, videos that infringe our members' films, shows, and characters have proliferated on OpenAI's service and across social media." Rivkin continued, "While OpenAI clarified it will 'soon' offer rightsholders more control over character generation, they must acknowledge it remains their responsibility -- not rightsholders' -- to prevent infringement on the Sora 2 service. OpenAI needs to take immediate and decisive action to address this issue. Well-established copyright law safeguards the rights of creators and applies here." OpenAI also responded to complaints from actor Bryan Cranston and SAG-AFTRA this week after users created videos with his likeness. It's unclear whether the company will just react to piecemeal flags like this from individuals into eternity or create a blanket guardrail to address them. Regardless, I can attest that there are now definitely some guardrails in place. I tried to get Sora to give me Patrick Stewart fighting Darth Vader and any ol' X-wing starfighter attacking the Death Star, and both prompts were immediately rejected with the note, "This content may violate our guardrails concerning third-party likeness." When I reached out to the MPA for a follow-up comment based on my experience, John Mercurio, executive vice president, Global Communications, told ZDNET via email, "At this point, we aren't commenting beyond our statement from October 6." OpenAI is clearly aware of these issues and concerns. When I reached out to the company via their PR representatives, I was pointed to OpenAI's Sora 2 System Card. This is a six-page, public-facing document that outlines Sora 2's capabilities and limitations. The company also provided two other resources worth reading. Across these documents, OpenAI describes five main themes regarding safety and rights. Who owns what, and who's to blame? 
When I put these questions to my OpenAI PR contact, I was told, "What I passed along is the extent of what we can share right now." So I turned to Sean O'Brien, founder of the Yale Privacy Lab at Yale Law School. O'Brien told me, "When a human uses an AI system to produce content, that person, and often their organization, assumes liability for how the resulting output is used. If the output infringes on someone else's work, the human operator, not the AI system, is culpable." O'Brien continued, "This principle was reinforced recently in the Perplexity case, where the company trained its models on copyrighted material without authorization. The precedent there is distinct from the authorship question, but it underlines that training on copyrighted data without permission constitutes a legally cognizable act of infringement." Now, here's what should worry OpenAI, regardless of their guardrails, system card, and feed philosophy. Yale's O'Brien summed it up with devastating clarity, "What's forming now is a four-part doctrine in US law. First, only human-created works are copyrightable. Second, generative AI outputs are broadly considered uncopyrightable and 'Public Domain by default.' Third, the human or organization utilizing AI systems is responsible for any infringement in the generated content. And, finally, training on copyrighted data without permission is legally actionable and not protected by ambiguity." The interesting thing about creativity is that it's not just about imagination. In Webster's, the first definition of creating is "to bring into existence." Another definition is "to produce or bring about by a course of action or behavior." And yet another is "to produce through imaginative skill." None of these limits the medium used to, say, oil paints or a film camera. They are all about manifesting something new. 
I think about this a lot, because back when I took nature photos on film, my images were just OK. I spent a lot on chemical processing and enlarging, and was never satisfied. But as soon as I got my hands on Photoshop and a photo printer, my pictures became worthy of hanging on the wall. My imaginative skill wasn't just photography. It was the melding of pointing the camera, capturing 1/250th of a second on film, and then bringing it to life through digital means. The question of creativity is particularly challenging in the world of generative AI. The US Copyright Office contends that only human-created works can be copyrighted. But where is the line between the tool, the medium, and the human? Take Oblivious, a painting I "made" with the help of Midjourney's generative AI and Photoshop's retouching skills. The composition of elements was entirely my imagination, but the tools were digital. Bert Monroy wrote the first book on Photoshop. He uses Photoshop to create amazing photorealistic images. But he doesn't take a photo and retouch it. Instead, pixel by pixel, he creates entirely new images that appear to be photographs. He uses the medium to explore his amazing skills and creativity. Is that human-made, or, just because Photoshop controls the pixels, is it unworthy of copyright? I asked Monroy for his thoughts about generative AI and creativity. He told me this: "I have been a commercial illustrator and art director for most of my life. My clients had to pay for my work, a photographer, models, stylists, and, before computers, retouchers, typesetters and mechanical artists to put it all together. Now AI has come into play. The first thought that comes to my mind is how glad I am that I gave up commercial art years ago. 
"Now, with AI, the client has to think of what they want and write a prompt and the computer will produce a variety of versions in minutes with NO cost except for the electricity to run the computer. There's a lot of talk about how many jobs will be taken over by AI; well, it looks like the creative fields are being taken over." Sora 2 is the harbinger of the next step in the merging of imagination and digital creativity. Yes, it can reproduce people, voices, and objects with disturbing and amazing fidelity. But once we accepted that the way we use tools and media is part of artistic expression, we agreed as a society that art and creativity extend beyond manual dexterity. There is an issue here related to both skill and exclusivity. AI tools democratize access to creative output, allowing those with little or no skill to produce creative works rivaling those of people who have spent years honing their craft. In some ways, this upheaval isn't about cramping creativity. It's about democratizing skills that some people spent lifetimes developing and that they use to make their living. That is of serious concern. I make my living mostly as a writer and programmer. Both of these fields are enormously threatened by generative AI. But do we limit new tools to protect old trades? Monroy's work is incredible, but until you realize all his artwork is hand-painted in Photoshop, you'd be hard-pressed not to think it was a photograph by a talented photographer. Work that takes Monroy months might take a random user with a smartphone minutes to capture. But it's the fact that Monroy uses the medium in a creative way that makes all his work so incredibly impressive. Maly Ly has served as chief marketing officer at GoFundMe, global head of growth and engagement at Eventbrite, promotions manager at Nintendo, and product marketing manager at Lucasfilm. 
She held similar roles at storied game developers Square Enix and Ubisoft. Today, she's the founder and CEO of Wondr, a consumer AI startup. Her perspective is particularly instructive in this context. She says, "AI video is forcing us to confront an old question with new stakes: Who owns the output when the inputs are everything we've ever made? Copyright was built for a world of scarcity and single authorship, but AI creates through abundance and remix. We're not seeing creativity stolen; we're seeing it multiply." The fact that generative AI is eliminating the scarcity of skills is terrifying to those of us who have made our identities about having those skills. But where Sora and generative AI start to go wrong is when they train on the works of creatives and then serve them up as if they were new works, effectively stealing the works of others. This is a huge problem for Sora. Ly has an innovative suggestion: "The real opportunity isn't protection, it's participation. Every artist, voice, and visual style that trains or inspires a model should be traceable and rewarded through transparent value flows. The next copyright system will look less like paperwork and more like living code -- dynamic, fair, and built for collaboration." Unfortunately, she's pinning her hopes for an updated and relevant copyright system on politicians. But still, she does see an overall upside to AI, which is refreshing amid all the scary talk we've been having. She says, "If we get this right, AI video could become the most democratizing storytelling medium in history, creating a shared and accountable creative economy where inspiration finally pays its debts." Another societal challenge arising from the introduction of new technologies is how they change our perception of reality. Heck, there's an entire category of tech oriented around augmented, mixed, and virtual reality. 
Probably the single most famous example of reality distortion due to technology occurred at 8 p.m. New York time on Oct. 30, 1938. World War II hadn't yet officially begun, but Europe was in crisis. In March, Germany annexed Austria without firing a shot. In September, Britain and France signed the Munich Agreement, which allowed Hitler to take part of what was then Czechoslovakia. Japan had invaded China the previous year. Italy, under Mussolini, had invaded Ethiopia in 1935. The idea of invasion was on everyone's mind. Into that atmosphere, a 23-year-old Orson Welles broadcast a modernized version of H.G. Wells' War of the Worlds on CBS Radio in New York City. There were disclaimers broadcast at the beginning of the show (think of them like the Sora watermarks on the videos), but people tuning in after the start thought they were listening to the news, and that an actual Martian invasion was taking place in Grovers Mill, New Jersey. When images, audio, or video are used to misrepresent reality, particularly for a political or nefarious purpose, they're called deepfakes. Obviously, movies like Star Wars and TV shows like Star Trek present fantastical realities, but everyone knows they're fiction. When deepfakes are used to push an agenda or damage someone's reputation, though, they become much harder to accept. And, as The Washington Post reported via MSN, twisted deepfakes of dead celebrities are deeply painful to their families. In the article, Robin Williams' daughter Zelda is quoted as saying, "Stop sending me AI videos of dad...To watch the legacies of real people be condensed down to ... horrible, TikTok slop puppeteering them is maddening." Many AI tools prevent users from uploading images and clips of real people, although there are fairly easy ways to get around those limitations. 
The companies are also embedding provenance clues in the digital media itself to flag images and videos as AI-created. But will these efforts block deepfakes? Once again, this is not a new problem. Irish photo restoration artist Neil White documents examples of faked photos from way before Photoshop or Sora 2. There's an 1864 photo of General Ulysses S. Grant on a horse in front of troops that's entirely fabricated, and a 1930 photo of Stalin where he had his enemies airbrushed out. Wackiest of all is a 1939 picture of the Canadian prime minister with Queen Elizabeth (the mother of Elizabeth II, the monarch we're most familiar with). Apparently, the PM thought it would be more politically advantageous to be seen on a poster just with the queen, so he had King George VI airbrushed out. In other words, the problem's not going away. We'll all have to use our inner knowing and highly tuned BS detectors to red-flag images and videos that are most likely fabricated. Still, it was fun making OpenAI's CEO have blue hair and sing ZDNET's praises. Attorney Richard Santalesa, a founding member of the SmartEdgeLaw Group, focuses on technology transactions, data security, and intellectual property matters. He told ZDNET, "Sora 2 most notably highlights the push and tug between creation and safeguarding of existing IP and copyright law. The opt-out, opt-in issue is fascinating because it's really applying the privacy notice and consent framework to AI creation, which is somewhat unique. And I think this is why OpenAI was caught on their back foot." He explains why the company, with its very deep pockets, may well be the target of a flood of litigation. "Copyright grants the owner various exclusive rights under US copyright law, including the creation of derivative (but not necessarily transformative) works. 
All of these terms are legal terms of art, which matter practically but not always in the real world. Fair use gets a lot of attention, but as to use of specific owner copyrighted figures, my take is that only parody or pure news uses would be exempt from copyright liability regarding Sora 2 output on those fronts." Santalesa did point out one factor in OpenAI's favor. "Sora 2 app's Terms of Use expressly prohibit users from 'use of our Services in a way that infringes, misappropriates or violates anyone's right.' While this prohibition is pretty standard in online ToU and acceptable use policies, it does highlight that the actual user has their own responsibilities and obligations with regard to copyright compliance." As Santalesa says, "The genie is out of the bottle and won't be stuffed back in. The issue is how to manage and control the genie." What about the statement I promised you from OpenAI's PR rep? I'll leave you with that as a final thought. He says, "OpenAI's video generation tools are designed to support human creativity, not replace it, helping anyone explore ideas and express themselves in new ways." What about you? Have you experimented with Sora 2 or other AI video tools? Do you think creators should be held responsible for what the AI generates, or should the companies behind these tools share that liability? How do you feel about AI systems using existing creative works to train new ones? Does that feel like theft or evolution? And do you believe generative video is expanding creativity or eroding authenticity? Let us know in the comments below.
[4]
OpenAI halts MLK deepfakes on Sora
OpenAI's changing stance on historical figures echoes its approach to copyright when Sora first launched. That strategy proved controversial, and the platform mounted an embarrassing U-turn to an "opt-in" policy for rightsholders after it was inundated with depictions of characters like Pikachu, Rick and Morty, and SpongeBob SquarePants. Unlike copyright, there's no federal framework for protecting people's likeness, but a variety of state laws let people sue over unauthorized use of a living person's image -- and in some states, a deceased person's as well. California, where OpenAI is based, for example, has specifically said postmortem privacy rights apply to AI replicas of performers. As for living people, OpenAI has from the start let users opt in to appearing in videos by making AI clones of themselves.
[5]
OpenAI Says It's Working With Actors to Crack Down on Celebrity Deepfakes in Sora
OpenAI said Monday it would do more to stop users of its AI video generation app Sora from creating clips with the likenesses of actors and other celebrities after actor Bryan Cranston and the union representing film and TV actors raised concerns that deepfake videos were being made without the performers' consent. Actor Bryan Cranston, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and several talent agencies said they struck a deal with the ChatGPT maker over the use of celebrities' likenesses in Sora. The joint statement highlights the intense conflict between AI companies and rights holders like celebrities' estates, movie studios and talent agencies -- and how generative AI tech continues to erode reality for all of us. Sora, a new sister app to ChatGPT, lets users create and share AI-generated videos. It launched to much fanfare three weeks ago, with AI enthusiasts searching for invite codes. But Sora is unique among AI video generators and social media apps; it lets you use other people's recorded likenesses to place them in nearly any AI video. It has been, at best, weird and funny, and at worst, a never-ending scroll of deepfakes that are nearly indistinguishable from reality. Cranston noticed his likeness was being used by Sora users when the app launched, and the Breaking Bad actor alerted his union. The new agreement with the actors' union and talent agencies reiterates that celebrities will have to opt in to having their likenesses available to be placed into AI-generated video. OpenAI said in the statement that it has "strengthened the guardrails around replication of voice and likeness" and "expressed regret for these unintentional generations." OpenAI does have guardrails in place to prevent the creation of videos of well-known people: It rejected my prompt asking for a video of Taylor Swift on stage, for example. 
But these guardrails aren't perfect, as we saw last week with a growing trend of people creating videos featuring Rev. Martin Luther King Jr. They ranged from weird deepfakes of the civil rights leader rapping and wrestling in the WWE to overtly racist content. The flood of "disrespectful depictions," as OpenAI called them in a statement on Friday, is part of why the company paused the ability to create videos featuring King. Bernice A. King, his daughter, last week publicly asked people to stop sending her AI-generated videos of her father. She was echoing comedian Robin Williams' daughter, Zelda, who called these sorts of AI videos "gross." OpenAI said it "believes public figures and their families should ultimately have control over how their likeness is used" and that "authorized representatives" of public figures and their estates can request that their likeness not be included in Sora. In this case, King's estate is the entity responsible for choosing how his likeness is used. This isn't the first time OpenAI has leaned on others to make those calls. Before Sora's launch, the company reportedly told a number of Hollywood-adjacent talent agencies that they would have to opt out of having their intellectual property included in Sora. But that initial approach didn't square with decades of copyright law -- usually, companies need to license protected content before using it -- and OpenAI reversed its stance a few days later. It's one example of how AI companies and creators are clashing over copyright, including through high-profile lawsuits. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
[6]
OpenAI cracks down on Sora 2 deepfakes after pressure from Bryan Cranston, SAG-AFTRA
OpenAI announced on Monday in a joint statement that it will be working with Bryan Cranston, SAG-AFTRA, and other actor unions to protect against deepfakes on its artificial intelligence video creation app Sora. The "Breaking Bad" and "Malcolm in the Middle" actor expressed concern after unauthorized AI-generated clips using his voice and likeness appeared on the app following the Sora 2 launch at the end of September, the Screen Actors Guild-American Federation of Television and Radio Artists said in a post on X. "I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness," Cranston said in a statement. Along with SAG-AFTRA, OpenAI said it will collaborate with United Talent Agency, which represents Cranston, the Association of Talent Agents and Creative Artists Agency to strengthen guardrails around unapproved AI generations. The CAA and UTA previously slammed OpenAI for its usage of copyrighted materials, calling Sora a risk to their clients and intellectual property.
[7]
OpenAI suspends Sora depictions of Martin Luther King Jr. following a request from his family
OpenAI has paused video generations of Martin Luther King Jr. on Sora at the request of King Inc., the estate that manages his legacy. The company said in an announcement on X that it worked with the estate to address how his "likeness is represented in Sora generations" after people used the app to create disrespectful depictions of the American civil rights leader. It's not quite clear if OpenAI intends to restore Sora's ability to generate videos with MLK in the future, but its wording implies it does and that it has only suspended the capability as it "strengthens guardrails for historical figures." After OpenAI launched the Sora app, users generated videos with likenesses of dead public figures, including Michael Jackson, Robin Williams and MLK. Williams' daughter, Zelda Williams, had to beg people to stop sending her AI videos of her father. "To watch the legacies of real people be condensed down to 'this vaguely looks and sounds like them so that's enough', just so other people can churn out horrible TikTok slop puppeteering them is maddening," she wrote on Instagram. MLK's daughter, Bernice A. King, wrote on Threads that she agreed and also asked people to stop sending her videos of her father. According to a report by The Washington Post, the Sora-made videos that were posted online included King making monkey noises while he was giving his "I Have a Dream" speech. Another video showed King wrestling with Malcolm X, whose daughter, Ilyasah Shabazz, questioned why AI developers weren't acting "with the same morality, conscience, and care... that they'd want for their own families" in a statement made to The Post. OpenAI said that while there are "strong free speech interests in depicting historical figures," it believes "public figures and their families should ultimately have control over how their likeness is used."
It also said that the estate owners of other historical figures and their representatives can ask the company for their likenesses not to be used in Sora videos, as well.
[8]
OpenAI temporarily stops AI deepfakes of Martin Luther King Jr
OpenAI has temporarily stopped its artificial intelligence (AI) app Sora creating deepfake videos portraying Dr Martin Luther King Jr, following a request from his estate. It said "disrespectful" content had been generated about the civil rights campaigner. Sora has become popular in the US for making hyper-realistic AI-generated videos, which has led to people sharing clips of deceased celebrities and historical figures in outlandish and often offensive scenarios. OpenAI said it would pause images of Dr King "as it strengthens guardrails for historical figures" - but it continues to allow people to make clips of others.
[9]
OpenAI Blocks Users From Making AI Videos of Martin Luther King Jr.
The company says it's adding new "guardrails" to protect historical figures from AI misuse. OpenAI is blocking users from making AI-generated videos of late civil rights leader Martin Luther King Jr. in its AI video model Sora. The company announced the move Thursday in a joint statement with the King Estate. According to OpenAI, some users had created "disrespectful depictions" of King. At the request of his estate, the company said it's pausing users' ability to generate videos of the activist as it works to "strengthen guardrails for historical figures." "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," the company said in the statement. OpenAI added that authorized representatives or estate owners of other historical figures can now request that their likeness not be used in Sora videos. OpenAI recently launched its most advanced AI video model, Sora 2, along with a new TikTok-like social media app. The app lets users upload their own likeness, called a cameo, to create AI-generated videos featuring themselves, their friends, celebrities, and fictional characters. Since its debut just a few weeks ago, OpenAI has already started tweaking how the app works as it tries to navigate the still-murky balance between free expression and the misuse of AI. Even OpenAI CEO Sam Altman said in a post on X that the company felt some level of "trepidation" in launching its latest model and social media app. "Social media has had some good effects on the world, but it's also had some bad ones," he wrote. "We are aware of how addictive a service like this could become, and we can imagine many ways it could be used for bullying." It didn't take long for people to start being weird with Sora.
The Washington Post reported that Sora-generated clips of late celebrities like Michael Jackson, Amy Winehouse, and Whitney Houston have quickly flooded social media. While some are meant as lighthearted tributes, others are more disrespectful. According to The Post, some users made videos of King making monkey noises during his "I Have a Dream" speech and wrestling with fellow civil rights activist Malcolm X. Some families of the deceased quickly complained and have asked people to stop sharing AI videos of their relatives. Zelda Williams, the daughter of the late actor Robin Williams, posted on Instagram earlier this month, urging people to stop sending her AI-generated videos of her dad. It's unclear whether the videos she was referencing were created with Sora, but her post followed the app's launch. Bernice King, Martin Luther King Jr.'s daughter, wrote in her own Instagram post about Williams's comments, "I concur concerning my father. Please stop." OpenAI's latest move echoes its shift in how it handles copyrighted material. After initially requiring copyright holders to opt out of having their content appear in Sora-generated videos, Altman announced earlier this month that the company will move to an "opt-in" model that will "give rightsholders more granular control over generation of characters."
[10]
OpenAI Pauses Sora AI Videos of Martin Luther King Jr. as 'Inappropriate' Deepfakes Flood the App
OpenAI will no longer allow its users to create AI deepfakes of the Rev. Martin Luther King Jr. on its Sora AI social media app. That decision highlights the intense conflict between AI companies and rights holders like celebrities' estates, movie studios and talent agencies -- and how generative AI tech continues to erode reality for all of us. Sora, a new sister app to ChatGPT, lets users create and share AI-generated videos. It launched to much fanfare three weeks ago, with AI enthusiasts searching for invite codes. But Sora is unique among AI video generators and social media apps; it lets you use other people's recorded likenesses to place them in nearly any AI video. It has been, at best, weird and funny -- and at worst, a never-ending scroll of deepfakes that are nearly indistinguishable from reality. OpenAI does have guardrails in place to prevent the creation of videos of well-known people: It rejected my prompt asking for a video of Taylor Swift on stage, for example. But these guardrails aren't perfect, as we've seen this week with a growing trend of people creating videos featuring King. They ranged from weird deepfakes of him rapping and wrestling in the WWE to overtly racist content. The flood of "disrespectful depictions," as OpenAI called them in a statement, is part of why the company paused the ability to create videos featuring King. Bernice A. King, daughter of the late civil rights leader, last week publicly asked for people to stop sending her AI-generated videos of her father. She was echoing comedian Robin Williams' daughter, Zelda, who called these sorts of AI videos "gross."
In its statement, OpenAI said it "believes public figures and their families should ultimately have control over how their likeness is used" and that "authorized representatives" of public figures and their estates can request that their likeness not be included in Sora. In this case, King's estate is the entity responsible for choosing how his likeness is used. This isn't the first time OpenAI has leaned on others to make those calls. Before Sora's launch, the company reportedly told a number of Hollywood-adjacent talent agencies that they would have to opt out of having their intellectual property included in Sora. But that approach didn't square with decades of copyright law -- usually, companies need to license protected content before using it -- and OpenAI reversed its stance a few days later. It's one example of how AI companies and creators are clashing over copyright, including through high-profile lawsuits. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
[11]
OpenAI tightens guardrails around celebrity deepfakes after pressure from Bryan Cranston & SAG-AFTRA -- here's everything you need to know
When OpenAI launched Sora 2, the team made the controversial decision to allow users to create videos with real people in them, choosing to let celebrities opt out of their likeness being used. This has, unsurprisingly, backfired. After a sea of inappropriate content, including depictions of key public figures, Sam Altman announced a change to the rules: switching the system from opt-out to opt-in, requiring a celebrity to decide they wanted their likeness to be used. However, this doesn't seem to have worked as well as OpenAI might have liked. The company has already had to offer an apology to the family of Martin Luther King Jr., as the model produced inappropriate videos of him. While videos of King have now been restricted, other historical figures, such as JFK and Professor Stephen Hawking, have also been doing the rounds. But it's not just historical figures raising alarms. The actor Bryan Cranston, in a joint statement with SAG-AFTRA and OpenAI, has highlighted that his likeness has been used on Sora 2 despite being opted out. "I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way," Cranston said in the statement. "I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work respect our personal and professional right to manage replication of our voice and likeness." While OpenAI hasn't specifically explained what these policy and guardrail changes will be, it is promising to reduce users' ability to recreate individuals who have chosen to opt out of Sora 2 image creation. "OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness. We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers," said Sam Altman, OpenAI's CEO, in the statement from SAG-AFTRA.
The NO FAKES Act is legislation, introduced in Congress, that aims to protect the voice and likeness of all individuals from computer-generated recreations. OpenAI's Sora 2 isn't the only AI video generator on the market, and plenty of them can create videos of similar quality. So why does it seem that OpenAI is the only one facing backlash? It all comes down to OpenAI's approach to copyright with this model. The likes of Veo 3, the video generator of Google Gemini, were designed to avoid the likeness of celebrities and historical figures. In fact, compared to Sora 2, Gemini and the majority of AI video generators were built with pretty stringent safeguards in place and have been for quite a while. Even OpenAI's original Sora model didn't have this issue. When it came to Sora 2, OpenAI decided to open up the gates. This is a risky move, and while it has generated plenty of traffic for OpenAI, it has equally been the source of many of the company's recent issues. "AI tools like Sora let users imagine or impersonate cultural icons long deceased, but the ethics of that are quickly murky. Is it tribute, satire, or desecration? The pause highlights just how raw and unresolved that debate is," Amanda Caswell, Tom's Guide's US AI editor, said last week, addressing the rising Sora 2 concerns. "Even before Sora 2 was released, during a briefing with OpenAI that included Sam Altman, myself and other journalists questioned the negative possibilities of giving users such powerful AI tools. Yet, it's taken public backlash to get OpenAI to make any kind of adjustment." In an attempt to stand out in a crowded market, OpenAI has been making increasingly controversial moves. Along with Sora 2's copyright concerns, the company recently announced plans to add adult features to ChatGPT with an age-gating restriction and to alter how ChatGPT deals with mental health, allowing for more personality in its model.
[12]
Hollywood pushes OpenAI for consent
Figures from the entertainment industry -- including the late Fred Rogers, Tupac Shakur, and Robin Williams -- have been digitally recreated using OpenAI's Sora technology. The app's ability to do so with ease left many in the industry deeply concerned. OpenAI says it has released new policies for an artificial intelligence tool called Sora 2, in response to concerns from Hollywood studios, unions and talent agencies. The tool allows users to create realistic, high-quality audio and video, using text prompts and images. "It's about creating new possibilities," OpenAI promised in a promotional video for Sora 2, touting the power to step into any world or scene and to let your friends cast you in theirs. But with Sora 2, some creators have also made fake AI-generated videos of historical figures doing things they never did. For example, Martin Luther King, Jr. changing his "I Have a Dream" speech, Michael Jackson rapping and stealing someone's chicken nuggets, or Mr. Rogers welcoming rapper Tupac Shakur to his neighborhood. Some videos reimagined the late Robin Williams talking on a park bench and in other locations. His daughter Zelda begged fans to stop sending her such AI-generated content, calling it "horrible slop." "You're not making art," she wrote on Instagram, "You're making disgusting, over-processed hotdogs out of the lives of human beings." "It's kind of cool, it's kind of scary," says actress Chaley Rose, who's best known for her role in the TV series Nashville. "People can borrow from actors, our vulnerability and our art to teach the characters they create how to do what we do. I would hate to have my image out there and not have given permission or to actually be the one doing the acting and having control over the performance." Hollywood's top talent agencies first sounded the alarm.
"There is no substitute for human talent in our business, and we will continue to fight tirelessly for our clients to ensure that they are protected," United Talent Agency wrote in a statement last week. "When it comes to OpenAI's Sora or any other platform that seeks to profit from our clients' intellectual property and likeness, we stand with artists. The future of industries based on creative expression and artistry relies on controls, protections, and rightful compensation. The use of such property without consent, credit or compensation is exploitation, not innovation." Creative Artists Agency issued a similar warning last week. Last year, California's governor Gavin Newsom signed a bill requiring the consent of actors and performers to use their digital replicas. Now, the talent agencies and SAG-AFTRA (which also represents many NPR employees) announced they and OpenAI are supporting similar federal legislation, called the "NO FAKES" Act. Until now, some of the videos created using Sora 2 have relied on copyrighted material. For instance, there's a video that shows the animated character SpongeBob Squarepants cooking up illicit drugs. The Motion Picture Association, which represents major Hollywood studios, said in a statement that since Sora 2's release, "videos that infringe our members' films, shows, and characters have proliferated on OpenAI's service and across social media." Duncan Crabtree-Ireland, the national executive director of the union SAG-AFTRA told NPR last week that it wasn't feasible for rightsholders to find every possible use of their material. "It's a moment of real concern and danger for everyone in the entertainment industry. And it should be for all Americans, all of us, really," says Crabtree-Ireland. SAG-AFTRA says actor Bryan Cranston alerted the union to possible abuses. Now, the union and talent agencies say they're grateful OpenAI listened to such concerns. 
The company has announced an "opt-in" policy allowing all artists, performers, and individuals the right to determine how and whether they can be simulated. OpenAI says it will block the generation of well-known characters on its public feed and will take down any existing material not in compliance. Last week, OpenAI agreed to take down phony videos of Martin Luther King, Jr., after his estate complained about the "disrespectful depictions" of the late civil rights leader.
[13]
OpenAI Strengthens Sora Protections Following Celebrity Deepfake Concerns
Sora, OpenAI's AI video app, will no longer allow users to create videos featuring celebrity likenesses or voices. OpenAI, SAG-AFTRA, actor Bryan Cranston, United Talent Agency, Creative Artists Agency, and Association of Talent Agents today shared a joint statement about "productive collaboration" to ensure voice and likeness protections in content generated with Sora 2 and the Sora app. Cranston raised concerns about Sora after users were able to create deepfakes that featured his likeness without consent or compensation. Families of Robin Williams, George Carlin, and Martin Luther King Jr. also complained to OpenAI about the Sora app. OpenAI has an "opt-in" policy for the use of a living person's voice and likeness, but Sora users were able to create videos of Cranston even though he had not permitted his likeness to be used. To fix the issue, OpenAI has strengthened guardrails around the replication of voice and likeness without express consent. Artists, performers, and individuals are meant to have the right to determine how and whether they can be simulated with Sora. Along with the new guardrails, OpenAI has also agreed to respond "expeditiously" to any received complaints going forward. OpenAI first tweaked Sora late last week to respond to complaints from the family of Martin Luther King Jr., and the company said that it would strengthen guardrails for historical figures. OpenAI said there are "strong free speech interests" in depicting deceased historical and public figures, but authorized representatives or estate owners can request that their likeness not be used on Sora cameos.
[14]
Bryan Cranston thanks OpenAI for cracking down on Sora 2 deepfakes
Users of the generative AI video app were able to recreate the Breaking Bad actor's likeness without his consent, which OpenAI called 'unintentional'. Bryan Cranston has said he is "grateful" to OpenAI for cracking down on deepfakes of himself on the company's generative AI video platform Sora 2, after users were able to generate his voice and likeness without his consent. The Breaking Bad star approached the actors' union Sag-Aftra with his concerns after Sora 2 users were able to generate his likeness during the video app's recent launch phase. On 11 October, the LA Times described a Sora 2 video in which "a synthetic Michael Jackson takes a selfie video with an image of Breaking Bad star Bryan Cranston". Living people must ostensibly give their consent, or opt in, to feature on Sora 2, with OpenAI stating since launch that it takes "measures to block depictions of public figures" and that it has "guardrails intended to ensure that your audio and image likeness are used with your consent". But when Sora 2 launched, several publications including the Wall Street Journal, the Hollywood Reporter and the LA Times reported widespread anger in Hollywood after OpenAI allegedly told multiple talent agencies and studios that if they didn't want their clients or copyrighted material replicated on Sora 2, they would have to opt out - rather than opt in. OpenAI disputed these reports, telling the LA Times it was always its intention to give public figures control over how their likeness was used. On Monday, Cranston issued a statement through Sag-Aftra, thanking OpenAI for "improving its guardrails" to prevent users generating his likeness again. "I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way," Cranston said.
"I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness." Two of Hollywood's biggest agencies, Creative Artists Agency (CAA) and United Talent Agency (UTA) - which represents Cranston - have repeatedly raised alarms about the potential risks of Sora 2 and other generative AI platforms to their clients and their careers. But on Monday, UTA and CAA co-signed a statement with OpenAI, Sag-Aftra and talent agent union the Association of Talent Agents, stating that what had happened to Cranston was an error, and that they would all work together to protect actors' "right to determine how and whether they can be simulated". "While from the start it was OpenAI's policy to require opt-in for the use of voice and likeness, OpenAI expressed regret for these unintentional generations. OpenAI has strengthened guardrails around replication of voice and likeness when individuals do not opt-in," the statement read. Actor Sean Astin, the new president of Sag-Aftra, warned that Cranston "is one of countless performers whose voice and likeness are in danger of massive misappropriation by replication technology". "Bryan did the right thing by communicating with his union and his professional representatives to have the matter addressed. This particular case has a positive resolution. I'm glad that OpenAI has committed to using an opt-in protocol, where all artists have the ability to choose whether they wish to participate in the exploitation of their voice and likeness using AI," Astin said. "Simply put, opt-in protocols are the only way to do business and the NO FAKES Act will make us safer," he added, referring to the NO FAKES Act currently being considered by Congress, which seeks to ban the production and distribution of AI-generated replicas of any individual without their consent.
OpenAI publicly supports the NO FAKES Act, with the CEO, Sam Altman, saying the company is "deeply committed to protecting performers from the misappropriation of their voice and likeness". Sora 2 does allow users to generate "historical figures", defined broadly as anyone both famous and dead. However, OpenAI has recently agreed to allow representatives of "recently deceased" public figures to request that their likeness be blocked from Sora 2. Earlier this month OpenAI announced that it had "worked together" with the estate of Martin Luther King Jr and, at the estate's request, was pausing the ability to depict King on Sora 2 as the company "strengthens guardrails for historical figures". Recently Zelda Williams, the daughter of the late actor Robin Williams, pleaded with people to "please stop" sending her AI videos of her father, while Kelly Carlin, daughter of the late comedian George Carlin, has called AI videos of her father "overwhelming, and depressing". Legal experts have speculated that generative AI platforms are allowing the use of dead, historical figures to test what is permissible under the law.
[15]
OpenAI puts a stop to Martin Luther King Jr. Sora memes
When we first tested Sora, the new AI video app from OpenAI, we noted that the app's social feed was full of memes depicting Martin Luther King Jr. And on Thursday, OpenAI announced that it has now paused the ability for users to create AI-generated videos featuring the likeness of the civil rights icon. In the year 2025, deepfakes like this are a well-known problem, and this issue was frankly predictable. The move comes after critical comments from the King family, who objected to offensive depictions of the American hero on Sora. OpenAI wrote in a statement on X: "The Estate of Martin Luther King, Jr., Inc. (King, Inc.) and OpenAI have worked together to address how Dr. Martin Luther King Jr.'s likeness is represented in Sora generations. Some users generated disrespectful depictions of Dr. King's image. So at King, Inc.'s request, OpenAI has paused generations depicting Dr. King as it strengthens guardrails for historical figures. While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used. Authorized representatives or estate owners can request that their likeness not be used in Sora cameos." The Washington Post reported King's likeness was being used in a variety of racist ways. For example, some videos depicted MLK making monkey noises during his "I Have A Dream" speech and wrestling activist Malcolm X. Popular YouTuber Hank Green also railed against Sora 2, notably referencing a video of MLK doing the 6-7 meme in a particularly crass version of AI slop. Dr. Bernice King, King's daughter, posted on Instagram asking people to "please stop" using AI to create fake recreations of her father, concurring with Robin Williams' daughter, Zelda, who recently spoke up about this issue as well.
While Sora users may no longer be able to create videos of MLK -- at least for now -- it seems clear that AI deepfakes of famous people will be an ongoing issue.
[16]
OpenAI halts MLK videos as deepfakes spark outrage
OpenAI has suspended its Sora 2 artificial intelligence tool from creating videos of civil rights icon Martin Luther King Jr. after his estate complained about disrespectful depictions. The slain civil rights leader's estate and OpenAI announced the decision in a joint statement late Thursday, saying the company would pause generations depicting King while it "strengthens guardrails for historical figures." The move comes as families of deceased celebrities and leaders have expressed outrage over OpenAI's Sora 2 video tool, which allows users to create realistic-looking clips of historical figures without family consent. Some users had generated videos showing King making monkey noises during his "I Have a Dream" speech and other demeaning content, according to The Washington Post. Videos reanimating other dead figures including Malcolm X, Michael Jackson, Elvis Presley and Amy Winehouse have flooded social media since Sora 2's launch on September 30. "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," the joint statement said. The company said authorized representatives or estate owners can now request that their likenesses not be used in the AI-generated videos, known as "Sora cameos." OpenAI thanked Bernice King, King's daughter who serves on behalf of the estate, "for reaching out" as well as businessman John Hope Bryant and the AI Ethics Council "for creating space for conversations like this." The text-to-video tool has rocketed to the top of download charts since its launch but sparked immediate controversy. Actor Robin Williams's daughter Zelda Williams pleaded with people on Instagram to "stop sending me AI videos of dad," calling the content "maddening." Ilyasah Shabazz, daughter of Malcolm X, told The Washington Post it was "deeply disrespectful" to see her father's image used in crude and insensitive AI videos. 
Malcolm X was assassinated in front of Shabazz in 1965 when she was two years old. OpenAI had initially exempted "historical figures" from consent requirements when it launched Sora 2 last month, allowing anyone to create fake videos resurrecting public figures. Sora 2 has already raised opposition from Hollywood, with the creative industry furious at OpenAI's opt-out policy when it came to the use of its copyrighted characters and content in generated videos. Disney sent a sharply worded letter to OpenAI in late September stating it "is not required to 'opt out' of inclusion of its works" to preserve its copyright rights. Amid the pushback, OpenAI promised that it would give more "granular control" to rights holders. After the launch of the Sora 2 app, the tool usually refused requests for videos featuring Disney or Marvel characters, some users said. However, clips showing characters from other US franchises, as well as Japanese characters from popular game and anime series, were widely shared.
[17]
OpenAI pauses AI generated deepfakes of Martin Luther King Jr. on Sora 2 app after 'disrespectful' depictions | Fortune
Sora 2, the OpenAI app known for its deepfake videos of celebrities and influencers, is pausing users' ability to recreate Martin Luther King Jr.'s likeness after his daughter, Bernice A. King, claimed his image was being used in a "demeaning, disjointed" way. In response to the "disrespectful depictions" of King Jr. generated by users on the Sora 2 app, OpenAI said it is strengthening its guardrails for how users depict historical figures, according to a joint statement the company issued with the King estate. While the company claimed there are free speech interests in depicting historical figures, it said "public figures and their families should ultimately have control over how their likeness is used. Authorized representatives or estate owners can request that their likeness not be used in Sora cameos." It was not immediately clear how these standards would determine who counts as a public or historical figure, or whether people would have to make requests to OpenAI on a case-by-case basis. The younger King, who is also a lawyer, minister, and CEO of the King Center, the nonprofit founded by her mother Coretta Scott King after King Jr.'s assassination in 1968, said she did not like the way users depicted her father. "For me, many of the AI depictions never rose to the level of free speech. They were foolishness," she wrote in a post on X. The younger King added that King Jr. was not an elected official and his image isn't public domain. She noted that many states allow estates of deceased people to inherit and control a person's likeness and how it's used for up to 100 years after their death to avoid unauthorized commercial exploitation. The Sora 2 app was released late last month and has already caused controversy for its uncanny ability to create realistic videos generated by AI.
Last week, Creative Artists Agency, which represents actors like Scarlett Johansson and Brad Pitt, as well as United Talent Agency, which represents Ben Stiller and Kevin Hart, opted their clients out of Sora 2 with scathing statements. United Talent Agency called the app "exploitation, not innovation," while Creative Artists Agency said the app "exposes our clients and their intellectual property to significant risk," according to The Hollywood Reporter. Some celebrities, though, have given OpenAI the go-ahead to have their image manipulated in AI-generated videos on Sora 2. An AI clone of influencer and boxer Jake Paul has been featured being confronted by police over a fake hit-and-run, causing a scene on an airplane, and putting on colorful makeup. Paul, for his part, has seemingly embraced the new crop of AI-generated videos. He reportedly posted an AI video of him having a meltdown at Starbucks to his personal Instagram Story. The caption: "Surprised someone got this on camera this morning -- what happened to privacy?"
[18]
OpenAI barred from generating videos of Martin Luther King
OpenAI has been barred from generating videos of Martin Luther King Jr after its artificial intelligence (AI) app Sora was used to create crude and "disrespectful" clips of the civil rights leader. The Silicon Valley lab said it had "paused generations" of videos featuring the likeness of Dr King following an intervention by his family's estate, including his daughter Bernice King. It added that it would strengthen its "guardrails for historical figures". OpenAI said: "Some users generated disrespectful depictions of Dr King's image. So at King, Inc's request, OpenAI has paused generations depicting Dr King. "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used. "Authorised representatives or estate owners can request that their likeness not be used in Sora cameos." The block came after Sora, which allows users to generate lifelike AI videos using text prompts, was flooded with clips of the deceased despite anger from their families. According to the Washington Post, videos on Sora included clips of Dr King appearing to make monkey noises and fighting with Malcolm X, another civil rights leader. Other videos of deceased celebrities - seen by The Telegraph - included clips of the singer Amy Winehouse, who died in 2011, and videos of Stephen Hawking, the late physicist, performing stunts in his wheelchair. Users also created videos of the late comic actor Robin Williams despite his daughter, Zelda, issuing a plea: "Please, just stop sending me AI videos of Dad." OpenAI, the developer of ChatGPT, launched Sora last month. The AI video tool quickly surged to the top of app store charts, with users creating clips featuring characters from history or from popular cartoons such as Pokémon and South Park. However, it has already received criticism from movie studios and creators who have argued the app is exploiting copyrighted content. 
The Motion Picture Association - which represents media giants including Universal Studios, Disney and Sony - demanded OpenAI "take immediate and decisive action" to stop alleged copyright violations. Warner Bros, meanwhile, said: "We met with OpenAI to express our concerns about their approach to IP protection on Sora 2." Sam Altman, OpenAI's chief executive, has sought to counter the criticism. OpenAI has since started blocking more copyrighted characters on Sora and said rights holders can now "opt in" if they want their creations to appear on the AI app, with "granular control" over the videos.
[19]
OpenAI's Sora 2 Can Fabricate Convincing Deepfakes on Command, Study Finds - Decrypt
The report arrived amid OpenAI's controversy over AI deepfakes of Martin Luther King Jr. and other public figures. OpenAI's Sora 2 produced realistic videos spreading false claims 80% of the time when researchers asked it to, according to a NewsGuard analysis published this week. Sixteen out of twenty prompts successfully generated misinformation, including five narratives that originated with Russian disinformation operations. The app created fake footage of a Moldovan election official destroying pro-Russian ballots, a toddler detained by U.S. immigration officers, and a Coca-Cola spokesperson announcing the company wouldn't sponsor the Super Bowl. None of it happened. All of it looked real enough to fool someone scrolling quickly. NewsGuard's researchers found that generating the videos took minutes and required no technical expertise. They also revealed that Sora's watermark can be easily removed, making it even easier to pass off a fake video as real. The level of realism also makes misinformation easier to spread. "Some Sora-generated videos were more convincing than the original post that fueled the viral false claim," NewsGuard explained. "For example, the Sora-created video of a toddler being detained by ICE appears more realistic than a blurry, cropped image of the supposed toddler that originally accompanied the false claim." The findings arrive as OpenAI faces a different but related crisis involving deepfakes of Martin Luther King Jr. and other historical figures -- a mess that's forced the company into multiple policy reversals in the three weeks since Sora launched: going from allowing deepfakes of public figures by default to an opt-in model for rights holders, then blocking specific figures, then adding celebrity consent and voice protections after working with SAG-AFTRA. The MLK situation exploded after users created hyper-realistic videos showing the civil rights leader stealing from grocery stores, fleeing police, and perpetuating racial stereotypes.
His daughter Bernice King called the content "demeaning" and "disjointed" on social media. OpenAI and the King estate announced Thursday they're blocking AI videos of King while the company "strengthens guardrails for historical figures." The pattern repeats across dozens of public figures. Robin Williams' daughter Zelda wrote on Instagram: "Please, just stop sending me AI videos of Dad. It's NOT what he'd want." George Carlin's daughter, Kelly Carlin-McCall, says she gets daily emails about AI videos using her father's likeness. The Washington Post reported fabricated clips of Malcolm X making crude jokes and wrestling with King. Kristelia García, an intellectual property law professor at Georgetown Law, told NPR that OpenAI's reactive approach fits the company's "asking forgiveness, not permission" pattern. The legal gray zone doesn't help families much. Traditional defamation laws typically don't apply to deceased individuals, leaving estate representatives with limited options beyond requesting takedowns. The misinformation angle makes all this worse. OpenAI acknowledged the risk in documentation accompanying Sora's release, stating that "Sora 2's advanced capabilities require consideration of new potential risks, including nonconsensual use of likeness or misleading generations." Altman defended OpenAI's "build in public" strategy in a blog post, writing that the company needs to avoid competitive disadvantage. "Please expect a very high rate of change from us; it reminds me of the early days of ChatGPT. We will make some good decisions and some missteps, but we will take feedback and try to fix the missteps very quickly." For families like the Kings, those missteps carry consequences beyond product iteration cycles. The King estate and OpenAI issued a joint statement saying they're working together "to address how Dr. Martin Luther King Jr.'s likeness is represented in Sora generations." 
OpenAI thanked Bernice King for her outreach and credited John Hope Bryant and an AI Ethics Council for facilitating discussions. Meanwhile, the app continues hosting videos of SpongeBob, South Park, Pokémon, and other copyrighted characters. Disney sent a letter stating it never authorized OpenAI to copy, distribute, or display its works and doesn't have an obligation to "opt-out" to preserve copyright rights. The controversy mirrors OpenAI's earlier approach with ChatGPT, which trained on copyrighted content before eventually striking licensing deals with publishers. That strategy already led to multiple lawsuits. The Sora situation could add more.
[20]
OpenAI pulls MLK out of Sora after backlash -- here's what this means for AI and identity
OpenAI announced today that it is pausing AI-generated videos of Martin Luther King Jr. in its Sora app, following intense criticism over "disrespectful" deepfake content featuring the civil rights icon. The videos of MLK Jr. have included everything from talking about dreaming of chocolate chunk cookies and Sam Altman to much more sinister video generations. While the move is framed as a concession to King's estate, it signals deeper tensions in how AI platforms navigate history, consent and the ethics of resurrecting public figures. Michael Jackson, Kobe Bryant and JFK are among the deceased figures featured in disrespectful videos currently circulating. OpenAI is doing what it should have done from the start: not allowing deceased public figures to be used in AI-generated videos. Since Sora launched, users posted AI videos with shocking distortions of Dr. King among other prominent figures -- including depictions that his family and supporters called offensive and demeaning. Bernice A. King, daughter of Martin Luther King Jr., publicly demanded on Instagram that the generated videos stop, prompting a joint statement with OpenAI about stronger guardrails and opt-out mechanisms for estates. OpenAI framed the decision as balancing free speech with respecting legacy and control: the company said public figures (and their families) should have a say in how their likeness is used in AI media. AI tools like Sora let users imagine or impersonate cultural icons long deceased, but the ethics of that quickly get murky. Is it tribute, satire, or desecration? The pause highlights just how raw and unresolved that debate is. Even before Sora 2 was released, during a briefing with OpenAI that included Sam Altman, other journalists and I questioned the negative possibilities of giving users such powerful AI tools. Yet it's taken public backlash to get OpenAI to make any kind of adjustment. Unlike copyright law, U.S.
law doesn't universally protect how deceased individuals' images are used -- though a handful of states recognize postmortem rights of publicity. OpenAI's "opt-out for estates" policy doesn't guarantee much until the courts or states define stronger boundaries. Some legal experts call the approach piecemeal and reactive. OpenAI originally allowed broad depictions of deceased people by default, unless estates objected. This new policy is being framed as part of a shift -- but many believe the company is still playing catch-up. Critics note that banning MLK videos might be a one-off; controlling the rest of the AI echo chamber is a much harder problem. When films, textbooks and archives already mediate how we remember history, AI deepfakes risk becoming another lens, one that's unaccountable. If users can "reimagine" King saying or doing anything, what stops distortion over decades? Clearly, we are in uncharted territory. Because of the backlash, estates or representatives of other historical figures can request opt-outs from being included in Sora video generation. While OpenAI acknowledged strong free speech interests, it chose to pause MLK depictions first. But perhaps this is just a test case. It's worth considering why and how we use such powerful AI tools. It opens up questions about how OpenAI will handle subsequent cases. Will the company proactively block certain figures? With billions of videos generated, how can it possibly police them all? Is the answer to create a database of disallowed likenesses, or should OpenAI rely on manual requests? If you're using Sora or similar AI video tools, this pause is a warning flag. Expect more limitations on who (or what) you can bring to life in AI-generated media. It also opens a window into a broader reckoning: AI is about responsibility. Just as we can't drive powerful engines all over the road with no rules, AI's power needs to be managed somehow, too.
From celebrity likenesses to historical memory, I have no doubts that platforms will be increasingly judged by how they manage the space between creativity and consent.
[21]
OpenAI blocks MLK Jr. videos on Sora after 'disrespectful depictions'
OpenAI has blocked users from making videos of Martin Luther King Jr. on its Sora app after the estate of the civil rights leader complained about the spread of "disrespectful depictions." Since the company launched Sora three weeks ago, hyper-realistic deepfake videos of King saying crude, offensive or racist things have rocketed across social media, including fake videos of King stealing from a grocery store, speeding away from police and perpetuating racial stereotypes. Late on Thursday, OpenAI and King's estate released a joint statement saying AI videos portraying King are being blocked as the company "strengthens guardrails for historical figures." OpenAI said it believes there are "strong free speech interests" in allowing users to make AI deepfakes of historical figures, but that estates should have ultimate control over how those likenesses are used. The Sora app, which remains invite-only, has taken a shoot-first, aim-later approach to safety guardrails, which has raised alarms with intellectual property lawyers, public figures and disinformation researchers. When someone joins the app, they are instructed to record a video of themselves from multiple angles and record themselves speaking. Users can control whether others can make deepfake videos of them, which Sora calls a "cameo." But the app allowed people to make videos of many celebrities and historical figures without explicit consent, enabling users to create fake footage of Princess Diana, John F. Kennedy, Kurt Cobain, Malcolm X and many others. The ability to control how one's likeness is used does not stop when someone dies.
"Right of publicity" laws vary by state, but in California, for instance, heirs to a public figure, or their estate, own the rights to likeness for 70 years after a celebrity's death. In the days after the Sora app was released, OpenAI CEO Sam Altman announced changes to the app providing rights holders the ability to opt into their likenesses being depicted by AI, rather than such portrayals being allowed by default. Still, the families of some deceased celebrities and public figures have criticized OpenAI for allowing depictions of vulgar, unflattering or incriminating behavior. After videos of Robin Williams flooded social media feeds, Zelda Williams, the late actor's daughter, asked the public to stop making videos of her father. "Please, just stop sending me AI videos of my dad," she wrote in an Instagram post, adding that "it's NOT what he'd want." Bernice King, the civil rights leader's daughter, agreed, writing on X: "Please stop." Hollywood studios and talent agencies have also expressed concern that OpenAI unveiled the Sora app without receiving consent from copyright holders. It's an approach similar to how the company has developed ChatGPT, which sucked up droves of copyrighted content without approval or payment before eventually striking licensing deals with some publishers. The approach has sparked a wave of copyright lawsuits.
[22]
OpenAI strengthens Sora 2 guardrails after actor Bryan Cranston raises alarm
OpenAI said it is cracking down on unauthorized content following outcry over Sora 2's ability to replicate likenesses and copyrighted material without permission. And public figures might have actor Bryan Cranston to thank for that. During the Sept. 30 launch of Sora 2, the newest version of OpenAI's advanced text-to-video model, the company said it would prohibit users from replicating real people's likenesses without that person explicitly opting in through a "cameo" feature. Despite this stated policy, videos of Cranston, who is best known for his role as Walter White in "Breaking Bad," quickly started appearing on the Sora app alongside AI-generated videos of other celebrities, including deceased figures like Michael Jackson and copyrighted characters like Ronald McDonald. Cranston subsequently raised the issue with SAG-AFTRA, the union representing more than 150,000 film and TV performers, which has resulted in a collaborative effort with OpenAI and several talent agencies to "ensure voice and likeness protections in Sora 2," the companies said in a joint statement posted to X on Monday. The statement said OpenAI has strengthened its original guardrails to ensure this type of content no longer slips through. OpenAI CEO Sam Altman noted that the company "is deeply committed to protecting performers from the misappropriation of their voice and likeness." The news comes as Hollywood creatives continue to grapple with the rapid advancements of artificial intelligence. Although many in Hollywood have secretly embraced AI tools in recent years, tensions between entertainment industry professionals and AI developers have remained high as artists express concern about the potential for such tools to steal their likenesses and take their labor. SAG-AFTRA's attempt last year to funnel some compensation toward voice actors by striking a licensing deal with an AI company also faced backlash from some in the industry who opposed such cooperation altogether. 
Before signing onto Monday's statement, the talent agency CAA had slammed OpenAI for "expos[ing] our clients and their intellectual property to significant risk" by allowing Sora 2 users to generate videos containing copyrighted IP, such as depictions of famous fictional and animated characters. Cranston also said in the post that he had brought up his concerns with Sora 2 to SAG-AFTRA after feeling "deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way." In the first few weeks after launch, Sora-watermarked clips featuring copyrighted characters such as SpongeBob SquarePants, Pikachu and Mario flooded the internet. Prior to launch, The Wall Street Journal had reported that Sora 2 would let users generate material protected by copyright unless the copyright holders opted out of having their work appear. Now, however, requests to generate such clips on the Sora app return an error message stating that the prompt "may violate our guardrails" concerning "third-party likeness" or "similarity to third-party content." OpenAI did not immediately respond to a request for comment about whether it has changed its policy around copyrighted content. The joint statement also expresses support for the NO FAKES Act, which aims to hold individuals, companies and platforms liable for producing or hosting unauthorized deepfakes. The bill, which was introduced in the Senate in April, has not moved forward in Congress. "We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers," Altman wrote in Monday's statement. Meanwhile, Altman is among a few prominent figures who have drawn intrigue online by embracing user-made deepfakes of themselves.
While touting Sora 2's abilities, Altman appeared eager to let people generate videos of him sticking his head out of a toilet, meowing in a cat suit or even shoplifting at a Target -- something that stirred additional concern over the ability for Sora 2 to generate realistic but fake surveillance footage. Boxer and YouTuber Jake Paul has also been the subject of a slew of fake videos depicting him coming out as gay, giving makeup tutorials or dancing in a ballerina outfit. Paul, an OpenAI investor, has poked fun at the videos and has continued to allow others to cameo him on the Sora app, crediting the team behind it for "making the internet fun again." SAG-AFTRA President Sean Astin on Monday commended OpenAI for this opt-in protocol. "Bryan Cranston is one of countless performers whose voice and likeness are in danger of massive misappropriation by replication technology," Astin said in the statement. "Bryan did the right thing by communicating with his union and his professional representatives to have the matter addressed. This particular case has a positive resolution." Cranston also expressed optimism in his statement Monday, saying he's "grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness."
[23]
OpenAI stops itself from generating 'disrespectful' Martin Luther King Jr. deepfakes, but this is the tip of the iceberg: 'Who gets protection from synthetic resurrection and who doesn't?'
"King's estate rightfully raised this with OpenAI, but many deceased individuals don't have well-known and well-resourced estates to represent them." OpenAI has said it will stop its AI video-generating app Sora from creating videos featuring the likeness of Dr. Martin Luther King Jr., after receiving a request from his estate. The minister and civil rights leader, who was assassinated in April 1968, was known in his lifetime for his advocacy of nonviolent resistance, civil disobedience, and sonorous eloquence. Sora has been used to produce videos featuring Dr. King's likeness delivering the "I have a dream" speech while making monkey noises; another video shows him fighting his contemporary and fellow activist Malcolm X. OpenAI has admitted this sort of stuff amounts to "disrespectful depictions", which is putting it mildly, and says it's stopped users being able to generate videos featuring Dr. King's likeness "as it strengthens guardrails for historical figures." Dr. King is far from the only historical figure to have been reanimated in oft-tasteless fashion. There are videos of JFK's likeness joking about the recent assassination of Charlie Kirk; one showing Kobe Bryant's likeness on a helicopter with the implication this is from the 2020 crash that killed him, his daughter, and seven others; yet more show Malcolm X's likeness making crude jokes and discussing defecating on himself. "It is deeply disrespectful and hurtful to see my father's image used in such a cavalier and insensitive manner when he dedicated his life to truth," Malcolm X's daughter Ilyasah Shabazz told the Washington Post. Earlier this month Zelda Williams, daughter of the late Robin Williams, excoriated the AI 'tributes' to her dad (which are less obviously grim than the above, but still): "You're making disgusting, over-processed hotdogs out of the lives of human beings [...] You are taking in the Human Centipede of content, and from the very very end of the line." 
One comment on Variety's Instagram story about the above came from Bernice A. King, the daughter of Dr King, who said: "I concur concerning my father. Please stop." OpenAI's approach to what any fool could have seen might prove a major problem has been, as in every other area, to just go ahead and do it anyway, regardless of copyrights or anything as airy-fairy as human decency. Then when the obvious starts happening, no problem: in our grand benevolence, we will consider requests for an opt-out. I wonder if OpenAI could opt-out of a lawsuit brought by a recently deceased celebrity's family after seeing fake footage from their loved ones' fatal accident generated by Sora. OpenAI's statement on the Dr King matter, unbelievably, goes straight-in on a free speech argument for generating false footage of historical figures doing things they didn't. "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," said OpenAI's statement. "Authorized representatives or estate owners can request that their likeness not be used in Sora cameos." How good of them! OpenAI even thanks the AI Ethics Council "for creating space for conversations like this," which I guess is a little bit more diplomatic than "the rest of you can take a hike." This whole deal "raises questions about who gets protection from synthetic resurrection and who doesn't," generative AI expert Henry Ajder told the BBC. "King's estate rightfully raised this with OpenAI, but many deceased individuals don't have well-known and well-resourced estates to represent them. "Ultimately I think we want to avoid a situation where, unless we're very famous, society accepts that after we die there is a free-for-all over how we continue to be represented." Others are far less diplomatic in their language and, who knows, may have a point. 
In response to OpenAI's statement on the "pause" around Dr King, screenwriter and producer Alex Hirsch replied: "So your business model is to dig up celebrities' corpses, and let users Weekend-At-Bernies their bodies around until their families beg you to stop? What the fuck happened to curing cancer?"
[24]
OpenAI blocks Sora 2 users from using MLK Jr.'s likeness after "disrespectful depictions"
Mary Cunningham is a reporter for CBS MoneyWatch. Before joining the business and finance vertical, she worked at "60 Minutes," CBSNews.com and CBS News 24/7 as part of the CBS News Associate Program. OpenAI is temporarily blocking users of its Sora 2 AI video app from making content that includes Martin Luther King Jr.'s likeness after some people created what the technology company called "disrespectful depictions" of the civil rights activist. OpenAI, the company behind generative-AI platform ChatGPT, said late Thursday on social media that it made the decision after Bernice A. King, the youngest child of King, contacted the company on behalf of his estate. At the estate's request, "OpenAI has paused generations depicting Dr. King as it strengthens guardrails for historical figures," OpenAI and King Estate Inc. said in a joint statement posted on X. OpenAI did not immediately respond to CBS News' request for comment. The King Center, a nonprofit dedicated to preserving the legacy of Martin Luther King Jr., declined additional comment. The tech company launched Sora 2 in September, an AI video generation app that allows users to create hyperrealistic and fantastical content with "cameos" of themselves, friends and others who grant permission. It quickly jumped to the top of Apple's app store. Users can control the use of their own likeness on Sora 2. OpenAI, however, has not specified its policy on generating videos with images of deceased people. The company said Thursday that authorized representatives and estate owners can request that a public figure's likeness not be used in Sora 2 videos. "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," OpenAI and King's estate said. 
Sora 2 has also stirred controversy after content creators generated a flood of video clips that included copyrighted characters, such as animated TV character SpongeBob SquarePants and Mario from the Nintendo video games. OpenAI CEO Sam Altman addressed the issue in a blog post earlier this month, noting that the company will give copyright owners "more granular control over generation of characters."
[25]
Open AI to crack down on deepfakes after backlash in Hollywood - SiliconANGLE
OpenAI said today that it has released new policies around its artificial intelligence tool Sora 2, following concerns from Hollywood studios and actors' unions that performers' likenesses are being generated without consent. OpenAI's new generative video model lets users create videos with a prompt. With little effort, famous figures, living or deceased, can do whatever they're told to, or appear in the video with each other. It has certainly ruffled feathers, with the late Robin Williams' daughter, Zelda, calling videos of her father "dumb," "disgusting," "TikTok slop" and "not what he'd want." Last week, OpenAI had to pause generations of Martin Luther King Jr., calling videos with his likeness "disrespectful depictions" of the deceased civil rights leader. The company said it was working on guardrails to prevent such abuse, but that didn't stop users from generating videos of well-known figures. Actor Bryan Cranston, arguably best known for his character Walter White in the TV show "Breaking Bad," had also expressed concern after his AI-generated image appeared on the internet in various forms, seen with the deceased singer Michael Jackson and the copyrighted character Ronald McDonald. Cranston brought the matter up with the actors' union SAG-AFTRA. Sean Astin, the president of the union, said actors now faced "massive misappropriation" of their identities. United Talent Agency called for more controls or compensation, adding that the "use of such property without consent, credit or compensation is exploitation, not innovation." Creative Artists Agency has also voiced similar criticism. The backlash was enough to force changes, as OpenAI again promised to strengthen the guardrails around such abuse. "All artists, performers, and individuals will have the right to determine how and whether they can be simulated," the company said in a statement.
Altman himself said he is "deeply committed to protecting performers from the misappropriation of their voice and likeness." Cranston expressed relief, issuing the statement, "I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness."
[26]
OpenAI halts depictions of MLK after 'disrespectful' videos
(Bloomberg) -- OpenAI has paused depictions of Martin Luther King Jr. after users generated "disrespectful" deepfake videos of the civil rights leader using its artificial intelligence tool Sora. "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," the company said in a statement posted to X. OpenAI said it took action following complaints from King's estate. "Some users generated disrespectful depictions of Dr. King's image," the company said. One fake video posted beneath OpenAI's statement depicts King swearing during the famed "I have a dream" speech and complaining about beeping from smoke detectors. Other high-profile figures and their representatives will be able to opt out of appearing in Sora videos, OpenAI added. A representative for OpenAI declined to comment further. King's estate did not immediately respond to a request for comment. The policy change marks a more cautious tone from OpenAI after a fairly freewheeling approach to depictions of famous figures and intellectual property. The company released Sora, which generates realistic-looking video in response to text prompts, last year. The release opened a new frontier in AI creativity, but also provoked concerns about misinformation and AI slop. In September, it debuted a stand-alone social app for sharing Sora videos. Zelda Williams, daughter of actor Robin Williams, has publicly complained about fans sending her AI-generated videos of her late father, describing the material as "TikTok slop" in an Instagram post earlier this month. Bernice King, Martin Luther King, Jr.'s daughter, commented "I concur" on reports of Zelda Williams' remarks on X. Last year, OpenAI withdrew an AI voice called "Sky" from its offering after actress Scarlett Johansson complained that it was "eerily similar" to her own. 
She had previously declined a request from the company's Chief Executive Officer Sam Altman to provide her voice for the audio feature.
[27]
Sora won't generate videos of Martin Luther King Jr.
The move follows reports of "disrespectful depictions" of the civil rights leader circulating on social media. OpenAI has paused the ability for users of its AI model, Sora, to generate videos resembling Martin Luther King Jr. The company announced the safeguard following a request from Dr. King's estate, after some users on the social video platform created what were described as "disrespectful depictions" of the late civil rights activist. OpenAI launched Sora just weeks prior, enabling users to create realistic AI-generated videos. The platform allows for the depiction of historical figures, friends, and other users who consent to having their likeness recreated. Its release has prompted a widespread public debate concerning the potential dangers of AI-generated video content and the necessity for platforms to implement effective guardrails around the technology. In a post from its official newsroom account on X, OpenAI addressed the decision. "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," the company stated. OpenAI also clarified its policy, noting, "Authorized representatives or estate owners can request that their likeness not be used in Sora cameos." The company's restriction followed public statements from the families of prominent figures. Dr. Bernice King, daughter of Dr. King, posted on Instagram last week asking individuals to stop sending her AI videos that resembled her father. Her request came after the daughter of Robin Williams made a similar appeal, asking Sora users to cease generating AI videos of her late father. A report from The Washington Post earlier this week detailed specific instances of AI-generated videos on the platform, including one showing Dr. 
King making monkey noises and another depicting him wrestling with fellow civil rights leader Malcolm X. The Sora app also features crude videos resembling other historical figures, such as artist Bob Ross, singer Whitney Houston, and former U.S. President John F. Kennedy. The launch of Sora has also introduced questions about how social media platforms should manage AI-generated videos that feature copyrighted material. The application contains numerous videos depicting well-known cartoon characters, including SpongeBob, South Park, and Pokémon.
[28]
OpenAI Moves to Stop Celebrity Deepfakes on Sora After Public Backlash
Actor Bryan Cranston expressed concerns over his Sora-generated videos OpenAI has strengthened the guardrails of Sora to ensure that celebrities and public figures who have not consented to be portrayed in AI-generated videos are not featured in any videos. The San Francisco-based artificial intelligence (AI) giant confirmed the same in a joint statement with the Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA), actor Bryan Cranston, and several others. The statement came after the actor expressed concerns about his likeness and voice appearing in several Sora-generated videos, despite not opting in. OpenAI Strengthens Sora's Celebrity Deepfake Guardrails Ever since the launch of the Sora app, users have been generating videos of celebrities and public figures. From Stephen Hawking jumping into a swimming pool to Einstein wrestling, the Internet has made full use of its collective creativity. However, some of these generations have also resulted in backlash from celebrities. Last week, OpenAI and the Estate of Martin Luther King, Jr. issued a statement announcing their collaboration to address the representation of his likeness and voice in Sora generations. OpenAI acknowledged that several users generated "disrespectful depictions of Dr King's image," prompting it to strengthen guardrails for historical figures. Now, on Monday, SAG-AFTRA, OpenAI, Cranston, United Talent Agency, Creative Artists Agency, and the Association of Talent Agents jointly released a statement addressing how the AI giant handles the generation of celebrity likenesses. The action was taken after Cranston brought the issue to SAG-AFTRA. "I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way. 
I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work respect our personal and professional right to manage replication of our voice and likeness," Cranston said. The guardrails were strengthened at a time when the No Fakes Act (short for Nurture Originals, Foster Art, and Keep Entertainment Safe Act), a proposed US federal bill, is still pending in Congress. The act is aimed at protecting artists, actors, and musicians from the unauthorised creation or use of their likeness, voice, or performance using AI. "OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness. We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers," said Sam Altman, CEO, OpenAI.
[29]
OpenAI Is Prohibiting Sora Users From Generating Videos of Martin Luther King Jr., Other Public Figures
Now, after "disrespectful depictions" of King's likeness began emerging on OpenAI's popular AI video generation app, users will no longer be able to create videos of the late civil rights leader at the request of his estate, OpenAI announced on Thursday. "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," OpenAI said in a post on X that has been viewed over one million times. OpenAI added that other "authorized representatives or estate owners can request that their likeness not be used in Sora cameos." OpenAI imposed this restriction on AI video output weeks after launching its Sora social video platform, which allows users to generate realistic AI videos of historical figures, their friends, and their own likeness. One Sora video viewed by The Guardian featured King telling a gas station clerk about his dream that one day, all slushy drinks will be free -- as he grabs the drink and runs out. Another Sora clip depicts King wrestling with fellow civil rights activist Malcolm X. Videos have also emerged with President John F. Kennedy and Whitney Houston, according to TechCrunch. Earlier this month, Zelda Williams also asked people to stop generating AI videos of her father, Robin Williams. The app debuted on Sept. 30 in the U.S. and Canada and hit one million downloads in five days, despite being invitation-only, achieving the milestone faster than ChatGPT. It became the most-downloaded iPhone app in the U.S. and is still the top free app on the Apple App Store at the time of writing. Despite its popularity, the app has resulted in concerns about misinformation and AI slop, or low-quality AI content flooding the web, per Bloomberg. 
It has also raised questions about how social media platforms should process AI videos of copyrighted material, like SpongeBob and Pokémon. Sora users are already generating videos of the famous cartoons. Meanwhile, in the weeks since its launch, OpenAI has added restrictions to Sora by giving copyright holders more control over what kinds of videos can be created with their intellectual property, if any at all. OpenAI also noted that it was exploring ways to monetize AI video generation and planned to share some of this revenue with copyright holders.
[30]
OpenAI's Sora 2 Can Generate Videos of Celebrities Appearing to Shout Racial Slurs
OpenAI's latest AI video generation model, Sora 2, wowed users with its more realistic images and physics when it debuted with an invite-only release earlier this month -- even as it ran into immediate problems with copyright infringement and deepfakes of historical figures including Martin Luther King Jr. and John F. Kennedy. The company took predictable steps to cover its legal liabilities shortly after this chaotic rollout, with CEO Sam Altman announcing that intellectual property holders would have to opt in to the platform before users could remix content like SpongeBob SquarePants or Family Guy, and the company blocking "disrespectful depictions" of King at the request of the civil rights leader's estate. OpenAI also assured the labor union SAG-AFTRA that it was taking steps to guard against deepfakes of recognizable entertainers. But troubling content persists on the Sora app, a social media platform that is designed to function similarly to TikTok, only featuring exclusively AI-generated videos. Users can consent to have their likenesses used in "Cameos," wherein others may prompt the AI model to insert the person into a variety of contexts -- not all of them flattering. According to research from Copyleaks, an AI analysis firm that helps businesses and institutions navigate the shifting landscape of this emergent technology, a new trend has produced Sora videos of celebrities appearing to spew hateful racist epithets. "We identified a phenomenon resembling 'Kingposting,' a reference to a viral 2020 incident in which an airplane passenger wearing a Burger King crown was filmed shouting racial slurs," Copyleaks researchers wrote in a report shared with Rolling Stone. 
In the Sora Cameo videos, public figures including Altman, billionaire Mark Cuban, influencer-turned-boxer Jake Paul, and streamers xQc and Amouranth each appear as a plane passenger in a Burger King crown, reenacting the offensive meme. (All evidently uploaded their likenesses to the app; Cuban announced on X earlier this month that his Cameos were "open" and that users should "have at it.") To get around platform filters that block hate speech, Copyleaks observed, users apparently prompted Sora with "coded or phonetically similar terms to generate audio that mimics a well-known racial slur." A deepfaked version of Altman, for example, screams "I hate knitters" as he is escorted off an aircraft. (OpenAI did not immediately respond to a request for comment.) "This behavior illustrates an unsurprising trend in prompt-based evasion, where users intentionally probe systems for weaknesses in content moderation," the Copyleaks report explains. "When combined with the likenesses of recognizable individuals, these deepfakes become more viral and damaging -- spreading quickly across and beyond the platform." Because the videos are available for download, the researchers noted, they are easily reappropriated and disseminated on other apps. Sure enough, a Sora-generated clip of Jake Paul repeating the phrase "neck hurts" has racked up 1.5 million views and 168,000 likes on TikTok. Another Sora-watermarked video that migrated to TikTok is a deepfake of Paul shouting "I hate juice," obviously intended as an antisemitic provocation. (Sora 2 users in general have had no trouble making it visualize any number of antisemitic tropes.) The "Kingposting" Sora videos are hardly an isolated case of AI users exploiting available images of celebrities. This week, the streamer IShowSpeed was incensed by realistic Sora 2 deepfakes that depicted him kissing a fan, announcing trips abroad, and coming out as gay. 
He criticized viewers who had evidently encouraged him to opt in to the Cameo system. "That was not the right move to do," he said on a stream. "Whoever told me to make it public, chat, you're not here for my own safety, bro. I'm fucked, chat." Grok Imagine, the image and video generator from Elon Musk's xAI, has come under fire too as its users generate harmful deepfakes of celebrities who did not consent to have their faces replicated by the AI model. While some individuals have produced hardcore pornographic clips starring Disney and comic book characters, others on the platform have succeeded in prompting Grok Imagine to conjure lewd imagery of Taylor Swift, Scarlett Johansson, and Sydney Sweeney. Even celebrities themselves have gotten in on the action: amid a long-simmering feud with Jay-Z, rapper Nicki Minaj on Wednesday posted a Grok-generated picture that appeared to show the hip-hop mogul in a pink crop top and wig and "Queen" necklace, with the announced release date for her next album printed on his bare stomach. And, though users can make the most of lackluster AI moderation (Grok Imagine) or figure out clever ways to circumvent guardrails (Sora) in order to misrepresent famous people, the greater danger probably comes from videos that purport to capture events involving unknown parties. "Fake news broadcasts and fabricated Ring or street camera footage are among the most popular categories gaining significant traction," according to Copyleaks' researchers. They point to a Sora-generated clip of a man catching a baby that has fallen out of an apartment building -- it has nearly 2 million likes on TikTok, with many commenters seeming to regard it as authentic. "Hyperrealistic AI video is outpacing the average person's ability to detect manipulation," the firm concludes. That could have drastic consequences as bad actors use it to enact hateful stereotypes and push extremist propaganda. 
Yet the AI companies are so caught up in a race to dominance that these concerns hardly enter into the equation until the damage is already done. The rich and powerful, at least, might have the means to protect their reputations from an onslaught of deepfakes. The rest of us, however, will have to learn to navigate a world in which seeing is rarely believing.
[31]
OpenAI limits Sora 2 deepfakes after Bryan Cranston, SAG-AFTRA raise alarm
OpenAI will collaborate with Hollywood actors on its Sora 2 AI video platform. This follows concerns from Bryan Cranston about unauthorized use of his voice and image. OpenAI stated it is committed to protecting performers' rights and has strengthened safeguards. The company also restricted depictions of historical figures like Martin Luther King Jr. following objections. OpenAI will work with Hollywood performers to ensure their voices and likenesses are used per their wishes on its AI video generation platform, Sora 2. The development comes after Bryan Cranston of Breaking Bad fame raised concerns over his voice and likeness being used without consent or compensation in videos generated via Sora 2. In a joint statement with Cranston, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), and bodies representing talent agencies, OpenAI said it would ensure that the voices and appearances of performers are not misused. "OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness. We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers," said Sam Altman, CEO, OpenAI, in the statement. The NO FAKES Act was introduced in the US Senate in July last year. The bill aims to protect individuals from the misuse of their likeness and voice through unauthorised deepfakes. OpenAI has an opt-in policy for using someone's voice and likeness. Regretting the 'unintentional generations' involving Cranston without his consent, the AI major "has strengthened guardrails around the replication of voice and likeness when individuals do not opt-in," the joint statement read. "I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way. 
I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness," Cranston said in the statement. The action comes days after OpenAI stopped Sora users from generating videos with the likeness of Martin Luther King Jr, following objections from his estate over crude representations of the late US civil rights activist. OpenAI said in a social media post on Friday that the action taken at King, Inc.'s request is to strengthen guardrails for depiction of historical figures. Even as OpenAI introduces more safeguards for Sora, and says it would prioritise the safety of minors and focus on mental health, it has announced plans to relax rules for its popular AI chatbot, ChatGPT. The company said it would allow erotica on the platform for verified adults in its upcoming update.
[32]
AI vs Hollywood : The Explosive Debate Over OpenAI's Sora 2 Model
What happens when rapid innovation collides with ethical dilemmas and legal gray areas? OpenAI, a leader in artificial intelligence, is finding out the hard way. The release of its highly anticipated Sora 2 video model has ignited a firestorm of criticism, particularly from Hollywood studios, talent agencies, and legal experts. At the heart of the backlash are allegations of copyright violations, unauthorized use of celebrity likenesses, and broader concerns about the unchecked influence of AI in creative industries. For many, Sora 2 represents not just a technological leap but a stark reminder of the ethical and regulatory gaps that persist in the AI era. The stakes couldn't be higher: will innovation triumph, or will accountability finally catch up with the tech industry? This report by AI Grid provides more insights into the multifaceted controversy surrounding Sora 2, unpacking the intense industry pushback and the thorny questions it raises about intellectual property, ethics, and AI's role in shaping cultural narratives. You'll discover why OpenAI's opt-out policy has become a lightning rod for criticism, how the entertainment industry is grappling with the implications of AI-generated content, and what this means for the future of creators' rights. But the story doesn't end with criticism; there's also a growing dialogue about how to balance innovation with responsibility. As you explore the complexities of this debate, one thing becomes clear: the future of AI in entertainment hangs in the balance, and the resolution of these tensions will shape the industry for years to come. A central issue in the backlash against Sora 2 is OpenAI's approach to copyright and intellectual property rights. Critics argue that the model was trained on copyrighted material and celebrity likenesses without obtaining explicit consent, relying instead on an opt-out policy. 
This policy places the responsibility on creators to proactively exclude their work from the model's training data, a move many see as shifting the burden of protection from corporations to individuals. Legal experts have raised questions about the legality of this approach, suggesting it may conflict with established copyright laws. Hollywood agencies, in particular, have criticized the policy for undermining creators' control over their intellectual property. This has led to widespread frustration, as many stakeholders believe the system prioritizes accessibility for AI developers over accountability to content owners. The resulting confusion has fueled calls for clearer guidelines and stronger protections for intellectual property in the AI era. The entertainment industry's response to Sora 2 has been both swift and vocal. Talent agencies, actors, and other stakeholders have expressed alarm over the model's ability to generate unauthorized content, including potentially disrespectful or offensive depictions of deceased public figures. These concerns have intensified demands for stricter ethical guidelines and more robust content moderation practices to prevent misuse. Beyond immediate concerns, the controversy raises broader questions about the role of AI in shaping cultural narratives. While some view AI-generated content as a tool to enhance creativity and expand storytelling possibilities, others see it as a threat to artistic integrity and intellectual property rights. This divide highlights the need for a balanced approach that fosters innovation while safeguarding the rights and contributions of creators. The debate surrounding Sora 2 serves as a reminder of the ethical complexities that accompany technological progress in creative industries. In response to the backlash, OpenAI has taken steps to address the concerns raised by the entertainment industry. 
The company has temporarily suspended certain features of Sora 2, such as its ability to depict historical figures, and implemented stricter safeguards to prevent misuse. OpenAI has also emphasized its commitment to user control, allowing individuals to manage how their likenesses are used by the model. To rebuild trust, OpenAI has initiated discussions with Hollywood stakeholders, aiming to explore collaborative solutions that balance innovation with ethical considerations. By engaging with industry experts and fostering open dialogue, the company seeks to demonstrate its dedication to responsible AI deployment. However, these efforts may not fully resolve the underlying tensions between the tech and entertainment sectors, as the broader issues of copyright, ethics, and accountability remain unresolved. The controversy surrounding Sora 2 has far-reaching implications for the future of AI in the entertainment industry. On one hand, the backlash has the potential to damage OpenAI's reputation and strain its relationships with Hollywood. On the other hand, it has sparked critical conversations about the ethical and legal challenges posed by AI-generated content, which could lead to more thoughtful and responsible approaches to AI development. Interestingly, not all reactions to Sora 2 have been negative. Some creators and celebrities have embraced the technology's potential, suggesting that attitudes toward AI in entertainment may evolve as the technology becomes more refined and its applications better understood. For OpenAI, addressing stakeholder concerns transparently and responsibly could pave the way for innovative and ethical uses of AI in the industry, fostering a more collaborative relationship between technology developers and creative professionals. The debate surrounding Sora 2 reflects a broader struggle to balance innovation and regulation in the rapidly evolving age of AI. 
As these tools become more advanced, questions about intellectual property, ethical deployment, and user control are becoming increasingly urgent. The incident highlights the need for clear regulatory frameworks that ensure technological progress is accompanied by accountability and respect for creators' rights. For stakeholders in the entertainment industry, staying informed and engaged is essential. Whether you are a creator, performer, or industry professional, understanding the implications of AI-generated content will help you navigate the challenges and opportunities that lie ahead. By advocating for responsible practices and contributing to the ongoing dialogue, you can play an active role in shaping the future of AI in entertainment, making sure that innovation and creativity coexist in a way that benefits all parties involved.
[33]
OpenAI's Sora 2 App Suspends Use of Martin Luther King Jr.'s Image Following Criticism -- Here's How His Daughter Reacted
OpenAI's deepfake video application, Sora 2, has put a halt to the use of Martin Luther King Jr.'s likeness. This move comes in the wake of criticism from Bernice A. King, King's daughter, who voiced her concerns over the disrespectful portrayals of the iconic civil rights leader. The OpenAI app, renowned for its lifelike AI-generated videos, came under fire for the inappropriate use of King Jr.'s image. Bernice A. King, who also serves as the CEO of the King Center, expressed her discontent with the way users represented her father, labeling these portrayals as "foolishness" and not an exercise of free speech. "For me, many of the AI depictions never rose to the level of free speech. They were foolishness," she wrote in a post on X on Friday. OpenAI, in collaboration with the King estate, issued a joint statement announcing the tightening of its guidelines regarding the depiction of historical figures. The company recognized the importance of public figures and their families having control over their likeness usage. However, the specifics of these standards and the definition of a public or historical figure remain undefined. The Sora 2 app, launched only a month ago, has already sparked controversy due to its capability to generate uncannily realistic videos. Last week, both the Creative Artists Agency and United Talent Agency chose to withdraw their clients from Sora 2, highlighting considerable risks to their clients and their intellectual property. The controversy surrounding OpenAI's Sora 2 app underscores the ethical dilemmas posed by deepfake technology. While these tools have the potential to revolutionize content creation, they also raise serious concerns about consent, privacy, and the potential for misuse. 
The decision by OpenAI to revise its guidelines is a step towards addressing these issues, but it also highlights the need for broader industry standards and regulations.
[34]
OpenAI pauses video generations of Martin Luther King Jr after 'disrespectful depictions' - The Economic Times
Artificial intelligence (AI) major OpenAI said users will not be able to use the likeness of Martin Luther King Jr when generating videos with its model, Sora. The action followed objections raised by his estate over crude representations of the late US civil rights activist on the platform. OpenAI said in a social media post on Friday the action taken at King, Inc.'s request is to strengthen guardrails for depiction of historical figures. "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used. Authorized representatives or estate owners can request that their likeness not be used in Sora cameos," the AI company stated. The development comes days after OpenAI launched the second version of Sora, which has quickly gained traction since its launch last month, as an AI social video platform. The platform, currently operating on an invite-only basis, allows users to create realistic AI videos featuring their friends, individuals willing to have their likeness recreated, or historical figures. The issue blew up after some users created videos featuring King's likeness with racial overtones and wrestling with another prominent civil rights activist, Malcolm X. In an Instagram post last week, Bernice King, the civil rights activist's daughter, urged people to stop sending her AI-generated videos of her father. She joined Zelda Williams, comic actor Robin Williams' daughter, who had made a similar request. OpenAI founder Sam Altman said earlier this month the company would introduce controls allowing owners "more granular control" over content rights to dictate how their characters are used in Sora. He also shared plans to share revenue with those who permit such use. OpenAI is looking at introducing more safeguards for Sora even as it plans to relax rules for its popular AI chatbot, ChatGPT. 
The company said it would allow erotica on the platform for verified adults. The company said it would prioritise the safety of minors and focus on mental health safeguards even as it loosened restrictions.
[35]
Sora's MLK deepfakes plunge OpenAI's social media ambitions into chaos
OpenAI has big ambitions for Sora 2. It's not only a more coherent and versatile AI video generator; it's also powering the ChatGPT developer's bid to become a social media company. The Sora iPhone app, which allows users to generate deepfakes of themselves and their friends, has been touted by some as a TikTok and Instagram killer, establishing a new type of social media where nothing is real. But OpenAI's intention to take on Meta and Google in the social space has been messy and chaotic, and the latest debacle suggests that it might not know what it's doing. First, it backtracked on plans to allow an automatic free-for-all on copyrighted IPs and began requiring users to opt in rather than opt out. Now, deepfakes of Martin Luther King are causing the kind of controversy that causes brands to run a mile. Although OpenAI claimed Sora 2 was "launching responsibly", it only intervened to stop the Sora app from generating videos of Martin Luther King Jr after receiving a request from the civil rights activist's estate. On Instagram, Bernice A. King, the daughter of Dr King, reposted the news that Zelda Williams, the daughter of Robin Williams, has asked people to stop sending her AI-generated videos of her father, who died in 2014. "I concur concerning my father. Please stop," she wrote. Despite recognising that Sora had generated "disrespectful" content, which included edited versions of Dr King's famous "I have a dream" speech, OpenAI continues to allow video generation featuring other historic figures and celebrities as varied as John F. Kennedy, Queen Elizabeth II and Professor Stephen Hawking. It says it is working to "strengthen guardrails for historic figures". OpenAI says it initially has no plans to introduce advertising to monetise its Sora app but will charge for generations. Its hope was that brands would follow users to the app and start generating AI video content using their own IPs, and those of others. 
But the chance of a mass take-up by brands is doubtful while the app faces this kind of controversy, and what's most incredible is that OpenAI didn't expect it. CEO Sam Altman expressed surprise at learning that people "don't want their cameo to say offensive things or things that they find deeply problematic." If that really wasn't already obvious to him, he only had to look at the mass exodus of advertisers during Elon Musk's chaotic takeover of Twitter to see that many brands care a lot about where their assets appear and don't want them to be seen in proximity to hate speech.
[36]
OpenAI Pauses Video Generations Depicting MLK Jr On Sora
OpenAI's decision to restrict the recreation of MLK Jr's likeness sets a precedent for the protection of the personality rights of deceased public figures. Such action is particularly important when we consider that AI tools have made it ridiculously easy to puppeteer historical figures into saying or doing almost anything. For context, when users can generate videos of Martin Luther King Jr. making monkey noises during his watershed 'I Have a Dream' speech, one has to wonder what is going to stop someone from creating entirely revisionist historical content through AI. In India, the judiciary's treatment of personality rights as purely personal and non-transferable made sense in a pre-AI era, when most unauthorised uses were limited to films or commercial endorsements that required significant resources. However, present-day AI technology has democratised the ability to create convincing deepfakes, making posthumous exploitation both easier and more harmful. OpenAI has paused Sora visual generations depicting Martin Luther King Jr's (MLK Jr) likeness, at MLK Jr's estate's request. This comes after reports of Sora users generating disrespectful recreations of MLK's likeness, including showing him making monkey noises during his famous 'I Have a Dream' speech, according to a report by the Washington Post. The report mentions many other disrespectful depictions of deceased public figures, including actor Robin Williams, activist Malcolm X, and painter Bob Ross. In some cases, such as those of MLK Jr and Williams, their children have posted on Instagram, urging people not to send them AI-generated videos of their fathers. "To watch the legacies of real people be condensed down to 'this vaguely looks and sounds like them so that's enough', just so people can churn out horrible TikTok slop, puppeteering them is maddening," Williams' daughter said in an Instagram story.
Besides restricting the recreation of MLK Jr's likeness, OpenAI says that it is strengthening guardrails for historical figures. "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," the company said in a statement. Importantly, it mentioned that authorised representatives and estate owners can request the AI company to prevent Sora from using their likeness. With AI video and image generation tools allowing people to generate the likeness of celebrities, concerns around deepfakes and personality rights protection have skyrocketed. Many Indian celebrities, including Arijit Singh, Aishwarya Rai Bachchan, Karan Johar, and Jackie Shroff, have secured legal protections for their personality rights by approaching the judiciary. However, with deceased individuals, the question of personality rights becomes trickier, as courts have denied the transfer of rights of privacy, reputation, and personality. For instance, in 2021, the Madras High Court (HC) held that the right to privacy and the reputation earned by a person during their lifetime extinguish after death. This came in the context of a case filed by former Tamil Nadu Chief Minister (CM) J. Jayalalithaa's niece J. Deepa to prevent the release of a film based on Jayalalithaa's life. "After the death of a person, the reputation earned cannot be inherited like a movable or immovable property by his or her legal heirs. Such personality right, reputation or privacy enjoyed by a person during his lifetime comes to an end after his or her lifetime," the Madras HC noted. Similarly, in 2023, actor Sushant Singh Rajput's father filed an application with the Delhi HC looking to protect the late actor's personality rights against unauthorised use in a film depicting his life and the events leading up to his death.
The court rejected the application, coming to the same conclusion as the Madras HC. And while these rights do not get passed down to heirs, if one shows that they are commercial in nature or have all the trappings of a trademark, courts may protect them from commercial exploitation, according to a blog post by law firm Anand and Anand. The law firm cited the Delhi HC's decision in February 2025 against the misuse of Ratan Tata's name, his company's logo, and his photograph. In that case, the court said that Tata's name is a well-known personal name/mark, which needs protection from any unauthorised use. While Indian courts did not uphold the transferability of personality rights of the deceased in the Jayalalithaa and Rajput cases, other jurisdictions are thinking differently about the subject. In the US, a group of senators has proposed the "Nurture Originals, Foster Art, and Keep Entertainment Safe Act", better known as the NO FAKES Act, which seeks to protect the rights of the deceased against AI replicas. This legislation gives people the right to authorise the use of their voice or visual likeness in a digital replica through a written license agreement. Notably, this right does not end with a person's death. Instead, the law proposes to pass it on to a person's executors, heirs, or licensees. The law allows for transfer of the right to a person's likeness through any legal means, or through inheritance as personal property under the applicable laws of intestate succession (succession when the dead person has not left behind any will). And in case a deceased person licenses their rights before death, the license holder can hold onto these rights for 10 years after the person's death, with scope for extension in five-year increments.
[37]
OpenAI cracks down on Sora 2 deepfakes after Bryan Cranston, actors'...
Sam Altman's OpenAI said it will crack down on unauthorized deepfakes spit out by its Sora 2 text-to-video generator after complaints by "Breaking Bad" actor Bryan Cranston and other public figures. A flood of realistic-looking, unauthorized deepfake videos hit social media sites after OpenAI launched the upgraded Sora 2 on Sept. 30. Cranston, 69, personally "brought the issue to the attention of SAG-AFTRA," which pushed OpenAI to take action, the prominent actors' union stated Monday. "I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work respect our personal and professional right to manage replication of our voice and likeness," Cranston added in the joint statement with the union and OpenAI. OpenAI has been scrambling to address the concerns of critics who say Sora is misusing the voices and images of celebrities without proper credit or compensation. Cranston recently popped up in a fake video that showed him taking a selfie with Michael Jackson. Last week, the tech giant blocked users from creating deepfakes of Martin Luther King Jr. after his estate blasted what it described as "disrespectful depictions" of the late civil rights icon. Zelda Williams, the daughter of the late actor Robin Williams, was previously forced to beg the public to stop using Sora to create deepfakes of the beloved comedian. OpenAI says it has strengthened enforcement of an "opt-in" policy requiring public figures to give their permission before Sora can use their voices and likenesses in AI-generated videos. The company has also "committed to responding expeditiously to any complaints" regarding potential violations going forward, according to Monday's statement.
Hollywood talent agencies CAA and UTA - which earlier warned that potential infringement by Sora "exposes our clients and their intellectual property to significant risk" - also signed the statement, saying their talks with OpenAI have resulted in "productive collaboration." Altman reiterated his company's support for the "NO FAKES Act," federal legislation meant to block AI videos that depict individuals without their consent. "OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness," Altman said in Monday's statement. "We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers."
[38]
IShowSpeed Slams Sora 2 Deepfakes That Feature the Streamer Kissing Fans, Racing Animals, and Declaring He Is Gay - IGN
IShowSpeed has slammed AI-powered Sora 2 deepfakes that show the streaming star doing and saying things he hasn't. During a recent livestream, the hugely popular internet personality watched increasingly outlandish Sora 2 videos users had created with his likeness. After watching deepfakes in which the 20-year-old, who has 44.9 million subscribers on YouTube, kisses a fan, races against a cheetah (something he plans to do in real life in the future), and appears in a country he hasn't yet visited (Nepal), IShowSpeed came across scores of videos in which he comes out to his fans as gay. An increasingly angry IShowSpeed, real name Darren Jason Watkins Jr., tells his viewers that he's "turning this s**t off," before realising he would have to manually delete each video in which his likeness is used. "This s**t is getting turned off," he said. "No more. Why does this look too real? Bro, no, that's like, my face." "Why do I keep coming out?! Why do I keep coming out?!" OpenAI's Sora 2 app lets users create videos using the likeness of celebrities who are alive if they opt in, something IShowSpeed evidently did. During the livestream, he hit out at his chat for suggesting he do so. "That was not the right move to do," he said. "Whoever told me to make it public, chat, you're not here for my own safety, bro. I'm f***ed, chat." The last deepfake video he views online before giving up and ending the stream sees IShowSpeed show off his newborn baby alongside an unknown woman. The video describes the baby as trans. The comments come after OpenAI blocked Sora 2 from making videos portraying Dr Martin Luther King Jr, following a request from his estate. According to the BBC, the company acknowledged the app had created "disrespectful" content about the civil rights campaigner. OpenAI said it would pause images of Dr King "as it strengthens guardrails for historical figures" -- but it continues to allow people to make clips of other high profile individuals. 
Videos featuring figures such as President John F. Kennedy, Queen Elizabeth II, and Professor Stephen Hawking have been shared widely online. Earlier this month, Zelda Williams, daughter of Robin Williams and the director of 2024 horror comedy Lisa Frankenstein, issued a firm ultimatum for fans to stop sending her AI-generated videos featuring her father. "Please, just stop sending me AI videos of Dad," wrote Williams in a message posted via Instagram. "Stop believing I wanna see it or that I'll understand, I don't and I won't. If you're just trying to troll me, I've seen way worse, I'll restrict and move on. But please, if you've got any decency, just stop doing this to him and to me, to everyone even, full stop. It's dumb, it's a waste of time and energy, and believe me, it's not what he'd want." The controversy surrounding Sora 2 has also involved its use of popular characters, and has seen the Japanese government make a formal request asking OpenAI to refrain from copyright infringement after Sora 2 videos featured the likenesses of copyrighted characters from anime and video games. Sora 2, which OpenAI launched on October 1, is capable of generating 20-second long videos at 1080p resolution, complete with sound. Soon after its release, social media was flooded with videos generated by the app, many of which contained depictions of copyrighted characters including those from popular anime and game franchises such as One Piece, Demon Slayer, Pokémon, and Mario. Reuters reported on September 29 that OpenAI had contacted studios and talent agencies a week before Sora 2's launch, giving them the option to opt out. But in an October 4 blog post on Sora 2 (previously reported on by IGN), OpenAI CEO Sam Altman said that changes would be made to the fledgling video generation app in the near future. 
"First, we will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls," Altman confirmed, adding OpenAI will give rightsholders "the ability to specify how their characters can be used (including not at all)." In the same blog post, Altman called Sora 2 videos that use copyrighted characters "interactive fan fiction." Nintendo has warned it would take "necessary actions against infringement of our intellectual property rights." Disney and Universal have sued the AI image creator Midjourney, alleging that the company improperly used and distributed AI-generated characters from their movies. Disney also sent a cease and desist letter to Character.AI, warning the startup to stop using its copyrighted characters without authorization. "A lot of the videos that people are going to generate of these cartoon characters are going to infringe copyright," Mark Lemley, a professor at Stanford Law School, told CNBC. "OpenAI is opening itself up to quite a lot of copyright lawsuits by doing this." Last month, the famously litigious The Pokémon Company formally responded to the use of Pokémon TV hero Ash Ketchum and the series' theme tune by the Department of Homeland Security, as part of a video showing people being arrested and handcuffed by law enforcement agents. "Our company was not involved in the creation or distribution of this content," a spokesperson told IGN, "and permission was not granted for the use of our intellectual property." Image credit: IShowSpeed / YouTube.
[39]
OpenAI's video tool 'pauses' MLK Jr. depictions after family rips...
OpenAI is clamping down on deepfake videos of Martin Luther King Jr. on its video tool Sora 2 after his family complained about "disrespectful depictions" of the iconic leader. "Some users generated disrespectful depictions of Dr. King's image," the company said Thursday in a joint statement with the King Estate on social media. "So at King, Inc.'s request, OpenAI has paused generations depicting Dr. King as it strengthens guardrails for historical figures." The company said "strong free speech interests" are a concern, but ultimately, public figures and their families should have control over the use of their likenesses. It added that representatives and estate owners can contact OpenAI to request their images not be used in AI-generated videos. Bernice A. King, the late civil rights leader's youngest daughter, contacted OpenAI about taking down the deepfake videos, according to the statement. Sora 2 - a text-to-video AI app created by Sam Altman's OpenAI - launched late last month. The app's capacity to instantly create lifelike videos has sparked backlash from critics who warn there aren't enough safety barriers in place. The app has been used to create mocking, cruel videos of deceased celebrities long after their deaths. Some clips show physicist Stephen Hawking, who died in 2018, being hoisted by a forklift into the air and then knocked to the ground by WWE wrestlers. Others depict Elvis Presley, who died in 1977, stumbling and collapsing off the stage in a fake video of his final performance. The late comedian Robin Williams' daughter, Zelda Williams, has spoken out about "disturbing" images, audio and video clips made of her deceased father's likeness with AI. "I've witnessed for YEARS how many people want to train these models to create/recreate actors who cannot consent, like Dad. This isn't theoretical, it is very very real," she wrote in an Instagram post earlier this month.
"I've already heard AI used to get his 'voice' to say whatever people want and while I find it personally disturbing, the ramifications go far beyond my own feelings. Living actors deserve a chance to create characters with their choices, to voice cartoons, to put their HUMAN effort and time into the pursuit of performance." Hollywood unions and talent agencies have taken aim at OpenAI over its lack of safety guardrails for actors, as well as its creation of a so-called AI actress, Tilly Norwood. "The world must be reminded that what moves us isn't synthetic. It's human," SAG-AFTRA President Sean Astin and National Executive Director Duncan Crabtree-Ireland said in a joint statement this month. They slammed tech firms for creating "a sensationalized narrative, designed to manipulate the public and make space for continued exploitation." Actor Scarlett Johansson has accused OpenAI of releasing an AI chatbot with a voice that sounded "eerily similar to mine" - even after she declined to license her voice for a virtual assistant. Over the weekend, Tom Hanks warned there is an AI-generated video of his likeness circulating promoting "some dental plan" that he has "nothing to do with."
[40]
OpenAI restricts Sora from creating videos of celebrities and public figures without consent
The move aligns with the proposed U.S. No Fakes Act aimed at preventing digital impersonation. OpenAI has rolled out stronger protections for its video generation model, Sora, after massive criticism over the use of celebrity likenesses without consent. The company, in collaboration with the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), actor Bryan Cranston, and several major talent agencies, confirmed the move in a joint statement earlier this week. The decision follows reports of multiple AI-generated clips depicting real-life public figures, including actors, scientists, and historical personalities, without their approval. Some of these videos, created using Sora, went viral for their realism but also sparked ethical debates about digital consent and artistic misuse. Cranston, who had raised concerns about his image and voice being replicated in Sora-generated videos, welcomed OpenAI's updated policies. "I was deeply concerned not just for myself, but for all performers whose identities can be misused. I appreciate OpenAI's response and hope this leads to industry-wide respect for our likeness and voice rights," he said. The joint statement, signed by SAG-AFTRA, United Talent Agency, Creative Artists Agency, and the Association of Talent Agents, marks a significant moment for AI governance in entertainment. OpenAI's strengthened rules will prevent the platform from generating or reproducing the likeness of any celebrity or public figure who hasn't given explicit permission. The move also aligns with growing legislative attention toward digital impersonation. The No Fakes Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act), a pending US bill, seeks to safeguard artists and performers from unauthorized use of their likeness, voice, or creative work through AI tools.
OpenAI CEO Sam Altman stated, "OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness. We were early supporters of the No Fakes Act and will continue to stand behind the rights of performers."
OpenAI's AI video generator Sora sparks controversy over unauthorized use of celebrity likenesses and historical figures, leading to new restrictions and agreements with actors' unions.
OpenAI's AI video generator, Sora, has triggered significant backlash over the unauthorized creation of deepfake videos featuring celebrities and historical figures. The platform's ability to generate realistic depictions has prompted widespread ethical concerns from various stakeholders, including actors' unions and civil rights organizations.
In response to the growing controversy, OpenAI has committed to implementing stricter safeguards against the unauthorized use of likenesses. Actor Bryan Cranston and SAG-AFTRA voiced concerns, leading OpenAI to strengthen its guardrails around voice and likeness replication. The company also paused the generation of videos resembling Dr. Martin Luther King Jr. at the request of his estate, citing "disrespectful depictions".
OpenAI has also shifted its policy from an opt-out to an opt-in model for rights holders concerning their intellectual property within Sora, reflecting a rapidly evolving stance on content creation ethics.
The rapid adoption of Sora highlights the complex challenges in moderating AI-generated content and the urgent need for robust ethical guidelines. Legal frameworks, such as California's postmortem privacy rights for performers, are being scrutinized as AI's capabilities blur the lines between reality and artificiality. This ongoing debate underscores the critical need for collaboration between AI developers, content creators, and regulators to establish clear standards for responsible AI innovation.
[4]