On September 30, OpenAI launched Sora 2, the second version of the AI video and audio generation tool it first released in 2024. The update allows users to generate copyrighted content even when the prompt does not explicitly request it, raising significant concerns about the platform's liability and its responsibility to respect the intellectual property rights, including the moral rights, of original content creators.
According to people familiar with Sora 2, OpenAI had been alerting talent agencies and studios to Sora 2 and its opt-out mechanisms for over a week ahead of the launch, a Wall Street Journal report reveals. Under this opt-out process, original owners must explicitly request that their work be excluded from the GenAI content Sora produces.
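In practice, the opt-out flow described above amounts to checking each generation request against a registry of rights holders who have asked to be excluded. The sketch below is a hypothetical minimal version: the registry entries, function names, and naive substring matching are all my assumptions for illustration, not OpenAI's actual implementation.

```python
# Minimal sketch of an opt-out registry check. Everything here is
# hypothetical: OpenAI has not published how its opt-out list works.

# Rights holders who have explicitly requested exclusion (illustrative entries).
OPT_OUT_REGISTRY = {"studio ghibli", "nintendo", "viacom"}

def blocked_terms(prompt: str, registry=OPT_OUT_REGISTRY) -> list[str]:
    """Return registry entries mentioned in the prompt (naive substring match)."""
    lowered = prompt.lower()
    return sorted(term for term in registry if term in lowered)

def may_generate(prompt: str) -> tuple[bool, list[str]]:
    """Allow generation only when no opted-out rights holder is referenced."""
    hits = blocked_terms(prompt)
    return (len(hits) == 0, hits)
```

Even this toy version exposes the structural weakness critics point to: substring matching misses paraphrases and visual styles entirely, so the burden of detecting violations still falls on the rights holder rather than the platform.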
Sora is currently live in the US and Canada, and its feed uses recommendation algorithms that track ChatGPT histories (unless users disable them in Sora's Data Control Settings) along with user activity such as posts, likes, comments, remixes, follows and location, as well as author and safety signals.
The AI company also says that the app was "explicitly designed to maximise creation, not consumption," which is even more concerning, as the intention is to create ever more AI-made content, adding to the flood of AI slop.
As mentioned above, OpenAI treats all information and content as copiable unless the copyright owner opts out. Such default inclusion has become quite common recently, when ideally the default should be the reverse: AI companies would negotiate contracts with original content makers, or license their content, before using it in GenAI content generation.
Additionally, OpenAI's interpretation of copyright and likeness isn't clear. Jason Kwon, Chief Strategy Officer of OpenAI, says the company's approach "has been to treat likeness and copyright distinctly," as quoted by the Wall Street Journal. The unresolved question is just how distinct that separation really is.
Additionally, consent for using a person's likeness is obtained through Cameos, which lets Sora users generate AI content from the characters built by other users. OpenAI is referring here to personality rights, which protect a person's voice, name, signature, image, likeness, vocal style, articulation, and distinctive attire and appearance. However, the moral rights of an artist or individual can also be central to the debate around the exploitation of likeness or personality through AI tools, as seen in the AI-made alterations to the Tamil version of Dhanush's Raanjhanaa, released without the permission of the original creator, director Aanand L. Rai.
So, how can OpenAI make sure an individual's personality is not exploited? What if my "consent-based likeness" is used to propagate beliefs I would stand against? Such situations are even more concerning given the recent report from the US-based Tech Transparency Project, which highlighted lapses in Meta's enforcement policies and the prevalent use of politically motivated deepfake videos in advertisements on Meta platforms.
Google-owned YouTube built its Content ID system to address widespread copyright infringement on its platform. The system, originally called "Video Identification," began development in 2007 as a direct response to mounting legal pressure and billion-dollar lawsuits.
One such case was the $1 billion lawsuit filed by the US-based media conglomerate Viacom in March 2007. The media company accused YouTube of "brazen" and "massive" infringement by allowing users to upload more than 160,000 unauthorised clips of Viacom programming, viewed 1.5 billion times. The then-Google CEO Eric Schmidt announced that the company would build an automated copyright detection system to resolve such disputes.
During the ANI vs YouTubers controversy, MediaNama founder-editor Nikhil Pahwa described YouTube's Content ID system as a tool for processing copyright takedown notices. He also emphasised that YouTube acts as an intermediary: it takes down content only after receiving copyright takedown notices from the original owners, and it offers a dispute mechanism for uploaders. YouTube takes down content in accordance with Rule 3(1) of The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Therefore, OpenAI's Sora urgently needs a YouTube Content ID-type system to address copyright-related issues. At present, copyright holders must approach OpenAI directly to request takedowns through its form, placing the burden on creators to police violations rather than on the platform to prevent them. A proactive rights management system would better protect original works and prevent unauthorised content from circulating on social media platforms.
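The core idea behind a Content ID-style system is perceptual fingerprinting: reduce each video frame to a compact hash, then measure how close an uploaded clip's hashes are to a reference catalogue registered by rights holders. The sketch below is a toy illustration using an average hash over grayscale frames; the function names and threshold are my assumptions, and YouTube's production system is far more sophisticated (it fingerprints audio and video at scale and handles transformations like cropping and re-encoding).

```python
# Toy illustration of perceptual fingerprint matching, the idea behind
# Content ID. Frames are modelled as 2D lists of grayscale values (0-255);
# this is a sketch, not any platform's real algorithm.

def average_hash(pixels: list[list[int]]) -> list[int]:
    """Hash a frame: 1 for each pixel brighter than the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a: list[int], b: list[int]) -> int:
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def match_fraction(clip_hashes, reference_hashes, threshold: int = 5) -> float:
    """Fraction of clip frames within `threshold` bits of any reference frame."""
    if not clip_hashes:
        return 0.0
    hits = sum(
        1 for ch in clip_hashes
        if any(hamming(ch, rh) <= threshold for rh in reference_hashes)
    )
    return hits / len(clip_hashes)
```

The useful property is that small edits such as brightness shifts or re-encoding barely change such hashes, so near-duplicates still match. That is what lets a rights holder register a work once and have re-uploads flagged automatically, instead of hunting for each violation and filing a takedown form.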
Additionally, platforms like YouTube, OpenAI and other AI services must take personality rights seriously, because personality can extend beyond voice and appearance to include a person's thoughts, ideas and expressions. Protecting these rights preserves an individual's dignity and prevents misrepresentation by GenAI content.
If these AI models can replicate or distort not only original content but also a person's original or intended thoughts, the damage to that person can be severe, especially if the individual is a celebrity. Without mechanisms to safeguard these rights, AI risks misrepresenting individuals and undermining their control over their own identity.
Similarly, default opt-ins must end, and AI platforms must rethink their approach to copyright. Respect for the original creator should remain central, not sacrificed to scale or engagement. Failure to act exposes industries like anime and cartoons to particular harm, since AI can mimic their styles with striking accuracy, threatening creators' livelihoods. Unlike human likenesses, which often reveal flaws in AI replication, animated works can be reproduced with near-perfect fidelity.