Curated by THEOUTPOST
On Sat, 22 Feb, 12:03 AM UTC
2 Sources
[1]
Deepfakes can ruin lives and livelihoods -- would owning the 'rights' to our own faces and voices help?
Not that long ago, the term "deepfake" wasn't in most people's vocabularies. Now, it is not only commonplace, but is also the focus of intense legal scrutiny around the world. Known in legal documents as "digital replicas," deepfakes are created by artificial intelligence (AI) to simulate the visual and vocal appearance of real people, living or dead.

Unregulated, they can do a lot of damage, including financial fraud (already a problem in New Zealand), political disinformation, fake news, and the creation and dissemination of AI-generated pornography and child sexual abuse material.

For professional performers and entertainers, the proliferation and increasing sophistication of deepfake technology could demolish their ability to control and derive income from their images and voices. And deepfakes might soon take away jobs: why employ a professional actor when a digital replica will do?

One possible solution involves giving individuals the ability to enforce intellectual property (IP) rights in their own image and voice. The United States is currently debating such a move, and New Zealand lawmakers should be watching closely.

Owning your own likeness

Remedies already being discussed in New Zealand include extending prohibitions in the Harmful Digital Communications Act to cover digital replicas that do not depict a victim's actual body. Using (or amending) the Crimes Act, the Fair Trading Act and the Electoral Act would also be helpful. At the same time, there will be political pressure to ensure regulation does not stymie investment in AI technologies -- a concern raised in a 2024 cabinet paper.

Legislation introduced to the US Congress last year -- the Nurture Originals, Foster Art, and Keep Entertainment Safe Bill -- proposes a new federal intellectual property right that individual victims can use against creators and disseminators of deepfakes. Known informally as the "No Fakes Bill," the legislation has bipartisan and industry support, including from leading entertainment worker unions. The US Copyright Office examined the current state of US law and concluded that enforceable rights were "urgently needed".

From the New Zealand perspective, the No Fakes Bill contains both helpful ideas and possible pitfalls. As we discuss in a forthcoming paper, its innovations include expanding IP protections to "everyday" individuals -- not just celebrities. All individuals would have the right to seek damages and injunctions against unlicensed digital replicas, whether they're in video games, pornographic videos, TikTok posts or remakes of movies and television shows.

But these protections may prove illusory because the threshold for protection is so high. The digital replica must be "readily identifiable as the voice or visual likeness of an individual," but it's not clear how identifiable the individual victim of a deepfake needs to be. Well-known New Zealand actors such as Anna Paquin and Cliff Curtis would certainly qualify. But would a New Zealand version of the bill protect an everyday person, "readily identifiable" only to family, friends and workmates?

Can you license a digital replica?

Under the US bill, the new IP rights can be licensed. The bill does not ban deepfakes altogether, but gives individuals more control over the use of their likenesses. An actor could, for example, license an advertising company to make a digital replica to appear in a television commercial. Licenses must be in writing and signed, and the permitted uses must be specified.
For living individuals, such a license can last only ten years. So far, so good. But New Zealand policy analysts should look carefully at the scope of any licensing provisions. The proposed IP right is "licensable in whole or in part." Depending on courts' interpretation of "in whole," individuals could unknowingly sign away all uses of their images and voices.

The No Fakes Bill is also silent on the reputational interests of individuals who license others to use their digital replicas. Suppose a performing artist licensed their digital replica for use in AI-generated musical performances. They should not, for example, have to put up with being depicted singing a white supremacist anthem, or other unsanctioned uses that would impugn their dignity and standing.

Protecting parody and satire

On the other side of the ledger, the No Fakes Bill contains freedom of expression safeguards for good faith commentary, criticism, scholarship, satire and parody. The bill also protects internet service providers (ISPs) from liability if they quickly remove "all instances" of infringing material once notified about it. This is useful language that might be adopted in any New Zealand legislation. Also, the parody and satire defense would be an advance on New Zealand's copyright law, which currently contains no equivalent exception.

But the US bill contains no measures empowering victims to require ISPs to block local subscribers' access to online locations that peddle deepfakes. Known as "site-blocking orders," these injunctions are available in at least 50 countries, including Australia. But New Zealand and the US remain holdouts. For individual victims of deepfakes circulating on foreign websites that are accessible in New Zealand, site-blocking orders could offer the only practical relief.

The No Fakes Bill is by no means a perfect or comprehensive solution to the deepfakes problem. Many different weapons will be needed in the legal and policy armory -- including obligations to disclose when digital replicas are used. Even so, creating an IP right could be a useful addition to a suite of measures aimed at reducing the economic, reputational and emotional harms deepfakes can inflict.
A new US bill aims to give individuals intellectual property rights over their likeness to protect against deepfakes, raising questions about its effectiveness and potential adoption in other countries like New Zealand.
The term "deepfake," once unfamiliar to most, has now become a subject of intense legal scrutiny worldwide. These AI-generated digital replicas can simulate the visual and vocal appearance of real people, both living and deceased 12. The unregulated use of deepfakes poses significant risks, including financial fraud, political disinformation, fake news, and the creation of AI-generated pornography and child sexual abuse material.
For professional performers and entertainers, the proliferation of deepfake technology threatens their ability to control and monetize their images and voices. There are concerns that deepfakes could potentially replace human actors in various media, raising questions about job security in the entertainment industry 12.
In response to these challenges, the United States Congress has introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe Bill, informally known as the "No Fakes Bill." This legislation proposes a new federal intellectual property right that individuals can use against creators and disseminators of deepfakes 12.
Key features of the bill include:
- A federal intellectual property right in a person's voice and visual likeness, extending to "everyday" individuals as well as celebrities
- The right to seek damages and injunctions against unlicensed digital replicas, whether in video games, pornographic videos, TikTok posts or remakes of movies and television shows
- Licensing provisions requiring licenses to be in writing, signed and limited to specified uses, and lasting no more than ten years for living individuals
- Freedom of expression safeguards for good faith commentary, criticism, scholarship, satire and parody
- A safe harbor for internet service providers that quickly remove "all instances" of infringing material once notified
While the bill offers promising solutions, it also has potential drawbacks:
- The replica must be "readily identifiable as the voice or visual likeness of an individual," a threshold that may leave everyday people unprotected
- The right is "licensable in whole or in part," so individuals could unknowingly sign away all uses of their images and voices
- The bill is silent on the reputational interests of individuals who license out their digital replicas
- It provides no site-blocking orders against foreign websites that traffic in deepfakes
As the US debates this legislation, other countries, including New Zealand, are closely watching its development. New Zealand is considering its own measures to combat deepfakes, such as:
- Extending prohibitions in the Harmful Digital Communications Act to cover digital replicas that do not depict a victim's actual body
- Using (or amending) the Crimes Act, the Fair Trading Act and the Electoral Act
However, there is also pressure to ensure that regulations do not hinder investment in AI technologies, as highlighted in a 2024 cabinet paper.
While the No Fakes Bill is not a comprehensive solution to the deepfake problem, creating an IP right could be a valuable addition to a suite of measures aimed at reducing the economic, reputational, and emotional harms deepfakes can inflict. As technology continues to advance, a multi-faceted approach, including disclosure obligations for digital replicas, will be necessary to address the challenges posed by deepfakes.
A recent experiment by ABC News Verify highlights the ease and affordability of creating AI-generated voice clones, raising concerns about the impact of deepfakes on democracy and personal identity. The article explores the inadequacies of current copyright laws and proposes the establishment of "personality rights" as a potential solution.
2 Sources
A bipartisan group of U.S. senators has introduced legislation aimed at protecting individuals and artists from AI-generated deepfakes. The bill seeks to establish legal safeguards and address concerns about AI exploitation in various sectors.
5 Sources
The U.S. Copyright Office has called for urgent legislation to address the growing concerns surrounding AI-generated deepfakes and impersonation. The office emphasizes the need for a new federal law to protect individuals' rights and regulate the use of AI in content creation.
2 Sources
Bollywood stars are raising awareness about the dangers of deepfake technology in the entertainment industry. Celebrities are calling for legal action and increased public awareness to combat this growing threat.
2 Sources
YouTube announces support for the NO FAKES Act and expands its AI-generated content detection technology to protect top creators' likenesses, as the debate over AI-generated replicas intensifies.
4 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved