2 Sources
[1]
Deepfake legislation: Denmark takes action
The World Economic Forum's Global Coalition for Digital Safety aims to accelerate public-private cooperation to tackle harmful online content, including deepfakes, and promote digital media literacy. Deepfakes can range from funny and absurd to manipulative and dangerous. In Denmark, the government is taking action, aiming to strengthen its copyright law to prevent the creation and sharing of AI-generated deepfakes. The amendment, believed to be the first of its kind in Europe, is designed to protect the rights of individuals over their identities, including their appearance and voice. With cross-party support, the government hopes to submit the amendment in the autumn, suggesting that preventing deepfakes is considered a matter of urgency. So just how threatening are deepfakes, and what can policymakers do about them?

Deepfakes use artificial intelligence (AI) technology to create highly realistic fake images, videos and audio recordings. The term combines "deep learning" and "fake" and describes both the AI technology used and the resulting content. Deepfakes either alter existing content - like replacing Michael J. Fox's face with Tom Holland's in clips from Back to the Future - or generate new content showing someone saying or doing something they didn't. While superimposing faces in a film scene may seem innocuous at first glance, it still challenges the individual's right to their image. US actors went on strike over this right in 2023, bringing film and TV productions to a standstill and securing the industry's commitment that, in future, any AI use of actors' images would require consent.

A more concerning use of deepfakes is circulating fake news, as in the case of deepfakes of former US President Joe Biden and Ukrainian President Volodymyr Zelenskyy. Making false messages appear to come from a trustworthy source lends them a high level of credibility. But not all deepfake attacks are politically motivated - financial fraud and cybercrime are other big growth areas, according to recent research by Resemble.ai. While 41% of those targeted are public figures - celebrities, politicians and business leaders - 34% are private individuals, predominantly women and children, and 18% are organizations.

Take Arup, a UK engineering firm, which fell prey to a sizable deepfake scam when criminals using an AI-generated clone of a senior manager convinced a finance employee on a video call to transfer $25 million to cybercriminals. A fraud attempt on Ferrari, using the AI-generated voice of CEO Benedetto Vigna, was narrowly thwarted by an employee asking a tricky question that only the real CEO could answer. A BBC journalist was able to bypass her bank's voice identification system with a synthetic version of her own voice.

In its deepfake security report (Q2 2025), Resemble.ai - a company specializing in detecting harmful deepfakes - reported 487 publicly disclosed deepfake attacks in the second quarter of 2025, a 41% increase from the previous quarter and more than 300% year-on-year. Direct financial losses from deepfake scams have reached nearly $350 million, with deepfake attacks doubling every six months, the company found. According to Resemble, deepfake fraud is a global issue, concentrated mainly in technologically advanced regions, with emerging markets increasingly affected. The US leads in reported incidents, but deepfake cases are also widespread across Asia Pacific and Europe, and rapidly growing in Africa.
Policymakers are stepping up in response to deepfakes, with the Take It Down Act in the United States being one of the most significant measures so far. It requires harmful deepfakes to be removed within 48 hours and imposes federal criminal penalties for their distribution. Public websites and mobile apps must establish reporting and takedown procedures. State legislators in Tennessee, Louisiana, and Florida have also passed deepfake laws.

In Europe, the European Union's Digital Services Act (DSA), which came into effect in 2024, is designed to "prevent illegal and harmful activities online and the spread of disinformation". Online service providers are now under greater EU scrutiny than ever before, and several formal investigations for non-compliance are already underway. The UK adopted a similar approach in early 2025 with the Online Safety Act.

The Danish amendment currently under consideration means that people affected by deepfake content can request its removal, and artists can demand compensation for unauthorized use of their image. This right would extend for 50 years beyond the artist's death. Online platforms like Meta and X could face substantial fines if the amended bill is passed as proposed. While the bill doesn't directly provide for compensation or criminal charges, it would lay the legal foundations for seeking damages under Danish law.

With Denmark currently holding the Presidency of the Council of the European Union, it has expressed a clear ambition to make media and culture central to European democracy, promoting initiatives like the European Democracy Shield. Its proposed amendment to domestic copyright law is therefore likely to send strong political signals to both Brussels and the wider EU.

Stressing the need for cross-regional cooperation to make the online world safer, the World Economic Forum's Global Coalition for Digital Safety aims to accelerate public-private collaboration to address harmful content, including deepfakes. It also promotes the exchange of best practices in online safety regulation and supports efforts to improve digital media literacy.
[2]
Danish bill targets AI deepfakes and identity theft
The new law would let Danes request deepfake removals and seek damages for AI misuse of their image or voice. Denmark is proposing new legislation to amend its digital copyright law, addressing the increasing threat of AI-generated deepfakes. The proposed changes seek to protect individuals' rights over their digital identities in response to the rise in deepfake attacks, which have resulted in significant financial losses and the spread of disinformation.

Deepfakes utilize artificial intelligence to produce realistic fake images, videos, and audio recordings. The technology has been employed in various ways, from creating humorous content to perpetrating financial fraud and spreading misleading information. The World Economic Forum's Global Coalition for Digital Safety is working to foster public-private cooperation to combat harmful online content, including deepfakes, and to enhance digital media literacy.

The Danish government's amendment, considered a pioneering effort in Europe, aims to safeguard individuals' control over their identities, specifically their appearance and voice. The government aims to submit the amendment in the autumn, indicating the urgency with which it views the issue. The proposal has garnered cross-party support, suggesting a consensus on the need to address deepfake-related challenges.

Deepfakes leverage AI technology, specifically "deep learning," to create manipulated or entirely fabricated content. This technology can alter existing content, as illustrated by replacing one actor's face with another in film clips. It can also generate new content, depicting individuals saying or doing things they never actually did. While some uses, such as face swapping in film scenes, may appear harmless, they raise concerns about the individual's right to their image.

In 2023, US actors went on strike to advocate for control over the use of their images by AI. The strike brought film and TV productions to a standstill, and the actors secured a commitment from the industry that any future AI use of their images would require their explicit consent. This event highlights the growing awareness and concern regarding the use of AI to manipulate or replicate individuals' likenesses without permission.

A significant threat posed by deepfakes is their use in spreading fake news. Instances include deepfakes of former US President Joe Biden and Ukrainian President Volodymyr Zelenskyy. By creating the appearance that messages originate from trustworthy sources, deepfakes can lend credibility to false information, potentially influencing public opinion and political discourse.

Resemble.ai's research indicates that financial fraud and cybercrime represent substantial growth areas for deepfake applications. While 41% of deepfake targets are public figures, including celebrities, politicians, and business leaders, 34% are private individuals, predominantly women and children. Organizations account for 18% of those targeted by deepfakes.

The UK engineering firm Arup experienced a deepfake scam that resulted in a financial loss of $25 million. Cybercriminals used an AI-generated clone of a senior manager to convince a finance employee to transfer the funds during a video call. This instance illustrates the potential for deepfakes to facilitate sophisticated financial crimes. A fraud attempt targeting Ferrari involved the use of an AI-generated voice of CEO Benedetto Vigna. An employee thwarted the attempt by asking a question that only the real CEO could answer.
A BBC journalist demonstrated the potential for voice cloning by bypassing her bank's voice identification system using a synthetic version of her own voice. These examples underscore the increasing sophistication and accessibility of deepfake technology for malicious purposes.

Resemble.ai's deepfake security report for Q2 2025 revealed a significant increase in publicly disclosed deepfake attacks. The report documented 487 such attacks, representing a 41% increase compared to the previous quarter and a more than 300% increase year-on-year. The company's findings also indicated that direct financial losses resulting from deepfake scams have reached nearly $350 million. Resemble.ai also noted that deepfake attacks are doubling every six months, highlighting the escalating nature of the threat.

Resemble.ai indicates that deepfake fraud is a global issue, particularly prevalent in technologically advanced regions. While the US leads in reported incidents, deepfake cases are also widespread across Asia Pacific and Europe, with a rapid increase observed in Africa. This global distribution underscores the need for international cooperation in addressing the challenges posed by deepfakes.

The US has implemented the Take It Down Act, requiring the removal of harmful deepfakes within 48 hours and imposing federal criminal penalties for their distribution. The Act also mandates that public websites and mobile apps establish reporting and takedown procedures. State legislators in Tennessee, Louisiana, and Florida have enacted their own deepfake laws, demonstrating a multi-faceted approach to addressing the issue.

The European Union's Digital Services Act (DSA), which came into effect in 2024, aims to prevent illegal and harmful activities online, including the spread of disinformation. The DSA has placed online service providers under increased scrutiny, and several formal investigations for non-compliance are already underway. The UK has adopted a similar approach with the Online Safety Act, implemented in early 2025.

The Danish amendment under consideration allows individuals affected by deepfake content to request its removal, and artists can demand compensation for unauthorized use of their image. The right to compensation would extend for 50 years beyond the artist's death. Online platforms like Meta and X could face substantial fines if the amended bill is passed as proposed. The bill would establish the legal foundations for seeking damages under Danish law, though it does not directly provide for compensation or criminal charges.

With Denmark holding the Presidency of the Council of the European Union, it aims to prioritize media and culture within European democracy through initiatives like the European Democracy Shield. The amendment to domestic copyright law is expected to send strong political signals to both Brussels and the wider EU. This action reflects Denmark's commitment to addressing the challenges posed by deepfakes and promoting a safer online environment.

The World Economic Forum's Global Coalition for Digital Safety aims to promote cross-regional cooperation for online safety. This includes accelerating public-private collaboration to address harmful content, including deepfakes. The coalition also facilitates the exchange of best practices in online safety regulation and supports efforts to improve digital media literacy. By fostering collaboration and knowledge sharing, the coalition aims to enhance global efforts to combat deepfakes and promote a more secure digital environment.
Denmark is set to introduce pioneering legislation to combat AI-generated deepfakes, aiming to protect individuals' digital identities and tackle the growing threat of online manipulation and fraud.
Denmark is taking a bold step in the fight against AI-generated deepfakes by proposing groundbreaking legislation to amend its digital copyright law. The Danish government aims to strengthen copyright protections to prevent the creation and sharing of AI-generated deepfakes, in a move believed to be the first of its kind in Europe 1. The amendment is designed to safeguard individuals' rights over their digital identities, including their appearance and voice 2.
Deepfakes, which use artificial intelligence to create highly realistic fake images, videos, and audio recordings, have become a significant concern globally. Such AI-generated content can range from harmless entertainment to dangerous tools for manipulation and fraud 1. Recent research by Resemble.ai reveals that deepfake attacks are doubling every six months, with direct financial losses reaching nearly $350 million 1.
While 41% of deepfake targets are public figures, 34% are private individuals, predominantly women and children, and 18% are organizations 1. Notable incidents include:
- UK engineering firm Arup, where a finance employee was persuaded on a video call by an AI-generated clone of a senior manager to transfer $25 million to cybercriminals 1.
- A fraud attempt on Ferrari using an AI-generated voice of CEO Benedetto Vigna, thwarted by an employee who asked a question only the real CEO could answer 1.
- A BBC journalist who bypassed her bank's voice identification system with a synthetic version of her own voice 1.
The deepfake issue is not confined to a single region. While the US leads in reported incidents, cases are widespread across Asia Pacific and Europe, with rapid growth observed in Africa 2. In response, various countries and organizations are taking action:
- The US Take It Down Act requires harmful deepfakes to be removed within 48 hours and imposes federal criminal penalties for their distribution, while states including Tennessee, Louisiana, and Florida have passed their own deepfake laws 1.
- The European Union's Digital Services Act, in effect since 2024, aims to prevent illegal and harmful activities online and the spread of disinformation 1.
- The UK adopted a similar approach in early 2025 with the Online Safety Act 1.
The Danish amendment, expected to be submitted in the autumn, includes several key provisions:
- People affected by deepfake content can request its removal 2.
- Artists can demand compensation for unauthorized use of their image, a right extending for 50 years beyond the artist's death 1.
- Online platforms like Meta and X could face substantial fines, and the bill would lay the legal foundations for seeking damages under Danish law 1.
The World Economic Forum's Global Coalition for Digital Safety is working to accelerate public-private collaboration in addressing harmful content, including deepfakes 1. The coalition aims to promote the exchange of best practices in online safety regulation and support efforts to improve digital media literacy 1.
As Denmark currently holds the Presidency of the Council of the European Union, its proposed legislation is likely to send strong political signals to Brussels and the wider EU, potentially influencing future policy decisions across the continent 1.