3 Sources
[1]
Google reports scale of complaints about AI deepfake terrorism content to Australian regulator
SYDNEY (Reuters) - Google has informed Australian authorities it received more than 250 complaints globally over nearly a year that its artificial intelligence software was used to make deepfake terrorism material. The Alphabet-owned tech giant also said it had received dozens of user reports warning that its AI program, Gemini, was being used to create child abuse material, according to the Australian eSafety Commission.

Under Australian law, tech firms must periodically supply the eSafety Commission with information about harm minimisation efforts or risk fines. The reporting period covered April 2023 to February 2024.

Since OpenAI's ChatGPT exploded into the public consciousness in late 2022, regulators around the world have called for better guardrails so AI can't be used to enable terrorism, fraud, deepfake pornography and other abuse. The Australian eSafety Commission called Google's disclosure "world-first insight" into how users may be exploiting the technology to produce harmful and illegal content. "This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated," eSafety Commissioner Julie Inman Grant said in a statement.

In its report, Google said it received 258 user reports about suspected AI-generated deepfake terrorist or violent extremist content made using Gemini, and another 86 user reports alleging AI-generated child exploitation or abuse material. It did not say how many of the complaints it verified, according to the regulator.

Google used hash-matching - a system of automatically matching newly uploaded images with already-known images - to identify and remove child abuse material made with Gemini. But it did not use the same system to weed out terrorist or violent extremist material generated with Gemini, the regulator added.
The regulator has fined Telegram and Twitter, later renamed X, for what it called shortcomings in their reports. X has lost one appeal about its fine of A$610,500 ($382,000) but plans to appeal again. Telegram also plans to challenge its fine.
[2]
Google reports scale of complaints about AI deepfake terrorism content to Australian regulator
SYDNEY, March 6 (Reuters) - Google has informed Australian authorities it received more than 250 complaints globally over nearly a year that its artificial intelligence software was used to make deepfake terrorism material. The Alphabet-owned (GOOGL.O) tech giant also said it had received dozens of user reports warning that its AI program, Gemini, was being used to create child abuse material, according to the Australian eSafety Commission.

Under Australian law, tech firms must periodically supply the eSafety Commission with information about harm minimisation efforts or risk fines. The reporting period covered April 2023 to February 2024.

Since OpenAI's ChatGPT exploded into the public consciousness in late 2022, regulators around the world have called for better guardrails so AI can't be used to enable terrorism, fraud, deepfake pornography and other abuse. The Australian eSafety Commission called Google's disclosure "world-first insight" into how users may be exploiting the technology to produce harmful and illegal content. "This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated," eSafety Commissioner Julie Inman Grant said in a statement.

In its report, Google said it received 258 user reports about suspected AI-generated deepfake terrorist or violent extremist content made using Gemini, and another 86 user reports alleging AI-generated child exploitation or abuse material. It did not say how many of the complaints it verified, according to the regulator.

Google used hash-matching - a system of automatically matching newly uploaded images with already-known images - to identify and remove child abuse material made with Gemini. But it did not use the same system to weed out terrorist or violent extremist material generated with Gemini, the regulator added.
The regulator has fined Telegram and Twitter, later renamed X, for what it called shortcomings in their reports. X has lost one appeal about its fine of A$610,500 ($382,000) but plans to appeal again. Telegram also plans to challenge its fine. Reporting by Byron Kaye; Editing by Edwina Gibbs
[3]
Google Gemini linked to AI-generated deepfake porn, terrorist content
Google's generative artificial intelligence bot Gemini is potentially being used to create deepfake child pornography as well as terrorist and violent extremist material, according to data in a report from the eSafety Commissioner. The report reveals that Google received 86 reports from users around the world about suspected synthetic child sexual exploitation and abuse material generated by the AI text and image generator from April 2023 to February 2024. Over the same period, Gemini users reported 258 instances of potential deepfake terrorist and violent extremist material.
Google discloses receiving over 250 global complaints about AI-generated deepfake terrorism content and dozens of reports about child abuse material created using its Gemini AI, raising concerns about AI safety and regulation.
In a groundbreaking disclosure to the Australian eSafety Commission, Google has reported receiving over 250 complaints globally about its artificial intelligence software being used to create deepfake terrorism content. The tech giant also acknowledged dozens of user reports alleging the misuse of its AI program, Gemini, for generating child abuse material 1.
The report, covering the period from April 2023 to February 2024, provides a "world-first insight" into how users may be exploiting AI technology to produce harmful and illegal content. Specifically, Google received:

- 258 user reports about suspected AI-generated deepfake terrorist or violent extremist content made using Gemini
- 86 user reports alleging AI-generated child sexual exploitation or abuse material
This disclosure is part of Google's compliance with Australian law, which requires tech firms to periodically supply the eSafety Commission with information about harm minimization efforts or face potential fines. The revelation underscores the growing concern among global regulators about the need for better safeguards to prevent AI from being used for terrorism, fraud, deepfake pornography, and other forms of abuse 1.
Google has implemented measures to address some of these issues:

- It used hash-matching, which automatically compares newly uploaded images against already-known images, to identify and remove child abuse material made with Gemini
- According to the regulator, however, it did not apply the same system to terrorist or violent extremist material generated with Gemini
The Australian eSafety Commission has taken action against other tech companies for perceived shortcomings in their reports:

- It fined Telegram and Twitter (later renamed X); X has lost one appeal of its A$610,500 ($382,000) fine but plans to appeal again
- Telegram also plans to challenge its fine
eSafety Commissioner Julie Inman Grant emphasized the critical need for companies developing AI products to build in and test the efficacy of safeguards to prevent the generation of harmful material. This incident highlights the ongoing challenges in balancing technological innovation with responsible AI development and use 3.