Google Reports AI-Generated Deepfake Terrorism and Child Abuse Content Complaints to Australian Regulator


Google discloses receiving over 250 global complaints about AI-generated deepfake terrorism content and dozens of reports about child abuse material created using its Gemini AI, raising concerns about AI safety and regulation.


Google Reveals AI Misuse Complaints to Australian Regulator

In a groundbreaking disclosure to the Australian eSafety Commission, Google has reported receiving more than 250 complaints globally about its artificial intelligence software being used to create deepfake terrorism content. The tech giant also acknowledged dozens of user reports alleging the misuse of its AI program, Gemini, for generating child abuse material.[1]

Scope and Scale of AI Misuse

The report, covering the period from April 2023 to February 2024, provides a "world-first insight" into how users may be exploiting AI technology to produce harmful and illegal content. Specifically, Google received:

  • 258 user reports about suspected AI-generated deepfake terrorist or violent extremist content made using Gemini
  • 86 user reports alleging AI-generated child exploitation or abuse material [2]
Regulatory Context and Implications

This disclosure is part of Google's compliance with Australian law, which requires tech firms to periodically supply the eSafety Commission with information about harm minimization efforts or face potential fines. The revelation underscores the growing concern among global regulators about the need for better safeguards to prevent AI from being used for terrorism, fraud, deepfake pornography, and other forms of abuse.[1]

Google's Response and Mitigation Efforts

Google has implemented hash-matching, a system that automatically compares newly uploaded images against a database of already-known images, to identify and remove child abuse material made with Gemini. However, the same system is not currently used to detect terrorist or violent extremist material generated with Gemini.[2]

Regulatory Action and Industry Impact

The Australian eSafety Commission has taken action against other tech companies for perceived shortcomings in their reports:

  • Telegram and Twitter (now X) have been fined
  • X has lost one appeal against its fine of A$610,500 ($382,000) but plans to appeal again
  • Telegram also intends to challenge its fine [1]

Call for Enhanced AI Safeguards

eSafety Commissioner Julie Inman Grant emphasized the critical need for companies developing AI products to build in safeguards and test their efficacy to prevent the generation of harmful material. This incident highlights the ongoing challenges in balancing technological innovation with responsible AI development and use.[3]
