Misinformation Expert's Affidavit Compromised by ChatGPT-Generated False Citations


Stanford professor Jeff Hancock admits to using ChatGPT for organizing citations in a legal document supporting Minnesota's anti-deepfake law, leading to AI-generated false information in the affidavit.


In a twist of irony, Jeff Hancock, a Stanford professor and misinformation expert, has admitted to using OpenAI's ChatGPT to organize citations in a legal document supporting Minnesota's law against using deepfake technology to influence elections. This admission has raised questions about the integrity of the filing and highlighted the risks of relying on AI in legal contexts 1.

The Incident and Its Implications

Hancock, who founded the Stanford Social Media Lab, used GPT-4o to organize citations in his affidavit. However, the AI tool introduced errors, including references to non-existent sources, a phenomenon known in AI parlance as "hallucinations" 2. As a result, attorneys representing the challengers of the Minnesota law have labeled the document "unreliable."

Hancock's Response and Defense

In a subsequent filing, Hancock clarified his use of AI:

  1. He wrote and reviewed the substance of the declaration himself.
  2. ChatGPT was only used for organizing citations, not for drafting the document.
  3. He used Google Scholar and GPT-4o to find relevant articles, a process that inadvertently introduced the citation errors.
  4. He stands firmly behind the claims made in the affidavit, asserting they are supported by recent scholarly research 1.

Hancock emphasized that he did not intend to mislead the court and expressed regret for any confusion caused. He maintains that the core arguments of his document remain valid despite the citation errors.

Broader Implications for AI in Legal Contexts

This incident underscores ongoing concerns about AI's reliability in legal and professional settings. It follows a similar case from May 2023, in which a lawyer submitted a legal brief containing non-existent cases fabricated by ChatGPT 1. These occurrences highlight the challenges posed by AI "hallucinations," a term that has gained traction since Google CEO Sundar Pichai acknowledged AI's struggle with the issue.

The Debate on AI and Misinformation

Notably, Hancock's affidavit was filed in support of Minnesota's "Use of Deep Fake Technology to Influence an Election" law, which is currently being challenged in federal court. The law aims to combat the use of AI-generated content to mislead voters prior to elections 2. The incident has now become a focal point in the broader debate about AI's role in creating and combating misinformation.

Future Considerations

As AI technology continues to evolve rapidly, with developments like the launch of GPT-4, the incident serves as a cautionary tale. It underscores the need for closer scrutiny of AI-assisted work and potentially new guidelines for the use of AI in professional and legal contexts. Tech leaders like Elon Musk and OpenAI CEO Sam Altman have already voiced concerns about potential risks associated with advanced AI systems 1.
