Misinformation Expert's Affidavit Compromised by ChatGPT-Generated False Citations

Curated by THEOUTPOST

On Fri, 6 Dec, 8:01 AM UTC

2 Sources


Stanford professor Jeff Hancock admits to using ChatGPT for organizing citations in a legal document supporting Minnesota's anti-deepfake law, leading to AI-generated false information in the affidavit.

Misinformation Expert's Affidavit Compromised by AI-Generated Citations

In a twist of irony, Jeff Hancock, a Stanford professor and misinformation expert, has admitted to using OpenAI's ChatGPT to organize citations in a legal document supporting Minnesota's law against using deepfake technology to influence elections. This admission has raised questions about the integrity of the filing and highlighted the risks of relying on AI in legal contexts [1].

The Incident and Its Implications

Hancock, who founded the Stanford Social Media Lab, used ChatGPT-4o to streamline citations in his affidavit. However, the AI tool introduced errors, including non-existent citations and fabricated references, a phenomenon known as "hallucinations" in AI parlance [2]. This incident has led to the document being labeled "unreliable" by attorneys representing the challengers of the Minnesota law.

Hancock's Response and Defense

In a subsequent filing, Hancock clarified his use of AI:

  1. He wrote and reviewed the substance of the declaration himself.
  2. ChatGPT was only used for organizing citations, not for drafting the document.
  3. He used GPT-4o alongside Google Scholar to find relevant articles, which inadvertently introduced the citation errors.
  4. He stands firmly behind the claims made in the affidavit, asserting they are supported by recent scholarly research [1].

Hancock emphasized that he did not intend to mislead the court and expressed regret for any confusion caused. He maintains that the core arguments of his document remain valid despite the citation errors.

Broader Implications for AI in Legal Contexts

This incident underscores ongoing concerns about AI's reliability in legal and professional settings. It follows a similar case in May 2023, when a lawyer faced court sanctions after ChatGPT fabricated non-existent cases in a legal brief [1]. These occurrences highlight the challenges posed by AI "hallucinations," a term that has gained traction since Google CEO Sundar Pichai acknowledged AI's struggle with this issue.

The Debate on AI and Misinformation

Ironically, Hancock's affidavit was filed in support of Minnesota's "Use of Deep Fake Technology to Influence an Election" law, which is currently being challenged in federal court. The law aims to combat the use of AI-generated content to mislead voters prior to elections [2]. This incident has now become a focal point in the broader debate about AI's role in creating and combating misinformation.

Future Considerations

As AI technology continues to evolve rapidly, with newer models such as GPT-4o, the incident serves as a cautionary tale. It emphasizes the need for careful vetting, and potentially new guidelines, for the use of AI in professional and legal contexts. Tech leaders like Elon Musk and OpenAI CEO Sam Altman have already voiced concerns about potential risks associated with advanced AI systems [1].

Continue Reading

Stanford Professor Accused of Citing AI-Generated Studies in Deepfake Legislation Testimony

Stanford professor Jeff Hancock faces allegations of citing non-existent, potentially AI-generated studies in his expert testimony supporting Minnesota's proposed deepfake legislation, raising questions about AI's impact on legal proceedings and academic integrity.

6 Sources: Dataconomy, PC Magazine, Gizmodo, TechSpot

AI Hallucinations in Legal Filings: Morgan & Morgan Warns Lawyers of Consequences

Morgan & Morgan, a major US law firm, warns its attorneys about the risks of using AI-generated content in court filings after a case involving fake citations. The incident highlights growing concerns about AI use in the legal profession.

9 Sources: Ars Technica, U.S. News & World Report, Market Screener, Economic Times

AI-Generated Research Papers Found on Google Scholar, Raising Concerns in Academic Community

A Harvard study reveals the presence of AI-generated research papers on Google Scholar, sparking debates about academic integrity and the future of scholarly publishing. The findings highlight the challenges posed by AI in distinguishing between human-authored and machine-generated content.

4 Sources: Business Insider, ZDNet

BBC Study Reveals Significant Inaccuracies in AI-Generated News Summaries

A BBC investigation finds that major AI chatbots, including ChatGPT, Copilot, Gemini, and Perplexity AI, struggle with accuracy when summarizing news articles, raising concerns about the reliability of AI in news dissemination.

14 Sources: MediaNama, Dataconomy, ZDNet, Ars Technica

AI Detectors Fail to Accurately Identify Human-Written Text, Raising Concerns About Reliability

Recent tests reveal that AI detectors are incorrectly flagging human-written texts, including historical documents, as AI-generated. This raises questions about their accuracy and the potential consequences of their use in academic and professional settings.

2 Sources: Analytics India Magazine, Decrypt
