Misinformation Expert's Affidavit Compromised by ChatGPT-Generated False Citations


Stanford professor Jeff Hancock admits to using ChatGPT for organizing citations in a legal document supporting Minnesota's anti-deepfake law, leading to AI-generated false information in the affidavit.


In a twist of irony, Jeff Hancock, a Stanford professor and misinformation expert, has admitted to using OpenAI's ChatGPT to organize citations in a legal document supporting Minnesota's law against using deepfake technology to influence elections. This admission has raised questions about the integrity of the filing and highlighted the risks of relying on AI in legal contexts [1].

The Incident and Its Implications

Hancock, who founded the Stanford Social Media Lab, used ChatGPT-4o to streamline citations in his affidavit. However, the AI tool introduced errors, including non-existent citations and fabricated references, a phenomenon known as "hallucinations" in AI parlance [2]. This incident has led to the document being labeled "unreliable" by attorneys representing the challengers of the Minnesota law.

Hancock's Response and Defense

In a subsequent filing, Hancock clarified his use of AI:

  1. He wrote and reviewed the substance of the declaration himself.
  2. ChatGPT was only used for organizing citations, not for drafting the document.
  3. He used Google Scholar and GPT-4o to find relevant articles, a process that inadvertently introduced the citation errors.
  4. He stands firmly behind the claims made in the affidavit, asserting they are supported by recent scholarly research [1].

Hancock emphasized that he did not intend to mislead the court and expressed regret for any confusion caused. He maintains that the core arguments of his document remain valid despite the citation errors.

Broader Implications for AI in Legal Contexts

This incident underscores ongoing concerns about AI's reliability in legal and professional settings. It follows a similar case from May 2023, in which a lawyer faced court sanctions after ChatGPT fabricated non-existent cases in a legal brief [1]. These occurrences highlight the challenges posed by AI "hallucinations," a term that has gained traction since Google CEO Sundar Pichai acknowledged AI's struggle with the issue.

The Debate on AI and Misinformation

Ironically, Hancock's affidavit was filed in support of Minnesota's "Use of Deep Fake Technology to Influence an Election" law, which is currently being challenged in federal court. The law aims to combat the use of AI-generated content to mislead voters ahead of elections [2]. This incident has now become a focal point in the broader debate about AI's role in both creating and combating misinformation.

Future Considerations

As AI technology continues to evolve rapidly, with developments like the launch of GPT-4, the incident serves as a cautionary tale. It emphasizes the need for careful consideration, and potentially new guidelines, for the use of AI in professional and legal contexts. Tech leaders like Elon Musk and OpenAI CEO Sam Altman have already voiced concerns about potential risks associated with advanced AI systems [1].
