Stanford Professor Accused of Citing AI-Generated Studies in Deepfake Legislation Testimony

Curated by THEOUTPOST

On Fri, 22 Nov, 12:03 AM UTC

6 Sources


Stanford professor Jeff Hancock faces allegations of citing non-existent, potentially AI-generated studies in his expert testimony supporting Minnesota's proposed deepfake legislation, raising questions about AI's impact on legal proceedings and academic integrity.

Stanford Professor Accused of Citing Fake AI-Generated Studies

Stanford professor Jeff Hancock, a renowned expert in misinformation studies and founding director of the Stanford Social Media Lab, has been accused of citing non-existent studies in his expert testimony supporting Minnesota's proposed deepfake legislation. The allegations have sparked a debate about the reliability of AI-generated content in legal and academic contexts 1.

The Controversial Testimony

Hancock's declaration, submitted in support of Minnesota's proposed law to restrict deepfakes during elections, referenced studies that plaintiffs' attorneys claim do not exist. One such study, titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," allegedly published in the Journal of Information Technology & Politics, could not be found in academic databases 2.

Legal Implications and Challenges

The case, Kohls v. Ellison, challenges Minnesota's law criminalizing the use of deepfakes to influence elections. The plaintiffs, including conservative YouTuber Christopher Kohls and Minnesota State Representative Mary Franson, argue that the law violates First Amendment rights 3.

AI Hallucinations and Legal Concerns

The plaintiffs' legal team suggests that the citations bear hallmarks of AI-generated content, potentially undermining the credibility of Hancock's entire declaration. This incident raises significant questions about the use of AI in legal proceedings and the potential for "AI hallucinations" to compromise expert testimony 4.

Broader Implications for AI in Legal and Academic Settings

This case highlights the growing concerns surrounding the use of AI-generated content in professional and academic contexts. It follows recent incidents where lawyers faced sanctions for using fabricated AI-generated citations in legal documents 5.

The Deepfake Legislation Debate

Minnesota's proposed law aims to impose legal constraints on the distribution and creation of deepfakes around election periods. Opponents argue that such legislation could infringe upon constitutional free speech rights, while supporters contend that it is necessary to protect political integrity in the age of AI-generated misinformation 1.

Professor Hancock's Background and Silence

Hancock, known for his research on misinformation and for his popular public talks, including a TED Talk with over 1.5 million views, has not yet publicly commented on the allegations against his testimony 2.

Ongoing Investigations and Future Implications

As investigations into the validity of the declaration continue, the case is expected to shape how courts treat AI-generated content in legal proceedings and to influence future legislation on digital content and misinformation in political contexts 1.

Continue Reading

Misinformation Expert's Affidavit Compromised by ChatGPT-Generated False Citations

Stanford professor Jeff Hancock admits to using ChatGPT for organizing citations in a legal document supporting Minnesota's anti-deepfake law, leading to AI-generated false information in the affidavit.

2 Sources: Benzinga, Wccftech


AI Hallucinations in Legal Filings: Morgan & Morgan Warns Lawyers of Consequences

Morgan & Morgan, a major US law firm, warns its attorneys about the risks of using AI-generated content in court filings after a case involving fake citations. The incident highlights growing concerns about AI use in the legal profession.

9 Sources: Ars Technica, U.S. News & World Report, Market Screener, Economic Times


X Sues California Over Deepfake Law, Citing First Amendment Concerns

Elon Musk's social media platform X has filed a lawsuit against California's new law targeting AI-generated deepfakes in elections, claiming it violates free speech protections.

7 Sources: MediaNama, PC Magazine, CBS News, Miami Herald


California's New Deepfake Law Faces Legal Challenges Amid Elon Musk's Reposting of Kamala Harris Parodies

California's recently enacted law targeting AI-generated deepfakes in elections is being put to the test, as Elon Musk's reposting of Kamala Harris parody videos sparks debate and potential legal challenges.

6 Sources: TechCrunch, Fortune, CBS News, Economic Times


Senators Introduce Bill to Combat AI-Generated Deepfakes and Protect Individuals

A bipartisan group of U.S. senators has introduced legislation aimed at protecting individuals and artists from AI-generated deepfakes. The bill seeks to establish legal safeguards and address concerns about AI exploitation in various sectors.

5 Sources: Engadget, Rolling Stone, PYMNTS.com, SiliconANGLE
