Curated by THEOUTPOST
On Fri, 6 Dec, 8:01 AM UTC
2 Sources
[1]
Misinformation Expert Admits Using ChatGPT To File Affidavit On Misinformation, Says OpenAI's Chatbot Added The Fake Details
Misinformation expert Jeff Hancock has admitted to using OpenAI's ChatGPT to organize citations in a legal document, and the chatbot's hallucinations have called the integrity of the filing itself into question.

What Happened: Hancock, who founded the Stanford Social Media Lab, acknowledged that ChatGPT introduced errors while assisting with his affidavit. These inaccuracies, he maintains, do not affect the core arguments of the document.

"I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it, all of which are supported by the most recent scholarly research in the field and reflect my opinion as an expert regarding the impact of AI technology on misinformation and its societal effects," Hancock wrote in a subsequent filing.

Hancock's affidavit supports Minnesota's "Use of Deep Fake Technology to Influence an Election" law, which is currently being challenged in federal court by Christopher Kohls, known as Mr. Reagan on YouTube, and state Rep. Mary Franson. Their attorneys labeled the document "unreliable" because of its non-existent citations and sought its removal.

Hancock clarified that he used ChatGPT only to organize citations, not to write the document, and emphasized his commitment to the affidavit's claims, which he says are supported by scholarly research. He used Google Scholar and GPT-4o to find relevant articles, inadvertently introducing the citation errors known as "hallucinations." "I did not intend to mislead the Court or counsel," Hancock said.

Why It Matters: The incident underscores ongoing concerns about AI's reliability in legal contexts. In May 2023, a lawyer faced similar fallout when ChatGPT fabricated non-existent cases in a brief, leading to legal chaos.
This highlights the challenges of AI "hallucinations," a term that has gained traction since Google CEO Sundar Pichai acknowledged AI's struggle with the issue. The rapid evolution of AI technology, as seen with the launch of GPT-4, has prompted tech leaders like Elon Musk and OpenAI CEO Sam Altman to caution against its potential risks.
[2]
A Misinformation Researcher Admits To Using ChatGPT For Citations In His Filing, But Denies Knowing AI 'Hallucinations' Were Adding False Details To It
With the advancement of artificial intelligence, the way communication is drafted and presented has been completely revamped, but the increased reliance on AI for content brings issues of its own. A misinformation expert recently came under fire for using the technology on a legal filing, which ended up including fabricated citations. What makes this more ironic is that the filing was part of an effort to combat the use of AI-generated content to mislead voters ahead of elections. The researcher has now admitted to using ChatGPT to streamline his citations and believes the errors should not affect the points made in the declaration.

Jeff Hancock is a Stanford professor and misinformation expert who filed an affidavit supporting the Minnesota law that prohibits the use of deepfake technology to influence elections. What was meant to be a filing against using AI to mislead voters is now facing heavy criticism for, ironically, containing AI-generated details with false information, making the document unreliable and inaccurate.

In an additional declaration, Hancock admitted that he used GPT-4o to organize his citations but was not aware that it had added fake details and fabricated references. He denies using the tool for other parts of the document and says the errors were unintentional. In the later submission, he wrote:

"I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it, all of which are supported by the most recent scholarly research in the field and reflect my opinion as an expert regarding the impact of AI technology on misinformation and its societal effects."

Hancock further explained that he used Google Scholar as well as GPT-4o to create the citation list, but did not use them to draft the document.
He emphasized that he had been unaware of the AI hallucinations that produced the citation errors. Hancock then reiterated the points made in the declaration, maintaining that they stand regardless of the confusion. He expressed:

"I did not intend to mislead the Court or counsel. I express my sincere regret for any confusion this may have caused. That said, I stand firmly behind all the substantive points in the declaration."

Whether or not the court accepts Hancock's explanation for the errors in his submission, the episode highlights the risks of using AI tools in legal contexts.
Stanford professor Jeff Hancock admits to using ChatGPT for organizing citations in a legal document supporting Minnesota's anti-deepfake law, leading to AI-generated false information in the affidavit.
In a twist of irony, Jeff Hancock, a Stanford professor and misinformation expert, has admitted to using OpenAI's ChatGPT to organize citations in a legal document supporting Minnesota's law against using deepfake technology to influence elections. This admission has raised questions about the integrity of the filing and highlighted the risks of relying on AI in legal contexts 1.
Hancock, who founded the Stanford Social Media Lab, used GPT-4o to streamline citations in his affidavit. However, the AI tool introduced errors, including non-existent citations and fabricated references, a phenomenon known as "hallucinations" in AI parlance 2. These errors led attorneys representing the challengers of the Minnesota law to label the document "unreliable."
In a subsequent filing, Hancock clarified his use of AI: "I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it, all of which are supported by the most recent scholarly research in the field and reflect my opinion as an expert regarding the impact of AI technology on misinformation and its societal effects."

Hancock emphasized that he did not intend to mislead the court and expressed regret for any confusion caused. He maintains that the core arguments of his document remain valid despite the citation errors.
This incident underscores ongoing concerns about AI's reliability in legal and professional settings. It follows a similar case in May 2023, where a lawyer faced issues when ChatGPT fabricated non-existent cases in a legal brief 1. These occurrences highlight the challenges posed by AI "hallucinations," a term that has gained traction since Google CEO Sundar Pichai acknowledged AI's struggle with this issue.
Ironically, Hancock's affidavit was filed in support of Minnesota's "Use of Deep Fake Technology to Influence an Election" law, which is currently being challenged in federal court. The law aims to combat the use of AI-generated content to mislead voters prior to elections 2. This incident has now become a focal point in the broader debate about AI's role in creating and combating misinformation.
As AI technology continues to evolve rapidly, with developments like the launch of GPT-4, the incident serves as a cautionary tale. It emphasizes the need for careful consideration and potentially new guidelines for the use of AI in professional and legal contexts. Tech leaders like Elon Musk and OpenAI CEO Sam Altman have already voiced concerns about potential risks associated with advanced AI systems 1.
Stanford professor Jeff Hancock faces allegations of citing non-existent, potentially AI-generated studies in his expert testimony supporting Minnesota's proposed deepfake legislation, raising questions about AI's impact on legal proceedings and academic integrity.
6 Sources
Morgan & Morgan, a major US law firm, warns its attorneys about the risks of using AI-generated content in court filings after a case involving fake citations. The incident highlights growing concerns about AI use in the legal profession.
9 Sources
A Harvard study reveals the presence of AI-generated research papers on Google Scholar, sparking debates about academic integrity and the future of scholarly publishing. The findings highlight the challenges posed by AI in distinguishing between human-authored and machine-generated content.
4 Sources
A BBC investigation finds that major AI chatbots, including ChatGPT, Copilot, Gemini, and Perplexity AI, struggle with accuracy when summarizing news articles, raising concerns about the reliability of AI in news dissemination.
14 Sources
Recent tests reveal that AI detectors are incorrectly flagging human-written texts, including historical documents, as AI-generated. This raises questions about their accuracy and the potential consequences of their use in academic and professional settings.
2 Sources
© 2025 TheOutpost.AI All rights reserved