Curated by THEOUTPOST
On Fri, 22 Nov, 12:03 AM UTC
6 Sources
[1]
Stanford professor faces allegations of citing fake AI-generated study
Stanford professor Jeff Hancock faces accusations of citing a non-existent study in his testimony related to Minnesota's proposed deepfake legislation. The incident was brought to light by the plaintiff's attorneys in a case involving conservative YouTuber Christopher Kohls, part of a broader political debate about free speech and the legality of deepfakes during elections. Hancock's testimony was used by Minnesota Attorney General Keith Ellison to defend the proposed law on the grounds that deepfakes threaten political integrity.

The allegations state that Hancock's declaration included a reference to a fake study titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," which the plaintiff's legal team says does not exist in the journal in which it was purportedly published. They argue that the citation is likely the creation of an AI language model, potentially undermining the credibility of his entire declaration. The plaintiff's lawyers noted that the citation does not appear in any academic database, which raises significant questions about its authenticity. They concluded: "The declaration of Prof. Hancock should be excluded in its entirety because at least some of it is based on fabricated material likely generated by an AI model."

The implications of these allegations extend beyond this case. They challenge the reliability of AI-generated content in legal contexts, a concern that echoes recent incidents in which lawyers faced sanctions for using fabricated citations in legal documents. The court filing underscores that the veracity of expert testimony can be severely undermined by AI's tendency to produce inaccuracies, often referred to as "hallucinations." Hancock has a well-documented background in misinformation studies, having contributed significant research to the field and given popular public talks on the subject. He has not yet publicly commented on the claims against his testimony.
The viral Kamala Harris deepfake and its implications

Investigations into the validity of the declarations used in this court case are ongoing, raising concerns about future expert testimony influenced by AI-generated material. The Minnesota deepfake legislation under scrutiny aims to impose legal constraints on the creation and distribution of deepfakes around election periods. Opponents of the bill argue that the legal framework could infringe upon constitutional free speech rights, invoking concerns about censorship and the implications for digital expression.

As the case unfolds, further analysis is expected on the intersection of technology, legal standards, and free speech rights. It remains to be seen how the court will respond to the allegations surrounding Hancock's testimony and whether this will set a precedent for how AI-generated content is treated in legal proceedings. The legal community is closely monitoring the case for its implications for upcoming legislation on digital content and misinformation in political contexts.
[2]
Stanford Professor Allegedly Submits Fake AI Citations in Argument On Deepfake Harms
A well-known Stanford professor is accused of including fake AI-generated citations in a legal argument on the dangers of deepfakes. Minnesota, much like California, has proposed a law that would impose legal restrictions on the use of deepfakes around election time. Professor Jeff Hancock, founding director of the Stanford Social Media Lab, submitted a legal argument in support of the bill, the Minnesota Reformer reports. However, journalists and legal scholars have been unable to locate some of the studies cited in the argument, such as "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance."

Some commentators believe this could be a sign that parts of the argument were generated by artificial intelligence, calling it a possible example of an "AI hallucination": a case in which an AI such as ChatGPT simply makes up information that does not exist. Opponents of Minnesota's bill argue that these potential hallucinations make the professor's legal argument less reliable. The court filing on behalf of Republican state Representative Mary Franson said the mysterious citations "calls the entire document into question."

Professor Hancock is a well-known name in the field of misinformation. One of his TED talks, "The Future of Lying," has racked up over 1.5 million views on YouTube, and he also appears in a documentary on misinformation available on Netflix.

This isn't the first time fake AI-based legal citations have caused issues. In June 2023, Reuters reported that two New York lawyers were sanctioned after submitting a legal brief that the court ruled was generated by OpenAI's ChatGPT, racking up a $5,000 fine in the process. Professor Hancock has yet to publicly respond to the allegations against him. It's perhaps unsurprising that legal arguments about the dangers of deepfakes in elections are under intense scrutiny right now.
Elon Musk's X is spearheading a comparable lawsuit challenging California's Defending Democracy From Deepfake Deception Act of 2024, which likewise imposes limits on the creation and sharing of deepfakes around election time, arguing that these types of restrictions violate the First Amendment.
[3]
Stanford Professor Accused of Using AI to Write Expert Testimony Criticizing Deepfakes
Plaintiffs in a lawsuit challenging Minnesota's law criminalizing election deepfakes say an expert brought in by the state likely wrote his opinion with the help of AI. In what appears to be an embarrassing and ironic gaffe, a top Stanford University professor has been accused of spreading AI-generated misinformation while serving as an expert witness in support of a law designed to keep AI-generated misinformation out of elections.

Jeff Hancock, the founding director of Stanford's Social Media Lab, submitted his expert opinion earlier this month in Kohls v. Ellison, a lawsuit filed by a YouTuber and a Minnesota state representative who claim the state's new law criminalizing the use of deepfakes to influence elections violates their First Amendment right to free speech. His opinion included a reference to a study that purportedly found "even when individuals are informed about the existence of deepfakes, they may still struggle to distinguish between real and manipulated content." But according to the plaintiffs' attorneys, the study Hancock cited, titled "The Influence of Deepfake Videos on Political Attitudes and Behavior" and purportedly published in the Journal of Information Technology & Politics, does not actually exist.

"The citation bears the hallmarks of being an artificial intelligence (AI) 'hallucination,' suggesting that at least the citation was generated by a large language model like ChatGPT," the plaintiffs wrote in a motion seeking to exclude Hancock's expert opinion. "Plaintiffs do not know how this hallucination wound up in Hancock's declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever."

The accusations about Hancock's use of AI were first reported by the Minnesota Reformer. Hancock did not immediately respond to Gizmodo's request for comment. Minnesota is one of 20 states to have passed laws regulating the use of deepfakes in political campaigns.
Its law prohibits knowingly disseminating, or disseminating with reckless disregard, a deepfake up to 90 days before an election if the material is made without the consent of the person depicted and is intended to influence the results of the election. The lawsuit challenging the law was filed by a conservative law firm on behalf of Minnesota state Representative Mary Franson and Christopher Kohls, a YouTuber who goes by the handle Mr Reagan. A separate lawsuit filed by Kohls challenging California's election deepfake law led a federal judge to issue a preliminary injunction last month preventing that law from going into effect.
[4]
Expert defends anti-AI misinformation law using chatbot-written misinformation
Facepalm: Large language models have a long, steep hill to climb before they prove trustworthy and reliable. For now, they are helpful for starting research, but only fools would trust them to write a legal document. A professor specializing in the subject should know better.

A Stanford professor has egg on his face after submitting an affidavit to the court in support of a controversial Minnesota law aimed at curbing the use of deepfakes and AI to influence election outcomes. The proposed amendment to existing legislation states that candidates convicted of using deepfakes during an election campaign must forfeit the race and face fines and imprisonment of up to $10,000 and five years, depending on the number of previous convictions. Minnesota State Representative Mary Franson and YouTuber Christopher Kohls have challenged the law, claiming it violates the First Amendment. During the pretrial proceedings, Minnesota Attorney General Keith Ellison asked the founding director of Stanford's Social Media Lab, Professor Jeff Hancock, to provide an affidavit declaring his support of the law (the "Expert Declaration of Professor Jeff Hancock," available via CourtListener).

The Minnesota Reformer notes that Hancock drew up a well-worded argument for why the legislation is essential. He cites several sources for his conviction, including a study titled "The Influence of Deepfake Videos on Political Attitudes and Behavior" in the Journal of Information Technology & Politics. He also referenced another academic paper called "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance." The problem is that neither of these studies exists in the journal mentioned or in any other academic resource. The plaintiffs filed a memorandum suggesting the citations could be AI-generated, arguing that the dubious attributions undermine the declaration's validity even if they aren't from an LLM, and that the judge should therefore throw it out.
"The citation bears the hallmarks of being an artificial intelligence 'hallucination,' suggesting that at least the citation was generated by a large language model like ChatGPT," the memorandum reads. "Plaintiffs do not know how this hallucination wound up in Hancock's declaration, but it calls the entire document into question."

If the citations are AI-generated, it is highly likely that portions, or even the entirety, of the affidavit are, too. In experiments with ChatGPT, TechSpot has found that the LLM will make up quotations that do not exist in an apparent attempt to lend validity to a story. When confronted about it, the chatbot will admit that it made the material up and then revise it with even more dubious content.

It is conceivable that Hancock, who is undoubtedly a very busy man, wrote a draft declaration and passed it to an aide to edit, who ran it through an LLM to clean it up, and the model added the references unprompted. However, that doesn't excuse the document from rightful scrutiny and criticism, which is the main problem with LLMs today. The irony that a self-proclaimed expert submitted a document containing AI-generated misinformation to a legal body in support of a law outlawing that very kind of information is not lost on anyone involved.

Ellison and Hancock have not commented on the situation and likely want the embarrassing faux pas to disappear. The more tantalizing question is whether the court will consider this perjury, since Hancock signed the affidavit under the statement, "I declare under penalty of perjury that everything I have stated in this document is true and correct." If people are not held accountable for misusing AI, how can it ever get better?
[5]
Professor Submits Anti-AI Document That Appears to Have Been Created Using AI
The document referenced studies that appear to be entirely made up. A Stanford professor who claims to be an expert on how "people use deception with technology" has been accused of using an AI chatbot to draft an affidavit in support of an anti-deepfake law in Minnesota. As the Minnesota Reformer reports, lawyers challenging the law on behalf of far-right YouTuber Christopher Kohls and Republican state representative Mary Franson found that Stanford Social Media Lab founding director Jeff Hancock's affidavit included references to studies that don't appear to exist, a telltale sign of AI text generators, which often "hallucinate" facts and reference materials.

While it's far from the first time someone has been accused of fabricating sources with AI chatbots like OpenAI's ChatGPT, it's an especially ironic development given the subject matter. The law, which calls for a ban on the use of deepfakes to influence an election, was challenged in federal court on the grounds that such a ban would violate First Amendment rights. But in an attempt to defend the law, Hancock, or possibly one of his staff, appears to have stepped in it, handing the plaintiffs' attorneys a golden opportunity.

One study cited in Hancock's affidavit, titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," doesn't appear to exist. "The citation bears the hallmarks of being an artificial intelligence (AI) 'hallucination,' suggesting that at least the citation was generated by a large language model like ChatGPT," Franson's attorneys wrote in a memorandum. "Plaintiffs do not know how this hallucination wound up in Hancock's declaration, but it calls the entire document into question." And it's not just Franson's lawyers: UCLA law professor Eugene Volokh discovered that a different cited study, titled "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," also doesn't appear to exist.
It's a troubling turn in an otherwise meaningful effort to keep AI deepfakes from swaying an election, something that has become a very real risk given steady advancements in the tech. It also highlights a recurring trend: lawyers keep getting caught using tools like ChatGPT when they bungle the facts. Last year, New York City-based lawyer Steven Schwartz was caught using ChatGPT to help him write up a document. A Colorado-based lawyer named Zacharia Crabill, also caught red-handed, was fired from his job in November for the same offense. Crabill, however, dug in his heels. "There's no point in being a naysayer," he told the Washington Post of the firing, "or being against something that is invariably going to become the way of the future."
[6]
An anti-deepfake declaration may have been written by AI
A federal lawsuit over Minnesota's "Use of Deep Fake Technology to Influence An Election" law is now directly dealing with the influence of AI. In a recent filing, attorneys challenging the law say an affidavit submitted to support it shows signs of containing AI-generated text. The Minnesota Reformer reports Attorney General Keith Ellison asked Stanford Social Media Lab founding director Jeff Hancock to make the submission, but the document filed includes non-existent sources that seem to have been hallucinated by ChatGPT or another large language model (LLM).
Stanford professor Jeff Hancock faces allegations of citing non-existent, potentially AI-generated studies in his expert testimony supporting Minnesota's proposed deepfake legislation, raising questions about AI's impact on legal proceedings and academic integrity.
Stanford professor Jeff Hancock, a renowned expert in misinformation studies and founding director of the Stanford Social Media Lab, has been accused of citing non-existent studies in his expert testimony supporting Minnesota's proposed deepfake legislation. The allegations have sparked a debate about the reliability of AI-generated content in legal and academic contexts [1].

Hancock's declaration, submitted in support of Minnesota's proposed law to restrict deepfakes during elections, referenced studies that plaintiffs' attorneys claim do not exist. One such study, titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," allegedly published in the Journal of Information Technology & Politics, could not be found in academic databases [2].

The case, Kohls v. Ellison, challenges Minnesota's law criminalizing the use of deepfakes to influence elections. The plaintiffs, including conservative YouTuber Christopher Kohls and Minnesota State Representative Mary Franson, argue that the law violates First Amendment rights [3].

The plaintiffs' legal team suggests that the citations bear hallmarks of AI-generated content, potentially undermining the credibility of Hancock's entire declaration. This incident raises significant questions about the use of AI in legal proceedings and the potential for "AI hallucinations" to compromise expert testimony [4].

This case highlights the growing concerns surrounding the use of AI-generated content in professional and academic contexts. It follows recent incidents where lawyers faced sanctions for using fabricated AI-generated citations in legal documents [5].

Minnesota's proposed law aims to impose legal constraints on the distribution and creation of deepfakes around election periods. Opponents argue that such legislation could infringe upon constitutional free speech rights, while supporters contend that it is necessary to protect political integrity in the age of AI-generated misinformation [1].

Hancock, known for his research on misinformation and popular public talks, including a TED talk with over 1.5 million views, has not yet publicly commented on the allegations against his testimony [2].

As investigations into the validity of the declarations continue, this case is expected to have significant implications for the treatment of AI-generated content in legal proceedings and the development of legislation related to digital content and misinformation in political contexts [1].