4 Sources
[1]
Judge dismisses lawsuit twice due to alleged deepfake video testimony
The big picture: Judges are already facing a new challenge in courtrooms as AI-generated documentation worms its way into filings. Now that voice cloning and video deepfakes have become more convincing, courts must scrutinize video and audio testimony carefully. This unprecedented extra vetting is only going to bog down an already complicated legal system.

A California housing dispute is getting media attention over allegations that lawyers presented a deepfake video as witness testimony. NBC News reports that Judge Victoria Kolakowski became suspicious after the supposed witness showed signs that something was not right, including a monotone voice, fuzzy facial features, and repeated facial expressions. Judge Kolakowski soon realized that the clip had the hallmarks of generative AI.

The case, Mendones v. Cushman & Wakefield, may be one of the earliest instances of lawyers submitting deepfake video as authentic testimony. Judge Kolakowski dismissed the case in September and denied a request for reconsideration in early November. Legal experts warn that the incident highlights a broader threat, as AI-generated evidence is increasingly flooding courtrooms and compromising the judicial system in unprecedented ways.

Mendones v. Cushman & Wakefield is not the first time courts have caught generative AI in filings and evidence. In February, a judge fined a lawyer $15,000 for submitting filings with fake AI-generated case citations. Instances of AI misuse will continue to accumulate as companies, including law firms, integrate it into their practices without proper oversight, and judges are already grappling with the implications. Judge Scott Schlegel of Louisiana, who actually supports judicial AI adoption, cautions that plaintiffs could use AI-cloned voices to generate threatening recordings, falsely affecting decisions in restraining order cases.
Similarly, Judge Erica Yew pointed out that forged documents can easily enter official records, challenging traditional trust in public filings. Despite the mounting instances and increasing concerns, the legal system has yet to establish a centralized system to track these incidents.

Courts are experimenting with guidance to address AI-generated content, but there are no formal protocols. The National Center for State Courts and the Thomson Reuters Institute classify deepfakes as "unacknowledged AI evidence," offering judges checklists to verify origin, access, and alteration. Of course, this requires judicial review that goes beyond the norm.

Brian Long, CEO and co-founder of Adaptive Security, says the problem is only getting more complicated. "The hard truth is that next-gen AI makes these fakes incredibly convincing, and detection tools are not keeping up. That means law firms need new processes fast," Long told TechSpot. "Always verify audio and video through a second channel, request the original source files, and confirm live with the real individual whenever possible. The earlier you can spot an impersonation attempt, the safer your clients will be."

Judges are now navigating a new frontier where traditional evidence standards collide with generative AI. Courts are only beginning to adapt, and experts caution that without stronger protocols, dockets could soon start admitting material that appears real but is entirely fabricated. Some judges are already calling for centralized tracking and more precise guidance. However, until the AI industry sees some real regulation, it's the wild west out there.
[2]
Judge Horrified as Lawyers Submit Evidence in Court That Was Faked With AI
Lawyers across the country have been landing themselves in hot water for submitting botched court documents written with the help of AI, in blunders that were clear signs of the tech's rapid inroads into the courtroom. But it was only a matter of time before AI wasn't just producing clerical errors, but actual submitted "evidence."

That's what recently played out in a California court over a housing dispute -- and it didn't end well for the AI-fielding party. As NBC News reports, the plaintiffs in the case, Mendones v. Cushman & Wakefield, Inc., submitted a strange video that was supposed to be witness testimony. In it, the witness's face is fuzzy and barely animated. Aside from the rare blink, the only noticeable movement comes from her flapping lips, while the rest of her expression remains unchanged. There's also a jarring cut, after which the movements repeat themselves. In other words, it was obviously an AI deepfake.

And according to the reporting, it might be one of the first documented instances of a deepfake being submitted as purportedly authentic evidence in court -- or at least one that was caught.

Judges have had little patience for AI making a mockery of their profession, and the one on this case, Judge Victoria Kolakowski, wasn't having any of it, either. Kolakowski dismissed the case on September 9, citing the AI-generated witness testimony. The plaintiffs filed a motion for reconsideration, arguing that Kolakowski failed to prove that their incredibly janky deepfake was the creation of AI. Their request was denied on November 6.

Kolakowski says her profession is only just beginning to grapple with AI. The release of OpenAI's video-generating AI app, Sora 2, was a wakeup call for just how easily convincing video evidence could be faked, as users quickly found that they could create realistic videos of people committing crimes like shoplifting.
Creating deepfakes may have once required some degree of technical knowhow, but now, anyone with a smartphone and a prompt can spit them out.

"The judiciary in general is aware that big changes are happening and want to understand AI, but I don't think anybody has figured out the full implications," Kolakowski told NBC. "We're still dealing with a technology in its infancy."

Among judges and other legal experts interviewed by NBC, there seem to be two prevailing schools of thought on how to deal with AI. One argues that we should get ahead of the AI threat by updating judicial rules, such as instituting guidelines that dictate how lawyers verify their evidence, or making it the judge's rather than the jury's duty to identify AI fakery. But the other camp maintains that we should leave it up to the judges to figure it out among themselves, and see if an apocalypse of AI-forged evidence really comes to pass.

Right now, the latter sentiment is informing official policy. In May, NBC noted, the US Judicial Conference's Advisory Committee on Evidence Rules rejected proposals to update the guidance on AI, arguing that "existing standards of authenticity are up to the task of regulating AI evidence." The committee signaled it was open to instituting these changes in the future, which could take years, but in the meantime, AI will run rampant in courtrooms, and likely under most of our noses.

"I think AI-generated fake or modified evidence is happening much more frequently than is reported publicly," Judge Erica Yew, a member of California's Santa Clara County Superior Court, told NBC.
[3]
AI-generated evidence is showing up in court. Judges say they're not ready.
Judges across the U.S. say they are becoming increasingly concerned about the prospect of deepfake evidence. (Gabrielle Korein / NBC News; Getty Images)

Judge Victoria Kolakowski sensed something was wrong with Exhibit 6C. Submitted by the plaintiffs in a California housing dispute, the video showed a witness whose voice was disjointed and monotone, her face fuzzy and lacking emotion. Every few seconds, the witness would twitch and repeat her expressions. Kolakowski, who serves on California's Alameda County Superior Court, soon realized why: The video had been produced using generative artificial intelligence. Though the video claimed to feature a real witness -- who had appeared in another, authentic piece of evidence -- Exhibit 6C was an AI "deepfake," Kolakowski said.

The case, Mendones v. Cushman & Wakefield, Inc., appears to be one of the first instances in which a suspected deepfake was submitted as purportedly authentic evidence in court and detected -- a sign, judges and legal experts said, of a much larger threat.

Citing the plaintiffs' use of AI-generated material masquerading as real evidence, Kolakowski dismissed the case on Sept. 9. The plaintiffs sought reconsideration of her decision, arguing the judge suspected but failed to prove that the evidence was AI-generated. Judge Kolakowski denied their request for reconsideration on Nov. 6. The plaintiffs did not respond to a request for comment.

With the rise of powerful AI tools, AI-generated content is increasingly finding its way into courts, and some judges are worried that hyperrealistic fake evidence will soon flood their courtrooms and threaten their fact-finding mission. NBC News spoke to five judges and 10 legal experts who warned that the rapid advances in generative AI -- now capable of producing convincing fake videos, images, documents and audio -- could erode the foundation of trust upon which courtrooms stand.
Some judges are trying to raise awareness and calling for action around the issue, but the process is just beginning. "The judiciary in general is aware that big changes are happening and want to understand AI, but I don't think anybody has figured out the full implications," Kolakowski told NBC News. "We're still dealing with a technology in its infancy."

Prior to the Mendones case, courts have repeatedly dealt with a phenomenon billed as the "Liar's Dividend" -- when plaintiffs and defendants invoke the possibility of generative AI involvement to cast doubt on actual, authentic evidence. But in the Mendones case, the court found the plaintiffs attempted the opposite: to falsely admit AI-generated video as genuine evidence.

Judge Stoney Hiljus, who serves in Minnesota's 10th Judicial District and is chair of the Minnesota Judicial Branch's AI Response Committee, said the case brings to the fore a growing concern among judges. "I think there are a lot of judges in fear that they're going to make a decision based on something that's not real, something AI-generated, and it's going to have real impacts on someone's life," he said.

Many judges across the country agree, even those who advocate for the use of AI in court. Judge Scott Schlegel serves on the Fifth Circuit Court of Appeal in Louisiana and is a leading advocate for judicial adoption of AI technology, but he also worries about the risks generative AI poses to the pursuit of truth.

"My wife and I have been together for over 30 years, and she has my voice everywhere," Schlegel said. "She could easily clone my voice on free or inexpensive software to create a threatening message that sounds like it's from me and walk into any courthouse around the country with that recording."

"The judge will sign that restraining order. They will sign every single time," said Schlegel, referring to the hypothetical recording. "So you lose your cat, dog, guns, house, you lose everything."
Judge Erica Yew, a member of California's Santa Clara County Superior Court since 2001, is passionate about AI's use in the court system and its potential to increase access to justice. Yet she also acknowledged that forged audio could easily lead to a protective order and advocated for more centralized tracking of such incidents. "I am not aware of any repository where courts can report or memorialize their encounters with deep-faked evidence," Yew told NBC News. "I think AI-generated fake or modified evidence is happening much more frequently than is reported publicly."

Yew said she is concerned that deepfakes could corrupt other, long-trusted methods of obtaining evidence in court. With AI, "someone could easily generate a false record of title and go to the county clerk's office," for example, to establish ownership of a car. But the county clerk likely will not have the expertise or time to check the ownership document for authenticity, Yew said, and will instead just enter the document into the official record. "Now a litigant can go get a copy of the document and bring it to court, and a judge will likely admit it. So now do I, as a judge, have to question a source of evidence that has traditionally been reliable?" Yew wondered.

Though fraudulent evidence has long been an issue for the courts, Yew said AI could cause an unprecedented expansion of realistic, falsified evidence. "We're in a whole new frontier," Yew said.

Schlegel and Yew are among a small group of judges leading efforts to address the emerging threat of deepfakes in court. They are joined by a consortium of the National Center for State Courts and the Thomson Reuters Institute, which has created resources for judges to address the growing deepfake quandary. The consortium labels deepfakes as "unacknowledged AI evidence" to distinguish these creations from "acknowledged AI evidence" like AI-generated accident reconstruction videos, which are recognized by all parties as AI-generated.
Earlier this year, the consortium published a cheat sheet to help judges deal with deepfakes. The document advises judges to ask those providing potentially AI-generated evidence to explain its origin, reveal who had access to the evidence, share whether the evidence had been altered in any way and look for corroborating evidence. In April 2024, a Washington state judge denied a defendant's efforts to use an AI tool to clarify a video that had been submitted.

Beyond this cadre of advocates, judges around the country are starting to take note of AI's impact on their work, according to Hiljus, the Minnesota judge. "Judges are starting to consider, is this evidence authentic? Has it been modified? Is it just plain old fake? We've learned over the last several months, especially with OpenAI's Sora coming out, that it's not very difficult to make a really realistic video of someone doing something they never did," Hiljus said. "I hear from judges who are really concerned about it and who think that they might be seeing AI-generated evidence but don't know quite how to approach the issue." Hiljus is currently surveying state judges in Minnesota to better understand how generative AI is showing up in their courtrooms.

To address the rise of deepfakes, several judges and legal experts are advocating for changes to judicial rules and guidelines on how attorneys verify their evidence. By law and in concert with the Supreme Court, the U.S. Congress establishes the rules for how evidence is used in lower courts. One proposal, crafted by Maura R. Grossman, a research professor of computer science at the University of Waterloo and a practicing lawyer, and Paul Grimm, a professor at Duke Law School and former federal district judge, would require parties alleging that the opposition used deepfakes to thoroughly substantiate their arguments. Another proposal would transfer the duty of deepfake identification from impressionable juries to judges.
The proposals were considered by the U.S. Judicial Conference's Advisory Committee on Evidence Rules when it conferred in May, but they were not approved. Members argued "existing standards of authenticity are up to the task of regulating AI evidence." The U.S. Judicial Conference is a voting body of 26 federal judges, overseen by the chief justice of the Supreme Court. After a committee recommends a change to judicial rules, the conference votes on the proposal, which is then reviewed by the Supreme Court and voted upon by Congress.

Despite opting not to move the rule change forward for now, the committee was eager to keep a deepfake evidence rule "in the bullpen in case the Committee decides to move forward with an AI amendment in the future," according to committee notes. Grimm was pessimistic about this decision given how quickly the AI ecosystem is evolving. By his accounting, it takes a minimum of three years for a new federal rule on evidence to be adopted.

The Trump administration's AI Action Plan, released in July as the administration's road map for American AI efforts, highlights the need to "combat synthetic media in the court system" and advocates for exploring deepfake-specific standards similar to the proposed evidence rule changes.

Yet other law practitioners think a cautious, wait-and-see approach is wisest: watching how often deepfakes are really passed off as evidence in court, and how judges react, before moving to update overarching rules of evidence. Jonathan Mayer, the former chief science and technology adviser and chief AI officer at the U.S. Justice Department under President Joe Biden and now a professor at Princeton University, told NBC News he routinely encountered the issue of AI in the court system: "A recurring question was whether effectively addressing AI abuses would require new law, including new statutory authorities or court rules."

"We generally concluded that existing law was sufficient," he said.
However, "the impact of AI could change -- and it could change quickly -- so we also thought through and prepared for possible scenarios."

In the meantime, attorneys may become the first line of defense against deepfakes invading U.S. courtrooms. Judge Schlegel pointed to Louisiana's Act 250, passed earlier this year, as a successful and effective way to change norms about deepfakes at the state level. The act mandates that attorneys exercise "reasonable diligence" to determine if evidence they or their clients submit has been generated by AI.

"The courts can't do it all by themselves," Schlegel said. "When your client walks in the door and hands you 10 photographs, you should ask them questions. Where did you get these photographs? Did you take them on your phone or a camera?"

"If it doesn't smell right, you need to do a deeper dive before you offer that evidence into court. And if you don't, then you're violating your duties as an officer of the court," he said.

Daniel Garrie, co-founder of cybersecurity and digital forensics company Law & Forensics, said that human expertise will have to continue to supplement digital-only efforts. "No tool is perfect, and frequently additional facts become relevant," Garrie wrote via email. "For example, it may be impossible for a person to have been at a certain location if GPS data shows them elsewhere at the time a photo was purportedly taken."

Metadata -- the invisible descriptive data attached to files that records facts like a file's origin, date of creation and date of modification -- could be a key defense against deepfakes in the near future. For example, in the Mendones case, the court found that the metadata of one of the purportedly real but deepfaked videos showed it was captured on an iPhone 6, which was impossible given that the plaintiffs' argument required capabilities only available on an iPhone 15 or newer.
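As a rough illustration of the kind of metadata cross-check described above, the sketch below flags internal inconsistencies in metadata that has already been extracted from a file (for example, with a tool such as exiftool). This is only a minimal sketch: the field names mimic exiftool-style conventions, and the single device-capability rule is illustrative, not an authoritative device database.

```python
def metadata_red_flags(meta: dict) -> list[str]:
    """Return human-readable red flags found in extracted video metadata.

    `meta` is assumed to hold exiftool-style keys such as "Model",
    "CreateDate", "ModifyDate" (strings like "2024:01:01 12:00:00"),
    and "ImageHeight" (pixels). The checks are examples only.
    """
    flags = []

    # A genuine camera recording normally identifies the device.
    if "Model" not in meta:
        flags.append("no recording device recorded in metadata")

    # Timestamps should be internally consistent; exiftool-style date
    # strings compare correctly as plain strings.
    create, modify = meta.get("CreateDate", ""), meta.get("ModifyDate", "")
    if create and modify and modify < create:
        flags.append("file was 'modified' before it was created")

    # Mendones-style check: a device that cannot have produced the
    # footage it supposedly produced (the iPhone 6 tops out at 1080p).
    if meta.get("Model") == "iPhone 6" and meta.get("ImageHeight", 0) > 1080:
        flags.append("resolution exceeds what an iPhone 6 can capture")

    return flags
```

None of this proves a file is authentic; it only surfaces contradictions that justify the "deeper dive" Schlegel describes.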
Courts could also mandate that video- and audio-recording hardware include robust mathematical signatures attesting to the provenance and authenticity of their outputs, allowing courts to verify that content was recorded by actual cameras. Such technological solutions may still run into critical stumbling blocks similar to those that plagued prior legal efforts to adapt to new technologies, like DNA testing or even fingerprint analysis. Parties lacking the latest technical AI and deepfake know-how may face a disadvantage in proving evidence's origin.

Grossman, the University of Waterloo professor, said that for now, judges need to keep their guard up. "Anybody with a device and internet connection can take 10 or 15 seconds of your voice and have a convincing enough tape to call your bank and withdraw money. Generative AI has democratized fraud."

"We're really moving into a new paradigm," Grossman said. "Instead of trust but verify, we should be saying: Don't trust and verify."
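The hardware-signature idea above can be illustrated with a toy scheme: the camera computes a cryptographic tag over the recording's bytes at capture time, and the court later recomputes it. This is only a sketch of the concept: real provenance standards (such as C2PA) use public-key signatures over signed metadata manifests, whereas this example stands in a symmetric HMAC for brevity.

```python
import hashlib
import hmac


def sign_recording(data: bytes, device_key: bytes) -> str:
    """Camera-side: produce a tag binding the device key to the exact bytes."""
    return hmac.new(device_key, data, hashlib.sha256).hexdigest()


def verify_recording(data: bytes, tag: str, device_key: bytes) -> bool:
    """Court-side: recompute the tag and compare in constant time.

    Any edit to the recording changes the tag, so an altered clip or a
    deepfake cannot reuse a genuine camera's signature.
    """
    return hmac.compare_digest(sign_recording(data, device_key), tag)
```

The hard part in practice is not the math but key management: the scheme only attests to provenance if the signing key genuinely lives in tamper-resistant camera hardware.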
[4]
Deepfakes and AI in the Courtroom: Report Calls for Legal Reforms to Address a Troubling Trend | Newswise
Newswise -- From cell phone footage to bodycam and surveillance clips, U.S. courtrooms are awash with video these days, with more than 80% of court cases hinging to some degree on video evidence. But in the age of artificial intelligence, the legal system is ill-prepared to distinguish deepfakes from real footage and handle other AI-enhanced evidence equitably, according to a new University of Colorado Boulder-led report.

"Courts in the United States, both at the state and federal level, lack clear guidelines for the use of video as evidence in general, and this picture is only going to get more complicated with the rise of AI and deep fakes," said senior author Sandra Ristovska, associate professor of media studies and director of CU's new Visual Evidence Lab. "We felt that something needed to be done."

The 26-page report, compiled by 20 experts from around the country, comes as new AI video generators have made it remarkably easy to create lifelike clips, including fraudulent witness testimonies and crime scene footage. Meanwhile, AI is increasingly used to enhance real video footage, making poor-quality recordings easier to see and hear, and to match security camera clips with suspects -- sometimes in error.

Among other reforms, the report calls for specialized training for judges and jurors to help them critically evaluate AI-enhanced or AI-generated footage, as well as national standards governing what kind of AI is permissible. "Judges, attorneys and jurors treat video in highly varied ways around the country, and if the playing field isn't equal, that could lead to unfair renderings of justice," said Ristovska.

The rise of deepfakes

On Sept. 9, 2025, a judge in Alameda County, California, threw out a civil case and recommended sanctions for the plaintiffs after determining that a videotaped witness testimony was a deepfake. The case was among the first known instances in which a deepfake was deliberately used in the courtroom.
Due to rapidly advancing technology, Ristovska suspects there will be more. She points to a recent social media post in which a reporter demonstrated how easy it is to abuse the new Sora 2 video generation technology: In less than a minute, he was able to get it to make a video showing "bodycam footage of cops arresting a dark-skinned man in a department store."

"This shows how AI-generated videos are becoming misleadingly persuasive and how they can be exploited to incriminate and further marginalize racial and ethnic minorities," she said.

Right now, she's more concerned about a different problem -- what she calls the "deepfake defense," in which attorneys paint real video footage as fake. For instance, in a 2023 lawsuit brought by the family of a man who died when his Tesla crashed while using the self-driving feature, the company's defense counsel attempted, unsuccessfully, to dismiss a video by claiming it was a deepfake. "If this continues, jurors will accord little or no weight to authentic footage that they really should be paying attention to," she said.

AI enhancement

Deepfakes aside, lawyers are increasingly submitting authentic videos that have been enhanced by AI to make the sound or visuals clearer. But not everyone can afford to use those technologies, said Ristovska, and while some judges allow them, some don't. "There is a real concern that AI enhancement may exacerbate already existing inequalities in access to justice," she said.

AI is also routinely used to match surveillance video with potential suspects through facial recognition technology. But the system is far from foolproof. One recent Washington Post investigation found that at least eight people have been wrongly arrested after being identified by facial recognition software.
"People are so accustomed to thinking that the technological solution is the trusted solution that even if it is a low-quality image or video, if it is run through AI, people will trust that it is an accurate match," she said.

Educating judges and jurors

Ristovska has studied video evidence and the impact it can have on human rights for most of her career. She founded the Visual Evidence Lab in April to bring together experts from across multiple disciplines to address the shifting landscape around video use in the courtroom.

In addition to new trainings for judges and jurors, the report calls for the establishment of a new system for storing and retrieving evidentiary videos, which are far harder for journalists and the public to access than text court records. It also calls on technology companies to develop ways that make it easier for viewers to detect deepfakes without putting videographers who want to remain anonymous (like whistleblowers and activists) in jeopardy.

"Our hope is that this report inspires legal reforms, policy proposals based on science and more research," Ristovska said. "This is just the beginning."
A California judge dismissed a housing dispute case and denied reconsideration after detecting AI-generated deepfake video testimony, marking one of the first documented instances of such evidence in court. Legal experts warn that courts lack proper protocols to handle the growing threat of AI-generated evidence.
In what appears to be one of the first documented instances of deepfake evidence being submitted in a U.S. court, California Judge Victoria Kolakowski dismissed a housing dispute case, and later denied reconsideration, after detecting AI-generated video testimony. The case, Mendones v. Cushman & Wakefield, has drawn national attention as courts grapple with the growing threat of artificial intelligence-generated evidence [1][2].
Judge Kolakowski became suspicious of Exhibit 6C, a video purporting to show witness testimony, after noticing several telltale signs of AI generation. The witness displayed a monotone voice, fuzzy facial features, and repeated facial expressions with minimal movement beyond lip-syncing. Concluding that the video had the hallmarks of generative AI, Kolakowski dismissed the case on September 9 and denied a motion for reconsideration in November [3].

Legal experts warn that the judicial system faces an unprecedented challenge as AI-generated content increasingly infiltrates courtrooms. With more than 80% of court cases now relying to some degree on video evidence, the potential for manipulation poses a significant threat to the justice system [4].
"The judiciary in general is aware that big changes are happening and want to understand AI, but I don't think anybody has figured out the full implications," Judge Kolakowski told NBC News. "We're still dealing with a technology in its infancy" [3].

Judge Scott Schlegel of Louisiana, despite being a proponent of judicial AI adoption, expressed concerns about the technology's potential for abuse. He warned that AI-cloned voices could generate threatening recordings, falsely affecting decisions in restraining order cases. "The judge will sign that restraining order. They will sign every single time," Schlegel said, referring to hypothetical AI-generated evidence [1].

Currently, courts operate without centralized tracking systems or formal protocols for handling AI-generated evidence. The National Center for State Courts and the Thomson Reuters Institute have begun classifying deepfakes as "unacknowledged AI evidence" and offer judges checklists to verify origin and authenticity, but these measures require extraordinary judicial review beyond normal procedures [1].
Judge Erica Yew of California's Santa Clara County Superior Court noted the absence of any centralized reporting system: "I am not aware of any repository where courts can report or memorialize their encounters with deep-faked evidence. I think AI-generated fake or modified evidence is happening much more frequently than is reported publicly" [3].

In May, the U.S. Judicial Conference's Advisory Committee on Evidence Rules rejected proposals to update guidance on AI evidence, arguing that "existing standards of authenticity are up to the task of regulating AI evidence." However, the committee indicated openness to future changes, which could take years to implement [2].
Brian Long, CEO of Adaptive Security, emphasized the growing sophistication of AI-generated content: "The hard truth is that next-gen AI makes these fakes incredibly convincing, and detection tools are not keeping up." He recommended that law firms implement new verification processes, including confirming evidence through secondary channels and requesting original source files.
The recent release of OpenAI's Sora 2 video generation tool has heightened concerns among legal professionals. Users quickly demonstrated the ability to create realistic videos of people committing crimes, showing how easily fraudulent evidence could be manufactured [2].

A University of Colorado Boulder report compiled by 20 experts calls for specialized training for judges and jurors to help them critically evaluate AI-enhanced footage, as well as national standards governing permissible AI use in courts. The report warns that without proper protocols, the justice system risks admitting entirely fabricated material that appears authentic [4].

Summarized by Navi