Curated by THEOUTPOST
On Sat, 5 Apr, 12:03 AM UTC
15 Sources
[1]
Judge berates AI entrepreneur for using a generated 'lawyer' in court
A man's recent attempt to use an AI-generated avatar in his legal appeal made an immediate impression on a New York courtroom, but probably not the one he was hoping for. Jerome Dewald -- a 74-year-old who, The Register notes, is behind a startup that says it's "revolutionizing legal self-representation with AI" -- was chewed out during an employment dispute hearing on March 26th for failing to inform judges that he had artificially generated the man presenting his oral argument. While the court had approved Dewald to submit a video for his case, Justice Sallie Manzanet-Daniels became confused when the unknown speaker, who clearly wasn't Dewald, appeared on the screen. "Hold on," Manzanet-Daniels said, interrupting the video after the avatar had barely finished its first sentence. "Is that counsel for the case?" "I generated that," Dewald responded. "It's not a real person." Dewald told The Register that the avatar -- a "big, beautiful hunk of a guy" called Jim -- was one of the stock options provided by an AI avatar company called Tavus. Dewald says the video was submitted due to difficulties he experiences with extended speaking, but the courtroom was unaware that the video contents were artificially generated. "It would have been nice to know that when you made your application. You did not tell me that, sir. I don't appreciate being misled," said Manzanet-Daniels, responding to Dewald's admission. "You are not going to use this courtroom as a launch for your business." This is the latest of several snafus that have occurred when people try to mix legal processes with AI technology. Two attorneys and a law firm were penalized in 2023 for submitting fictitious legal research that had been made up by ChatGPT. DoNotPay, a "robot lawyer" company, was also ordered to pay the FTC a $193,000 settlement in February for advertising, without evidence, that its AI legal representation is as good as a real human lawyer.
[2]
Judge slams AI entrepreneur for having avatar testify
We hear from court-scolded Jerome Dewald, who insists lawyer-bots have a future Interview The founder of an AI startup who attempted to use an artificially generated avatar to argue his case in court has been scolded by a judge for the stunt. The avatar - its appearance and voice created by software - appeared on behalf of Jerome Dewald, the plaintiff in an employment dispute with insurance firm MassMutual Metro New York, at a March 26 hearing before the US state's supreme court appellate division. During oral arguments, Dewald asked for a video to be played depicting a man in a V-neck sweater to the five-judge panel. The video opened: "Now may it please the court, I come here today a humble pro se before a panel of five distinguished justices..." A pro se being someone representing themselves. Confused by the unknown speaker, one of the judges, Associate Justice Sallie Manzanet-Daniels, immediately interrupted to ask who was addressing the court. "Is this ... hold on? Is that counsel for the case?" "That? I generated that," replied Dewald, who was physically sitting before the panel of judges in the hearing. "I'm sorry?" the judge said. "I generated that," Dewald reiterated. "That is not a real person." "Okay," the judge snapped. "It would have been nice to know that when you made your application. You did not tell me that, sir." That application being Dewald's request to play a video arguing his case, as according to him a medical condition had left the entrepreneur unable to easily address the court verbally in person at length. The panel was not expecting a computer-imagined person to show up, however. "You have appeared before this court and been able to testify verbally in the past," Judge Manzanet-Daniels continued. "You have gone to my clerk's office and held verbal conversations with our staff for over 30 minutes. "I don't appreciate being misled. So either you are suffering from an ailment that prevents you from being able to articulate or you don't. You are not going to use this courtroom as a launch for your business, sir. If you want to have oral argument time you may stand up and give it to me." In an interview with The Register this week, Dewald said: "I asked the court for permission in advance and they gave it to me. So they were not unprepared to have the presentation. They were unprepared to see an artificially generated image." The judge's reference to an ailment refers to Dewald's bout with throat cancer 25 years ago. "Extended speaking is problematic for me," he explained. "I mean, I can go through the different things that happened, but that was part of the reason that they agreed to let me do the presentation." Dewald, who operates a startup called Pro Se Pro that aims to help unrepresented litigants navigate the US legal system without hiring lawyers, had planned to use an AI service called Tavus to create a realistic video avatar of himself to read his argument to the court. "I did get a permission in advance," he claimed. "I intended to use my own replica that would have been an image of me talking. But the technology is fairly new. I had never made a replica before of myself or anybody."
Dewald explained that the process of creating an avatar to appear in court involves providing Tavus a two-to-four-minute video of the subject talking plus a one-minute segment that shows the subject standing still. That material is used to generate the subject's digital replica, a process that takes about two to four hours. He ended up using a default avatar, called Jim, rather than one of himself, though. "On my basic plan, I only get three replicas a month to generate," Dewald said. "So I was trying to be conservative. I tried one. It failed after about six hours. I tried another one. It failed after about eight hours. And by the time we were getting ready for the hearing I still didn't have my own replica. So I just used one of their stock replicas, that big, beautiful hunk of a guy that they call Jim." "Jim" only got a few words out at the hearing before being cut off. "So you can see the judge was upset, she was really upset in the beginning," said Dewald, who ended up addressing the court himself. "And then when I started giving my presentation, as poorly as I did, she seemed to become much more sympathetic. The look on her face was more like, 'Well I'm sorry I chewed you out so badly.'" While there have been several instances of attorneys being chided by judges for filing court documents with AI-generated inaccuracies, Dewald believes the judge's ire in this matter was due to being surprised by an unexpected person on the video presentation. Asked about whether the court's reaction gave him pause about the viability of AI applications in legal matters, Dewald said, "I don't know, but the technology has changed so quickly. You know my site, we get a fair number of views but we really don't get much business out of it." Dewald said he'd been unable to develop his AI legal business due to lack of funding and other concerns, so it had remained untended for about a year. "In the artificial intelligence world, a year is like an eon," he said. "When I put it up, we're still working on the level of ChatGPT-3.5. Our centerpiece was a scenario analyzer. That is an AI piece that interviews a pro se [litigant] and then gives some advice. I would argue it's not legal advice, but you can argue what you want. "And that piece worked kind of OK. It involved a tech stack that had some Amazon pieces in it that have now been deprecated. And the whole landscape has changed so much that that site needs to be rebuilt with agentic AI now, because you can just do so much more." Dewald downplayed the judge's admonition not to use the court as a venue to promote his biz. "There was nothing there that was promoting any business that I have," he said. Asked about whether AI should be accepted in courtrooms, Dewald said: "I think the courts eye it very skeptically. With respect to the replica and the presentation that I did, there could be no hallucinations in that unless there were hallucinations of the script that I gave them to read. I did use a generative AI to draft the script, but I also checked it very thoroughly. I've been doing this for a long time." Dewald has a background in engineering and computer science, and is not a lawyer. He said he was admitted to law school in New York in the 1970s but never attended. He also said he recently sat a law school admission test, is a member of some bar associations, and follows the evolving use of AI in the law.
Citing a panel discussion with several New York justices about a year ago, he said the recommendation was that the use of AI should be disclosed to your opponent and the court. "I've been doing that for over a year," said Dewald. "I'm not sure how useful it is because in some respects, full and open disclosure, on the other side of the coin, I think it tends to be discriminatory sometimes. It tends to prejudice readers against you because there's such a negative view of hallucinations and AI." He said hallucinations represent a real problem for AI, along with misstating actual citations and misinterpreting the premise of a case. He added he's very thorough when checking the accuracy of AI output. Dewald pointed to a recent American Bar Association seminar, Navigating Artificial Intelligence in the Judiciary, that covered guidelines for the responsible use of AI tools by judicial officers. The seminar attempts to grapple with common concerns about AI, such as model bias, hallucination, and confidentiality. It also mentions the risk of pushing the boundaries of legal norms, as when AI models are used to generate output outside the court record that might be put forth as a sort of uncredentialed expert witness. Courts have also taken to using AI. As the seminar notes, Arizona courts have deployed digital avatars to summarize decisions. Dewald reckons AI can help pro se litigants, and "actually tends to empower unrepresented litigants, as it gives them a voice that they wouldn't normally have in the courtroom." He added he already filed an apology with the court because it was "a mistake not to be fully transparent" and warn the justices his argument would be presented by an avatar.
[3]
An AI avatar tried to argue a case before a New York court. The judges weren't having it
NEW YORK (AP) -- It took only seconds for the judges on a New York appeals court to realize that the man addressing them from a video screen -- a person about to present an argument in a lawsuit -- not only had no law degree, but didn't exist at all. The latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world unfolded March 26 under the stained-glass dome of New York State Supreme Court Appellate Division's First Judicial Department, where a panel of judges was set to hear from Jerome Dewald, a plaintiff in an employment dispute. "The appellant has submitted a video for his argument," said Justice Sallie Manzanet-Daniels. "Ok. We will hear that video now." On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater. "May it please the court," the man began. "I come here today a humble pro se before a panel of five distinguished justices." "Ok, hold on," Manzanet-Daniels said. "Is that counsel for the case?" "I generated that. That's not a real person," Dewald answered. It was, in fact, an avatar generated by artificial intelligence. The judge was not pleased. "It would have been nice to know that when you made your application. You did not tell me that sir," Manzanet-Daniels said before yelling across the room for the video to be shut off. "I don't appreciate being misled," she said before letting Dewald continue with his argument. Dewald later penned an apology to the court, saying he hadn't intended any harm. He didn't have a lawyer representing him in the lawsuit, so he had to present his legal arguments himself. And he felt the avatar would be able to deliver the presentation without his own usual mumbling, stumbling and tripping over words. In an interview with The Associated Press, Dewald said he applied to the court for permission to play a prerecorded video, then used a product created by a San Francisco tech company to create the avatar. Originally, he tried to generate a digital replica that looked like him, but he was unable to accomplish that before the hearing. "The court was really upset about it," Dewald conceded. "They chewed me up pretty good." Even real lawyers have gotten into trouble when their use of artificial intelligence went awry. In June 2023, two attorneys and a law firm were each fined $5,000 by a federal judge in New York after they used an AI tool to do legal research, and as a result wound up citing fictitious legal cases made up by the chatbot. The firm involved said it had made a "good faith mistake" in failing to understand that artificial intelligence might make things up. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations. Those were errors, but Arizona's Supreme Court last month intentionally began using two AI-generated avatars, similar to the one that Dewald used in New York, to summarize court rulings for the public. On the court's website, the avatars -- who go by "Daniel" and "Victoria" -- say they are there "to share its news." Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he wasn't surprised to learn of Dewald's introduction of a fake person to argue an appeals case in a New York court.
"From my perspective, it was inevitable," he said. He said it was unlikely that a lawyer would do such a thing because of tradition and court rules and because they could be disbarred. But he said individuals who appear without a lawyer and request permission to address the court are usually not given instructions about the risks of using a synthetically produced video to present their case. Dewald said he tries to keep up with technology, having recently listened to a webinar sponsored by the American Bar Association that discussed the use of AI in the legal world. As for Dewald's case, it was still pending before the appeals court as of Thursday.
[4]
Pro Tip: Don't Send Your AI Avatar to Testify for You in Court
Generally speaking, it is not considered a good idea to represent yourself in court. But there is another, potentially worse route: you can say that you are going to represent yourself, then pass the task on to an AI avatar in order to show off the capabilities of your startup. That appears to be the approach that AI entrepreneur Jerome Dewald took, according to a report from The Register, and it was not well received by the court. Here's the situation that Dewald seemingly found himself in: Dewald is the plaintiff in an employment dispute with insurance firm MassMutual Metro New York and was scheduled to make an argument before the court on March 26, 2025. Dewald was diagnosed with throat cancer 25 years ago and, according to his account to The Register, still suffers from the effects of it, making continuous speaking challenging. So he asked the court if he could submit a video to make his statement -- a reasonable enough request that the court seemed to have approved in advance. What the court did not approve, based on the judge's reaction, was the video that Dewald submitted, which was not him making a statement but instead a handsomely generic guy who the judges had never seen before. Just a couple seconds into the unnamed business stud's statement, Associate Justice Sallie Manzanet-Daniels cut off the video and asked "Is that counsel for the case?" It was at that point Dewald revealed that the person making the statement wasn't a person at all -- it was an AI-generated video. "That is not a real person," he told the court. It was about then that Justice Manzanet-Daniels lost it. "It would have been nice to know that when you made your application. You did not tell me that, sir," she said, noting that Dewald had testified at length previously and had conversations with the clerk's office for extended periods without issue. "I don't appreciate being misled. So either you are suffering from an ailment that prevents you from being able to articulate or you don't," she said. Now would probably be a good time to mention that Dewald heads an AI startup called Pro Se Pro that helps people represent themselves in legal matters with AI tools -- a fact the judge seemed to know, as she told him, "You are not going to use this courtroom as a launch for your business, sir." To be fair to Dewald, it does not seem that his avatar was created with his own platform, which he told The Register has been at a standstill for about a year due to lack of funding. His AI representative, named "Jim," was created with a free trial to an AI service called Tavus. While he intended to make an AI version of himself to speak before the court, he couldn't get the trial to work so, he told The Register, he just settled for "one of their stock replicas, that big, beautiful hunk of a guy." Still, the appearance of self-promotion as well as the unexpected appearance of an AI avatar was enough to get Dewald a scolding from the court. He went on to make his argument on his own and has since copped to the fact that he probably should have provided a heads-up that he was going to use AI to present his case. Lesson learned.
[5]
'They chewed me up pretty good': A US plaintiff attempted to use an AI avatar to argue their court case and the judges were far from amused
AI has many uses -- or at least that's what we keep being told -- but one area where it might do some good is helping with the notoriously complex legal process. After all, should you find yourself faced with the task of self-representation in an upcoming court case, perhaps an AI may be able to present your arguments more successfully than you. Jerome Dewald tried just that in the New York State Supreme Court last month, but the judges were far from impressed by his creative use of technology. Dewald was representing himself as a plaintiff in an employment dispute, APNews reports, but felt that an AI avatar would deliver his opening presentation better than he could due to a tendency to mumble and trip over his words. He first attempted to generate a digital replica of himself with "a product created by a San Francisco tech company", but when he ran out of time he used a generic avatar instead. His AI representative was displayed on a video screen in the court as a "smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater." Unfortunately for Dewald, his AI stand-in was rumbled almost as soon as the pre-generated video began. Presenting Dewald's argument with the line "may it please the court, I come here today a humble pro se before a panel of five distinguished justices," the AI counsel was interrupted by Judge Manzanet-Daniels almost immediately: "Okay, hold on, is that counsel for the case?" she interjected, before demanding the video be shut off. "I generated that. It's not a real person," responded Dewald, attracting Manzanet-Daniels' apparent ire. "It would have been nice to know that when you made your application. You did not tell me that, sir... I don't appreciate being misled." "They chewed me up pretty good," said Dewald, in a later interview with AP News. "The court was really upset about it." Ouch. Dewald later wrote an apology to the court, explaining his tendency to stumble over his words and that he hadn't intended any harm. He had, however, asked for permission to play a prerecorded video, although apparently this did not give him consent to use an AI avatar to present his arguments. It's not the first time we've heard about the intersection between AI and legal services. Legal advice startup DoNotPay offers an AI legal assistant to help defendants with court proceedings, although the company found itself under the scrutiny of the FTC last year for a perceived lack of testing to back up its effectiveness claims. Nor is it the first time AI avatars have been used as stand-ins for real people. News startup Channel 1 showed off a proof-of-concept video of AI news hosts last year, and the results were quite convincing. However, this might be the first use of an AI avatar making arguments in a US court. Not that the AI representative got much past its preamble, but still, firsts are firsts. And given the often ridiculous expense associated with court proceedings, it does strike as a potentially useful workaround for those without access to a good lawyer -- or the funds to pay one, at the very least. Still, it didn't pass muster in this particular instance. While I'd be very surprised if summarisation tools like ChatGPT weren't being used all over the legal system at this point to scythe through complicated legal documents, it appears it may be a while yet before AI representation sets a legal precedent of its own.
[7]
Man employs AI avatar in legal appeal, and judge isn't amused
Jerome Dewald sat with his legs crossed and his hands folded in his lap in front of an appellate panel of New York state judges, ready to argue for a reversal of a lower court's decision in his dispute with a former employer. The court had allowed Dewald, who is not a lawyer and was representing himself, to accompany his argument with a prerecorded video presentation. As the video began to play, it showed a man seemingly younger than Dewald's 74 years wearing a blue collared shirt and a beige sweater and standing in front of what appeared to be a blurred virtual background. A few seconds into the video, one of the judges, confused by the image on the screen, asked Dewald if the man was his lawyer. "I generated that," Dewald responded. "That is not a real person." The judge, Justice Sallie Manzanet-Daniels of the Appellate Division's 1st Judicial Department, paused for a moment. It was clear she was displeased with his answer. "It would have been nice to know that when you made your application," she snapped at him. "I don't appreciate being misled," she added before yelling for someone to turn off the video. What Dewald failed to disclose was that he had created the digital avatar using artificial intelligence software, the latest example of AI creeping into the U.S. legal system in potentially troubling ways. The hearing at which Dewald made his presentation, on March 26, was filmed by court system cameras and reported earlier by The Associated Press. Reached Friday, Dewald, the plaintiff in the case, said he had been overwhelmed by embarrassment at the hearing. He said he had sent the judges a letter of apology shortly afterward, expressing his deep regret and acknowledging that his actions had "inadvertently misled" the court. He said he had resorted to using the software after stumbling over his words in previous legal proceedings. Using AI for the presentation, he thought, might ease the pressure he felt in the courtroom. He said he had planned to make a digital version of himself but had encountered "technical difficulties" in doing so, which prompted him to create a fake person for the recording instead. "My intent was never to deceive but rather to present my arguments in the most efficient manner possible," he said in his letter to the judges. "However, I recognize that proper disclosure and transparency must always take precedence." A self-described entrepreneur, Dewald was appealing an earlier ruling in a contract dispute with a former employer. He eventually presented an oral argument at the appellate hearing, stammering and taking frequent pauses to regroup and read prepared remarks from his cellphone. As embarrassed as he might be, Dewald could take some comfort in the fact that actual lawyers have gotten into trouble for using AI in court. In 2023, a New York lawyer faced severe repercussions after he used ChatGPT to create a legal brief riddled with fake judicial opinions and legal citations. The case showcased the flaws in relying on AI and reverberated throughout the legal trade. The same year, Michael Cohen, a former lawyer and fixer for President Donald Trump, provided his lawyer with phony legal citations he had gotten from Google Bard, an AI program. Cohen ultimately pleaded for mercy from the federal judge presiding over his case, emphasizing that he had not known the generative text service could provide false information. Some experts say that AI and large language models can be helpful to people who have legal matters to deal with but cannot afford lawyers. 
Still, the technology's risks remain. "They can still hallucinate -- produce very compelling looking information" that is actually "either fake or nonsensical," said Daniel Shin, the assistant director of research at the Center for Legal and Court Technology at the William & Mary Law School. "That risk has to be addressed."
[8]
A plaintiff tried to use an AI avatar in a legal appeal. It didn't work
"I don't appreciate being misled," said a New York judge after seeing an AI avatar used to represent a defendant. A defendant in a New York appeals court has been slammed by a judge for using an artificial intelligence avatar to represent himself in a recent case. A New York appeals court faced an unusual situation in late March when Jerome Dewald, representing himself in an employment dispute, submitted an AI-generated avatar to present his legal arguments via video, a livestream of the hearing shows. It's the latest example of artificial intelligence tools trickling their way into courtrooms. Within seconds of the video starting, Justice Sallie Manzanet-Daniels called for it to stop, asking whether the avatar was counsel for the case. "I generated that," 74-year-old Dewald responded, adding, "That is not a real person." The judge appeared displeased, retorting, "It would have been nice to know that when you made your application," stating that the defendant had previously appeared before the court and been able to testify verbally in the past. "I don't appreciate being misled," the judge added. She asked the defendant if he was suffering from an ailment that prevented him from articulating before adding, "You are not going to use this courtroom as a launch for your business," and then yelling, "Shut that off," pointing to the video screen. Dewald later apologized, explaining he thought the AI avatar would deliver his arguments more eloquently than he could. Speaking to The Associated Press, Dewald said he applied to the court for permission to play a prerecorded video, then used a San Francisco tech company to create the AI avatar. He originally tried to generate a digital replica of himself but was prevented by time constraints before the hearing. "The court was really upset about it," Dewald conceded, adding, "They chewed me up pretty good." Related: Meta's Llama 4 puts US back in lead to 'win the AI race' -- David Sacks The incident highlights growing challenges as AI enters the legal world. In 2023, a New York lawyer was blasted for citing fake cases generated by ChatGPT in a legal brief as part of a lawsuit against a Columbian airline. In March, Arizona's Supreme Court began using two AI-generated avatars, similar to the one that Dewald used in New York, to summarize court rulings for the public. In September, the US Federal Trade Commission took action against companies it claimed misled consumers using AI, including a firm that offered an AI lawyer.
[9]
Man attempts to use AI avatar in court, is immediately 'chewed out' by judges
TL;DR: In a New York court, 74-year-old Jerome Dewald attempted to use an AI-generated avatar to represent himself in a legal dispute. The judge quickly dismissed the footage, expressing displeasure. Dewald's case highlights concerns over inappropriate AI use in legal settings, echoing past incidents involving AI-generated errors in legal research. In an unconventional deployment of AI tools, a plaintiff in the New York courts has attempted to use an artificial intelligence avatar to represent himself in a legal dispute.
Credit: Supreme Court of the State of New York
The plaintiff in question, 74-year-old Jerome Dewald, appeared in the New York Supreme Court on March 26 regarding an employment matter. During the hearing, Dewald attempted to submit footage of an AI-generated avatar, which appeared in front of the judges and uttered a few words before being quickly shut down by the judge. "May it please the court. I come here today a humble pro se before a panel of five distinguished justices," the avatar began. Justice Sallie Manzanet-Daniels quickly questioned the footage, asking whether the avatar was, in fact, Dewald's legal counsel for the case. Dewald responded, confirming that it was an avatar generated by artificial intelligence. "I generated that. That's not a real person," Dewald answered. The judge immediately ordered the footage removed, highlighting displeasure towards Dewald's attempt. "It would have been nice to know that when you made your application. You did not tell me that sir," Manzanet-Daniels said. Dewald later claimed that he applied for permission to play pre-recorded footage for the appearance, utilizing a product from a San Francisco tech company to generate the avatar. He reportedly intended to generate a replica of himself, but was unable to achieve this due to time constraints. "The court was really upset about it," Dewald admitted. "They chewed me up pretty good." Dewald's attempt joins a growing number of cases of AI being inappropriately deployed into legal settings. Two New York attorneys were fined in 2023 for using AI tools to conduct legal research that was later found to be full of hallucinations. President Trump's former lawyer, Michael Cohen, was also criticized for using AI tools to conduct legal research, eventually realizing that the tools could produce fictitious outputs.
[11]
A 74-Year-Old Needed a Lawyer, So He Used an AI Avatar in Court. It Didn't Go Well.
A judge listening to the case shut down the video within seconds. A New York courtroom came face-to-face with artificial intelligence last month when a plaintiff attempted to use an AI-generated avatar to present a case. Jerome Dewald, a 74-year-old plaintiff in an employment case, submitted an AI-generated video for his argument without telling judges beforehand. The video featured an AI-created person who didn't exist and was used to speak in his place. On March 26, the video played before five baffled New York State judges who were listening to the case at the New York State Supreme Court Appellate Division's First Judicial Department. The judges expected Dewald to speak on video, but the video he presented to them showed a young man in a button-down shirt and sweater. "May it please the court," said the AI-generated avatar. "I come here today a humble pro se before a panel of five distinguished justices." One of the judges, Justice Sallie Manzanet-Daniels, interrupted the presentation immediately before the avatar could speak another word. "Okay, hold on," she said. "Is that counsel for the case?" Dewald explained that it was not a real person and that he had generated it using AI. Manzanet-Daniels called for the video to be turned off. "I don't appreciate being misled," she said, noting that Dewald had not stated beforehand that he would be using AI to present his argument. Dewald was still allowed to make his argument himself, and he later wrote a letter of apology to the court explaining that he didn't have a lawyer and turned to AI to deliver his argument in a polished way, without stammering or pausing. "The court was really upset about it," Dewald told the Associated Press. He said that he had used a program from a San Francisco company to create the avatar. This isn't the first time AI has made an appearance in the courtroom. In June 2023, a federal judge leveled a $5,000 fine on two lawyers and their New York-based firm, Levidow, Levidow, & Oberman, P.C., for using ChatGPT in their arguments. The AI chatbot made up quotes, cases, and citations, creating a fake legal history. However, AI technology has also been allowed to help courts function. In June 2023, the Eleventh Judicial Circuit of Florida released an AI digital chatbot, Sandi, so that anyone who visits the Miami-Dade Courts website can receive assistance from the bot in either English or Spanish. The Arizona Supreme Court introduced two AI-generated avatars of its own, "Daniel" and "Victoria," in March to summarize court rulings for the public.
[14]
"That's not a real person": NY judges shut down AI avatar in courtroom twist
In a New York appeals court, Jerome Dewald used an AI-generated avatar to present his argument in an employment dispute, sparking confusion and frustration among judges. The incident highlights the increasing use of AI in the legal system, raising questions about transparency and the risks of relying on AI-generated content. Despite the controversy, experts acknowledge AI's potential benefits but warn about its limitations.
Jerome Dewald sat upright in front of five New York State appellate judges. A 74-year-old entrepreneur without legal training, he was there to contest a ruling in his employment case. But the real twist wasn't in the lawsuit. It came when the video he submitted began playing in court. A young, clean-cut man appeared on the courtroom screen. "May it please the court," the digital figure began. "I come here today a humble pro se before a panel of five distinguished justices." It took just seconds for Justice Sallie Manzanet-Daniels to interrupt. "Ok, hold on. Is that counsel for the case?" "No," Dewald replied from his seat. "I generated that. That's not a real person." The courtroom fell silent. The judge, visibly irked, responded sharply. "It would have been nice to know that when you made your application. You did not tell me that, sir," said Justice Manzanet-Daniels. She then raised her voice across the courtroom: "Shut that off. I don't appreciate being misled." The moment, captured on court cameras, has since stirred debate about the growing influence -- and misuse -- of AI in legal spaces. In an interview with The Associated Press, Dewald explained his reasoning. Representing himself in court had proven difficult. His voice often faltered under pressure. So, he applied to submit a pre-recorded video -- and used AI to deliver it more clearly. "I generated that," Dewald admitted again. "That is not a real person." He had used software developed by a San Francisco tech firm to build the avatar. Although he had initially intended to create a version that resembled him, technical issues got in the way. "The court was really upset about it," he said. "They chewed me up pretty good." In a letter to the court, Dewald later wrote, "My intent was never to deceive but rather to present my arguments in the most efficient manner possible. However, I recognise that proper disclosure and transparency must always take precedence." He added that the avatar was meant to compensate for his nervous speech: "I stumbled a lot in earlier hearings. I thought this might help." After the video was stopped, Dewald continued his argument the old-fashioned way. He spoke slowly, pausing often, and read from his phone. His embarrassment, however, isn't unique. Several licensed attorneys have also faced trouble for mishandling artificial intelligence in court. In June 2023, two New York lawyers and their firm were fined $5,000 each after submitting a legal brief filled with fabricated cases. They had relied on an AI chatbot, unaware that it could invent citations. The firm claimed it was a "good faith mistake". Later that year, former Trump lawyer Michael Cohen landed in similar trouble. He passed on legal citations to his attorney -- only to later discover they had been hallucinated by Google Bard. "I didn't know it could make stuff up," Cohen pleaded in court. Dewald's case underscores a larger shift. Courts are already experimenting with AI in controlled ways.
Just last month, the Arizona Supreme Court launched two AI avatars named "Daniel" and "Victoria" to summarise decisions on its public website. These, however, come with full transparency. "From my perspective, it was inevitable," said Daniel Shin, assistant director of research at the Center for Legal and Court Technology at William & Mary Law School. "They can still hallucinate -- produce very compelling looking information that is either fake or nonsensical. That risk has to be addressed." Shin noted that while licensed lawyers would likely avoid such tactics due to the risk of disbarment, individuals without legal counsel aren't given much guidance about the risks of AI. "There are no real instructions about this kind of thing," he said. Dewald said he had recently watched an American Bar Association webinar about AI in the legal world. He saw himself as simply trying to keep up. His case, still pending as of Thursday, may yet set precedent in how courts treat AI submissions -- especially from those without legal representation. But for now, the message from the bench is clear: the courtroom may be slow to adapt to avatars, even if technology continues to race ahead.
An AI entrepreneur's attempt to have an AI-generated avatar deliver his oral argument in a New York court backfires, raising questions about the role of AI in legal proceedings and the boundaries of courtroom technology.
In a recent New York court hearing, an unexpected attempt to use artificial intelligence in legal proceedings sparked controversy and drew sharp criticism from the presiding judge. Jerome Dewald, a 74-year-old AI entrepreneur, tried to present his case in an employment dispute using an AI-generated avatar, only to face immediate rebuke from Justice Sallie Manzanet-Daniels 1.
Dewald, who operates a startup called Pro Se Pro aimed at helping unrepresented litigants navigate the US legal system, had received permission to submit a video for his case. However, he failed to inform the court that the video would feature an artificially generated speaker 2.
The AI avatar, described as a "big, beautiful hunk of a guy" named Jim, was one of the stock options provided by an AI avatar company called Tavus. As the video began playing, Justice Manzanet-Daniels quickly interrupted, asking, "Is that counsel for the case?" 3
Upon learning that the speaker was AI-generated, Justice Manzanet-Daniels expressed her displeasure: "It would have been nice to know that when you made your application. You did not tell me that, sir. I don't appreciate being misled." She further admonished Dewald, stating, "You are not going to use this courtroom as a launch for your business." 1
Dewald later explained that he had intended to use an AI-generated replica of himself due to difficulties with extended speaking stemming from a bout with throat cancer 25 years ago. However, technical issues led him to use a stock avatar instead 2.
In an interview following the incident, Dewald admitted, "The court was really upset about it. They chewed me up pretty good." He has since penned an apology to the court, stating that he hadn't intended any harm 3.
This incident highlights the growing intersection of AI and legal processes, raising questions about the appropriate use of technology in courtrooms. It follows other recent controversies, including cases where attorneys were penalized for submitting AI-generated legal research containing fictitious information 4.
Despite this setback, the episode came as little surprise to legal experts. Daniel Shin, an adjunct professor at William & Mary Law School, commented, "From my perspective, it was inevitable." However, he noted that such experiments are more likely to come from individuals representing themselves than from licensed attorneys, who are bound by strict professional rules and risk disbarment 3.
As the legal community grapples with the implications of AI, this incident serves as a cautionary tale about the importance of transparency and adherence to courtroom protocols when introducing new technologies into legal proceedings 5.
A 74-year-old plaintiff's attempt to use an AI-generated lawyer avatar in a New York courtroom backfires, raising questions about the use of artificial intelligence in legal proceedings.
2 Sources
Morgan & Morgan, a major US law firm, warns its attorneys about the risks of using AI-generated content in court filings after a case involving fake citations. The incident highlights growing concerns about AI use in the legal profession.
9 Sources
The Arizona Supreme Court introduces AI-generated avatars, Victoria and Daniel, to deliver news about court rulings, aiming to improve public understanding and trust in the judicial system.
8 Sources
DoNotPay, the company behind the self-proclaimed "world's first robot lawyer," has been fined $193,000 by the Federal Trade Commission for false advertising and deceptive practices. The AI-powered legal service faced scrutiny for its bold claims and ineffective operations.
7 Sources
Stanford professor Jeff Hancock admits to using ChatGPT for organizing citations in a legal document supporting Minnesota's anti-deepfake law, leading to AI-generated false information in the affidavit.
2 Sources