4 Sources
[1]
AI ruling prompts warnings from US lawyers: Your chats could be used against you
April 15 (Reuters) - As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him. In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic's Claude and OpenAI's ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases. "We are telling our clients: You should proceed with caution here," said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim.

People's discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private. In emails to clients and advisories posted on their websites, more than a dozen major U.S. law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court. Similar warnings are also appearing in hiring agreements by some firms with their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer's advice or communications with a chatbot could erase the legal protection known as attorney-client privilege that usually shields communications between lawyers and their clients.

A JUDICIAL RULING

The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent (BENF.O).
Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty. Heppner had used Anthropic's chatbot Claude to prepare reports about his case to share with his attorneys, who later argued that his AI exchanges should be withheld because they contained details from the lawyers related to his defense. Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots. Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications.

Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic's chatbot Claude related to the case. No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote. Lawyers for Heppner did not immediately respond to requests for comment. A spokesperson for the U.S. attorney's office in Manhattan declined to comment.

Courts already are grappling with the growing use of artificial intelligence by lawyers and people representing themselves in legal cases, which among other things has led to legal filings containing made-up cases invented by AI. Rakoff's decision was an important early test in the AI chatbot era for bedrock legal protections governing attorney-client communications and materials prepared for litigation. On the same day as Rakoff's ruling, U.S. Magistrate Judge Anthony Patti in Michigan said a woman representing herself in a lawsuit she brought against her former company did not have to hand over her chats with OpenAI's ChatGPT about the employment claims made in the case.
Patti treated the woman's AI chats as part of her own personal "work-product" for the case, rather than as conversations with a person who her employer could seek to use for its defense. ChatGPT and other generative AI programs "are tools, not persons," Patti wrote in his order. The privacy and usage terms for both OpenAI and Anthropic state that the companies can share data involving their users with third parties. Both also state that they require users to consult a qualified professional before relying on their chatbots for legal advice. Rakoff at a February hearing in Heppner's case noted that Claude "expressly provided that users have no expectation of privacy in their inputs." Representatives for OpenAI and Anthropic did not immediately respond to requests for comment.

LAWYERS RACE TO SET GUARDRAILS

The advice from lawyers has ranged from telling clients to select their AI platforms carefully to suggesting specific language to use in chatbot prompts. Los Angeles-based O'Melveny & Myers and other firms said in client advisories that "closed" AI systems designed for corporate use could provide stronger protections for legal communications, though they said even that remains largely untested. Some firms said AI legal research is more likely to be protected by attorney-client privilege when it is conducted at the direction of a lawyer. If a lawyer does advise the use of AI, a person should say so in the chatbot prompt, New York-headquartered law firm Debevoise & Plimpton said in a notice on its website. "I am doing this research at the direction of counsel for X litigation," the firm suggested people write. Information about AI use is also becoming common in contracts used by law firms with clients, according to a Reuters review of contracts posted to a U.S. government website.
Sher Tremonte, which often represents white-collar criminal defendants, said in a new contract in March: "Disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege." Justin Ellis of New York-headquartered law firm MoloLamken and other lawyers said they expect that more rulings will eventually clarify when AI chats can be used as evidence. Until then, attorneys are saying that an age-old assumption still applies: Do not talk to anyone except your lawyer about your case - including AI.

Reporting by Mike Scarcella; Editing by David Bario, Amy Stevens and Will Dunham
[2]
A US judge ruled that a fraud defendant's AI chats with Claude are not privileged
In a February ruling described as the first of its kind in the US, Judge Jed Rakoff found that Bradley Heppner's conversations with Anthropic's Claude about his legal exposure stripped away both attorney-client privilege and work-product protection, because an AI is not a lawyer and public AI platforms have no confidentiality obligation. More than a dozen major law firms have since issued client advisories. A landmark US federal court ruling has prompted a wave of legal warnings across the country: if you use a publicly available AI chatbot to research or discuss your legal situation, those conversations may be seized, disclosed to opposing counsel, and used as evidence against you. The case that set off the alarm is United States v. Heppner, in which Judge Jed S. Rakoff of the Southern District of New York ruled in February 2026 that a criminal defendant's private conversations with Anthropic's Claude AI were neither protected by attorney-client privilege nor covered by the work-product doctrine. The ruling, delivered orally on 10 February and followed by a written opinion on 17 February, is described by legal observers as the first decision of its kind in the United States on the question of whether AI chatbot conversations carry legal protection. The defendant, Bradley Heppner, was the former chairman of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent. He was charged by federal prosecutors in November 2025 with securities and wire fraud, and pleaded not guilty. After receiving a grand jury subpoena and before engaging defence counsel formally, Heppner used Claude to analyse his legal exposure, outline potential defence strategies, and develop legal arguments, acting on his own initiative rather than under direction from his attorneys. When the FBI searched his home, it seized approximately 31 documents memorialising these AI conversations. 
The government sought their production; Heppner argued they were privileged. Rakoff rejected that argument on three grounds. First, attorney-client privilege protects communications between a client and an attorney. Claude is not an attorney, has no law licence, owes no duty of loyalty, and cannot form a privileged relationship. As Rakoff put it from the bench, Heppner had "disclosed it to a third party, in effect, AI, which had no obligation of confidentiality." Second, there was no reasonable expectation of confidentiality: the judge examined Anthropic's terms of service and privacy policy, which explicitly permit data collection, use of inputs and outputs to train the model, and disclosure to third parties including governmental regulatory authorities. By clicking accept, Heppner had consented to a disclosure framework that is incompatible with privilege. Third, work-product protection did not apply because Heppner was not acting at the direction of his lawyers when he queried Claude, and the documents did not reflect his attorneys' strategy at the time of creation. On the same day as Rakoff's ruling, a federal magistrate judge in Michigan reached what initially appears to be the opposite conclusion. In Warner v. Gilbarco, Inc., Magistrate Judge Anthony Patti held that a pro se plaintiff's ChatGPT conversations about her employment discrimination case were protected as work product, reasoning that AI tools are "tools, not persons" and that work-product waiver requires disclosure to an adversary, not merely to a software platform. A third case, Morgan v. V2X (D. Colo., March 2026), reached a similar conclusion for another pro se litigant. Legal analysts note that these cases are factually distinguishable from Heppner: the plaintiffs in Warner and Morgan were self-represented, governed by a civil procedure rule expressly protective of work product, while Heppner was a represented criminal defendant who acted without attorney guidance. 
The courts themselves acknowledged they were not laying down broad rules for all scenarios. The practical impact has been immediate. Reuters reported that more than a dozen major US law firms have issued client advisories warning against using public AI platforms for anything touching legal matters. New York firm Sher Tremonte has gone further, adding contractual language to client engagement agreements stating that sharing a lawyer's advice or communications with a chatbot could erase attorney-client privilege. The consensus guidance from firms including Orrick, Crowell & Moring, and Fisher Phillips is consistent: treat public AI platforms as an inherently non-confidential environment and assume anything typed could be disclosed; use only private, closed AI deployments whose terms of service do not permit training on inputs or disclosure to third parties; and always obtain explicit attorney direction before using any AI system in connection with legal matters.
[3]
AI ruling prompts warnings from US lawyers: Your chats could be used against you - The Economic Times
Lawyers in the US are cautioning clients about using AI chatbots for legal matters. Recent court rulings indicate that conversations with AI like Claude and ChatGPT may not be protected by attorney-client privilege. This means sensitive case details shared with AI could be revealed to prosecutors or opposing parties in legal disputes.

As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him. In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic's Claude and OpenAI's ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases. "We are telling our clients: You should proceed with caution here," said Alexandria Gutierrez Swette, a lawyer at New York-based law firm Kobre & Kim. People's discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private. In emails to clients and advisories posted on their websites, more than a dozen major U.S. law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court. Similar warnings are also appearing in hiring agreements by some firms with their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer's advice or communications with a chatbot could erase the legal protection known as attorney-client privilege that usually shields communications between lawyers and their clients.
A judicial ruling

The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent. Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty. Heppner had used Anthropic's chatbot Claude to prepare reports about his case to share with his attorneys, who later argued that his AI exchanges should be withheld because they contained details from the lawyers related to his defense. Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots. Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications. Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic's chatbot Claude related to the case. No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote. Lawyers for Heppner did not immediately respond to requests for comment. A spokesperson for the U.S. attorney's office in Manhattan declined to comment. Courts already are grappling with the growing use of artificial intelligence by lawyers and people representing themselves in legal cases, which among other things has led to legal filings containing made-up cases invented by AI. Rakoff's decision was an important early test in the AI chatbot era for bedrock legal protections governing attorney-client communications and materials prepared for litigation. On the same day as Rakoff's ruling, U.S. Magistrate Judge Anthony Patti in Michigan said a woman representing herself in a lawsuit she brought against her former company did not have to hand over her chats with OpenAI's ChatGPT about the employment claims made in the case. Patti treated the woman's AI chats as part of her own personal "work-product" for the case, rather than as conversations with a person who her employer could seek to use for its defense. ChatGPT and other generative AI programs "are tools, not persons," Patti wrote in his order. The privacy and usage terms for both OpenAI and Anthropic state that the companies can share data involving their users with third parties. Both also state that they require users to consult a qualified professional before relying on their chatbots for legal advice. Rakoff at a February hearing in Heppner's case noted that Claude "expressly provided that users have no expectation of privacy in their inputs." Representatives for OpenAI and Anthropic did not immediately respond to requests for comment.

Lawyers race to set guardrails

The advice from lawyers has ranged from telling clients to select their AI platforms carefully to suggesting specific language to use in chatbot prompts. Los Angeles-based O'Melveny & Myers and other firms said in client advisories that "closed" AI systems designed for corporate use could provide stronger protections for legal communications, though they said even that remains largely untested. Some firms said AI legal research is more likely to be protected by attorney-client privilege when it is conducted at the direction of a lawyer. If a lawyer does advise the use of AI, a person should say so in the chatbot prompt, New York-headquartered law firm Debevoise & Plimpton said in a notice on its website. "I am doing this research at the direction of counsel for [X] litigation," the firm suggested people write.
Information about AI use is also becoming common in contracts used by law firms with clients, according to a Reuters review of contracts posted to a U.S. government website. Sher Tremonte, which often represents white-collar criminal defendants, said in a new contract in March: "Disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege." Justin Ellis of New York-headquartered law firm MoloLamken and other lawyers said they expect that more rulings will eventually clarify when AI chats can be used as evidence. Until then, attorneys are saying that an age-old assumption still applies: Do not talk to anyone except your lawyer about your case - including AI.
[4]
AI chatbot conversations can be used against people in court, lawyers warn after federal ruling
April 15 - As people increasingly turn to artificial intelligence for advice, some US lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him. In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic's Claude and OpenAI's ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases. "We are telling our clients: You should proceed with caution here," said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim. People's discussions with their lawyers are almost always deemed confidential under US law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private. In emails to clients and advisories posted on their websites, more than a dozen major US law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court. Similar warnings are also appearing in hiring agreements by some firms with their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer's advice or communications with a chatbot could erase the legal protection known as attorney-client privilege that usually shields communications between lawyers and their clients. The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent. Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty. 
Heppner had used Anthropic's chatbot Claude to prepare reports about his case to share with his attorneys, who later argued that his AI exchanges should be withheld because they contained details from the lawyers related to his defense. Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots. Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications. Manhattan-based US District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic's chatbot Claude related to the case. No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote. Lawyers for Heppner did not immediately respond to requests for comment. A spokesperson for the US attorney's office in Manhattan declined to comment. Courts already are grappling with the growing use of artificial intelligence by lawyers and people representing themselves in legal cases, which, among other things, has led to legal filings containing made-up cases invented by AI. Rakoff's decision was an important early test in the AI chatbot era for bedrock legal protections governing attorney-client communications and materials prepared for litigation. On the same day as Rakoff's ruling, US Magistrate Judge Anthony Patti in Michigan said a woman representing herself in a lawsuit she brought against her former company did not have to hand over her chats with OpenAI's ChatGPT about the employment claims made in the case. Patti treated the woman's AI chats as part of her own personal "work-product" for the case, rather than as conversations with a person who her employer could seek to use for its defense. ChatGPT and other generative AI programs "are tools, not persons," Patti wrote in his order. 
The privacy and usage terms for both OpenAI and Anthropic state that the companies can share data involving their users with third parties. Both also state that they require users to consult a qualified professional before relying on their chatbots for legal advice. Rakoff at a February hearing in Heppner's case noted that Claude "expressly provided that users have no expectation of privacy in their inputs." Representatives for OpenAI and Anthropic did not immediately respond to requests for comment. The advice from lawyers has ranged from telling clients to select their AI platforms carefully to suggesting specific language to use in chatbot prompts. Los Angeles-based O'Melveny & Myers and other firms said in client advisories that "closed" AI systems designed for corporate use could provide stronger protections for legal communications, though they said even that remains largely untested. Some firms said AI legal research is more likely to be protected by attorney-client privilege when it is conducted at the direction of a lawyer. If a lawyer does advise the use of AI, a person should say so in the chatbot prompt, New York-headquartered law firm Debevoise & Plimpton said in a notice on its website. "I am doing this research at the direction of counsel for X litigation," the firm suggested people write. Information about AI use is also becoming common in contracts used by law firms with clients, according to a Reuters review of contracts posted to a US government website. Sher Tremonte, which often represents white-collar criminal defendants, said in a new contract in March: "Disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege." Justin Ellis of New York-headquartered law firm MoloLamken and other lawyers said they expect that more rulings will eventually clarify when AI chats can be used as evidence. 
Until then, attorneys are saying that an age-old assumption still applies: Do not talk to anyone except your lawyer about your case -- including AI.
A landmark federal court ruling has prompted urgent warnings from US lawyers about using AI chatbots for legal matters. Judge Jed Rakoff ruled that conversations with AI platforms like Claude are not protected by attorney-client privilege, meaning prosecutors can demand these chats as evidence. More than a dozen major law firms are now advising clients to treat AI chatbot conversations as non-confidential.
A groundbreaking federal court ruling has sent shockwaves through the legal community, prompting warnings from US lawyers that AI chatbot conversations could be seized and used as evidence in court. In February, U.S. District Judge Jed Rakoff in Manhattan ruled that Bradley Heppner, former chair of bankrupt financial services company GWG Holdings, must hand over 31 documents generated through his conversations with Anthropic's Claude chatbot [1]. Heppner, charged with securities fraud and wire fraud in November, had used Claude to prepare reports about his case before sharing them with his attorneys.
The ruling established that no attorney-client privilege exists "or could exist, between an AI user and a platform such as Claude," according to Rakoff's written opinion [3]. The decision is described as the first of its kind in the United States on the question of whether AI chats are privileged, fundamentally challenging how people use AI tools for legal advice.

The federal court ruling hinged on three critical factors that stripped away both attorney-client privilege and work-product doctrine protections. First, ChatGPT, Claude, and similar AI platforms are not lawyers and cannot form privileged relationships with users. Second, Rakoff examined Anthropic's terms of service, which explicitly permit data collection and disclosure to third parties, including governmental regulatory authorities. Claude "expressly provided that users have no expectation of privacy in their inputs," the judge noted during a February hearing [1].
Third, Heppner was not acting under his attorneys' direction when he queried Claude, meaning the documents did not reflect his lawyers' strategy at the time of creation. Voluntarily revealing information from a lawyer to any third party can jeopardize customary legal protections for attorney communications, creating a risk of disclosure in legal proceedings [4]. The privacy and usage terms for both OpenAI and Anthropic explicitly state that the companies can share data involving their users with third parties, reinforcing the lack of confidentiality inherent in public AI platforms [3].

In the wake of this federal court ruling, more than a dozen major US law firms have issued urgent client advisories warning against using AI chatbot conversations for legal matters. "We are telling our clients: You should proceed with caution here," said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim [1]. These warnings from US lawyers emphasize that chats could be used against you by prosecutors in criminal cases or by litigation adversaries in civil cases.

New York-based firm Sher Tremonte has taken the extraordinary step of adding contractual language to client engagement agreements, stating that sharing a lawyer's advice or communications with a chatbot could erase attorney-client privilege protections [1]. Law firms including Orrick, Crowell & Moring, and Fisher Phillips have issued consistent guidance: treat public AI platforms as inherently non-confidential environments and assume anything typed could become evidence or face disclosure.
On the same day as Rakoff's New York ruling, U.S. Magistrate Judge Anthony Patti in Michigan reached what initially appears to be a contrasting conclusion. In Warner v. Gilbarco, Inc., Patti held that a pro se plaintiff's ChatGPT conversations about her employment discrimination case were protected under the work-product doctrine [1]. Patti reasoned that AI tools are "tools, not persons," and that work-product waiver requires disclosure to an adversary, not merely to a software platform [4].

A third case, Morgan v. V2X in Colorado in March, reached a similar conclusion for another self-represented litigant. Legal analysts note these cases are factually distinguishable: the plaintiffs in Warner and Morgan were self-represented and governed by civil procedure rules expressly protective of work product, while Heppner was a represented criminal defendant who acted without attorney guidance. The courts themselves acknowledged they were not establishing broad rules for all scenarios, leaving the legal landscape uncertain for those using AI tools for legal advice.
The immediate practical impact centers on how individuals and companies navigate privacy risks when using AI chatbot conversations for legal matters. Los Angeles-based O'Melveny & Myers and other law firms have advised in client communications that "closed" AI systems designed for corporate use could provide stronger protections for legal communications, though even that remains largely untested [4]. The consensus guidance emphasizes using only private AI deployments whose terms of service do not permit training on inputs or disclosure to third parties, and always obtaining explicit attorney direction before using any AI system in connection with legal proceedings.
Courts are already grappling with the growing use of artificial intelligence by lawyers and people representing themselves in legal cases, which has led to legal filings containing fabricated cases invented by AI [3]. Rakoff's decision represents an important early test in the AI chatbot era for bedrock legal protections governing attorney-client communications and materials prepared for litigation. As people increasingly turn to artificial intelligence for advice when their freedom or legal liability is on the line, the question of confidentiality and potential waiver of privilege will likely generate additional court battles and regulatory scrutiny in coming months.

Summarized by Navi