Federal court ruling sparks warnings: AI chatbot conversations could be used as evidence


A landmark federal court ruling has prompted urgent warnings from US lawyers about using AI chatbots for legal matters. Judge Jed Rakoff ruled that conversations with AI platforms like Claude are not protected by attorney-client privilege, meaning prosecutors can demand these chats as evidence. More than a dozen major law firms are now advising clients to treat AI chatbot conversations as non-confidential.

Federal Court Ruling Strips Legal Protection from AI Chatbot Conversations

A groundbreaking federal court ruling has sent shockwaves through the legal community, prompting warnings from US lawyers that AI chatbot conversations could be seized and used as evidence in court. In February, U.S. District Judge Jed Rakoff in Manhattan ruled that Bradley Heppner, former chair of bankrupt financial services company GWG Holdings, must hand over 31 documents generated through his conversations with Anthropic's Claude chatbot [1]. Heppner, charged with securities fraud and wire fraud in November, had used Claude to prepare reports about his case before sharing them with his attorneys.

Source: New York Post

The ruling established that no attorney-client privilege exists "or could exist, between an AI user and a platform such as Claude," according to Rakoff's written opinion [3]. The decision is the first of its kind in the United States on whether AI chats are privileged, fundamentally challenging how people use AI tools for legal advice.

Lack of Confidentiality Undermines Traditional Legal Protections

The federal court ruling hinged on three critical factors that stripped away both attorney-client privilege and work-product doctrine protections. First, ChatGPT, Claude, and similar AI platforms are not lawyers and cannot form privileged relationships with users. Second, Rakoff examined Anthropic's terms of service, which explicitly permit data collection and disclosure to third parties, including governmental regulatory authorities. "Claude expressly provided that users have no expectation of privacy in their inputs," the judge noted during a February hearing [1].

Source: The Next Web

Third, Heppner was not acting under his attorneys' direction when he queried Claude, meaning the documents did not reflect his lawyers' strategy at the time of creation. Voluntarily sharing information received from a lawyer with any third party can jeopardize the customary legal protections for attorney communications, creating a risk of disclosure in legal proceedings [4]. The privacy and usage terms of both OpenAI and Anthropic explicitly state that the companies can share user data with third parties, reinforcing the lack of confidentiality inherent in public AI platforms [3].

Urgent Client Advisories from Major Law Firms

In the wake of this federal court ruling, more than a dozen major US law firms have issued urgent client advisories warning against using AI chatbot conversations for legal matters. "We are telling our clients: You should proceed with caution here," said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim [1]. These warnings emphasize that chatbot conversations could be used against clients by prosecutors in criminal cases or by litigation adversaries in civil cases.

New York-based firm Sher Tremonte has taken the extraordinary step of adding contractual language to client engagement agreements, stating that sharing a lawyer's advice or communications with a chatbot could erase attorney-client privilege protections [1]. Law firms including Orrick, Crowell & Moring, and Fisher Phillips have issued consistent guidance: treat public AI platforms as inherently non-confidential environments and assume anything typed could become evidence or face disclosure.

Contrasting Rulings Create Uncertainty for Self-Represented Litigants

On the same day as Rakoff's New York ruling, U.S. Magistrate Judge Anthony Patti in Michigan reached what initially appears to be a contrasting conclusion. In Warner v. Gilbarco, Inc., Patti held that a pro se plaintiff's ChatGPT conversations about her employment discrimination case were protected under the work-product doctrine [1]. Patti reasoned that AI tools are "tools, not persons," and that work-product waiver requires disclosure to an adversary, not merely to a software platform [4].

A third case, Morgan v. V2X in Colorado in March, reached a similar conclusion for another self-represented litigant. Legal analysts note these cases are factually distinguishable: the plaintiffs in Warner and Morgan were self-represented and governed by civil procedure rules expressly protective of work product, while Heppner was a represented criminal defendant who acted without attorney guidance. The courts themselves acknowledged they were not establishing broad rules for all scenarios, leaving the legal landscape uncertain for those using AI tools for legal advice.

Privacy Risks and the Path Forward

The immediate practical impact centers on how individuals and companies navigate privacy risks when using AI chatbot conversations for legal matters. Los Angeles-based O'Melveny & Myers and other law firms have advised in client communications that "closed" AI systems designed for corporate use could provide stronger protections for legal communications, though even that remains largely untested [4]. The consensus guidance emphasizes using only private AI deployments whose terms of service do not permit training on inputs or disclosure to third parties, and always obtaining explicit attorney direction before using any AI system in connection with legal proceedings.

Source: Reuters

Courts are already grappling with the growing use of artificial intelligence by lawyers and people representing themselves in legal cases, which has led to legal filings containing fabricated cases invented by AI [3]. Rakoff's decision represents an important early test in the AI chatbot era for bedrock legal protections governing attorney-client communications and materials prepared for litigation. As people increasingly turn to artificial intelligence for advice when their freedom or legal liability is on the line, the question of confidentiality and potential waiver of privilege will likely generate additional court battles and regulatory scrutiny in coming months.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited