5 Sources
[1]
AI ruling prompts warnings from US lawyers: Your chats could be used against you
April 15 (Reuters) - As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him. In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic's Claude and OpenAI's ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases. "We are telling our clients: You should proceed with caution here," said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim. People's discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private. In emails to clients and advisories posted on their websites, more than a dozen major U.S. law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court. Similar warnings are also appearing in hiring agreements by some firms with their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer's advice or communications with a chatbot could erase the legal protection known as attorney-client privilege that usually shields communications between lawyers and their clients.
A JUDICIAL RULING
The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent (BENF.O).
Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty. Heppner had used Anthropic's chatbot Claude to prepare reports about his case to share with his attorneys, who later argued that his AI exchanges should be withheld because they contained details from the lawyers related to his defense. Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots. Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications. Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic's chatbot Claude related to the case. No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote. Lawyers for Heppner did not immediately respond to requests for comment. A spokesperson for the U.S. attorney's office in Manhattan declined to comment. Courts already are grappling with the growing use of artificial intelligence by lawyers and people representing themselves in legal cases, which among other things has led to legal filings containing made-up cases invented by AI. Rakoff's decision was an important early test in the AI chatbot era for bedrock legal protections governing attorney-client communications and materials prepared for litigation. On the same day as Rakoff's ruling, U.S. Magistrate Judge Anthony Patti in Michigan said a woman representing herself in a lawsuit she brought against her former company did not have to hand over her chats with OpenAI's ChatGPT about the employment claims made in the case.
Patti treated the woman's AI chats as part of her own personal "work-product" for the case, rather than as conversations with a person who her employer could seek to use for its defense. ChatGPT and other generative AI programs "are tools, not persons," Patti wrote in his order. The privacy and usage terms for both OpenAI and Anthropic state that the companies can share data involving their users with third parties. Both also state that they require users to consult a qualified professional before relying on their chatbots for legal advice. Rakoff at a February hearing in Heppner's case noted that Claude "expressly provided that users have no expectation of privacy in their inputs." Representatives for OpenAI and Anthropic did not immediately respond to requests for comment.
LAWYERS RACE TO SET GUARDRAILS
The advice from lawyers has ranged from telling clients to select their AI platforms carefully to suggesting specific language to use in chatbot prompts. Los Angeles-based O'Melveny & Myers and other firms said in client advisories that "closed" AI systems designed for corporate use could provide stronger protections for legal communications, though they said even that remains largely untested. Some firms said AI legal research is more likely to be protected by attorney-client privilege when it is conducted at the direction of a lawyer. If a lawyer does advise the use of AI, a person should say so in the chatbot prompt, New York-headquartered law firm Debevoise & Plimpton said in a notice on its website. "I am doing this research at the direction of counsel for X litigation," the firm suggested people write. Information about AI use is also becoming common in contracts used by law firms with clients, according to a Reuters review of contracts posted to a U.S. government website.
Sher Tremonte, which often represents white-collar criminal defendants, said in a new contract in March: "Disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege." Justin Ellis of New York-headquartered law firm MoloLamken and other lawyers said they expect that more rulings will eventually clarify when AI chats can be used as evidence. Until then, attorneys are saying that an age-old assumption still applies: Do not talk to anyone except your lawyer about your case - including AI. Reporting by Mike Scarcella; Editing by David Bario, Amy Stevens and Will Dunham
[2]
A US judge ruled that a fraud defendant's AI chats with Claude are not privileged
In a February ruling described as the first of its kind in the US, Judge Jed Rakoff found that Bradley Heppner's conversations with Anthropic's Claude about his legal exposure stripped away both attorney-client privilege and work-product protection, because an AI is not a lawyer and public AI platforms have no confidentiality obligation. More than a dozen major law firms have since issued client advisories. A landmark US federal court ruling has prompted a wave of legal warnings across the country: if you use a publicly available AI chatbot to research or discuss your legal situation, those conversations may be seized, disclosed to opposing counsel, and used as evidence against you. The case that set off the alarm is United States v. Heppner, in which Judge Jed S. Rakoff of the Southern District of New York ruled in February 2026 that a criminal defendant's private conversations with Anthropic's Claude AI were neither protected by attorney-client privilege nor covered by the work-product doctrine. The ruling, delivered orally on 10 February and followed by a written opinion on 17 February, is described by legal observers as the first decision of its kind in the United States on the question of whether AI chatbot conversations carry legal protection. The defendant, Bradley Heppner, was the former chairman of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent. He was charged by federal prosecutors in November 2025 with securities and wire fraud, and pleaded not guilty. After receiving a grand jury subpoena and before engaging defence counsel formally, Heppner used Claude to analyse his legal exposure, outline potential defence strategies, and develop legal arguments, acting on his own initiative rather than under direction from his attorneys. When the FBI searched his home, it seized approximately 31 documents memorialising these AI conversations. 
The government sought their production; Heppner argued they were privileged. Rakoff rejected that argument on three grounds. First, attorney-client privilege protects communications between a client and an attorney. Claude is not an attorney, has no law licence, owes no duty of loyalty, and cannot form a privileged relationship. As Rakoff put it from the bench, Heppner had "disclosed it to a third party, in effect, AI, which had no obligation of confidentiality." Second, there was no reasonable expectation of confidentiality: the judge examined Anthropic's terms of service and privacy policy, which explicitly permit data collection, use of inputs and outputs to train the model, and disclosure to third parties including governmental regulatory authorities. By clicking accept, Heppner had consented to a disclosure framework that is incompatible with privilege. Third, work-product protection did not apply because Heppner was not acting at the direction of his lawyers when he queried Claude, and the documents did not reflect his attorneys' strategy at the time of creation. On the same day as Rakoff's ruling, a federal magistrate judge in Michigan reached what initially appears to be the opposite conclusion. In Warner v. Gilbarco, Inc., Magistrate Judge Anthony Patti held that a pro se plaintiff's ChatGPT conversations about her employment discrimination case were protected as work product, reasoning that AI tools are "tools, not persons" and that work-product waiver requires disclosure to an adversary, not merely to a software platform. A third case, Morgan v. V2X (D. Colo., March 2026), reached a similar conclusion for another pro se litigant. Legal analysts note that these cases are factually distinguishable from Heppner: the plaintiffs in Warner and Morgan were self-represented, governed by a civil procedure rule expressly protective of work product, while Heppner was a represented criminal defendant who acted without attorney guidance. 
The courts themselves acknowledged they were not laying down broad rules for all scenarios. The practical impact has been immediate. Reuters reported that more than a dozen major US law firms have issued client advisories warning against using public AI platforms for anything touching legal matters. New York firm Sher Tremonte has gone further, adding contractual language to client engagement agreements stating that sharing a lawyer's advice or communications with a chatbot could erase attorney-client privilege. The consensus guidance from firms including Orrick, Crowell & Moring, and Fisher Phillips is consistent: treat public AI platforms as an inherently non-confidential environment; assume anything typed could be disclosed. Use only private, closed AI deployments whose terms of service do not permit training on inputs or disclosure to third parties; and always obtain explicit attorney direction before using any AI system in connection with legal matters.
[3]
Things You Told ChatGPT or Claude May Have Already Doomed You in Court
Most tech industry products are easily accessible by the US government. Ring Doorbells have given the Los Angeles Police Department warrantless access to their customers' camera footage. The FBI can extract your iPhone's metadata to peep the content of your Signal messages. Google will happily comply with administrative subpoenas issued by Department of Homeland Security apparatchiks. A new ruling by a New York federal judge now definitively includes AI chatbots in that list. As part of a protracted legal battle involving Brad Heppner, former chair of financial services company GWG Holdings, US District Judge Jed Rakoff ruled that AI chatbots aren't subject to attorney-client privilege. Maybe that sounds like a no-brainer, but evidently some people need the reminder. In preparing background materials for his attorneys, Heppner made the wise decision to enter various reports into Anthropic's flagship chatbot Claude. The AI then spat out the preliminary reports, which his lawyers used to prepare his defense related to charges of securities and wire fraud, as Reuters reported. The problem is that while attorney-client privilege protects most information exchanged between Heppner and his lawyers, it doesn't extend to anything he jammed into Claude. As a result, the embattled financier is being forced to hand over 31 documents generated by Claude to the court. In his opinion, Rakoff wrote that no attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude." The judge went even further, absolving the chatbot from any impropriety, because "Claude disclaims providing legal advice."
"Indeed, when the Government asked Claude whether it could give legal advice, it responded that 'I'm not a lawyer and can't provide formal legal advice or recommendations' and went on to recommend that a user 'should consult with a qualified attorney who can properly assess your specific circumstances,'" Rakoff further explained. Heppner's alleged fraud notwithstanding, the ruling has major implications for AI chatbot users who could now be incriminating themselves, whether they know it or not. The finding is already reverberating around law offices, according to Reuters. The white-collar defense firm Sher Tremonte, for example, updated its contract to reflect that "disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege." Again, tech companies handing personal data to Uncle Sam -- willingly or otherwise -- is nothing new. But when you consider how many people have dumped their entire brains into these chatbots over the past few years, it's clear this ruling represents a new frontier for tech industry compliance with government demands.
[4]
AI ruling prompts warnings from US lawyers: Your chats could be used against you - The Economic Times
Lawyers in the US are cautioning clients about using AI chatbots for legal matters. Recent court rulings indicate that conversations with AI like Claude and ChatGPT may not be protected by attorney-client privilege. This means sensitive case details shared with AI could be revealed to prosecutors or opposing parties in legal disputes. As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him. In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic's Claude and OpenAI's ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases. "We are telling our clients: You should proceed with caution here," said Alexandria Gutierrez Swette, a lawyer at New York-based law firm Kobre & Kim. People's discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private. In emails to clients and advisories posted on their websites, more than a dozen major U.S. law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court. Similar warnings are also appearing in hiring agreements by some firms with their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer's advice or communications with a chatbot could erase the legal protection known as attorney-client privilege that usually shields communications between lawyers and their clients.
A judicial ruling
The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent. Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty. Heppner had used Anthropic's chatbot Claude to prepare reports about his case to share with his attorneys, who later argued that his AI exchanges should be withheld because they contained details from the lawyers related to his defense. Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots. Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications. Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic's chatbot Claude related to the case. No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote. Lawyers for Heppner did not immediately respond to requests for comment. A spokesperson for the U.S. attorney's office in Manhattan declined to comment. Courts already are grappling with the growing use of artificial intelligence by lawyers and people representing themselves in legal cases, which among other things has led to legal filings containing made-up cases invented by AI. Rakoff's decision was an important early test in the AI chatbot era for bedrock legal protections governing attorney-client communications and materials prepared for litigation. On the same day as Rakoff's ruling, U.S.
Magistrate Judge Anthony Patti in Michigan said a woman representing herself in a lawsuit she brought against her former company did not have to hand over her chats with OpenAI's ChatGPT about the employment claims made in the case. Patti treated the woman's AI chats as part of her own personal "work-product" for the case, rather than as conversations with a person who her employer could seek to use for its defense. ChatGPT and other generative AI programs "are tools, not persons," Patti wrote in his order. The privacy and usage terms for both OpenAI and Anthropic state that the companies can share data involving their users with third parties. Both also state that they require users to consult a qualified professional before relying on their chatbots for legal advice. Rakoff at a February hearing in Heppner's case noted that Claude "expressly provided that users have no expectation of privacy in their inputs." Representatives for OpenAI and Anthropic did not immediately respond to requests for comment.
Lawyers race to set guardrails
The advice from lawyers has ranged from telling clients to select their AI platforms carefully to suggesting specific language to use in chatbot prompts. Los Angeles-based O'Melveny & Myers and other firms said in client advisories that "closed" AI systems designed for corporate use could provide stronger protections for legal communications, though they said even that remains largely untested. Some firms said AI legal research is more likely to be protected by attorney-client privilege when it is conducted at the direction of a lawyer. If a lawyer does advise the use of AI, a person should say so in the chatbot prompt, New York-headquartered law firm Debevoise & Plimpton said in a notice on its website. "I am doing this research at the direction of counsel for [X] litigation," the firm suggested people write.
Information about AI use is also becoming common in contracts used by law firms with clients, according to a Reuters review of contracts posted to a U.S. government website. Sher Tremonte, which often represents white-collar criminal defendants, said in a new contract in March: "Disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege." Justin Ellis of New York-headquartered law firm MoloLamken and other lawyers said they expect that more rulings will eventually clarify when AI chats can be used as evidence. Until then, attorneys are saying that an age-old assumption still applies: Do not talk to anyone except your lawyer about your case - including AI.
[5]
AI chatbot conversations can be used against people in court, lawyers warn after federal ruling
April 15 - As people increasingly turn to artificial intelligence for advice, some US lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him. In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic's Claude and OpenAI's ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases. "We are telling our clients: You should proceed with caution here," said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim. People's discussions with their lawyers are almost always deemed confidential under US law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private. In emails to clients and advisories posted on their websites, more than a dozen major US law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court. Similar warnings are also appearing in hiring agreements by some firms with their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer's advice or communications with a chatbot could erase the legal protection known as attorney-client privilege that usually shields communications between lawyers and their clients. The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent. Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty.
Heppner had used Anthropic's chatbot Claude to prepare reports about his case to share with his attorneys, who later argued that his AI exchanges should be withheld because they contained details from the lawyers related to his defense. Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots. Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications. Manhattan-based US District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic's chatbot Claude related to the case. No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote. Lawyers for Heppner did not immediately respond to requests for comment. A spokesperson for the US attorney's office in Manhattan declined to comment. Courts already are grappling with the growing use of artificial intelligence by lawyers and people representing themselves in legal cases, which, among other things, has led to legal filings containing made-up cases invented by AI. Rakoff's decision was an important early test in the AI chatbot era for bedrock legal protections governing attorney-client communications and materials prepared for litigation. On the same day as Rakoff's ruling, US Magistrate Judge Anthony Patti in Michigan said a woman representing herself in a lawsuit she brought against her former company did not have to hand over her chats with OpenAI's ChatGPT about the employment claims made in the case. Patti treated the woman's AI chats as part of her own personal "work-product" for the case, rather than as conversations with a person who her employer could seek to use for its defense.
ChatGPT and other generative AI programs "are tools, not persons," Patti wrote in his order. The privacy and usage terms for both OpenAI and Anthropic state that the companies can share data involving their users with third parties. Both also state that they require users to consult a qualified professional before relying on their chatbots for legal advice. Rakoff at a February hearing in Heppner's case noted that Claude "expressly provided that users have no expectation of privacy in their inputs." Representatives for OpenAI and Anthropic did not immediately respond to requests for comment. The advice from lawyers has ranged from telling clients to select their AI platforms carefully to suggesting specific language to use in chatbot prompts. Los Angeles-based O'Melveny & Myers and other firms said in client advisories that "closed" AI systems designed for corporate use could provide stronger protections for legal communications, though they said even that remains largely untested. Some firms said AI legal research is more likely to be protected by attorney-client privilege when it is conducted at the direction of a lawyer. If a lawyer does advise the use of AI, a person should say so in the chatbot prompt, New York-headquartered law firm Debevoise & Plimpton said in a notice on its website. "I am doing this research at the direction of counsel for X litigation," the firm suggested people write. Information about AI use is also becoming common in contracts used by law firms with clients, according to a Reuters review of contracts posted to a US government website. Sher Tremonte, which often represents white-collar criminal defendants, said in a new contract in March: "Disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege."
Justin Ellis of New York-headquartered law firm MoloLamken and other lawyers said they expect that more rulings will eventually clarify when AI chats can be used as evidence. Until then, attorneys are saying that an age-old assumption still applies: Do not talk to anyone except your lawyer about your case -- including AI.
A federal judge ruled that conversations with AI chatbots like Claude and ChatGPT are not protected by attorney-client privilege, meaning sensitive legal discussions could be seized and used as evidence. The February 2026 decision has prompted more than a dozen major US law firms to issue client advisories warning against treating AI tools as confidential legal advisors.
A February 2026 court ruling has sent shockwaves through the legal community, forcing lawyers across the United States to reconsider how their clients interact with AI chatbots. US District Judge Jed Rakoff in New York delivered what legal observers describe as the first decision of its kind, ruling that conversations with AI chatbots are not protected by attorney-client privilege and can be seized as evidence in criminal and civil cases[1]. The decision centered on Bradley Heppner, former chair of bankrupt financial services company GWG Holdings, who faces securities fraud charges and had used Anthropic's Claude to prepare reports about his case.
Source: New York Post
Judge Rakoff ordered Heppner to hand over 31 documents generated through his interactions with Claude, stating unequivocally that no attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude"[1]. The ruling hinged on three critical points: AI chatbots are not licensed attorneys, they owe no duty of confidentiality, and their terms of service explicitly permit data sharing with third parties including government authorities. The immediate aftermath of the court ruling has been a cascade of warnings from US lawyers. More than a dozen major law firms have issued client advisories cautioning against using AI tools for legal advice or sharing sensitive case information with platforms like ChatGPT and Claude[1]. "We are telling our clients: You should proceed with caution here," said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim[4].
Source: The Next Web
Some firms have gone beyond advisories and embedded protective language directly into their client contracts. New York-based firm Sher Tremonte now includes explicit warnings in engagement agreements stating that sharing a lawyer's advice or communications with a chatbot could constitute a waiver of attorney-client privilege[5]. This contractual approach reflects the legal profession's recognition that conversations with AI chatbots carry a significant risk of disclosure that could undermine client defenses. A closer examination of AI platform terms reveals why these warnings from US lawyers have become so urgent. Both OpenAI and Anthropic state in their privacy and usage terms that they can share user data with third parties, and both explicitly disclaim providing legal advice[4]. During a February hearing, Judge Rakoff noted that Claude "expressly provided that users have no expectation of privacy in their inputs"[1]. When the government asked Claude whether it could provide legal advice, the chatbot responded: "I'm not a lawyer and can't provide formal legal advice or recommendations," and recommended consulting "a qualified attorney who can properly assess your specific circumstances"[3]. This self-disclaimer became part of the judicial reasoning that the denial of privileged status to AI chats should apply universally to public AI platforms.
On the same day Rakoff issued his ruling, US Magistrate Judge Anthony Patti in Michigan reached what appeared to be a contradictory conclusion in Warner v. Gilbarco, Inc. Patti ruled that a woman representing herself in an employment discrimination lawsuit did not have to hand over her ChatGPT conversations, treating them as protected work-product rather than conversations with a third party[5]. Patti wrote that ChatGPT and other generative AI programs "are tools, not persons." Legal analysts note these cases are factually distinguishable: the work-product protections applied in Michigan involved self-represented plaintiffs in civil cases, while Heppner was a represented criminal defendant who used AI without attorney guidance. A third case, Morgan v. V2X in Colorado in March 2026, reached a similar protective conclusion for another pro se litigant. However, legal experts caution that these rulings don't establish broad protections and emphasize that anyone with legal representation should avoid using public AI platforms for case-related matters.
The broader implications of this court ruling reach far beyond individual legal cases. As people increasingly rely on AI chatbots for various forms of advice, the Heppner decision establishes a precedent that could expose users to unexpected legal jeopardy[3]. The ruling represents what some observers call "a new frontier for tech industry compliance with government demands," particularly given how many people have shared sensitive personal information with these platforms over recent years[3].
Source: Futurism
Law firms are now advising clients to use only "closed" AI systems designed for corporate use, which may provide stronger confidentiality protections, though even these remain largely untested in court[5]. The guidance ranges from selecting AI platforms carefully to suggesting specific language for chatbot prompts that might preserve some level of protection. Legal professionals expect courts will continue grappling with how artificial intelligence intersects with bedrock legal protections governing attorney-client communications and materials prepared for litigation. Users should therefore watch for evolving standards and additional case law that could either strengthen or further erode privacy expectations when using AI tools for legal advice.