10 Sources
[1]
OpenAI is storing deleted ChatGPT conversations as part of its NYT lawsuit
OpenAI says it's forced to store deleted ChatGPT conversations "indefinitely" due to a court order issued as part of The New York Times' copyright lawsuit against it. In a post on Thursday, OpenAI chief operating officer Brad Lightcap says the company is appealing the court's decision, which he calls an "overreach" that "abandons long-standing privacy norms and weakens privacy protections." Last month, a court ordered OpenAI to preserve "all output log data that would otherwise be deleted," even if a user requests the deletion of a chat or if privacy laws require OpenAI to delete data. OpenAI's policies state that when a user deletes a chat, it retains it for 30 days before permanently deleting it. The company must now put a pause on this policy until the court says otherwise. OpenAI says the court order will impact free, Pro, Plus, and Team ChatGPT users. It won't affect ChatGPT Enterprise or ChatGPT Edu customers, or businesses that have a zero data retention agreement. OpenAI adds that the data won't be public, and "only a small, audited OpenAI legal and security team" will be able to access the stored information for legal purposes. The Times sued OpenAI and Microsoft for copyright infringement in 2023, accusing the companies of "copying and using millions" of the newspaper's articles to train their AI models. The publication argues that saving user data could help preserve evidence to support its case. "We think this was an inappropriate request that sets a bad precedent," OpenAI CEO Sam Altman said in a post on X. "We will fight any demand that compromises our users' privacy; this is a core principle." The New York Times declined to comment.
[2]
OpenAI appeals court order forcing it to preserve all ChatGPT data
OpenAI has appealed a court ruling from last month that forces it to retain ChatGPT data indefinitely as part of a copyright violation case brought by The New York Times in 2023. CEO Sam Altman said in a tweet on X that the judge's decision "compromises our users' privacy" and "sets a bad precedent." In May, federal judge Ona T. Wang ordered OpenAI to preserve and segregate all ChatGPT output log data that would otherwise be deleted due to a user request. She said that the ruling was justified because the volume of deleted conversations is "significant." The directive notes that the judge asked OpenAI if there was a way to anonymize the data to address users' privacy concerns. The New York Times sought the order so that it can accurately track how often OpenAI violates its IP, including instances when users requested deletion of chats. A federal judge allowed the original case to proceed, agreeing with the NYT's argument that OpenAI and Microsoft's tech had induced users to plagiarize its materials. In a FAQ on its site, OpenAI painted the order as a privacy issue without addressing the millions of alleged copyright violations. "This fundamentally conflicts with the privacy commitments we have made to our users," the company wrote. "It abandons long-standing privacy norms and weakens privacy protections." OpenAI noted that the order "does not impact ChatGPT Enterprise or ChatGPT Edu customers." The NYT and other AI copyright cases are still ongoing, as courts have not yet decided whether OpenAI, Google and other companies infringed copyrights on a massive scale by scraping material from the internet. The tech companies have argued that training is protected by "fair use" copyright law and that the lawsuits threaten the AI industry. Creators of that content, in turn, argue that AI harms their own livelihoods by stealing and reproducing works with little to no compensation.
[3]
OpenAI Appeals 'Sweeping, Unprecedented Order' Requiring It Maintain All ChatGPT Logs
Although OpenAI has continually slammed the New York Times' copyright lawsuit, the case isn't as meritless as the company claims. Last month, a federal judge ordered OpenAI to indefinitely maintain all of ChatGPT's data as part of an ongoing copyright lawsuit. In response, OpenAI has filed an appeal to overturn the decision, stating that the "sweeping, unprecedented order" violates its users' privacy. The New York Times sued both OpenAI and Microsoft in 2023, claiming that the companies violated copyrights by using its articles to train their language models. However, OpenAI said the Times' case is "without merit" and argued that the training falls under "fair use". Previously, OpenAI only kept chat logs for users of ChatGPT Free, Plus, and Pro who didn't opt out. However, in May, the Times and other news organizations claimed that OpenAI was engaging in a "substantial, ongoing" destruction of chat logs that could contain evidence of copyright violations. Judge Ona Wang responded by ordering OpenAI to maintain and segregate all ChatGPT logs that would otherwise be deleted. In a court appeal, OpenAI argued that Wang's order "prevent[s] OpenAI from respecting its users' privacy decisions." According to Ars Technica, the company also claimed that the Times' accusations were "unfounded", writing, "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events. The order appears to have incorrectly assumed the contrary." "The [Times] and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us," COO Brad Lightcap said in a statement. He added that the demand for OpenAI to retain all data "abandons long-standing privacy norms and weakens privacy protections." On X, CEO Sam Altman wrote that the "inappropriate request...sets a bad precedent." He also added that the case highlights the need for "AI privilege", where "talking to an AI should be like talking to a lawyer or a doctor."
The court order triggered an initial wave of panic. Per Ars Technica, OpenAI's court filing cited social media posts from LinkedIn and X where users expressed concerns about their privacy. On LinkedIn, one person warned their clients to be "extra careful" about what information they shared with ChatGPT. In another example, someone tweeted, "Wang apparently thinks the NY Times' boomer copyright concerns trump the privacy of EVERY @OPENAI USER - insane!!!" On one hand, I can't imagine a ChatGPT log of mine containing data sensitive enough that I'd care if someone else read it. However, people do use ChatGPT as a therapist, for life advice, and even treat it as a romantic partner. Regardless of whether I'd personally do the same, they deserve the right to keep that content private. At the same time, the Times' case isn't as baseless as OpenAI claims. It is absolutely worth discussing how artificial intelligence is trained. Remember when Clearview AI scraped 30 billion images from Facebook to train its facial recognition system? Or reports that the federal government uses images of vulnerable people to test facial recognition software? Yes, those examples exist outside of journalism and copyright law. However, they highlight the need for conversations about whether companies like OpenAI should need explicit consent to utilize content rather than scraping whatever they want from the internet.
[4]
Sam Altman says AI chats should be as private as 'talking to a lawyer or a doctor', but OpenAI could soon be forced to keep your ChatGPT conversations forever
Back in December 2023, the New York Times launched a lawsuit against OpenAI and Microsoft, alleging copyright infringement. The New York Times alleges that OpenAI had trained its ChatGPT model, which also powers Microsoft's Copilot, by "copying and using millions" of its articles without permission. The lawsuit is still ongoing, and as part of it the New York Times (and other plaintiffs involved in the case) have demanded that OpenAI be made to retain consumer ChatGPT and API customer data indefinitely, much to the ire of Sam Altman, CEO of OpenAI, who took to X.com to post, "We have been thinking recently about the need for something like 'AI privilege'; this really accelerates the need to have the conversation. IMO talking to an AI should be like talking to a lawyer or a doctor. I hope society will figure this out soon." OpenAI describes the New York Times lawsuit as "baseless", and in a lengthy post on the OpenAI website titled 'How we're responding to The New York Times' data demands in order to protect user privacy', the company lays out its approach to privacy. Brad Lightcap, OpenAI's COO, says that the demand from the NYT "fundamentally conflicts with the privacy commitments we have made to our users. It abandons long-standing privacy norms and weakens privacy protections." As more and more people share intimate details of their lives with AI chatbots, which are often taking on the role of a therapist, I can appreciate the need to keep AI conversations private. However, I can also see the NYT's point of view: if there is evidence that supports its claims against OpenAI, it needs access to that data without OpenAI being able to declare it all too private to share. At the moment, a ChatGPT chat is removed from your account immediately when you delete the conversation, and scheduled for permanent deletion from OpenAI systems within 30 days.
The order would mean that even deleted ChatGPT conversations would have to be retained by OpenAI. As a ChatGPT user myself, I've always appreciated the ability to remove conversations entirely. If OpenAI is forced to comply with this request, it's going to affect pretty much everybody who uses the service on a Free, Plus, Pro, or Team plan (but not Enterprise or Edu account holders). The order also does not impact API customers who are using Zero Data Retention endpoints under OpenAI's ZDR amendment. OpenAI says it has appealed the order to the District Court Judge and will inform us when it knows more.
[5]
Sam Altman calls for 'AI privilege' as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions
Regular ChatGPT users (this article's author among them) may or may not have noticed that the hit chatbot from OpenAI allows users to enter into a "temporary chat" that is designed to wipe all the information exchanged between the user and the underlying AI model as soon as the chat session is closed by the user. In addition, OpenAI also allows users to manually delete prior ChatGPT sessions from the sidebar on the web and desktop/mobile apps by left-clicking or control-clicking, or holding down/long pressing on them from the selector. However, this week, OpenAI found itself facing criticism from some of said ChatGPT users after they discovered that the company has not actually been deleting these chat logs as previously indicated. As AI influencer and software engineer Simon Willison wrote on his personal blog: "Paying customers of [OpenAI's] APIs may well make the decision to switch to other providers who can offer retention policies that aren't subverted by this court order!" "You're telling me my deleted chatgpt chats are actually not deleted and is being saved to be investigated by a judge?" posted X user @ns123abc, a comment that drew over a million views. Another user, @kepano, added, "you can 'delete' a ChatGPT chat, however all chats must be retained due to legal obligations ?". OpenAI confirmed it has been preserving deleted and temporary user chat logs since mid-May 2025 in response to a federal court order, though it did not disclose this to users until yesterday, June 5th. The order, issued on May 13, 2025, by U.S. Magistrate Judge Ona T. Wang, requires OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis," including chats deleted by user request or due to privacy obligations.
The court's directive stems from The New York Times (NYT) v. OpenAI and Microsoft, a copyright case filed in December 2023 and still being argued, in which the NYT's lawyers allege that OpenAI's language models regurgitate copyrighted news content verbatim. The plaintiffs argue that logs, including those users may have deleted, could contain infringing outputs relevant to the lawsuit. While OpenAI complied with the order immediately, it did not publicly notify affected users for more than three weeks, until it issued a blog post and an FAQ describing the legal mandate and outlining who is impacted. However, OpenAI is placing the blame squarely on the NYT and the judge's order, saying it believes the preservation demand to be "baseless."

OpenAI clarifies what's going on with the court order to preserve ChatGPT user logs -- including which chats are impacted

In a blog post published yesterday, OpenAI Chief Operating Officer Brad Lightcap defended the company's position and stated that it was advocating for user privacy and security against an over-broad judicial order, writing: "The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users." The post clarified that ChatGPT Free, Plus, Pro, and Team users, along with API customers without a Zero Data Retention (ZDR) agreement, are affected by the preservation order, meaning even if users on these plans delete their chats or use temporary chat mode, their chats will be stored for the foreseeable future. However, ChatGPT Enterprise and Edu subscribers, as well as API clients using ZDR endpoints, are not impacted by the order, and their chats will be deleted as directed.
The retained data is held under legal hold, meaning it is stored in a secure, segregated system and only accessible to a small number of legal and security personnel. "This data is not automatically shared with The New York Times or anyone else," Lightcap emphasized in OpenAI's blog post.

Sam Altman floats new concept of 'AI privilege' allowing for confidential conversations between models and users, similar to speaking to a human doctor or lawyer

OpenAI CEO and co-founder Sam Altman also addressed the issue publicly in a post from his account on the social network X last night, writing: "recently the NYT asked a court to force us to not delete any user chats. we think this was an inappropriate request that sets a bad precedent. we are appealing the decision. we will fight any demand that compromises our users' privacy; this is a core principle." He also suggested a broader legal and ethical framework may be needed for AI privacy: "we have been thinking recently about the need for something like 'AI privilege'; this really accelerates the need to have the conversation." "imo talking to an AI should be like talking to a lawyer or a doctor." The notion of AI privilege -- as a potential legal standard -- echoes attorney-client and doctor-patient confidentiality. Whether such a framework would gain traction in courtrooms or policy circles remains to be seen, but Altman's remarks indicate OpenAI may increasingly advocate for such a shift.

What comes next for OpenAI and your temporary/deleted chats?

OpenAI has filed a formal objection to the court's order, requesting that it be vacated. In court filings, the company argues that the demand lacks a factual basis and that preserving billions of additional data points is neither necessary nor proportionate. Judge Wang, in a May 27 hearing, indicated the order is temporary. She instructed the parties to develop a sampling plan to test whether deleted user data materially differs from retained logs.
OpenAI was ordered to submit that proposal by today, June 6, but I have yet to see the filing.

What it means for enterprises and decision-makers in charge of ChatGPT usage in corporate environments

While the order exempts ChatGPT Enterprise and API customers using ZDR endpoints, the broader legal and reputational implications matter deeply for professionals responsible for deploying and scaling AI solutions inside organizations. Those who oversee the full lifecycle of large language models -- from data ingestion to fine-tuning and integration -- will need to reassess assumptions about data governance. If user-facing components of an LLM are subject to legal preservation orders, it raises urgent questions about where data goes after it leaves a secure endpoint, and how to isolate, log, or anonymize high-risk interactions. Any platform touching OpenAI APIs must validate which endpoints (e.g., ZDR vs non-ZDR) are used and ensure data handling policies are reflected in user agreements, audit logs, and internal documentation. Even if ZDR endpoints are used, data lifecycle policies may require review to confirm that downstream systems (e.g., analytics, logging, backup) do not inadvertently retain transient interactions that were presumed short-lived. Security officers responsible for managing risk must now expand threat modeling to include legal discovery as a potential vector. Teams must verify whether OpenAI's backend retention practices align with internal controls and third-party risk assessments, and whether users are relying on features like "temporary chat" that no longer function as expected under legal preservation.

A new flashpoint for user privacy and security

This moment is not just a legal skirmish; it is a flashpoint in the evolving conversation around AI privacy and data rights. By framing the issue as a matter of "AI privilege," OpenAI is effectively proposing a new social contract for how intelligent systems handle confidential inputs.
Whether courts or lawmakers accept that framing remains uncertain. But for now, OpenAI is caught in a balancing act -- between legal compliance, enterprise assurances, and user trust -- and facing louder questions about who controls your data when you talk to a machine.
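The endpoint-validation step described above can be sketched as a simple audit pass over an organization's AI integrations. This is a minimal illustration under stated assumptions, not OpenAI's API: the `Integration` fields, the "zdr"/"standard" labels, and the `audit_retention_exposure` helper are hypothetical names introduced for this example.

```python
from dataclasses import dataclass

@dataclass
class Integration:
    """One internal system that sends data to an LLM API (hypothetical model)."""
    name: str
    endpoint_type: str        # "zdr" or "standard" -- illustrative labels only
    downstream_logging: bool  # do analytics/backup systems keep transcripts?

def audit_retention_exposure(integrations):
    """Flag integrations whose chat data may persist despite deletion assumptions.

    Non-ZDR endpoints fall under the preservation order; ZDR endpoints are
    exempt, but downstream logging can still retain transcripts that users
    presumed were short-lived.
    """
    findings = []
    for it in integrations:
        if it.endpoint_type != "zdr":
            findings.append((it.name, "non-ZDR endpoint: subject to preservation order"))
        elif it.downstream_logging:
            findings.append((it.name, "ZDR endpoint, but downstream systems retain transcripts"))
    return findings

if __name__ == "__main__":
    inventory = [
        Integration("support-bot", "standard", False),
        Integration("analytics-sync", "zdr", True),
        Integration("batch-eval", "zdr", False),
    ]
    for name, issue in audit_retention_exposure(inventory):
        print(f"{name}: {issue}")
```

The point of the sketch is the shape of the review, not the specific fields: an inventory of integrations, a rule per retention risk, and a findings list that can feed audit logs or third-party risk assessments.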
[6]
OpenAI Challenges Court Order to Preserve User Data in NYT Lawsuit - Decrypt
The New York Times suit alleges OpenAI illegally used copyrighted content for training OpenAI is contesting a federal court order requiring it to preserve all user data, including deleted chats, as part of a copyright lawsuit brought by The New York Times. "We strongly believe this is an overreach by The New York Times. We're continuing to appeal this order so we can keep putting your trust and privacy first," OpenAI COO Brad Lightcap said in a statement. The decision stems from a May 13 order to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court." The New York Times sued OpenAI and Microsoft in December 2023, alleging that both companies illegally used Times content to train large language models like ChatGPT and Bing Chat. The Times claims this infringes on its copyrights and threatens the business model of original journalism. It said last month that potential evidence of copyright infringement might be deleted as users clear their chat histories. At the heart of the case is whether using copyrighted material to train generative AI models constitutes "fair use." The Times alleges that OpenAI's tools sometimes generate near-verbatim outputs from its articles and can bypass its paywall through AI-generated summaries. Both sides have argued they are taking the moral high ground. The Times has said it is protecting journalism and the ability of the media to do its work and get paid for it. OpenAI CEO Sam Altman has accused the outlet of being "on the wrong side of history", while the company has said The Times cherry-picked the data used in the suit. As the generative AI industry expands, courts are becoming key battlegrounds in the fight over data, privacy, and intellectual property. The lawsuit is one of several high-profile copyright claims brought against OpenAI and other AI firms. 
In April, Ziff Davis, which owns media outlets such as PCMag and Mashable, sued OpenAI over allegations of using its content without consent. This week, Reddit filed a suit against another AI company, Anthropic, alleging it scraped Reddit data without permission. Anthropic is also facing lawsuits from music publishers and authors.
[7]
OpenAI appeals data retention order over NYT copyright lawsuit
The news outlets behind the lawsuit claim that OpenAI is destroying output logs. OpenAI is appealing a court order which forces the AI giant to preserve ChatGPT logs, arguing that the order conflicts with the company's commitment to privacy. In a blog post yesterday (5 June), OpenAI's chief operating officer Brad Lightcap wrote that the plaintiffs - The New York Times, The New York Daily News and the Center for Investigative Reporting - had made a "sweeping and unnecessary" demand for ChatGPT and API output data. However, the News Plaintiffs - as they are known collectively - claim that OpenAI is destroying output logs. They argue that OpenAI's output log data is necessary to show that the company copies and sometimes misattributes their work, and at times paraphrases them or uses their direct quotes in its output. OpenAI had a duty to preserve relevant output log data, the plaintiffs say, especially as its own privacy policy mentions that it may retain data for any legal obligations. Last month, a magistrate judge at the US district court in the Southern District of New York held that OpenAI needs to preserve and segregate all output log data that would otherwise be deleted. This includes the free, Plus, Pro and Team versions of ChatGPT, as well as its API. The company is clearly unhappy with the decision, arguing that the requests were "vastly overbroad". It maintained that a "preserve everything" request brings up issues around user preference, as well as privacy laws and regulations in different regions of the world. The New York Times initially filed the lawsuit against AI juggernaut OpenAI and its biggest backer Microsoft back in 2023 over claims that AI chatbots, including ChatGPT, are trained on millions of articles it published. "Defendants seek to free-ride on The Times's massive investment in its journalism by using it to build substitutive products without permission or payment," the New York Times alleged. It later joined forces with other outlets.
Recently, several of OpenAI and Microsoft's motions aiming to dismiss claims from the lawsuit were denied, including their motion to dismiss direct and contributory infringement and claims of trademark dilution. Meanwhile, as the copyright infringement lawsuit moves forward, The New York Times recently signed an agreement to let Amazon use its content for generative AI. Besides news articles, Amazon will be allowed to use the publication's cooking and sports content as well.
[8]
OpenAI to retain deleted ChatGPT conversations following court order - SiliconANGLE
OpenAI will retain users' deleted ChatGPT conversations to comply with a recently issued court order. Brad Lightcap, the artificial intelligence developer's chief operating officer, disclosed the move in a late Thursday blog post. When users delete ChatGPT prompts and the chatbot's responses, OpenAI usually retains the data for 30 days before permanently erasing it. Going forward, the AI provider will stop discarding the logs. OpenAI will likewise retain many of the requests sent to its application programming interfaces along with the output generated in response. Not all of the ChatGPT developer's users are affected. OpenAI won't apply the new data retention policy to companies that use its API under a so-called Zero Data Retention agreement. According to Lightcap, the Enterprise and Edu editions of ChatGPT are exempt as well. The court order that prompted the new data retention policy was issued last month in connection with a high-profile copyright lawsuit against OpenAI. In 2023, the New York Times sued the ChatGPT developer for using its content without permission. The paper alleges that OpenAI and Microsoft Corp., one of the AI provider's most important partners, incorporated millions of articles into their model training datasets. The lawsuit further charged that ChatGPT outputted Times content verbatim without attribution. Several other publications filed similar copyright lawsuits a few months later. Those claims were subsequently combined with the Times' lawsuit into a single case. In March, a federal court dismissed parts of the lawsuit but ruled that it can proceed to trial. A few weeks ago, the judge presiding over the case ordered OpenAI to retain users' deleted prompts and prompt responses. The ruling stemmed from concerns that discarding the data might delete evidence relevant for the case.
According to an OpenAI filing, the court determined that when customers "use ChatGPT for infringing purposes -- e.g., to 'get around the [Times's] pay wall' -- they might be more likely to 'delete all [their] searches' to cover their tracks." OpenAI stated that the data it must retain to comply with the ruling will be "stored separately in a secure system." The ChatGPT developer says that the system's contents can only be accessed to meet legal obligations. "Only a small, audited OpenAI legal and security team would be able to access this data as necessary to comply with our legal obligations," OpenAI stated. "This data is not automatically shared with The New York Times or anyone else. It's locked under a separate legal hold, meaning it's securely stored and can only be accessed under strict legal protocols."
[9]
OpenAI Appeals Court Order Requiring Retention of Consumer Data | PYMNTS.com
The order came in response to a demand from The New York Times and other plaintiffs in a lawsuit they brought against the artificial intelligence (AI) company, because they believe the data might support their case, OpenAI said in a Thursday (June 5) blog post. "This fundamentally conflicts with the privacy commitments we have made to our users," OpenAI Chief Operating Officer Brad Lightcap said in the post. "It abandons long-standing privacy norms and weakens privacy protections." Lightcap added that OpenAI believes the order is "an overreach" by The New York Times and that the company is appealing the order. Reached by PYMNTS, The New York Times declined to comment on the blog post. The order came in The New York Times' copyright case against OpenAI and Microsoft, Reuters reported Thursday. The publisher filed the suit in 2023, alleging that the companies used its articles without permission to train their AI models. The order requires the company to preserve ChatGPT output data indefinitely, according to the report. OpenAI appealed the order Tuesday (June 3), the report said, citing a court filing. According to OpenAI's Thursday blog post, the company's data retention policies vary by the type of account but generally include permanent removal of deleted chats within 30 days, unless legal or security reasons require it to do otherwise. For business customers using Zero Data Retention endpoints, inputs and outputs are not retained. This is not impacted by the court order, the post said. Under the court order, deleted chats that would normally be removed from OpenAI's systems will instead be retained and stored in a secure system, protected under legal hold and accessible only when required to meet legal obligations, per the post.
OpenAI CEO Sam Altman said in a Thursday post on X: "We have been thinking recently about the need for something like 'AI privilege'; this really accelerates the need to have the conversation. [In my opinion] talking to an AI should be like talking to a lawyer or doctor. I hope society will figure this out soon." In another post, Altman said: "(maybe spousal privilege is a better analogy)." When The New York Times filed its lawsuit alleging copyright infringement in December 2023, the newspaper said in an article that the suit accuses OpenAI and Microsoft of "using the Times's content without payment to create products that substitute for The Times and steal audiences away from it."
[10]
OpenAI Pushes Back Against Court's Data Retention Order
OpenAI has pushed back against the court order to retain output logs. This comes as part of an ongoing lawsuit between the AI company and The New York Times. Filed in 2023, this lawsuit claims that OpenAI used the New York Times' copyrighted works to train its models, and that the models are now competing with the publication. According to an order issued in this case in May this year, the New York Times had urged that OpenAI preserve output logs on a "wholesale basis." In response, the company had brought up user data deletion requests and the fact that numerous privacy laws in the US contemplate data deletion rights for users. The court, however, ordered OpenAI to preserve and segregate all output log data that would otherwise be deleted going forward until a further court order, even if a user asked the company to delete it or if privacy laws required the company to do so. "We strongly believe this [demand for data retention] is an overreach by the New York Times. We're continuing to appeal this order," OpenAI says. In a post discussing the development, OpenAI says that the order will affect users with a ChatGPT Free, Plus, Pro, or Team subscription and users who access the OpenAI API (without a Zero Data Retention agreement). Previously, OpenAI would delete a chat from ChatGPT Free, Plus, Pro, and Team within 30 days of a user's request. A similar policy is also in place for OpenAI API users, except for Zero Data Retention APIs, where the company never logs inputs and outputs. Users with ChatGPT Enterprise and ChatGPT Edu subscriptions will remain unaffected. The company explained that it will not automatically share the data retained as a result of the order with the New York Times or anyone else. "It's locked under a separate legal hold, meaning it's securely stored and can only be accessed under strict legal protocols," OpenAI explained. It says that if the New York Times demands this data, the company will "protect user privacy at every step."
The demand for preserving user logs first came before the court in January 2025 as part of another case in which OpenAI is facing the Authors Guild. In that case, the guild had demanded that OpenAI carry out wholesale preservation of user logs (end-user prompts and outputs). At the time, the court denied the demand, but later, in May, a group of news organisations (including The New York Times, the Center for Investigative Reporting, and the Chicago Tribune Company) asked the court to compel OpenAI to identify the output logs it has destroyed since The Times filed its Original Complaint on December 27, 2023. They mentioned that the volume of user conversations that OpenAI has destroyed since the original complaint was filed is substantial. OpenAI mentions that the reason the news organisations wanted this data retained was to find something that may support their case. It argues that the court's order for data retention suggests that OpenAI carried out targeted deletions in response to litigation events, which the company claims is "unequivocally false." In a letter OpenAI sent to the court asking it to reconsider its stance on data retention, the company points out that this requirement adversely affects its users. "Millions of individuals, businesses, and other organisations use OpenAI's services in a way that implicates uniquely private information -- including sensitive personal information, proprietary business data, and internal government documents," the company explains. It gives the example of data points like conversations that a user may have had with OpenAI's AI models about a family member's health condition or immigration status. As such, from the company's perspective, the data retention demand can undermine user trust. From the publishers' perspective, given the millions of interactions users probably have with ChatGPT in a day, the data could potentially include examples of alleged copyright violations, adding more strength to their case.
However, as OpenAI notes in its letter, it currently retains tens of billions of user interactions, far more than any expert could review, which could make it difficult for the publishers to find supporting evidence.
OpenAI is appealing a court order that requires it to indefinitely store deleted ChatGPT conversations as part of The New York Times' copyright lawsuit. The company argues this violates user privacy, while CEO Sam Altman calls for 'AI privilege' to protect user-AI interactions.
In a significant development for AI privacy and copyright law, OpenAI has been ordered by a federal court to indefinitely store all ChatGPT conversations, including those deleted by users, as part of an ongoing copyright lawsuit filed by The New York Times (NYT) 1. This order, issued in May 2025, has sparked a heated debate about user privacy and the legal implications of AI-generated content.
OpenAI has strongly opposed this decision, arguing that it violates user privacy and sets a dangerous precedent. The company's Chief Operating Officer, Brad Lightcap, stated that the order "fundamentally conflicts with the privacy commitments we have made to our users" and "abandons long-standing privacy norms" 2. OpenAI has filed an appeal to overturn the decision, describing it as a "sweeping, unprecedented order" that compromises user privacy 3.
The court order affects users of ChatGPT Free, Plus, Pro, and Team versions, as well as API customers without a Zero Data Retention (ZDR) agreement 4. Previously, OpenAI's policy was to permanently delete user-deleted chats after 30 days. Now, even temporary chats and those manually deleted by users must be retained. However, ChatGPT Enterprise and Edu customers, as well as businesses with ZDR agreements, are not affected by this order 1.
The NYT sued OpenAI and Microsoft in December 2023, alleging that the companies infringed on copyrights by using millions of the newspaper's articles to train their AI models without permission 5. The Times argues that preserving user data could help maintain evidence to support its case, particularly instances where users may have requested the deletion of potentially infringing content 2.
OpenAI CEO Sam Altman has expressed strong concerns about the privacy implications of this order. In a post on X (formerly Twitter), Altman suggested the need for "AI privilege," arguing that "talking to an AI should be like talking to a lawyer or a doctor" 4. This concept proposes that interactions between users and AI should be protected by a level of confidentiality similar to attorney-client or doctor-patient privilege 5.
This case highlights the ongoing tension between AI development, copyright protection, and user privacy. While OpenAI and other AI companies argue that their use of online content for training falls under "fair use" copyright law, content creators contend that AI reproduction of their work threatens their livelihoods 2. The outcome of this case and similar lawsuits could have far-reaching implications for the AI industry and how it interacts with copyright law.
OpenAI has clarified that the retained data is held under legal hold in a secure, segregated system, accessible only to a small team of legal and security personnel for legal purposes 5. The company emphasizes that this data is not automatically shared with the NYT or any other parties 5.
As this legal battle continues, it raises important questions about the balance between technological innovation, intellectual property rights, and individual privacy in the rapidly evolving field of artificial intelligence. The resolution of this case could set significant precedents for how AI companies handle user data and navigate copyright issues in the future.
Summarized by Navi