3 Sources
[1]
NYT to start searching deleted ChatGPT logs after beating OpenAI in court
Last week, OpenAI raised objections in court, hoping to overturn a court order requiring the AI company to retain all ChatGPT logs "indefinitely," including deleted and temporary chats. But Sidney Stein, the US district judge reviewing OpenAI's request, immediately denied OpenAI's objections. He was seemingly unmoved by the company's claims that the order forced OpenAI to abandon "long-standing privacy norms" and weaken privacy protections that users expect based on ChatGPT's terms of service. Rather, Stein suggested that OpenAI's user agreement specified that their data could be retained as part of a legal process, which Stein said is exactly what is happening now.

The order was issued by magistrate judge Ona Wang just days after news organizations, led by The New York Times, requested it. The news plaintiffs claimed the order was urgently needed to preserve potential evidence in their copyright case, alleging that ChatGPT users are likely to delete chats where they attempted to use the chatbot to skirt paywalls to access news content.

A spokesperson told Ars that OpenAI plans to "keep fighting" the order, but the ChatGPT maker seems to have few options left. It could petition the Second Circuit Court of Appeals for a rarely granted emergency order blocking Wang's order, but the appeals court would have to consider Wang's order an extraordinary abuse of discretion for OpenAI to win that fight. OpenAI's spokesperson declined to confirm whether the company plans to pursue this extreme remedy.

In the meantime, OpenAI is negotiating a process that will allow news plaintiffs to search through the retained data. Perhaps the sooner that process begins, the sooner the data will be deleted.
And that possibility puts OpenAI in the difficult position of having to choose between caving to some data collection so it can stop retaining data as soon as possible, or prolonging the fight over the order and potentially putting more users' private conversations at risk of exposure through litigation or, worse, a data breach.

News orgs will soon start searching ChatGPT logs

The clock is ticking, and so far, OpenAI has not provided any official updates since a June 5 blog post detailing which ChatGPT users will be affected.

While it's clear that OpenAI has been and will continue to retain mounds of data, it would be impossible for The New York Times or any news plaintiff to search through all of it. Instead, only a small sample of the data will likely be accessed, based on keywords that OpenAI and news plaintiffs agree on. That data will remain on OpenAI's servers, where it will be anonymized, and it will likely never be directly produced to plaintiffs.

Both sides are negotiating the exact process for searching through the chat logs, with each seemingly hoping to minimize the amount of time the logs will be preserved. For OpenAI, sharing the logs risks revealing instances of infringing outputs that could further spike damages in the case. The logs could also expose how often outputs attribute misinformation to news plaintiffs.

For news plaintiffs, accessing the logs is not considered key to their case -- perhaps providing additional examples of copying -- but could help news organizations argue that ChatGPT dilutes the market for their content. That could weigh against the fair use argument, as a judge opined in a recent ruling that evidence of market dilution could tip an AI copyright case in favor of plaintiffs.
Jay Edelson, a leading consumer privacy lawyer, told Ars that he's concerned that judges don't seem to be considering that any evidence in the ChatGPT logs wouldn't "advance" news plaintiffs' case "at all," while really changing "a product that people are using on a daily basis."

Edelson acknowledged that OpenAI itself probably has better security than most firms to protect against a potential data breach that could expose these private chat logs. But "lawyers have notoriously been pretty bad about securing data," Edelson suggested, so "the idea that you've got a bunch of lawyers who are going to be doing whatever they are" with "some of the most sensitive data on the planet" and "they're the ones protecting it against hackers should make everyone uneasy."

So even though odds are pretty good that the majority of users' chats won't end up in the sample, Edelson said the mere threat of being included might push some users to rethink how they use AI. He further warned that ChatGPT users turning to OpenAI rivals like Anthropic's Claude or Google's Gemini could suggest that Wang's order is improperly influencing market forces, which also seems "crazy."

To Edelson, the most "cynical" take is that news plaintiffs may be hoping the order will threaten OpenAI's business to the point where the AI company agrees to a settlement. Regardless of the news plaintiffs' motives, the order sets an alarming precedent, Edelson said. He joined critics suggesting that more AI data may be frozen in the future, potentially affecting even more users now that the sweeping order has survived scrutiny in this case. Imagine if litigation one day targets Google's AI search summaries, Edelson suggested.

Lawyer slams judges for giving ChatGPT users no voice

Edelson told Ars that the order is so potentially threatening to OpenAI's business that the company may not have a choice but to explore every path available to continue fighting it.
"They will absolutely do something to try to stop this," Edelson predicted, calling the order "bonkers" for overlooking millions of users' privacy concerns while "strangely" excluding enterprise customers. From court filings, it seems possible that enterprise users were excluded to protect OpenAI's competitiveness, but Edelson suggested there's "no logic" to their exclusion "at all."

By excluding these ChatGPT users, the judge's order may have removed the users best resourced to fight it, Edelson suggested. "What that means is the big businesses, the ones who have the power, all of their stuff remains private, and no one can touch that," Edelson said. Instead, the order is "only going to intrude on the privacy of the common people out there," which Edelson said "is really offensive," given that Wang denied two ChatGPT users' panicked request to intervene.

"We are talking about billions of chats that are now going to be preserved when they weren't going to be preserved before," Edelson said, noting that he's input information about his personal medical history into ChatGPT. "People ask for advice about their marriages, express concerns about losing jobs. They say really personal things. And one of the bargains in dealing with OpenAI is that you're allowed to delete your chats and you're allowed to [use] temporary chats."

The greatest risk to users would be a data breach, Edelson said, but that's not the only potential privacy concern. Corynne McSherry, legal director for the digital rights group the Electronic Frontier Foundation, previously told Ars that as long as users' data is retained, it could also be exposed through future law enforcement and private litigation requests.

Edelson pointed out that most privacy attorneys don't consider OpenAI CEO Sam Altman to be a "privacy guy," despite Altman recently slamming the NYT, alleging it sued OpenAI because it doesn't "like user privacy."
"He's trying to protect OpenAI, and he does not give a hoot about the privacy rights of consumers," Edelson said, echoing one ChatGPT user's dismissed concern that OpenAI may deprioritize users' privacy if it's financially motivated to resolve the case. "The idea that he and his lawyers are really going to be the safeguards here isn't very compelling," Edelson said. He criticized the judges for dismissing users' concerns and rejecting OpenAI's request that users get a chance to testify.

"What's really most appalling to me is the people who are being affected have had no voice in it," Edelson said.
[2]
Judge Rules That Newspaper Is Allowed to Search Through Users' ChatGPT Logs
Amid its long-running copyright infringement lawsuit against OpenAI, the paper of record will soon have access to all of OpenAI's user archives -- including the stuff that was deleted.

As Ars Technica reports, the federal judge presiding over the lawsuit by the New York Times against OpenAI has granted the newspaper and its co-plaintiffs, the New York Daily News and the Center for Investigative Reporting, access to the AI company's logs to see exactly how much copyright was infringed.

In its previous reporting on the order, which came down last month and was affirmed this week over OpenAI's attempts to appeal, Ars noted that the NYT justifies this broad sweep by suggesting that people who use ChatGPT to bypass its paywalls may delete their history after doing so. The newspaper also claims that searching through those logs may prove the crux of the whole suit: that OpenAI's large language models (LLMs) are not only trained on its copyrighted material but are also able to plagiarize that material.

OpenAI, as you might imagine, is none too pleased. Last month, the company claimed that the order will force it to bypass its "long-standing privacy norms," and after the latest ruling, an OpenAI spokesperson told Ars that OpenAI intends to "keep fighting" against it.

Notably, all this is occurring as the NYT et al and OpenAI negotiate how to handle the data trove search. As OpenAI noted in a statement last month, the order covers everything from free ChatGPT logs to more sensitive information from folks who use its API. (The order does specifically note that logs from ChatGPT Enterprise and ChatGPT Edu, its custom model for colleges and universities, will be exempt.)
Along with its search for evidence of copyright infringement, this OpenAI log gambit may also help prove that ChatGPT dilutes the news market by summarizing articles within the chatbot, which ultimately leads to lost ad revenue for media outlets because their links are entirely bypassed by would-be readers. Earlier this year, the content licensing platform TollBit found, per Forbes, that chatbots from OpenAI, Google, and others sent 96 percent less traffic to publishers than traditional search engines would -- a trend that has already started to hurt the news industry. In the existential fight between word purveyors and AI, proof of market dilution could, as a judge told a publisher suing Anthropic last month, tip the scales in favor of copyright holders -- a momentous implication for anyone trying to bypass annoying paywalls.
[3]
NYT forces OpenAI to retain chat data in court
A federal court has delivered a significant blow to OpenAI, upholding a sweeping order that compels the AI giant to indefinitely retain all ChatGPT user logs, including those previously deleted, as part of its ongoing copyright infringement lawsuit with The New York Times and other news organizations. The decision has ignited a fierce debate over user privacy, setting a potentially alarming precedent for the rapidly evolving field of artificial intelligence.

Last week, OpenAI's legal team contested the order in court, arguing that it fundamentally undermines the company's "long-standing privacy norms" and violates the privacy protections promised to users in its terms of service. However, US District Judge Sidney Stein was unconvinced, promptly denying the company's objections. Judge Stein pointed to OpenAI's own user agreement, suggesting it already specifies that user data could be retained for legal processes, which he affirmed is precisely the current situation.

The controversial order was initially issued by magistrate judge Ona Wang at the urgent request of news plaintiffs, led by The New York Times. They argued that the preservation of chat logs was critical to securing potential evidence. The news organizations allege that ChatGPT users may have utilized the chatbot to bypass their paywalls to access copyrighted news content, and then deleted the incriminating chats.

An OpenAI spokesperson confirmed to Ars Technica that the company intends to "keep fighting" the ruling, though its legal avenues appear to be narrowing. The firm could petition the Second Circuit Court of Appeals for an emergency stay, but such interventions are rare and would require demonstrating an extraordinary abuse of discretion by the lower court. OpenAI has not confirmed if it will pursue this high-stakes legal maneuver.

In the interim, OpenAI finds itself in a precarious position, forced to negotiate with the news plaintiffs on a process for searching the vast trove of retained data.
The company is caught between a rock and a hard place: it can cooperate to expedite the search process and hope for a quicker deletion of the sensitive data, or it can prolong the legal battle over the order, which risks exposing even more user conversations to potential scrutiny or a data breach.

While the prospect of The New York Times combing through every user's chat history is unlikely, the agreed-upon process will involve searching a sample of the data based on specific keywords. This search will reportedly occur on OpenAI's servers with anonymized data, which is not expected to be handed over directly to the plaintiffs.

For the news organizations, access to these logs is not necessarily a linchpin for their core copyright case but could provide crucial evidence of market dilution. They aim to show that ChatGPT's ability to generate content similar to their own articles harms their business, a factor that could weigh heavily against OpenAI's "fair use" defense.

The ruling has drawn sharp criticism from privacy advocates. Jay Edelson, a prominent consumer privacy lawyer, expressed deep concern, telling Ars Technica that the potential evidence within the logs may not significantly advance the plaintiffs' case while drastically altering a product used daily by millions. Edelson warned of the security risks, noting that while OpenAI's security may be robust, "lawyers have notoriously been pretty bad about securing data." The idea of law firms handling "some of the most sensitive data on the planet," he argued, "should make everyone uneasy."

Edelson suggested the order could have a chilling effect, pushing users to rival AI services and improperly influencing market dynamics. He posited a "cynical" view that the news plaintiffs might be leveraging the privacy concerns to pressure OpenAI into a settlement.
Critics also highlight the "bonkers" nature of the order, which notably excludes enterprise customers, a move Edelson believes has "no logic." He argued this exemption protects powerful businesses while leaving "the common people" and their personal data exposed. This, he said, "is really offensive," particularly as a request by two individual ChatGPT users to intervene in the case was denied.

"We are talking about billions of chats that are now going to be preserved," Edelson stated, emphasizing the deeply personal nature of user interactions with ChatGPT, which can range from medical queries to marital advice. The primary risk is a data breach, but the prolonged retention also exposes user data to future legal requests from law enforcement or other private litigants.

Even OpenAI CEO Sam Altman's recent public statements championing user privacy have been met with skepticism. Edelson characterized Altman as trying to "protect OpenAI" rather than genuinely caring for consumer privacy rights, suggesting the company's financial motivations to resolve the case might not align with protecting its users.

"What's really most appalling to me is the people who are being affected have had no voice in it," Edelson concluded, criticizing the judges for dismissing user concerns and setting a precedent that could see more AI data frozen in future litigation.
A federal court has upheld an order requiring OpenAI to indefinitely retain all ChatGPT logs, including deleted chats, as part of a copyright infringement lawsuit by The New York Times and other news organizations. This decision raises significant privacy concerns and sets a precedent in AI-related litigation.
In a significant legal development, a federal court has upheld an order requiring OpenAI to indefinitely retain all ChatGPT user logs, including deleted chats, as part of an ongoing copyright infringement lawsuit filed by The New York Times and other news organizations [1]. US District Judge Sidney Stein denied OpenAI's objections to the order, dismissing the company's claims that it would force them to abandon "long-standing privacy norms" [1].
Source: Futurism
The order, initially issued by magistrate judge Ona Wang, came in response to an urgent request from news plaintiffs led by The New York Times. They argued that preserving chat logs was crucial to secure potential evidence, alleging that ChatGPT users may have used the chatbot to bypass paywalls and access copyrighted news content [1][3].
OpenAI, caught in a difficult position, is now negotiating with news plaintiffs on a process for searching the retained data. The company faces a dilemma: cooperate to expedite the search process and hope for quicker data deletion, or prolong the legal battle, risking further exposure of user conversations [3].
While it's unlikely that The New York Times will comb through every user's chat history, the agreed-upon process will involve searching a sample of the data based on specific keywords. This search will reportedly occur on OpenAI's servers with anonymized data [1][3].
Jay Edelson, a prominent consumer privacy lawyer, expressed deep concern about the ruling. He warned that while OpenAI's security may be robust, "lawyers have notoriously been pretty bad about securing data," raising the risk of potential data breaches [1][3].
Source: Ars Technica
The ruling has broader implications for the AI industry and user behavior. Critics suggest it could have a chilling effect, pushing users to rival AI services and improperly influencing market dynamics [1]. The order also sets an alarming precedent, potentially leading to more AI data being frozen in future litigation [1].
For the news organizations, access to these logs is not necessarily crucial for their core copyright case but could provide evidence of market dilution. They aim to demonstrate that ChatGPT's ability to generate content similar to their own articles harms their business, a factor that could weigh heavily against OpenAI's "fair use" defense [2][3].
Earlier this year, the content licensing platform TollBit found that chatbots from OpenAI, Google, and others sent 96 percent less traffic to publishers than traditional search engines, a trend already impacting the news industry [2].
Source: Dataconomy
An OpenAI spokesperson confirmed that the company intends to "keep fighting" the ruling, though its legal options appear limited [1][3]. The company could petition the Second Circuit Court of Appeals for an emergency stay, but such interventions are rare and would require demonstrating an extraordinary abuse of discretion by the lower court [1].
As the legal battle unfolds, the AI industry and privacy advocates watch closely, recognizing the potential far-reaching consequences of this case for user privacy and the future of AI regulation.