9 Sources
[1]
OpenAI is storing deleted ChatGPT conversations as part of its NYT lawsuit
OpenAI says it's forced to store deleted ChatGPT conversations "indefinitely" due to a court order issued as part of The New York Times' copyright lawsuit against it. In a post on Thursday, OpenAI chief operating officer Brad Lightcap says the company is appealing the court's decision, which he calls an "overreach" that "abandons long-standing privacy norms and weakens privacy protections." Last month, a court ordered OpenAI to preserve "all output log data that would otherwise be deleted," even if a user requests the deletion of a chat or if privacy laws require OpenAI to delete data. OpenAI's policies state that when a user deletes a chat, it retains it for 30 days before permanently deleting it. The company must now put a pause on this policy until the court says otherwise. OpenAI says the court order will impact free, Pro, Plus, and Team ChatGPT users. It won't affect ChatGPT Enterprise or ChatGPT Edu customers, or businesses that have a zero data retention agreement. OpenAI adds that the data won't be public, and "only a small, audited OpenAI legal and security team" will be able to access the stored information for legal purposes. The Times sued OpenAI and Microsoft for copyright infringement in 2023, accusing the companies of "copying and using millions" of the newspaper's articles to train their AI models. The publication argues that saving user data could help preserve evidence to support its case. "We think this was an inappropriate request that sets a bad precedent," OpenAI CEO Sam Altman said in a post on X. "We will fight any demand that compromises our users' privacy; this is a core principle." The New York Times declined to comment.
[2]
OpenAI appeals court order forcing it to preserve all ChatGPT data
OpenAI has appealed a court ruling from last month that forces it to retain ChatGPT data indefinitely as part of a copyright violation case brought by The New York Times in 2023. CEO Sam Altman said in a tweet on X that the judge's decision "compromises our users' privacy" and "sets a bad precedent." In May, federal judge Ona T. Wang ordered OpenAI to preserve and segregate all ChatGPT output log data that would otherwise be deleted due to a user request. She said that the ruling was justified because the volume of deleted conversations is "significant." The directive notes that the judge asked OpenAI if there was a way to anonymize the data to address users' privacy concerns. The New York Times sought the order so that it can accurately track how often OpenAI violates its IP, including instances when users requested deletion of chats. A federal judge allowed the original case to proceed, agreeing with the NYT's argument that OpenAI and Microsoft's tech had induced users to plagiarize its materials. In a FAQ on its site, OpenAI painted the order as a privacy issue without addressing the millions of alleged copyright violations. "This fundamentally conflicts with the privacy commitments we have made to our users," the company wrote. "It abandons long-standing privacy norms and weakens privacy protections." OpenAI noted that the order "does not impact ChatGPT Enterprise or ChatGPT Edu customers." The NYT and other AI copyright cases are still ongoing, as courts have not yet decided whether OpenAI, Google and other companies infringed copyrights on a massive scale by scraping material from the internet. The tech companies have argued that training is protected by "fair use" copyright law and that the lawsuits threaten the AI industry. Creators of that content, in turn, argue that AI harms their own livelihoods by stealing and reproducing works with little to no compensation.
[3]
OpenAI Appeals 'Sweeping, Unprecedented Order' Requiring It Maintain All ChatGPT Logs
Although OpenAI has continually slammed the New York Times' copyright lawsuit, the case isn't as meritless as the company claims. Last month, a federal judge ordered OpenAI to indefinitely maintain all of ChatGPT's data as part of an ongoing copyright lawsuit. In response, OpenAI has filed an appeal to overturn the decision, stating that the "sweeping, unprecedented order" violates its users' privacy. The New York Times sued both OpenAI and Microsoft in 2023, claiming that the companies violated copyrights by using its articles to train their language models. However, OpenAI said the Times' case is "without merit" and argued that the training falls under "fair use". Previously, OpenAI only kept chat logs for users of ChatGPT Free, Plus, and Pro who didn't opt out. However, in May, the Times and other news organizations claimed that OpenAI was engaging in a "substantial, ongoing" destruction of chat logs that could contain evidence of copyright violations. Judge Ona Wang responded by ordering OpenAI to preserve and segregate all ChatGPT logs that would otherwise be deleted. In a court appeal, OpenAI argued that Wang's order "prevent[s] OpenAI from respecting its users' privacy decisions." According to Ars Technica, the company also claimed that the Times' accusations were "unfounded", writing, "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events. The order appears to have incorrectly assumed the contrary." "The [Times] and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us," COO Brad Lightcap said in a statement. He added that the demand for OpenAI to retain all data "abandons long-standing privacy norms and weakens privacy protections." On X, CEO Sam Altman wrote that the "inappropriate request...sets a bad precedent." He also added that the case highlights the need for "AI privilege" where "talking to an AI should be like talking to a lawyer or a doctor."
The court order triggered an initial wave of panic. Per Ars Technica, OpenAI's court filing cited social media posts from LinkedIn and X where users expressed concerns about their privacy. On LinkedIn, one person warned their clients to be "extra careful" about what information they shared with ChatGPT. In another example, someone tweeted, "Wang apparently thinks the NY Times' boomer copyright concerns trump the privacy of EVERY @OPENAI USER - insane!!!" On one hand, I couldn't imagine a ChatGPT log containing data sensitive enough that I'd care if someone else read it. However, people do use ChatGPT as a therapist, for life advice, and even treat it as a romantic partner. Regardless of whether I'd personally do the same, they deserve the right to keep that content private. At the same time, the Times' case isn't as baseless as OpenAI claims. It is absolutely worth discussing how artificial intelligence is trained. Remember when Clearview AI scraped 30 billion images from Facebook to train its facial recognition? Or reports that the federal government uses images of vulnerable people to test facial recognition software? Yes, those examples exist outside of journalism and copyright law. However, they highlight the need for conversations about whether companies like OpenAI should need explicit consent to utilize content rather than scraping whatever they want from the internet.
[4]
Sam Altman says AI chats should be as private as 'talking to a lawyer or a doctor', but OpenAI could soon be forced to keep your ChatGPT conversations forever
Back in December 2023, the New York Times launched a lawsuit against OpenAI and Microsoft, alleging copyright infringement. The New York Times alleges that OpenAI had trained its ChatGPT model, which also powers Microsoft's Copilot, by "copying and using millions" of its articles without permission. The lawsuit is still ongoing, and as part of it the New York Times (and other plaintiffs involved in the case) have demanded that OpenAI retain consumer ChatGPT and API customer data indefinitely, much to the ire of Sam Altman, CEO of OpenAI, who took to X.com to tweet, "We have been thinking recently about the need for something like 'AI privilege'; this really accelerates the need to have the conversation. IMO talking to an AI should be like talking to a lawyer or a doctor. I hope society will figure this out soon." OpenAI describes the New York Times lawsuit as "baseless", and in a lengthy post on the OpenAI website titled, 'How we're responding to The New York Times' data demands in order to protect user privacy', OpenAI lays out its approach to privacy. Brad Lightcap, OpenAI's COO, says that the demand from the NYT "fundamentally conflicts with the privacy commitments we have made to our users. It abandons long-standing privacy norms and weakens privacy protections." As more and more people share intimate details of their lives with AI chatbots, which are often taking on the role of a therapist, I can appreciate the need to be able to keep AI conversations private. However, I can also see the NYT's point of view: if there is evidence that supports its claims against OpenAI, then it needs access to that data without OpenAI being able to declare it all too private to share. At the moment, a ChatGPT chat is removed from your account immediately when you delete the conversation, and scheduled for permanent deletion from OpenAI systems within 30 days.
The order would mean that even deleted ChatGPT conversations would have to be retained by OpenAI. As a ChatGPT user myself, I've always appreciated the ability to remove conversations entirely. If OpenAI is forced to comply with this request, then it's going to affect pretty much everybody who uses the service on a free, Plus, Pro, or Team plan (but not Enterprise or Edu account holders). The order also does not impact API customers who are using Zero Data Retention endpoints under OpenAI's ZDR amendment. OpenAI has said it has appealed the order to the District Court Judge and will inform us when it knows more.
[5]
Sam Altman calls for 'AI privilege' as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions
Regular ChatGPT users (the author of this article included) may or may not have noticed that the hit chatbot from OpenAI allows users to enter into a "temporary chat" that is designed to wipe all the information exchanged between the user and the underlying AI model as soon as the chat session is closed by the user. OpenAI also allows users to manually delete prior ChatGPT sessions from the sidebar on the web and desktop/mobile apps by left-clicking or control-clicking, or holding down/long pressing on them from the selector. However, this week, OpenAI found itself facing criticism from some of said ChatGPT users after they discovered that the company has not actually been deleting these chat logs as previously indicated. As AI influencer and software engineer Simon Willison wrote on his personal blog: "Paying customers of [OpenAI's] APIs may well make the decision to switch to other providers who can offer retention policies that aren't subverted by this court order!" "You're telling me my deleted chatgpt chats are actually not deleted and is being saved to be investigated by a judge?" posted X user @ns123abc, a comment that drew over a million views. Another user, @kepano, added, "you can 'delete' a ChatGPT chat, however all chats must be retained due to legal obligations ?". Instead, OpenAI confirmed it has been preserving deleted and temporary user chat logs since mid-May 2025 in response to a federal court order, though it did not disclose this to users until yesterday, June 5th. The order, embedded below and issued on May 13, 2025, by U.S. Magistrate Judge Ona T. Wang, requires OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis," including chats deleted by user request or due to privacy obligations.
The court's directive stems from The New York Times (NYT) v. OpenAI and Microsoft, a copyright case filed in December 2023 and still being argued, in which the NYT's lawyers allege that OpenAI's language models regurgitate copyrighted news content verbatim. The plaintiffs argue that logs, including those users may have deleted, could contain infringing outputs relevant to the lawsuit. While OpenAI complied with the order immediately, it did not publicly notify affected users for more than three weeks, until it issued a blog post and an FAQ describing the legal mandate and outlining who is impacted. However, OpenAI is placing the blame squarely on the NYT and the judge's order, saying it believes the preservation demand to be "baseless."

OpenAI clarifies what's going on with the court order to preserve ChatGPT user logs -- including which chats are impacted

In a blog post published yesterday, OpenAI Chief Operating Officer Brad Lightcap defended the company's position and stated that it was advocating for user privacy and security against an over-broad judicial order, writing: "The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users." The post clarified that ChatGPT Free, Plus, Pro, and Team users, along with API customers without a Zero Data Retention (ZDR) agreement, are affected by the preservation order, meaning even if users on these plans delete their chats or use temporary chat mode, their chats will be stored for the foreseeable future. However, ChatGPT Enterprise and Edu users, as well as API clients using ZDR endpoints, are not impacted by the order and their chats will be deleted as directed.
The retained data is held under legal hold, meaning it is stored in a secure, segregated system and only accessible to a small number of legal and security personnel. "This data is not automatically shared with The New York Times or anyone else," Lightcap emphasized in OpenAI's blog post.

Sam Altman floats new concept of 'AI privilege' allowing for confidential conversations between models and users, similar to speaking to a human doctor or lawyer

OpenAI CEO and co-founder Sam Altman also addressed the issue publicly in a post from his account on the social network X last night, writing: "recently the NYT asked a court to force us to not delete any user chats. we think this was an inappropriate request that sets a bad precedent. we are appealing the decision. we will fight any demand that compromises our users' privacy; this is a core principle." He also suggested a broader legal and ethical framework may be needed for AI privacy: "we have been thinking recently about the need for something like 'AI privilege'; this really accelerates the need to have the conversation." "imo talking to an AI should be like talking to a lawyer or a doctor." The notion of AI privilege -- as a potential legal standard -- echoes attorney-client and doctor-patient confidentiality. Whether such a framework would gain traction in courtrooms or policy circles remains to be seen, but Altman's remarks indicate OpenAI may increasingly advocate for such a shift.

What comes next for OpenAI and your temporary/deleted chats?

OpenAI has filed a formal objection to the court's order, requesting that it be vacated. In court filings, the company argues that the demand lacks a factual basis and that preserving billions of additional data points is neither necessary nor proportionate. Judge Wang, in a May 27 hearing, indicated the order is temporary. She instructed the parties to develop a sampling plan to test whether deleted user data materially differs from retained logs.
OpenAI was ordered to submit that proposal by today, June 6, but I have yet to see the filing.

What it means for enterprises and decision-makers in charge of ChatGPT usage in corporate environments

While the order exempts ChatGPT Enterprise and API customers using ZDR endpoints, the broader legal and reputational implications matter deeply for professionals responsible for deploying and scaling AI solutions inside organizations. Those who oversee the full lifecycle of large language models -- from data ingestion to fine-tuning and integration -- will need to reassess assumptions about data governance. If user-facing components of an LLM are subject to legal preservation orders, it raises urgent questions about where data goes after it leaves a secure endpoint, and how to isolate, log, or anonymize high-risk interactions. Any platform touching OpenAI APIs must validate which endpoints (e.g., ZDR vs non-ZDR) are used and ensure data handling policies are reflected in user agreements, audit logs, and internal documentation. Even if ZDR endpoints are used, data lifecycle policies may require review to confirm that downstream systems (e.g., analytics, logging, backup) do not inadvertently retain transient interactions that were presumed short-lived. Security officers responsible for managing risk must now expand threat modeling to include legal discovery as a potential vector. Teams must verify whether OpenAI's backend retention practices align with internal controls and third-party risk assessments, and whether users are relying on features like "temporary chat" that no longer function as expected under legal preservation.

A new flashpoint for user privacy and security

This moment is not just a legal skirmish; it is a flashpoint in the evolving conversation around AI privacy and data rights. By framing the issue as a matter of "AI privilege," OpenAI is effectively proposing a new social contract for how intelligent systems handle confidential inputs.
Whether courts or lawmakers accept that framing remains uncertain. But for now, OpenAI is caught in a balancing act -- between legal compliance, enterprise assurances, and user trust -- and facing louder questions about who controls your data when you talk to a machine.
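For teams that want a concrete starting point for the endpoint-auditing idea described above, here is a minimal, hypothetical sketch in Python. The endpoint paths, the ZDR registry, and the single-regex redaction rule are all illustrative assumptions for this example; which endpoints are actually covered by a Zero Data Retention agreement depends on your specific OpenAI contract, and production redaction requires far more than one pattern.

```python
import re
from dataclasses import dataclass, field

# Hypothetical registry of endpoints presumed covered by a Zero Data
# Retention (ZDR) agreement. These paths are illustrative placeholders,
# not OpenAI's actual API surface.
ZDR_ENDPOINTS = {"/v1/chat/completions-zdr"}

@dataclass
class RetentionAuditor:
    """Client-side audit log: records each outbound request, flags whether
    it targeted an endpoint presumed covered by ZDR, and redacts obvious
    secrets from the payload before it is logged or sent."""
    records: list = field(default_factory=list)

    def log_request(self, endpoint: str, payload: str) -> dict:
        # Toy redaction rule: mask any bare 16-digit number (e.g. a card
        # number) before the payload leaves the internal perimeter.
        redacted = re.sub(r"\b\d{16}\b", "[REDACTED]", payload)
        entry = {
            "endpoint": endpoint,
            "zdr": endpoint in ZDR_ENDPOINTS,  # non-ZDR traffic gets flagged for review
            "payload": redacted,
        }
        self.records.append(entry)
        return entry

auditor = RetentionAuditor()
risky = auditor.log_request("/v1/chat/completions", "card 4111111111111111 ok?")
safe = auditor.log_request("/v1/chat/completions-zdr", "hello")
```

A real deployment would attach something like this as middleware on the API gateway, feed the entries into the organization's audit trail, and alert when sensitive payloads head toward endpoints without a retention exemption.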
[6]
OpenAI Challenges Court Order to Preserve User Data in NYT Lawsuit - Decrypt
The New York Times suit alleges OpenAI illegally used copyrighted content for training OpenAI is contesting a federal court order requiring it to preserve all user data, including deleted chats, as part of a copyright lawsuit brought by The New York Times. "We strongly believe this is an overreach by The New York Times. We're continuing to appeal this order so we can keep putting your trust and privacy first," OpenAI COO Brad Lightcap said in a statement. The decision stems from a May 13 order to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court." The New York Times sued OpenAI and Microsoft in December 2023, alleging that both companies illegally used Times content to train large language models like ChatGPT and Bing Chat. The Times claims this infringes on its copyrights and threatens the business model of original journalism. It said last month that potential evidence of copyright infringement might be deleted as users clear their chat histories. At the heart of the case is whether using copyrighted material to train generative AI models constitutes "fair use." The Times alleges that OpenAI's tools sometimes generate near-verbatim outputs from its articles and can bypass its paywall through AI-generated summaries. Both sides have argued they are taking the moral high ground. The Times has said it is protecting journalism and the ability of the media to do its work and get paid for it. OpenAI CEO Sam Altman has accused the outlet of being "on the wrong side of history", while the company has said The Times cherry-picked the data used in the suit. As the generative AI industry expands, courts are becoming key battlegrounds in the fight over data, privacy, and intellectual property. The lawsuit is one of several high-profile copyright claims brought against OpenAI and other AI firms. 
In April, Ziff Davis, which owns media outlets such as PCMag and Mashable, sued OpenAI over allegations of using its content without consent. This week, Reddit filed a suit against another AI company, Anthropic, alleging it scraped Reddit data without permission. Anthropic is also facing lawsuits from music publishers and authors.
[7]
OpenAI appeals data retention order over NYT copyright lawsuit
News outlets fighting the lawsuit claim that OpenAI is destroying output logs. OpenAI is appealing a court order which forces the AI giant to preserve ChatGPT logs, arguing that the order conflicts with the company's commitment to privacy. In a blog post yesterday (5 June), OpenAI's chief operating officer Brad Lightcap argued that the plaintiffs - The New York Times, The New York Daily News and the Center for Investigative Reporting - made a "sweeping and unnecessary" demand for ChatGPT and API output data. However, the News Plaintiffs - as they are known collectively - claim that OpenAI is destroying output logs. They argue that OpenAI's output log data is necessary to show that the company copies and sometimes misattributes their work, and at times, paraphrases them or uses their direct quotes in its output. OpenAI had a duty to preserve relevant output log data, the plaintiffs say, especially as its own privacy policy mentions that it may retain data for any legal obligations. Last month, a magistrate judge at the US district court in the Southern District of New York held that OpenAI needs to preserve and segregate all output log data that would otherwise be deleted. This includes the free, Plus, Pro and Team versions of ChatGPT, as well as its API. The company is clearly unhappy with the decision, arguing that the requests were "vastly overbroad". It maintained that a "preserve everything" request brings up issues around user preference, as well as privacy laws and regulations in different regions of the world. The New York Times initially filed the lawsuit against AI juggernaut OpenAI and its biggest backer Microsoft back in 2023 over claims that AI chatbots, including ChatGPT, are trained on millions of articles it published. "Defendants seek to free-ride on The Times's massive investment in its journalism by using it to build substitutive products without permission or payment," the New York Times alleged. It later joined forces with other outlets.
Recently, several of OpenAI and Microsoft's motions aiming to dismiss claims from the lawsuit were denied, including their motion to dismiss direct and contributory infringement and claims of trademark dilution. Meanwhile, as the copyright infringement lawsuit moves forward, The New York Times recently signed an agreement to let Amazon use its content for generative AI. Besides news articles, Amazon will be allowed to use the publication's cooking and sports content as well.
[8]
OpenAI to retain deleted ChatGPT conversations following court order - SiliconANGLE
OpenAI to retain deleted ChatGPT conversations following court order OpenAI will retain users' deleted ChatGPT conversations to comply with a recently issued court order. Brad Lightcap, the artificial intelligence developer's Chief Operating Officer, disclosed the move in a late Thursday blog post. When users delete ChatGPT prompts and the chatbot's responses, OpenAI usually retains the data for 30 days before permanently erasing it. Going forward, the AI provider will stop discarding the logs. OpenAI will likewise retain many of the requests sent to its application programming interfaces along with the output generated in response. Not all of the ChatGPT developer's users are affected. OpenAI won't apply the new data retention policy to companies that use its API under a so-called Zero Data Retention agreement. According to Lightcap, the Enterprise and Edu editions of ChatGPT are exempt as well. The court order that prompted the new data retention policy was issued last month in connection with a high-profile copyright lawsuit against OpenAI. In 2023, the New York Times sued the ChatGPT developer for using its content without permission. The paper alleges that OpenAI and Microsoft Corp., one of the AI provider's most important partners, incorporated millions of articles into their model training datasets. The lawsuit further charged that ChatGPT outputted Times content verbatim without attribution. Several other publications filed similar copyright lawsuits a few months later. Those claims were subsequently combined with the Times' lawsuit into a single case. In March, a federal court dismissed parts of the lawsuit but ruled that it can proceed to trial. A few weeks ago, the judge presiding over the case ordered OpenAI to retain users' deleted prompts and prompt responses. The ruling stemmed from concerns that discarding the data might delete evidence relevant for the case. 
According to an OpenAI filing, the court determined that customers who "use ChatGPT for infringing purposes -- e.g., to 'get around the [Times's] pay wall' -- they might be more likely to 'delete all [their] searches' to cover their tracks." OpenAI stated that the data it must retain to comply with the ruling will be "stored separately in a secure system." The ChatGPT developer says that the system's contents can only be accessed to meet legal obligations. "Only a small, audited OpenAI legal and security team would be able to access this data as necessary to comply with our legal obligations," OpenAI stated. "This data is not automatically shared with The New York Times or anyone else. It's locked under a separate legal hold, meaning it's securely stored and can only be accessed under strict legal protocols."
[9]
OpenAI Appeals Court Order Requiring Retention of Consumer Data | PYMNTS.com
The order came in response to a demand from The New York Times and other plaintiffs in a lawsuit they brought against the artificial intelligence (AI) company, because they believe the data might support their case, OpenAI said in a Thursday (June 5) blog post. "This fundamentally conflicts with the privacy commitments we have made to our users," OpenAI Chief Operating Officer Brad Lightcap said in the post. "It abandons long-standing privacy norms and weakens privacy protections." Lightcap added that OpenAI believes the order is "an overreach" by The New York Times and that the company is appealing the order. Reached by PYMNTS, The New York Times declined to comment on the blog post. The order came in The New York Times' copyright case against OpenAI and Microsoft, Reuters reported Thursday. The publisher filed the suit in 2023, alleging that the companies used its articles without permission to train their AI models. The order requires the company to preserve ChatGPT output data indefinitely, according to the report. OpenAI appealed the order Tuesday (June 3), the report said, citing a court filing. According to OpenAI's Thursday blog post, the company's data retention policies vary by the type of account but generally include permanent removal of deleted chats within 30 days, unless legal or security reasons require it to do otherwise. For business customers using Zero Data Retention endpoints, inputs and outputs are not retained. This is not impacted by the court order, the post said. Under the court order, deleted chats that would normally be removed from OpenAI's systems will instead be retained and stored in a secure system, protected under legal hold and accessible only when required to meet legal obligations, per the post.
OpenAI CEO Sam Altman said in a Thursday post on X: "We have been thinking recently about the need for something like 'AI privilege'; this really accelerates the need to have the conversation. [In my opinion] talking to an AI should be like talking to a lawyer or doctor. I hope society will figure this out soon." In another post, Altman said: "(maybe spousal privilege is a better analogy)." When The New York Times filed its lawsuit alleging copyright infringement in December 2023, the newspaper said in an article that the suit accuses OpenAI and Microsoft of "using the Times's content without payment to create products that substitute for The Times and steal audiences away from it."
OpenAI appeals a court order requiring it to indefinitely store deleted ChatGPT conversations as part of The New York Times' copyright lawsuit, citing user privacy concerns in a case that could set a precedent for AI data retention.
OpenAI, the company behind the popular AI chatbot ChatGPT, is appealing a court order that requires it to indefinitely store deleted user conversations. This directive comes as part of an ongoing copyright lawsuit filed by The New York Times against OpenAI and Microsoft in 2023 [1].
In May 2025, U.S. Magistrate Judge Ona T. Wang issued an order compelling OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis" [5]. This includes chats deleted by user request or due to privacy obligations. The order affects users of ChatGPT Free, Plus, Pro, and Team, as well as API customers without a Zero Data Retention (ZDR) agreement [4].
OpenAI has strongly opposed this order, arguing that it conflicts with the company's privacy commitments to its users. Brad Lightcap, OpenAI's COO, stated that the demand "abandons long-standing privacy norms and weakens privacy protections" [2]. The company has filed a formal objection to the court's order, requesting that it be vacated [5].
OpenAI CEO Sam Altman has proposed the concept of "AI privilege" in response to the court order. He argues that conversations with AI should be treated with the same level of confidentiality as those with lawyers or doctors [4]. This suggestion hints at a potential future legal framework for AI privacy protection.
The court order stems from The New York Times' copyright infringement lawsuit against OpenAI and Microsoft. The newspaper alleges that these companies used millions of its articles without permission to train their AI models [3]. The Times argues that retaining user data could preserve evidence to support its case [1].
This case raises important questions about the balance between intellectual property rights and user privacy in the age of AI. It also highlights the need for clearer regulations and standards governing AI data usage and retention [5].
Judge Wang has indicated that the order is temporary and has instructed the parties to develop a sampling plan to test whether deleted user data materially differs from retained logs [5]. As the case progresses, it will likely set important precedents for how AI companies handle user data and respond to legal challenges in the future.
Summarized by
Navi