9 Sources
[1]
Anthropic cuts off OpenAI's access to its Claude models | TechCrunch
Anthropic has revoked OpenAI's access to its Claude family of AI models, according to a report in Wired. Sources told Wired that OpenAI was connecting Claude to internal tools that allowed the company to compare Claude's performance to its own models in categories like coding, writing, and safety. TechCrunch has reached out to Anthropic and OpenAI for comment. In a statement to Wired, an Anthropic spokesperson said, "OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," which is apparently "a direct violation of our terms of service." (Anthropic's commercial terms forbid companies from using Claude to build competing services.) Meanwhile, an OpenAI spokesperson said, "While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them." Anthropic executives had already shown resistance to providing access to competitors, with Chief Science Officer Jared Kaplan previously justifying the company's decision to cut off Windsurf (a rumored OpenAI acquisition target, subsequently acquired by Cognition) by saying, "I think it would be odd for us to be selling Claude to OpenAI."
[2]
Anthropic Revokes OpenAI's Access to Claude
All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Anthropic revoked OpenAI's API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access was cut off due to violating the terms of service. "Claude Code has become the go-to choice for coders everywhere and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty said in a statement to WIRED. "Unfortunately, this is a direct violation of our terms of service." According to Anthropic's commercial terms of service, customers are barred from using the service to "build a competing product or service, including to train competing AI models" or "reverse engineer or duplicate" the services. This change in OpenAI's access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding. OpenAI was plugging Claude into its own internal tools using special developer access (APIs), instead of using the regular chat interface, according to sources. This allowed the company to run tests to evaluate Claude's capabilities in things like coding and creative writing against its own AI models, and check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models' behavior under similar conditions and make adjustments as needed. "It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," OpenAI's chief communications officer Hannah Wong said in a statement to WIRED. 
Nulty says that Anthropic will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry." The company did not respond to WIRED's request for clarification on if and how OpenAI's current Claude API restriction would impact this work. Top tech companies yanking API access from competitors has been a tactic in the tech industry for years. Facebook did the same to Twitter-owned Vine (which led to allegations of anticompetitive behavior) and last month Salesforce restricted competitors from accessing certain data through the Slack API. This isn't even a first for Anthropic. Last month, the company restricted the AI coding startup Windsurf's direct access to its models after it was rumored OpenAI was set to acquire it. (That deal fell through). Anthropic's chief science officer Jared Kaplan spoke to TechCrunch at the time about revoking Windsurf's access to Claude, saying "I think it would be odd for us to be selling Claude to OpenAI."
[3]
Anthropic says OpenAI engineers using Claude Code ahead of GPT-5 launch
Anthropic says it has revoked OpenAI's access to the Claude API after OpenAI's engineers were found using Claude's coding tools. Claude Code is widely regarded as the strongest tool in the AI coding space, also known as "vibe coding." With Claude, you can create web apps from scratch, and it's also fairly efficient with infrastructure-related work. It isn't just vibe coders who don't know how to code who use Claude; professional engineers do too. In fact, Claude Code is also used in Claude's own development at Anthropic. Anthropic offers API access to Claude, which means OpenAI could also access it. But on Friday, Anthropic confirmed that it revoked OpenAI's access to Claude Code after the company was found using its coding tools ahead of the GPT-5 launch. "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty noted in a statement to Wired. It's unclear exactly how OpenAI engineers were using Claude Code, but it may have been used to improve GPT-5's coding capabilities. GPT-5 is expected to launch next week with "auto" and reasoning modes.
[4]
Anthropic pulls OpenAI's access to Claude -- here's why
Anthropic, the company behind Claude AI, recently made a bold decision. The company has revoked OpenAI's API access to its models, accusing the company of violating its terms of service. According to Wired, which broke the news, Anthropic spokesperson Christopher Nulty said: "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5. Unfortunately, this is a direct violation of our terms of service." Anthropic's commercial terms of service state that customers can't use the service to "build a competing product or service, including to train competing AI models," or to "reverse engineer or duplicate" the services. This could be a big blow for OpenAI as it prepares to launch GPT-5 -- the latest version of the company's technology. While it isn't clear what OpenAI was accessing Claude for, Anthropic has quickly become known for its coding ability. However, according to Wired's sources, OpenAI was plugging Claude directly into its own internal tools instead of using the regular chat interface. This would have allowed the company to run tests measuring Claude's capabilities against its own model. In theory, this would help OpenAI assess its own model's behaviour and safeguards under similar conditions, giving it a competitive advantage in testing. "It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," OpenAI's chief communications officer, Hannah Wong, said in a statement to WIRED. In response, Nulty stated that Anthropic will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry." Reports now suggest that GPT-5 could be here any day.
Researchers are already in the early stages of testing the technology, and OpenAI has been hinting at a launch in the next week. With this in mind, OpenAI has likely already done all of its early comparisons against other tools. So while this is a small blow to OpenAI, it is unlikely to change GPT-5 at all. AI companies seem to be getting scrappier in recent months. They are poaching talent from one another and, in cases like this, getting more protective of their technology. As two of the main AI companies around right now, Anthropic and OpenAI are likely to keep clashing well into the future.
[5]
Anthropic reportedly cut OpenAI access to Claude
It seems OpenAI has been caught with its hand in the proverbial cookie jar. Anthropic has reportedly cut off OpenAI's access to Anthropic's APIs over what Anthropic is calling a terms of service breach. As reported by Wired, multiple sources claim that OpenAI has been cut off from Anthropic's APIs. Allegedly, OpenAI was using Anthropic's Claude Code to assist in creating and testing OpenAI's upcoming GPT-5, which is due to release in August. According to these sources, OpenAI was plugging Claude into its own internal tools instead of using the chat interface. From there, it used the API to run tests pitting GPT-5 against Claude in areas like coding and creative writing to compare performance. OpenAI allegedly also tested safety prompts related to things like CSAM, self-harm, and defamation. This would give OpenAI data that it could then use to fine-tune GPT-5 to make it more competitive against Claude. Unfortunately for OpenAI, this violates Anthropic's commercial terms of service, which ban companies from using Anthropic's tools to build competing AI products. "Customer may not and must not attempt to access the Services to build a competing product or service, including to train competing AI models or resell the Services except as expressly approved by Anthropic," the terms read. OpenAI responded by saying that what it was doing is industry standard, as all the major AI companies test their models against competing models. The company went on to say that it respected Anthropic's decision but expressed disappointment at having its API access shut off, especially considering that Anthropic's access to OpenAI's API remains open. A spokesperson told Wired that OpenAI would retain access for "benchmarking and safety evaluations." It's not the first time this year that Anthropic has cut off API access. In June, the company cut off Windsurf's direct API access after rumors that it was being sold to OpenAI.
That deal ultimately fell through, but Anthropic's cofounder, Jared Kaplan, told TechCrunch at the time that "it would be odd for us to be selling Claude to OpenAI." Anthropic has also tweaked its rate limits for Claude, which will take effect in late August, with one of the reasons being that a small number of users are violating the company's policy by sharing and reselling accounts. Disclosure: Ziff Davis, Mashable's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[6]
Anthropic bans OpenAI from using Claude for training
Anthropic has terminated OpenAI's access to its Claude family of AI models, citing violations of its terms of service. OpenAI was reportedly integrating Claude with internal tools to evaluate its performance against OpenAI's own models across categories such as coding, writing, and safety. An Anthropic spokesperson informed TechCrunch that "OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," which constitutes "a direct violation of our terms of service." Anthropic's commercial terms specifically prohibit companies from utilizing Claude to develop competing services. Despite the termination of general access, Anthropic stated it would maintain OpenAI's access for "the purposes of benchmarking and safety evaluations." OpenAI responded to the decision, with a spokesperson characterizing its usage as "industry standard." The spokesperson added, "While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them." This action aligns with Anthropic's established stance on competitor access; Chief Science Officer Jared Kaplan previously justified cutting off Windsurf, stating, "I think it would be odd for us to be selling Claude to OpenAI." Windsurf was later acquired by Cognition and had been rumored as an OpenAI acquisition target.
[7]
Anthropic Revokes OpenAI's Access To Its Family Of Claude LLMs Because The Latter Was Using Its Rival's Tools To Gauge The Performance Of Its Own Products, Giving The ChatGPT Creator Somewhat Of An Edge
OpenAI was provided access to Anthropic's AI models through its API, which might not seem out of the ordinary because the arrangement can be treated much like publications allowing ChatGPT to access exclusive content, but the artificial intelligence startup reportedly violated Anthropic's terms of service by benchmarking its competitor's Claude large language models to gauge the performance of its own LLM. As expected, Anthropic cut off access, as the data could be used to develop a more capable service, giving OpenAI a massive advantage in this race. As for the terms of using Anthropic's Claude, Wired reports that the commercial conditions of using the program forbid competitors from using it to enhance products or train rival models. In short, OpenAI was leveraging Claude to develop a competing AI model, which eventually meant that the company's access would be revoked. As for the tools that were used, the report states that OpenAI's technical team was taking advantage of Claude Code, Claude's coding toolkit, via developer APIs to support development of GPT-5. However, Anthropic did not cut off access completely, as it states that OpenAI will still be able to perform "benchmarking and safety evaluations." An OpenAI spokesperson has said that while the firm respects Anthropic's decision to cut off its API access, it is "disappointing" because the company's own API remains available to its competitor. As reported by TechCrunch, Anthropic executives are less than thrilled about sharing technology with OpenAI. The company behind ChatGPT had previously been rumored to be acquiring Windsurf, a coding startup, prompting Anthropic to sever the firm's access (that deal ultimately fell through, and Windsurf was acquired by Cognition). The move echoes Facebook cutting off Vine's API access and Salesforce limiting competitors' access to Slack data. Had Anthropic continued to allow OpenAI access to its tools, the latter would eventually have been able to develop LLMs that provide better coding and writing services, and that is just the tip of the iceberg.
[8]
Anthropic Yanks OpenAI's Access to Claude Model | PYMNTS.com
"Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty said, per the report. "Unfortunately, this is a direct violation of our terms of service." The terms of service prevent customers from using Anthropic to build a competing product or service, "including to train competing AI models" or "reverse engineer or duplicate" the services, the report said. OpenAI was plugging Claude into its own internal tools via APIs, rather than using the regular chat interface, according to the report. This let the company test Claude's capabilities in coding and creative writing against its own AI models, as well as determine how Claude responded to safety-related prompts involving categories like self-harm and defamation. "It's industry standard to evaluate other AI systems to benchmark progress and improve safety," OpenAI Chief Communications Officer Hannah Wong said in a statement, per the report. "While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them." Meanwhile, there's a debate in the AI sector about whether advancements in large language models are slowing, centered around AI scaling laws. Popularized by OpenAI, the idea behind AI scaling laws says larger models trained on more compute will produce better performance. "Over the past few years, AI labs have hit on what feels like a winning strategy: scaling more parameters, more data, more compute," said Garry Tan, president of startup accelerator Y Combinator. "Keep scaling your models, and they keep improving." However, there are indications that early leaps in performance are slowing. The two chief fuels for scaling -- data and computing -- are becoming more costly and rarer, said Adnan Masood, UST's chief architect of AI and machine learning. 
"These trends strongly indicate a plateau in the current trajectory of large language models," he said.
[9]
Claude Cutoff: Anthropic Revokes OpenAI's API Access Ahead of GPT-5 Launch
The sudden access cut highlights the tension in the increasingly competitive AI market and raises questions about openness and collaboration among the industry's dominant players. The cut-off also came at a time when Sam Altman's company had been preparing to launch its next model, GPT-5. In a statement, Anthropic spokesperson Christopher Nulty said, "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5." He concluded, "Unfortunately, this is a direct violation of our terms of service." The spokesperson noted plainly that customers are not allowed to use the service to build a rival product or to train competing LLMs. Per the reports, OpenAI was accessing Claude through its APIs instead of the regular chat interface, giving it the opportunity to evaluate Claude against its own GPT models in areas like coding and creative writing. Notably, OpenAI pushed back on the allegations, saying that what it did is necessary to improve AI safety and does not go against industry standards. The spokesperson opined, "It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them."
Anthropic has cut off OpenAI's API access to its Claude AI models, citing a violation of terms of service. The move comes as OpenAI reportedly used Claude's coding tools in preparation for the launch of GPT-5, sparking debates about industry practices and competition in the AI sector.
Anthropic, the company behind the Claude AI models, has revoked OpenAI's API access to its services, citing a violation of terms of service. The decision was reportedly made on Tuesday, with OpenAI being informed that its access was cut off due to the violation [2]. This move has sent ripples through the AI industry, highlighting the intensifying competition and the importance of proprietary AI technologies.
According to Anthropic spokesperson Christopher Nulty, OpenAI's technical staff were using Claude's coding tools ahead of the launch of GPT-5, which Anthropic considers "a direct violation of our terms of service" [1]. Anthropic's commercial terms explicitly forbid companies from using Claude to build competing services or reverse engineer its capabilities [2].
OpenAI was reportedly connecting Claude to internal tools that allowed the company to compare Claude's performance to its own models in categories like coding, writing, and safety [1]. This access allegedly enabled OpenAI to run tests evaluating Claude's capabilities against its own AI models and to check how Claude responded to safety-related prompts involving sensitive categories [2].
OpenAI's chief communications officer, Hannah Wong, defended the company's actions, stating, "It's industry standard to evaluate other AI systems to benchmark progress and improve safety" [2][4]. However, this incident has raised questions about the fine line between industry benchmarking and gaining an unfair competitive advantage.
The timing of this incident is particularly significant, as OpenAI is reportedly preparing to release GPT-5, rumored to be better at coding [2]. While the exact impact on GPT-5's development is unclear, it's possible that OpenAI was using Claude Code to improve GPT-5's coding capabilities [3].
This event highlights the increasingly competitive nature of the AI industry. Companies are becoming more protective of their technologies and are taking steps to prevent competitors from gaining advantages [4]. Anthropic had previously restricted access to its models for other companies, including the AI coding startup Windsurf, which was rumored to be an acquisition target for OpenAI [1][5].
Despite the current restrictions, Anthropic has stated that it will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry" [2][4]. This suggests a nuanced approach to API access, balancing competitive interests with industry-wide safety and benchmarking needs. As the AI landscape continues to evolve, incidents like this are likely to shape future policies and practices regarding API access, collaborative research, and competitive strategies in the AI sector.
Summarized by Navi