35 Sources
[1]
Pay up or stop scraping: Cloudflare program charges bots for each crawl
Cloudflare is now experimenting with tools that will allow content creators to charge a fee to AI crawlers to scrape their websites. In a blog Tuesday, Cloudflare explained that its "pay-per-crawl" feature is currently in a private beta. A small number of publishers -- including AdWeek, The Associated Press, The Atlantic, BuzzFeed, Fortune, Gannett, and Ars Technica owner Condé Nast -- will participate in the experiment. Each publisher will be able to set their own prices that bots must pay before scraping content, Cloudflare said. Matthew Prince, CEO of Cloudflare, said the feature would ensure that the Internet as we know it will survive "the age of AI." "Original content is what makes the Internet one of the greatest inventions in the last century, and it's essential that creators continue making it," Prince said. "AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate. This is about safeguarding the future of a free and vibrant Internet with a new model that works for everyone." Some participating publishers expressed optimism in the press release that Cloudflare's pay-per-crawl feature could potentially stop the endless scraping that publishers defending copyrights have alleged represents wide-scale theft. Any content creators interested in joining the beta can sign up, Cloudflare noted, and perhaps eventually, they too can "be compensated for their contributions to the AI economy." In the meantime, only the publishers involved in the beta will be able to choose which bots can access which parts of their sites, experimenting with blocking all bots or allowing certain bots to access certain content. Cloudflare's program also gives them the flexibility to charge some bots while letting other bots scrape for free. This lets publishers that have negotiated deals with AI companies allow approved scraping while still protecting their content from companies that have not yet struck licensing deals.
AI companies must buy in
For Cloudflare's plan to work, AI companies must sign up, too. However, while some AI companies may not see the incentive, Cloudflare has confirmed that it has partnered with AI companies on the initiative; those partners may benefit from having a simple interface for negotiating with content creators. Cloudflare suggested its AI partners benefit from "long-term collaboration" with creators whose updated content will help AI products stay relevant. They can also stop wasting money scraping poor-quality data sources, a Cloudflare blog said. "Without ongoing contributions from content creators, AI systems risk becoming outdated, biased, or less reliable -- ultimately diminishing user trust and the value of AI products," the blog said. "Cloudflare is working with AI companies to give them more signals, and ultimately improve the quality and relevance of content they can access. A healthy, sustainable ecosystem of original content is critical for AI innovation and relevance." However, Cloudflare's gamble seems to depend on AI companies agreeing to pay the prices set by publishers, and the experiment could be scrambled if bidding wars push rates low enough to alienate publishers. It also hinges on Cloudflare detecting the AI bots, which, for now, relies on user reports and Cloudflare's analysis of mass traffic patterns.
"In the early days, price discovery will play a key role -- as creators gain data on whoʼs paying for what, a transparent market will emerge that reflects the true value of original content," Cloudflare said. Looking to the future, Cloudflare suggested that its pay-per-crawl system would "evolve significantly." Perhaps one day publishers could use it to "charge different rates for different paths or content types," potentially even introducing dynamic pricing in the AI scraping environment. In that future, Cloudflare predicted that AI companies would possibly be incentivized to create agents that would crawl the web, seeking the best content deals to support specific AI products. "Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho -- and then giving that agent a budget to spend to acquire the best and most relevant content," Cloudflare said, promising that "we enable a future where intelligent agents can programmatically negotiate access to digital resources." AI crawlers now blocked by default Cloudflare's announcement comes after rolling out a feature last September, allowing website owners to block AI crawlers in a single click. According to Cloudflare, over 1 million customers chose to block AI crawlers, signaling that people want more control over their content at a time when Cloudflare observed that writing instructions for AI crawlers in robots.txt files was widely "underutilized." To protect more customers moving forward, any new customers (including anyone on a free plan) who sign up for Cloudflare services will have their domains, by default, set to block all known AI crawlers. This marks Cloudflare's transition away from the dreaded opt-out models of AI scraping to a permission-based model, which a Cloudflare spokesperson told Ars is expected to "fundamentally change how AI companies access web content going forward." In a world where some website owners have grown sick and tired of attempting and failing to block AI scraping through robots.txt -- including some trapping AI crawlers in tarpits to punish them for ignoring robots.txt -- Cloudflare's feature allows users to choose granular settings to prevent blocks on AI bots from impacting bots that drive search engine traffic. That's critical for small content creators who want their sites to still be discoverable but not digested by AI bots. "AI crawlers collect content like text, articles, and images to generate answers, without sending visitors to the original source -- depriving content creators of revenue, and the satisfaction of knowing someone is reading their content," Cloudflare's blog said. "If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk." Disclosure: Condé Nast, which owns Ars Technica, is a partner in Cloudflare's pay-per-crawl beta.
[2]
Cloudflare launches a marketplace that lets websites charge AI bots for scraping | TechCrunch
Cloudflare, a cloud infrastructure provider that serves 20% of the web, announced Tuesday the launch of a new marketplace that reimagines the relationship between website owners and AI companies -- ideally giving publishers greater control over their content. For the last year, Cloudflare has launched tools for publishers to address the rampant rise of AI crawlers, including a one-click solution to block all AI bots, as well as a dashboard to view how AI crawlers are visiting their site. In a 2024 interview, Cloudflare CEO Matthew Prince told TechCrunch these products were laying a foundation for a new type of marketplace in which publishers could distribute their content to AI companies and be compensated for it. Now, Cloudflare is bringing that marketplace to life. It's called Pay per Crawl, and Cloudflare is launching the "experiment" in private beta on Tuesday. Website owners in the experiment can choose to let AI crawlers, on an individual basis, scrape their site at a set rate -- a micropayment for every single "crawl." Alternatively, website owners can choose to let AI crawlers scrape their site for free, or block them altogether. Cloudflare claims its tools will let website owners see whether crawlers are scraping their site for AI training data, to appear in AI search responses, or for other purposes. At scale, Cloudflare's marketplace is a big idea that could offer publishers a potential business model for the AI era -- and it also places Cloudflare at the center of it all. The launch of the marketplace comes at a time when news publishers are facing existential questions about how to reach readers, as Google Search traffic fades away and AI chatbots rise in popularity. There's not a clear answer for how news publishers will survive in the AI era. Some, such as the New York Times, have filed lawsuits against tech companies for training their AI models on news articles without permission. Meanwhile, other publishers have struck multi-year deals to license their content for AI model training and to have their content appear in AI chatbot responses. Even so, only large publishers have struck AI licensing deals, and it's still unclear whether they provide meaningful sources of revenue. Cloudflare aims to create a more durable system where publishers can set prices on their own terms. The company also announced Tuesday that new websites set up with Cloudflare will now, by default, block all AI crawlers. Site owners will have to grant certain AI crawlers permission to access their site -- a change Cloudflare says will give every new domain "the default of control." Several large publishers, including Conde Nast, TIME, The Associated Press, The Atlantic, ADWEEK, and Fortune, have signed on with Cloudflare to block AI crawlers by default in support of the company's broader goal of a "permission-based approach to crawling." The business model that many of these publishers relied on for decades is slowly becoming unreliable. Historically, online publishers have allowed Google to scrape their sites in return for referrals in Google Search, which translated to traffic to their sites, and ultimately, ad revenue. However, new data from Cloudflare suggests that publishers may be getting a worse deal in the AI era than in the Google Search era. While some websites cite ChatGPT as a major traffic source, that doesn't appear to be the case broadly. This June, Cloudflare says it found that Google's crawler scraped its websites 14 times for every referral it gave them. 
Meanwhile, OpenAI's crawler scraped websites 17,000 times for every one referral, while Anthropic scraped websites 73,000 times for every referral. Meanwhile, OpenAI and Google are building AI agents that are designed to visit websites on behalf of users, collect information, and deliver it back to users directly. A future in which these tools are mainstream has huge implications for publishers that rely on readers visiting their sites. Cloudflare notes that the "true potential" of Pay per Crawl may emerge in an "agentic" future. "What if an agentic paywall could operate at the network edge, entirely programmatically? Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho -- and then giving that agent a budget to spend to acquire the best and most relevant content," Cloudflare said in a blog post. To participate in Cloudflare's experimental marketplace, AI companies and publishers must both be set up with Cloudflare accounts. In their accounts, both parties can set rates at which they'd like to buy and sell a "crawl" of the publisher's content. Cloudflare acts as the intermediary in these transactions, charging the AI company and distributing the earnings to the publisher. Cloudflare spokesperson Ripley Park tells TechCrunch there are no stablecoins or cryptocurrency involved in Pay per Crawl at this time, even though many have suggested digital currency would be perfect for something like this. Cloudflare's marketplace feels like a bold vision for the future that requires a lot of publishers and AI companies to get on board. Still, there's no guarantee publishers will get a good deal, and convincing AI firms to participate could be tough, given they're currently scraping content for free. Nevertheless, Cloudflare seems like one of the few companies in a position to make a marketplace like this happen.
[3]
Why Cloudflare wants AI companies to pay for content
Cloudflare wants AI companies to pay up. The cloud infrastructure provider, which powers around 20% of the web, is launching a new experiment that would let publishers charge AI firms every time their bots scrape a site. It's called Pay per Crawl, and it could reshape how content is accessed and monetized online. Today on TechCrunch's Equity podcast, hosts Kirsten Korosec and Max Zeff dig into Cloudflare's big swing, why it's a natural next step after a year of laying groundwork for bot-blocking tools, and whether the plan to sit at the center of a pay-for-content protocol is genius...or just wishful thinking. Listen to the full episode to hear more. Equity will be back next week, and for those of you in the U.S., enjoy the long holiday weekend!
[4]
Cloudflare will now, by default, block AI bots from crawling its clients' websites
However, such systems don't provide the same opportunities for monetization and credit as search engines historically have. AI models draw from a great deal of data on the web to generate their outputs, but these data sources are often not credited, limiting the creators' ability to make money from their work. Search engines that feature AI-generated answers may include links to original sources, but they may also reduce people's interest in clicking through to other sites and could even usher in a "zero-click" future. "Traditionally, the unspoken agreement was that a search engine could index your content, then they would show the relevant links to a particular query and send you traffic back to your website," Will Allen, Cloudflare's head of AI privacy, control, and media products, wrote in an email to MIT Technology Review. "That is fundamentally changing." Generally, creators and publishers want to decide how their content is used, how it's associated with them, and how they are paid for it. Cloudflare claims its clients can now allow or disallow crawling for each stage of the AI life cycle (in particular, training, fine-tuning, and inference) and white-list specific verified crawlers. Clients can also set a rate for how much it will cost AI bots to crawl their website. In a press release from Cloudflare, media companies like the Associated Press and Time and forums like Quora and Stack Overflow voiced support for the move. "Community platforms that fuel LLMs should be compensated for their contributions so they can invest back in their communities," Stack Overflow CEO Prashanth Chandrasekar said in the release. Crawlers are supposed to obey a given website's directions (provided through a robots.txt file) to determine whether they can crawl there, but some AI companies have been accused of ignoring these instructions. Cloudflare already has a bot verification system where AI web crawlers can tell websites who they work for and what they want to do. For these, Cloudflare hopes its system can facilitate good-faith negotiations between AI companies and website owners. For the less honest crawlers, Cloudflare plans to use its experience dealing with coordinated denial-of-service attacks from bots to stop them. "A web crawler that is going across the internet looking for the latest content is just another type of bot -- so all of our work to understand traffic and network patterns for the clearly malicious bots helps us understand what a crawler is doing," wrote Allen. Cloudflare had already developed other ways to deter unwanted crawlers, like allowing websites to send them down a path of AI-generated fake web pages to waste their efforts. While this approach will still apply for the truly bad actors, the company says it hopes its new services can foster better relationships between AI companies and content producers.
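The protocol described above only works when a crawler voluntarily checks a site's robots.txt before fetching pages; nothing in the standard enforces it. As a minimal sketch of what that voluntary check looks like, the Python snippet below uses the standard library's robotparser; the site URL and crawler name are hypothetical placeholders, not real endpoints or bot identities.

```python
# Minimal sketch of a voluntary robots.txt check (Robots Exclusion Protocol).
# The site URL and user-agent string are hypothetical placeholders.
from urllib import robotparser

SITE = "https://publisher.example"   # hypothetical publisher site
BOT = "ExampleAIBot"                 # hypothetical AI crawler identity

rp = robotparser.RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()                            # fetch and parse the site's robots.txt

page = SITE + "/articles/latest"
if rp.can_fetch(BOT, page):
    print(f"{BOT} is allowed to crawl {page}")
else:
    # Nothing technically enforces this answer; honoring it is voluntary,
    # which is why network-level blocking is being layered on top of it.
    print(f"{BOT} is disallowed from {page}")
```

Because compliance sits entirely on the crawler's side, dishonest bots can simply skip this step, which is the gap Cloudflare's network-level enforcement is meant to close.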
[5]
Cloudflare Is Blocking AI Crawlers by Default
The age of the AI scraping free-for-all may be coming to an end. At least if Cloudflare gets its way. Last year, internet infrastructure firm Cloudflare launched tools enabling its customers to block AI scrapers. Today the company has taken its fight against permissionless scraping several steps further. It has switched to blocking AI crawlers by default for its customers and is moving forward with a Pay Per Crawl program that lets customers charge AI companies to scrape their websites. Web crawlers have trawled the internet for information for decades. Without them, people would lose vitally important online tools, from Google Search to the Internet Archive's invaluable digital preservation work. But the AI boom has produced a corresponding boomlet in AI-focused web crawlers, and these bots scrape web pages with a frequency that can mimic a DDoS attack, straining servers and knocking websites offline. Even when websites can handle the heightened activity, many do not want AI crawlers scraping their content, especially news publications that are demanding that AI companies pay to use their work. "We've been feverishly trying to protect ourselves," says Danielle Coffey, the president and CEO of the trade group News Media Alliance, which represents several thousand North American outlets. So far, Cloudflare's head of AI control, privacy, and media products, Will Allen, tells WIRED that over 1 million customer websites have activated its older AI-bot-blocking tools. Now millions more will have the option of keeping bot blocking as their default. Cloudflare also says it can identify even "shadow" scrapers that are not publicized by AI companies. The company noted that it uses a proprietary combination of behavioral analysis, fingerprinting, and machine learning to classify and separate AI bots from "good" bots. A widely used web standard called the Robots Exclusion Protocol, often implemented through a robots.txt file, helps publishers block bots on a case-by-case basis, but following it is not legally required, and there's plenty of evidence that some AI companies try to evade efforts to block their scrapers. "Robots.txt is ignored," Coffey says. According to a report from the content licensing platform Tollbit, which offers its own marketplace for publishers to negotiate with AI companies over bot access, AI scraping is still on the rise -- including scraping that ignores robots.txt. Tollbit found that over 26 million scrapes ignored the protocol in March 2025 alone. In this context, Cloudflare's shift to blocking by default could prove a significant roadblock to surreptitious scrapers and could give publishers more leverage to negotiate, whether through the Pay Per Crawl program or otherwise. "This could dramatically change the power dynamic. Up to this point, AI companies have not needed to pay to license content, because they've known that they can just take it without consequences," says Atlantic CEO (and former WIRED editor in chief) Nicholas Thompson. "Now they'll have to negotiate, and it will become a competitive advantage for the AI companies that can strike more and better deals with more and better publishers." AI startup ProRata, which operates the AI search engine Gist.AI, has agreed to participate in the Pay Per Crawl program, according to CEO and founder Bill Gross. "We firmly believe that all content creators and publishers should be compensated when their content is used in AI answers," Gross says.
Of course, it remains to be seen whether the big players in the AI space will participate in a program like Pay Per Crawl, which is in beta. (Cloudflare declined to name current participants.) Companies like OpenAI have struck licensing deals with a variety of publishing partners, including WIRED parent company Condé Nast, but specific details of these agreements have not been disclosed, including whether the agreements cover bot access. Meanwhile, there's an entire online ecosystem of tutorials, aimed at web scrapers, about how to evade Cloudflare's bot-blocking tools. As the blocking default rolls out, it's likely these efforts will continue. Cloudflare emphasizes that customers who do want to let the robots scrape unimpeded will be able to turn off the blocking setting. "All blocking is fully optional and at the discretion of each individual user," Allen says.
[6]
Pay Up, AI Bot: That's the Message From a Key Company in How the Internet Works
AI companies might find it harder to access the entire web to train their large language models after the internet infrastructure provider Cloudflare said this week it would block AI data crawlers by default. It's the latest front to open in an ongoing fight between the creators of content and the AI developers who use that content to train generative AI models. In court, authors and content creators are suing major AI companies for compensation, saying copyrighted content was used without permission. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) While content providers are seeking compensation for information that was used to train models in the past, Cloudflare's move marks a new defensive measure against future efforts to train models. But it isn't just about blocking crawlers: Cloudflare says it wants to create a marketplace where AI companies can pay to crawl and scrape a site, meaning the provider of that information gets paid, and the AI developer gets permission. "That content is the fuel that powers AI engines, and so it's only fair that content creators are compensated directly for it," Cloudflare CEO Matthew Prince said in a blog post. Crawlers -- bots that visit and copy the information from a website -- are a vital component of the connected internet. It's how search engines like Google know what's on different websites, and how they can serve you the latest information from places like CNET. AI crawlers pose distinct challenges for websites. For one, they can be aggressive, generating unsustainable levels of traffic for smaller sites. They also offer little reward for their scraping: If Google crawls a site for search engine results, it will likely send traffic back to that site by including it in search results. Being crawled for training data might mean no additional traffic or even less, if people stop visiting the site and rely just on the AI model. That's why executives from major websites like Pinterest, Reddit and several major publishing companies (including Ziff Davis, which owns CNET) cheered Cloudflare's news in statements. "The whole ecosystem of creators, platforms, web users and crawlers will be better when crawling is more transparent and controlled, and Cloudflare's efforts are a step in the right direction for everyone," Reddit CEO Steve Huffman said in a statement. Asked about Cloudflare's announcement, OpenAI said its ChatGPT model is intended to help connect its users to content on the web, similarly to search engines, and that it has integrated search into its chat functions. The company also said it relies on a different mechanism than the one Cloudflare has proposed for letting publishers indicate how AI crawlers should behave: robots.txt. OpenAI said the robots.txt model works already and Cloudflare's changes are unnecessary. AI models require a ton of data to train. That's how they're able to provide detailed answers to questions and do a decent (if imperfect) job of providing a wide range of information. These models are fed incredible amounts of information and make connections between words and concepts based on what they see in that training data. The issue is how developers have gotten that data.
There are now dozens of lawsuits between content creators and AI companies. Two saw major rulings just last week. In one case, a federal judge ruled Anthropic followed the law when it used copyright-protected books to train its model Claude -- via a concept called fair use. At the same time, the judge said the company's creation of a permanent library of the books was not covered by fair use, and ordered a new trial on those piracy allegations. In a separate case, a judge ruled in favor of Meta in a dispute between the company and a group of 13 authors. But Judge Vince Chhabria said the ruling in this case doesn't mean future cases against Meta or other AI companies will go the same way, essentially that "these plaintiffs made the wrong arguments and failed to develop a record in support of the right one." The idea of charging crawlers to visit a site isn't entirely new. Other companies, like Tollbit, offer services that allow website owners to charge AI companies for crawling. Will Allen, head of AI control, privacy and media products at Cloudflare, said the environment around this technology is still developing. "We think it's very early for a content marketplace to form, and we are just starting to experiment here," he told CNET. "We're excited to see many different models flourish."
[7]
Cloudflare declares war on AI crawlers - and the stakes couldn't be higher
The major Internet Content Delivery Network (CDN), Cloudflare, has declared war on AI companies. Starting July 1, Cloudflare now blocks AI web crawlers by default from accessing content on your websites without permission or compensation. The change addresses a real problem. My own small site, where I track all my stories, Practical Technology, has been slowed dramatically at times by AI crawlers. It's not just me. Numerous website owners have reported that AI crawlers, such as OpenAI's GPTBot and Anthropic's ClaudeBot, generate massive volumes of automated requests that clog up websites so they're as slow as sludge. The cloud-hosting service Vercel reports that GoogleBot alone bombards the sites it hosts with over 4.5 billion requests a month. These AI bots often crawl sites far more aggressively than traditional search engine crawlers. They sometimes revisit the same pages every few hours or even hit sites with hundreds of requests per second. While the AI companies deny that their bots are to blame, the evidence tells a different story. Thus, on behalf of its two million-plus customers -- about 20% of the web -- Cloudflare now blocks AI crawlers. For any new website signing up for its services, AI crawlers will be automatically blocked from accessing its content unless the site owner grants explicit permission. Additionally, Cloudflare promises to detect "shadow" scrapers -- bots that attempt to evade detection -- by using behavioral analysis and machine learning. What's good for the AI goose is good for the gander. This move reverses the previous status quo, where website owners had to opt out of AI crawling. Now, blocking is the default, and AI vendors must request access and clarify their intentions, whether for model training, search, or other uses, before they're allowed in. This change arises not only because of frustrated website owners. Numerous publishing companies, such as The Associated Press, Condé Nast, and ZDNET's own parent company, Ziff Davis, are frustrated that AI companies have been "strip mining" the web for content. All too often, this has been done without compensation or consent, and sometimes while ignoring standard protocols like robots.txt that are meant to block crawlers. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Moreover, recent court cases have ruled in favor of Meta and Anthropic, finding that their use of copyrighted works was legal under the doctrine of fair use. Needless to say, writers, artists, and publishers don't like this one bit. Publishers are still worried that the federal government will give AI free rein to do as it wants with their content. AI powerhouses such as OpenAI and Google are continuing to lobby the government to classify AI training on copyrighted data as fair use. It's also worth noting that the Copyright Office recently released a pre-publication version of its 108-page copyright and AI report, which struck a middle ground by supporting both of these world-class industries that contribute so much to our economic and cultural advancement. It added, however, that while some generative AI probably constitutes a "transformative" use, the mass scraping of all data did not qualify as fair use. The next day, the Trump administration fired the head of the Copyright Office and replaced her with an attorney with no prior experience in copyright law.
Given all this, it's no wonder that publishers sought an ally in technology. As Cloudflare CEO Matthew Prince said in a statement, its new policy is meant to "give publishers the control they deserve and build a new economic model that works for everyone -- creators, consumers, tomorrow's AI founders, and the future of the web itself." To complement the move to block AI crawlers, Cloudflare has also launched its "Pay Per Crawl" program. This enables publishers to set their own rates for AI companies that want to scrape their content. This system is currently in private beta and aims to create a framework where AI firms can pay for access, or be denied if they refuse. Technically, this will be done by dusting off an old, mostly unused web server response code, HTTP 402, which signals a "Payment Required" error. This means it should be simple to implement and compatible with existing websites and their infrastructure. Overall, this is a big deal. Thanks to Cloudflare powering such a large portion of the internet, a significant amount of web content could become inaccessible to AI companies unless they negotiate access or pay licensing fees. As Nicholas Thompson, CEO of The Atlantic, noted, "Until now, AI companies have not needed to pay for content licenses because they could simply take it without repercussions. Now they will need to negotiate." To this point, most AI companies have been actively against paying for content. As Sir Nick Clegg, former deputy UK Prime Minister and Meta executive, said recently, merely asking artists' permission before they scrape copyrighted content will "basically kill the AI industry." Cloudflare's new policy is a direct response to this approach and the increasing volume and intrusiveness of AI crawlers that have come with it. It's also an attempt to stop the siphoning of traffic that would otherwise go to publishers. Since the rise of AI, traffic to news sites has plunged. For example, Business Insider's traffic dropped by more than half -- 55% -- from April 2022 to April 2025. Thompson recently predicted that, left unchecked, AI will drive the Atlantic's traffic from Google toward zero. What will happen next? Will other CDNs, such as Akamai, follow suit? Stay tuned. For now, the era of unrestricted AI crawling appears to be ending -- well, at least for the fifth of the internet that flows through Cloudflare's pipes.
[8]
Cloudflare just changed the internet, and it's bad news for the AI giants
[9]
Cloudflare to Block AI Crawlers From Scraping Websites Unless They Pay
Cloudflare has introduced a new way for websites to stop AI crawlers from stealing their content without permission -- and to potentially get them to pay up. Cloudflare is an internet infrastructure service that helps websites manage and secure their traffic, such as fending off Distributed Denial of Service (DDoS) attacks. Starting today, Cloudflare is "changing the default to block AI crawlers unless they pay creators for their content," says CEO Matthew Prince. "That content is the fuel that powers AI engines, and so it's only fair that content creators are compensated directly for it." Some AI companies have inked deals with publishers for access to their content; some publishers have sued for copyright infringement. But not everyone has lawsuit money. Cloudflare says it wants to help its customers get paid with "Pay per crawl," a new service that creates "authentication mechanisms to create a framework for paid content access." "Even if a crawler doesn't have a billing relationship with Cloudflare, and thus couldn't be charged for access, a publisher can still choose to 'charge' them -- the functional equivalent of a network level block," Cloudflare says. Sites determine pricing, and they have the option to allow specific crawlers. Pay per crawl is currently in private beta; interested sites and crawlers can reach out to Cloudflare.
A Sustainable Digital Ecosystem?
The feature could replace an old way websites typically block bots -- updating their robots.txt file -- which is no longer sufficient in the AI era, as many data scrapers ignore the instructions in the robots.txt file. That's why PCMag's parent company Ziff Davis is suing OpenAI. "We applaud Cloudflare for advocating for a sustainable digital ecosystem that benefits all stakeholders -- the consumers who rely on credible information, the publishers who invest in its creation, and the advertisers who support its dissemination," says Vivek Shah, CEO of Ziff Davis, in a statement provided to Cloudflare. Reddit CEO Steve Huffman, who has expressed frustration with AI crawlers, praised Cloudflare's new solution as a "more transparent" approach and "a step in the right direction for everyone." And Cloudflare's press release lists several top publishing companies that "embrace a permission-based model for AI crawling," including Condé Nast, The Atlantic, the Associated Press, Buzzfeed, ADWEEK, Fortune, Dotdash Meredith, and Gannett Media. Cloudflare says it manages traffic for 20% of the web and "trillions of requests daily," which may be enough to take a meaningful bite out of AI companies' crawlers. However, the impact on the overall AI ecosystem depends on how many publishers choose to adopt the new setting. Prince tells The New York Times he is "100% confident we can block them from accessing the content. And if they don't get to access the content, then their product will be worse." If widely adopted, this could encourage a new business model for AI companies and publishers, requiring payment for content and a more consent-based approach to data scraping.
[10]
Cloudflare creates AI crawler tollbooth to pay publishers
The bargain between content makers and crawlers has broken down. Cloudflare has started blocking AI web crawlers by default in a bid to become the internet's gatekeeper. The term "gatekeeper" has been applied in a pejorative sense to platform companies like Apple and Google that use their contractual and technical control over operating systems to extract monopoly rents from developers within the platform ecosystem. Cloudflare proposes an alternative interpretation of that role that's less self-serving: to help protect online publishers from the predation of AI firms. CEO Matthew Prince in a blog post explains that the deal Google made with content creators almost 30 years ago was that the search service would send traffic to websites in exchange for the opportunity to access and index their content. That deal, he argues, has been broken as Google and rival AI companies deploy search services derived from uncompensated web content crawls, services that starve publishers of revenue by referring less search traffic to their sites. "The problem is whether you create content to sell ads, sell subscriptions, or just to know that people value what you've created, an AI-driven web doesn't reward content creators the way that the old search-driven web did," said Prince. "And that means the deal that Google made to take content in exchange for sending you traffic just doesn't make sense anymore." The web, he said, is being "stripmined by AI crawlers," referring to the automated bots run by companies like Google, Microsoft, OpenAI, and Anthropic, that visit websites and download page content to use for training their AI models. Recall that last year, Mustafa Suleyman, the CEO of Microsoft AI, argued that content from non-professional publishers is fair game for commercial exploitation. "I think that with respect to content that is already on the open web, the social contract of that content since the 1990s has been it is fair use," he said. "Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That's been the understanding." The courts have also not done much to help publishers, with recent rulings that suggest training on publicly accessible web data qualifies as fair use under copyright law. In a separate post, Cloudflare's David Belson, head of data insight, and Sam Rhea, VP of product, published data illustrating the disparity between what AI crawlers take and the referral traffic they send back to websites. During the period between June 19 and 26, 2025, for example, "Anthropic's AI platform Claude made nearly 71,000 HTML page requests for every HTML page referral," observe Belson and Rhea. We must note that these measures only track traffic from the Claude website, not the app, as the app does not emit a header. The same goes for the other AI vendors. That said, the request/referral ratios listed for other AI firms with search operations include OpenAI (1,600:1), Perplexity (202.4:1), Microsoft (40:1), Yandex (18:1), Google (9.4:1), ByteDance (1.4:1), Baidu (1:1), DuckDuckGo (0.3:1), and Mistral (0.1:1). Pointing to this data, Belson and Rhea contend that legacy search crawlers scanned site content only a few times per visitor sent, and that making a site available for crawling improved revenue. Not so in this new world. "The new data we are observing suggests that is no longer the case," they write.
In an effort to change that dynamic, "Cloudflare, along with a majority of the world's leading publishers and AI companies, is changing the default to block AI crawlers unless they pay creators for their content," said Prince. Large content providers have already arranged payment deals with various AI firms through litigation and contracts. In November 2024, for example, Microsoft entered into an AI training deal with publisher HarperCollins, according to a court filing [PDF] in the copyright lawsuit Bird v. Microsoft. The license gives Microsoft the right to use a given work for AI training for three years, in exchange for a whopping $5,000 payment split between the author and publisher. Cloudflare aims to make such deals occur through a network handshake. The firm's payment service, now in private beta testing, is called Pay per crawl. "Each time an AI crawler requests content, they either present payment intent via request headers for successful access (HTTP response code 200), or receive a response with pricing," explain Will Allen, VP of product, and Simon Newton, engineering manager, in a blog post. "Cloudflare acts as the Merchant of Record for pay per crawl and also provides the underlying technical infrastructure." The amount charged is not clear and is likely to vary by publisher. The payment service will let publishers block AI crawlers, allow specific ones, charge for access, or grant free access. If publishers see value in Cloudflare's gatekeeping, their adoption of this technology will make the open web less open. But that may be the price necessary to keep the web populated with new material. ®
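Taken at face value, the handshake Allen and Newton describe maps onto ordinary HTTP. The sketch below shows, from a crawler's point of view, roughly how such an exchange could look in Python; the header names ("crawler-price", "crawler-max-price"), the price units, and the budget logic are illustrative assumptions, not Cloudflare's documented interface.

```python
# Crawler-side sketch of a pay-per-crawl style exchange: request a page, and
# if the server answers 402 with a price, optionally retry while signaling
# payment intent. Header names and the budget value are assumptions made for
# illustration, not a documented API.
import requests

URL = "https://publisher.example/articles/latest"  # hypothetical page
BUDGET_PER_CRAWL = 0.05                            # max USD to pay per fetch (example)
HEADERS = {"User-Agent": "ExampleAIBot/1.0"}       # hypothetical crawler identity

resp = requests.get(URL, headers=HEADERS)

if resp.status_code == 402:
    # Hypothetical response header carrying the publisher's asking price.
    asking_price = float(resp.headers.get("crawler-price", "inf"))
    if asking_price <= BUDGET_PER_CRAWL:
        # Retry with a hypothetical payment-intent header; the merchant of
        # record would bill the crawler's account and return 200 with content.
        resp = requests.get(
            URL,
            headers={**HEADERS, "crawler-max-price": str(asking_price)},
        )

if resp.status_code == 200:
    page_html = resp.text  # content delivered (and, if priced, charged for)
```

The appeal of reusing plain status codes and headers, as the blog post suggests, is that the negotiation can happen per request without any out-of-band contract between the two sides.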
[11]
Cloudflare launches tool to help website owners monetize AI bot crawler access
NEW YORK, July 1 (Reuters) - Cloudflare has launched a tool that blocks bot crawlers from accessing content without permission or compensation to help websites make money from AI firms trying to access and train on their content, the software company said on Tuesday. The tool allows website owners to choose whether artificial intelligence crawlers can access their material and set a price for access through a "pay per crawl" model, which will help them control how their work is used and compensated, Cloudflare said. With AI crawlers increasingly collecting content without sending visitors to the original source, website owners are looking to develop additional revenue sources as search traffic referrals that once generated advertising revenue decline. The initiative is supported by major publishers including Condé Nast and Associated Press, as well as social media companies such as Reddit and Pinterest. Cloudflare's Chief Strategy Officer Stephanie Cohen said the goal of such tools was to give publishers control over their content, and ensure a sustainable ecosystem for online content creators and AI companies. "The change in traffic patterns has been rapid, and something needed to change," Cohen said in an interview. "This is just the beginning of a new model for the internet." Google, for example, has seen its ratio of crawls to visitors referred back to sites rise to 18:1 from 6:1 just six months ago, according to Cloudflare data, suggesting the search giant is maintaining its crawling but decreasing referrals. The decline could be a result of users finding answers directly within Google's search results, such as AI Overviews. Still, Google's ratio is far lower than that of other AI companies, such as OpenAI's 1,500:1. For decades, search engines have indexed content on the internet, directing users back to websites, an approach that rewards creators for producing quality content. However, AI companies' crawlers have disrupted this model because they harvest material without sending visitors to the original source and aggregate information through chatbots such as ChatGPT, depriving creators of revenue and recognition. Many AI companies are circumventing a common web standard used by publishers to block the scraping of their content for use in AI systems, and argue they have broken no laws in accessing content for free. In response, some publishers, including the New York Times, have sued AI companies for copyright infringement, while others have struck deals to license their content. Reddit, for example, has sued AI startup Anthropic for allegedly scraping Reddit user comments to train its AI chatbot, while inking a content licensing deal with Google. Reporting by Krystal Hu in New York; Editing by Kate Mayberry
[12]
Cloudflare experiment will block AI bot scrapers unless they pay a fee
New customers will also be asked if they want to block AI crawlers by default. Cloudflare has rolled out a couple of new measures meant to keep AI bot crawlers at bay. To start with, every new domain customer that signs up with the company to manage their website traffic will now be asked if they want to allow AI crawlers or to block them altogether. The company released a free tool in 2024 to block AI bots, but with this change, users can block them by default without having to tinker with their settings. Several big publishers, including Condé Nast, TIME and The Associated Press have already signed up to block crawlers. In addition, Cloudflare has launched a private beta experiment called "pay per crawl," which would only allow crawlers to access a website's content if they pay for it. Matthew Prince, Cloudflare's CEO, recently went on record to say that publishers are facing an existential threat, because people aren't clicking on chatbots' source links. If users don't visit those sources, the websites don't get the ad revenue they need to be able to keep running. "Original content is what makes the Internet one of the greatest inventions in the last century, and it's essential that creators continue making it," Prince said in a statement released with the company's latest updates. "AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate. This is about safeguarding the future of a free and vibrant Internet with a new model that works for everyone." Cloudflare believes publishers should be able to charge AI bots for access if they want to, and pay per crawl is its first experiment for that particular purpose. "Each time an AI crawler requests content, they either present payment intent via request headers for successful access (HTTP response code 200), or receive a 402 Payment Required response with pricing," Cloudflare explained. The company records those transactions and provides the underlying technical infrastructure. Publishers will be able to allow certain crawlers to access their content for free if they want to, and they can define a flat, per-request price across their websites for other crawlers. The company says pay per crawl is still in its very early stages, and it expects the tool to evolve in the future. It also says that it supports the development of other marketplaces and ways to charge AI crawlers for content. A marketplace could, for instance, allow dynamic pricing that enables publishers to charge different rates for different types of content. A conceptual, publisher-side sketch of the flat per-request flow described here follows below.
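For illustration only, the publisher-facing half of that flow can be pictured as a tiny HTTP handler that serves humans normally, answers AI crawlers with 402 plus a flat price, and returns content when a crawler presents payment intent. In Cloudflare's service this decision happens at the network edge rather than in the origin application; the Flask route, header names, bot list, and price below are all assumptions invented for the sketch.

```python
# Conceptual publisher-side sketch of a flat per-request "pay per crawl" gate.
# All names, headers, and prices are invented for illustration; the real
# system runs this logic at the CDN edge, not in the origin application.
from flask import Flask, request

app = Flask(__name__)

PRICE_PER_CRAWL = "0.01"           # example flat price per request, in USD
KNOWN_AI_BOTS = {"ExampleAIBot"}   # hypothetical set of identified AI crawlers

@app.route("/articles/<slug>")
def article(slug):
    user_agent = request.headers.get("User-Agent", "")
    is_ai_bot = any(bot in user_agent for bot in KNOWN_AI_BOTS)

    if not is_ai_bot:
        return f"<html>full article: {slug}</html>"          # normal visitors

    if request.headers.get("crawler-max-price"):              # payment intent
        # A real system would verify the crawler's identity and record the
        # charge with the billing intermediary before releasing the content.
        return f"<html>full article: {slug}</html>", 200, {"crawler-charged": PRICE_PER_CRAWL}

    # No payment intent: refuse with pricing, per the quoted 402 flow.
    return "Payment Required", 402, {"crawler-price": PRICE_PER_CRAWL}
```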
[13]
Web giant Cloudflare to block AI bots from scraping content by default
Internet firm Cloudflare will start blocking artificial intelligence crawlers from accessing content without website owners' permission or compensation by default, in a move that could significantly impact AI developers' ability to train their models. Starting Tuesday, every new web domain that signs up to Cloudflare will be asked if it wants to allow AI crawlers, effectively giving site owners the ability to prevent bots from scraping data from their websites. Cloudflare is what's called a content delivery network, or CDN. It helps businesses deliver online content and applications faster by caching the data closer to end-users. CDNs play a significant role in making sure people can access web content seamlessly every day. Roughly 16% of global internet traffic goes directly through Cloudflare's CDN, the firm estimated in a 2023 report. "AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate," said Matthew Prince, co-founder and CEO of Cloudflare, in a statement Tuesday.
[14]
Cloudflare tests "pay-per-crawl" system to charge AI firms for scraping website content
What just happened? Cloudflare is experimenting with a new way to prevent AI crawlers from scraping website content. The CDN/security company has announced that it will block them from accessing content without permission or compensation by default. Publishers can allow the crawlers, but the bots' AI firms will be charged. Starting from today, every new website that signs up to Cloudflare will be asked if they want to allow AI crawlers to scrape their site. Site owners can not only choose if they want to allow access and to which content, but also decide how AI companies can use it. Moreover, the AI companies can clearly state if the crawlers are being used for training, inference, or search, helping owners decide which crawlers to allow. Cloudflare launched a free tool to block AI bots in 2024, but this change allows publishers to block them by default, and without altering any settings. Condé Nast, TIME and The Associated Press are just some of the publishers who have signed up to block the crawlers. Cloudflare says over 1 million customers have chosen this option. Cloudflare adds that a small number of publishers and content creators are participating in a private beta for its pay-per-crawl feature. This will allow those who do allow the bots to scrape their content to set a price for the privilege. "Each time an AI crawler requests content, they either present payment intent via request headers for successful access (HTTP response code 200), or receive a 402 Payment Required response with pricing," Cloudflare explained. Anyone interested in becoming part of the beta can sign up here. Around 16% of global internet traffic goes directly through Cloudflare's CDN, according to a 2023 report, so the move could have a huge impact on AI companies. "Original content is what makes the Internet one of the greatest inventions in the last century, and it's essential that creators continue making it," said Matthew Prince, CEO of Cloudflare. "AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate. This is about safeguarding the future of a free and vibrant Internet with a new model that works for everyone." For pay-per-crawl to work properly, AI companies must also sign up for the program. Cloudflare says that it has partnered with several AI firms willing to participate in what should be a mutually beneficial arrangement - assuming they agree to pay the prices set by publishers. The news comes just a couple of weeks after Prince reiterated his previous warning that AI crawlers and summaries were destroying the internet's business model. Default blocking and pay-per-crawl are part of the company's plan to combat the threat of a zero-click internet, a term describing when users no longer need to click on links to find whatever content they want. In the past, websites typically saw one human visitor for every six times Google crawled their pages - a relatively balanced ratio that often translated into ad views. By comparison, OpenAI's crawler had a much lower engagement rate of about one visitor per 250 crawls, while Anthropic's ratio was even steeper at roughly 6,000 to one. According to Prince, those gaps have widened: Google now averages around 18 crawls per visitor, OpenAI's rate has dropped to 1,500 to one, and Anthropic's is estimated at a staggering 60,000 to one.
[15]
Free Lunch Is Over for the AI That Broke the Web
The foundational deal of the modern web, a handshake agreement that powered two decades of search and content, is officially dead. Cloudflare just put a price on scraping the internet, and it's coming for artificial intelligence's free lunch. Almost 30 years ago, two Stanford grad students, Larry Page and Sergey Brin, built Google on a simple bargain: content creators would let them copy the entire web in exchange for traffic. For years, that traffic powered ad revenue, subscriptions, and the growth of online media. Google mostly upheld its end of the deal. But that era is collapsing under the weight of AI. On July 1, Cloudflare, one of the internet's core infrastructure companies, declared "Content Independence Day." In a landmark policy shift, the company announced it will now block AI crawlers from scraping sites hosted on its platform unless those bots pay content creators for the data they consume. "Cloudflare, along with a majority of the world's leading publishers and AI companies, is changing the default to block AI crawlers unless they pay creators for their content," CEO Matthew Prince announced in a blog post. "That content is the fuel that powers AI engines, and so it's only fair that content creators are compensated directly for it." This is a sharp, aggressive turn from the web's traditionally open access ethos. Cloudflare argues it's long overdue. AI tools like ChatGPT and Google's own AI Overviews are now answering user questions directly, effectively strip-mining websites for information while sending almost no traffic back to the original source. "Instead of being a fair trade, the web is being stripmined by AI crawlers with content creators seeing almost no traffic and therefore almost no value," Prince said. The numbers are stark. Cloudflare claims it's already 10 times harder to get traffic from Google than it was a decade ago due to features like the answer box. But the new AI models are far worse. According to Cloudflare's internal metrics, OpenAI drives 750 times less traffic than traditional Google search, while Anthropic drives a staggering 30,000 times less. The reason is simple: people are asking ChatGPT instead of Googling. The content still gets used, but the creators have been completely cut out of the value chain. Using its position as a gatekeeper for roughly 20% of all websites globally (around one-fifth of all web traffic passes through Cloudflare's network), Cloudflare is now forcing the issue by charging a toll. But the plan goes further than just blocking bots. Cloudflare aims to build a new content marketplace where AI companies and creators can trade directly. Compensation would be based not on clicks, but on how valuable the content is for training AI models. To explain this, the company uses a quirky metaphor: imagine an AI's knowledge is a block of Swiss cheese. The holes represent knowledge gaps. The more your content fills one of those holes, the more it's worth to an AI company. It's an ambitious proposal that challenges the entire web economy, which still judges value by how viral something is. Cloudflare is betting that filling gaps in machine knowledge is a more stable long-term market than chasing fickle human attention. It also hints at something more radical: the end of the free and open web as we knew it. This move marks the dawn of the pay-to-train era. OpenAI has already signed high-profile licensing deals with publishers like Reddit and the Financial Times.
Other AI giants are quietly inking data partnerships or scraping whatever they can until they get blocked. But Cloudflare's decision is the first time a major infrastructure provider has flipped the default setting for a huge portion of the internet. The real story here is not just technical; it is economic. We are watching the rise of a new class of digital middlemen, companies that will broker access between the creators of web content and the AI models that feed on it. In a post-click internet, training data is the new currency, and Cloudflare just positioned itself as a major bank. The company says its goal is to usher in a new golden age for creators. "We believe that if we can begin to score and value content not on how much traffic it generates, but on how much it furthers knowledge," Prince said, "we not only will help AI engines get better faster, but also potentially facilitate a new golden age of high-value content creation." That sounds nice. But it also raises messy questions. Who decides what counts as high value? Who gets paid, and how much? If content is optimized for AI rather than people, what happens to the soul of the web? The darker possibility is a content Cold War, where publishers wall off everything and AI companies hoard exclusive data deals, making the web more fragmented and less open than ever before. Whether or not Cloudflare's "Swiss cheese" model takes off, this much is true: AI broke the old search-based web economy. On July 1, Cloudflare drew a line in the sand. For the first time in the age of generative AI, the pipes of the internet are fighting back.
[16]
Cloudflare launches way to charge AI bots for crawling sites
Cloudflare announced a new tool on Tuesday that can allow publishers to charge AI bots for scraping websites. That could prove to be a major change to how the internet has functioned so far in the AI age -- models have, by default, scraped the internet with abandon and without permission, often to the chagrin of content owners. Cloudflare is a major infrastructure provider, meaning large swaths of the internet will get access to this tool. The "pay per crawl" feature was part of Cloudflare's announcement on Tuesday that the company is now the first internet infrastructure provider to "block AI crawlers accessing content without permission or compensation, by default." The tool, which is currently in beta, allows site owners to charge a fee each time an AI bot wants to "crawl" its website for information. The idea behind this push from Cloudflare is to put power back in the hands of the people who make the content that AI uses to train. (Full disclosure: Ziff Davis, which owns Mashable, was among the many publishers quoted in Cloudflare's press release that supported a permission-based approach to AI bots.) Wrote Cloudflare in its press release: "For decades, the Internet has operated on a simple exchange: search engines index content and direct users back to original websites, generating traffic and ad revenue for websites of all sizes. This cycle rewards creators that produce quality content with money and a following, while helping users discover new and relevant information. That model is now broken. AI crawlers collect content like text, articles, and images to generate answers, without sending visitors to the original source - depriving content creators of revenue, and the satisfaction of knowing someone is viewing their content." We'll see how it evolves, but the tool could prove to be useful for people who create original content on the internet.
[17]
Cloudflare will now block AI crawlers on your website - and even force them to pay you
Specific crawlers can be granted free access, charged, or blocked. Online creators often have very little control over the types of crawlers that can access their content, but Cloudflare may have a solution. The company has revived HTTP response code 402 as a neat way to block or charge AI crawlers that access your site in a new feature it calls 'pay per crawl'. The best part is that it's not an all-or-nothing control - users will be able to allow specific crawlers to access their site for free, charge others for access, and block the ones they don't want trawling their content. HTTP response code 402, otherwise known as the 402 Payment Required status code, indicates to crawlers that payment is needed to access the content. As a result, the crawler can either respond with intent to pay or be blocked from accessing the content. As an added bonus, content creators with a block on their site can effectively 'tell' AI crawlers that they are open to potential payments in the future. For those thinking that someone could simply spoof a crawler that has access to the site, Cloudflare is one step ahead. An authentic crawler uses the 'signature-agent', 'signature-input', and 'signature' headers to authenticate itself with Cloudflare. Cloudflare then checks those signatures against the public key from an Ed25519 key pair published in a hosted key directory, whose URL and associated user agent information are registered with Cloudflare, letting the authentic crawler through and blocking any spoofed crawlers. Crawlers will also be able to crawl the web with a set budget for accessing protected sites: a crawler can use the 'crawler-exact-price' header to accept the price advertised by a site's 'crawler-price' header, or preemptively send a 'crawler-max-price' header, which grants access if the site's price is equal to or less than the crawler's budget. Cloudflare also has some theories about the potential of pay per crawl in the future. An AI agent could be given a budget to crawl the web when responding to a prompt, allowing the user to access high-quality and relevant content. Pay per crawl is currently only available in private beta, but interested parties can reach out to Cloudflare via the link at the bottom of the blog.
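To make the header exchange described above more concrete, here is a minimal sketch of the proactive flow from the crawler's side, written in Python with the requests library. Only the header names ('signature-agent', 'signature-input', 'signature', 'crawler-max-price', 'crawler-price') come from the description above; the URL, the header values, and the "USD 0.01" price format are illustrative assumptions, not Cloudflare's documented formats.

```python
import requests

# Hypothetical target page; a real crawler would be registered with Cloudflare
# and would compute genuine HTTP message signatures for the three signature headers.
URL = "https://example.com/articles/some-post"

headers = {
    # Identity headers (placeholder values, for illustration only).
    "signature-agent": "https://crawler.example-ai.com",
    "signature-input": "sig1=(...);keyid=...",
    "signature": "sig1=:...:",
    # Proactive flow: declare up front the most this crawl is worth to us.
    # The price format shown here is an assumption, not a documented format.
    "crawler-max-price": "USD 0.01",
}

resp = requests.get(URL, headers=headers)

if resp.status_code == 200:
    print("Access granted within budget")  # the page content is in resp.text
elif resp.status_code == 402:
    # The site's asking price exceeded our budget; it is advertised back to us.
    print("Payment required, asking price:", resp.headers.get("crawler-price"))
else:
    print("Blocked or other outcome:", resp.status_code)
```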
[18]
Cloudflare will let content creators charge fees to AI chatbots
Cybersecurity company Cloudflare announced on Tuesday it is adding a new feature that allows users to entirely block AI chatbots from crawling their websites or charge a fee to grant bots access. "The web is being stripmined by AI crawlers with content creators seeing almost no traffic and therefore almost no value," Matthew Prince, co-founder and CEO of Cloudflare, said in a statement on Tuesday. The new feature will give users three options: they can allow AI bots full, free access to their websites; they can block all AI chatbots from their website; or they can charge a "flat, per-request price across their entire site," according to the company's statement. Users can also waive charges for specific AI crawlers if they want. "This is particularly helpful if you want to allow a certain crawler through for free, or if you want to negotiate and execute a content partnership outside the pay per crawl feature," the statement said. The statement refers to this new feature as a "technical shift in how content is controlled online" to help give users "control over who accesses their work." AI training crawl activity has increased by 65% over the past 6 months, the company said in a separate statement on Tuesday. This move could make it more difficult for developers to quickly train AI chatbots, but will let content creators make money from AI chatbot activity. "That content is the fuel that powers AI engines, and so it's only fair that content creators are compensated directly for it," Prince said in his statement. Cloudflare introduced a different feature in July of last year that let users block AI chatbots with a single click. The company said that more than one million users enabled the feature. The company estimates that about 16% of all internet traffic around the world passes through Cloudflare, according to its 2023 Q3 Browser Market Share Report.
[19]
Cloudflare Hits the Kill Switch on AI Crawlers -- And an Entire Industry Cheers - Decrypt
Cloudflare's "pay-per-crawl" marketplace lets publishers set fees for AI companies seeking access, potentially creating a new revenue stream -- and a new internet business model. Cloudflare CEO Matthew Prince just made it official: Starting today, Cloudflare will block by default known AI bots from scraping sites unless given explicit permission -- or unless they pay. It's not just a technical tweak. Cloudflare's "Content Independence Day" marks a collective escalation. More than a million websites had already opted into blocking since last fall, but now it's the default for every new Cloudflare domain. Furthermore, a roster of media heavyweights -- from the Associated Press, Time, The Atlantic, and BuzzFeed to Reddit, Quora, and Universal Music Group -- have signed on. The movement has been well-received on social media, with investors and SEO experts praising the initiative. "This is the way," said said Ed Newton-Rex, former VP of audio at Stability AI; "Cloudflare solving problems again," said pseudonymous trader and content creator Romano RNR. "It'll be OK. A force is needed to tilt the balance back to where it once was," said former Google and current SEO consultant Pedro Dias. Of course, not everyone is convinced. The main argument against such a move is that it could hurt businesses not appearing on AI searches. Or maybe not: Prince pointed out that a decade ago, Google crawled two pages for every one visit it sent to a publisher. Given that the number of people using the internet has skyrocketed to include most of the planet, publishers should benefit. But they haven't, since in order to be "seen" on Google, publishers need to make their content easier for the search giant to summarize -- the better to keep them within Google Search. So today, for every "18 pages that Google takes from you, you get one visitor," Prince said. But the numbers for AI companies are exponentially more devastating. Said Prince: "What's the ratio for OpenAI six months ago? 250 to 1. What is it today? 1,500 to 1." And OpenAI is hardly the worst offender. Anthropic was "6,000 to 1 six months ago. What is it today? 60,0000" to 1, he said. "If the internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone -- creators, consumers, tomorrow's AI founders, and the future of the web itself," Prince argued. "I'm not going to sugarcoat it. If people don't have incentive to create content, they're not going to create content" he said separately. "We have to give that back or the internet dies," Prince said. To pressure AI companies into putting up or getting shut out, Cloudflare is pioneering a new approach to AI content access: a permissioned, pay-to-crawl marketplace. For the first time, publishers can set their own rates for AI bots -- invoking the long-dormant HTTP 402 "payment required" code as a digital tollbooth. Publishers can allow free access, charge for each crawl, or block them entirely. The company says this is only available for leading content creators in early access, but the plan is to scale the system. The "pay-per-crawl" initiative arrives as some of the web's biggest content providers seek leverage over AI companies, after years of having content ingested for free and monetized elsewhere. "When AI companies can no longer take anything they want for free, it opens the door to sustainable innovation built on permission and partnership," said Condé Nast CEO Roger Lynch. 
This coalition-building, Prince believes, is essential. "You cannot have a good sort of market without scarcity, which is why this needs to be a collective effort," he said. He cautioned that licensing deals made today, if not coupled with real blocking, would be worse in the future. Research confirms that the movement is more than symbolic. According to studies by Originality.AI and the Reuters Institute, 48% of the top global websites are already blocking AI crawlers. Last year, Cloudflare implemented an option to make it easy to block AI crawlers, and more than a million customers have exercised that option, the company said. Now, new domains signing up with Cloudflare are asked upfront whether they want to allow AI crawlers, with blocking as the default. Existing customers can change their settings anytime.
[20]
Cloudflare will block AI web crawlers by default, and introduces new Pay Per Crawl scheme that means AI companies will have to fork out for the privilege to scrape
As time goes on, just saying 'no' to AI feels more and more futile. However, Cloudflare is announcing a few more tools for your anti-AI arsenal that put some of the power back in your hands. Cloudflare offers a range of network services from domain registration and content delivery to DDoS mitigation. Adding to that quiver, Cloudflare is launching the sharp and pointy Pay Per Crawl scheme, which aims to hit AI companies scraping online content where it hurts -- namely, their deep pockets. Last year, Cloudflare began allowing customers to simply block AI bots from their websites with one click. This AI Audit tool also allows users to pick and choose which bots they want to block, as well as offering insights into which LLM or gen-AI is regurgitating their content. Now, Cloudflare has announced that AI crawlers will be blocked by default, describing it as "the first step toward a more sustainable future for both content creators and AI innovators." Users who want to have their content scraped for free can still turn this default setting off, but Cloudflare's second announcement seemingly asks the question, 'Why would you want to?' Pay Per Crawl, a scheme announced last year that allows users to charge AI companies wanting to scrape their content, is now in beta. To date, Cloudflare has not revealed the names of any Pay Per Crawl participants aside from ProRata, the startup behind AI search engine Gist.AI. Founder and CEO Bill Gross offered comment as part of Cloudflare's announcement regarding AI crawlers, saying, "Our whole mission is to protect and elevate human creativity in the AI era. That's why we're proud to be one of the first AI companies to participate in Cloudflare's initiative to create a new permission-based model for the Internet. We believe that creators and publishers deserve to be fairly compensated for the value they bring, and we're thrilled to support Cloudflare in helping make that vision a reality." It doesn't seem unlikely that bigger AI companies would go for a scheme like this. After all, OpenAI has already struck content partnerships with a number of media companies, including PC Gamer's own parent company, Future. Pay Per Crawl feels like an attempt to level the playing field, giving smaller content creators a leg to stand on, who would otherwise simply not have an entire team to dedicate to working out the terms of a content partnership. But let's reverse for a second, especially because, as Wired notes, not all web crawlers are bad. For instance, the Internet Archive's preservation work in particular relies heavily on them. Last year, the Pew Research Center found that a quarter of all webpages that existed between 2013 and 2023 are now no longer accessible. Web crawlers are one way to preserve pages like that before they disappear in a puff of broken links and long-forgotten hosting services. AI crawlers, however, are a bit of a menace as they scrape without permission from the owner of the content, and can end up overly taxing servers by behaving not unlike a DDoS attack. As previously mentioned, Cloudflare is no stranger to DDoS attacks, telling our Nick last month that their network blocks about '225,000,000,000 attacks per day.' The company also shared that its network recently mitigated the largest attack ever recorded, battering a single IP address with 37.5 TB of data over the course of 45 seconds. As such, Cloudflare's AI blocking tools leverage a mix of machine learning and behavioural analysis to differentiate between these two types of web crawlers.
Cloudflare has also found some success fingerprinting naughty bots by punting AI web crawlers down a rabbit warren of AI-generated pages. Now that's what I call fighting fire with fire...or a slop-flinging fight. At any rate, would it be naive to feel just a little optimistic about Cloudflare's push for a 'Permission-Based Model for the Internet'? Time will tell.
[21]
AI game changer? Cloudflare lets websites charge AI developers for content access - SiliconANGLE
Cloudflare Inc. today debuted a tool that enables website operators to charge artificial intelligence developers for access to their content. The new pay per crawl feature, as it's called, is in private beta. Cloudflare operates a popular content delivery network, or CDN, that website operators use to speed up page loading times. The company also provides related services that ease tasks such as blocking cyberattacks. Cloudflare estimates that its infrastructure processes about 20% of the world's web traffic. After enabling the new pay per crawl feature, website operators can define a flat fee that will be charged to AI bots for every request. If necessary, different settings can be applied to specific crawlers. "This is particularly helpful if you want to allow a certain crawler through for free or if you want to negotiate and execute a content partnership outside the pay per crawl feature," Cloudflare executives Will Allen and Simon Newton wrote in a blog post. Websites exchange data with visitors' browsers in the form of HTTP messages. When a user visits a webpage, the user's browser sends an HTTP message that asks to download the contents of the page. The website then responds with HTTP messages of its own that contain the requested files. When an AI crawler visits websites connected to Cloudflare's pay per crawl tool, it receives an HTTP message indicating that payment is required for access. The crawler can optionally send an HTTP message that broadcasts the maximum price its developer is willing to pay. Access is granted if the website operator's pricing is accepted. Cloudflare has equipped its pay per crawl feature with technology designed to block malicious activity. In particular, it prevents hackers from gaining access to a website by impersonating an AI crawler whose developer paid for content. Cloudflare will mitigate such spoofing campaigns using public key cryptography. With public key cryptography, an AI crawler signs each request with a private key that only it holds, while anyone with the matching public key can check that signature. To verify an AI crawl request, the signature attached to the request is checked against the crawler's registered public key; a spoofed crawler, lacking the private key, cannot produce a valid signature. The pay per crawl feature uses a digital signature algorithm called Ed25519. It signs data using mathematical structures known as twisted Edwards curves that were introduced in 2008. Besides Cloudflare's AI crawler verification mechanism, the algorithm is also used in the popular SSH protocol for logging into servers and other cybersecurity tools. "Given that Cloudflare protects the majority of the world's most popular websites, as well as millions of smaller websites that publish academic and scientific content, this security feature will elegantly prevent data-greedy bots from unwarrantedly scraping human-created content without permission and without paying for it," said Ilia Kolochenko, the Chief Executive Officer of cybersecurity company ImmuniWeb.
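Because Ed25519 is a digital signature scheme rather than an encryption scheme, the verification step looks roughly like the following sketch, using Python's cryptography package. The message being signed here is a placeholder, not Cloudflare's actual signature base, and the key handling is simplified for illustration.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Crawler side: sign the bytes describing the request with a private key only it holds.
private_key = Ed25519PrivateKey.generate()
message = b"GET example.com/article user-agent=ExampleBot/1.0"  # illustrative signature base
signature = private_key.sign(message)

# Verifier side (the network edge): only the crawler's published public key is needed.
public_key = private_key.public_key()
try:
    public_key.verify(signature, message)
    print("Valid signature: the request came from the registered crawler")
except InvalidSignature:
    print("Invalid signature: likely a spoofed crawler")
```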
[22]
Cloudflare gives creators more control over AI crawlers
The company wants to give users a new option when it comes to AI crawlers: monetisation. Cybersecurity and network services provider Cloudflare has announced an update that will give content creators, publishers and website operators more control over who can access their work via AI crawlers. AI crawlers, also known as bots or AI scrapers, are software programmes designed to gather information from websites, typically with the goal of training large language models (LLMs) and generating human-like, more accurate responses. On its website, Cloudflare stated, "If a creator wants to block all AI crawlers from their content, they should be able to do so. If a creator wants to allow some or all AI crawlers full access to their content for free, they should be able to do that, too. Creators should be in the driver's seat." However, the organisation believes there is a third option. Publishers, creators and website owners should have the power to grant AI crawlers access to their platforms in exchange for fair compensation, which can be a difficult process that requires striking the right deal and is often only available to platforms with the scale and leverage to negotiate. As a result, Cloudflare is introducing the experimental 'pay per crawl' system, which will integrate with existing web infrastructure, leveraging HTTP status codes and established authentication mechanisms to create a framework for paid content access. Currently it is in the private beta phase. "Each time an AI crawler requests content, they either present payment intent via request headers for successful access (HTTP response code 200), or receive a 402 Payment Required response with pricing. Cloudflare acts as the Merchant of Record for pay per crawl and also provides the underlying technical infrastructure." Site owners will have the power to control their monetisation strategy and can select three options: allow, which gives complete and free access; charge, which will require payment at the set price; and block, which denies access entirely. "At its core, pay per crawl begins a technical shift in how content is controlled online. By providing creators with a robust, programmatic mechanism for valuing and controlling their digital assets, we empower them to continue creating the rich, diverse content that makes the internet invaluable. "We expect pay per crawl to evolve significantly. It's very early, we believe many different types of interactions and marketplaces can and should develop simultaneously. We are excited to support these various efforts and open standards." Cloudflare was recently a victim of the major global outage affecting a range of organisations across Europe, the US and Asia, such as Google, OpenAI and Spotify.
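As a rough illustration of the allow/charge/block choice described above, the sketch below models how a per-crawler policy might be evaluated for an already-authenticated crawler. The crawler names, the flat price, and the status code used for the block case are assumptions for illustration, not Cloudflare's actual configuration schema or behaviour.

```python
from typing import Optional

# Illustrative per-crawler policies: allow for free, charge a flat per-request price, or block.
POLICIES = {
    "search-bot-example": {"action": "allow"},
    "ai-training-bot-example": {"action": "charge", "price_usd": 0.01},
    "unwanted-bot-example": {"action": "block"},
}

def decide(crawler: str, max_price_usd: Optional[float]) -> tuple[int, dict]:
    """Return an (HTTP status, response headers) pair for an authenticated crawler."""
    policy = POLICIES.get(crawler, {"action": "block"})  # unknown crawlers blocked by default
    if policy["action"] == "allow":
        return 200, {}
    if policy["action"] == "block":
        return 403, {}
    # "charge": serve the content only if the crawler's declared budget covers the price.
    price = policy["price_usd"]
    if max_price_usd is not None and max_price_usd >= price:
        return 200, {}  # charge recorded, content served
    return 402, {"crawler-price": f"USD {price}"}  # otherwise quote the price

print(decide("ai-training-bot-example", None))   # (402, {'crawler-price': 'USD 0.01'})
print(decide("ai-training-bot-example", 0.05))   # (200, {})
```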
[23]
Cloudflare Just Became an Enemy of All AI Companies | AIM
"Our goal is to put the power back in the hands of creators, while still helping AI companies innovate." Cloudflare might have just killed the web search functionality of AI chatbots. The company announced that it would start blocking AI crawlers by default, drawing a line in the open web where content is no longer a free fuel for AI. If AI companies want in, they will have to pay up. The announcement reframes the foundational deal that powered the web for decades. For years, websites gave Google content, and in return, Google sent them traffic. Now, generative AI is severing that loop with GEO -- copying without clicks, quoting without proper credit, and more. Cloudflare, which routes traffic for 20% of the internet (as the company claims), says it is time for publishers and AI companies to work together to reward the content that it deserves, and improve the economy of the web. This move won't halt AI, but it might slow its free lunch. And that's precisely the point. "AI-driven web doesn't reward content creators the way that the old search-driven web did," reads the blog post, arguing that the exchange of traffic-for-content no longer holds in a world where tools like ChatGPT and Claude scrape text to generate answers with no attribution or reward. "With OpenAI, it's 750 times harder to get traffic than it was with the Google of old. With Anthropic, it's 30,000 times harder." That isn't a gentle drop-off, it's a cliff. And content creators are falling off it. Cloudflare's new policy flips the default, from passive permission to active protection. Every new domain signing up with the service now gets asked whether they want to allow AI crawlers. The default is "no". Companies like Gannett Media, Condé Nast, Quora, Ziff Davis, and Reddit are backing the initiative, aiming to restore value that AI has quietly eroded. This could also address the trouble caused by AI crawlers. Bots from OpenAI, Anthropic, and Meta are increasingly burdening independent websites by consuming excessive bandwidth and disregarding protocols like robots.txt, resulting in higher bills and degraded server performance. Developers like Gergely Orosz on LinkedIn and X also have raised concerns over this aggressive scraping, with some building tools like Anubis to fight back. Cloudflare seems to be adamant on what it wants to do. The company earlier reported that AI bots now account for more than 50 billion daily requests and have responded with deflection tools, such as AI Labyrinth, to waste bot resources. "If the Internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone - creators, consumers, tomorrow's AI founders, and the future of the web itself," said Matthew Prince, co-founder and CEO of Cloudflare. He added that the goal of Cloudflare is to put the power back in the hands of creators, while still helping AI companies innovate. "This is about safeguarding the future of a free and vibrant Internet with a new model that works for everyone," he added. Even Reddit agrees. "AI companies, search engines, researchers, and anyone else crawling sites have to be who they say they are. And any platform on the web should have a say in who is taking their content for what," said Steve Huffman, co-founder and CEO of Reddit. "The whole ecosystem of creators, platforms, web users and crawlers will be better when crawling is more transparent and controlled, and Cloudflare's efforts are a step in the right direction for everyone." 
While web search features in AI tools offer utility, there is a growing consensus that crawler behaviour must be regulated to protect smaller web operators. Considering this, Cloudflare's new measures look like a necessary step for the web. The real significance of Cloudflare's move isn't just the block, it's the framework it hopes to build next. The company plans to work on a marketplace where the value of content is judged not by page views, but by how much knowledge it adds. It's a step toward rewarding originality, not clickbait. Cloudflare is also working on protocols to help AI crawlers identify themselves, allowing publishers to make nuanced decisions, which could permit AI for search, but not for training. Until now, content scraping has been largely unregulated, masked behind generic user agents and vague intentions. Still, the policy opens up a paradox. AI companies are invited to work with Cloudflare, provided they compensate. This puts the company in a powerful position, which could be beneficial for publishers using Cloudflare, and in a way, could also be controversial for AI companies. Publishers may celebrate the move, but AI developers may see it as a speed bump to innovation. For an industry built on large-scale web scraping, "permission" could become the new latency.
[25]
New Cloudflare tool lets sites charge AI crawlers
Cloudflare announced the launch of "Pay per Crawl," a private beta marketplace enabling website owners to charge AI crawlers for content access, fulfilling a vision articulated by CEO Matthew Prince in 2024. Cloudflare has, over the past year, introduced various tools designed to manage AI crawler interactions for publishers. These include a one-click solution for blocking all AI bots and a dashboard providing visibility into how AI crawlers are accessing their sites. Cloudflare CEO Matthew Prince stated in a 2024 interview that these products were foundational steps toward creating a marketplace where publishers could monetize their content by distributing it to AI companies. Cloudflare launched this marketplace, named "Pay per Crawl," as a private beta experiment. Within this experiment, website owners can establish a specific micropayment rate for each "crawl" of their site by individual AI crawlers. Alternatively, website owners retain the option to permit AI crawlers to scrape their content without charge or to block them entirely. Cloudflare asserts that its tools provide website owners with the ability to discern whether crawlers are accessing their site for AI training data, for inclusion in AI search responses, or for other purposes. Cloudflare additionally announced that new websites configured with Cloudflare will, by default, block all AI crawlers. This modification requires site owners to explicitly grant permission for specific AI crawlers to access their content, a change Cloudflare describes as providing "the default of control" to every new domain. Several prominent publishers, including Conde Nast, TIME, The Associated Press, The Atlantic, ADWEEK, and Fortune, have committed to blocking AI crawlers by default in alignment with Cloudflare's objective of promoting a "permission-based approach to crawling." Cloudflare fights AI scrapers with a maze of useless content Data collected by Cloudflare in June 2025 indicates a disparity in crawling frequency between AI companies and traditional search engines. Cloudflare reported that Google's crawler accessed websites 14 times for every referral it provided. In contrast, OpenAI's crawler accessed websites 17,000 times for each referral, while Anthropic's crawler accessed websites 73,000 times per referral. Cloudflare envisions "Pay per Crawl" potentially evolving into an "agentic paywall." This concept could involve AI agents, acting on behalf of users, being allocated a budget to acquire relevant content. Cloudflare stated in a blog post, "What if an agentic paywall could operate at the network edge, entirely programmatically? Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho -- and then giving that agent a budget to spend to acquire the best and most relevant content." Participation in Cloudflare's experimental marketplace necessitates that both AI companies and publishers maintain Cloudflare accounts. Within these accounts, both parties can establish rates for the purchase and sale of a "crawl" of the publisher's content. Cloudflare functions as an intermediary in these transactions, charging the AI company and subsequently distributing the earnings to the publisher. Ripley Park, a Cloudflare spokesperson, confirmed that "Pay per Crawl" does not currently involve stablecoins or cryptocurrency.
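The intermediary role described above can be pictured as simple ledger bookkeeping: each successful paid crawl is recorded, the AI company is billed for its total, and each publisher is credited what its content earned. The sketch below is purely illustrative under those assumptions; the company names and prices are made up, and it says nothing about Cloudflare's real billing system or any fees it may take.

```python
from collections import defaultdict

# Each event: an AI company was charged this many cents for one crawl of a publisher's page.
crawl_events = [
    {"ai_company": "example-ai", "publisher": "example-news", "price_cents": 1},
    {"ai_company": "example-ai", "publisher": "example-blog", "price_cents": 1},
    {"ai_company": "other-ai",   "publisher": "example-news", "price_cents": 1},
]

charges = defaultdict(int)   # what each AI company owes the intermediary
payouts = defaultdict(int)   # what each publisher is owed

for event in crawl_events:
    charges[event["ai_company"]] += event["price_cents"]
    payouts[event["publisher"]] += event["price_cents"]

print(dict(charges))   # {'example-ai': 2, 'other-ai': 1}
print(dict(payouts))   # {'example-news': 2, 'example-blog': 1}
```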
[26]
Publishers, website owners to block AI 'crawlers' that don't pay
Huge artificial intelligence companies that train their models on data harvested without permission from commercial websites will face a new roadblock, with global internet infrastructure giant Cloudflare making it easier for publishers and other website owners to lock their virtual doors to "web crawlers". Publishers including TIME, Condé Nast, Sky News, Quora, The Atlantic, Fortune, Associated Press, Pinterest and BuzzFeed all put their names to an announcement of the new "permission-based" model of directing internet traffic to websites being offered by Cloudflare, which makes it the default setting to block AI bots without explicit approval.
[27]
Cloudflare declares war on AI companies, first blood was drawn on July 1st
One of the most popular content delivery networks (CDNs) available, Cloudflare, has recently announced it is drawing a line in the sand with AI companies. The new initiative began on July 1 with Cloudflare announcing via a new blog post that it will be blocking AI web crawlers on the websites it hosts by default, preventing AI companies such as OpenAI, Anthropic, and others from scraping the websites' content. For those who are out of the loop, AI companies are continuously gathering new data through web crawlers that essentially act as leeches on websites and, in many instances, slow the sites down through their continuous requests. However, Cloudflare is putting a stop to that, and as of July 1, all AI web crawlers will be blocked by default. This isn't a small change either, as Cloudflare has more than two million customers, which is approximately 20% of the web. Moreover, Cloudflare states it also has the capabilities to detect "shadow" scrapers, which are bots that attempt to evade detection by using various unpopular or sophisticated tactics. So, what happens to AI vendors? AI companies now must request access and clarify their intentions when attempting to crawl a Cloudflare-hosted website. The request is then presented to the website owner, who makes the decision. Notably, Cloudflare isn't just introducing a block for AI crawlers; it is also launching the "Pay Per Crawl" program, which enables website publishers to set their own rates for AI companies to buy the content being published on the website. "Each time an AI crawler requests content, they either present payment intent via request headers for successful access (HTTP response code 200), or receive a 402 Payment Required response with pricing. Cloudflare acts as the Merchant of Record for pay per crawl and also provides the underlying technical infrastructure," writes Cloudflare.
[28]
Cloudflare launches tool to help website owners monetise AI bot crawler access - The Economic Times
Cloudflare has launched a tool that blocks bot crawlers from accessing content without permission or compensation to help websites make money from AI firms trying to access and train on their content, the software company said on Tuesday. The tool allows website owners to choose whether artificial intelligence crawlers can access their material and set a price for access through a "pay per crawl" model, which will help them control how their work is used and compensated, Cloudflare said. With AI crawlers increasingly collecting content without sending visitors to the original source, website owners are looking to develop additional revenue sources as search traffic referrals that once generated advertising revenue decline. The initiative is supported by major publishers including Condé Nast and Associated Press, as well as social media companies such as Reddit and Pinterest. Cloudflare's Chief Strategy Officer Stephanie Cohen said the goal of such tools was to give publishers control over their content, and ensure a sustainable ecosystem for online content creators and AI companies. "The change in traffic patterns has been rapid, and something needed to change," Cohen said in an interview. "This is just the beginning of a new model for the internet." Google, for example, has seen its ratio of crawls to visitors referred back to sites rise to 18:1 from 6:1 just six months ago, according to Cloudflare data, suggesting the search giant is maintaining its crawling but decreasing referrals. The decline could be a result of users finding answers directly within Google's search results, such as AI Overviews. Still, Google's ratio is much higher than other AI companies, such as OpenAI's 1,500:1. For decades, search engines have indexed content on the internet directing users back to websites, an approach that rewards creators for producing quality content. However, AI companies' crawlers have disrupted this model because they harvest material without sending visitors to the original source and aggregate information through chatbots such as ChatGPT, depriving creators of revenue and recognition. Many AI companies are circumventing a common web standard used by publishers to block the scraping of their content for use in AI systems, and argue they have broken no laws in accessing content for free. In response, some publishers, including the New York Times, have sued AI companies for copyright infringement, while others have struck deals to license their content. Reddit, for example, has sued AI startup Anthropic for allegedly scraping Reddit user comments to train its AI chatbot, while inking a content licensing deal with Google.
[29]
Cloudflare introduces default blocking of AI data scrapers - The Economic Times
Cloudflare has introduced a new setting allowing websites to block AI bots from scraping content without permission. The move aims to protect original digital content and addresses growing concerns over AI companies using data without consent. Cloudflare's update impacts how online data is accessed for AI training and development. Cloudflare, a tech company that helps websites secure and manage their internet traffic, said Tuesday that it had rolled out a new permission-based setting that allows customers to automatically block artificial intelligence companies from collecting their digital data, a move that has implications for publishers and the race to build AI. With Cloudflare's new setting, websites can block -- by default -- online bots that scrape their data, requiring the website owner to grant access for a bot to collect the content, the company said. In the past, those whom Cloudflare did not flag as hackers or malicious actors could get through to a website to gather its information. "We're changing the rules of the internet across all of Cloudflare," said Matthew Prince, the CEO of the company, which provides tools that protect websites from cyberattacks and helps them load content more efficiently. "If you're a robot, now you have to go on the toll road in order to get the content of all of these publishers." Cloudflare is making the change to protect original content on the internet, Prince said. If AI companies freely use data from various websites without permission or payment, people will be discouraged from creating new digital content, he said. The company, which says its network of servers handles about 20% of internet traffic, has seen a sharp increase in AI data crawlers on the web. Data for AI systems has become an increasingly contentious issue. OpenAI, Anthropic, Google and other companies building AI systems have amassed reams of information from across the internet to train their AI models. High-quality data is particularly prized because it helps AI models become more proficient in generating accurate answers, videos and images. But website publishers, authors, news organizations and other content creators have accused AI companies of using their material without permission and payment. Last month, Reddit sued Anthropic, saying the startup had unlawfully used the data of its more than 100 million daily users to train its AI systems. In 2023, The New York Times sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.
[30]
Cloudflare Launches Pay Per Crawl to Help Publishers Monetise AI Access
Aims to give control and revenue options to all content publishers. In a significant move to reshape how online content is accessed and monetized in the age of artificial intelligence (AI), Cloudflare has announced the private beta of Pay Per Crawl, a system that allows publishers to charge AI crawlers for accessing their content. According to the cloud platform, traditionally, website owners faced a binary choice: allow AI systems unrestricted access to their content, or block them entirely. Cloudflare's new initiative introduces a third option -- enabling content owners to charge for access, using standard web infrastructure and a revitalized HTTP status code: 402 Payment Required. "We wanted content creators to have control over who accesses their work. If a creator wants to block all AI crawlers from their content, they should be able to do so. If a creator wants to allow some or all AI crawlers full access to their content for free, they should be able to do that, too," Cloudflare shared in a blog post on July 1, 2025. Pay Per Crawl integrates into existing HTTP protocols and uses a combination of cryptographic authentication and structured pricing headers to establish a secure, verifiable mechanism for payment-based content access. Publishers can configure a flat per-request fee, and then define policies to allow, charge, or block specific crawlers. Cloudflare acts as the Merchant of Record for pay-per-crawl, managing transactions and revenue distribution. Crucially, the system prevents crawler spoofing by requiring authenticated requests signed with Ed25519 keys. AI companies must register with Cloudflare and implement signed headers to identify themselves before accessing paid content. According to Cloudflare, the initiative responds to growing concerns from publishers, news organizations, and content creators who want fair compensation when AI models use their content. Current licensing models are fragmented and depend on private negotiations. Cloudflare's model introduces a scalable alternative, allowing even smaller publishers to participate in monetizing their digital assets. Two pricing interaction models are supported. In a reactive flow, crawlers receive a 402 response with the cost and may retry the request with payment. Alternatively, crawlers may proactively include a maximum price they are willing to pay. If the content's price is within that range, the server returns a 200 OK response and processes the payment. Cloudflare believes this mechanism will support the emergence of an agent-driven web economy, where digital assistants and autonomous agents can transact and retrieve information on behalf of users. Future expansions may support dynamic pricing, differentiated licensing for training versus inference, and path-specific content valuation. "By anchoring our first solution on HTTP response code 402, we enable a future where intelligent agents can programmatically negotiate access to digital resources," Cloudflare stated in its announcement. Pay Per Crawl is currently available to select partners in private beta.
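Complementing the proactive flow sketched earlier, here is a minimal sketch of the reactive flow described above, from the crawler's side: probe without payment intent, read the quoted price from the 402 response, then retry while accepting that price. The URL, budget, and the "USD 0.01"-style price format are assumptions for illustration, and the signature headers a real registered crawler would send are omitted.

```python
import requests

URL = "https://example.com/research/paper-123"   # hypothetical paid resource
BUDGET_USD = 0.02                                # the most this crawl is worth to us

def parse_usd(price: str) -> float:
    """Parse an assumed 'USD 0.01'-style price string."""
    currency, amount = price.split()
    if currency != "USD":
        raise ValueError(f"unexpected currency: {currency}")
    return float(amount)

# Reactive flow: the first request carries no payment intent.
probe = requests.get(URL)
if probe.status_code == 402:
    quote = probe.headers.get("crawler-price", "")
    if quote and parse_usd(quote) <= BUDGET_USD:
        # Retry, explicitly accepting the quoted price.
        paid = requests.get(URL, headers={"crawler-exact-price": quote})
        print("retry status:", paid.status_code)  # 200 once the charge is accepted
    else:
        print("price above budget, skipping:", quote)
elif probe.status_code == 200:
    print("free access granted")
```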
[31]
Cloudflare Debuts Bot Blocker to Help 'Internet Survive Age of AI' | PYMNTS.com
The new offering, announced Tuesday (July 1), lets website owners decide if they want artificial intelligence (AI) crawlers to access their content, and determine how AI firms can use it. It also lets site owners set a price for access via a "pay per crawl" model. "For decades, the Internet has operated on a simple exchange: search engines index content and direct users back to original websites, generating traffic and ad revenue for websites of all sizes," the company said in a news release. "This cycle rewards creators that produce quality content with money and a following, while helping users discover new and relevant information." But that model, Cloudflare contended, is broken, with AI crawlers collecting things like words and images to generate answers without sending visitors to the initial source, robbing creators of revenue and the satisfaction of knowing someone is viewing their work. "If the Internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone -- creators, consumers, tomorrow's AI founders, and the future of the web itself," said Matthew Prince, co-founder and CEO of Cloudflare. "Original content is what makes the Internet one of the greatest inventions in the last century, and it's essential that creators continue making it," Prince added. "AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate." Writing about this issue last year, PYMNTS noted the significant financial implications of content scraping, as each company invests heavily in researching, writing and publishing website content. Experts argued that allowing bots to scrape this material freely undermines this work while leading to derivative content that potentially outranks the original on search engines. "Beyond content theft, scraping can have detrimental effects on website performance," that report said. "Unchecked bot activity may overload servers, slow down websites and skew analytics data, potentially increasing operational costs. These consequences underscore the urgency of many content providers implementing robust protective measures." All the same, that report said, experts have been divided on the effectiveness of new anti-scraping tools, with some cautioning that their track record is still unproven, and others more optimistic about their potential. At the time, Cloudflare had just introduced another tool to fight AI-data harvesting, which Pankaj Kumar, CEO of Naxisweb, acknowledged in an interview with PYMNTS. "Its purposeful blockage focuses exclusively on AI bots so that people can still visit the site or search engine robots can continue to crawl it. Search engine optimization (SEO) performance is not compromised, while unauthorized scraping is prevented by selective blocking," Kumar said.
[32]
Cloudflare Rolls Out Permission-Based Web Scraping For AI Bots
Internet infrastructure company Cloudflare has announced a major update to how it handles AI web crawlers. The company will now block AI bots from accessing websites by default, unless website owners explicitly choose to allow them. This change applies to all new domains that sign up with Cloudflare. The move aims to give content creators more control over how their work is used by AI companies. For context, these bots often scrape data, such as articles, images, and other media, from websites without asking for permission or offering compensation. This has raised concerns among publishers and media organisations, who say they are losing traffic and revenue as a result. Cloudflare CEO Matthew Prince said, "If the Internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone - creators, consumers, tomorrow's AI founders, and the future of the web itself." Cloudflare now also lets AI crawlers identify themselves clearly and state whether the data they collect is being used for training, inference, or search. This, the company says, will help website owners make better decisions about who can access their content. The move has been welcomed by several major publishers and media groups, including Condé Nast, Dotdash Meredith, Gannett, TIME, and The Atlantic. "This is a critical step toward creating a fair value exchange on the Internet that protects creators, supports quality journalism and holds AI companies accountable," said Roger Lynch, CEO of Condé Nast. Social media companies like Pinterest and Reddit also backed the move, emphasising that it gives creators clearer control and transparency over the use of their content online. While Cloudflare's new policy does not ban all AI crawling, it shifts the default toward permission-based access, meaning AI companies must now seek approval from websites instead of scraping freely. As AI tools like ChatGPT, Google Gemini, and Meta's LLaMA evolve, they rely heavily on scraping large volumes of data from the internet, often without permission. This includes news articles, books, images, source code, and user-generated content. This practice has triggered a wave of legal challenges. One notable case is Reddit vs. Anthropic, where the former alleges that Anthropic scraped user-generated content to train its Claude chatbot. Similarly, The New York Times has sued OpenAI and Microsoft for using its articles to train AI models. Elsewhere, Getty Images has filed a lawsuit against Stability AI, and several authors, including George R.R. Martin, have taken legal action over the unauthorised use of their works. Publishers argue that data scraping by AI models not only infringes their copyrights but also undercuts their business models. When AI models generate content summaries or direct answers, users often bypass the original sources entirely, reducing traffic and ad revenue for content creators. Therefore, Cloudflare's move to block AI bots by default is a major shift in giving power back to website owners. It challenges the current free-for-all approach and pushes for a consent-based system where AI companies must disclose who they are and how they intend to use the data. And as the line between fair use and exploitation continues to blur, such measures are essential to protect the economic and creative interests of publishers, platforms, and independent creators worldwide. Notably, this is not the first time Cloudflare has addressed this issue.
In September last year, the company introduced a set of tools under the name AI Audit. These tools allow website owners to track when, how often, and for what purpose AI bots visit their sites. Website owners can also see whether the bots provide proper attribution or not. Cloudflare argued that such tools can help publishers prepare for licensing deals with AI firms, which are becoming more common. For example, companies like OpenAI have signed agreements with major publishers including Time, The Atlantic, and News Corp. Cloudflare also pointed out that while some large publishers have the resources to negotiate these deals, smaller websites may not. It believes its tools can make it easier for both publishers and AI developers to understand the value of content and streamline the negotiation process. In March this year, Cloudflare introduced another approach to deal with unauthorised scraping, called AI Labyrinth. Instead of simply blocking bots - which often alerts them and causes them to change tactics - Cloudflare now misleads them by serving AI-generated web pages that appear real but contain irrelevant information. The company said this technique wastes the crawler's time and resources while helping identify and track bad bots more effectively.
[33]
Cloudflare launches tool to help website owners monetize AI bot crawler access
NEW YORK (Reuters) - Cloudflare has launched a tool that blocks bot crawlers from accessing content without permission or compensation to help websites make money from AI firms trying to access and train on their content, the software company said on Tuesday. The tool allows website owners to choose whether artificial intelligence crawlers can access their material and set a price for access through a "pay per crawl" model, which will help them control how their work is used and compensated, Cloudflare said. With AI crawlers increasingly collecting content without sending visitors to the original source, website owners are looking to develop additional revenue sources as search traffic referrals that once generated advertising revenue decline. The initiative is supported by major publishers including Condé Nast and Associated Press, as well as social media companies such as Reddit and Pinterest. Cloudflare's Chief Strategy Officer Stephanie Cohen said the goal of such tools was to give publishers control over their content, and ensure a sustainable ecosystem for online content creators and AI companies. "The change in traffic patterns has been rapid, and something needed to change," Cohen said in an interview. "This is just the beginning of a new model for the internet." Google, for example, has seen its ratio of crawls to visitors referred back to sites rise to 18:1 from 6:1 just six months ago, according to Cloudflare data, suggesting the search giant is maintaining its crawling but decreasing referrals. The decline could be a result of users finding answers directly within Google's search results, such as AI Overviews. Still, Google's ratio is far lower than those of AI companies such as OpenAI, whose ratio stands at 1,500:1. For decades, search engines have indexed content on the internet directing users back to websites, an approach that rewards creators for producing quality content. However, AI companies' crawlers have disrupted this model because they harvest material without sending visitors to the original source and aggregate information through chatbots such as ChatGPT, depriving creators of revenue and recognition. Many AI companies are circumventing a common web standard used by publishers to block the scraping of their content for use in AI systems, and argue they have broken no laws in accessing content for free. In response, some publishers, including the New York Times, have sued AI companies for copyright infringement, while others have struck deals to license their content. Reddit, for example, has sued AI startup Anthropic for allegedly scraping Reddit user comments to train its AI chatbot, while inking a content licensing deal with Google. (Reporting by Krystal Hu in New York; Editing by Kate Mayberry)
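For readers who want the arithmetic behind these figures: a crawl-to-referral ratio is simply the number of pages a bot fetched divided by the number of visitors it referred back over the same period. A minimal sketch with made-up log counts, chosen only to echo the magnitudes quoted above:

```python
# Back-of-the-envelope illustration of the crawl-to-referral ratios quoted
# above. The log counts are invented; only the arithmetic is the point.
def crawl_to_referral_ratio(crawls: int, referrals: int) -> float:
    """How many times a bot fetched pages for every visitor it sent back."""
    return crawls / referrals

# Hypothetical monthly numbers for one site:
observed = {
    "search engine": (180_000, 10_000),   # ~18 crawls per referred visitor
    "ai assistant":  (1_500_000, 1_000),  # ~1,500 crawls per referred visitor
}

for name, (crawls, referrals) in observed.items():
    print(f"{name}: {crawl_to_referral_ratio(crawls, referrals):,.0f}:1")
```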
[34]
Cloudflare, Inc. Just Changed How AI Crawlers Scrape the Internet at Large; Permission-Based Approach Makes Way for a New Business Model
Cloudflare, Inc. announced it is now the first Internet infrastructure provider to block, by default, AI crawlers from accessing content without permission or compensation. Starting July 1, 2025, website owners can choose if they want AI crawlers to access their content, and decide how AI companies can use it. For decades, the Internet has operated on a simple exchange: search engines index content and direct users back to original websites, generating traffic and ad revenue for websites of all sizes. This cycle rewards creators that produce quality content with money and a following, while helping users discover new and relevant information. That model is now broken. AI crawlers collect content like text, articles, and images to generate answers, without sending visitors to the original source - depriving content creators of revenue, and the satisfaction of knowing someone is viewing their content. If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk.
[35]
Cloudflare takes a stand: AI crawlers must now ask permission to access web content
In a landmark move that could reshape the relationship between content creators and artificial intelligence companies, Cloudflare has announced a sweeping policy update that effectively locks out AI crawlers unless they explicitly seek permission. The new default, live from July 1, 2025, makes Cloudflare the first major Internet infrastructure provider to enforce a "permission-based model" for AI scraping, placing control back in the hands of web publishers. For years, AI companies have been crawling the web, collecting data to train models or serve up direct answers to users' queries, often without crediting the original sources or compensating them. The result? Content creators lose visibility, traffic, and revenue, while large language models and generative AI tools benefit from the work of others. Cloudflare's new system flips the dynamic. From now on, when a website signs up with Cloudflare, its owner will be asked whether they wish to allow AI crawlers to access their content. The default setting is "no," and companies seeking access must declare who they are, what data they want, and why, whether it's for training, inference, or search. This marks a meaningful departure from the traditional opt-out model where creators had to chase AI bots with robots.txt files or custom firewall rules. At the heart of Cloudflare's argument is the erosion of a long-standing social contract: search engines crawl websites, send users to those sites, and creators are rewarded with traffic, engagement, and ad revenue. But AI flips that model on its head. With tools like chatbots and smart search answers drawing on scraped content to deliver full responses, users often get what they need without ever clicking through. In this new reality, the incentive to create original content begins to vanish. "If the Internet is going to survive the age of AI," said Cloudflare CEO Matthew Prince, "we need to give publishers the control they deserve." His company, which handles a staggering 20% of the world's web traffic, believes that the only sustainable solution is one where creators consent to the use of their content and get something in return. This isn't just theory. According to Cloudflare, over one million websites had already activated its earlier one-click crawler blocking feature introduced in late 2024. The latest update enforces that stance by default, setting a precedent that could ripple through the wider web infrastructure ecosystem. The announcement has attracted widespread support from some of the Internet's most prominent publishers and content-driven platforms. Condé Nast, Gannett, Dotdash Meredith, TIME, and The Atlantic are among the dozens of organisations backing Cloudflare's initiative. Many see it as the first real defence against the mass harvesting of content without attribution or licensing. "This is a critical step toward creating a fair value exchange," said Condé Nast CEO Roger Lynch. Others, like Reddit CEO Steve Huffman and Pinterest's Bill Ready, echoed similar sentiments, praising Cloudflare's efforts to bring transparency and accountability into AI content consumption. Importantly, the initiative isn't just about blocking. Cloudflare is working on standardising protocols to help AI crawlers properly identify themselves and their purpose. This allows web publishers to make informed decisions, and opens the door to licensing discussions where content usage could be monetised, paving the way for healthier AI-content creator ecosystems.
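What "identify themselves and state their purpose" could look like on the wire is easiest to show with a toy request. The header names and the ExampleAIBot crawler below are assumptions for illustration, not a published Cloudflare or industry standard; the point is simply that the bot names its operator and declares whether the fetch is for training, inference, or search, so the site can apply its policy.

```python
# Illustration only: a crawler request that identifies its operator and
# declares its purpose. Header names are hypothetical, not a real standard.
import urllib.error
import urllib.request

req = urllib.request.Request(
    "https://example.com/articles/some-story",  # placeholder URL
    headers={
        "User-Agent": "ExampleAIBot/1.0 (+https://example-ai.test/bot)",
        "X-Crawler-Operator": "Example AI Inc.",   # hypothetical header
        "X-Crawler-Purpose": "training",           # hypothetical: training | inference | search
    },
)
try:
    with urllib.request.urlopen(req) as resp:
        print(resp.status, "access granted")
except urllib.error.HTTPError as err:
    # A site enforcing a permission-based default might refuse with 403
    # when the declared operator or purpose is not on its allow-list.
    print(err.code, "access refused")
```

A site enforcing the permission-based default would consult its policy for the declared operator and purpose before answering with the content or a refusal.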
Cloudflare's new default setting could significantly disrupt the way AI companies source their training and inference data. By making crawler blocking the standard, the company has erected a high wall around a large chunk of the open web, one that scrapers can no longer quietly slip through. This marks a turning point for publishers, who now gain meaningful leverage to negotiate licensing terms, potentially through Cloudflare's emerging Pay Per Crawl programme or alternative arrangements. "This could dramatically change the power dynamic," said Nicholas Thompson, CEO of The Atlantic. "Up to this point, AI companies have not needed to pay to license content, because they've known that they can just take it without consequences. Now they'll have to negotiate, and it will become a competitive advantage for the AI companies that can strike more and better deals with more and better publishers." ProRata, an AI startup behind the Gist.AI search engine, is among the first to publicly commit to participating in Pay Per Crawl. Founder Bill Gross said the company believes "all content creators and publishers should be compensated when their content is used in AI answers." However, while some startups are engaging with the new framework, it's still unclear whether industry heavyweights like OpenAI or Anthropic will formally join. OpenAI has made licensing deals with major publishers but details remain vague, and it's not confirmed whether these agreements permit bot-based crawling under Cloudflare's new regime. That said, blocking alone won't eliminate unauthorised scraping. A parallel shadow ecosystem of tutorials and tools aimed at bypassing Cloudflare's defences already exists and will likely continue to evolve. Cloudflare admits as much, but stresses that the block is a default rather than a lock-in: websites that wish to allow unfettered scraping are free to disable it at any time. The point, as Cloudflare sees it, is to reset the baseline: access to content should begin with consent, not assumption. With this policy, Cloudflare steps beyond its traditional role as a network and security provider, taking on a more influential, and possibly controversial, position as a gatekeeper between AI companies and the open web. By mediating access to a fifth of the Internet, Cloudflare has effectively positioned itself as a power-broker in the emerging AI-content economy. It's a role not dissimilar to what Apple plays in the mobile ecosystem: a platform arbiter setting the terms for what is permissible. Website owners will look to Cloudflare not just for performance and protection, but as a tool for asserting rights over their digital assets. And AI companies, big and small, may have no choice but to engage with Cloudflare's framework if they want access to the most trusted and high-quality data on the web. This new gatekeeping role could give Cloudflare considerable leverage in negotiations around AI data access and licensing. It may even open the door to an AI-focused content marketplace where publishers and AI developers engage under structured, transparent, and monetised terms. The question, of course, is how long this model can remain voluntary before governments step in with legislation. What Cloudflare has initiated can be seen as a reframing of Internet norms. By making permission the default, not the exception, it challenges the idea that everything on the web is up for grabs. It shifts power away from AI giants and toward the individuals, creators, and organisations that make the Internet what it is.
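One plausible shape for a pay-per-crawl exchange is the long-dormant HTTP 402 "Payment Required" status code, which is a natural fit for quoting a price at request time. The sketch below is an illustration under that assumption; the header names, price, and trivial token check are hypothetical, not the production protocol.

```python
# Illustrative sketch of a pay-per-crawl style exchange. The header names and
# the "payment" check are assumptions for illustration, not Cloudflare's
# production protocol; HTTP 402 is used here simply because it fits the idea.
import http.server

PRICE_PER_CRAWL = "0.01"              # hypothetical per-request price set by the site owner
PAID_TOKENS = {"demo-prepaid-token"}  # stand-in for real billing and settlement

class PayPerCrawlHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("X-Crawl-Payment-Token")   # hypothetical header
        if token in PAID_TOKENS:
            body = b"<html><body>Licensed article text.</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("X-Crawl-Charged", PRICE_PER_CRAWL)  # hypothetical
            self.end_headers()
            self.wfile.write(body)
        else:
            # No payment offered: refuse, but quote the price so the crawler
            # (or its operator) can decide whether to pay and retry.
            self.send_response(402)
            self.send_header("X-Crawl-Price", PRICE_PER_CRAWL)    # hypothetical
            self.end_headers()

if __name__ == "__main__":
    http.server.HTTPServer(("127.0.0.1", 8081), PayPerCrawlHandler).serve_forever()
```

In a real deployment the price quote, billing, and settlement would sit with the infrastructure provider rather than the origin server; the sketch only shows the request-and-refusal shape of such a negotiation.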
If widely adopted, this permission-based approach could drive AI firms to rethink their business models. It might encourage more formal partnerships with publishers, or even prompt the rise of licensable content datasets built through consent. More broadly, it could inject fresh momentum into efforts to monetise quality content in a digital ecosystem long dominated by scale and surveillance. As AI continues to reshape how we discover and consume information, Cloudflare's move is likely to be a defining moment. It won't solve every problem overnight but it sets a precedent. And in doing so, it gives the web a fighting chance to stay free, fair, and worth writing for.
Cloudflare introduces a marketplace allowing websites to charge AI bots for content scraping, potentially reshaping the relationship between content creators and AI companies.
Cloudflare, a leading cloud infrastructure provider serving 20% of the web, has introduced a groundbreaking marketplace called Pay-Per-Crawl. This new system allows website owners to charge AI companies for scraping their content, potentially reshaping the relationship between content creators and AI firms 1.
The Pay-Per-Crawl feature, currently in private beta, enables publishers to set their own prices for AI bots to access their content. Participating publishers include AdWeek, The Associated Press, The Atlantic, BuzzFeed, Fortune, Gannett, and Condé Nast 2.
Matthew Prince, CEO of Cloudflare, emphasized the importance of this initiative:
"Original content is what makes the Internet one of the greatest inventions in the last century, and it's essential that creators continue making it. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate." 2
In a significant move, Cloudflare announced that all new customers, including those on free plans, will have their domains set to block all known AI crawlers by default 2. This shift towards a permission-based model for AI scraping marks a departure from the previous opt-out approach.
Cloudflare's data reveals a stark contrast between traditional search engine crawlers and AI bots. While Google's crawler scraped websites 14 times for every referral, OpenAI's crawler scraped 17,000 times and Anthropic's 73,000 times for each referral 1. This disparity highlights the changing dynamics of web content usage in the AI era.
Several large publishers, including Condé Nast, TIME, and The Associated Press, have expressed support for Cloudflare's initiative. Stack Overflow CEO Prashanth Chandrasekar stated:
"Community platforms that fuel LLMs should be compensated for their contributions so they can invest back in their communities." 4
While Cloudflare's marketplace presents a bold vision for the future, its success depends on widespread adoption by both publishers and AI companies, and several practical challenges remain to be worked out.
Cloudflare envisions a future where intelligent agents can programmatically negotiate access to digital resources. This could lead to more sophisticated research tools and personalized content acquisition 2.
As the AI landscape evolves, Cloudflare's Pay-Per-Crawl system may play a crucial role in balancing the interests of content creators and AI companies, potentially providing a sustainable model for content monetization in the AI era.
Summarized by Navi
[2]
[3]
[4]
MIT Technology Review | Cloudflare will now, by default, block AI bots from crawling its clients' websites