7 Sources
[1]
Enterprises prefer Anthropic's AI models over anyone else's, including OpenAI's | TechCrunch
AI research lab Anthropic's AI models are now the top choice for enterprises, surpassing OpenAI. Anthropic now holds 32% of the enterprise large language model market share by usage, according to a report from Menlo Ventures released on Thursday. OpenAI holds the second-largest market share by usage among enterprises, with 25%. The figure marks a strong reversal from even just a couple of years ago. Since 2023, OpenAI has seen its market share among enterprises decline sharply, according to the report, as Anthropic's has steadily risen over the same timeframe. OpenAI held 50% of the enterprise market share by usage just two years ago while Anthropic had 12%. Google has seen enterprise usage for its models increase over the last few years as well. Anthropic has an even larger market share when it comes to coding, with 42% of the enterprise market share, the largest by a wide margin. When it comes to coding, enterprise usage of Anthropic's AI models is more than double OpenAI's, which garnered 21% of the overall market share. Anthropic's release of its Claude 3.5 Sonnet model in June 2024 is what laid the foundation for the company's surge in usage, according to the report. The release of Claude 3.7 Sonnet in February 2025 only accelerated that momentum. The findings from Menlo Ventures align with anecdotal chatter in the industry, which suggested that enterprise and startup developers preferred Claude over OpenAI's ChatGPT. Meanwhile, OpenAI has a strong foothold on the consumer side of the house. The company reported last week that its users send more than 2.5 billion prompts to ChatGPT a day. The Menlo Ventures report found enterprises prefer closed models, which Anthropic and OpenAI use. More than half of enterprises replied that they don't use open source models at all. Only 13% of enterprise daily workloads use open source models as of mid-year 2025, down from 19% at the beginning of the year. Meta still maintains dominance in the open source market.
[2]
Enterprises prefer Anthropic's AI models over anyone else's, including OpenAI | TechCrunch
AI research lab Anthropic's AI models are now the top choice for enterprises, surpassing OpenAI. Anthropic now holds 32% of the enterprise LLM market share by usage, according to a report from Menlo Ventures released on Thursday. OpenAI holds the second-largest market share by usage among enterprises with 25%. The figure marks a strong reversal from even just a couple of years ago. Since 2023, OpenAI has seen its market share among enterprises decline sharply, according to the report, as Anthropic's has steadily risen over the same timeframe. OpenAI held 50% of the enterprise market share by usage just two years ago while Anthropic had 12%. Google has seen enterprise usage for its models increase over the last few years as well. Anthropic has an even larger market share when it comes to coding with 42% of the enterprise market share, the largest by a wide margin. When it comes to coding, enterprise usage of Anthropic's AI models is more than double OpenAI's, which garnered 21% of the overall market share. Anthropic's release of its Claude Sonnet 3.5 model in June 2024 is what laid the foundation for the company's surge in usage, according to the report. The release of Claude Sonnet 3.7 in February 2025 only accelerated that momentum. The findings from Menlo Ventures align with anecdotal chatter in the industry, which suggested that enterprise and startup developers preferred Claude over OpenAI's ChatGPT. Meanwhile, OpenAI has a strong foothold on the consumer side of the house. The company reported last week that its users send more than 2.5 billion prompts to ChatGPT a day. The Menlo Ventures report found enterprises prefer closed models, which Anthropic and OpenAI use. More than half of enterprises replied that they don't use open source models at all. Only 13% of enterprise daily workloads use open source models as of mid-year 2025, down from 19% at the beginning of the year. Meta still maintains dominance in the open source market.
[3]
Anthropic beats OpenAI as the top LLM provider for business - and it's not even close
Open-source AI is lagging behind its proprietary competitors. If you were to ask J. Random User on the street what the most popular business AI Large Language Model (LLM) is, I bet you they'd say OpenAI's ChatGPT. As of mid-2025, however, Anthropic is the leading enterprise LLM provider, with 32% of enterprise usage, according to Menlo Ventures, an early-stage venture capital firm. Before you get too excited, though, keep in mind that Menlo Ventures is a major Anthropic investor. The firm has backed the company through several significant funding rounds, including leading its Series D round and participating in its $3.5 billion Series E, which valued Anthropic at $61.5 billion. In other words, Menlo Ventures has billions of reasons to praise Anthropic. That said, others also view Anthropic as the top enterprise AI company. As AI Magazine put it, "Anthropic has established itself as the premier enterprise AI company through its Claude family of LLMs, achieving remarkable 1,000% year-over-year growth to reach $3 billion in annual recurring revenue." Even by hyper-aggressive AI standards, that's real growth. Behind Anthropic, you'll find OpenAI, which now has 25%; Google with 20%; and Meta Llama with 9%. All the way in the back, with a mere 1%, you'll find DeepSeek, followed by the rest of the pack. Menlo Ventures credits Anthropic's rapid ascent to the strong performance of its Claude Sonnet and Claude Opus models. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) These numbers reflect the proportion of production AI use, not spending. They were derived from a survey of 150 technical decision-makers at enterprises and startups building AI applications in the summer of 2025. Three different factors are driving Anthropic's rise.
The first is what Menlo Ventures calls "AI's first killer app": code generation. While AI-created code quality remains questionable, more developers are using AI programming tools than ever, and Claude has become programmers' top choice with 42% of the market share. That's double OpenAI's 21% share. There are concrete examples of Anthropic development programs gaining popularity. For instance, in just one year, Claude helped transform GitHub Copilot into a $1.9-billion ecosystem. Claude Sonnet 3.5's 2024 release showed how LLM breakthroughs can make possible entirely new categories: AI IDEs (Cursor, Windsurf), vibe app builders (Lovable, Bolt, Replit), and enterprise coding agents (Claude Code, All Hands). Another reason Anthropic is winning is its use of reinforcement learning with verifiable rewards (RLVR) to train its LLMs. Behind that complicated name lies a simple concept: You provide clear, binary feedback (1 for correct, 0 for incorrect) on the model's output. This works well for programming AI tools, where the code either works or doesn't. Anthropic has also led the way to LLMs that take step-by-step approaches to solving problems and use external tools to pull in data to deliver better answers. In short, Anthropic has been a leader in creating AI agents. Besides helping people and programmers, this approach can help LLMs iteratively improve their responses and integrate tools like search, calculators, coding environments, and other resources via the Model Context Protocol (MCP). This new open-source protocol enables LLMs and AI agents to seamlessly connect with the vast, ever-changing landscape of real-world data, tools, and services. That's important because Menlo Ventures also found that it's not price that drives companies to change LLMs; it's performance.
"This creates an unexpected market dynamic: Even as individual models drop 10x in price, builders don't capture savings by using older models; they just move en masse to the best-performing one." This dynamic may change once LLMs start to mature and models begin to reach similar performance levels. For now, though, as LLMs improve massively from one release to another, companies are willing to pay for the newest and fastest. The study also found that companies are steadily shifting from building and training models to inference, that is, to models actually running in production. Startups are leading the way, with 74% of builders now stating that most of their workloads are in production. Large enterprises aren't far behind, with 49% reporting that most or nearly all of their compute is in production. In short, enterprises are now using AI, not merely building AI. Finally, the researchers said that open-source LLMs have declined to 13% of AI workloads today from 19% six months ago. The market leader remains Llama, although Llama isn't really open source. Nevertheless, more open-source LLMs have been appearing. These include new models from DeepSeek (V3, R1), Bytedance Seed (Doubao), Minimax (Text 1), Alibaba (Qwen 3), Moonshot AI (Kimi K2), and Z AI (GLM 4.5) in the last six months. They're just not used much. That's because, despite their advantages ("greater customization, potential cost savings, and the ability to deploy within private cloud or on-premises environments"), their performance has continued to "trail frontier, closed-source models." Add in that many of the best-performing open-source LLMs to date are from Chinese companies that Western businesses are wary of, and open-source LLMs appear to be stalling out. "Predicting the future of AI can be a fool's errand. 
The market changes by the week, with exciting new model launches, advancements in foundation model capabilities, and plunging costs," Menlo Ventures said. Still, "conditions are ripe for a new generation of enduring AI businesses to be built on top of today's foundational building blocks." The question remains, however: "What will those foundational building blocks be?" OpenAI? Google? Meta? Anthropic? Stay tuned. We're not yet close to being able to say which AI models will ultimately end up on top.
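The binary "verifiable reward" idea described above can be made concrete with a small sketch. This is an illustrative toy, not Anthropic's actual training code: the `verifiable_reward` helper and the `solve` naming convention are assumptions, and real RLVR pipelines execute candidate programs in a sandbox at far larger scale.

```python
# Toy sketch of reinforcement learning with verifiable rewards (RLVR)
# for code generation: the reward is binary -- 1 if the candidate
# program passes every test case, 0 otherwise.

def verifiable_reward(candidate_src: str, tests: list) -> int:
    """Return 1 if the candidate code passes all tests, else 0."""
    namespace = {}
    try:
        exec(candidate_src, namespace)  # define the candidate's solve()
        solve = namespace["solve"]
        for args, expected in tests:
            if solve(*args) != expected:
                return 0  # any wrong answer means zero reward
        return 1
    except Exception:
        return 0  # crashes or missing symbols also earn no reward

# Example: score two candidate implementations of absolute value.
tests = [((3,), 3), ((-4,), 4), ((0,), 0)]
good = "def solve(x):\n    return x if x >= 0 else -x"
bad = "def solve(x):\n    return x"  # wrong for negative inputs

print(verifiable_reward(good, tests))  # -> 1
print(verifiable_reward(bad, tests))   # -> 0
```

The appeal for coding tasks is exactly what the article notes: the pass/fail signal is unambiguous, so no human grader is needed in the loop.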
[4]
Anthropic's new Claude 4.1 dominates coding tests days before GPT-5 arrives
Anthropic released an upgraded version of its flagship artificial intelligence model Monday, achieving new performance heights in software engineering tasks as the AI startup races to maintain its dominance in the lucrative coding market ahead of an expected competitive challenge from OpenAI. The new Claude Opus 4.1 model scored 74.5% on SWE-bench Verified, a widely watched benchmark that tests AI systems' ability to solve real-world software engineering problems. The performance surpasses OpenAI's o3 model at 69.1% and Google's Gemini 2.5 Pro at 67.2%, cementing Anthropic's leading position in AI-powered coding assistance. The release comes as Anthropic has achieved spectacular growth, with annual recurring revenue jumping five-fold from $1 billion to $5 billion in just seven months, according to industry data. However, the company's meteoric rise has created a dangerous dependency: nearly half of its $3.1 billion in API revenue stems from just two customers -- coding assistant Cursor and Microsoft's GitHub Copilot -- generating $1.4 billion combined. "This is a very scary position to be in. A single contract change and you're going under," warned Guillaume Leverdier, senior product manager at Logitech, responding to the revenue concentration data on social media. The upgrade represents Anthropic's latest move to fortify its position before OpenAI launches GPT-5, expected to challenge Claude's coding supremacy. Some industry watchers questioned whether the timing suggests urgency rather than readiness. "Opus 4.1 feels like a rushed release to get ahead of GPT-5," wrote Alec Velikanov, comparing the model unfavorably to competitors in user interface tasks. The comment reflects broader industry speculation that Anthropic is accelerating its release schedule to maintain market share. 
How two customers generate nearly half of Anthropic's $3.1 billion API revenue Anthropic's business model has become increasingly centered on software development applications. The company's Claude Code subscription service, priced at $200 monthly compared to $20 for consumer plans, has reached $400 million in annual recurring revenue after doubling in just weeks, demonstrating enormous enterprise appetite for AI coding tools. "Claude Code making 400 million in 5 months with basically no marketing spend is kinda crazy, right?" noted developer Minh Nhat Nguyen, highlighting the organic adoption rate among professional programmers. The coding focus has proven lucrative but risky. While OpenAI dominates consumer and business subscription revenue with broader applications, Anthropic has carved out a commanding position in the developer market. Industry analysis shows that "pretty much every single coding assistant is defaulting to Claude 4 Sonnet," according to Peter Gostev, who tracks AI company revenues. GitHub, which Microsoft acquired for $7.5 billion in 2018, represents a particularly complex relationship for Anthropic. Microsoft owns a significant stake in OpenAI, creating potential conflicts as GitHub Copilot relies heavily on Anthropic's models while Microsoft has competing AI capabilities. "I dunno - one of those is 49% owned by a competitor...so there's that for vulnerability too," observed Siya Mali, business fellow at Perplexity, referencing Microsoft's ownership structure. Claude's enhanced coding abilities come with stricter safety protocols after AI blackmail tests Beyond coding improvements, Opus 4.1 enhanced Claude's research and data analysis capabilities, particularly in detail tracking and autonomous search functions. The model maintains Anthropic's hybrid reasoning approach, combining direct processing with extended thinking capabilities that can utilize up to 64,000 tokens for complex problems. 
However, the model's advancement comes with heightened safety protocols. Anthropic classified Opus 4.1 under its AI Safety Level 3 (ASL-3) framework, the strictest designation the company has applied, requiring enhanced protections against model theft and misuse. Previous testing of Claude 4 models revealed concerning behaviors, including attempts at blackmail when the AI believed it faced shutdown. In controlled scenarios, the model threatened to reveal personal information about engineers to preserve its existence, demonstrating sophisticated but potentially dangerous reasoning capabilities. The safety concerns haven't deterred enterprise adoption. GitHub reports that Claude Opus 4.1 delivers "particularly notable performance gains in multi-file code refactoring," while Rakuten Group praised the model's precision in "pinpointing exact corrections within large codebases without making unnecessary adjustments or introducing bugs." Why OpenAI's GPT-5 poses an existential threat to Anthropic's developer-focused strategy The AI coding market has become a high-stakes battleground worth billions in revenue. Developer productivity tools represent some of the clearest immediate applications for generative AI, with measurable productivity gains justifying premium pricing for enterprise customers. Anthropic's concentrated customer base, while lucrative, creates vulnerability if competitors can lure away major clients. The coding assistant market particularly favors rapid model switching, as developers can easily test new AI systems through simple API changes. "My sense is that Anthropic's growth is extremely dependent on their dominance in coding," Gostev noted. "If GPT-5 challenges that, with e.g. Cursor and GitHub Copilot switching to OpenAI, we might see some reversal in the market." The competitive dynamics may intensify as hardware costs decline and inference optimizations improve, potentially commoditizing AI capabilities over time. 
"Even if there is no model improvement for coding from all AI labs, drop in HW costs and improvement in Inf optimizations alone will result in profits in ~5years," predicted Venkat Raman, an industry analyst. For now, Anthropic maintains its technical edge while expanding Claude Code subscriptions to diversify beyond API dependency. The company's ability to sustain its coding leadership through the next wave of competition from OpenAI, Google, and others will determine whether its rapid growth trajectory continues or faces significant headwinds. The stakes couldn't be higher: whoever controls the AI tools that power software development may ultimately control the pace of technological progress itself. In Silicon Valley's latest winner-take-all battle, Anthropic has built an empire on two customers -- and now must prove it can keep them.
[5]
OpenAI, Anthropic release new reasoning-optimized language models - SiliconANGLE
OpenAI, Anthropic release new reasoning-optimized language models OpenAI and Anthropic PBC today both introduced new language models optimized for reasoning tasks. OpenAI's new algorithms, gpt-oss-120b and gpt-oss-20b, are available under an open-source license. Anthropic, for its part, released an upgraded version of its proprietary Claude Opus 4 large language model. The update improves upon the LLM's coding capabilities, which the company claims already outperformed the competition. OpenAI says that gpt-oss-120b and gpt-oss-20b outperform comparably sized open models across multiple reasoning tasks. The former algorithm features 117 billion parameters, while the latter includes 21 billion. They can both run code, interact with external systems such as databases and optimize the amount of time they spend on a task based on its complexity. "Proprietary API moats shrink; enterprises can now run and refine models in-house," commented Dave Vellante, co-Chief Executive Officer of SiliconANGLE Media and co-founder and Chief Analyst at theCUBE Research. "Differentiation in our view now rises to tools, RL loops, guardrails, and -- most importantly -- data." Running gpt-oss-20b requires a single graphics card with 16 gigabytes of memory. This means that the model is compact enough to run on certain consumer devices. The model is "ideal for on-device use cases, local inference, or rapid iteration without costly infrastructure," OpenAI researchers wrote in a blog post today. The company's other new model, gpt-oss-120b, trades off some hardware efficiency for increased output quality. It can run on a single graphics card with 80 gigabytes of memory. The algorithm provides comparable performance to o4-mini, one of OpenAI's newest and most advanced proprietary reasoning models. Both gpt-oss-120b and gpt-oss-20b are based on a mixture-of-experts architecture. A mixture-of-experts model comprises multiple neural networks that are each optimized for a narrow set of tasks. 
When it receives a prompt, the model activates only the neural network that is best equipped to generate an answer. OpenAI's new models include two performance optimizations: grouped multi-query attention and rotary positional embeddings. The former technology reduces the memory usage of the algorithms' attention mechanism, which they use to interpret user prompts. Rotary positional embeddings, in turn, make language models better at processing lengthy input. Both models support a context window of 128,000 tokens. OpenAI developed gpt-oss-120b and gpt-oss-20b through a multi-step process. First, it trained them on a dataset that mostly comprised English-language text about science and technology topics. OpenAI then carried out two more training runs that used supervised fine-tuning and reinforcement learning, respectively. Supervised fine-tuning is carried out with training datasets that contain annotations explaining their contents. Reinforcement learning, in turn, doesn't use annotations. The latter technique can be more cost-efficient because it reduces the amount of time that developers must spend organizing their datasets. "Irrespective of OpenAI's intentions, open-weight reasoning models democratize frontier model capability but push the value conversation up the stack into enterprise agents, proprietary data, RL feedback efficacy, and business context," Vellante stated. "In our view, enterprises that build a digital-twin capability will program the most valuable agents; everyone else will fight for thinner slices of an ever-cheaper API." Against the backdrop of OpenAI's latest product update, rival Anthropic debuted a new proprietary LLM called Claude Opus 4.1. It's an upgraded version of the company's flagship Claude Opus 4 reasoning model. Anthropic described the latter LLM as the "world's best coding model" when it launched in May. Claude Opus 4 scored 72.5% on SWE-bench Verified, a benchmark for measuring LLMs' coding capabilities. 
The new Claude Opus 4.1 model achieved 74.5%. Additionally, Anthropic has improved the LLM's research and data analysis capabilities. Claude Opus 4.1 is available today in the paid versions of the company's Claude AI assistant, as well as via its application programming interface, Amazon Bedrock and Google Cloud's Vertex AI service. The update is the first in a planned series of enhancements to Anthropic's LLM lineup. The company expects to release the other upgrades, which it describes as "substantially larger," in the coming weeks.
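The mixture-of-experts routing described in this piece can be sketched in a few lines. This is a toy, assumed example: a linear gate scores each expert and only the winner runs. Production MoE layers route learned token embeddings across transformer experts, often to the top k experts rather than a single one.

```python
# Toy top-1 mixture-of-experts (MoE) routing: a gate scores every expert
# for the input, but only the single best-scoring expert is evaluated.
def gate_scores(x, gate_weights):
    # One dot product per expert; a higher score means a better match.
    return [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]

def moe_forward(x, gate_weights, experts):
    scores = gate_scores(x, gate_weights)
    best = scores.index(max(scores))  # top-1 routing decision
    return experts[best](x), best     # only one expert actually runs

# Two tiny "experts": one doubles the input vector, one negates it.
experts = [lambda v: [2 * a for a in v], lambda v: [-a for a in v]]
gate = [[1.0, 0.0], [0.0, 1.0]]  # expert 0 watches dim 0, expert 1 dim 1

out, chosen = moe_forward([2.0, 1.0], gate, experts)
print(chosen, out)  # -> 0 [4.0, 2.0]
```

The efficiency argument is visible even in the toy: compute scales with the one activated expert, not with the total parameter count, which is how a 117-billion-parameter model can stay cheap to run per token.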
[6]
Anthropic leads enterprise LLMs with 32% market share
Menlo Ventures reports Claude holds 42% of developer share and powers a $3B revenue stream, driven by its performance in enterprise applications. Anthropic leads enterprise large language model usage with 32% market share, according to a Menlo Ventures survey of 150 technical decision-makers conducted in summer 2025, driven by superior performance in code generation applications. Menlo Ventures, an early-stage venture capital firm and major Anthropic investor, authored the report analyzing enterprise AI adoption. The firm has invested substantially in Anthropic, leading its Series D funding round and participating in its $3.5 billion Series E financing that valued Anthropic at $61.5 billion. Independent validation of Anthropic's growth trajectory comes from AI Magazine, which reported the company achieved 1,000% year-over-year revenue growth to reach $3 billion in annual recurring revenue. This establishes Anthropic as the premier enterprise AI provider through its Claude model family. The market share distribution shows OpenAI follows Anthropic with 25% of enterprise usage. Google captures 20% while Meta's Llama holds 9%. DeepSeek trails significantly with 1% market penetration. These figures specifically measure production AI implementation rather than spending allocations. Menlo Ventures attributes Anthropic's rapid market expansion to the technical capabilities of its Claude Sonnet and Claude Opus models, which have demonstrated significant performance advantages in enterprise settings. Code generation represents what researchers identified as AI's inaugural "killer app," with Claude becoming programmers' preferred tool. Claude commands 42% market share among developers, doubling OpenAI's 21% adoption rate. Concrete evidence of Claude's programming impact includes its transformation of GitHub Copilot into a $1.9-billion ecosystem within a single year. 
The release of Claude Sonnet 3.5 in 2024 enabled entirely new application categories including AI integrated development environments like Cursor and Windsurf, application builders such as Lovable, Bolt and Replit, and enterprise coding agents including Claude Code and All Hands. Anthropic employs reinforcement learning with verifiable rewards for model training, a methodology using binary feedback where outputs receive scores of 1 for correct responses and 0 for incorrect responses. This approach proves particularly effective for programming applications where code functionality provides clear pass/fail metrics. The company pioneered step-by-step problem-solving architectures where language models utilize external tools to retrieve data and improve output accuracy. This positions Anthropic at the forefront of AI agent development, allowing iterative response refinement through integration of search engines, calculators, coding environments and other resources via the Model Context Protocol, an open-source framework enabling seamless connections between LLMs and real-world data services. Market analysis reveals that performance capabilities, rather than pricing, drive enterprise decisions when switching large language model providers. A Menlo Ventures finding notes: "This creates an unexpected market dynamic: Even as individual models drop 10x in price, builders don't capture savings by using older models; they just move en masse to the best-performing one." The firm observes this behavior pattern persists because newer model generations demonstrate substantially improved capabilities over predecessors. Enterprises prioritize operational advantages despite cost reductions in legacy systems. Enterprise AI implementation has shifted substantially from experimental development to production deployment. Among startups building AI applications, 74% report most workloads now operate in production environments. 
Large enterprises follow closely behind, with 49% indicating most or nearly all computational resources support production AI systems. This transition signals maturation beyond initial model training phases toward practical business application. Open-source large language model adoption has declined to 13% of AI workloads, down from 19% six months prior. Despite remaining the most utilized open-source option, Meta's Llama faces criticism because its licensing terms mean it "isn't really open source," according to the report. Recent open-source releases include new models from DeepSeek (V3, R1), Bytedance Seed (Doubao), Minimax (Text 1), Alibaba (Qwen 3), Moonshot AI (Kimi K2), and Z AI (GLM 4.5). Despite offering customization options, potential cost reductions and deployment flexibility within private cloud or on-premises environments, open-source models collectively demonstrate inferior performance compared to proprietary "frontier models." Additional adoption barriers exist for models developed by Chinese companies like DeepSeek, Bytedance, Minimax, Alibaba, Moonshot AI and Z AI, as Western enterprises express caution regarding their implementation. These factors contribute to stagnating open-source LLM adoption trajectories.
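The Model Context Protocol mentioned above is built on JSON-RPC 2.0: a client asks a server to run a named tool via a `tools/call` request. A minimal sketch of what such a message looks like on the wire follows; the tool name and arguments are hypothetical, and real clients also perform an initialization handshake before calling tools.

```python
import json

# Build a hypothetical MCP "tools/call" request. MCP messages follow
# JSON-RPC 2.0: a method name plus structured params, serialized as JSON.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "web_search",  # hypothetical tool exposed by some MCP server
        "arguments": {"query": "enterprise LLM market share"},
    },
}

wire = json.dumps(request)
print(wire)
```

Because every tool exposes itself through this one message shape, a model can integrate search, calculators, or coding environments without bespoke glue code for each.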
[7]
Anthropic Just Surpassed OpenAI in the $8.4 Billion Enterprise Market, Says New Report
OpenAI might have the highest valuation of the new crop of AI startups, but when it comes to enterprise usage, Anthropic is the new king of the hill, according to a new report from Menlo Ventures. Based on a survey of 150 technical startup and enterprise leaders, Menlo claims that while OpenAI has dominated enterprise spending on large language models (LLMs) for some time, Anthropic is now the market leader, with 32 percent of enterprise usage. OpenAI, which had a 50 percent share of that market through 2023, now controls just 25 percent, says Menlo. (Anthropic, it's worth noting, is a Menlo portfolio company. Menlo has invested in the Series C, D, and E rounds, though the exact amount the company has contributed is not public.) It's a smaller piece of a much bigger (and fast-growing) pie, however. In the past six months, enterprise spending has more than doubled, climbing from $3.5 billion at the start of the year to $8.4 billion now. By the end of the year, Menlo projects that number will be in the neighborhood of $13 billion. Anthropic, which is reportedly in talks to raise another $5 billion -- raising its valuation to $170 billion -- isn't the only AI company to see market share gains. Menlo says Google has increased its market share to 20 percent as Gemini sees improvements. Meta's Llama holds a 9 percent share, while DeepSeek barely pings the radar, making up just 1 percent of usage. "Some might be surprised to see Anthropic overtake OpenAI, given its first-mover advantage," said Tim Tully, partner at Menlo Ventures. "But our research puts real numbers behind what we've heard anecdotally from the market: Teams are prioritizing real performance in production." Despite the increased usage by enterprise customers, challenges remain for Anthropic. 
Last month, the company won a key ruling in an ongoing lawsuit over using books without permission to train its system, but that suit is far from over and it could expose the company to billions of dollars in copyright damages. While Anthropic has unseated OpenAI, according to Menlo's study, the venture capital firm also found that businesses do not lightly hopscotch from one AI company to another. Vendor switching is a rare thing, with just 11 percent of teams reporting a change in their model provider in the past year. Two-thirds of the people surveyed said they had upgraded to newer models from existing providers. The rest made no changes at all. The way enterprise and startup customers use AI is changing, though. A growing number are using their preferred large language model to draw conclusions from data, rather than working to train the AI. Menlo says 74 percent of startups and 49 percent of enterprises said that inference (the term for AI decision making) made up most of their computing usage. Both numbers are up significantly from last November, the firm said. The next evolution, the company said, would likely be the use of AI to autonomously handle multistep, open-ended tasks, such as software development and research synthesis -- a process known as long-horizon agents. While these systems are still early in their development cycle, Menlo said it believes they will be key to the next wave of business AI adoption. "Long-horizon agents represent an operating model shift," said Derek Xiao, an investor at Menlo Ventures. "The startups building agentic infrastructure today are laying the foundation for the next generation of $10 billion-plus platforms. With legacy vendors lagging behind, the opportunity is massive." The growing use of AI by enterprise and startup businesses mirrors the increased use among consumers. A separate Menlo study last month found that 61 percent of Americans now use AI, with just shy of one in five adults using it daily. 
The trust factor in some areas was low in that study, however. Just 16 percent of consumers use the technology to navigate health care (such as dealing with insurance or caregivers), and only 18 percent use AI to manage expenses such as monthly bill payments, budgeting, and expense tracking. As enterprise and small-business trust grows with increased inference usage, though, that could prompt consumers to do the same.
Anthropic's AI models have become the top choice for enterprises, surpassing OpenAI in market share. The company's focus on coding and performance improvements has led to significant growth, particularly in the software development sector.
In a significant shift within the AI industry, Anthropic has emerged as the leading provider of AI models for enterprises, surpassing its rival OpenAI. According to a report from Menlo Ventures, Anthropic now holds 32% of the enterprise large language model market share by usage, while OpenAI's share has declined to 25% [1][2]. This marks a dramatic reversal from just two years ago, when OpenAI dominated with 50% market share compared to Anthropic's 12% [1].
Anthropic's dominance is particularly pronounced in the coding sector, where it commands an impressive 42% of the enterprise market share, more than double OpenAI's 21% [1][3]. This success in the coding domain has been a key driver of Anthropic's growth, with AI-assisted programming emerging as what Menlo Ventures calls "AI's first killer app" [3].
The company's rapid ascent is attributed to the strong performance of its Claude Sonnet and Claude Opus models. The release of Claude 3.5 Sonnet in June 2024 and Claude 3.7 Sonnet in February 2025 laid the foundation for Anthropic's surge in usage [1]. The company's latest release, Claude Opus 4.1, has further solidified its position by achieving a score of 74.5% on the SWE-bench Verified benchmark for coding capabilities [4][5].
Enterprises have shown a clear preference for closed models, with more than half of the surveyed companies reporting that they don't use open-source models at all [1]. This trend has led to a decline in open-source model usage, which dropped from 19% to 13% of enterprise daily workloads in just six months [3].
The market dynamics favor performance over price, with companies willing to pay for the newest and fastest models. As Matt Murphy from Menlo Ventures notes, "Even as individual models drop 10x in price, builders don't capture savings by using older models; they just move en masse to the best-performing one" [3].
Anthropic's success has translated into substantial financial growth. The company has reportedly seen its annual recurring revenue jump from $1 billion to $5 billion in just seven months [4]. However, this rapid growth comes with risks, as nearly half of Anthropic's $3.5 billion in API revenue stems from just two customers: coding assistant Cursor and Microsoft's GitHub Copilot [4].
While Anthropic currently leads the enterprise AI market, competition remains fierce. OpenAI is expected to launch GPT-5, which could challenge Claude's coding supremacy [4]. Google and Meta are also significant players, with Google holding 20% of the enterprise market share and Meta's Llama maintaining dominance in the open-source segment [3].
The shift in enterprise preferences toward Anthropic's models signals a maturing AI market where performance and specialized capabilities, particularly in coding, are becoming key differentiators. As companies increasingly move from building and training models to inference and production use, competition among AI providers is likely to intensify, driving further innovation and specialization in the field [3].
This evolving landscape presents both opportunities and challenges for AI companies and enterprises alike as they navigate rapidly changing AI technologies and their applications in business.