Curated by THEOUTPOST
On Thu, 8 May, 12:03 AM UTC
7 Sources
[1]
Mistral claims its newest AI model delivers leading performance for the price | TechCrunch
French AI startup Mistral is releasing a new AI model, Mistral Medium 3, that's focused on efficiency without compromising performance. Available in Mistral's API priced at $0.40 per million input tokens and $2 per million output tokens, Mistral Medium 3 performs "at or above" 90% of Anthropic's costlier Claude Sonnet 3.7 model on "benchmarks across the board," claims Mistral. It also surpasses recent open models including Meta's Llama 4 Maverick and Cohere's Command A on popular AI performance evaluations. Tokens are the raw bits of data models work with, with a million tokens equivalent to about 750,000 words (roughly 163,000 words longer than "War and Peace"). "Mistral Medium 3 can [...] be deployed on any cloud, including self-hosted environments of four GPUs and above," explained Mistral in a blog post sent to TechCrunch. "On pricing, the model beats cost leaders such as DeepSeek v3, both in API and self-deployed systems." Mistral, founded in 2023, is a frontier model lab aiming to build a range of AI-powered services, including a chatbot platform, Le Chat, and mobile apps. It's backed by VCs including General Catalyst and has raised over €1.1 billion (roughly $1.24 billion) to date. Mistral's customers include BNP Paribas, AXA, and Mirakl. According to Mistral, Mistral Medium 3 is best for coding and STEM tasks, and excels at multimodal understanding. The company says that clients in financial services, energy, and healthcare have been beta testing the model for use cases like customer service, workflow automation, and analyzing complex data sets. In addition to Mistral's API, where enterprise customers can work with Mistral to fine-tune the model, Mistral Medium 3 is available on Amazon's Sagemaker platform starting Wednesday. It'll soon come to other hosts, including Microsoft's Azure AI Foundry and Google's Vertex AI platforms, the company added. The launch of Mistral Medium 3 follows on the heels of Mistral Small 3.1 in March. In its blog post, the company teased the release of a much larger model in the coming weeks.
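For developers who want to weigh the quoted prices against their own workloads, the model is reachable through Mistral's standard chat completions API. The snippet below is a minimal sketch in Python using the requests library; the model identifier "mistral-medium-latest" is an assumption based on Mistral's usual naming scheme and should be checked against the current model list in Mistral's documentation.

```python
import os
import requests

# Minimal sketch: call Mistral's chat completions endpoint.
# Assumes MISTRAL_API_KEY is set in the environment and that
# "mistral-medium-latest" is the identifier exposed for Medium 3 (assumption).
API_URL = "https://api.mistral.ai/v1/chat/completions"

payload = {
    "model": "mistral-medium-latest",  # assumed alias; verify in Mistral's docs
    "messages": [
        {"role": "user", "content": "Summarize the trade-offs of self-hosting an LLM on four GPUs."}
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
data = response.json()

# The generated answer, plus the usage block the per-token prices apply to.
print(data["choices"][0]["message"]["content"])
print(data.get("usage"))
```

The usage field (prompt and completion token counts) is what the quoted $0.40 and $2 per-million-token prices are billed against.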
[2]
Mistral comes out swinging for enterprise AI customers with new Le Chat Enterprise, Medium 3 model
But that may change starting today: The company just unveiled Le Chat Enterprise, a unified AI assistant platform designed for enterprise-scale productivity and privacy, powered by its new Medium 3 model that outperforms larger ones at a fraction of the cost (here, "larger" refers to the number of parameters, or internal model settings, which typically denote more complexity and more powerful capabilities, but also take more compute resources such as GPUs to run).
Le Chat Enterprise is a ChatGPT-like assistant and competitor built from the ground up for data protection, auditing, and cross-application support
Available on the web and via mobile apps, Le Chat Enterprise is like a ChatGPT competitor, but one built specifically for enterprises and their employees, taking into account the fact that they'll likely be working across a suite of different applications and data sources. It's designed to consolidate AI functionality into a single, privacy-first environment that enables deep customization, cross-functional workflows, and rapid deployment. Among its key features of interest to business owners and technical decision makers: Le Chat Enterprise supports seamless integration into existing tools and workflows. Companies can build AI agents tailored to their operations and maintain full sovereignty over deployment and data -- without vendor lock-in. The platform's privacy architecture adheres to strict access controls and supports full audit logging, ensuring data governance for regulated industries. Enterprises also gain full control over the AI stack -- from infrastructure and platform features to model-level customization and user interfaces. Given the general suspicion from some Western companies and governments toward China and the growing library of powerful open source models from companies there, coupled with Mistral's location in the European Union and the tight data protection laws it must follow (the General Data Protection Regulation, aka GDPR, and the EU AI Act), Mistral's new Le Chat Enterprise offering could be appealing to many enterprises with stricter security and data storage policies (especially medium-to-large and legacy businesses). Mistral is also rolling out improvements to its Le Chat Pro and Team plans, targeting individuals and small teams looking for productivity tools backed by its language models. All tiers benefit from the core capabilities introduced in Le Chat Enterprise.
Mistral Medium 3 outperforms GPT-4o and even Claude 3.7 Sonnet on key benchmarks and is available via API and on-prem
Mistral Medium 3 introduces a new performance tier in the company's model lineup, positioned between lightweight and large-scale models. It is a proprietary model, meaning that unlike previous Mistral releases, it is not available under an open source license and must be used through Mistral's website and API or those of its partners. Designed for enterprise use, the model delivers more than 90% of the benchmark performance of Claude 3.7 Sonnet, but at one-eighth the cost -- $0.40 per million input tokens and $2 per million output tokens, compared to Sonnet's $3/$15 for input/output. Benchmarks show that Mistral Medium 3 is particularly strong in software development tasks. In coding tests like HumanEval and MultiPL-E, it matches or surpasses both Claude 3.7 Sonnet and OpenAI's GPT-4o models. According to third-party human evaluations, it outperforms Llama 4 Maverick in 82% of coding scenarios and exceeds Command-A in nearly 70% of cases.
The model also performs competitively across languages and modalities. Compared to Llama 4 Maverick, it has higher win rates in English (67%), French (71%), Spanish (73%), and Arabic (65%), and leads in multimodal performance with notable scores in tasks like DocVQA (0.953), AI2D (0.937), and ChartQA (0.826). Mistral Medium 3 is optimized for enterprise integration. It supports hybrid and on-premises deployment, offers custom post-training, and connects easily to business systems. According to Mistral, it's already being used in beta by organizations in sectors such as financial services, energy, and healthcare to power domain-specific workflows and customer-facing solutions. Mistral Medium 3 is now accessible via Mistral's La Plateforme API and Amazon Sagemaker, with support coming soon to IBM WatsonX, NVIDIA NIM, Azure AI Foundry, and Google Cloud Vertex. Meanwhile, Le Chat Enterprise is available in the Google Cloud Marketplace, and will launch shortly on Azure AI and AWS Bedrock. For those ready to explore the assistant experience, Le Chat is available at chat.mistral.ai, as well as in the App Store and Google Play Store, with no credit card required to get started. By combining a high-efficiency model with a customizable enterprise platform, Mistral AI is making a concerted push to lower the barriers to scalable, privacy-respecting AI adoption in the enterprise world.
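To make the "one-eighth the cost" claim concrete, the arithmetic can be checked directly from the published per-token prices. The sketch below compares the two models on a hypothetical workload of one million input and one million output tokens; the prices are the ones quoted above, and the workload size is purely illustrative.

```python
# Published per-million-token prices in USD, as quoted in the announcement coverage.
MEDIUM3_INPUT, MEDIUM3_OUTPUT = 0.40, 2.00
SONNET_INPUT, SONNET_OUTPUT = 3.00, 15.00

def cost(input_price, output_price, input_tokens, output_tokens):
    """Workload cost in USD, with prices given per million tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Hypothetical workload: 1M input tokens and 1M output tokens.
medium3 = cost(MEDIUM3_INPUT, MEDIUM3_OUTPUT, 1_000_000, 1_000_000)  # $2.40
sonnet = cost(SONNET_INPUT, SONNET_OUTPUT, 1_000_000, 1_000_000)     # $18.00

print(f"Medium 3: ${medium3:.2f}, Claude 3.7 Sonnet: ${sonnet:.2f}")
print(f"Sonnet costs {sonnet / medium3:.1f}x more")  # 7.5x, i.e. roughly one-eighth
```

Because both the input and output prices are 7.5 times lower, the ratio is the same for any mix of input and output tokens, which is where the roughly one-eighth figure comes from.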
[3]
New Mistral AI Version Drops: A Worthy ChatGPT and Claude at a Fraction of the Cost - Decrypt
Now live on Mistral La Plateforme and Amazon Sagemaker, with Google Cloud and Azure integrations coming soon. Mistral Medium 3 dropped yesterday, positioning the model as a direct challenge to the economics of enterprise AI deployment. The Paris-based startup, founded in 2023 by former Google DeepMind and Meta AI researchers, released what it claims delivers frontier performance at one-eighth the operational cost of comparable models. "Mistral Medium 3 delivers frontier performance while being an order of magnitude less expensive," the company said. The model represents Mistral AI's most powerful proprietary offering to date, distinguishing itself from an open-source portfolio that includes Mistral 7B, Mixtral, Codestral, and Pixtral. At $0.40 per million input tokens and $2 per million output tokens, Medium 3 significantly undercuts competitors while maintaining performance parity. Independent evaluations by Artificial Analysis positioned the model "amongst the leading non-reasoning models with Medium 3 rivalling Llama 4 Maverick, Gemini 2.0 Flash and Claude 3.7 Sonnet." The model excels particularly in professional domains. Human evaluations demonstrated superior performance in coding tasks, with Sophia Yang, a Mistral AI representative, noting that "Mistral Medium 3 shines in the coding domain and delivers much better performance, across the board, than some of its much larger competitors." Benchmark results indicate Medium 3 performs at or above Anthropic's Claude Sonnet 3.7 across diverse test categories, while substantially outperforming Meta's Llama 4 Maverick and Cohere's Command A in specialized areas like coding and reasoning. The model's 128,000-token context window is standard, and its multimodality lets it process documents and visual inputs across 40 languages. But unlike the models that made Mistral famous, users will not be able to modify it or run it locally. Right now, the best option for open source enthusiasts is Mixtral-8x22B-v0.3, a mixture-of-experts model that runs 8 experts of 22 billion parameters each. Besides Mixtral, the company has over a dozen different open-source models available. Medium 3 is also initially available for enterprise deployment rather than for consumer use via Le Chat -- Mistral's chatbot interface. Mistral AI emphasized the model's enterprise adaptation capabilities, supporting continuous pretraining, full fine-tuning, and integration into corporate knowledge bases for domain-specific applications. Beta customers across financial services, energy, and healthcare sectors are testing the model for customer service enhancement, business process personalization, and complex dataset analysis. The API is launching immediately on Mistral La Plateforme and Amazon Sagemaker, with forthcoming integrations planned for IBM WatsonX, NVIDIA NIM, Azure AI Foundry, and Google Cloud Vertex. The announcement sparked considerable discussion across social media platforms, with AI researchers praising the cost-efficiency breakthrough while noting the proprietary nature as a potential limitation. The model's closed-source status marks a departure from Mistral's open-weight offerings, though the company hinted at future releases. "With the launches of Mistral Small in March and Mistral Medium today, it's no secret that we're working on something 'large' over the next few weeks," Mistral's Head of Developer Relations Sophia Yang teased in the announcement.
"With even our medium-sized model being resoundingly better than flagship open source models such as Llama 4 Maverick, we're excited to 'open' up what's to come." Mistral tends to hallucinate less than the average model, which is excellent news considering its size. It's better than Meta Llama-4 Maverick, Deepseek V3 and Amazon Nova Pro, to name a few. Right now, the model that has the least hallucinations is Google's recently launched Gemini 2.5 Pro. This release comes amid impressive business growth for the Paris-based company despite being quiet since the release of Mistral Large 2 last year. Mistral recently launched an enterprise version of its Le Chat chatbot that integrates with Microsoft SharePoint and Google Drive, with CEO Arthur Mensch telling Reuters they've "tripled (their) business in the last 100 days, in particular in Europe and outside of the U.S." The company, now valued at $6 billion, is flexing its technological independence by operating its own compute infrastructure and reducing reliance on U.S. cloud providers -- a strategic move that resonates in Europe amid strained relations following President Trump's tariffs on tech products. Whether Mistral's claim of achieving enterprise-grade performance at consumer-friendly prices holds up in real-world deployment remains to be seen. But for now, Mistral has positioned Medium 3 as a compelling middle ground in an industry that often assumes bigger (and pricier) equals better.
[4]
Mistral Medium 3 Outperforms Llama 4; Open Model Launch Planned Next
The French AI startup is also preparing to release its upcoming large model as an open model. Mistral AI unveiled Mistral Medium 3, a new AI model that claims to balance cutting-edge performance with cost-effectiveness while outperforming competitors like Meta's Llama 4 Maverick in benchmark tests. The company stated in the announcement that the new model is specifically designed for enterprise deployment and excels in coding, STEM, and multimodal tasks. According to the company, Mistral Medium 3 achieves over 90% of Claude Sonnet 3.7's benchmark scores at significantly lower pricing -- $0.40 per million tokens for input and $2 for output. This comes right after the release of its open-source model, Mistral Small 3.1. That model builds on Mistral Small 3 with improved text performance, multimodal understanding, and an expanded context window of up to 128k tokens, and Mistral claimed it outperforms comparable models like Gemma 3 and GPT-4o mini while delivering inference speeds of 150 tokens per second. Mistral Medium 3 can be deployed in hybrid or on-premise environments with support for continuous pretraining and enterprise system integration. The company reports that early adopters in finance, energy, and healthcare sectors are already using it for personalised customer service and complex data analysis. Despite its medium size, the model reportedly outperforms several larger competitors in both API and self-hosted formats. It can run on systems with as few as four GPUs, making deployment more accessible for organisations with varying infrastructure capabilities. In third-party human evaluations focused on real-world scenarios, Mistral Medium 3 particularly shines in coding tasks, surpassing some significantly larger models. The company claims that on benchmarks, Mistral Medium 3 outperforms Cohere Command A alongside Llama 4 Maverick, while beating DeepSeek v3 on pricing in both API and self-deployed scenarios. The model is now available on Mistral's own platform and Amazon SageMaker, with upcoming support on Azure AI, Google Cloud, IBM WatsonX, and NVIDIA NIM. Interestingly, for future releases, Mistral confirmed that a larger open model is in development.
[5]
Mistral AI adds Medium 3 to its family of models claiming low cost and high performance - SiliconANGLE
Paris-based artificial intelligence startup Mistral AI today announced the release of a new model, Mistral Medium 3, which the company said outperforms competitors at significantly lower cost. The company also introduced a new version of its AI assistant for enterprise customers, Le Chat Enterprise, powered by Medium 3, featuring enterprise search, agent building and tool connectors. On performance, Mistral said the new model surpasses models such as Meta Platforms Inc.'s Llama 4 Maverick and enterprise models such as Cohere Inc.'s Command A. On price, it beats cost leaders such as DeepSeek v3, with much lower costs through both its application programming interface and self-deployed systems. For enterprise use, the company said it delivers 90% or more of the benchmark performance of Anthropic PBC's Claude 3.7 Sonnet. However, it does so at around one-eighth the cost: roughly $0.40 per million input tokens and $2 per million output tokens, compared to Sonnet's $3 per million input tokens and $15 per million output tokens. Mistral said Medium 3 was designed as a frontier-class model that stands out in professional use cases and works best for coding and science, technology, engineering and mathematics tasks, where it closes the gap with larger and much slower competitors. "In a world where organizations are forced to choose between fine-tuning over API or self-deploying and customizing model behaviour from scratch, Mistral Medium 3 offers a path to comprehensively integrate intelligence into enterprise systems," said Mistral. Enterprise customers can continuously pretrain and fully fine-tune the model, blend it into their knowledge sources for domain-specific training, and adapt it to both company culture and information systems. According to Mistral, beta customers across financial services, energy and healthcare have already harnessed the model to personalize business processes and workflows and to analyze complex datasets. The new model is available starting today via API on the company's Mistral La Plateforme and Amazon Sagemaker. It will also come soon to IBM WatsonX, NVIDIA NIM, Azure AI Foundry and Google Cloud Vertex. Le Chat Enterprise builds on the company's existing AI chatbot assistant, combining enterprise productivity tools and security into a unified AI platform. It will provide enterprise teams with a feature-rich platform that includes enterprise search, AI agent builders, custom data and tool connectors, document libraries, custom AI models and hybrid deployment capabilities. Enterprise teams will be able to use Le Chat Enterprise to connect to all of their usual productivity apps such as Google Drive, Google Calendar, Gmail, SharePoint and OneDrive in one place - with more connectors coming soon. Using this information, it can help organize external data sources, documents and web sources to provide relevant answers, offering a handy way to get up to speed on anything business-related. With agent builders, workers can automate routine tasks by connecting to apps and business tools. "Le Chat will enable your team to easily build custom assistants that match your own requirements -- no code required," Mistral said. The company stressed that the enterprise-focused Le Chat is privacy-first and can be deployed anywhere, in public or private cloud, or as a service hosted on the Mistral cloud. All data connectors to enterprise tools remain fully protected and adhere to access controls.
[6]
Mistral Medium 3 destroys others in benchmarks
Mistral, a French AI startup, has released its latest AI model, Mistral Medium 3, which promises leading performance at a competitive price. The model is available through Mistral's API, priced at $0.40 per million input tokens and $2 per million output tokens. Mistral claims that Mistral Medium 3 performs "at or above" 90% of Anthropic's more expensive Claude Sonnet 3.7 model across various benchmarks. It also outperforms recent open models, including Meta's Llama 4 Maverick and Cohere's Command A, on popular AI performance evaluations. To put this into perspective, a million tokens are equivalent to about 750,000 words, roughly 163,000 words longer than "War and Peace." Mistral Medium 3 is optimized for coding and STEM tasks and excels at multimodal understanding. The company states that clients in financial services, energy, and healthcare have been beta testing the model for use cases like customer service, workflow automation, and analyzing complex datasets. The model can be deployed on any cloud, including self-hosted environments with four GPUs and above. In addition to Mistral's API, where enterprise customers can work with Mistral to fine-tune the model, Mistral Medium 3 is available on Amazon's Sagemaker platform as of Wednesday. It will soon be available on other hosts, including Microsoft's Azure AI Foundry and Google's Vertex AI platforms. The launch of Mistral Medium 3 follows the release of Mistral Small 3.1 in March. Mistral has also teased the release of a much larger model in the coming weeks. On Wednesday, the company launched Le Chat Enterprise, a corporate-focused chatbot service that offers tools like an AI "agent" builder and integrates Mistral's models with third-party services like Gmail, Google Drive, and SharePoint. Le Chat Enterprise will soon support MCP, Anthropic's standard for connecting AI assistants to the systems and software where data resides. Mistral has raised over €1.1 billion (roughly $1.24 billion) to date, with backing from VCs including General Catalyst. Its customers include BNP Paribas, AXA, and Mirakl.
[7]
Mistral's New Medium 3 AI Model Is Optimised for Performance and Cost
Mistral claims Medium 3 offers SOTA performance at 8X lower cost
Mistral Medium 3, a new artificial intelligence (AI) model, was released on Wednesday. The Paris-based AI firm introduced the enterprise-focused model with an emphasis on high performance at lower costs. It is claimed to be state-of-the-art (SOTA) in terms of performance in its class, and is said to be available at significantly lower costs compared to the competition. Notably, Mistral Medium 3 is not an open-source model and will not be available on open repositories such as GitHub and Hugging Face. In a newsroom post, the AI firm shared details about the capabilities of the new AI model. Notably, in March, Mistral released the Small 3.1 open-source model with a context window of up to 128,000 tokens. Medium 3 is part of the company's enterprise-focused offerings. These are paid models that can be accessed via cloud services and as an application programming interface (API). Mistral Medium 3 is a multimodal large language model (LLM) that can be deployed in a hybrid setup (cloud + edge), on-premise, or in virtual private cloud (VPC) setups. The model can be post-trained by enterprises on their internal data to ground its responses. Additionally, the model can be integrated into enterprise tools and systems. Coming to pricing, the Mistral Medium 3 AI model will cost $0.4 (roughly Rs. 34) per million input tokens and $2 (roughly Rs. 171) per million output tokens. For reference, the Claude 3.7 Sonnet, a model Mistral says Medium 3 is comparable to, is priced at $3 (roughly Rs. 257) per million input and $15 (roughly Rs. 1,284) per million output tokens. Based on internal testing conducted by the French AI firm, the Mistral Medium 3 AI model outperforms Llama 4 Maverick, GPT-4o, and Claude 3.7 Sonnet on several benchmarks such as HumanEval, ArenaHard, Math500 Instruct, and AI2D. The scores are also comparable to the DeepSeek 3.1 model in several benchmarks. Enterprises and developers interested in using Mistral Medium 3 can access it as an API on Mistral La Plateforme and Amazon Sagemaker. The company will also make it available on IBM WatsonX, Nvidia NIM, Azure AI Foundry, and Google's Vertex AI soon. Notably, Mistral also teased that after the launches of the Small and Medium AI models, it is now working on an open-source Large AI model, which could arrive later this year.
Mistral AI launches Medium 3, a new AI model claiming superior performance at a fraction of competitors' costs, alongside Le Chat Enterprise for business applications.
French AI startup Mistral AI has unveiled its latest AI model, Mistral Medium 3, positioning it as a game-changer in the AI industry. The company claims that this new model delivers frontier performance at a significantly lower cost compared to its competitors [1].
Mistral Medium 3 is priced at $0.40 per million input tokens and $2 per million output tokens, which is approximately one-eighth the cost of Anthropic's Claude 3.7 Sonnet model [2]. Despite its competitive pricing, Mistral claims that Medium 3 performs at or above 90% of Claude Sonnet 3.7 on benchmarks across the board [1].
Mistral Medium 3 boasts a 128,000-token context window and supports multimodality, allowing it to process documents and visual inputs across 40 languages [3]. The model excels particularly in coding and STEM tasks, and demonstrates strong performance in multimodal understanding [1].
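Using the rough conversion cited earlier (about 750,000 words per million tokens), the 128,000-token window translates into an approximate word budget, which is handy when deciding whether a document needs to be chunked before being sent to the model. This is only a back-of-the-envelope sketch; real token counts depend on the tokenizer and the language.

```python
# Back-of-the-envelope context-window check, using the ~0.75 words-per-token
# rule of thumb cited above (750,000 words per 1M tokens). Illustrative only.
WORDS_PER_TOKEN = 0.75
CONTEXT_WINDOW_TOKENS = 128_000

def fits_in_context(word_count: int, reserved_output_tokens: int = 4_000) -> bool:
    """Roughly check whether a document of `word_count` words fits in the window,
    leaving `reserved_output_tokens` free for the model's reply."""
    estimated_tokens = word_count / WORDS_PER_TOKEN
    return estimated_tokens + reserved_output_tokens <= CONTEXT_WINDOW_TOKENS

print(CONTEXT_WINDOW_TOKENS * WORDS_PER_TOKEN)  # ~96,000 words of input at most
print(fits_in_context(80_000))   # True: a long report fits comfortably
print(fits_in_context(150_000))  # False: needs chunking or retrieval
```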
Independent evaluations have positioned Mistral Medium 3 among the leading non-reasoning models, rivaling Llama 4 Maverick, Gemini 2.0 Flash, and Claude 3.7 Sonnet [3]. In coding tests like HumanEval and MultiPL-E, it matches or surpasses both Claude 3.7 Sonnet and OpenAI's GPT-4o models [2].
Mistral Medium 3 is designed for enterprise deployment, supporting hybrid and on-premises deployment, custom post-training, and easy integration with business systems [4]. Beta customers in financial services, energy, and healthcare sectors are already using the model for personalized customer service, workflow automation, and complex data analysis [1].
Alongside Medium 3, Mistral AI has introduced Le Chat Enterprise, a unified AI assistant platform for businesses [2]. This platform offers features such as enterprise search, AI agent builders, custom data and tool connectors, document libraries, and hybrid deployment capabilities [5].
Mistral Medium 3 is now accessible via Mistral's La Plateforme API and Amazon Sagemaker, with support coming soon to IBM WatsonX, NVIDIA NIM, Azure AI Foundry, and Google Cloud Vertex [2]. The company has hinted at the development of a larger open model in the coming weeks [3].
Mistral AI's latest offerings represent a significant push to lower the barriers to scalable, privacy-respecting AI adoption in the enterprise world [2]. The company's focus on cost-efficiency and performance optimization could potentially disrupt the current AI market dynamics, challenging established players and offering more accessible AI solutions to businesses of various sizes.