Perplexity Unveils Sonar: A New AI Model Challenging Industry Giants


Perplexity launches Sonar, an AI model built on Meta's Llama 3.3 70B, claiming superior performance and user satisfaction compared to competitors such as OpenAI's GPT-4o models and Anthropic's Claude 3.5 models. The company's methodology and comparisons, however, raise questions about transparency and objectivity.


Perplexity Introduces Sonar: A New Contender in AI Search

Perplexity, an AI search engine startup, has launched Sonar, its proprietary AI model, for all Pro users on its platform [1]. Built on Meta's open-source Llama 3.3 70B and powered by Cerebras Inference, Sonar claims to outperform leading AI models in factuality, readability, and speed [2].

Performance Claims and Comparisons

Perplexity asserts that Sonar surpasses OpenAI's GPT-4o mini and Anthropic's Claude 3.5 Haiku in performance, while matching or exceeding GPT-4o and Claude 3.5 Sonnet in user satisfaction [1]. The company also states that Sonar operates at 1,200 tokens per second, nearly 10 times faster than Google's Gemini 2.0 Flash [2].
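To put the quoted decode speed in perspective, a short back-of-the-envelope calculation shows what 1,200 tokens per second means for response latency. This is an illustrative sketch only: the answer length is an assumption, and the slower rate is derived purely from the article's "nearly 10 times faster" claim, not from any measurement.

```python
# Back-of-the-envelope generation time implied by the quoted decode speeds.
# 1,200 tokens/s is Perplexity's figure for Sonar; the slower rate below is
# simply that figure divided by 10, per the article's comparison claim.

def seconds_to_generate(tokens: int, tokens_per_second: float) -> float:
    """Time to decode `tokens` at a constant `tokens_per_second` rate."""
    return tokens / tokens_per_second

answer_tokens = 600  # a medium-length answer (assumption, not from the article)
print(f"Sonar (1,200 tok/s):      {seconds_to_generate(answer_tokens, 1200):.2f} s")
print(f"~10x slower (120 tok/s):  {seconds_to_generate(answer_tokens, 120):.2f} s")
```

At these rates a 600-token answer would stream in about half a second on Sonar versus roughly five seconds on a model one-tenth as fast, which is the kind of gap users notice in an interactive search product.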

Benchmark Results and User Satisfaction

According to Perplexity, Sonar outperformed its competitors in academic benchmark tests such as IFEval and MMLU, which evaluate instruction-following capabilities and general knowledge across disciplines [1]. The company claims that A/B testing revealed higher user satisfaction and engagement with Sonar compared to rival models [1].

Methodology and Transparency Concerns

While Perplexity provides screenshot examples comparing Sonar's outputs to those of competitor models, the methodology behind these comparisons remains unclear [1]. The company does not disclose the queries used, the number of tests conducted, or the specific metrics for measuring factuality and readability [1].

Sonar API and Pricing

Perplexity has made the Sonar API available in two variants: Sonar and Sonar Pro [2]. The company touts it as the most affordable API on the market: Sonar Pro costs $3 per million input tokens, $15 per million output tokens, and $5 per 1,000 searches [2], while the standard Sonar plan charges $1 per million tokens for both input and output, plus the same $5 per 1,000 searches fee [2].
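The quoted rates can be folded into a short cost estimator. This is a hedged sketch based only on the prices stated above: the tier keys and the helper function are my own illustration, not part of any official Perplexity SDK, and actual billing may include factors the article does not mention.

```python
# Illustrative cost estimator for the Sonar API prices quoted in the article.
# All rates are USD; tier names and this helper are hypothetical, not an SDK.

RATES = {
    "sonar":     {"input": 1.00, "output": 1.00,  "per_1k_searches": 5.00},
    "sonar-pro": {"input": 3.00, "output": 15.00, "per_1k_searches": 5.00},
}

def estimate_cost(tier: str, input_tokens: int, output_tokens: int, searches: int) -> float:
    """Estimated USD cost for the given usage under the article's quoted rates."""
    r = RATES[tier]
    return (input_tokens / 1_000_000 * r["input"]
            + output_tokens / 1_000_000 * r["output"]
            + searches / 1_000 * r["per_1k_searches"])

# Example: 2M input tokens, 1M output tokens, 4,000 searches on Sonar Pro
# = 2*$3 + 1*$15 + 4*$5 = $41
print(estimate_cost("sonar-pro", 2_000_000, 1_000_000, 4_000))
```

The per-search fee is worth noting in such estimates: for search-heavy workloads it can dominate the token charges, especially on the cheaper standard tier.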

Industry Context and Competition

Sonar's launch comes amid fierce competition in the AI model space. French startup Mistral recently introduced Le Chat, which also runs on Cerebras Inference and claims to be the fastest AI assistant available [2]. Perplexity has also added the DeepSeek-R1 model to its platform, hosted on U.S. servers, further diversifying its AI offerings [2].

Implications for the AI Industry

The introduction of Sonar and its claimed performance metrics could shake up the AI model landscape. However, the lack of standardized, independent benchmarks for factuality and user satisfaction in AI search engines makes these claims difficult to verify objectively [1]. As the AI industry continues to evolve rapidly, the need for transparent, standardized evaluation methods becomes increasingly apparent.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited