Curated by THEOUTPOST
On Fri, 14 Mar, 8:05 AM UTC
2 Sources
[1]
AI Companies Embrace Efficient Models That Run on Fewer Chips
Nearly two months after the viral success of China's DeepSeek prompted a reckoning over how much tech companies spend to develop artificial intelligence systems, some leading AI firms are embracing a less-is-more approach. On Thursday, Toronto-based Cohere Inc. is set to announce a new model called Command A that can carry out complicated business tasks while running on just two of Nvidia Corp.'s AI-focused A100 or H100 chips. That's significantly less than the number of chips required for some large models -- and also less than what DeepSeek's system is thought to need.
[2]
AI companies embrace efficient models that run on fewer chips
A day earlier, Alphabet Inc.'s Google unveiled a new series of its Gemma AI models that it said can run on a single Nvidia H100 chip. Both companies said their models rivaled or outperformed DeepSeek's most recent AI system on certain tasks.

AI companies, including Google, continue to invest billions of dollars in data centers, chips and talent to develop their most cutting-edge models and support widespread adoption of them. But the recent launches also hint at an industrywide push to make AI software that can run as efficiently as possible. While some of these projects, including Cohere's new model, were set in motion before DeepSeek's most recent launch, the frenzy around the Chinese upstart may only accelerate this broader effort.

In January, DeepSeek released open-source AI software that rivaled models from OpenAI and Google -- and was purportedly built at a fraction of the cost of competitors' models. DeepSeek's team came up with some simple but key innovations, such as finding ways to get more use from the computer chips they had access to. Some AI executives argue the immense attention on DeepSeek is a positive for the industry, indicating it may be cheaper to roll out advanced AI systems than was commonly thought.
"I think the service DeepSeek did was make the market aware of how inefficient many players have been," Aidan Gomez, Cohere's co-founder and chief executive officer, said in an interview. "It was a very healthy wakeup call."

Cohere said it began work on the new model eight months ago, before DeepSeek upended the tech world. For Cohere, which focuses on business uses of AI and was most recently valued at $5.5 billion, there's an added benefit to this approach. Running AI models on fewer chips is "crucial" for business customers, who may have limited access to computing power.

"They don't have tens, let alone hundreds, of GPUs to be able to deploy against problems," Gomez said, referring to graphics processing units. "So they need a very light and scalable form factor."
Leading AI firms are embracing a less-is-more approach, developing efficient AI models that can run on fewer chips. This trend, accelerated by DeepSeek's success, aims to reduce costs and improve accessibility for businesses.
In a significant trend reshaping the AI landscape, leading companies are now focusing on developing more efficient AI models that can operate on fewer chips. This shift comes nearly two months after the viral success of China's DeepSeek, which prompted a reevaluation of the resources required for AI system development [1].
Toronto-based Cohere Inc. is at the forefront of this movement with its new model, Command A. Set to be announced on Thursday, Command A can perform complex business tasks while running on just two of Nvidia Corp.'s AI-focused A100 or H100 chips. This represents a significant reduction in chip requirements compared to larger models and even DeepSeek's system [1].
Not to be outdone, Alphabet Inc.'s Google unveiled its new series of Gemma AI models a day earlier. These models can reportedly run on a single Nvidia H100 chip, further pushing the boundaries of efficiency. Both Cohere and Google claim their models match or surpass DeepSeek's most recent AI system on certain tasks [2].
While major AI companies continue to invest heavily in infrastructure and talent, there's a growing emphasis on creating AI software that can run as efficiently as possible. This trend, although predating DeepSeek's latest launch, has likely been accelerated by the Chinese company's success [2].
In January, DeepSeek released open-source AI software that rivaled models from OpenAI and Google, reportedly built at a fraction of the cost. Their success stemmed from innovations in chip utilization, demonstrating that advanced AI systems could be developed more cost-effectively than previously thought [2].
For companies like Cohere, which focuses on business applications of AI, this efficiency drive has additional benefits. Running AI models on fewer chips is crucial for business customers who may have limited access to computing power. As Aidan Gomez, Cohere's co-founder and CEO, explains, "They don't have tens, let alone hundreds, of GPUs to be able to deploy against problems. So they need a very light and scalable form factor" [2].
This shift towards efficiency could democratize access to advanced AI capabilities, potentially reshaping the competitive landscape and accelerating AI adoption across various industries.
Reference
[1]
Cohere releases Command A, a new large language model designed for enterprise use, offering high performance with minimal hardware requirements and expanded multilingual capabilities.
5 Sources
DeepSeek, a Chinese AI startup, has developed a new language model that achieves state-of-the-art performance without relying on advanced hardware, challenging the 'bigger is better' approach in AI development.
3 Sources
As Nvidia dominates the AI training chip market with GPUs, competitors are focusing on developing specialized AI inference chips to meet the growing demand for efficient AI deployment and reduce computing costs.
6 Sources
China-based DeepSeek disrupts the generative AI market with its R1 model, challenging industry leaders like OpenAI and Google with a cost-effective solution that sparks debate on the future of AI development and competition.
9 Sources
Ant Group, backed by Alibaba's Jack Ma, reports significant cost reduction in AI model training using Chinese-made chips, potentially rivaling Nvidia's performance and signaling a shift in the global AI chip market.
7 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved