AI Companies Shift Focus to Efficient Models Running on Fewer Chips


Leading AI firms are embracing a less-is-more approach, developing efficient AI models that can run on fewer chips. This trend, accelerated by DeepSeek's success, aims to reduce costs and improve accessibility for businesses.


AI Industry Shifts Towards Efficiency

In a significant trend reshaping the AI landscape, leading companies are now focusing on developing more efficient AI models that can operate on fewer chips. This shift comes nearly two months after the viral success of China's DeepSeek, which prompted a reevaluation of the resources required for AI system development [1].

Cohere's Command A: A Leap in Efficiency

Toronto-based Cohere Inc. is at the forefront of this movement with its new model, Command A. Set to be announced on Thursday, Command A can perform complex business tasks while running on just two of Nvidia Corp.'s AI-focused A100 or H100 chips. This represents a significant reduction in chip requirements compared to larger models and even DeepSeek's system [1].

Google's Gemma: Single-Chip Performance

Not to be outdone, Alphabet Inc.'s Google unveiled its new series of Gemma AI models a day earlier. These models can reportedly run on a single Nvidia H100 chip, further pushing the boundaries of efficiency. Both Cohere and Google claim their models match or surpass DeepSeek's most recent AI system on certain tasks [2].

Industry-Wide Push for Efficiency

While major AI companies continue to invest heavily in infrastructure and talent, there is a growing emphasis on creating AI software that runs as efficiently as possible. This trend, although predating DeepSeek's latest launch, has likely been accelerated by the Chinese company's success [2].

DeepSeek's Impact on the AI Landscape

In January, DeepSeek released open-source AI software that rivaled models from OpenAI and Google, reportedly built at a fraction of the cost. Its success stemmed from innovations in chip utilization, demonstrating that advanced AI systems could be developed more cost-effectively than previously thought [2].

Business Implications and Future Outlook

For companies like Cohere, which focuses on business applications of AI, this efficiency drive has additional benefits. Running AI models on fewer chips is crucial for business customers who may have limited access to computing power. As Aidan Gomez, Cohere's co-founder and CEO, explains, "They don't have tens, let alone hundreds, of GPUs to be able to deploy against problems. So they need a very light and scalable form factor" [2].

This shift towards efficiency could democratize access to advanced AI capabilities, potentially reshaping the competitive landscape and accelerating AI adoption across various industries.
