ScaleOps raises $130M to tackle computing efficiency crisis as AI infrastructure costs soar

ScaleOps has raised $130 million in Series C funding at an $800 million valuation to address the growing problem of wasted compute resources in AI infrastructure. The startup's platform autonomously manages Kubernetes infrastructure in real time, reducing cloud and AI infrastructure costs by up to 80%. Led by Insight Partners, the round brings total funding to $210 million as the company reports 450% year-over-year growth.

ScaleOps Secures $130M Series C Funding Round to Address AI Infrastructure Waste

ScaleOps has closed a $130 million Series C funding round at an $800 million valuation, positioning the startup as a major player in autonomous cloud and AI infrastructure management [1]. Insight Partners led the round, with participation from existing investors including Lightspeed Venture Partners, NFX, Glilot Capital Partners, and Picture Capital [2]. The investment brings ScaleOps' total funding to $210 million and includes a secondary transaction worth tens of millions of dollars, allowing employees to realize equity [2].

Source: The Next Web

The New York-headquartered company reports more than 450% year-over-year growth and has tripled its headcount over the past 12 months, with plans to triple again by year-end [1]. This rapid expansion reflects surging demand for solutions that can reduce cloud and AI infrastructure costs amid the AI boom, where companies face mounting expenses from underutilized GPUs and over-provisioned workloads.

Computing Efficiency Crisis Drives Market Opportunity

While AI compute demand has tripled year-over-year in 2026, most enterprises still rely on pre-AI maintenance tools that cannot keep pace with dynamic workloads [3]. ScaleOps claims its platform can reduce cloud costs by as much as 80% through real-time optimization of compute resources [1]. The problem stems from what CEO and founder Yodar Shafrir identifies as the compute bottleneck of the AI era, where GPUs sit idle, workloads are over-provisioned, and cloud costs continue climbing despite abundant resources.

Source: TechCrunch

"Compute is the defining bottleneck of the AI era, and the way most enterprises manage compute was built for a world that no longer exists," Shafrir explained [3]. The issue extends beyond GPUs to encompass compute, memory, storage, and networking, where the same inefficiency patterns repeatedly emerge across DevOps teams struggling to manage production workloads.

From Run:ai Engineer to Autonomous Infrastructure Pioneer

Yodar Shafrir co-founded ScaleOps in 2022 after serving as an engineer at Run:ai, a GPU orchestration startup acquired by Nvidia [1]. During his tenure at Run:ai, Shafrir witnessed firsthand how DevOps teams struggled to manage increasingly complex AI workloads, particularly as inference workloads became more common. While tools like Kubernetes help run applications across large clusters of machines, they rely on static configurations that fail to adapt to fast-changing demand, leading to performance issues and costly inefficiencies.

"Kubernetes is a great system. It's flexible and highly configurable. But that's also the problem," Shafrir told TechCrunch [1]. "Kubernetes relies heavily on static configurations. Applications today are highly dynamic, which requires constant manual work across teams. You need something that understands the context of each application -- what it needs, how it behaves, and how the environment is changing."
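The static-configuration problem Shafrir describes can be made concrete with illustrative numbers (the figures below are invented for the example, not taken from ScaleOps or its customers): a pod whose CPU request was sized for peak demand leaves most of that reservation idle the rest of the time.

```python
# Hypothetical illustration of static over-provisioning in Kubernetes.
# All numbers are invented for the example.

request_millicores = 2000   # static CPU request in the pod spec, sized for peak load
avg_usage_millicores = 300  # typical observed CPU usage

# The scheduler reserves the full request, so the gap between the
# request and average usage is capacity that is paid for but unused.
waste_fraction = 1 - avg_usage_millicores / request_millicores
print(f"{waste_fraction:.0%} of the requested CPU sits unused on average")
```

Multiplied across hundreds or thousands of workloads, this idle reservation is the kind of waste that continuous optimization targets.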

Shafrir's background as a professional triathlete, competing internationally for Israel for 15 years, may explain his methodical approach to a problem most competitors were ignoring when ScaleOps launched, in the months before the AI infrastructure buildout accelerated [2].

Autonomously Managing Kubernetes Infrastructure in Real-Time

ScaleOps differentiates itself by providing fully autonomous infrastructure management that operates without human intervention [2]. The platform covers Kubernetes pod rightsizing, replica optimization, node management, spot instance optimization, and increasingly, GPU resource management for AI models. Unlike competitors such as Cast AI, Kubecost, and Spot, which offer visibility into infrastructure problems, ScaleOps delivers continuous, context-aware automation that adjusts resources in real time based on application behavior and environmental changes.

The platform was built specifically for production environments from the ground up, working out of the box without requiring manual configuration [1]. This context-aware approach addresses a critical limitation Shafrir observed: most automation tools operate without full application context, which can lead to performance issues and downtime that erode trust among teams running production environments. ScaleOps connects application needs with infrastructure decisions in real time, understanding what each application needs, how it behaves, and how the environment is changing.

Enterprise Adoption Across Fortune 500 Companies

ScaleOps serves enterprise customers globally, particularly those operating Kubernetes-based infrastructure, with clients including Adobe, Wiz, DocuSign, Salesforce, and Coupa [1]. The platform is available on AWS Marketplace, Azure Marketplace, and Google Cloud Marketplace, and is FIPS-compatible for FedRAMP-regulated environments [2]. The company operates with more than 120 employees across Israel, North America, and Europe.

Jeff Horing, Managing Director at Insight Partners, emphasized the urgency of the problem ScaleOps addresses: "ScaleOps is addressing the urgent challenge of managing cloud and AI workloads, helping enterprises unlock performance, efficiency, and innovation at scale" [3]. The Series C funding comes roughly a year and a half after ScaleOps raised $58 million in its Series B round in November 2024, reflecting accelerating demand for autonomous solutions.

Building Toward Fully Autonomous Infrastructure Standard

With the new capital, ScaleOps plans to roll out new products and expand its platform as AI drives exponential growth in compute requirements [1]. The company aims to make infrastructure that manages itself the new enterprise standard, creating what Shafrir describes as a new category of autonomous infrastructure management. As traffic patterns shift by the second and GPU demand spikes unpredictably, the static resource configurations that Kubernetes relies on become increasingly untenable for engineering teams managing hundreds or thousands of workloads simultaneously.

The timing of ScaleOps' growth aligns with a fundamental shift in how enterprises must approach infrastructure. With AI models being invoked constantly and triple-digit year-over-year growth in AI compute demand, the manual tuning that DevOps teams traditionally performed is no longer tractable [2]. The company's 450% year-over-year growth suggests enterprises recognize that managing AI infrastructure requires a fundamentally different approach than the pre-AI tools most still use. As ScaleOps continues building toward fully autonomous infrastructure, the question for enterprises becomes not whether to adopt autonomous management, but how quickly they can transition before inefficiency costs become unsustainable.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited