Curated by THEOUTPOST
On Wed, 30 Oct, 12:08 AM UTC
2 Sources
[1]
GMI Cloud secures $82M in Series A for its GPU cloud infrastructure
The AI boom has spurred massive demand for graphics processing units (GPUs). As many enterprises seek to integrate AI technology into their systems, providers of GPU infrastructure are helping businesses get access to the chips they need. In the latest development, GMI Cloud, a San Jose-based startup that provides GPU cloud infrastructure, has raised an $82 million Series A led by Headline Asia with participation from strategic investors such as Banpu, a Thailand-based energy firm, and Wistron, a Taiwan-based electronics company. Banpu will supply power to GMI Cloud, while Wistron will co-develop products with the startup.

The strategic partnerships boost the company's capacity to meet growing worldwide demand for GPU resources, GMI Cloud founder and CEO Alex Yeh told TechCrunch. The round, which brings the company's total raised capital to $93 million, consists of $15 million in equity and $67 million in debt financing.

The outfit, launched in 2022, started as a data center focused on providing Bitcoin computing node services. Yeh said in an exclusive interview with TechCrunch that he noticed growing demand for GPU computing power from investors and clients, and shifted the company's focus to AI cloud infrastructure in response. The two-year-old startup already serves dozens of clients, including those in the healthcare, research, and telecom industries, Yeh said.

The company plans to use the new funds to establish a data center in Colorado. The facility will be essential for expanding its capacity in North America and will complement its existing data centers in Taiwan, Thailand, and Malaysia. GMI Cloud also aims to grow its workforce to 60 to 70 staff by the end of the year; the startup currently has 35 staff in Asia and 18 in the U.S.

A recent McKinsey report predicted that artificial intelligence could contribute around $13 trillion to the global economy by 2030, with the industrial sector expected to account for about $1 trillion of that total. The global AI market is expected to reach $1.8 trillion by 2030, per a report by Grand View Research.

GMI Cloud isn't the only GPU cloud provider. It competes with CoreWeave, Nebius, Google Cloud's Vertex AI, and other Big Tech offerings. Yeh told TechCrunch that GMI Cloud distinguishes itself from competitors through features such as customized private cloud services and built-in support for Nvidia NIM, which simplifies integration with Nvidia hardware and software. He also noted that the company has a group of top-tier AI engineers and high-performance computing (HPC) experts with experience at Google X, Alibaba Cloud, and Supermicro.

"[Our] team has over 20 years of AI and HPC experience," Yeh said, adding that the team holds 33 AI patents and extensive experience building large-scale distributed systems. "GMI also offers professional AI consulting services, guiding enterprises on model training, fine-tuning, and scaling, which competitors rarely offer."

Yeh highlighted that GMI provides a cost-effective solution with optimized performance and resource management compared to competitors, supporting businesses through end-to-end offerings from GPU hardware to AI applications. "[On top of that], we hold a significant supply chain advantage by sourcing directly from manufacturers, which allows us to maintain cost efficiency and a highly secure supply chain," Yeh continued.
"Uniquely, GMI is also the only Nvidia-certified cloud service provider in Taiwan under the NCP/NPN program, further solidifying its competitive edge in offering premium cloud services."
[2]
GPU cloud operator GMI Cloud secures $82M investment - SiliconANGLE
Cloud infrastructure startup GMI Cloud Inc. today announced that it has closed an $82 million early-stage funding round led by Headline Asia. The Series A investment also included participation from Thailand-based energy company Banpu Next and Wistron Corp., a Taiwanese electronics maker. The bulk of the round, $67 million, took the form of debt financing. The remaining $15 million was provided as equity funding.

Santa Clara, California-based GMI Cloud operates an infrastructure-as-a-service platform geared towards artificial intelligence workloads. The platform provides access to Nvidia Corp.'s H100 graphics processing units. A few weeks from now, GMI Cloud will also start offering the H200, an upgraded version of the H100 with more memory.

For users, spinning up a GPU-powered cloud instance usually takes several minutes. GMI Cloud says that its platform performs the task in a fraction of a second. Customers can link their instances into a cluster and, if necessary, run multiple clusters side by side.

After spinning up a GPU environment, developers have to install the AI model they plan to run along with the external software components the model requires to work. Configuring those components can require a significant amount of time. To speed up the task, GMI Cloud provides preconfigured AI software that can be deployed without extensive customization. The company offers access to NIMs, Nvidia-developed versions of popular AI models. NIMs are packaged into containers to ease deployment. They also include performance optimizations designed to make AI models run faster on Nvidia silicon.

For customers with more advanced requirements, GMI Cloud offers the ability to deploy custom models on its cloud. The platform integrates with several of the code editing tools that developers most commonly use to build AI applications. It also supports open-source machine learning frameworks such as PyTorch.

Under the hood, GMI Cloud's platform is powered by a Kubernetes-based infrastructure management platform called Cluster Engine. The software automates the tasks of provisioning new instances and allocating hardware resources to AI workloads. If one of the servers on which a model runs experiences technical issues, Cluster Engine can move the workload's data to a different machine without incurring downtime.

Since launching in 2022, GMI Cloud has built up a user base that includes several dozen companies. The company says that its customers include healthcare, telecommunications and research organizations.
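To make the NIM deployment model described above more concrete, below is a minimal sketch of how a developer might query a containerized NIM-style model once it is running. Nvidia's NIM containers expose an OpenAI-compatible REST API; the endpoint address and model name used here are illustrative assumptions, not GMI Cloud's actual configuration.

```python
# Minimal sketch: querying a containerized NIM-style model over its
# OpenAI-compatible REST API. The host, port, and model id below are
# illustrative assumptions; adjust them to your own deployment.
import requests

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local NIM container
MODEL_NAME = "meta/llama-3.1-8b-instruct"                   # example model id

payload = {
    "model": MODEL_NAME,
    "messages": [
        {"role": "user", "content": "Summarize what a GPU cloud provider does in one sentence."}
    ],
    "max_tokens": 128,
    "temperature": 0.2,
}

response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
response.raise_for_status()

# The response follows the OpenAI chat-completions schema.
print(response.json()["choices"][0]["message"]["content"])
```

Because the API mirrors the familiar chat-completions schema, existing application code can often be pointed at such an endpoint by changing only the base URL, which helps explain why the preconfigured containers can be deployed without extensive customization.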
GMI Cloud, a GPU cloud infrastructure provider, has raised $82 million in a Series A funding round to expand its AI-focused services and data center capacity.
GMI Cloud, a San Jose-based startup specializing in GPU cloud infrastructure, has raised $82 million in a Series A funding round. The investment, led by Headline Asia, includes strategic partnerships with Banpu, a Thai energy firm, and Wistron, a Taiwanese electronics company [1].
The funding round comprises $15 million in equity and $67 million in debt financing, bringing GMI Cloud's total raised capital to $93 million. These strategic partnerships are set to enhance the company's capabilities, with Banpu providing power to GMI Cloud and Wistron collaborating on product development [1].
Founded in 2022, GMI Cloud initially focused on Bitcoin computing node services before pivoting to AI cloud infrastructure in response to growing demand for GPU computing power. The company now serves dozens of clients across the healthcare, research, and telecom industries [1].
GMI Cloud plans to use the new funds to establish a data center in Colorado, expanding its North American capacity. This addition will complement existing data centers in Taiwan, Thailand, and Malaysia. The company aims to increase its workforce from the current 53 to 60-70 staff by year-end [1].
GMI Cloud distinguishes itself through:
- Customized private cloud services
- Built-in support for Nvidia NIM, simplifying integration with Nvidia hardware and software
- Professional AI consulting on model training, fine-tuning, and scaling
- A supply chain that sources directly from manufacturers, keeping costs down
- Status as the only Nvidia-certified cloud service provider in Taiwan under the NCP/NPN program [1]
The company's infrastructure-as-a-service platform offers:
- Access to Nvidia H100 GPUs, with H200 instances expected in the coming weeks
- GPU instances that spin up in a fraction of a second and can be linked into clusters
- Preconfigured, containerized Nvidia NIM models alongside support for custom models
- Integration with popular code editors and open-source frameworks such as PyTorch
- Cluster Engine, a Kubernetes-based management layer that provisions instances, allocates hardware to workloads, and moves workloads off failing servers without downtime [2]
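As a rough illustration of what running a workload on one of these GPU instances looks like in practice, the sketch below uses PyTorch, one of the frameworks the platform reportedly supports, to confirm that a GPU is visible and to run a small half-precision matrix multiplication on it. This is generic PyTorch code written under those assumptions, not GMI Cloud-specific tooling.

```python
# Minimal sketch: verifying GPU availability and running a small workload
# with PyTorch on a rented GPU instance. Generic PyTorch usage; nothing here
# is specific to GMI Cloud's platform.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA-capable GPU visible to PyTorch on this instance.")

device = torch.device("cuda:0")
print(f"Using GPU: {torch.cuda.get_device_name(device)}")

# A small matrix multiplication in half precision, the kind of kernel that
# dominates both training and inference workloads on these GPUs.
a = torch.randn(4096, 4096, device=device, dtype=torch.float16)
b = torch.randn(4096, 4096, device=device, dtype=torch.float16)
c = a @ b
torch.cuda.synchronize()  # wait for the GPU kernel to finish before reporting

print("Result shape:", tuple(c.shape))
```

On H100- or H200-class hardware, dense half-precision math of this kind is exactly the workload the chips are built to accelerate.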
With AI expected to contribute $13 trillion to the global economy by 2030, GMI Cloud faces competition from established players like CoreWeave, Nebius, and Google Cloud's Vertex AI. However, the company's unique features and experienced team of AI engineers and HPC experts position it strongly in this rapidly growing market [1][2].