Curated by THEOUTPOST
On Thu, 3 Apr, 12:02 AM UTC
2 Sources
[1]
Parasail says its fleet of on-demand GPUs is larger than Oracle's entire cloud | TechCrunch
Cloud infrastructure is dominated by several large industry players: AWS, Microsoft's Azure, and Google Cloud. While to some it may look like AI is headed in a similar direction, the founders of Parasail think AI infrastructure will look very different -- and are betting their company's fate on it.

Parasail works with dozens of providers to deliver on-demand GPUs for companies and enterprises looking to build AI models and applications. Parasail gives customers access to hardware, including Nvidia's H100, H200, A100, and 4090 GPUs, at a fraction of the cost that incumbents charge, according to the company.

"There's basically three cloud vendors who run the internet, and that isn't exactly how the internet is being rebuilt when you look at AI," Tim Harris, one of Parasail's co-founders and the CEO of Swift Navigation, told TechCrunch. "It's much more fragmented. The compute is much more fungible and fluid, so you can actually inherently run it in a more horizontal nature, and that's really what we were trying to do." Harris added, "We didn't want a world where AI was controlled from soup to nuts by the hyperscalers."

Harris and Parasail CEO Mike Henry had the idea for the startup a few years back. Henry, the former CPO of Groq, told TechCrunch that he had spent a long time thinking about what it would take to build AI infrastructure that could compete with Nvidia. He said that when he realized AI infrastructure was being rapidly built up by numerous players, he saw an opportunity for a horizontal move.

According to Henry, the rapid clip of innovation happening in AI hardware was making it difficult for companies to keep up. "We had to really focus on, how do we make [this] as simple as possible for the customer?" Henry said. "They're barely keeping up with the open-source model releases alone."

Harris and Henry got started on the company back in 2023, hired an engineering team, and began building in early 2024.
Today, there's no shortage of vendors looking to help enterprises and other companies build and scale their AI products. From giants like Nvidia and Microsoft to startups such as Together AI and Lepton AI, customers have plenty of options.

The Parasail founders don't think they're all one and the same. Sure, all these vendors provide GPUs and AI infrastructure, Henry said, but he thinks the proprietary technology Parasail has running under the hood helps it stand out. This technology is what knits Parasail's GPUs from various sources into a single pool.

Wednesday marks the official launch of its platform, but Parasail is already working with dozens of customers including Elicit, Weights & Biases, and Rasa. The company has also raised $10 million from a seed round in 2024 that had participation from Basis Set Ventures, Threshold Ventures, Buckley Ventures, and Black Opal Ventures.

To gain meaningful market share, Parasail will have to go head-to-head with hyperscalers and bet on the demand for GPUs continuing to grow. There is reason to believe that it will, but there are also signs -- like Microsoft canceling some of its data center contracts -- that the predictions around needed AI infrastructure may be a bit overblown.

"We see literally no end [to] the demand," Harris said. "It's really that customers have a hard time doing it -- have a hard time scaling AI. The models now are getting to a place where [companies] can just grab open-source models and pretty much run them, but then being able to get access to GPUs, access to data centers, all the optimizations... we can do that with a click of a button."
[2]
Parasail promises to power any AI workload with on-demand access to cloud-based GPUs - SiliconANGLE
Artificial intelligence infrastructure startup Parasail Inc. today announced the launch of its new AI Deployment Network, a cloud-hosted platform that provides enterprises with immediate, cost-efficient and on-demand access to powerful graphics processing units.

Parasail says its AI Deployment Network is special because it allows companies to tap into a massive, contract-free pool of high-performance GPUs at highly competitive rates, making it easy to scale any AI workload in a matter of minutes. It provides access to a wide range of GPUs, including Nvidia Corp.'s beefy H100 and H200 chips, plus A100 and 4090 processors for more specialized tasks.

The startup further claims that its on-demand AI compute platform delivers the best performance of any GPU resource provider, thanks to its proprietary orchestration engine that matches workloads with its global network of GPUs. Through this, it says, it's able to optimize costs for customers while eliminating any complexities around workload management.

In other words, Parasail simplifies access to AI infrastructure, providing computing resources that can be accessed at any time of day or night and with no commitments, to support rapid deployments, AI experimentation and scaling of existing production workloads.

The startup was founded by AI infrastructure veterans Mike Henry and Tim Harris and is backed by $10 million in seed funding from investors including Basis Set Ventures, Threshold Ventures, Buckley Ventures and Black Opal Ventures.

Henry, the startup's chief executive, said there's a lot of demand for true "on-demand access" to GPU resources, because few infrastructure providers offer it at the same scale as his company does. "In reality, legacy cloud providers use small amounts of compute capacity to lure customers into long-term contracts," Henry said.
"At Parasail, we are providing the first real-time, true on-demand access to massive compute without the hidden constraints." With Parasail, companies can deploy production-ready endpoints and gain access to dozens of GPUs within a matter of hours, with minimal setup or configuration involved, allowing them to accelerate almost any kind of AI workload, the startup claims. In doing this, they can benefit from a two-to-five times cost advantage versus other infrastructure providers, it says. The startup gathered a slew of testimonials from early adopters to back up its claims. For instance, Elicit Research PBC, which has built an AI-powered research assistant similar to Microsoft Corp.'s Researcher agents and Google LLC's Deep Research tool, tapped Parasail's cloud to screen hundreds of thousands of scientific papers on demand. "Elicit is using LLMs to screen more than 100,000 scientific papers each day, but the cost of high-quality, real-time processing was prohibitive," said Elicit CEO Andreas Stuhlmüller. "Parasail was essential for removing this bottleneck." The popular LLM development platform Weights & Biases Inc. is another customer, using Parasail's cloud to support its deployment of the popular DeepSeek R1 reasoning model when it took the industry by storm earlier this year. "Parasail moved at lightning speed to get us set up with massive DeepSeek capacity and top-shelf throughput," said Weights & Biases Chief Technology Officer Shawn Lewis. "They will give you the latest and greatest faster than anyone else." Parasail has certainly made a splash and there's a lot to like about the on-demand GPU access it provides. That will be all the more true if it can cater to the large-scale deployment requirements of the world's biggest enterprises in future. That said, it's far from alone in offering simpler access to GPU resources these days. 
One of its biggest rivals looks to be CoreWeave Inc., which has built a GPU cloud that's dedicated to AI workloads and notably went public last week through a somewhat disappointing initial public offering. Other rivals include GMI Cloud Inc., which is focused on the rapid deployment of GPUs and claims to be able to do so in seconds. There's also Together Computer Inc., which has raised millions of dollars in capital from backers including Salesforce Inc. And the decentralized cloud storage company Storj Labs Inc. got into the GPU game last year after acquiring a startup called Valdi Labs PBC.

While the industry is competitive, Henry said the prospects are looking good for Parasail because most enterprises are shifting toward a multi-supplier aggregation infrastructure model. "The future of AI infrastructure isn't about a single cloud provider," he said. "It's about an interconnected network of high-performance compute providers."
Parasail, an artificial intelligence infrastructure startup, has officially launched its AI Deployment Network, a cloud-hosted platform providing enterprises with immediate, cost-efficient, and on-demand access to powerful graphics processing units (GPUs). The company claims its fleet of on-demand GPUs is larger than Oracle's entire cloud, positioning itself as a significant player in the rapidly evolving AI infrastructure landscape [1][2].
Founded in 2023 by AI infrastructure veterans Mike Henry and Tim Harris, Parasail aims to disrupt the traditional cloud infrastructure model dominated by tech giants like AWS, Microsoft Azure, and Google Cloud. The startup's founders believe that AI infrastructure will develop differently from conventional cloud services, with a more fragmented and fluid compute environment [1].
Tim Harris, one of Parasail's co-founders, stated, "We didn't want a world where AI was controlled from soup to nuts by the hyperscalers." This vision drives Parasail's mission to provide a more accessible and flexible AI infrastructure solution [1].
Parasail's platform stands out due to its proprietary orchestration engine, which efficiently matches workloads with its global network of GPUs. This technology allows the company to optimize costs for customers while simplifying workload management [2].
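Parasail has not published details of its orchestration engine, but the basic idea behind this kind of aggregation -- matching each workload to the cheapest provider that meets its requirements -- can be sketched as a simple greedy selection. Everything below (provider names, prices, the `match_workload` helper) is hypothetical and for illustration only; it does not reflect Parasail's actual engine or pricing:

```python
# Hypothetical sketch of cost-aware workload-to-GPU matching.
# Provider names, prices, and the matching logic are illustrative only.
from dataclasses import dataclass

@dataclass
class GpuOffer:
    provider: str
    gpu_model: str       # e.g. "H100", "A100"
    vram_gb: int
    price_per_hour: float
    available: int       # GPUs currently free at this provider

def match_workload(offers, min_vram_gb, gpus_needed):
    """Pick the cheapest offer that satisfies the workload's
    memory requirement and has enough free GPUs."""
    candidates = [
        o for o in offers
        if o.vram_gb >= min_vram_gb and o.available >= gpus_needed
    ]
    if not candidates:
        return None  # no provider can host this workload right now
    return min(candidates, key=lambda o: o.price_per_hour)

offers = [
    GpuOffer("provider-a", "H100", 80, 2.90, available=4),
    GpuOffer("provider-b", "A100", 40, 1.10, available=16),
    GpuOffer("provider-c", "H200", 141, 3.50, available=2),
]

# A large inference job needing 80 GB of VRAM per GPU across 2 GPUs:
best = match_workload(offers, min_vram_gb=80, gpus_needed=2)
print(best.provider, best.gpu_model)  # prints "provider-a H100"
```

A production scheduler would also weigh throughput, network locality, and spot-price volatility across providers, but a greedy cost match like this captures the core aggregation idea.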
The startup offers access to a wide range of GPUs, including Nvidia's H100, H200, A100, and 4090 processors, catering to various AI workloads. Parasail claims to provide these resources at a fraction of the cost charged by incumbent providers, with a reported two-to-five times cost advantage [1][2].
Despite being a newcomer, Parasail has already attracted several high-profile customers, including Elicit, Weights & Biases, and Rasa. These early adopters have reported significant benefits from using Parasail's platform [1][2].
Andreas Stuhlmüller, CEO of Elicit, shared, "Elicit is using LLMs to screen more than 100,000 scientific papers each day, but the cost of high-quality, real-time processing was prohibitive. Parasail was essential for removing this bottleneck." [2]
Parasail has secured $10 million in seed funding from investors including Basis Set Ventures, Threshold Ventures, Buckley Ventures, and Black Opal Ventures. This financial backing positions the company to compete effectively in the growing AI infrastructure market [1][2].
As demand for GPU resources continues to surge, Parasail sees a bright future ahead. Tim Harris, one of the co-founders, expressed confidence in the market potential: "We see literally no end [to] the demand. It's really that customers have a hard time doing it -- have a hard time scaling AI." [1]
While Parasail's offering is innovative, the company faces competition from both established cloud providers and other AI infrastructure startups. Notable competitors include CoreWeave, GMI Cloud, Together Computer, and Storj Labs, each offering unique solutions in the GPU-as-a-service space [2].
Despite the competitive environment, Parasail's leadership believes that the future of AI infrastructure lies in an interconnected network of high-performance compute providers rather than reliance on a single cloud provider [2].
As the AI industry continues to evolve rapidly, Parasail's launch marks a significant development in the quest to make powerful GPU resources more accessible and cost-effective for businesses of all sizes.