6 Sources
[1]
Spain's Multiverse raises $217 million for compressing AI models
PARIS, June 12 (Reuters) - Spanish AI firm Multiverse Computing said on Thursday it has raised 189 million euros ($217 million) from investment firm Bullhound Capital, HP Inc (HPQ.N), Forgepoint Capital and Toshiba, to compress AI language models. The company said it has developed a compression technology capable of reducing the size of large language models (LLMs) by up to 95% without hurting performance, while reducing costs by up to 80%. It combines ideas from quantum physics and machine learning in ways that mimic quantum systems but doesn't need a quantum computer. The latest funding round makes Multiverse the largest Spanish AI startup, joining the list of top European AI startups such as Mistral, Aleph Alpha, Synthesia, Poolside and Owkin. Multiverse has launched compressed versions of LLMs such as Meta's (META.O) Llama, China's DeepSeek and France's Mistral, with additional models coming soon, the company said. "We are focused just on compressing the most used open-source LLMs, the ones that the companies are already using," Chief Executive Officer Enrique Lizaso Olmos said. "When you go to a corporation, most of them are using the Llama family of models." The tool is also available on the Amazon (AMZN.O) Web Services AI marketplace. ($1 = 0.8709 euros) Reporting by Supantha Mukherjee in Paris; Editing by Lisa Shumaker
[2]
Multiverse Computing Raises $215M At A 5x Valuation Jump To Help Speed Up LLM Rollout
Spain's quantum software startup Multiverse Computing has secured $215 million in Series B funding, the company told Crunchbase News. Of the $215 million in funding, $170 million was in the form of equity and $45 million in "grants and partnerships," according to Enrique Lizaso Olmos, founder and CEO of Multiverse Computing. He declined to divulge valuation, but sources say San Sebastian-based Multiverse is now valued at over $500 million, which would be "a 5x jump" from its $108 million valuation following its Series A round in March 2024. In total, it has raised about $250 million since its 2019 inception. Bullhound Capital led the Series B financing, which included participation from HP Tech Ventures, Forgepoint Capital International, CDP Venture Capital and Toshiba, among others. Multiverse Computing was initially founded to develop quantum computing solutions for complex, real-world problems in finance, energy and manufacturing. In 2023, it discovered its quantum expertise could be applied to the artificial intelligence sector. So the company developed CompactifAI, its proprietary AI compressor. "Currently, cost and performance are seen as tradeoffs in AI. Bigger models are more powerful but more expensive to run; smaller models are cheaper but the results are less precise," said Olmos. "CompactifAI overturns this dilemma because it's able to shrink a large language model without compromising on performance -- making it more energy-efficient, affordable and viable for edge devices like smartphones and drones." He further claims that CompactifAI can shrink a model by up to 95% with only a 2%-3% loss in accuracy. Multiverse rolled out the technology to initial customers in 2024. Its goal is that widespread adoption will help address "the massive costs" prohibiting the rollout of large language models. Olmos declined to reveal revenue figures, sharing only that the company has "been more than doubling" its revenue each year. 
Presently, Multiverse has about 160 employees. Per Roman, co-founder and managing partner at Bullhound Capital, told Crunchbase News that CompactifAI opens up "vast" opportunities. "Some of their work has brought their most highly compressed models to the edge, and through partnering with a range of firms they will turbocharge the proliferation of local LLMs into a host of devices: from cars and laptops, to IoTs and satellites," he said. Roman also believes Multiverse's traction with hyperscalers is significant. "The decrease in inference costs, lower latency and reduced power consumption that their compressed models offer will form part of these cloud providers' go-to-market strategies," he said. "The reduced cost per token will help increase overall compute usage, while facilitating doing more with less and potentially addressing sovereign or private cloud rollouts." Last year set a new high for venture dollars invested in quantum computing -- a computing paradigm that can, for certain problems, perform many complex calculations simultaneously and far faster than classical machines. In 2024, venture capital-backed quantum startups raised $1.9 billion in 62 rounds, according to Crunchbase data. The dollar figure is a 138% jump from the $789 million raised in 67 rounds in 2023.
[3]
Big Brains, Tiny Models: Spain's Multiverse Computing Bags $215M to Shrink AI for Smartphones - Decrypt
The method uses tensor networks from quantum physics to compress models and "heals" them with retraining claimed to run 50% faster than training the original. A Spanish AI startup has just convinced investors to hand over $215 million based on a bold claim: they can shrink large language models by 95% without compromising their performance. Multiverse Computing's innovation hinges on its CompactifAI technology, a compression method that borrows mathematical concepts from quantum physics to shrink AI models down to smartphone size. The San Sebastian company says that its compressed Llama-2 7B model runs 25% faster at inference while using 70% fewer parameters, with accuracy dropping just 2-3%. If validated at scale, this could address AI's elephant-sized problem: models so massive they require specialized data centers just to operate. "For the first time in history, we are able to profile the inner workings of a neural network to eliminate billions of spurious correlations to truly optimize all sorts of AI models," Román Orús, Multiverse's chief scientific officer, said in a blog post on Thursday. Bullhound Capital led the $215 million Series B round with backing from HP Tech Ventures and Toshiba. Applying quantum-inspired concepts to tackle one of AI's most pressing issues sounds improbable -- but if the research holds up, the results are real. Unlike traditional compression that simply cuts neurons or reduces numerical precision, CompactifAI uses tensor networks -- mathematical structures that physicists developed to track particle interactions without drowning in data. The process works like origami for AI models: weight matrices get folded into smaller, interconnected structures called Matrix Product Operators. Instead of storing every connection between neurons, the system preserves only meaningful correlations while discarding redundant patterns -- information or relationships that are repeated over and over again.
Multiverse discovered that AI models aren't uniformly compressible. Early layers prove fragile, while deeper layers -- recently shown to be less critical for performance -- can withstand aggressive compression. This selective approach lets them achieve dramatic size reductions where other methods fail. After compression, models undergo brief "healing" -- retraining that takes less than one epoch thanks to the reduced parameter count. The company claims this restoration process runs 50% faster than training original models due to decreased GPU-CPU transfer loads. Long story short -- per the company's own claims -- you start with a model, run it through CompactifAI, and end up with a compressed version that has less than 50% of its parameters, can run at twice the inference speed, costs a lot less, and is just as capable as the original. In its research, the team shows you can reduce the Llama-2 7B model's memory needs by 93%, cut the number of parameters by 70%, speed up training by 50%, and speed up answering (inference) by 25% -- while only losing 2-3% accuracy. Traditional shrinking methods like quantization (reducing numerical precision, like using fewer decimal places), pruning (cutting out less important neurons entirely, like trimming dead branches from a tree), or distillation (training a smaller model to mimic a larger one's behavior) are not even close to achieving these numbers. Multiverse already serves over 100 clients including Bosch and the Bank of Canada, applying its quantum-inspired algorithms beyond AI to energy optimization and financial modeling. The Spanish government co-invested €67 million in March, pushing total funding above $250 million. Currently offering compressed versions of open-source models like Llama and Mistral through AWS, the company plans to expand to DeepSeek R1 and other reasoning models. Proprietary systems such as OpenAI's GPT models or Anthropic's Claude remain off-limits, since they are not available for tinkering or study.
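The tensor-network idea can be illustrated with its simplest special case, a truncated SVD: a weight matrix is factored into two thin matrices, keeping only the largest singular values (the "meaningful correlations") and discarding the rest. This is a minimal sketch of low-rank factorization, not Multiverse's actual MPO pipeline; the matrix size and rank here are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative stand-in for tensor-network compression: truncated SVD.
# A real Matrix Product Operator decomposition generalizes this idea
# to chains of small interconnected tensors.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))   # a dense layer's weight matrix

U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 128                              # keep the 128 largest singular values
A = U[:, :rank] * s[:rank]              # shape (1024, 128)
B = Vt[:rank, :]                        # shape (128, 1024)

original = W.size                       # 1,048,576 parameters
compressed = A.size + B.size            # 262,144 parameters
print(f"compression: {1 - compressed / original:.0%} fewer parameters")
# prints: compression: 75% fewer parameters
# The factored layer then computes x @ A @ B instead of x @ W.
```

A brief "healing" retraining pass, as described above, would then fine-tune the small factors to recover any accuracy lost in the truncation.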
The technology's promise extends beyond cost savings. HP Tech Ventures' involvement signals interest in edge AI deployment -- running sophisticated models locally rather than on cloud servers. "Multiverse's innovative approach has the potential to bring AI benefits of enhanced performance, personalization, privacy and cost efficiency to life for companies of any size," Tuan Tran, HP's President of Technology and Innovation, said. So, if you find yourself running DeepSeek R1 on your smartphone someday, these dudes may be the ones to thank.
[4]
Multiverse Computing bags $215M for its quantum-inspired AI model compression tech - SiliconANGLE
Multiverse Computing S.L. said today it has raised $215 million in funding to accelerate the deployment of its quantum computing-inspired artificial intelligence model compression technology, which promises to reduce the costs of AI inference dramatically without impacting performance. The Series B investment was led by Bullhound Capital and saw the participation of numerous others, including HP Inc.'s HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Quantonation, Toshiba Corp. and Capital Riesgo de Euskadi - Grupo SPRI. The round represents a significant capital injection for the startup, which last raised $25 million through a Series A investment in March 2024. The big boost illustrates the enormous potential of its technology, which the company says can reduce the size of large language models by up to 95% without them taking any performance hit. That has dramatic implications for the cost of AI inference, or running those models in production. When AI applications scale up, the cost of running them can quickly run into millions of dollars. Such costs are extremely prohibitive, hence the desire to find a way to run models more affordably, and that's what Multiverse aims to provide. The challenge is that LLMs require powerful hardware, with the most advanced applications utilizing enormous clusters of Nvidia Corp.'s graphics processing units, which cost thousands of dollars each and use massive amounts of energy. What Multiverse does is compress those LLMs so they can run on much smaller clusters. Its CompactifAI technology does this through the use of "quantum-inspired algorithms," which are advanced tensor networks based on the principles of quantum computing. These algorithms have the unique ability to locate the most relevant parts of any AI model, as well as the less relevant parts.
By doing this, it claims, it can strip out the unnecessary bits of a model and significantly reduce its size, without any noticeable performance impact. Multiverse co-founder and Chief Technology Officer Román Orús, a pioneer of tensor network methods, said they work by profiling the inner workings of the neural networks that power LLMs. "We can eliminate billions of spurious correlations to truly optimize all sorts of AI models," he said. Besides offering its technology, the company has also created a library of CompactifAI models, highly compressed versions of leading open-source LLMs such as Llama, Mistral and DeepSeek that retain their original accuracy. According to Multiverse, these compacted models are anywhere from four to 12 times faster than the originals, allowing inference costs to be reduced by anywhere from 50% to 80%. It says the CompactifAI models can run in the cloud, in private on-premises data centers or, in the case of its "ultra-compressed LLMs," even on edge devices such as personal computers, smartphones, cars and other devices such as the Raspberry Pi. The company insists that CompactifAI is much more effective than existing model compression techniques such as quantization and pruning, which significantly hamper the accuracy and performance of LLMs. It adds that the technology can also be used for AI training, accelerating the time it takes to train and fine-tune models by up to 1,000 times, meaning dramatically lower costs. Co-founder and Chief Executive Enrique Lizaso Olmos said he's trying to change the prevailing wisdom that shrinking LLMs comes at a cost in terms of performance. "What started as a breakthrough in model compression quickly proved transformative," he said. "We're unlocking new efficiencies in AI deployment and earning rapid adoption for our ability to radically reduce the hardware requirements for running AI models."
The startup has already convinced some very big enterprises of the advantages of its quantum-inspired algorithms, including HP, which has been using the technology to shrink AI to the point where it can be run locally on personal computers. "By making AI applications more accessible at the edge, Multiverse's innovative approach has the potential to bring AI benefits of enhanced performance, personalization, privacy and cost efficiency to life for companies of any size," said HP President of Technology and Innovation Tuan Tran. Bullhound Capital co-founder and Managing Partner Per Roman said he's backing Multiverse because there's a "global need" for more efficiency in AI models. "Román Orús has convinced us that he and his team of engineers are developing truly world-class solutions in this highly complex and compute-intensive field," he said.
[5]
Spain's Multiverse raises $217 million for compressing AI models
Spanish AI firm Multiverse Computing said on Thursday it has raised 189 million euros ($217 million) from investment firm Bullhound Capital, HP Inc, Forgepoint Capital and Toshiba, to compress AI language models. The company said it has developed a compression technology capable of reducing the size of large language models (LLMs) by up to 95% without hurting performance and reducing costs by up to 80%. It combines ideas from quantum physics and machine learning in ways that mimic quantum systems but doesn't need a quantum computer. The latest funding round makes Multiverse the largest Spanish AI startup, joining the list of top European AI startups such as Mistral, Aleph Alpha, Synthesia, Poolside and Owkin. Multiverse has launched compressed versions of LLMs such as Meta's Llama, China's DeepSeek and France's Mistral, with additional models coming soon, the company said. "We are focused just on compressing the most used open-source LLMs, the ones that the companies are already using," Chief Executive Officer Enrique Lizaso Olmos said. "When you go to a corporation, most of them are using the Llama family of models." The tool is also available on the Amazon Web Services AI marketplace.
[6]
Spain's Multiverse raises $217 million for compressing AI models
PARIS (Reuters) - Spanish AI firm Multiverse Computing said on Thursday it has raised 189 million euros ($217 million) from investment firm Bullhound Capital, HP Inc, Forgepoint Capital and Toshiba, to compress AI language models. The company said it has developed a compression technology capable of reducing the size of large language models (LLMs) by up to 95% without hurting performance and reducing costs by up to 80%. It combines ideas from quantum physics and machine learning in ways that mimic quantum systems but doesn't need a quantum computer. The latest funding round makes Multiverse the largest Spanish AI startup, joining the list of top European AI startups such as Mistral, Aleph Alpha, Synthesia, Poolside and Owkin. Multiverse has launched compressed versions of LLMs such as Meta's Llama, China's DeepSeek and France's Mistral, with additional models coming soon, the company said. "We are focused just on compressing the most used open-source LLMs, the ones that the companies are already using," Chief Executive Officer Enrique Lizaso Olmos said. "When you go to a corporation, most of them are using the Llama family of models." The tool is also available on the Amazon Web Services AI marketplace.
Spanish AI startup Multiverse Computing secures $217 million in funding to advance its quantum-inspired AI model compression technology, promising to dramatically reduce the size and cost of running large language models.
Multiverse Computing, a Spanish AI firm, has raised €189 million ($217 million) in a Series B funding round led by Bullhound Capital, with participation from HP Inc, Forgepoint Capital, Toshiba, and others [1][2]. This significant investment positions Multiverse as the largest Spanish AI startup, joining the ranks of top European AI companies [1].
At the heart of Multiverse's appeal is its groundbreaking CompactifAI technology, which promises to revolutionize the AI industry by dramatically reducing the size of large language models (LLMs) [3]. The company claims its compression method can shrink LLMs by up to 95% without compromising performance, while simultaneously reducing operational costs by up to 80% [1][4].
Multiverse's technology uniquely combines concepts from quantum physics and machine learning [1]. The CompactifAI system utilizes tensor networks, mathematical structures originally developed for tracking particle interactions in quantum physics [3]. This approach allows for the preservation of meaningful correlations within AI models while discarding redundant patterns, resulting in significantly smaller yet equally capable models [3].
The company has already launched compressed versions of popular open-source LLMs, including Meta's Llama, China's DeepSeek, and France's Mistral [1]. In one example, Multiverse demonstrated that its compressed Llama-2 7B model runs 25% faster at inference while using 70% fewer parameters, with only a 2-3% drop in accuracy [3][4].
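As a back-of-the-envelope check on what a 70% parameter reduction means in practice, the arithmetic below assumes roughly 7 billion original parameters and 2 bytes per parameter (fp16 storage); both figures are simplifying assumptions, not numbers reported by Multiverse.

```python
# Rough arithmetic on the reported Llama-2 7B compression figures.
params = 7_000_000_000            # ~7B parameters in the original model (assumed)
bytes_per_param = 2               # fp16 storage, 2 bytes each (assumed)

compressed_params = params * (1 - 0.70)   # 70% fewer parameters, per the claim
orig_gb = params * bytes_per_param / 1e9
comp_gb = compressed_params * bytes_per_param / 1e9
print(f"weights: {orig_gb:.1f} GB -> {comp_gb:.1f} GB")
# prints: weights: 14.0 GB -> 4.2 GB
```

Note that the separately reported 93% memory reduction implies further savings beyond parameter count alone, such as reduced numerical precision.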
Multiverse's technology has far-reaching implications for the AI industry:
Cost Reduction: By significantly decreasing the hardware requirements for running AI models, Multiverse aims to make AI more accessible and affordable for companies of all sizes [4][5].
Edge AI Deployment: The ability to run sophisticated models locally on devices like smartphones, laptops, and IoT devices opens up new possibilities for edge computing [2][3].
Energy Efficiency: Compressed models require less computational power, potentially leading to more environmentally friendly AI applications [3].
With over 100 clients, including Bosch and the Bank of Canada, Multiverse is rapidly gaining traction in the industry [3]. The company's compressed models are available on the Amazon Web Services AI marketplace, making them easily accessible to a wide range of businesses [1][5].
Multiverse plans to expand its offerings to include more reasoning models and continue refining its compression technology [3]. The substantial funding is expected to accelerate the deployment and further development of its quantum-inspired AI model compression technology [4].
As the AI industry grapples with the challenges of scaling and cost, Multiverse Computing's innovative approach could play a crucial role in shaping the future of AI deployment and accessibility across various sectors.