On Wed, 19 Mar, 12:12 AM UTC
6 Sources
[1]
AI puts value in data. So how do we get it out?
Sponsored Feature AI is driving an explosion in infrastructure spending. But while GPU-enabled compute may grab the headlines, data management and storage are also central to determining whether enterprises ultimately realize value from their AI investments and drive broader transformation efforts.

The worldwide AI infrastructure market is expected to hit $100bn by 2027, according to IDC. Servers are expected to account for the lion's share of this spending, but storage investment is increasing in line with overall growth as tech leaders cater for the massive datasets that AI requires, along with the need for training, checkpoint and inference data repositories.

While AI is fueling this spending boom, many of the underlying challenges facing CIOs haven't changed, explains HPE's SVP and GM for storage, Jim O'Dorisio. These include driving innovation, streamlining operations, and reducing the total cost of operations, all within the maelstrom of a constantly evolving tech and business landscape. Data, and therefore storage, plays into all of this.

AI relies on data. But so do the myriad other, more traditional, operations that companies regularly undertake. And it must be the right data, available to the right systems at the right time, and at the right speed, says O'Dorisio. "If you go back 15 years ago, 10 years ago, storage was really just where data sat. Increasingly, it's where we create value now, right," he explains.

Dealing with the issues of data gravity and location is particularly challenging, a situation aggravated by the broader span and complexity of customer IT environments. The last two decades have seen a rush to the cloud, for example. But many enterprises are now wondering just how much they actually need to be off premises, particularly when it comes to managing all the data they need in order to realize value from AI. That decision may come down to higher-than-expected costs, or a given cloud provider's inability to meet strict organizational performance or security requirements, especially for real-time and/or AI workloads. IDC notes that even cloud-native organizations are beginning to question whether private cloud or on-prem has a role to play for them.

And beyond creating value through AI or other advanced applications, enterprise data still needs to be protected and managed. The cyberthreat is more acute than ever, with threat actors themselves enthusiastically leveraging AI. The cyber challenge is clearly right up there, says O'Dorisio, but this repatriation of data also creates additional hybrid complexity.

There's sustainability to consider as well. Complex systems require energy to run, and data should be managed efficiently. But the underlying storage should also be as efficient as possible. That includes optimizing energy consumption, but also considering the impact of overprovisioning and unduly short life cycles.

This is a legacy problem

The crucial question for an organization's storage systems, then, is whether they can keep up with the speed of change. The answer, too often, is that they can't, for multiple reasons.

Traditional architectures that rigidly tie together compute and storage can pose problems when scaling up to meet increasingly complex or large workloads. Expanding storage capacity can mean spending on compute that isn't really needed, and vice versa. This can lead to silos of systems built out for a particular business unit or workload, or a particular location, for example core datacenters or edge deployments.
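To make that overprovisioning arithmetic concrete, here is a toy back-of-the-envelope sketch. The node sizes and workload requirements are invented purely for illustration; they are not HPE figures.

```python
import math

# Illustrative comparison of coupled vs. disaggregated scaling.
# All numbers are hypothetical, chosen only to show the effect.

NODE_STORAGE_TB = 100      # storage bundled into each coupled node
NODE_COMPUTE_CORES = 64    # compute bundled into each coupled node

need_storage_tb = 900      # what the workload actually needs
need_compute_cores = 128

# Coupled: node count must satisfy the larger of the two requirements.
nodes = max(math.ceil(need_storage_tb / NODE_STORAGE_TB),
            math.ceil(need_compute_cores / NODE_COMPUTE_CORES))
print(f"coupled buys:       {nodes * NODE_STORAGE_TB} TB, {nodes * NODE_COMPUTE_CORES} cores")

# Disaggregated: storage shelves and compute nodes are bought independently.
print(f"disaggregated buys: {need_storage_tb} TB, {need_compute_cores} cores")

# Output:
#   coupled buys:       900 TB, 576 cores   -> 448 cores sit idle
#   disaggregated buys: 900 TB, 128 cores
```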
Likewise, legacy architectures are often targeted at specific types of storage: block, file or object. But AI doesn't distinguish between data formats. It generally wants to chew through all the data it can, wherever it is. This lack of flexibility can be aggravated by legacy systems that were designed for a particular type of organization or scale, e.g. "enterprise" or a medium-sized business.

Integrating a raft of standalone systems can present a clear architectural issue as well as management challenges. Disparate hardware often means disparate management systems and consoles, for example, leaving managers with a fragmented view of their overall estate. That situation can force team members to specialize in a subset of the organization's infrastructure, which often results in inefficiencies and increased operational costs.

These fragmented, siloed, and often hard-to-scale systems don't lend themselves well to the hybrid operations that are increasingly becoming the norm. Any organization contemplating repatriating some or all of its data will likely balk at losing the ease of managing that data in the cloud. All of this can add up to a massive bottleneck when it comes to maximizing the value of the data available. "The architectures are typically complex, and they're siloed," explains O'Dorisio. "And it makes extracting value from the data very difficult."

Where is the value?

HPE has sought to address these challenges with its HPE Alletra storage MP platform. The architecture disaggregates storage and compute, meaning each can be scaled separately. So, as the demands of AI increase, infrastructure can be scaled incrementally, sidestepping the likelihood of silos or wasteful overprovisioning, says HPE. This is bolstered by HPE's Timeless program, which provides a free, nondisruptive controller refresh, cutting TCO by 30 percent compared to standard forklift upgrades, according to HPE estimates.

The MP stands for multiprotocol, with common underlying hardware optimized for particular applications. HPE Alletra Storage MP B10000 modernizes enterprise block storage with AI-driven cloud management, disaggregated scaling, and 100 percent data availability for all workloads, says HPE, while the HPE Alletra Storage MP X10000 is purpose-built for intelligent, high-performance object storage. The AMD EPYC embedded processors at their core are designed to offer a scalable x86 CPU portfolio delivering maximum performance with enterprise-class reliability in a power-optimized profile.

An upcoming release of the X10000 will add the ability to tag data and attach metadata as data is being stored. Users will be able to add vector embeddings and similar enrichments to support downstream Gen AI RAG pipelines (see the illustrative sketch below). "Our whole notion is really to add the intelligence and create value as the data is being stored, which really significantly reduces time to value for our customers," O'Dorisio says. Together with the unified global namespace in HPE Ezmeral Data Fabric, customers can aggregate data from across their enterprise to fuel AI initiatives.

But even if tech leaders have good reason to situate some or all of their storage infrastructure outside the cloud, giving up the ease of management the cloud offers is a harder sell. Step forward HPE GreenLake cloud, designed to deliver a single cloud operating model for managing the entire storage estate across core, edge and cloud.
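HPE hasn't published the X10000 tagging interface described above, so the following is only a generic sketch of the pattern it describes: computing an embedding at ingest and attaching it, with other tags, as object metadata on any S3-compatible store, so downstream RAG pipelines receive enriched objects rather than raw bytes. The endpoint, bucket, and embedding function are all hypothetical stand-ins.

```python
import json
import boto3  # works against any S3-compatible object endpoint

def embed(text: str) -> list[float]:
    """Stand-in embedding; a real pipeline would call an embedding model."""
    return [sum(ord(c) for c in text) % 97 / 97.0]  # toy 1-dim vector

# Hypothetical on-prem object endpoint; swap in real credentials/URL.
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.internal")

doc = "Q3 outage postmortem for the payments cluster."
s3.put_object(
    Bucket="ai-ready-data",            # hypothetical bucket
    Key="postmortems/q3.txt",
    Body=doc.encode(),
    Metadata={                         # user metadata travels with the object
        "department": "sre",
        "embedding": json.dumps(embed(doc)),
    },
)
# Note: real embeddings exceed S3's ~2 KB user-metadata limit; production
# systems keep the vector in an index and store only a reference here.
```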
Any form of disruption to IT operations, whether due to a disaster or a cyberattack, is now considered an inevitability rather than a misfortune. However, by harnessing Zerto ransomware detection and recovery software, organizations "can really recover in hours and days, versus maybe weeks and months when you're trying to recover a bunch of data from a cloud," says O'Dorisio.

Intelligent data savings

This intelligent approach to architecture and ownership also supports a reduction in associated emissions by half, O'Dorisio adds, by reducing overprovisioning and the need for forklift upgrades. HPE's own research shows that HPE Alletra Storage MP's disaggregated architecture can reduce storage costs by up to 40 percent. Better still, intelligent self-service provisioning can deliver up to 99 percent operational time savings, calculates the company.

One major global telecom provider recently deployed HPE Alletra Storage MP B10000 to refresh its legacy storage arrays. In the process, the company dramatically reduced the costs associated with support, energy and cooling, as well as datacenter space, says HPE. The move helped reduce operating expenses by more than 70 percent while allowing the telco to accommodate a higher volume of traditional databases as well as more modern applications. The increased storage capacity in a smaller footprint means the telco also now has room in its datacenter to accommodate future growth.

None of that is to suggest that storage in the AI age is anything less than complex. After all, as O'Dorisio says, "The data really spans, from private to the edge to the public cloud. Data sits across all those environments. Data is more heterogeneous." But deploying block, file or object storage services on a common cloud-managed architecture means both managing and extracting value from that data will be much easier and more efficient.
[2]
Unified data platforms reshape AI storage - SiliconANGLE
Metadata matters: Inside HPE's strategy for streamlined storage and accelerated AI adoption

Unified data platforms are reshaping the foundation of enterprise storage as AI adoption accelerates, forcing businesses to overhaul aging architectures and rethink how data is managed, scaled and consumed. Faced with mounting data sprawl and the demands of GPU-accelerated workloads, organizations are turning to storage solutions that unify block, file and object workloads under one streamlined framework. The shift is urgent: Legacy systems can't keep pace with AI-driven environments that require flexibility, centralized control and the ability to seamlessly handle unstructured datasets across hybrid cloud deployments. At the heart of this transformation is a growing push to simplify complexity, reduce operational costs and unlock the full potential of AI at scale, according to Rob Strechay (pictured, right), principal analyst at theCUBE Research.

"The growth has been phenomenal with the Alletra MP and having come out 18 to 24 months ago," Strechay said. "Really what they're doing with it ... the unified data services bringing together a full lifecycle management of the hardware and making that easy, as well as giving that cloud consumption model under GreenLake."

Strechay spoke with fellow analyst Savannah Peterson (left) as they analyzed their recent interview with Patrick Osborne for the "Cloud AI Journey With HPE" interview series, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how unified data platforms are enabling AI scalability, the growing role of hybrid deployments and the increasing importance of metadata and inference in modern storage strategies.

AI's integration into modern IT environments is highlighting the need for unified data platforms that can deliver flexibility without sacrificing performance. Organizations are moving away from piecemeal storage solutions in favor of workload-oriented systems that streamline management and simplify infrastructure, according to Strechay. "This is why people like cloud and really enjoy buying in that methodology," he said. "I think customer adoption ... like Patrick [Osborne] said, they've been voting with their dollars, which has been great to see."

This shift is also tied to enhancing the developer and customer experience, Peterson underscored. Unified data services not only provide technical scalability but also create smoother, more intuitive user journeys. "You highlighted the three things that stood out to me in that particular part of the [Osborne] interview as well," Peterson noted. "It's a unique value proposition that's really mindful of the experience of those developers and of their customers."

The discussion also covered how hybrid and on-premises deployments are seeing renewed attention, especially as AI models require high-speed access to vast datasets, much of which still resides within enterprise walls. Data privacy, regulation and performance concerns are keeping significant AI workloads on-prem, according to Strechay. "I think also a lot of it was about being hybrid," he explained. "The hybrid and on-prem deployments, we see ... 85% of data being used for AI is actually on-premise these days."

Another key takeaway revolved around metadata and how enterprises are prioritizing not just the data itself, but the information about that data.
As AI-driven workloads scale, metadata management becomes a critical differentiator, enabling faster retrieval, smarter organization and more efficient processing. "One quote I want to call out," Peterson said. "[Osborne] said something that's really important that I think sometimes people miss when they're starting to think about their strategy around this. He said, 'That the data about the data is almost as important or more important than the data itself.'"

Metadata has become vital in modern storage systems. AI inferencing is moving closer to the data itself, allowing enterprises to reduce latency and avoid unnecessary data movement (a pattern sketched below), an approach that Strechay fully supports. He remarked on the shift toward embedding inference and retrieval-augmented generation directly within storage platforms to improve efficiency and performance. "What [Osborne's] talking about is the two different types of metadata that people are utilizing for building out these AI systems," he said. "When you start to look at it, he was really on top of it. Enterprise data is essential. It's the core of the enterprise; it's the core of organizations and it's their intellectual property."

While AI often feels like magic, it's the result of extensive planning, development and engineering behind the scenes, according to Peterson. Achieving simplicity in today's highly complex data infrastructures requires significant effort and expertise. "I think that is a key ... that making something simple that is as complex as these storage and data platforms is not easy in the least," Strechay added. "It is about making it a simple customer experience."

The conversation is set to continue at HPE Discover.
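As a concrete and entirely generic illustration of why "the data about the data" pays off at retrieval time, the sketch below filters candidates by metadata before ranking by vector similarity, so only a small, relevant slice of data ever has to move. The corpus, tags, and embeddings are made up for the example.

```python
import numpy as np

# Toy corpus of (text, metadata, embedding). In a real pipeline the
# embeddings come from a model and the metadata from tagging at ingest.
corpus = [
    ("payments outage notes", {"dept": "sre", "year": 2024}, np.array([0.9, 0.1])),
    ("holiday party photos",  {"dept": "hr",  "year": 2024}, np.array([0.1, 0.9])),
    ("switch firmware bug",   {"dept": "sre", "year": 2023}, np.array([0.8, 0.3])),
]

def retrieve(query_vec: np.ndarray, dept: str, top_k: int = 2) -> list[str]:
    # Metadata filter first: cheap, and avoids touching irrelevant data.
    candidates = [(t, e) for t, m, e in corpus if m["dept"] == dept]
    # Then rank only the survivors by cosine similarity.
    def cosine(e):
        return float(e @ query_vec) / (np.linalg.norm(e) * np.linalg.norm(query_vec))
    return [t for t, _ in sorted(candidates, key=lambda te: -cosine(te[1]))[:top_k]]

print(retrieve(np.array([1.0, 0.2]), dept="sre"))
# ['payments outage notes', 'switch firmware bug']
```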
[3]
Enterprise data storage meets AI demands - SiliconANGLE
HPE outlines how AI is transforming enterprise data storage management

Enterprise data storage is entering a new era as artificial intelligence reshapes the demands placed on modern infrastructure. Faced with surging volumes of unstructured data, GPU-driven workloads and hybrid cloud environments, organizations are reengineering storage strategies to keep pace. This shift is fueled by the convergence of flexible cloud consumption models, unified data services and the growing necessity for automation. The focus is no longer on piecemeal solutions but on integrated, workload-optimized systems designed to deliver both scalability and long-term efficiency as AI adoption accelerates, according to Patrick Osborne (pictured), senior vice president and general manager of cloud data infrastructure, HPE Storage, at Hewlett Packard Enterprise Co.

"I think one of the things is that people need to start thinking right now about their data architecture for the future," Osborne said. "This whole market around AI and how it's being applied ... it's just the beginning. Incrementalism is not going to work in this case. They need a new way of thinking. They need to set an order of magnitude ... a different mindset on performance, on scale, on data services that come along with their data."

Osborne spoke with theCUBE Research's Savannah Peterson for the "Cloud AI Journey With HPE" interview series, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how AI is fundamentally transforming enterprise data storage strategies, prompting organizations to adopt unified, scalable and automated solutions that can handle the growing demands of unstructured data, GPU-driven workloads and hybrid cloud environments.

At the heart of today's data strategy is the concept of unified data services. Enterprises are increasingly seeking flexible, scalable storage architectures that can support a range of workloads, from block and file to object storage, without the burden of managing disparate systems. HPE has responded to this demand with its Alletra MP product line, a disaggregated storage architecture built entirely on NVMe and designed for seamless scaling, Osborne explained.

"The opportunity around AI, it's enormous right now," he said. "That puts a huge tax on customers' ability to do data prep, data preparation, data classification. They're re-architecting their storage ecosystem to be able to deal with all of the demands around scale and performance for GPUs and GPU-accelerated workloads."

Companies are no longer satisfied with the traditional approach of buying compute, networking and storage from different vendors. Instead, they are gravitating toward outcome-based solutions that deliver a strong total cost of ownership and ROI, with unified systems that handle diverse workloads efficiently, Osborne explained. "One of the key things I see is that customers, they no longer want to buy on horizontal lines," he said. "They want outcomes. They want to drive toward a very compelling TCO and obviously an ROI for them that they can show back to their business and they want workload-oriented systems."

AI is increasingly being integrated into the management of storage systems, not only driving demand for more robust architectures but also simplifying operations, automating tasks and enhancing observability across storage and hybrid cloud environments.
HPE's GreenLake cloud platform exemplifies this trend by enabling enterprises to manage millions of devices globally, delivering continuous feature updates, improved visibility, sustainability metrics and a streamlined user experience, all of which free IT teams to focus on higher-value initiatives. "We talk about storage for AI, but we're also using AI for storage," Osborne noted. "We are one of the first folks to come out with a full-fledged AI ops stack and it's not just for storage. It helps customers get a view of their applications, their workloads, how they interact with compute, GPU-accelerated compute, networking, storage. We can do a lot of that work for customers and automate that so they can work on more important things to their business."

The fast-moving AI landscape, however, brings both excitement and uncertainty. Osborne acknowledged that while customers are energized by the efficiency gains, they also feel the pressure of keeping pace with technological change. To address this, HPE introduced solutions such as private cloud for AI, offering curated, vertically integrated systems that ease the operational burden for customers. "The one thing that they have angst about is it's a super fast-moving portion of the market," he added. "What we've done within storage and hybrid cloud in particular is we came out with what we call private cloud for AI, very curated set of infrastructure for customers. We can really take the complexity out of that for customers and drive commonality."

The conversation is set to continue at HPE Discover.
[4]
HPE and Nvidia tighten partnership with broad infrastructure enhancements
HPE and Nvidia tighten partnership with broad infrastructure enhancements

Hewlett Packard Enterprise Co. and Nvidia Corp. today are extending the Nvidia AI computing by HPE initiative they announced last year with new servers, software and storage targeted specifically at artificial intelligence use cases. They include a new HPE Private Cloud AI developer system using Nvidia accelerated computing and featuring an integrated control node, a full suite of AI development software and 32 terabytes of integrated storage.

"We're extending the private cloud AI family to make the portfolio more accessible with the ability to start even faster," said Cheri Williams, senior vice president and general manager of private cloud and AI solutions at HPE. "This solution delivers instant AI development capabilities, and it has the same predefined software tools as in our standard Private Cloud AI."

HPE is also expanding its Private Cloud AI offering with support for the new Nvidia AI Data Platform. Private Cloud AI is a self-service cloud that transforms data through continuous data processing, leveraging Nvidia's accelerated computing, networking, AI software and enterprise storage. HPE is unveiling a unified data layer for AI that supports structured, unstructured and streaming data across hybrid cloud environments. The Nvidia AI Data Platform is a customizable reference design that integrates Nvidia accelerated computing, networking and AI software with HPE storage. It will be available in HPE Private Cloud AI and support HPE Alletra Storage MP B10000, HPE Alletra Storage MP X10000 and HPE GreenLake for File Storage.

Alletra Storage MP is a software-defined, scale-out data system combining all-flash object storage with unified management capabilities. It will be expanded to support Nvidia Networking. Its integrated HPE Data Fabric supplies AI models with structured, unstructured and streaming data. The fabric software will support HPE Alletra Storage MP X10000 and Apache Iceberg as the data backbone of the HPE Private Cloud AI data lakehouse. Also, HPE GreenLake for File Storage, the HPE Private Cloud AI file storage service, is now validated for enterprise AI factories under the new Nvidia-Certified Storage Program.

AI-optimized storage

Enhancements to Alletra Storage MP X10000 will enable organizations to create AI-ready object data with new automated, inline metadata tagging for faster ingestion by downstream AI applications. HPE said it expects to improve the X10000's performance further through a collaboration with Nvidia that will enable a direct data path for remote direct memory access (RDMA) transfers between GPU memory, system memory and the X10000.

The Alletra Storage MP B10000 multiprotocol storage subsystem is expanding to address a wider variety of workloads via unified file access, enhanced ransomware protection and simpler data movement between on-premises data centers and public clouds. HPE said it's the first vendor to support disaggregated scale-out unified block and file storage on a single operating system and storage architecture, a combination that allows the entire storage environment to be managed as one. The new HPE Alletra Block Storage for Azure simplifies data management and workload placement across hybrid cloud with software-defined storage on Microsoft Corp.'s Azure cloud, while AIOps automate provisioning and support.
New multi-layered ransomware detection and protection in the B10000 array, together with Zerto data replication and protection software, protects data across storage arrays and applications.

Private Cloud AI expands Nvidia support

HPE Private Cloud AI now also supports rapid deployment of Nvidia blueprints covering agentic and physical AI applications. HPE said that means pre-validated blueprints like the Multimodal PDF Data Extraction Blueprint and Digital Twins Blueprint can be deployed faster.

HPE's OpsRamp hybrid digital operations management platform has been enhanced with graphics processing unit optimization plus training and inference workload observability when running on Nvidia accelerated computing clusters. The capability is available through HPE Private Cloud AI and on a standalone basis for large clusters. It's also available as a new day-2 operational service, HPE Complete Care Service with Nvidia GPU optimization.

New agentic AI initiatives include Zora AI by Deloitte LLP in HPE Private Cloud AI. HPE will be the first customer to deploy the Zora agentic AI platform for finance, which supports dynamic, on-demand and interactive use cases including financial statement analysis, scenario modeling, and competitive and market analysis. It will be available worldwide. CrewAI Inc., the developer of a popular tool for building artificial intelligence agents, has joined the HPE Unleash AI program to deliver multi-agent automation: combined with HPE Private Cloud AI, CrewAI can securely deploy and scale special-purpose, agent-driven automations. HPE is also adding professional services for agentic AI that combine Nvidia NIM microservices and Nvidia NeMo with HPE Private Cloud AI.

New Blackwell-based AI servers

New HPE ProLiant Gen12 servers feature an enhanced and dedicated security processor enclave that establishes an unbreakable chain of trust to protect against firmware attacks and creates full line of sight from the factory and throughout HPE's trusted supply chain. HPE iLO 7, an embedded server management technology that provides out-of-band management capabilities for HPE ProLiant servers, will provide post-quantum cryptography that meets the requirements of the FIPS 140-3 Level 3 cryptographic security standard.

A new integrated AI environment delivered in a container form factor, called AI Mod POD, provides a modular, performance-optimized data center for AI and high-performance computing workloads. The modular data center supports up to 1.5 megawatts of power per module and can be set up quickly. It supports HPE's AI and HPC servers and HPE Private Cloud AI, and uses HPE's patented Adaptive Cascade Cooling technology, a hybrid system that supports air, 100 percent liquid, and hybrid liquid cooling. "It will enable three times faster deployment with industry-leading power usage effectiveness of under 1.1," said Trish Damkroger, senior vice president and general manager of high-performance computing and AI. "It is a fully integrated solution in a container."
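For context on that last figure: power usage effectiveness (PUE) is total facility power divided by the power delivered to IT equipment, so a PUE under 1.1 means less than 10 percent overhead for cooling and power distribution. A quick sketch with made-up numbers:

```python
# PUE = total facility power / IT equipment power.
# Numbers below are illustrative, not AI Mod POD measurements.

it_load_kw = 1000.0        # power drawn by servers, storage, networking
overhead_kw = 80.0         # cooling, power conversion, lighting

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")  # PUE = 1.08 -> under the 1.1 claimed above

# By comparison, a typical legacy data center runs closer to PUE 1.5,
# i.e. roughly 500 kW of overhead for the same 1000 kW of IT load.
```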
[5]
HPE's Nvidia GTC AI Enterprise Computing Blitz: Five Things To Know
Hewlett Packard Enterprise unleashed an all-out enterprise AI market blitz this week at Nvidia's GTC conference with a slew of announcements, including a unified data layer marking "deeper collaboration" with Nvidia and expanded Alletra MP support for Nvidia networking.

The new HPE unified data layer, which brings together both structured and unstructured data, works in conjunction with the Nvidia AI Data Platform. HPE and Nvidia said the work the two companies are doing together opens the door for enterprise customers to achieve faster time to insight in the agentic AI era. Furthermore, HPE said its Alletra MP B10000 will add unified block and file storage, new software-defined cloud storage on Microsoft Azure, and integrated ransomware detection and recovery.

HPE is also introducing a new HPE Private Cloud AI developer system that partners have been clamoring for, enabling them to speed up the development of AI applications for their enterprise customers.

HPE Vice President of Global HPE GreenLake Partner & Service Provider Sales Ulrich "Uli" Seibold said that in his 21 years at HPE he has never seen as strong a collaboration and integration between two companies. "This is something that has never happened with any other type of partner," he said. "The integration is so different to everything I have seen in the past. Even if you talk about other technology OEM or software partners, I have never seen this kind of intensity. Every day we have calls where we talk to each other."

The partnership goes deep in the channel too, with both companies jointly deciding which partners to invest in to drive a new era of AI solutions forward, said Seibold. "We do joint trainings, joint enablement sessions around the globe," he said.

Seibold said partner requests for the AI developer kit system are a sign of just how fast HPE partners and service providers are moving to bring AI to the enterprise. "What we realized is that partners are asking more and more if they can get a developer's kit," he said. "Six or eight months ago I didn't believe partners would be developing their own AI applications so quickly...More and more of our partners want to develop individualized AI applications or chatbots into an enterprise customer ready solution. This fits perfectly into our strategy to have AI data on premise."

The lesson, Seibold said, is simple: "Never underestimate partners or the channel...They understand how to make money."

Here are five things you need to know about the new Nvidia AI computing by HPE offerings.
[6]
Hewlett Packard Enterprise drives agentic AI era with an intelligent, unified data layer for AI
HPE will support the new NVIDIA AI Data Platform, advancing business insights through AI and data
HPE Alletra Storage MP X10000 speeds data pipelines with built-in intelligence
HPE Alletra Storage MP B10000 to add unified block and file storage, software-defined cloud storage on Microsoft Azure, and integrated ransomware detection and recovery

Hewlett Packard Enterprise (NYSE: HPE) today unveiled a unified data layer for AI, deeper collaboration with NVIDIA, and significant enhancements across its enterprise data storage portfolio. Enterprises will be able to achieve quicker time to insight through easy access, built-in intelligence and governed control of multi-vendor, multicloud data through HPE GreenLake cloud. The new unified data layer brings together both structured and unstructured data, speeding up the AI data lifecycle through a combination of HPE's high-performance data fabric and leading enterprise storage with sophisticated data intelligence. When deployed with the new NVIDIA AI Data Platform, organizations will be able to feed their AI applications, models and agents intelligently with AI-ready data.

"To thrive in the agentic AI era, data management must undergo a fundamental shift. With our unified data layer, intelligent storage, and HPE Private Cloud AI, enterprises can run AI at scale, wherever their data lives," said Fidelma Russo, executive vice president and general manager, Hybrid Cloud and CTO at HPE. "Our approach for harnessing highly distributed, hybrid data in collaboration with NVIDIA gives organizations a competitive edge, while accelerating time to value."

"NVIDIA and HPE are working together to pave the way for AI agents to connect to business knowledge through high-performance intelligent data infrastructure. The AI data platforms we're building together will enable enterprises to leverage their data during inference to power AI reasoning that delivers insights and drives action," said Justin Boitano, vice president, Enterprise Software at NVIDIA.

HPE and NVIDIA transform the entire AI data pipeline to accelerate business insights

The rise of agentic and physical AI is set to radically increase data generation and use from edge to cloud, but today the fragmented and distributed nature of enterprise data limits AI-driven insights. Together with NVIDIA, HPE will accelerate the entire data pipeline through the unified data layer and the new NVIDIA AI Data Platform. Built on NVIDIA's expertise in AI workflow optimization, the NVIDIA AI Data Platform is a customizable reference design for integrating NVIDIA accelerated computing, networking and AI software with enterprise storage to transform data into actionable intelligence. HPE will support the NVIDIA AI Data Platform through HPE Private Cloud AI as well as HPE Alletra Storage MP B10000, HPE Alletra Storage MP X10000 and HPE GreenLake for File Storage. The platform speeds AI reasoning workloads with specialized AI agents that use NVIDIA AI Enterprise software, including NVIDIA AI-Q Blueprints, NVIDIA NIM microservices and new NVIDIA Llama Nemotron reasoning models. To ensure high-speed AI data access, HPE Alletra Storage MP will also expand support for NVIDIA Networking. Additionally, HPE GreenLake for File Storage, the file storage service for HPE Private Cloud AI, is now validated for enterprise AI Factories with the new NVIDIA-Certified Storage Program.
This program is designed for enterprise deployments and represents a significant step forward in ensuring that enterprise AI factories are built on a foundation of high-performance, reliable data solutions.

HPE accelerates AI time to value with AI-ready data across hybrid cloud

The intelligent, unified data layer is designed to expedite data flow across HPE GreenLake cloud and multi-vendor, multicloud environments with:

Unified, seamless edge-to-cloud data access: HPE Data Fabric Software, which will now support HPE Alletra Storage MP X10000 and Apache Iceberg, is the data backbone of the HPE Private Cloud AI data lakehouse and the unified data layer. HPE Data Fabric ensures AI models are consistently supplied with optimized, high-quality structured, unstructured and streaming data.

AI-ready data with advanced intelligence: The latest advancements to HPE Alletra Storage MP X10000 will enable organizations to create AI-ready object data with new automated, inline metadata tagging. Enterprises can infuse their object data, as it is stored, with intelligence that accelerates ingestion by downstream AI applications. HPE expects to further accelerate the X10000's performance through the ongoing collaboration with NVIDIA to enable a direct data path for remote direct memory access (RDMA) transfers between GPU memory, system memory and the X10000.

"The VodafoneZiggo vision is to transform into an autonomous network driven by data and AI, with a Network Data Hub (NDH) for Assurance Data," said Jannie Minnema, Head of Operational Support Systems Assurance and Tools, VodafoneZiggo. "The HPE Data Fabric is the foundation of our NDH, enabling our journey towards secure and seamless connectivity. HPE Data Fabric supports us to deliver data-as-a-service with data unification, governance, intelligence, orchestration, and transformation."

HPE expands block storage with unified file storage, Microsoft Azure cloud storage and integrated ransomware detection

To simplify data management, the HPE Alletra Storage MP B10000 is expanding to enable customers to address more diverse workloads via unified file access, enhanced ransomware protection, and easy data movement between on-prem data centers and public clouds.

Superior flexibility and economics with unified storage: HPE is the industry's first vendor to support disaggregated scale-out unified block and file on a single operating system (OS) and storage architecture.[1] With a common storage architecture and OS, customers can manage their entire environment as a unified fleet, which reduces management complexity and improves efficiency.

Simplified management with AI-driven hybrid cloud experience: New HPE Alletra Block Storage for Azure simplifies data management and workload placement across hybrid cloud with software-defined storage on Microsoft Azure. Comprehensive AIOps automate provisioning and support, creating the foundation for autonomous storage management.

Minimize ransomware risks: Customers can now safeguard data with multi-layered ransomware detection and protection. The combination of new native ransomware detection and recovery in the B10000 array and Zerto means data is protected across storage arrays and apps.

Availability

HPE Data Fabric with support for HPE Private Cloud AI and HPE Alletra Storage MP X10000 will be available in Summer 2025. The significant updates to the B10000 and the X10000 will be orderable in May 2025.
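Since the release names Apache Iceberg as the data backbone of the Private Cloud AI lakehouse, here is a minimal, generic sketch of writing AI-ready tabular data to an Iceberg table from Spark. It assumes a Spark session already configured with the Iceberg runtime and a catalog named demo; the table name and schema are hypothetical and not tied to HPE Data Fabric specifics.

```python
from pyspark.sql import SparkSession

# Assumes Spark was launched with the Iceberg runtime jar and a catalog
# named "demo" configured, e.g.:
#   spark.sql.catalog.demo = org.apache.iceberg.spark.SparkCatalog
spark = SparkSession.builder.appName("iceberg-sketch").getOrCreate()

# Create an Iceberg table to hold enriched, AI-ready records.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.ai.documents (
        doc_id    BIGINT,
        dept      STRING,
        ingested  TIMESTAMP,
        text      STRING
    ) USING iceberg
    PARTITIONED BY (dept)
""")

# Append rows; Iceberg provides snapshot isolation and time travel,
# so downstream AI pipelines always read a consistent snapshot.
spark.sql("""
    INSERT INTO demo.ai.documents
    VALUES (1, 'sre', current_timestamp(), 'postmortem notes'),
           (2, 'hr',  current_timestamp(), 'onboarding guide')
""")

spark.sql("SELECT dept, COUNT(*) AS n FROM demo.ai.documents GROUP BY dept").show()
```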
Additional Resources

BLOG: Supercharge AI by Unleashing Object Storage Data Intelligence and Performance (X10000)
BLOG: Groundbreaking advancements unveiled: HPE Alletra Storage MP B10000
BLOG: Fueling AI innovation with HPE Data Fabric

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is a global technology leader focused on developing intelligent solutions that allow customers to capture, analyze, and act upon data seamlessly. The company innovates across networking, hybrid cloud, and AI to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com.
HPE and Nvidia announce major enhancements to their AI infrastructure offerings, including unified data platforms, new storage solutions, and expanded partnership initiatives to meet the growing demands of enterprise AI adoption.
Hewlett Packard Enterprise (HPE) and Nvidia have announced significant enhancements to their partnership, focusing on revolutionizing enterprise AI infrastructure. This collaboration aims to address the growing demands of AI-driven workloads and the challenges of managing vast amounts of unstructured data [1][2].
At the heart of this initiative is the introduction of unified data platforms that can handle diverse workloads across block, file, and object storage. HPE's Alletra MP product line exemplifies this approach, offering a disaggregated storage architecture built on NVMe for seamless scaling [2]. The new unified data layer for AI supports structured, unstructured, and streaming data across hybrid cloud environments, integrating with the Nvidia AI Data Platform [4].
HPE is expanding its storage offerings to meet AI-specific needs [4][6]:
- HPE Alletra Storage MP X10000 gains automated, inline metadata tagging to create AI-ready object data, with a planned direct RDMA data path between GPU memory, system memory and the array.
- HPE Alletra Storage MP B10000 adds unified block and file storage on a single operating system, multi-layered ransomware detection and recovery with Zerto, and simpler data movement between on-premises data centers and public clouds.
- New HPE Alletra Block Storage for Azure extends software-defined storage to Microsoft Azure, while HPE GreenLake for File Storage is now validated under the Nvidia-Certified Storage Program.
The partnership introduces several AI-focused infrastructure improvements [4]:
- A new HPE Private Cloud AI developer system with an integrated control node, a full suite of AI development software and 32 terabytes of integrated storage.
- New HPE ProLiant Gen12 servers with a dedicated security processor enclave, plus HPE iLO 7 management with post-quantum cryptography targeting FIPS 140-3 Level 3.
- AI Mod POD, a modular, performance-optimized containerized data center supporting up to 1.5 megawatts per module with HPE's Adaptive Cascade Cooling.
The collaboration extends beyond hardware to include [4]:
- Rapid deployment of Nvidia blueprints for agentic and physical AI within HPE Private Cloud AI.
- OpsRamp enhancements for GPU optimization and training and inference workload observability.
- Agentic AI initiatives such as Deloitte's Zora AI for finance, CrewAI's multi-agent automation via the HPE Unleash AI program, and professional services combining Nvidia NIM and NeMo with HPE Private Cloud AI.
This partnership represents a significant shift in enterprise AI strategy. HPE's SVP Jim O'Dorisio emphasizes that storage is no longer just where data sits, but where value is created [1]. The collaboration aims to simplify complexity, reduce operational costs, and unlock the full potential of AI at scale [2].
Industry analysts note that this shift is driven by the need for flexibility without sacrificing performance. Rob Strechay, principal analyst at theCUBE Research, highlights the phenomenal growth of Alletra MP and the importance of unified data services in full lifecycle management [2].
As AI continues to reshape enterprise computing, HPE and Nvidia's strengthened partnership positions them at the forefront of this transformation. Their integrated approach to AI infrastructure promises to accelerate adoption and innovation across various industries [3][5].