Curated by THEOUTPOST
On Wed, 20 Nov, 8:02 AM UTC
7 Sources
[1]
AI infrastructure innovations reshape computing at SC24 - SiliconANGLE
Weka SC24 highlights from theCUBE: Tackling AI infrastructure challenges

Artificial intelligence is driving transformative advancements across industries, and AI infrastructure innovations showcased at SC24 highlight the potential to reshape modern computing. From overcoming data center inefficiencies to accelerating cancer research breakthroughs, Weka is collaborating with leaders such as Nvidia Corp., Super Micro Computer Inc. and Dell Technologies Inc. to develop solutions that enable scalable enterprise AI deployments. With groundbreaking tools such as the Weka AI RAG Reference Platform, these partnerships are setting the stage for seamless AI integration across diverse sectors, solving complex challenges and unlocking new possibilities.

"We created this reference architecture called Weka AI RAG Reference Platform," said Shimon Ben-David (pictured), CTO of WekaIO Inc. "Weka is a high-performance data platform ... we are seeing customers still struggling with how to implement RAG inferencing. It has a lot of moving components. Honestly, there's no real blueprint or protocols defined yet for that."

Ben-David; Nilesh Patel, chief product officer of WekaIO; and Jonathan Martin, president of WekaIO, spoke with theCUBE Research's Savannah Peterson at SC24, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how AI infrastructure innovations showcased at SC24, including Weka's WARRP and collaborations with Nvidia, Supermicro and Dell, are addressing challenges in scalable AI deployments, data center efficiency and cancer research advancements.

Here's a special recap of key themes discussed with Weka executives during SC24; be sure to check out SiliconANGLE and theCUBE's full coverage. (* Disclosure below.)

TheCUBE's live coverage from SC24 highlighted the transformative potential of cutting-edge AI infrastructure, as showcased by Weka, Nvidia and Run:ai.
Ben-David talked about how these companies are addressing challenges in deploying enterprise AI at scale through collaborative solutions, such as Weka's WARRP. Designed to simplify retrieval-augmented generation workflows, the platform integrates Nvidia GPUs and Run:ai's orchestration tools to create a cohesive system for scalable AI deployment.

"What we found when we went through that journey of describing WARRP, creating WARRP, building it, we saw that obviously, as I mentioned, there's a lot of moving parts, a lot of frameworks, orchestration, data challenges, whether you are scaling or not," Ben-David said. "Not all of them are actually the GPUs. We are hitting some; we measure our efficiency by time to token, cost per token, token throughput."

Read more: https://siliconangle.com/2024/11/20/ai-infrastructure-expanding-arena-modern-supercomputing-sc24/

In addition, at SC24, Nvidia, Supermicro and Weka unveiled a collaborative approach to addressing power efficiency, scalability and cost concerns in AI data centers. Patel discussed how their combined innovations aim to balance system design while meeting the growing demands of AI infrastructure.

"As we continue to see the build-out [of AI data centers], two challenges are happening," Patel said. "One is the power consumption and the power requirement in data centers is growing like crazy. The second thing is now we are getting into the inferencing space where it's becoming a token economy. The cost per token in dollars, tokens per wattage used and so on ... have become our important KPIs."

Also at SC24, experts from Memorial Sloan Kettering Cancer Center, Dell and Weka discussed how their collaboration is driving breakthroughs in cancer research through advanced AI infrastructure. MSK's innovative supercluster has dramatically reduced research timelines, enabling faster discoveries and improved patient care. Progressive companies prioritize GPUs, fast networking and advanced infrastructure, explained Martin.
"It's very hard to kind of leap forward 30 years and think that you can walk around with a plastic rectangle in your pocket with some of the world's knowledge on it," Martin said. "That's kind of where AI is right now. We are very early in the journey ... but it is going to transform every walk of life."
[2]
Nvidia, Weka and Supermicro collab to tackle AI data centers - SiliconANGLE
The future of AI data centers: Analyzing supercharged collaboration with Weka, Nvidia and Supermicro

Innovation isn't a sprint; it's a relay where partnerships dictate the pace. At Supercomputing 2024, three industry giants -- Nvidia Corp., Super Micro Computer Inc. and WekaIO Inc. -- unveiled a collaboration aimed at redefining high-performance computing and AI data centers.

"As we continue to see the build-out [of AI data centers], two challenges are happening," said Nilesh Patel (pictured, second from left), chief product officer of Weka. "One is the power consumption and the power requirement in data centers is growing like crazy. The second thing is now we are getting into the inferencing space where it's becoming a token economy. The cost per token in dollars, tokens per wattage used and so on ... have become our important KPIs. We got together with Nvidia and Supermicro and tried to attack one of the core problems that is becoming the Achilles' heel for data center growth, particularly for AI infrastructure."

Patel, alongside Patrick Chiu (left), senior director of storage product management at Supermicro, and Ian Finder (right), group product manager of accelerated computing at Nvidia, spoke with theCUBE Research's Savannah Peterson at SC24, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how all three companies are joining forces to tackle key challenges, such as power efficiency, scalability and sustainability, paving the way for a new era in AI-driven infrastructure. (* Disclosure below.)

Nvidia's Grace ARM-based CPUs take center stage in this partnership, marking a milestone in the evolution of storage solutions. Known for its high-performance storage software, Weka integrated the Grace CPU into its platform to deliver unprecedented throughput and memory bandwidth. The result is a 2X improvement in power efficiency, coupled with the ability to handle the most demanding AI and HPC workloads, according to Finder.
"Everyone at this conference is obsessed with the idea of balance from a system design and platform design level," he said. "It's the balance that causes you to need high-throughput storage to saturate your compute environments. Once you look into the box with Grace, we've architected a chip that has a tremendous amount of memory bandwidth. We have 512 gigabytes a second of memory bandwidth per socket in Grace, so the Weka machine has a terabyte-per-second of memory bandwidth in aggregate in that storage appliance."

Power consumption is a growing concern as AI infrastructures scale. Supermicro, the hardware powerhouse in this collaboration, highlighted how the new storage solution reduces power usage while maintaining high performance, according to Chiu.

"You can convert this hardware advantage to the whole rack and whole data center," he said. "We are so excited that we can be the partner and launch these new systems. We believe there will be a revolution for the new AI data centers and HPC data centers."

Supermicro's 1U system has been robustly equipped with the latest SSD and PCIe Gen 5 technologies and achieves nearly one petabyte of storage capacity. Combined with Weka's software, this system delivers four to 10 times better power density compared to traditional solutions, Chiu added.

Here's the complete video interview, part of SiliconANGLE's and theCUBE Research's coverage of SC24:
[3]
AI infrastructure: New developments from Weka, Nvidia and Run:ai - SiliconANGLE
Exploring AI's deepening role in modern supercomputing -- Nvidia, Weka and Run:ai weigh in

The real-world application of artificial intelligence to solve age-old problems across various technology sectors proves that AI is no longer just a buzzword. It's a transformative force with real-world implications. A blossoming arena witnessing that transformation is supercomputing, and new AI infrastructure developments and nifty ecosystem partnerships highlight the innovation underway.

"We created this reference architecture called Weka AI RAG Reference Platform," said Shimon Ben-David (pictured, second from left), chief technology officer of WekaIO Inc. "Weka is a high-performance data platform ... we are seeing customers still struggling with how to implement RAG inferencing. It has a lot of moving components. Honestly, there's no real blueprint or protocols defined yet for that. We created this environment that shows all of the layers that are needed. We're heavily using Run:ai and the Nvidia stack also, the GPUs, but also the software frameworks."

Ben-David, alongside Ronen Dar (left), co-founder and chief technology officer of Runai Labs USA Inc., and Dion Harris (right), director of accelerated data center GTM at Nvidia Corp., spoke with theCUBE Research's Savannah Peterson at SC24, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed the three companies joining forces to create cutting-edge AI infrastructure solutions that are more than the sum of their parts. (* Disclosure below.)

WARRP has been designed to simplify the implementation of retrieval-augmented generation workflows. While RAG enables enterprises to customize AI models by integrating proprietary data, thus enhancing their relevance and utility, deploying these systems at scale has remained a significant challenge.
Underpinned by Nvidia GPUs and Run:ai's orchestration solutions, the platform provides fertile ground for enterprises to integrate and scale AI effortlessly, according to Harris. For its part, Nvidia has long experimented with the AI idea, with evidence showing the company's inroads as early as 2006 with foundational tools such as CUDA. Today, these tools are vital for enterprises as they transition from AI experimentation to large-scale deployment.

"RAG is the way to customize foundational models to incorporate proprietary data or data that you care about and want to be represented in your AI models," Harris said. "As it relates to Nvidia, we've been down this path of trying to be a proponent of AI and help customers adopt AI. [We're] providing more guidelines, blueprints, templates and APIs that make it easy to plug and play and leverage these tools. Working with Weka and Run:ai is a great example of doing exactly just that."

Run:ai has emerged as a crucial player in optimizing AI workloads. As organizations deploy open-source large language models to maintain control over data, costs and intellectual property, the platform's orchestration solutions ensure efficient scaling and GPU utilization. With an emphasis on "tokenomics," or the economics of AI token production, Run:ai's tools address the increasing demand for cost-effective and scalable AI operations, according to Dar.

"When you scale your application, GPU utilization, the cost of those LLMs and the cost to serve those LLMs becomes a real problem," he said. "As we all move forward and LLMs become more and more important, GPU utilization will become more and more important -- just increasing GPU utilization and reducing the cost of serving LLMs."

Here's the complete video interview, part of SiliconANGLE's and theCUBE Research's coverage of SC24:
[4]
AI infrastructure transforming computing and sustainability - SiliconANGLE
Scalable AI infrastructure reshaping data centers: theCUBE insights from SC24

The boundaries of high-performance computing are being redefined by artificial intelligence, driving innovations that prioritize scalable AI infrastructure to meet growing demands for efficiency and power. As enterprises adopt advanced AI-ready systems, the tech industry is moving beyond theoretical applications to deliver tangible solutions. This transformation is reshaping data center capabilities, promoting sustainability and fostering collaboration across academia, government and private sectors to address global challenges, according to John Furrier (pictured, left), executive analyst at theCUBE Research.

"I think this year you're starting to see real build-out around the infrastructure hardware and where hardware is turning into systems," he said. "You're going to start to see the game change and then the era's here, the chapter's closed, the old IT is over, and the new systems are coming in."

Furrier spoke with fellow theCUBE Research analysts Savannah Peterson (center) and Dave Vellante (right) at SC24, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how AI and scalable AI infrastructure are transforming high-performance computing, sustainability and enterprise solutions. (* Disclosure below.)

At this week's Supercomputing 2024 in Atlanta, Georgia, theCUBE's industry analysts discussed the shift from traditional HPC to systems tailored for AI-driven workloads. Once confined to niche applications, HPC is becoming accessible to enterprises of all sizes, thanks to innovations from companies such as Nvidia Corp. and collaborations across sectors, according to Furrier.

"This year the conversation is supercomputing is democratized thanks to Nvidia's messaging and all their work," he said. "But now the computers on premise, the data center technologies have to be rebuilt and then connected to the cloud for customization. I think this year will be the year, the era of clustered systems, on-premise edge where it's just the system that's going to matter."

The industry has seen a surge in interest from builders who are tasked with creating cutting-edge systems. These builders bridge the gap between hyperscalers, such as Amazon Web Services Inc., and traditional hardware giants, such as Dell Technologies Inc. and Hewlett Packard Enterprise Co., Furrier explained.

"There's a new level of builder emerging. It's the classic hyperscalers on the high end," he said. "Then you've got the enterprise, like the Dells, the HPEs. They're going to embed their devices and systems, obviously servers. They control the IT. The Broadcoms and the chip guys are targeting these new builders, because they're going to come into the enterprise and build it."

Central to this evolution is Nvidia's continued leadership in AI infrastructure hardware. The company's H100 GPUs and the introduction of Blackwell chips have set new benchmarks in performance, though not without challenges. Reports of "melting racks" have reignited discussions on sustainable cooling technologies, according to Peterson.

"We're at that inflection ... when are we going to make AI real, when are we going to start realizing some of this high-performance computing gains in terms of the world, not just in terms of prototypes?" she asked. "We've got this type of a conversation going on that means innovation is moving at light speed or a super-hot speed. As someone who's been a hardware nerd their whole life, I feel like hardware is having its moment."

Sustainability is emerging as a pivotal concern in HPC. The intersection of energy efficiency, cooling advancements and dense chip design underscores the need for long-term solutions. From direct liquid cooling to on-chip innovations, these developments are enabling more efficient systems while addressing environmental concerns.

"There's a whole challenge of how do you cool these things; liquid cooling is now back in a big way," Vellante said. "We've got a panel this afternoon on direct liquid cooling, and we have one of the foremost experts in the field coming on to talk about phase change, and there's a big debate about whether or not that can scale."

Quantum computing also garnered attention, with experts suggesting its hybrid integration with classical computing architectures as the next major leap. While widespread adoption may still be 24-36 months away, its potential to revolutionize encryption and scientific modeling cannot be overlooked, Furrier explained.

"Quantum's definitely going to happen, that's not going to dominate this year because it's still out there," he said. "IBM and others see it coming fast because the first order of business [is] get the clustered systems up and running because you got to power the software ... I think the innovations in the data center is going to be this year's theme, quantum's right around the corner, the app frameworks is going to be key."

Here's the complete video interview, part of SiliconANGLE's and theCUBE Research's coverage of SC24:
[5]
AI compute solutions drive sustainable tech evolution - SiliconANGLE
Decentralized AI infrastructure emerges as a sustainable alternative for enterprises

Technology is seeing a seismic shift, driven in part by artificial intelligence, with AI compute solutions emerging as the backbone of this growth. As the demand for accessible, energy-efficient and flexible infrastructure skyrockets, industry leaders are embracing decentralization, sustainability and multi-vendor ecosystems. This approach challenges traditional hardware dominance, fostering collaboration across platforms to spark innovation and reduce dependency on singular providers, according to Saurabh Kapoor (pictured, left), director of product management and strategy at Dell Technologies Inc.

"Open is the future ... innovation and collaboration. You create that ecosystem and let everybody contribute and build on it," Kapoor said.

Kapoor and Jon Stevens (right), chief executive officer of Hot Aisle Inc., spoke with theCUBE Research's Dave Vellante and Savannah Peterson at SC24, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed the Dell and Hot Aisle partnership, the transformative impact of AI on technology infrastructure, and the rise of sustainable and decentralized AI compute solutions. (* Disclosure below.)

AMD-powered infrastructure has become a viable alternative in the AI space, Stevens said, as he talked about his journey from large-scale cryptocurrency mining operations to spearheading AI-focused compute solutions. By leveraging AMD's MI300X GPUs, Hot Aisle has established a groundbreaking compute model that offers developers remote access to high-performance resources while promoting energy sustainability in green-powered data centers.

"The thing that I think that we're going to focus on is just continuously releasing whatever's [the] latest and greatest, working with Dell, working with AMD, working with Broadcom to continuously make this latest and greatest hardware available to developers, to anyone, and support them with that," Stevens said.

This shift aligns with a broader industry push toward decentralization and flexibility in AI infrastructure. The emphasis on multi-vendor ecosystems challenges Nvidia Corp.'s dominance in the space, providing enterprises with a much-needed alternative for critical workloads, according to Stevens.

"Nvidia's done a fantastic job. They are number one for a reason," he said. "Their hardware and software is unparalleled. But in the grander scheme of AI and the safety of AI, we talk about sovereign AI quite a bit. The source of the data that we're putting into AI affects what comes out at the end of the day. It comes all the way down to the hardware. We need to have multiple solutions available for people."

Hot Aisle's collaboration with Dell has been pivotal in this effort, enabling tailored solutions for enterprises seeking to optimize AI workloads. The partnership exemplifies how forward-thinking companies are focusing on simplifying the adoption of AI infrastructure. From training to inferencing, these solutions cater to a range of applications, from computational fluid dynamics to high-frequency trading.

"It's all about building the right partnerships for [the] future," Kapoor said. "The AI thing is real now. You're building infrastructures that are going to be fundamental building blocks for the future as well. As things evolve, every ecosystem, the end consumer from enterprises like healthcare and financial services, like AI, is going to expand very quickly over the next few months and years. We are building infrastructures that are able to support the future."
Here's the complete video interview, part of SiliconANGLE's and theCUBE Research's coverage of SC24:
[6]
Silicon diversity transforms AI with flexible solutions - SiliconANGLE
Scalable AI: Open-hardware solutions driving the next wave of AI innovation

Silicon diversity is redefining the future of artificial intelligence by addressing critical challenges in performance, cost and scalability. As AI workloads grow increasingly complex, spanning inference, training and multimodal applications, the need for adaptable, open hardware solutions is at an all-time high. This shift prioritizes flexibility and efficiency, allowing enterprises to choose the best tools for their specific needs while driving innovation and resource optimization across industries, according to Steen Graham (pictured, left), chief executive officer of Metrum AI Inc., which is partnering with Dell Technologies Inc. on AI workload innovation.

"I think right now with AI, we've really kind of optimized software in a great way," Graham said. "We're building this really systematic software with AI workers that will save people material, time and ultimately drive top-line revenue and getting enterprises to really high-fidelity solutions."

Graham and Manya Rastogi (right), technical marketing engineer at Dell Technologies, spoke with theCUBE Research's John Furrier and Savannah Peterson at SC24, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how Metrum and Dell work together, as well as how silicon diversity is transforming AI infrastructure by enabling flexible, cost-effective and scalable hardware solutions to meet evolving enterprise demands. (* Disclosure below.)

The benefits of emerging hardware, such as the Dell PowerEdge XE9680 Rack Server with Intel's Gaudi 3 AI Accelerator, have placed silicon diversity in AI infrastructure into the limelight. Designed to provide enterprises with unparalleled flexibility, this system supports open-compute modules and integrates networking features to eliminate the need for additional hardware, reducing costs while increasing scalability, according to Rastogi.
"There are a few challenges which exist in the industry today," she said. "What Gaudi 3 with Intel kind of solves is you don't have any more like one GPU, you have choices. Second thing, it's basically an OAM, which is open compute accelerator module, like the card, the GPU card. It ultimately builds up to a big board ... which are in the XE9680 server. It's a way out of the proprietary networking and software ... all this networking also provides you an opportunity for scale out ... in a cost-efficient data center that you can get out of it."

Central to this innovation is addressing the industry's demand for choice. By offering alternatives to proprietary GPUs, the PowerEdge XE9680 empowers organizations to avoid hardware lock-in and tailor solutions to their specific workloads, Graham said.

"The one thing about enterprises today, before they deploy AI, they really want to know what the fidelity of it is," he elaborated. "Both from all the performance metrics we love like throughput and latency, but the quality of the AI. We actually announced our new 'Know your AI' platform where we test the AI in development and in production for those domain-specific quality metrics, as well as those typical metrics we all love at Supercompute, like latency and throughput."

Metrum AI is collaborating with Dell Technologies to address key AI and machine learning workloads that drive innovation in real-world applications. Together, they are focusing on critical areas such as inferencing, fine-tuning and distributed fine-tuning to optimize performance across diverse workloads.

"It's just part of the messaging for Dell with silicon diversity," Rastogi said. "We want all our customers to have every single choice that they can have with the XE9680 servers. And that's where this comes in."

Key use cases include AI agents for customer service, fine-tuning large language models and distributed processing for industries such as healthcare, manufacturing and telecommunications.
Dell showcased the platform's potential through live demonstrations, such as autonomous AI agents capable of performing complex tasks such as generating service tickets and upselling plans.

"They want to pre-program AI agents to get things done and then they want to loop humans in the loop later for quality assessments as well," Graham said. "Humans don't need to be in the loop for every token every second. I think what we're all hoping for [is] AI to do some work for us, that we don't have to sit there with it and ... co-pilot all the time."

Here's the complete video interview, part of SiliconANGLE's and theCUBE Research's coverage of SC24:
[7]
AI factories are requiring better cooling, more efficient systems - SiliconANGLE
Collaboration between Dell, Broadcom and Denvr focuses on AI factories of tomorrow

What do the artificial intelligence factories of the future look like? That is the question Denvr Dataworks Inc., Broadcom Inc. and Dell Technologies Inc. aim to answer with their combined computing and cooling hardware. AI requires a massive amount of compute, so power and efficiency are key for the construction of modern data centers.

"We all know that everybody's running out of data center power and space," said Vaishali Ghiya (pictured, right), executive officer of global ecosystems and partnerships at Denvr Dataworks. "AI workloads are very power-hungry. So, that is exactly why we designed our Denvr Dataworks private zone, in partnership with Broadcom and Dell, so that we can give customers different choices and options, as well as open architecture. Liquid immersion cooling, as well as liquid-to-the-chip cooling, really results in efficient power usage as well as a compact footprint."

Ghiya; Hasan Siraj (left), head of software products, ecosystem, at Broadcom; and Arun Narayanan, senior vice president of compute and networking product management at Dell, spoke with theCUBE Research's John Furrier at SC24, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed what is next for AI factories and networking. (* Disclosure below.)

Denvr has created modular superclusters, which can house more than 1,000 GPUs in less than 900 square feet and use liquid immersion cooling as part of an effort to house larger AI workloads. The goal is to build the most efficient rack infrastructure, according to Narayanan.

"The Dell-Broadcom partnership is about open systems," he said. "We believe Ethernet will be the technology of choice for AI networking. We want to be time-to-market, first-to-market, but we want interoperability. You buy an asset today, you want that asset to live for 18 months to 24 months. The next asset comes along, it has to be interoperable."

Broadcom recently released the Tomahawk 5, a class of Ethernet switching devices, as well as linear-drive optics, which remove the digital signal processor and use fewer electrical components. Both Broadcom and Dell see Ethernet as the future of AI networking.

"We'll see the scale become bigger and bigger over the next four years, but we'll also see this go down to other verticals," Siraj said. "We will see enterprise adopters ... and from a networking perspective, we believe Ethernet will win. It's already on its way, and it can scale from the largest clusters on the planet to whatever optimizations that are required for inference and other use cases."

Equally important to the construction of an efficient computing structure and network is the software layer, according to Ghiya. Denvr focuses on working backward from its customers' business goals to create a complete computing stack that meets their needs.

"We create a full software AI stack on top of [what Dell and Broadcom provide]," she said. "So, starting from the optimizing of the data center for power, cooling and space, then layering the networking and storage fabric on top of it, which is based on the open standards. Then doing platform orchestration, which is based on Kubernetes ... and then providing REST APIs to the customer to bring that AI workloads to address their key business outcomes."
Weka, Nvidia, and partners showcase advancements in AI infrastructure at SC24, addressing challenges in scalability, efficiency, and sustainability for enterprise AI deployments.
The Supercomputing 2024 (SC24) conference has become a showcase for groundbreaking advancements in AI infrastructure, with industry leaders like Weka, Nvidia, and Supermicro collaborating to address key challenges in enterprise AI deployments. These innovations are set to reshape modern computing, offering solutions for scalability, efficiency, and sustainability [1].
Weka introduced its AI RAG (Retrieval-Augmented Generation) Reference Platform at SC24, aiming to simplify the implementation of RAG workflows for enterprises. Shimon Ben-David, CTO of WekaIO, explained, "We created this reference architecture called Weka AI RAG Reference Platform. We are seeing customers still struggling with how to implement RAG inferencing" [3]. The platform integrates Nvidia GPUs and Run:ai's orchestration tools to create a cohesive system for scalable AI deployment.
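The RAG pattern such platforms target can be sketched in a few lines. This is a toy illustration, not WARRP itself: the word-overlap "embedding" and the prompt-assembling `generate` function stand in for the real embedding model, vector store and LLM inference endpoint a production stack would use.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) flow.
# Every name here is illustrative; real deployments swap in dense
# embeddings, a vector database and a GPU-served model.

def embed(text: str) -> set[str]:
    # Toy "embedding": a bag of lowercase words. Real systems use
    # dense vectors from an embedding model.
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by word overlap with the query; keep the top k.
    scored = sorted(corpus, key=lambda d: len(embed(d) & embed(query)),
                    reverse=True)
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    # Stand-in for an LLM call: a real pipeline sends this assembled
    # prompt to an inference endpoint instead of returning it.
    return "Context:\n" + "\n".join(context) + f"\nQuestion: {query}"

corpus = [
    "Weka provides a high-performance data platform.",
    "Grace is an ARM-based CPU from Nvidia.",
    "Liquid cooling reduces data center power usage.",
]
docs = retrieve("Which CPU is ARM-based?", corpus)
print(generate("Which CPU is ARM-based?", docs))
```

The point of a reference architecture is that each of these three stand-ins becomes a separately scaled, orchestrated service; the "moving components" Ben-David mentions are exactly these layers plus the plumbing between them.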
Nvidia, Supermicro, and Weka unveiled a joint effort to tackle power efficiency, scalability, and cost concerns in AI data centers. Nilesh Patel, Chief Product Officer of WekaIO, highlighted two main challenges: "One is the power consumption and the power requirement in data centers is growing like crazy. The second thing is now we are getting into the inferencing space where it's becoming a token economy" [2].
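The "token economy" KPIs Patel describes reduce to simple ratios. A back-of-the-envelope sketch, with every input figure invented for illustration rather than taken from any vendor:

```python
# Hypothetical "token economy" KPIs: cost per token and tokens per watt.
# The input figures below are made up for illustration only.

def token_kpis(tokens_per_second: float, power_watts: float,
               dollars_per_hour: float) -> dict[str, float]:
    """Derive per-token cost and energy-efficiency KPIs for a serving cluster."""
    tokens_per_hour = tokens_per_second * 3600
    return {
        # Dollars spent to produce one million tokens
        "cost_per_million_tokens": dollars_per_hour / tokens_per_hour * 1e6,
        # Tokens produced per watt of sustained draw, per second
        "tokens_per_watt_second": tokens_per_second / power_watts,
    }

kpis = token_kpis(tokens_per_second=5_000, power_watts=10_000,
                  dollars_per_hour=90.0)
print(f"${kpis['cost_per_million_tokens']:.2f} per million tokens, "
      f"{kpis['tokens_per_watt_second']:.2f} tokens per watt-second")
```

Tracking these two ratios over time is what turns "power is growing like crazy" into an optimizable engineering target: any change that raises token throughput faster than power draw improves both KPIs at once.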
Nvidia's Grace ARM-based CPUs play a crucial role in this partnership, marking a significant evolution in storage solutions. Ian Finder, Group Product Manager of Accelerated Computing at Nvidia, noted, "We have 512 gigabytes a second of memory bandwidth per socket in Grace, so the Weka machine has a terabyte-per-second of memory bandwidth in aggregate in that storage appliance" [2].
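Finder's figures are consistent under simple aggregation, assuming a two-socket appliance; the socket count is inferred from the quote, not stated explicitly.

```python
# Sanity check of the quoted bandwidth figures. The two-socket count
# is an assumption inferred from "per socket" vs. the ~1 TB/s aggregate.
per_socket_gb_s = 512          # Grace memory bandwidth per socket (from the quote)
sockets = 2                    # assumed appliance configuration
aggregate_gb_s = per_socket_gb_s * sockets
print(f"{aggregate_gb_s} GB/s aggregate, i.e. about 1 TB/s")
```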
Supermicro contributed to the collaboration with its advanced hardware solutions. Patrick Chiu, Senior Director of Storage Product Management at Supermicro, stated, "You can convert this hardware advantage to the whole rack and whole data center. We believe there will be a revolution for the new AI data centers and HPC data centers" [2]. Their 1U system, equipped with the latest SSD and PCIe Gen 5 technologies, achieves nearly one petabyte of storage capacity.
The collaboration between Memorial Sloan Kettering Cancer Center, Dell, and Weka is driving breakthroughs in cancer research through advanced AI infrastructure. Jonathan Martin, President of WekaIO, emphasized the transformative potential of AI: "We are very early in the journey ... but it is going to transform every walk of life" [1].
The conference highlighted several emerging trends in AI infrastructure: silicon diversity and open hardware as alternatives to proprietary GPUs, token-based cost and power KPIs for inference, liquid and immersion cooling for denser racks, Ethernet as the networking fabric for AI clusters, and decentralized, multi-vendor compute models.
As AI continues to evolve, the infrastructure supporting it must adapt. John Furrier, Executive Analyst at theCUBE Research, predicts, "You're going to start to see the game change and then the era's here, the chapter's closed, the old IT is over, and the new systems are coming in" [4]. This transformation is set to redefine data center capabilities, promote sustainability, and foster collaboration across various sectors to address global challenges.
The innovations showcased at SC24 represent a significant step forward in AI infrastructure, promising to enable more efficient, scalable, and sustainable AI deployments across industries.