Curated by THEOUTPOST
On Fri, 13 Dec, 8:07 AM UTC
2 Sources
[1]
Red Hat expands AI model support in new release of Linux AI platform - SiliconANGLE
IBM Corp.'s Red Hat subsidiary today released to general availability the latest generation of Red Hat Enterprise Linux AI, a version of the company's core Linux platform optimized for developing, testing and running artificial intelligence large language models. RHEL AI provides a foundation model platform for building and running LLMs and includes image mode, which allows users to build, deploy and manage RHEL as a bootable container image.

Version 1.3 supports the latest release of the Granite LLMs that IBM announced and released to open source in October. It also has improved data preparation features, expanded choices for hybrid cloud deployment and support for Intel Corp.'s Gaudi 3 AI accelerator.

Citing a recent International Data Corp. report that found 61% of enterprises plan to use open-source foundation models for generative AI, Red Hat said it's orienting its operating system features to support smaller, open-source-licensed models, fine-tuning capabilities and inference performance engineering. The new release also pays homage to the company's corporate parent with support for Granite and InstructLab, an initiative born at IBM that aims to accelerate open-source contributions to generative AI development.

The new release supports Granite 3.0 8b, an 8 billion-parameter converged model that supports more than a dozen natural languages and has code generation and function-calling capabilities. The non-English-language, code generation and function-calling features are available as a developer preview within RHEL AI 1.3, with the expectation that they will be fully supported in future RHEL AI releases.
It also includes support for Docling, an open-source project developed by IBM Research that converts PDFs, manuals and slide decks into specialized data formats such as JavaScript Object Notation and Markdown, a lightweight language that allows formatting elements to be added to plain text without tags or a formal text editor. Users can convert documents into Markdown for simplified data ingestion for model tuning with InstructLab. Docling includes context-aware chunking, a natural language processing method that breaks text or data into smaller, meaningful segments while considering the surrounding context. This helps resulting applications deliver more coherent and contextually appropriate responses to questions and tasks out of the box.

Gaudi 3 is Intel's answer to Nvidia Corp.'s H100 graphics processing unit, which has since been succeeded by the H200 series. Intel has said Gaudi 3 can run inference at up to 2.3 times the power efficiency of the H100 while speeding up LLM training times. Red Hat already supports GPUs from Nvidia and Advanced Micro Devices Inc.

Red Hat OpenShift AI, which natively supports RHEL AI, now supports parallelized serving across multiple nodes with the vLLM runtimes, enabling multiple requests to be handled in real time. vLLM is a high-performance inference engine for serving LLMs at low latency and high throughput. Users can dynamically alter an LLM's parameters while it is being served, such as sharding, or distributing, the model across multiple GPUs or quantizing it to a smaller footprint.
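The context-aware chunking idea described above can be illustrated with a minimal sketch: split a Markdown document on headings so each chunk carries its section title as context. This is a simplified stand-in written for illustration, not Docling's actual algorithm; the function name and size limit are assumptions.

```python
# Minimal illustration of context-aware chunking: split a Markdown
# document on headings so each chunk keeps its section title as context.
# A simplified stand-in for illustration, not Docling's actual algorithm.
import re

def chunk_markdown(text: str, max_chars: int = 200) -> list[dict]:
    """Split Markdown into chunks, tagging each with its nearest heading."""
    chunks, heading, buffer = [], "", []

    def flush():
        body = " ".join(buffer).strip()
        if body:
            chunks.append({"context": heading, "text": body})
        buffer.clear()

    for line in text.splitlines():
        if re.match(r"^#{1,6}\s", line):      # new section: close previous chunk
            flush()
            heading = line.lstrip("#").strip()
        elif sum(len(b) for b in buffer) + len(line) > max_chars:
            flush()                           # size limit reached: start new chunk
            buffer.append(line)
        else:
            buffer.append(line)
    flush()
    return chunks

doc = "# Install\nRun the installer.\n# Usage\nStart the service.\nCheck logs."
for c in chunk_markdown(doc):
    print(c["context"], "->", c["text"])
```

Because every chunk keeps its heading, a downstream model tuned on these segments sees which section a sentence came from, which is the "surrounding context" the article refers to.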
[2]
Red Hat Adds Granite, Gaudi Support In Latest AI Release
'Our services partners and systems integrators are the ones helping companies explore and integrate different use cases in a cost-effective way,' says Joe Fernandes, Red Hat's AI business unit vice president and general manager.

Red Hat's latest release of its Enterprise Linux AI foundation model platform adds Granite large language model support and comes with previews for code generation and function-calling capabilities as well as Intel Gaudi 3 accelerator support, bringing more opportunities to its services partners.

The Raleigh, N.C.-based open-source tools vendor and IBM subsidiary's latest version of RHEL AI -- now generally available -- is a prime area for Red Hat services partners and systems integrators "helping customers implement and integrate that into their use cases and business," Joe Fernandes, Red Hat's AI business unit vice president and general manager, told CRN in an email.

"AI in general and generative AI in particular is complicated, so our services partners and systems integrators are the ones helping companies explore and integrate different use cases in a cost-effective way," Fernandes said. "That maps directly to how Red Hat aims to reduce costs (with smaller models), eliminate complexity (around integrating those models with customer data and use cases) and enable flexibility (to deploy those models wherever they need to run across a hybrid environment)."

According to CRN's 2024 Channel Chiefs, Red Hat has been working to increase the overall percentage of company revenue that comes through the channel and improve partner profitability.

RHEL AI is meant for developing, testing and running GenAI models for enterprise applications, the vendor said in a statement Thursday. Version 1.3 adds support for the latest innovations in parent IBM's open-source-licensed Granite large language models (LLMs) and leverages open-source technology for data preparation.
RHEL AI brings together Granite with the InstructLab model alignment project created by Red Hat and IBM. Users can leverage the components for a packaged, bootable RHEL image for individual server deployments across hybrid clouds, according to Red Hat. Version 1.3 supports Granite 3.0 8b English language use cases. This version has a developer preview for using this model's non-English language, code generation and function calling capabilities, with full support coming in a future RHEL AI release. The new RHEL AI version also supports the Docling IBM Research open-source project for converting common document formats into Markdown, JSON and other formats for GenAI applications and training. The new version can do context-aware chunking, accounting for the structure and semantic elements of documents used for GenAI training with the goal of improving GenAI responses. RHEL AI added a technology preview for Intel Gaudi 3 accelerator support with this release and now supports parallelized serving across multiple nodes for multiple requests in real time, according to the vendor. Users can dynamically alter LLM parameters during serving. Other capabilities slated for future RHEL AI releases will support additional document formats for Docling, integration for retrieval-augmented generation pipelines and InstructLab knowledge tuning.
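The parallelized serving mentioned above can be pictured with a toy sketch: incoming requests are dispatched round-robin across several worker "nodes" and handled concurrently. This only illustrates the scheduling idea; it is not vLLM's or OpenShift AI's actual multi-node implementation, and the node names are made up.

```python
# Toy sketch of parallelized serving across multiple nodes: requests are
# dispatched round-robin and handled concurrently. Illustrative only;
# not vLLM's actual multi-node scheduler. Node names are hypothetical.
import asyncio
from itertools import cycle

NODES = ["node-0", "node-1", "node-2"]  # hypothetical serving nodes

async def serve_on(node: str, prompt: str) -> str:
    await asyncio.sleep(0.01)           # stand-in for model inference latency
    return f"{node} answered: {prompt}"

async def serve_all(prompts: list[str]) -> list[str]:
    assignments = zip(prompts, cycle(NODES))    # round-robin dispatch
    tasks = [serve_on(node, p) for p, node in assignments]
    return await asyncio.gather(*tasks)         # all requests run concurrently

results = asyncio.run(serve_all(["q1", "q2", "q3", "q4"]))
for r in results:
    print(r)
```

Because the four simulated requests run concurrently rather than one after another, total wall-clock time stays near a single request's latency, which is the "multiple requests in real time" benefit the article describes.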
Red Hat releases RHEL AI 1.3, enhancing AI model support, data preparation, and deployment options for enterprise AI development and implementation.
Red Hat, an IBM subsidiary, has released version 1.3 of Red Hat Enterprise Linux AI (RHEL AI), a specialized Linux platform designed for developing, testing, and running artificial intelligence large language models (LLMs). This latest release significantly expands AI model support and introduces new features to streamline enterprise AI development and deployment [1][2].
The new version brings several notable improvements:
Granite LLM Support: RHEL AI 1.3 now supports Granite 3.0 8b, an 8 billion-parameter model capable of handling multiple languages and offering code generation and function-calling capabilities [1].
Improved Data Preparation: The release incorporates Docling, an open-source project from IBM Research, which enables the conversion of various document formats (PDFs, manuals, slide decks) into specialized data formats like JSON and Markdown [1][2].
Expanded Deployment Options: RHEL AI 1.3 offers enhanced choices for hybrid cloud deployment, allowing for more flexible implementation across diverse environments [1].
Hardware Acceleration: The platform now includes support for Intel's Gaudi 3 AI accelerator, complementing existing support for NVIDIA and AMD GPUs [1][2].
RHEL AI 1.3 introduces several advanced features:
Context-aware Chunking: This natural language processing method breaks down text into smaller, meaningful segments while preserving context, leading to more coherent and contextually appropriate AI responses [1].
Parallelized Serving: Red Hat OpenShift AI, which natively supports RHEL AI, now enables parallelized serving across multiple nodes with vLLM runtimes, allowing for real-time handling of multiple requests [1].
Dynamic LLM Parameter Adjustment: Users can now dynamically alter an LLM's parameters during serving, including sharding the model across multiple GPUs or quantizing it to a smaller footprint [1].
Red Hat's focus on supporting smaller, open-source-licensed models aligns with the growing trend of enterprise AI adoption. A recent International Data Corp. report found that 61% of enterprises plan to use open-source foundation models for generative AI [1].
Joe Fernandes, Red Hat's AI business unit vice president and general manager, emphasized the role of service partners and systems integrators in helping companies explore and integrate AI use cases cost-effectively. He stated, "AI in general and generative AI in particular is complicated, so our services partners and systems integrators are the ones helping companies explore and integrate different use cases in a cost-effective way" [2].
Red Hat has outlined plans for future RHEL AI releases, including:
Additional Document Formats: Docling support for more document formats beyond those in the current release [2].
RAG Pipeline Integration: Integration for retrieval-augmented generation pipelines [2].
InstructLab Knowledge Tuning: Expanded knowledge tuning through InstructLab [2].
As AI continues to evolve rapidly, Red Hat's RHEL AI 1.3 represents a significant step forward in providing enterprises with the tools and flexibility needed to develop and deploy AI solutions effectively across hybrid cloud environments.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved