Curated by THEOUTPOST
On Wed, 4 Dec, 12:09 AM UTC
4 Sources
[1]
Latest NVIDIA AI, Robotics and Quantum Computing Software Comes to AWS
NVIDIA software and accelerated computing integrations, including NVIDIA Blackwell, offer full-stack platforms for advancing enterprise development on AWS.

Expanding what's possible for developers and enterprises in the cloud, NVIDIA and Amazon Web Services are converging at AWS re:Invent in Las Vegas this week to showcase new solutions designed to accelerate AI and robotics breakthroughs and simplify research in quantum computing development. AWS re:Invent is a conference for the global cloud-computing community packed with keynotes and more than 2,000 technical sessions. Announcement highlights include the availability of NVIDIA DGX Cloud on AWS and enhanced AI, quantum computing and robotics tools.

NVIDIA DGX Cloud on AWS for AI at Scale

The NVIDIA DGX Cloud AI computing platform is now available through AWS Marketplace Private Offers, providing a high-performance, fully managed solution for enterprises to train and customize AI models. DGX Cloud offers flexible terms, a fully managed and optimized platform, and direct access to NVIDIA experts to help businesses scale their AI capabilities quickly. Early adopter Leonardo.ai, part of the Canva family, is already using DGX Cloud on AWS to develop advanced design tools.

AWS Liquid-Cooled Data Centers With NVIDIA Blackwell

Newer AI servers benefit from liquid cooling, which cools high-density compute chips more efficiently for better performance and energy efficiency. AWS has developed solutions that provide configurable liquid-to-chip cooling across its data centers. The cooling solution announced today will seamlessly integrate air- and liquid-cooling capabilities for the most powerful rack-scale AI supercomputing systems like NVIDIA GB200 NVL72, as well as AWS' network switches and storage servers. This flexible, multimodal cooling design provides maximum performance and efficiency for running AI models and will be used for the next-generation NVIDIA Blackwell platform.
Blackwell will be the foundation of Amazon EC2 P6 instances, DGX Cloud on AWS and Project Ceiba.

NVIDIA Advances Physical AI With Accelerated Robotics Simulation on AWS

NVIDIA is also expanding the reach of NVIDIA Omniverse on AWS with NVIDIA Isaac Sim, now running on high-performance Amazon EC2 G6e instances accelerated by NVIDIA L40S GPUs. Available now, this reference application built on NVIDIA Omniverse enables developers to simulate and test AI-driven robots in physically based virtual environments. One of the many workflows enabled by Isaac Sim is synthetic data generation, a pipeline now further accelerated by OpenUSD NIM microservices, from scene creation to data augmentation.

Robotics companies such as Aescape, Cohesive Robotics, Cobot, Field AI, Standard Bots, Swiss Mile and Vention are using Isaac Sim to simulate and validate the performance of their robots prior to deployment. In addition, Rendered.ai, SoftServe and Tata Consultancy Services are using the synthetic data generation capabilities of Omniverse Replicator and Isaac Sim to bootstrap perception AI models that power various robotics applications.

NVIDIA BioNeMo on AWS for Advanced AI-Based Drug Discovery

NVIDIA BioNeMo NIM microservices and AI Blueprints, developed to advance drug discovery, are now integrated into AWS HealthOmics, a fully managed biological data compute and storage service designed to accelerate scientific breakthroughs in clinical diagnostics and drug discovery. This collaboration gives researchers access to AI models and scalable cloud infrastructure tailored to drug discovery workflows.

Several biotech companies already use NVIDIA BioNeMo on AWS to drive their research and development pipelines. For example, A-Alpha Bio, a biotechnology company based in Seattle, recently published a study on bioRxiv describing a collaborative effort with NVIDIA and AWS to develop and deploy an antibody AI model called AlphaBind.
Using AlphaBind via the BioNeMo framework on Amazon EC2 P5 instances equipped with NVIDIA H100 Tensor Core GPUs, A-Alpha Bio achieved a 12x increase in inference speed and processed over 108 million inference calls in two months.

Additionally, SoftServe today launched Drug Discovery, its generative AI solution built with NVIDIA Blueprints, to enable computer-aided drug discovery and efficient drug development. This solution is set to deliver faster workflows and will soon be available in AWS Marketplace.

Real-Time AI Blueprints: Ready-to-Deploy Options for Video, Cybersecurity and More

NVIDIA's latest AI Blueprints are available for instant deployment on AWS, making real-time applications like vulnerability analysis for container security, and video search and summarization agents, readily accessible. Developers can easily integrate these blueprints into existing workflows to speed deployments.

Developers and enterprises can use the NVIDIA AI Blueprint for video search and summarization to build visual AI agents that can analyze real-time or archived videos to answer user questions, generate summaries and enable alerts for specific scenarios. AWS collaborated with NVIDIA to provide a reference architecture applying the NVIDIA AI Blueprint for vulnerability analysis to augment early security patching in continuous integration pipelines on AWS cloud-native services.

NVIDIA CUDA-Q on Amazon Braket: Quantum Computing Made Practical

NVIDIA CUDA-Q is now integrated with Amazon Braket to streamline quantum computing development. CUDA-Q users can use Amazon Braket's quantum processors, while Braket users can tap CUDA-Q's GPU-accelerated workflows for development and simulation. The CUDA-Q platform allows developers to build hybrid quantum-classical applications and run them on many different types of quantum processors, simulated and physical.
Now preinstalled on Amazon Braket, CUDA-Q provides a seamless development platform for hybrid quantum-classical applications, unlocking new potential in quantum research.

Enterprise Platform Providers and Consulting Leaders Advance AI With NVIDIA on AWS

Leading software platforms and global system integrators are helping enterprises rapidly scale generative AI applications built with NVIDIA AI on AWS to drive innovation across industries.

Cloudera is using NVIDIA AI on AWS to enhance its new AI inference solution, helping Mercy Corps improve the precision and effectiveness of its aid distribution technology.

Cohesity has integrated NVIDIA NeMo Retriever microservices in its generative AI-powered conversational search assistant, Cohesity Gaia, to improve the recall performance of retrieval-augmented generation. Cohesity customers running on AWS can take advantage of the NeMo Retriever integration within Gaia.

DataStax announced that Wikimedia Deutschland is applying the DataStax AI Platform to make Wikidata available to developers as an embedded vectorized database. The DataStax AI Platform is built with NVIDIA NeMo Retriever and NIM microservices, and is available on AWS.

Deloitte's C-Suite AI now supports NVIDIA AI Enterprise software, including NVIDIA NIM microservices and NVIDIA NeMo, for CFO-specific use cases such as financial statement analysis, scenario modeling and market analysis.

RAPIDS Quick Start Notebooks Now Available on Amazon EMR

NVIDIA and AWS are also speeding data science and analytics workloads with the RAPIDS Accelerator for Apache Spark, which accelerates analytics and machine learning workloads with no code changes and reduces data processing costs by up to 80%. Quick Start notebooks for the RAPIDS Accelerator for Apache Spark are now available on Amazon EMR, Amazon EC2 and Amazon EMR on EKS. These offer a simple way to qualify Spark jobs and tune them to maximize the performance of RAPIDS on GPUs, all within Amazon EMR.
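As a rough sketch of how a Spark job picks up the RAPIDS Accelerator, the submission below follows the plugin's documented configuration keys; the jar path, resource amounts and script name are illustrative placeholders, and on Amazon EMR the same settings can instead be supplied through the cluster's spark-defaults configuration classification.

```shell
# Illustrative spark-submit for a GPU-accelerated PySpark job.
# Jar location and resource fractions are placeholders; actual
# values depend on the EMR release and instance types in use.
spark-submit \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  --jars /opt/rapids/rapids-4-spark.jar \
  my_etl_job.py
```

Because the plugin rewrites the physical query plan, existing DataFrame and SQL code runs unchanged; operators without GPU support fall back to the CPU.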
NVIDIA and AWS Power the Next Generation of Industrial Edge Systems

The NVIDIA IGX Orin and Jetson Orin platforms now integrate seamlessly with AWS IoT Greengrass to streamline the deployment and running of AI models at the edge and to efficiently manage fleets of connected devices at scale. This combination enhances scalability and simplifies the deployment process for industrial and robotics applications.

Developers can now tap into NVIDIA's advanced edge computing power with AWS' purpose-built IoT services, creating a secure, scalable environment for autonomous machines and smart sensors. A getting-started guide, authored by AWS, is now available to support developers putting these capabilities to work. The integration underscores NVIDIA's work in advancing enterprise-ready industrial edge systems to enable rapid, intelligent operations in real-world applications.
[2]
Nvidia and AWS team up to accelerate AI deployments in the cloud - SiliconANGLE
Nvidia Corp. is doubling down on its partnership with Amazon Web Services Inc. to expand what's possible in the realms of artificial intelligence, robotics and quantum computing development.

The two companies took to the stage during Amazon's annual customer conference, AWS re:Invent, this week, where they made a host of announcements regarding their ongoing collaboration. The updates include the availability of Nvidia's NIM microservices on various AWS AI services, which should enable faster inference with lower latency for AI developers, plus the launch of Nvidia's DGX Cloud on AWS, and various other developments in AI.

For developers, the biggest news is the expanded availability of NIM microservices on AWS. Nvidia's NIM provides developers with a range of easy-to-use microservices for deploying high-performance AI model inference workloads in any environment, including the cloud, on-premises data centers and workstations. With today's update, they can be accessed from the AWS Marketplace and the new Amazon Bedrock Marketplace, as well as Amazon SageMaker JumpStart, making it even easier for developers to deploy them from whatever interface they're working with, the companies said. What's more, users will be able to deploy them across multiple AWS services, including Amazon Elastic Compute Cloud, Amazon SageMaker and the Amazon Elastic Kubernetes Service.

The NIM microservices are available as prebuilt containers, and they come with a choice of inference engines, including Nvidia Triton Inference Server, Nvidia TensorRT, Nvidia TensorRT-LLM and PyTorch. Moreover, they support hundreds of different AI models, including those available in the Amazon Bedrock Marketplace, Nvidia's own AI foundation models, plus customers' custom models.

In addition to the NIM microservices, developers are also getting access to a new infrastructure offering: the Nvidia DGX Cloud.
It's now available through AWS Marketplace Private Offers, and it gives customers access to a fully managed, high-performance compute platform for training, customizing and deploying AI models. DGX Cloud is a cloud-hosted AI supercomputing service that gives enterprises access to Nvidia's graphics processing units and the software they need to train advanced models for generative AI and other types of applications. One advantage of DGX Cloud is its flexible deployment terms, Nvidia said, and customers will also get direct access to the company's experts, who will be on hand to provide the technical expertise needed to scale their AI deployments.

The DGX Cloud platform currently provides access to Nvidia's most powerful GPUs, the Nvidia H100 and H200, and will soon be expanded to include the next-generation Blackwell GPUs, slated to launch in the new year. AWS said the Blackwell chips will be available as part of the GB200 NVL72 supercomputing system, which will benefit from its new liquid cooling system to deliver the highest performance with greater energy efficiency than other cloud platforms.

In other AI-related announcements, Nvidia said it's making a number of new AI Blueprints available for instant deployment on AWS. The blueprints provide ready-to-deploy AI agents for tasks such as video search, container vulnerability analysis and text summarization that can easily be integrated into existing developer workflows. For instance, developers can use the AI Blueprint for video search to quickly create a visual AI agent that's able to analyze video in real time. It can then generate alerts for security teams, identify health and safety violations in the workplace, spot defective products on a manufacturing line and so on, the company said.

Nvidia is also making advances in AI-powered robots.
The company has long been a believer in the potential of AI to help automate robots so they can perform more useful tasks in the real world, and its latest update aims to accelerate how these use cases can be simulated. Key to this is the Nvidia Omniverse platform, which is used to create realistic virtual environments and digital twins. The company said it's making a reference application available on Nvidia Omniverse, powered by high-performance Amazon EC2 G6e instances accelerated by its L40S GPUs. According to Nvidia, developers will be able to use it to simulate and test AI-powered robots in any kind of environment, with highly realistic physics.

Meanwhile, Nvidia and AWS are also trying to accelerate AI's application in the development of new pharmaceuticals. They said Nvidia's BioNeMo NIM microservices and AI Blueprints for advancing drug discovery are now available with AWS HealthOmics, a fully managed biological data compute and storage service designed to support clinical diagnostics. The collaboration extends the capabilities of AWS HealthOmics, giving researchers the chance to experiment with more AI models, the companies said.

Last but not least, Nvidia said it's working with AWS to help accelerate quantum computing development. The chipmaker's CUDA-Q platform, which is used to develop "hybrid quantum/classical computing applications" that span traditional and quantum computers, is being integrated with the Amazon Braket service. Amazon Braket makes it easier for users to set up, monitor and execute hybrid quantum-classical algorithms on quantum processors. With the integration, CUDA-Q users will be able to tap into Amazon Braket's quantum resources, Nvidia said, while Braket users will be able to take advantage of CUDA-Q's GPU-accelerated workflows for development and simulation.
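To make the hybrid workflow concrete, here is a minimal sketch of how a CUDA-Q kernel written once can target either a local simulator or Amazon Braket. It assumes the CUDA-Q Python package is installed; the Braket device ARN shown is Amazon's SV1 state-vector simulator and would be swapped for a QPU ARN in practice.

```python
import cudaq

@cudaq.kernel
def ghz(num_qubits: int):
    # Prepare a GHZ state: Hadamard on qubit 0, then a CNOT chain.
    qubits = cudaq.qvector(num_qubits)
    h(qubits[0])
    for i in range(num_qubits - 1):
        x.ctrl(qubits[i], qubits[i + 1])
    mz(qubits)

# Develop and debug on the default local simulator first.
print(cudaq.sample(ghz, 3))

# Then retarget the same kernel at Amazon Braket (requires AWS
# credentials; the machine ARN below is illustrative):
# cudaq.set_target("braket",
#     machine="arn:aws:braket:::device/quantum-simulator/amazon/sv1")
# print(cudaq.sample(ghz, 3))
```

The point of the integration is that the kernel itself does not change; only the target backend does.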
[3]
NVIDIA NIM on AWS Supercharges AI Inference
Optimized NIM microservices are now available on Amazon Bedrock Marketplace, SageMaker JumpStart and AWS Marketplace for a broad catalog of NVIDIA and ecosystem models.

Generative AI is rapidly transforming industries, driving demand for secure, high-performance inference solutions that can scale increasingly complex models efficiently and cost-effectively. Expanding its collaboration with NVIDIA, Amazon Web Services (AWS) revealed today at its annual AWS re:Invent conference that it has extended NVIDIA NIM microservices across key AWS AI services to support faster AI inference and lower latency for generative AI applications. NVIDIA NIM microservices are now available directly from the AWS Marketplace, as well as Amazon Bedrock Marketplace and Amazon SageMaker JumpStart, making it even easier for developers to deploy NVIDIA-optimized inference for commonly used models at scale.

NVIDIA NIM, part of the NVIDIA AI Enterprise software platform available in the AWS Marketplace, provides developers with a set of easy-to-use microservices designed for secure, reliable deployment of high-performance, enterprise-grade AI model inference across clouds, data centers and workstations. These prebuilt containers are built on robust inference engines, such as NVIDIA Triton Inference Server, NVIDIA TensorRT, NVIDIA TensorRT-LLM and PyTorch, and support a broad spectrum of AI models -- from open-source community models to NVIDIA AI Foundation models and custom models.

NIM microservices can be deployed across various AWS services, including Amazon Elastic Compute Cloud (EC2), Amazon Elastic Kubernetes Service (EKS) and Amazon SageMaker. Developers can preview over 100 NIM microservices built from commonly used models and model families, including Meta's Llama 3, Mistral AI's Mistral and Mixtral, NVIDIA's Nemotron, Stability AI's SDXL and many more on the NVIDIA API catalog.
The most commonly used ones are available for self-hosting on AWS services and are optimized to run on NVIDIA accelerated computing instances on AWS.

NIM on AWS for Everyone

Customers and partners across industries are tapping NIM on AWS to get to market faster, maintain security and control of their generative AI applications and data, and lower costs. SoftServe, an IT consulting and digital services provider, has developed six generative AI solutions fully deployed on AWS and accelerated by NVIDIA NIM and AWS services. The solutions, available on AWS Marketplace, include SoftServe Gen AI Drug Discovery, SoftServe Gen AI Industrial Assistant, Digital Concierge, Multimodal RAG System, Content Creator and Speech Recognition Platform. They're all based on NVIDIA AI Blueprints, comprehensive reference workflows that accelerate AI application development and deployment and feature NVIDIA acceleration libraries, software development kits and NIM microservices for AI agents, digital twins and more.

Start Now With NIM on AWS

Developers can deploy NVIDIA NIM microservices on AWS according to their unique needs and requirements. By doing so, developers and enterprises can achieve high-performance AI with NVIDIA-optimized inference containers across various AWS services. Visit the NVIDIA API catalog to try out over 100 different NIM-optimized models, and request either a developer license or a 90-day NVIDIA AI Enterprise trial license to get started deploying the microservices on AWS services. Developers can also explore NIM microservices in the AWS Marketplace, Amazon Bedrock Marketplace or Amazon SageMaker JumpStart.
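For self-hosting, the general shape of a NIM deployment on a GPU-equipped EC2 instance looks like the sketch below. It assumes an NGC API key and Docker with the NVIDIA container toolkit; the model image is one published example, and the endpoint follows NIM's OpenAI-compatible API.

```shell
# Authenticate to the NVIDIA container registry (NGC).
export NGC_API_KEY=...   # your NVIDIA NGC key
docker login nvcr.io --username '$oauthtoken' --password "$NGC_API_KEY"

# Run a NIM inference container (illustrative model tag).
docker run -d --gpus all \
  -e NGC_API_KEY \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama3-8b-instruct:latest

# Once the model is loaded, NIM serves an OpenAI-compatible endpoint:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "meta/llama3-8b-instruct",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

The same container can be scheduled on Amazon EKS or wrapped for SageMaker hosting, since it exposes a standard HTTP inference interface.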
[4]
NVIDIA Advances Physical AI With Accelerated Robotics Simulation on AWS
NVIDIA Isaac Sim is now available on cloud instances of NVIDIA L40S GPUs in Amazon EC2 G6e instances, offering a 2x boost for scaling robotics simulation and faster AI model training.

Field AI is building robot brains that enable robots to autonomously manage a wide range of industrial processes. Vention creates pretrained skills to ease development of robotic tasks. And Cobot offers Proxie, an AI-powered cobot designed to handle material movement and adapt to dynamic environments, working seamlessly alongside humans.

These leading robotics startups are all making advances using NVIDIA Isaac Sim on Amazon Web Services. Isaac Sim is a reference application built on NVIDIA Omniverse for developers to simulate and test AI-driven robots in physically based virtual environments. NVIDIA announced at AWS re:Invent today that Isaac Sim now runs on Amazon Elastic Compute Cloud (EC2) G6e instances accelerated by NVIDIA L40S GPUs. And with NVIDIA OSMO, a cloud-native orchestration platform, developers can easily manage their complex robotics workflows across their AWS computing infrastructure.

This combination of NVIDIA-accelerated hardware and software -- available in the cloud -- allows teams of any size to scale their physical AI workflows. Physical AI describes AI models that can understand and interact with the physical world. It embodies the next wave of autonomous machines and robots, such as self-driving cars, industrial manipulators, mobile robots, humanoids and even robot-run infrastructure like factories and warehouses.

With physical AI, developers are embracing a three-computer solution for training, simulation and inference to make breakthroughs. Yet physical AI for robotics systems requires robust training datasets to achieve precision inference in deployment. Developing such datasets and testing them in real situations, however, can be impractical and costly.
Simulation offers an answer, as it can significantly accelerate the training, testing and deployment of AI-driven robots.

Harnessing L40S GPUs in the Cloud to Scale Robotics Simulation and Training

Simulation is used to verify, validate and optimize robot designs, as well as the systems and their algorithms, before deployment. Simulation can also optimize facility and system designs before construction or remodeling starts, reducing costly manufacturing change orders and maximizing efficiency.

Amazon EC2 G6e instances accelerated by NVIDIA L40S GPUs provide a 2x performance gain over the prior architecture, while allowing the flexibility to scale as scene and simulation complexity grows. The instances are used to train many of the computer vision models that power AI-driven robots, which means the same instances can be extended to various tasks, from data generation to simulation to model training. Using NVIDIA OSMO in the cloud allows teams to orchestrate and scale complex robotics development workflows across distributed computing resources, whether on premises or in the AWS cloud. Isaac Sim provides access to the latest robotics simulation capabilities in the cloud, fostering collaboration.

One of the critical workflows is generating synthetic data for perception model training. Using a reference workflow that combines NVIDIA Omniverse Replicator, a framework for building custom synthetic data generation (SDG) pipelines and a core extension of Isaac Sim, with NVIDIA NIM microservices, developers can build generative AI-enabled SDG pipelines. These include the USD Code NIM microservice for generating Python USD code and answering OpenUSD queries, and the USD Search NIM microservice for exploring OpenUSD assets using natural language or image inputs. The Edify 360 HDRi NIM microservice generates 360-degree environment maps, while the Edify 3D NIM microservice creates ready-to-edit 3D assets from text or image prompts.
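To give a feel for what an SDG pipeline looks like in practice, here is a minimal sketch following the shape of Omniverse Replicator's tutorial API. It is an assumption-laden illustration, not a blueprint from this announcement: exact call signatures vary by Kit version, and it must run inside an Omniverse/Isaac Sim environment where `omni.replicator.core` is available.

```python
import omni.replicator.core as rep  # available only inside Omniverse/Isaac Sim

with rep.new_layer():
    # A camera and a handful of simple props stand in for a real scene.
    camera = rep.create.camera(position=(0, 0, 5))
    cones = rep.create.cone(count=5)

    # Randomize prop poses every frame to generate varied training images.
    with rep.trigger.on_frame(num_frames=100):
        with cones:
            rep.modify.pose(
                position=rep.distribution.uniform((-2, -2, 0), (2, 2, 0)),
                rotation=rep.distribution.uniform((0, 0, 0), (0, 0, 360)),
            )

    # Write RGB frames plus 2D bounding-box labels for perception training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_sdg_out", rgb=True,
                      bounding_box_2d_tight=True)
    writer.attach([rep.create.render_product(camera, (512, 512))])
```

The generative AI-enabled pipelines described above layer the USD Code, USD Search and Edify NIM microservices on top of this kind of loop, replacing hand-built assets and scene code with generated ones.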
This eases the synthetic data generation process by using generative AI to reduce many tedious and manual steps, from asset creation to image augmentation.

Rendered.ai's synthetic data engineering platform, integrated with Omniverse Replicator, enables companies to generate synthetic data for computer vision models used in industries from security and intelligence to manufacturing and agriculture. SoftServe, an IT consulting and digital services provider, uses Isaac Sim to generate synthetic data and validate robots used in vertical farming with Pfeifer & Langen, a leading European food producer. Tata Consultancy Services is building custom synthetic data generation pipelines to power its Mobility AI suite, which addresses automotive and autonomous use cases by simulating real-world scenarios. Its applications include defect detection, end-of-line quality inspection and hazard avoidance.

Learning to Be Robots in Simulation

While Isaac Sim enables developers to test and validate robots in physically accurate simulation, Isaac Lab, an open-source robot learning framework built on Isaac Sim, provides a virtual playground for building robot policies that can run on AWS Batch. Because these simulations are repeatable, developers can easily troubleshoot and reduce the number of cycles required for validation and testing.

Several robotics developers are embracing NVIDIA Isaac on AWS to develop physical AI.

Learn more about Isaac Sim 4.2, now available on Amazon EC2 G6e instances powered by NVIDIA L40S GPUs on AWS Marketplace.
NVIDIA and AWS announce major collaborations at AWS re:Invent, introducing new AI tools, robotics simulations, and quantum computing solutions to enhance cloud-based development and deployment.
NVIDIA and Amazon Web Services (AWS) have announced the availability of NVIDIA DGX Cloud on AWS, marking a significant advancement in cloud-based AI computing [1]. This fully managed, high-performance platform is designed to help enterprises train and customize AI models at scale. Available through AWS Marketplace Private Offers, DGX Cloud provides flexible terms and direct access to NVIDIA experts, enabling businesses to rapidly scale their AI capabilities [1][2].
The collaboration introduces NVIDIA NIM microservices across key AWS AI services, aiming to support faster AI inference and lower latency for generative AI applications [3]. These microservices are now accessible through AWS Marketplace, Amazon Bedrock Marketplace, and Amazon SageMaker JumpStart, simplifying deployment for developers [2][3]. The NIM microservices support a wide range of AI models and can be deployed across various AWS services, including Amazon EC2, Amazon EKS, and Amazon SageMaker [3].
NVIDIA is expanding its reach in robotics development with NVIDIA Isaac Sim, now running on high-performance Amazon EC2 G6e instances accelerated by NVIDIA L40S GPUs [1][4]. This reference application, built on NVIDIA Omniverse, allows developers to simulate and test AI-driven robots in physically based virtual environments [1]. The integration of OpenUSD NIM microservices further accelerates the synthetic data generation pipeline, from scene creation to data augmentation [1][4].
In the realm of quantum computing, NVIDIA CUDA-Q is now integrated with Amazon Braket, streamlining quantum computing development [1]. This integration allows CUDA-Q users to access Amazon Braket's quantum processors, while Braket users can utilize CUDA-Q's GPU-accelerated workflows for development and simulation [1].
For drug discovery, NVIDIA BioNeMo NIM microservices and AI Blueprints are now integrated into AWS HealthOmics, a fully managed biological data compute and storage service [1][2]. This collaboration provides researchers with access to AI models and scalable cloud infrastructure tailored to drug discovery workflows [1].
AWS has developed solutions for configurable liquid-to-chip cooling across its data centers, which will seamlessly integrate air- and liquid-cooling capabilities for powerful rack-scale AI supercomputing systems like NVIDIA GB200 NVL72 [1]. This cooling solution is designed to provide maximum performance and efficiency for running AI models and will be used for the next-generation NVIDIA Blackwell platform [1][2].
NVIDIA's latest AI Blueprints are available for instant deployment on AWS, offering ready-to-deploy options for various applications such as video search, cybersecurity, and more [1]. These blueprints enable developers to easily integrate AI capabilities into existing workflows, speeding up deployments [1][2].
Several companies are already leveraging these new technologies. For instance, SoftServe has developed six generative AI solutions fully deployed on AWS and accelerated by NVIDIA NIM and AWS services [3]. In the robotics field, companies like Field AI, Vention, and Cobot are using NVIDIA Isaac Sim on AWS to advance their robotics development [4].
This collaboration between NVIDIA and AWS represents a significant step forward in making advanced AI, robotics, and quantum computing technologies more accessible and efficient for developers and enterprises across various industries.
© 2024 TheOutpost.AI All rights reserved