Curated by THEOUTPOST
On Fri, 17 Jan, 12:03 AM UTC
5 Sources
[1]
Nvidia releases microservices to safeguard AI agents - SiliconANGLE
Nvidia Corp. today announced the release of new Nvidia Inference Microservices aimed at helping enterprise organizations develop artificial intelligence agents while addressing issues of trust, security and safety. AI agents are a blossoming technology that is beginning to revolutionize how people interact with computers, but they also come with several critical issues. Agentic AI is set to change how knowledge workers accomplish tasks and how customers "talk" to brands, but the large language models under the hood can still go off the rails and produce unwanted responses, or create security concerns when malicious users break their safeguards.

Nvidia NIM is a set of containerized microservices designed to speed up the deployment of generative AI models, and today's announcement builds on NeMo Guardrails, a protective framework that helps developers moderate AI models so they can build more secure, trustworthy AI agents. Nvidia announced three NIM microservices covering topic control, content safety and jailbreak protection. These microservices are small, lightweight, highly optimized AI models that moderate responses from a larger model to improve application performance.

"One of the new microservices, built for moderating content safety, was trained using the Aegis Content Safety Dataset -- one of the highest-quality, human-annotated data sources in its category," said Kari Briski, vice president of enterprise AI models, software and services at Nvidia. The dataset, curated and owned by Nvidia, includes more than 35,000 human-annotated data samples flagged for AI safety and for jailbreak attempts to bypass system restrictions. It will be publicly available on Hugging Face later this year.

The topic control NIM, for example, helps prevent agents from getting too "chatty" or diverging from their original mission by keeping them on topic. The longer a conversation with an AI chatbot runs, the more likely the model is to lose track of the original intent and let the exchange wander, much as human conversations tend to meander. Although that can be fine for people, it's bad for chatbots, especially branded AI agents that might start talking about famous rock stars or competing products.

"Small language models, like those in the NeMo Guardrails collection, offer lower latency and are designed to run efficiently, even in resource-constrained or distributed environments," said Briski. "This makes them ideal for scaling AI applications in industries such as healthcare, automotive and manufacturing, in locations like hospitals or warehouses."

The NIM approach lets developers stack multiple guardrails with minimal additional latency, or added response time. That matters for most generative AI applications, because customers don't like waiting and watching three blinking dots or a spinning circle before text appears or a voice begins speaking.
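To make the stacking idea concrete, here is a rough sketch of how a developer might wire two of the new guardrail NIMs around a main model using the open-source NeMo Guardrails Python package. The engine name, model IDs and rail flow strings are assumptions drawn from NVIDIA's published examples rather than from this announcement, so verify them against the current NeMo Guardrails documentation before relying on them.

```python
# Minimal sketch, not NVIDIA's reference code: stack lightweight guardrail NIMs
# around a larger main model with the open-source nemoguardrails package.
# The engine name, model IDs and flow strings below are assumptions to verify
# against the current NeMo Guardrails docs. Requires an NVIDIA API key in the
# environment for the hosted NIM endpoints.
from nemoguardrails import LLMRails, RailsConfig

YAML_CONFIG = """
models:
  - type: main
    engine: nim                                            # assumed engine name
    model: meta/llama-3.1-70b-instruct
  - type: content_safety
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-content-safety    # assumed model ID
  - type: topic_control
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-topic-control     # assumed model ID

rails:
  input:
    flows:
      - content safety check input $model=content_safety   # assumed flow name
      - topic safety check input $model=topic_control      # assumed flow name
  output:
    flows:
      - content safety check output $model=content_safety
"""

config = RailsConfig.from_content(yaml_content=YAML_CONFIG)
rails = LLMRails(config)

# Every request now passes through the small guardrail models before and after
# the main LLM; the jailbreak-detection NIM would be added as another rail in
# the same way.
reply = rails.generate(messages=[
    {"role": "user", "content": "Tell me about your return policy."}
])
print(reply["content"])
```

Because each rail is a small, specialized model rather than a second large LLM, adding another check to the config is intended to cost relatively little extra latency, which is the trade-off the article describes.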
[2]
NVIDIA Releases NIM Microservices to Safeguard Applications for Agentic AI
NVIDIA NeMo Guardrails includes new NVIDIA NIM microservices to enhance accuracy, security and control for enterprises building AI across industries.

AI agents are poised to transform productivity for the world's billion knowledge workers with "knowledge robots" that can accomplish a variety of tasks. To develop AI agents, enterprises need to address critical concerns like trust, safety, security and compliance.

New NVIDIA NIM microservices for AI guardrails -- part of the NVIDIA NeMo Guardrails collection of software tools -- are portable, optimized inference microservices that help companies improve the safety, precision and scalability of their generative AI applications. Central to the orchestration of the microservices is NeMo Guardrails, part of the NVIDIA NeMo platform for curating, customizing and guardrailing AI. NeMo Guardrails helps developers integrate and manage AI guardrails in large language model (LLM) applications. Industry leaders Amdocs, Cerence AI and Lowe's are among those using NeMo Guardrails to safeguard AI applications.

Developers can use the NIM microservices to build more secure, trustworthy AI agents that provide safe, appropriate responses within context-specific guidelines and are bolstered against jailbreak attempts. Deployed in customer service across industries like automotive, finance, healthcare, manufacturing and retail, the agents can boost customer satisfaction and trust.

One of the new microservices, built for moderating content safety, was trained using the Aegis Content Safety Dataset -- one of the highest-quality, human-annotated data sources in its category. Curated and owned by NVIDIA, the dataset is publicly available on Hugging Face and includes over 35,000 human-annotated data samples flagged for AI safety and for jailbreak attempts to bypass system restrictions.

AI is rapidly boosting productivity for a broad range of business processes. In customer service, it's helping resolve customer issues up to 40% faster. However, scaling AI for customer service and other AI agents requires secure models that prevent harmful or inappropriate outputs and ensure the AI application behaves within defined parameters.

NVIDIA has introduced three new NIM microservices for NeMo Guardrails that help AI agents operate at scale while maintaining controlled behavior: a content safety microservice that moderates harmful or biased outputs, a topic control microservice that keeps conversations within approved subject areas, and a jailbreak detection microservice that guards against attempts to bypass system restrictions.

By applying multiple lightweight, specialized models as guardrails, developers can cover gaps that may occur when only more general global policies and protections exist -- as a one-size-fits-all approach doesn't properly secure and control complex agentic AI workflows. Small language models, like those in the NeMo Guardrails collection, offer lower latency and are designed to run efficiently, even in resource-constrained or distributed environments. This makes them ideal for scaling AI applications in industries such as healthcare, automotive and manufacturing, in locations like hospitals or warehouses.

NeMo Guardrails, available to the open-source community, helps developers orchestrate multiple AI software policies -- called rails -- to enhance LLM application security and control. It works with NVIDIA NIM microservices to offer a robust framework for building AI systems that can be deployed at scale without compromising on safety or performance.

Amdocs, a leading global provider of software and services to communications and media companies, is harnessing NeMo Guardrails to enhance AI-driven customer interactions by delivering safer, more accurate and contextually appropriate responses.
"Technologies like NeMo Guardrails are essential for safeguarding generative AI applications, helping make sure they operate securely and ethically," said Anthony Goonetilleke, group president of technology and head of strategy at Amdocs. "By integrating NVIDIA NeMo Guardrails into our amAIz platform, we are enhancing the platform's 'Trusted AI' capabilities to deliver agentic experiences that are safe, reliable and scalable. This empowers service providers to deploy AI solutions safely and with confidence, setting new standards for AI innovation and operational excellence." Cerence AI, a company specializing in AI solutions for the automotive industry, is using NVIDIA NeMo Guardrails to help ensure its in-car assistants deliver contextually appropriate, safe interactions powered by its CaLLM family of large and small language models. "Cerence AI relies on high-performing, secure solutions from NVIDIA to power our in-car assistant technologies," said Nils Schanz, executive vice president of product and technology at Cerence AI. "Using NeMo Guardrails helps us deliver trusted, context-aware solutions to our automaker customers and provide sensible, mindful and hallucination-free responses. In addition, NeMo Guardrails is customizable for our automaker customers and helps us filter harmful or unpleasant requests, securing our CaLLM family of language models from unintended or inappropriate content delivery to end users." Lowe's, a leading home improvement retailer, is leveraging generative AI to build on the deep expertise of its store associates. By providing enhanced access to comprehensive product knowledge, these tools empower associates to answer customer questions, helping them find the right products to complete their projects and setting a new standard for retail innovation and customer satisfaction. "We're always looking for ways to help associates to above and beyond for our customers," said Chandhu Nair, senior vice president of data, AI and innovation at Lowe's. "With our recent deployments of NVIDIA NeMo Guardrails, we ensure AI-generated responses are safe, secure and reliable, enforcing conversational boundaries to deliver only relevant and appropriate content." To further accelerate AI safeguards adoption in AI application development and deployment in retail, NVIDIA recently announced at the NRF show that its NVIDIA AI Blueprint for retail shopping assistants incorporates NeMo Guardrails microservices for creating more reliable and controlled customer interactions during digital shopping experiences. Consulting leaders Taskus, Tech Mahindra and Wipro are also integrating NeMo Guardrails into their solutions to provide their enterprise clients safer, more reliable and controlled generative AI applications. NeMo Guardrails is open and extensible, offering integration with a robust ecosystem of leading AI safety model and guardrail providers, as well as AI observability and development tools. It supports integration with ActiveFence's ActiveScore, which filters harmful or inappropriate content in conversational AI applications, and provides visibility, analytics and monitoring. Hive, which provides its AI-generated content detection models for images, video and audio content as NIM microservices, can be easily integrated and orchestrated in AI applications using NeMo Guardrails. The Fiddler AI Observability platform easily integrates with NeMo Guardrails to enhance AI guardrail monitoring capabilities. 
And Weights & Biases, an end-to-end AI developer platform, is expanding the capabilities of W&B Weave by adding integrations with NeMo Guardrails microservices. This enhancement builds on Weights & Biases' existing portfolio of NIM integrations for optimized AI inferencing in production. Developers ready to test the effectiveness of applying safeguard models and other rails can use NVIDIA Garak -- an open-source toolkit for LLM and application vulnerability scanning developed by the NVIDIA Research team. With Garak, developers can identify vulnerabilities in systems using LLMs by assessing them for issues such as data leaks, prompt injections, code hallucination and jailbreak scenarios. By generating test cases involving inappropriate or incorrect outputs, Garak helps developers detect and address potential weaknesses in AI models to enhance their robustness and safety. NVIDIA NeMo Guardrails microservices, as well as NeMo Guardrails for rail orchestration and the NVIDIA Garak toolkit, are now available for developers and enterprises. Developers can get started building AI safeguards into AI agents for customer service using NeMo Guardrails with this tutorial.
[3]
Nvidia tackles agentic AI safety and security with new NeMo Guardrails NIMs
As the use of agentic AI continues to grow, so too does the need for safety and security. Today, Nvidia announced a series of updates to its NeMo Guardrails technology designed specifically to address the needs of agentic AI.

The basic idea behind guardrails is to provide some form of policy and control for large language models (LLMs) to help prevent unauthorized and unintended outputs. The guardrails concept has been broadly embraced in recent years by multiple vendors, including AWS. The new NeMo Guardrails updates from Nvidia are designed to make deployment easier for organizations and to provide more granular types of control. NeMo Guardrails is now available as NIMs (Nvidia Inference Microservices), which are optimized for Nvidia's GPUs. Additionally, there are three new specific NIM services that enterprises can deploy for content safety, topic control and jailbreak detection. The guardrails have been optimized for agentic AI deployments, rather than just singular LLMs.

"It's not just about guard-railing a model anymore," Kari Briski, VP for enterprise AI models, software and services at Nvidia, said in a press briefing. "It's about guard-railing a total system."

What the new NeMo Guardrails bring to enterprise agentic AI

Agentic AI use is expected to be a dominant trend in 2025. While agentic AI has plenty of benefits, it also brings new challenges, particularly around security, data privacy and governance requirements, which can create significant barriers to deployment. The three new NeMo Guardrails NIMs are intended to help solve some of those challenges. They include a content safety NIM, a topic control NIM and a jailbreak detection NIM.

Complexity of safeguarding agentic AI systems

The complexity of safeguarding agentic AI systems is significant, as they can involve multiple interconnected agents and models. Briski provided an example of a retail customer service agent scenario. Consider a person interacting with at least three agents: a reasoning LLM, a retrieval-augmented generation (RAG) agent and a customer service assistant agent. All are required to enable the live agent.

"Depending on the user interaction, many different LLMs or interactions can be made, and you have to guardrail each one of them," said Briski.

While there is complexity, she noted that a key goal with the NeMo Guardrails NIMs is to make this easier for enterprises. As part of today's rollout, Nvidia is also providing blueprints to demonstrate how the different guardrail NIMs can be deployed for varying scenarios, including customer service and retail.

How Nvidia guardrails impact agentic AI performance

Another primary concern for enterprises deploying agentic AI is performance. Briski said that as enterprises deploy agentic AI, there can be concern about the latency introduced by adding guardrails. "I think as people were initially trying to add guardrails in the past, they were applying larger LLMs to try and guardrail," she explained.

The latest NeMo Guardrails NIMs have been fine-tuned and optimized to address latency concerns. Nvidia's early testing shows that organizations can get 50% better protection with guardrails that add only approximately half a second of latency. "This is really important when deploying agents, because as we know, it's not just one agent, there are multiple agents that could be within an agentic system," said Briski.
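As an illustration of the per-agent guardrailing Briski describes, the sketch below wraps a single agent turn in a content-safety check against NVIDIA's hosted, OpenAI-compatible API and times the overhead. The guardrail model ID and the exact request and response format are assumptions to verify on build.nvidia.com and the model card; the pattern of checking every agent turn, not the specific call, is the point.

```python
# Minimal sketch: run a guardrail check on one agent turn and time the overhead.
# The base_url is NVIDIA's hosted OpenAI-compatible endpoint; the guardrail
# model ID and its response format are assumptions to verify on build.nvidia.com.
import os
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

GUARDRAIL_MODEL = "nvidia/llama-3.1-nemoguard-8b-content-safety"  # assumed model ID

def check_turn(user_prompt: str, agent_response: str) -> tuple[str, float]:
    """Send one user/agent exchange to the guardrail model and measure latency."""
    start = time.perf_counter()
    result = client.chat.completions.create(
        model=GUARDRAIL_MODEL,
        messages=[
            {"role": "user", "content": user_prompt},
            {"role": "assistant", "content": agent_response},
        ],
    )
    return result.choices[0].message.content, time.perf_counter() - start

# In the retail scenario above, the same check would wrap the reasoning LLM,
# the RAG agent and the customer-service assistant, so several checks per turn.
verdict, overhead = check_turn(
    "Where is my order?", "Your order shipped yesterday and arrives Friday."
)
print(f"guardrail verdict: {verdict!r} (added {overhead:.2f}s)")
```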
Nvidia NeMo Guardrails NIMs for agentic AI are available under the Nvidia AI Enterprise license, which currently costs $4,500 per GPU per year. Developers can try them out for free under an open-source license, as well as on build.nvidia.com.
[4]
Nvidia releases more tools and guardrails to nudge enterprises to adopt AI agents | TechCrunch
Nvidia is releasing three new NIM microservices, or small independent services that are part of larger applications, to help enterprises bring additional control and safety measures to their AI agents.

One of the new NIM services targets content safety and works to prevent an AI agent from generating harmful or biased outputs. Another works to keep conversations focused on approved topics only, while the third helps protect an AI agent against jailbreak attempts, or attempts to remove its software restrictions. These three new NIM microservices are part of Nvidia NeMo Guardrails, Nvidia's existing open-source collection of software tools and microservices meant to help companies improve their AI applications.

"By applying multiple lightweight, specialized models as guardrails, developers can cover gaps that may occur when only more general global policies and protections exist -- as a one-size-fits-all approach doesn't properly secure and control complex agentic AI workflows," the press release said.

It seems that AI companies may be starting to catch on that getting enterprises to adopt their AI agent technology is not going to be as simple as they initially thought. While folks like Salesforce CEO Marc Benioff recently predicted there will be more than a billion agents running off of Salesforce alone in the next 12 months, reality will probably look a little different.

A recent study from Deloitte predicted that about 25% of enterprises are either already using AI agents or expect to in 2025. The report also predicted that by 2027 about half of enterprises will be using agents. This suggests that while enterprises are clearly interested in AI agents, they are not adopting the technology at the same cadence at which innovation is happening in the AI space.

Nvidia likely hopes initiatives like this will make adopting AI agents seem more secure and less experimental. Time will tell if that's actually true.
[5]
Guardrails for AI: Nvidia's new tools keep AI agents safe and in control
This announcement signals a significant step forward for developers seeking to create more secure and reliable AI applications across various industries, including automotive, healthcare, telecommunications, and retail. As AI technology revolutionizes productivity for knowledge workers globally, the need for trustworthy AI agents has never been greater. These "knowledge robots" are set to handle a variety of tasks that demand utmost caution regarding trust, safety, security, and compliance. The introduction of Nvidia's latest microservices aims to address these critical concerns, providing developers with tools to implement robust safeguards in generative AI systems.

The newly launched microservices include features specifically aimed at content safety, topic control, and jailbreak detection. These tools let developers strengthen the framework around AI agents, ensuring they deliver safe, context-aware responses while effectively thwarting potential jailbreaking attempts. The NeMo Guardrails suite enables developers to manage these AI guardrails seamlessly, fostering a more controlled environment for large language model (LLM) applications.
Nvidia releases new NIM microservices as part of NeMo Guardrails to improve security, control, and performance of AI agents, addressing critical concerns in enterprise AI adoption.
Nvidia has announced the release of new Nvidia Inference Microservices (NIM) as part of its NeMo Guardrails collection, aimed at enhancing the safety, security, and control of AI agents. This development comes as enterprises increasingly seek to adopt AI technologies while addressing critical concerns surrounding trust, safety, and compliance [1][2].
The newly introduced NIM microservices focus on three primary areas: content safety, topic control, and jailbreak detection [2][4].
These lightweight, specialized models are designed to work efficiently in various environments, including resource-constrained settings, making them ideal for scaling AI applications across industries such as healthcare, automotive, and manufacturing [2].
Nvidia claims that the new NIM microservices offer significant advantages: lower latency than applying larger general-purpose LLMs as guardrails, efficient operation in resource-constrained or distributed environments, and, in Nvidia's early testing, roughly 50% better protection while adding only about half a second of latency [2][3].
Several industry leaders are already leveraging NeMo Guardrails to enhance their AI applications, including Amdocs, Cerence AI and Lowe's, while consulting firms Taskus, Tech Mahindra and Wipro are integrating it into their client solutions [2].
The introduction of these microservices addresses key barriers to enterprise AI adoption, including concerns around trust and safety, security, data privacy, and governance requirements [2][3].
By providing more granular controls and optimized performance, Nvidia aims to make it easier for organizations to deploy AI agents confidently [3][4].
The new NeMo Guardrails NIMs are available under the Nvidia AI Enterprise license, priced at $4,500 per GPU per year. Developers can also try them out for free under an open-source license or on build.nvidia.com [4].
As AI continues to transform various industries, Nvidia's latest offering represents a significant step towards creating more secure, trustworthy, and efficient AI agents. By addressing critical concerns surrounding AI safety and control, these microservices may help accelerate the adoption of AI technologies across enterprises.