5 Sources
[1]
ServiceNow and Nvidia's new reasoning AI model raises the bar for enterprise AI agents
Many have dubbed this year "the year of AI agents," as these AI systems that can carry out tasks for users are especially useful for optimizing enterprise workflows. At ServiceNow's annual Knowledge 2025 conference, the company unveiled a new model in partnership with Nvidia to advance AI agents.
On Tuesday, ServiceNow and Nvidia launched Apriel Nemotron 15B, a new open-source reasoning large language model (LLM) built to deliver lower latency, lower inference costs, and faster agentic AI. According to the release, the model was trained using Nvidia NeMo, the Nvidia Llama Nemotron Post-Training Dataset, and ServiceNow's domain-specific data.
The biggest takeaway is that the model packages advanced reasoning capabilities in a smaller size. This makes it cheaper and faster to run on Nvidia GPU infrastructure as an Nvidia NIM microservice while still delivering the enterprise-grade intelligence companies are looking for. The company says Apriel Nemotron 15B shows promising results for its size category in benchmark testing, suggesting the model could be a good fit for supporting agentic AI workflows.
Reasoning capabilities are especially important for agentic AI because, in these automated experiences, the AI performs tasks for the end user in various settings. Since it acts without step-by-step human direction, it needs to do some reasoning of its own to determine how best to proceed.
In addition to the model, the two companies also unveiled a joint data flywheel architecture -- a feedback loop that collects data from interactions to further refine AI models. The architecture integrates ServiceNow Workflow Data Fabric and select Nvidia NeMo microservices, according to the release.
This joint architecture allows companies to use enterprise workflow data to further refine their reasoning models while keeping guardrails in place to protect customers, ensure data is processed in a secure and timely manner, and give them the control they want. Ideally, this would feed into the creation of highly personalized, context-aware AI agents, according to the companies.
[2]
Your Service Teams Just Got a New Coworker -- and It's a 15B-Parameter Super Genius Built by ServiceNow and NVIDIA
The open-source Apriel Nemotron 15B LLM was created with NVIDIA NeMo, NVIDIA Llama Nemotron open datasets and ServiceNow domain data, and trained on NVIDIA DGX Cloud.
ServiceNow is accelerating enterprise AI with a new reasoning model built in partnership with NVIDIA -- enabling AI agents that respond in real time, handle complex workflows and scale functions like IT, HR and customer service teams worldwide.
Unveiled today at ServiceNow's Knowledge 2025 -- where NVIDIA founder and CEO Jensen Huang joined ServiceNow chairman and CEO Bill McDermott during his keynote address -- Apriel Nemotron 15B is compact, cost-efficient and tuned for action. It's designed to drive the next step forward in enterprise large language models (LLMs).
Apriel Nemotron 15B was developed with NVIDIA NeMo, the open NVIDIA Llama Nemotron Post-Training Dataset and ServiceNow domain-specific data, and was trained on NVIDIA DGX Cloud running on Amazon Web Services (AWS).
The news follows the April release of the NVIDIA Llama Nemotron Ultra model, which harnesses the NVIDIA open dataset that ServiceNow used to build its Apriel Nemotron 15B model. Ultra is among the strongest open-source models at reasoning, including scientific reasoning, coding, advanced math and other agentic AI tasks.
Apriel Nemotron 15B is engineered for reasoning -- drawing inferences, weighing goals and navigating rules in real time. It's smaller than some of the latest general-purpose LLMs, which can run to more than a trillion parameters, so it delivers faster responses and lower inference costs while still packing enterprise-grade intelligence.
The model's post-training took place on NVIDIA DGX Cloud hosted on AWS, tapping high-performance infrastructure to accelerate development. The result? An AI model optimized not just for accuracy, but for speed, efficiency and scalability -- key ingredients for powering AI agents that can support thousands of concurrent enterprise workflows.
Beyond the model itself, ServiceNow and NVIDIA are introducing a new data flywheel architecture -- integrating ServiceNow's Workflow Data Fabric with NVIDIA NeMo microservices, including NeMo Customizer and NeMo Evaluator. This setup enables a closed-loop process that refines and improves AI performance by using workflow data to personalize responses and improve accuracy over time. Guardrails ensure customers remain in control of how their data is used in a secure and compliant manner.
In a keynote demo, ServiceNow is showing how these agentic models have been deployed in real enterprise scenarios, including with AstraZeneca, where AI agents will help employees resolve issues and make decisions with greater speed and precision -- giving 90,000 hours back to employees.
"The Apriel Nemotron 15B model -- developed by two of the most advanced enterprise AI companies -- features purpose-built reasoning to power the next generation of intelligent AI agents," said Jon Sigler, executive vice president of Platform and AI at ServiceNow. "This achieves what generic models can't, combining real-time enterprise data, workflow context and advanced reasoning to help AI agents drive real productivity."
"Together with ServiceNow, we've built an efficient, enterprise-ready model to fuel a new class of intelligent AI agents that can reason to boost team productivity," added Kari Briski, vice president of generative AI software at NVIDIA. "By using the NVIDIA Llama Nemotron Post-Training Dataset and ServiceNow domain-specific data, Apriel Nemotron 15B delivers advanced reasoning capabilities in a smaller size, making it faster, more accurate and cost-effective to run."
The collaboration marks a shift in enterprise AI strategy: enterprises are moving from static models to intelligent systems that evolve. It also marks another milestone in the partnership between ServiceNow and NVIDIA, pushing agentic AI forward across industries. For businesses, this means faster resolution times, greater productivity and more responsive digital experiences. For technology leaders, it's a model that fits today's performance and cost requirements -- and can scale as needs grow.
ServiceNow AI Agents, powered by Apriel Nemotron 15B, are expected to roll out following Knowledge 2025. The model will support ServiceNow's Now LLM services and will become a key engine behind the company's agentic AI offerings.
Learn more about the launch and how NVIDIA and ServiceNow are shaping the future of enterprise AI at Knowledge 2025.
[3]
ServiceNow Partners with NVIDIA to Launch Reasoning Model Apriel Nemotron 15B
At its flagship event Knowledge 2025, ServiceNow unveiled the reimagined ServiceNow AI Platform, introducing powerful new capabilities to help enterprises deploy any AI, any agent, any model across their operations. The major highlight of this next-generation platform is Apriel Nemotron 15B, a reasoning LLM built in collaboration with NVIDIA, optimised for enterprise reasoning, low latency, and cost-efficient inference.
The new platform is designed for the agentic and open AI era, bringing together intelligence, data, and orchestration into a unified architecture. With partnerships expanding across Microsoft, Google, NVIDIA, and Oracle, ServiceNow aims to accelerate end-to-end enterprise automation at scale.
"ServiceNow is igniting a new era of enterprise transformation with the ServiceNow AI Platform. We're unleashing the full power of AI, across any industry, any agent, any workflow," said Bill McDermott, chairman and CEO of ServiceNow. "Now is the moment to unlock tomorrow's opportunities with ServiceNow as the AI operating system of the 21st century."
ServiceNow also introduced the AI Control Tower, a centralised dashboard to govern and manage both native and third-party AI agents, and the AI Agent Fabric, which allows agents from different vendors and teams to coordinate and share context across workflows. The AI Engagement Layer, Workflow Data Fabric, and Knowledge Graph further enable seamless orchestration across enterprise systems.
Global enterprises like Adobe, Aptiv, the NHL, Visa, and Wells Fargo have already adopted ServiceNow AI to streamline operations and improve productivity. The California-based enterprise software company confidently stated that its flagship generative AI product, Now Assist, is expected to hit $1 billion in annual contract value by 2026.
Analysts project ServiceNow's annual revenue will top $13 billion this year, up from $10.9 billion in FY2024. It has already reported robust financial performance for the first quarter of 2025, posting a 19% year-over-year increase in subscription revenues to $3.005 billion, or 20% in constant currency.
[4]
ServiceNow, Nvidia expand partnership, launch new AI agent
ServiceNow and Nvidia unveiled Apriel Nemotron 15B, a new AI reasoning model, and launched the AI Control Tower and Workflow Data Fabric to optimise enterprise AI use. Deepening ties with partners like AWS and Microsoft, they aim to drive real-time, AI-powered business transformation across industries.
ServiceNow and Nvidia have announced an expansion of their partnership to fuel a new class of intelligent AI agents across the enterprise, anchored by the new Apriel Nemotron 15B reasoning model. The announcement was made by ServiceNow Chairman and CEO Bill McDermott and NVIDIA founder and CEO Jensen Huang at ServiceNow's 'Knowledge 2025' annual conference in Las Vegas. Around 5,000 partners and customers attended the three-day event, which began on May 6.
ServiceNow is a leading AI platform for business transformation, while NVIDIA is the leading provider of graphics processing units, which have powered the AI boom and lifted the company's market cap to almost USD 3 trillion.
McDermott and Huang announced the debut of the new high-performance ServiceNow reasoning model, Apriel Nemotron 15B. Developed in partnership with NVIDIA, the model evaluates relationships, applies rules, and weighs goals to reach conclusions or make decisions. It is expected to be available in Q2 2025.
ServiceNow and NVIDIA also unveiled a new collaboration on a joint data flywheel architecture that will integrate ServiceNow Workflow Data Fabric and select NVIDIA NeMo microservices.
ServiceNow also launched the AI Control Tower, a centralised command centre to govern, manage, secure, and realise value from any ServiceNow and third-party AI agent, model, and workflow on a single unified platform. The AI Control Tower optimises AI investments and ensures seamless, responsible integration into customers' enterprise strategies.
ServiceNow partners, including Accenture, Adobe, Box, Cisco, Google Cloud, IBM, Jit, Microsoft, Moonhub, RADCOM, UKG, and Zoom, are among those offering the first AI Agent Fabric integrations for seamless, wall-to-wall enterprise workflows across third-party agents, a company statement said.
ServiceNow opened its annual customer and partner event by unveiling the new ServiceNow AI Platform to put any AI agent or model to work across the enterprise. The move incorporates deeper integrations with strategic partners like NVIDIA, Microsoft, Google, and Oracle to accelerate enterprise-wide orchestration, the statement said. Global leaders, including Adobe, Aptiv, the NHL, Visa, and Wells Fargo, are already using ServiceNow AI to drive measurable outcomes.
"We are unleashing the full power of AI across any industry, any agent, any workflow," said McDermott. "For decades, CEOs have wanted technology to accelerate the speed of business transformation. With this next generation architecture, we finally have the foundation to run the integrated enterprise in real time," he said. "We are the only ones who can orchestrate AI, data, and workflows on a single platform. Now is the moment to unlock tomorrow's opportunities with ServiceNow as the AI operating system of the 21st century," he added.
At the conference, ServiceNow launched AI agents to power the rise of self-defending enterprises. The new agents, available within ServiceNow's industry-leading Security and Risk solutions, are designed to improve consistency, identify insights, and reduce response times.
ServiceNow also unveiled new Workflow Data Fabric capabilities, including a data ecosystem built to power AI agents and workflows with real-time intelligence. The new Workflow Data Network is a broad ecosystem of data platforms, applications, and enterprise tools that enhance Workflow Data Fabric and connect, understand, and act on any data source, all on the ServiceNow AI Platform.
The company also introduced its new Core Business Suite, an AI-powered solution that quickly transforms core business processes such as HR, procurement, finance, facilities, and legal.
At the conference, ServiceNow and Amazon Web Services (AWS) announced a new solution designed to help customers unify and act on enterprise data more efficiently through new bi-directional data integration and automated workflow orchestration.
[5]
ServiceNow and NVIDIA fuel a new class of intelligent AI agents across the enterprise
New Apriel Nemotron 15B reasoning model delivers lower latency, lower inference costs, and faster agentic AI -- purpose-built for performance, cost, and scale
ServiceNow brings accelerated data processing to Workflow Data Fabric with the integration of NVIDIA NeMo microservices, driving a closed-loop data flywheel process that enhances model accuracy and personalized user experiences
Knowledge 2025 -- Today at ServiceNow's annual customer and partner event, Knowledge 2025, ServiceNow and NVIDIA announced an expansion of their partnership to fuel a new class of intelligent AI agents across the enterprise. This includes the debut of a new high-performance ServiceNow reasoning model, Apriel Nemotron 15B -- developed in partnership with NVIDIA -- that evaluates relationships, applies rules, and weighs goals to reach conclusions or make decisions. The open-source LLM is post-trained with NVIDIA and ServiceNow-provided data, helping deliver lower latency, lower inference costs, and faster agentic AI. The companies also unveiled plans to bring accelerated data processing to ServiceNow Workflow Data Fabric with the integration of select NVIDIA NeMo microservices, driving a closed-loop data flywheel process that enhances model accuracy and personalized user experiences.
The Apriel Nemotron 15B reasoning model represents a significant step forward in developing compact, enterprise-grade LLMs purpose-built for real-time workflow execution. The model was trained using NVIDIA NeMo, the NVIDIA Llama Nemotron Post-Training Dataset, and ServiceNow domain-specific data with NVIDIA DGX Cloud on Amazon Web Services (AWS). It delivers advanced reasoning capabilities in a smaller size -- making it faster, more efficient, and cost-effective to run on NVIDIA GPU infrastructure as an NVIDIA NIM microservice. Benchmarks show promising results for the model's size category, reinforcing its potential to power agentic AI workflows at scale.
The debut of this model comes as enterprise AI continues to rise as a transformative force -- helping businesses address growing complexity, navigate macroeconomic uncertainty, and drive smarter, more resilient operations.
To support ongoing model innovation and AI agent performance, ServiceNow and NVIDIA also unveiled a new collaboration on a joint data flywheel architecture that will integrate ServiceNow Workflow Data Fabric and select NVIDIA NeMo microservices. This integrated approach curates and contextualizes enterprise workflow data to refine and optimize reasoning models, with guardrails in place to help ensure that customers are in control of how their data is used and processed in a secure and compliant manner. This enables a closed-loop learning process that improves model accuracy and adaptability -- accelerating the development and deployment of highly personalized, context-aware AI agents designed to enhance enterprise productivity.
"With this new Apriel Nemotron 15B reasoning model, we're powering intelligent AI agents that can make context-aware decisions, adapt to complex workflows, and deliver personalized outcomes at scale," said Jon Sigler, EVP of Platform and AI at ServiceNow. "But the model is just one part of the innovation. Our collaboration building a data flywheel -- powered by Workflow Data Fabric and NVIDIA NeMo -- enables a virtuous cycle of learning and improvement. This helps us build AI agents that are contextually aware, deeply personalized, and aligned to the real-time needs of the enterprise."
"NVIDIA and ServiceNow share a mission to reimagine employee productivity through AI tools that help people get more done," said Kari Briski, Vice President of Generative AI Software for Enterprise at NVIDIA. "Together, we've built the Apriel Nemotron 15B model to serve as an enterprise-grade reasoning engine and plan to integrate NVIDIA NeMo microservices into ServiceNow Workflow Data Fabric, providing a powerful foundation for intelligent digital agents."
The new Apriel Nemotron 15B reasoning model and data flywheel integration will better equip AI agents to meet the growing demands of customers with continuous data and process feedback. For example, imagine an AI agent resolving a complex billing issue by pulling in past customer interactions, reasoning through the problem, and recommending the next best step -- getting faster, more accurate, and more efficient with every case it handles. Together, ServiceNow and NVIDIA are turning enterprise data into real-time, personalized action.
This latest milestone builds on the recently announced AI agent evaluation tools and integration of NVIDIA Llama Nemotron models with the ServiceNow AI Platform to accelerate agentic AI development. ServiceNow and NVIDIA have a shared vision of designing innovations that ensure LLMs -- and the experiences they're powering -- are not only intelligent, but also measurable, secure, and ready for real-world deployment. The co-development of the Apriel Nemotron 15B reasoning model and data flywheel integration marks a natural next step in the companies' deep partnership -- furthering their collaboration to power enterprise workflows with greater speed, precision, and cost-efficiency.
About ServiceNow
ServiceNow (NYSE: NOW) is putting AI to work for people. We move with the pace of innovation to help customers transform organizations across every industry while upholding a trustworthy, human-centered approach to deploying our products and services at scale. Our AI platform for business transformation connects people, processes, data, and devices to increase productivity and maximize business outcomes. For more information, visit: www.servicenow.com.
ServiceNow and NVIDIA unveil Apriel Nemotron 15B, a new reasoning AI model designed for enterprise-grade intelligence, lower latency, and cost-efficient inference, marking a significant advancement in AI agent technology for businesses.
ServiceNow and NVIDIA have joined forces to launch Apriel Nemotron 15B, a new open-source reasoning large language model (LLM) that promises to revolutionize enterprise AI agents. Unveiled at ServiceNow's Knowledge 2025 conference, this collaboration marks a significant leap forward in AI technology for businesses [1][2].
Apriel Nemotron 15B is designed to deliver lower latency, lower inference costs, and agentic AI capabilities. The model was trained using NVIDIA NeMo, the NVIDIA Llama Nemotron Post-Training Dataset, and ServiceNow's domain-specific data [1]. What sets this model apart is its ability to package advanced reasoning capabilities in a smaller size, making it more cost-effective and faster to run on NVIDIA GPU infrastructure [3].
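Sources [1] and [5] note that the model is meant to run on NVIDIA GPU infrastructure as an NVIDIA NIM microservice, and NIM microservices for LLMs typically expose an OpenAI-compatible API. The sketch below shows roughly how a deployment might be queried under that assumption; the base URL and model identifier are illustrative placeholders, not values confirmed by the announcement.

```python
# Hedged sketch: querying a NIM-style, OpenAI-compatible chat endpoint.
# The base_url and model name are placeholders for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM deployment
    api_key="not-used-locally",           # local deployments typically ignore the key
)

response = client.chat.completions.create(
    model="apriel-nemotron-15b",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are an IT service desk reasoning agent."},
        {"role": "user", "content": "The finance team reports a VPN outage. "
                                    "Reason through likely causes and propose next steps."},
    ],
    temperature=0.2,
    max_tokens=512,
)

print(response.choices[0].message.content)
```

Because the model is comparatively small, this request pattern can serve many concurrent workflow calls at lower cost than the trillion-parameter general-purpose models the sources compare it against.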
The new model is specifically engineered for reasoning – drawing inferences, weighing goals, and navigating rules in real time. Its compact size, compared to some general-purpose LLMs with over a trillion parameters, allows for faster responses and lower inference costs while maintaining enterprise-grade intelligence [2].
In addition to the model, ServiceNow and NVIDIA have introduced a joint data flywheel architecture. This innovative system integrates ServiceNow's Workflow Data Fabric with NVIDIA NeMo microservices, including NeMo Customizer and NeMo Evaluator [2]. The architecture enables a closed-loop process that refines and improves AI performance by using workflow data to personalize responses and enhance accuracy over time [4].
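The flywheel described above amounts to a closed feedback loop: gather workflow interactions, filter them through guardrails, and feed curated examples back into model refinement. The sketch below illustrates that loop in generic Python; the helper functions and data shapes are hypothetical stand-ins for illustration and do not correspond to the NeMo Customizer or NeMo Evaluator APIs.

```python
# Hedged illustration of a data-flywheel cycle. All helpers are hypothetical
# stubs; a real system would call managed customization/evaluation services
# behind the data guardrails the sources describe.
from dataclasses import dataclass

@dataclass
class Interaction:
    prompt: str
    agent_response: str
    user_feedback: float  # e.g., 1.0 = resolved, 0.0 = escalated

def collect_interactions() -> list[Interaction]:
    """Pull recent agent interactions from workflow logs (stub)."""
    return []

def passes_guardrails(item: Interaction) -> bool:
    """Drop records that are restricted or low quality (stub)."""
    return item.user_feedback >= 0.8

def fine_tune(model_id: str, examples: list[Interaction]) -> str:
    """Kick off a refinement job and return the new model version (stub)."""
    return f"{model_id}-refined"

def flywheel_step(model_id: str) -> str:
    curated = [i for i in collect_interactions() if passes_guardrails(i)]
    if not curated:
        return model_id  # nothing useful gathered this cycle
    return fine_tune(model_id, curated)
```

In a production setting, the evaluation and fine-tuning stubs would be replaced by calls to managed services, and the guardrail step is where the customer data controls described in the sources would apply.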
The potential impact of this technology is already being demonstrated in real enterprise scenarios. For instance, AstraZeneca is implementing AI agents powered by this model to help employees resolve issues and make decisions more efficiently, potentially saving 90,000 hours of employee time [2].
This collaboration between ServiceNow and NVIDIA represents a shift in enterprise AI strategy, moving from static models to intelligent systems that evolve. It's expected to accelerate end-to-end enterprise automation at scale, with ServiceNow positioning itself as the "AI operating system of the 21st century" [3][5].
ServiceNow AI Agents, powered by Apriel Nemotron 15B, are expected to roll out following Knowledge 2025. The model will support ServiceNow's Now LLM services and become a key engine behind the company's agentic AI offerings [2]. With projections indicating that ServiceNow's annual revenue will top $13 billion this year, and its flagship generative AI product, Now Assist, expected to hit $1 billion in annual contract value by 2026, the future looks promising for this AI-driven enterprise transformation [3].