6 Sources
[1]
The era of agentic chaos and how data will save us
AI agents are moving beyond coding assistants and customer service chatbots into the operational core of the enterprise. The ROI is promising, but autonomy without alignment is a recipe for chaos. Business leaders need to lay the essential foundations now.

Agents are independently handling end-to-end processes across lead generation, supply chain optimization, customer support, and financial reconciliation. A mid-sized organization could easily run 4,000 agents, each making decisions that affect revenue, compliance, and customer experience. The transformation toward an agent-driven enterprise is inevitable. The economic benefits are too significant to ignore, and the potential is becoming a reality faster than most predicted. The problem? Most businesses and their underlying infrastructure are not prepared for this shift.

Early adopters have found unlocking AI initiatives at scale to be extremely challenging. Companies are investing heavily in AI, but the returns aren't materializing. According to recent research from Boston Consulting Group, 60% of companies report minimal revenue and cost gains despite substantial investment. The leaders, however, reported five times the revenue increases and three times the cost reductions. Clearly, there is a massive premium for being a leader. What separates the leaders from the pack isn't how much they're spending or which models they're using. Before scaling AI deployment, these "future-built" companies put critical data infrastructure capabilities in place. They invested in the foundational work that enables AI to function reliably.

To understand how and where enterprise AI can fail, consider four critical quadrants: models, tools, context, and governance. Take a simple example: an agent that orders you pizza. The model interprets your request ("get me a pizza"). The tool executes the action (calling the Domino's or Pizza Hut API). Context provides personalization (you tend to order pepperoni on Friday nights at 7pm). Governance validates the outcome (did the pizza actually arrive?). Each dimension represents a potential failure point.
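As a rough illustration, the pizza example can be sketched as a four-stage loop. Every function and variable name below is hypothetical, invented only to make the framing concrete; this is not any vendor's API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the four quadrants: model, tool, context, governance.

@dataclass
class OrderResult:
    item: str
    delivered: bool

def interpret(request: str, context: dict) -> str:
    """Model + context: turn a vague request into a concrete intent."""
    if "pizza" in request.lower():
        # Context supplies the personalization the bare request lacks.
        return context.get("usual_order", "plain cheese pizza")
    raise ValueError("request not understood")

def place_order(item: str) -> OrderResult:
    """Tool: the action layer (a stand-in for a real ordering API call)."""
    return OrderResult(item=item, delivered=True)

def govern(result: OrderResult) -> bool:
    """Governance: validate the outcome, not just the action."""
    return result.delivered

context = {"usual_order": "pepperoni pizza"}       # Friday-night preference
intent = interpret("get me a pizza", context)      # model + context
result = place_order(intent)                       # tool
assert govern(result), "outcome validation failed" # governance
print(intent)  # pepperoni pizza
```

A failure at any stage breaks the whole loop: a misread request, a failed API call, stale context, or an unvalidated outcome.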
[2]
Why agentic AI pilots stall - and how to fix them
Addressing a critical inflection point in enterprise AI - agentic AI adoption

Agentic AI is the latest buzz in boardrooms. Unlike generative AI tools, agentic AI systems act as autonomous agents that can reason, make decisions, and act across workflows to achieve goals. Done right, they promise to reduce manual work and unlock new levels of productivity. But many early adopters of AI tools are struggling. Pilot projects stumble, costs escalate, and results fail to match expectations. The problem isn't that agentic AI is overhyped; it is that businesses are moving too fast without the strategy, infrastructure, and data foundations required to make it work as intended. And this isn't surprising when you consider that 80% to 90% of all enterprise data is unstructured, based on multiple analyst reports in recent years.

As someone who has built platforms through multiple waves of 'intelligent automation,' I've seen firsthand the same repeated patterns: technology alone doesn't transform organizations; alignment, governance, and cultural readiness do. The real breakthrough comes when innovation is grounded in trust and connected to business outcomes. Where conventional AI might sort invoices, an agentic AI could approve payments, flag anomalies, and update compliance systems. That leap demands a contextual understanding of how data, processes, and rules fit together.

Too many organizations are treating agentic AI as a bolt-on upgrade, as if these agents were simply more advanced chatbots. The reality is more complex: agentic AI needs to be woven into the enterprise fabric, connected to the right data and workflows, and supported by governance. Without that foundation, autonomy quickly becomes chaos.

One of the biggest stumbling blocks is infrastructure. Many enterprises still run on siloed content repositories, legacy systems, and fragmented integrations. In these environments, AI agents can't access the full range of unstructured data they need to perform at their best.
In government, for example, content and processes are spread across different agencies, often using decades-old applications. Asking an AI agent to make decisions without integrating those systems is like asking it to assemble a puzzle with half the pieces missing. Preparing for agentic AI requires investing in cloud-native foundations and interoperable content platforms that unify information and enable seamless connections across applications. Without this groundwork, agentic AI risks acting on partial or outdated information and making flawed decisions as a result.

Even with the right systems in place, poor data quality is a critical flaw. Agentic AI thrives on complete, accurate, and governed information. If datasets are inconsistent or scattered, agentic AI can't make sound decisions. Healthcare illustrates this challenge clearly. An agent supporting clinicians must pull from medical histories, lab results, and imaging data in real time. If one piece is missing or misaligned, the recommendations these agentic technologies produce could be flawed. The lesson for early adopters is clear: start with a data audit and gain a firm understanding of where your unstructured data is. Know what you have, where it lives, and how it's governed before handing decision-making power to AI.

Another misconception is that agentic AI removes people from the loop. In reality, the most effective early use cases blend autonomy with oversight. Take financial services. Agentic AI may verify documents and draft compliance reports, but humans still make the final call on high-risk cases and decide how to proceed when a document is flagged by an agent. This balance accelerates workflows without eroding trust and accountability. Strong governance must be embedded from the outset, covering regulation, ethics, and operational control. Without it, these agents risk amplifying bias, undermining trust, and exposing organizations to compliance failures.
The experiences of early adopters reveal three clear lessons. First, projects work best when they begin with a clear business outcome, not a fascination with the technology or jumping on a trend. Organizations that take time to define the processes they want to improve and the results they need to achieve are the ones seeing value. Second, they invest early in the groundwork. Modern infrastructure and clean data may not grab headlines, but they are essential to making the headline-grabbing innovations possible. And finally, they treat autonomy as something to scale gradually. The most effective implementations begin with human-in-the-loop models and only expand to greater autonomy once confidence and maturity grow. This approach builds trust in the technology while maintaining accountability.

These early lessons are already shaping a picture of maturity. As agentic AI matures, it will move beyond isolated experiments and towards interconnected systems. The real breakthrough will come from agentic AI networks coordinating across workflows. In a hospital, for example, one agent might surface patient histories, another manage scheduling, and a third flag billing issues, all contributing to a shared context that supports clinicians. Proof points will become non-negotiable. Businesses will expect agents to show their work: the data they used, the reasoning they followed, and the compliance checks they applied. Without this transparency, agentic AI won't be trusted to handle sensitive or high-value work. And the technology landscape itself will have to open up. Organizations will want the flexibility to integrate agentic AI powered by different models, switch providers as needs evolve, and scale across hybrid or multi-cloud environments. Flexibility and interoperability will be essential to protect long-term investments. Far from failing, agentic AI is in its adolescence.
Just as cloud computing went through a difficult transition phase before proving indispensable, agents too will require a period of adjustment. The organizations that succeed will be those that prepare best, not those that adopt fastest. By aligning strategy, modernizing infrastructure, cleaning data, and embedding governance, enterprises can move from experimentation to transformation. With the right foundations, agentic AI can do far more than just automate tasks. It will enable genuinely intelligent systems that reshape how work gets done - and that could be the most significant shift in enterprise technology for a generation.
[3]
Why agentic AI became the breakout trend of mid-2025: By Mayuri Jain
If 2024 was the year generative AI proved it could talk, mid-2025 was when agentic AI proved it could do. The market stopped obsessing over clever outputs and started demanding completed workflows. That single shift explains why AI agents became the loudest, fastest-moving trend across enterprise AI adoption. "The promise is not conversation. The promise is completed work."

The evidence showed up in hard numbers, not just product launches. A leading research index reported that 78% of organizations used AI in 2024, up sharply from the prior year, signaling a broad base ready to absorb the next abstraction layer. Another widely cited enterprise survey found a stark execution gap: adoption success rose to 80% with a formal strategy, but fell to 37% without one. In other words, the constraint was no longer model access. It was operating discipline.

From chat to action - what changed

For most leaders, the early phase of AI felt like an interface upgrade. People asked questions. Systems answered. Useful, yes, but bounded. Agentic AI changed the unit of value from "an answer" to "a result." That change happened because three building blocks matured at once:

* Tool use became practical in mainstream stacks.
* Orchestration patterns hardened into reusable architecture.
* Evaluation became a production requirement, not a research hobby.

A prominent enterprise survey reported that 23% of respondents were already scaling agentic systems, and another 39% were experimenting. That is not a niche. That is an early majority signal. The best mental model is simple. Traditional copilots assist a person. Agents coordinate work across systems. That includes searching, filing, updating records, routing tickets, and triggering downstream actions.

The new definition of "value" - outcomes, not demos

If you want a quick gut-check for whether a project is truly agentic, ask one question: "Does it reliably finish the last mile?" Most enterprise pilots died in the last mile.
They produced drafts, summaries, or recommendations, then handed the messy work to humans. Agents aim to remove that handoff, or at least compress it into approval. This is why "agent washing" became a real complaint. A senior technical leader described a wave of products calling themselves agents, despite behaving like chatbots with a new label. The market's response was predictable: buyers raised the bar from novelty to proof. That is also why the most credible mid-2025 narratives emphasized measurable operational results, not marketing adjectives. "An agent without accountability is a demo. An agent with accountability is a system."

The agent stack - orchestration, MCP protocol, and evals

Agents are not one model plus a prompt. They are a stack. If you treat them like a feature, you get brittle behavior. If you treat them like software, you get compounding capability. "The winning teams build an agent like a product, not like a prompt." In practice, the stack has four layers. Mid-2025 was when the "tool layer" became the loudest bottleneck. People realized that capability was stranded without integration.

Tool integration is now the bottleneck

Enter the rise of standardized agent-to-tool patterns, with the MCP protocol frequently discussed as a practical way to connect agents to real enterprise services. Technical guidance from a major model lab described how agents scale better by writing code to call tools, instead of repeatedly injecting tool definitions into prompts. Separately, a major developer platform framed MCP as an emerging de facto integration method, while stressing that tooling and API programs still mattered as much as the protocol itself. This matters because enterprises do not run on one system. They run on hundreds. Without a repeatable tool contract, every agent becomes a custom integration project. That is a slow path. So the "latest perspective" from serious builders was not "which model is best."
It was "how do we standardize safe action across our stack."

Evals move from research to operations

The second shift was cultural. Teams stopped treating evaluation as a one-time benchmark. They started treating it as continuous quality control. Production-grade AI agents fail in new ways:

* Tool calls can break silently.
* Retrieval can drift as content changes.
* Autonomy can amplify small errors into large consequences.

That is why evaluation frameworks moved closer to what SRE teams already do: define success metrics, test edge cases, monitor regressions, and enforce change control. One major reason this shift accelerated is executive expectation. A workplace trend report found that leaders increasingly expect teams to redesign processes, build multi-agent systems, and manage hybrid teams of people and agents. When leadership expects a new operating model, governance and evals become table stakes. "Evals are the seatbelt. Autonomy is the accelerator."

Trust is the product - governance, security, and accountability

As agents gain autonomy, trust stops being a slogan and becomes a design constraint. The market is moving from "cool" to "controlled." "If you cannot audit it, you cannot scale it." Here is the key difference between classic automation and agentic AI. Classic automation is deterministic. Agents are probabilistic. That does not mean they are unsafe. It means they require a different control plane. Several 2025 data points underline why governance is rising:

* A 2025 governance survey found 59% of organizations had established a role or office tasked with AI governance.
* A responsible AI survey reported 61% of respondents were at strategic or embedded maturity stages for responsible AI.
* Public discussion increasingly highlighted gaps between ambition and operational readiness.

Why "agentic" multiplies risk surfaces

Agents create new risk surfaces because they connect and act:

* They can touch multiple systems in one flow.
* They can store credentials or tokens.
* They can be manipulated through tool outputs, not just prompts.

Recent reporting on vulnerabilities in an MCP server ecosystem highlighted how security issues can emerge when components are combined, even if each looks safe alone. This is not a reason to pause adoption. It is a reason to design for containment. The safest organizations adopt a few habits early:

* Assume every tool output is untrusted input.
* Scope agent permissions by job role and task.
* Log every action with human-readable rationale.
* Build an approval step for irreversible operations.

A practical control plane for autonomous work

Governance does not need to be slow. It needs to be explicit. A control plane for AI agents should answer five questions. If you can answer those questions, you can scale. If you cannot, you are gambling with operational credibility. "The best agent is not the smartest. It is the most accountable."

Where ROI is real - the workflows that scale first

The ROI conversation matured in 2025. Leaders stopped asking, "Can it do it?" and started asking, "Can it do it every day?" That shift favors boring, high-frequency workflows. "Repetition is where agents earn trust." A 2025 enterprise spending analysis estimated $37B in generative AI spend in 2025, with a large share going to application-layer products. More spend means more scrutiny. Scrutiny means ROI must be defensible. So where does value show up first?

High-frequency, low-regret automation

These are workflows with clear inputs, repeatable steps, and reversible outcomes:

* Triage and routing for service operations
* Knowledge base updates and hygiene
* Data enrichment and CRM cleanup
* Scheduling, follow-ups, and status reporting

The pattern is consistent. Start with work that humans do reluctantly, but consistently. That is where autonomy is least controversial and most measurable.
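A minimal sketch of those containment habits, assuming a toy in-memory setup. The agent names, scope tables, and `execute` helper are all illustrative assumptions, not any real framework's API.

```python
# Containment habits as code: scoped permissions, an audit log with
# human-readable rationale, and an approval gate for irreversible operations.

AGENT_SCOPES = {"billing-agent": {"read_invoice", "send_payment"}}
IRREVERSIBLE = {"send_payment", "delete_record"}
AUDIT_LOG = []  # (agent, action, rationale) tuples

def execute(agent: str, action: str, rationale: str, approved: bool = False) -> str:
    # Habit: scope agent permissions by job role and task.
    if action not in AGENT_SCOPES.get(agent, set()):
        raise PermissionError(f"{agent} is not scoped for {action}")
    # Habit: build an approval step for irreversible operations.
    if action in IRREVERSIBLE and not approved:
        AUDIT_LOG.append((agent, action, "blocked: awaiting human approval"))
        return "pending_approval"
    # Habit: log every action with a human-readable rationale.
    AUDIT_LOG.append((agent, action, rationale))
    return "done"

print(execute("billing-agent", "read_invoice", "monthly close review"))  # done
print(execute("billing-agent", "send_payment", "vendor invoice due"))    # pending_approval
print(execute("billing-agent", "send_payment", "vendor invoice due", approved=True))  # done
```

The point of the sketch is that none of this requires heavyweight tooling: a permission table, an append-only log, and one explicit approval flag already answer most of the control-plane questions.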
Separately, agents are also emerging in commerce contexts, with industry efforts to set rules for "agentic commerce" and trusted checkout flows. Even that domain signals the same truth: trust rules must evolve with capability.

Knowledge work that finally gets operational

The second ROI zone is knowledge work that used to be "too fuzzy" to automate. Agents help by turning fuzzy tasks into structured steps:

* Research to shortlist to decision memo
* Draft to review to publish
* Incident to diagnosis to remediation runbook

A crucial nuance: humans still own risk. Agents can do first pass work, then escalate. That hybrid mode is often the winning adoption path. "Agents win when humans set intent and verify outcomes."

A 90-day playbook to deploy agentic AI safely

Speed matters, but sequence matters more. The fastest teams are not reckless. They are structured. "Move fast, but instrument everything." Here is a pragmatic 90-day plan that aligns enterprise AI adoption with AI governance.

Days 1-30 - pick the right wedge

* Choose one workflow with high volume and clear success criteria.
* Map the tools it touches and the permissions required.
* Define failure states and escalation paths.
* Establish a baseline with manual metrics.

The goal is not autonomy on day one. The goal is a reliable loop.

Days 31-60 - build the reliability loop

* Implement evals that match real tasks, not generic benchmarks.
* Add monitoring for tool failures, latency, and drift.
* Create an approval step for irreversible actions.
* Log actions for audit and learning.

This is where teams separate "agentic theater" from production behavior.

Days 61-90 - scale with guardrails

* Expand to adjacent workflows that share tools and patterns.
* Standardize integration using a protocol approach, where appropriate.
* Formalize governance roles, even if lightweight.
* Train users on when to trust and when to override.

A simple heuristic helps: autonomy expands in proportion to observability. "Scale is earned. It is not declared."

The bold prediction

By mid-2026, competitive advantage will shift from "having models" to "running an agent operating system," where orchestration, MCP-style tool contracts, and evals are managed like core infrastructure. Organizations that treat agents as products will outpace those treating them as features.
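The reliability loop described in the playbook, evals run as continuous quality control with a regression threshold, can be sketched minimally. The routing function, the eval cases, and the 90% pass-rate threshold are all illustrative assumptions standing in for a real agent step and a real task suite.

```python
# Evals as continuous quality control: a small suite of task-shaped cases
# and a pass-rate gate that blocks deployment on regression.

def classify_ticket(text: str) -> str:
    """Stand-in for the agent step under test (here: trivial keyword routing)."""
    return "billing" if "invoice" in text.lower() else "support"

# Eval cases mirror real tasks, not generic benchmarks.
EVAL_CASES = [
    ("Where is my invoice for March?", "billing"),
    ("The app crashes on login", "support"),
    ("Invoice total looks wrong", "billing"),
]

def run_evals(threshold: float = 0.9) -> bool:
    passed = sum(classify_ticket(query) == expected for query, expected in EVAL_CASES)
    rate = passed / len(EVAL_CASES)
    # Enforce change control: only a pass rate at or above the threshold ships.
    return rate >= threshold

print(run_evals())  # True with this toy routing function
```

Running this gate on every change to the agent, prompts, tools, or retrieval sources is what turns an eval from a one-time benchmark into an operational guardrail.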
[4]
Governance Helps Agentic AI Move Faster Inside Companies | PYMNTS.com
A new report from Harvard Business Review Analytic Services finds that enthusiasm for agentic AI is running well ahead of organizational readiness. Most executives expect agentic AI to transform their businesses, and many believe it will become standard across their industries. Early adopters are already seeing gains in productivity and decision-making. Yet for most organizations, real-world use remains limited. Only a minority are using agentic AI at scale, according to the report, and many struggle to translate high expectations into consistent business results. The gap is not about belief in the technology but about preparation.

The report shows that data foundations are improving, but governance, workforce skills and clear measures of success lag behind. Few organizations have defined what success looks like or how to manage risk when AI systems act with greater autonomy. Leaders that are making progress tend to focus on practical use cases, invest in workforce readiness, and tie agentic AI efforts directly to business strategy. The report concludes that agentic AI can deliver meaningful value, but only for organizations willing to rethink processes, invest in people, and put strong guardrails in place before scaling. "The gap between expectation and reality remains wide," the report reads. "Organizational readiness can help bridge the gap by giving implementation a better chance of succeeding."

Singapore Standards

Governance can also be mandated. According to Computer Weekly, Singapore has introduced what it describes as the world's first formal governance framework designed specifically for agentic AI. Announced by the country's minister for digital development and information at the World Economic Forum in Davos, the framework is intended to help organizations deploy AI agents that can plan, decide and act with limited human input.
Developed by the Infocomm Media Development Authority (IMDA), the framework builds on Singapore's earlier AI governance efforts but shifts the focus from generative AI to systems that can take real-world actions, such as updating databases or processing payments. The goal is to balance productivity gains with safeguards against new operational and security risks. The framework lays out practical steps for enterprises, including setting clear limits on how much autonomy AI agents have, defining when human approval is required and monitoring systems throughout their lifecycle. It also highlights risks such as unauthorized actions and automation bias, where people place too much trust in systems that have worked well in the past. Industry leaders welcomed the move, saying clear rules are needed as agentic AI begins to influence decisions with real-world consequences. IMDA has positioned the framework as a living document and is inviting feedback from companies as it continues to refine guidance for testing and oversight.

Identity Factors

Another report warns that enterprises are racing ahead with agentic AI adoption while falling behind on governance and security. Executives from Accenture and Okta say most companies already use AI agents across everyday business tasks, but very few have put effective oversight in place. According to Okta, while more than nine in ten organizations are using AI agents, only a small fraction believe they have strong governance strategies. Accenture's research points to the same imbalance, showing widespread use of AI agents without clear plans for managing the risks they introduce. The core challenge, the report argues, is that AI agents are increasingly acting like digital employees without being managed as such. These agents need access to systems, data, and workflows to be useful, which creates new risks if their identities and permissions are not clearly defined.
The authors recommend treating AI agents as formal digital identities, with clear rules around authentication, access, monitoring and lifecycle management. Without this structure, organizations risk creating unmanaged "identity sprawl" that could turn agentic AI from a productivity gain into a major security and compliance problem. "Agents need their own identity," the report says. "Once you accept that, everything else flows -- access control, governance, auditing and compliance."
[5]
Agentic AI Breaks Out of the Lab and Into the Org Chart
Agentic AI isn't a futuristic concept anymore. In just a few months, more companies have shifted from testing AI to letting it do the work, changing what "fast" and "competitive" mean in product, operations and decision making. Artificial intelligence has moved rapidly from experimentation to execution inside large enterprises. New PYMNTS Intelligence data from the latest CAIO Report shows that firms have effectively settled the debate over the use of agentic AI. What is changing now is how much authority companies are willing to give these systems and how quickly they are putting them to work. Across industries, executives are shifting from cautious interest to active deployment, with implications for how they build products, serve customers and make operational decisions. The findings point to a market that has crossed a threshold, where trust, adoption and scale are advancing in tandem rather than sequentially.

The Rise of Agentic AI Trust Shift

In just three months, resistance to granting AI systems real autonomy fell sharply. In August 2025, nearly all surveyed firms refused to give agentic AI any meaningful authority. By November, that stance had softened considerably, with nearly 40% of product leaders now willing to allow some level of autonomous access. The technology sector is driving this change, with more than half of firms open to agentic autonomy and nearly one-third prepared to grant full execution rights across functions. This shift reflects a broader recalibration of risk, where the cost of inaction increasingly outweighs concerns about control.

Agentic AI Interest Surge

Interest in agentic AI has intensified across every core product function. By November, more than 86% of chief product officers reported a strong interest in using autonomous agents for customer and user experience research, up sharply from August. Product lifecycle management emerged as the top use case, with nearly 90% expressing high interest.
The breadth of this demand signals that firms no longer view agentic AI as a niche efficiency tool. Instead, they increasingly see it as a foundational capability that can support decision-making from early research through post-launch analysis.

The Widening Action Gap

The share of companies merely exploring agentic AI is shrinking, while active use is rising. In August, more than half of firms said they were only considering the technology. By November, that figure had dropped to 30%. At the same time, nearly one-quarter of companies reported they were either piloting or fully using agentic AI. This shift suggests a widening divide between organizations moving quickly to operationalize AI and those that remain stalled at the evaluation stage, with speed becoming a competitive differentiator.

Universal Demand for Agentic AI

Agentic AI adoption is converging around a standard set of use cases across industries rather than fragmenting by sector. Interest levels for core functions such as customer research, product lifecycle management and reporting rarely fall below 70%, regardless of whether firms operate in technology, goods or services. This pattern points to the emergence of a universal AI playbook, in which companies expect autonomous systems to support the entire product stack rather than just isolated tasks. The implication is that vendors and platforms must deliver breadth, not just depth.

The Mainstream Adoption Leap

Agentic AI has crossed into the physical economy. Goods and manufacturing firms, which reported virtually no usage in August, moved to nearly 20% active pilots by November. Services firms saw adoption jump fivefold over the same period, while technology companies extended their lead. This rapid uptake across traditionally slower-moving sectors indicates that AI-driven automation is no longer confined to digital-first businesses. The gap between digital and physical industries is narrowing as agentic systems become embedded in everyday operations.
[6]
Closing the Control Gap: Mohar V on How Agentic AI is Redefining Enterprise Work
Artificial intelligence is quietly changing its personality. For years, AI has waited for instructions, responding only when asked. Now, a new kind of AI is stepping forward, one that can plan, decide, and act on its own. This shift, known as agentic AI, is reshaping how companies get work done. In a recent episode of the Analytics Insight podcast, host Priya Dialani spoke with Mohar V, Co-Founder of TECHVED Consulting, about why this moment feels different. According to Mohar, businesses have hit a wall. Digital systems are everywhere, customers expect instant responses, and teams are stretched thin. Traditional AI helped, but only up to a point. It could automate individual tasks, but it lacked the ability to control the overall workflow. Agentic AI goes a step further. Once it is set up with an objective, the system works out the necessary steps on its own.
Enterprise adoption of agentic AI surged in late 2025, with nearly 40% of product leaders now willing to grant AI systems autonomy. But rapid deployment exposes critical gaps in AI governance, data infrastructure, and security protocols. While early adopters report significant enterprise productivity gains, 60% of companies see minimal ROI, revealing a widening divide between leaders and laggards in the race to operationalize autonomous AI agents.
The shift from experimentation to execution happened faster than most predicted. By November 2025, nearly 40% of product leaders expressed willingness to grant AI agents meaningful autonomy, a dramatic reversal from August, when almost all firms refused such access [5]. The technology sector leads this transformation, with more than half of firms now open to agentic autonomy and nearly one-third prepared to grant full execution rights across functions [5].
Source: TechRadar
Agentic AI proved it could complete work, not just generate responses. AI agents now independently handle end-to-end workflows across lead generation, supply chain optimization, customer support, and financial reconciliation [1]. A mid-sized organization could easily run 4,000 agents, each making decisions that affect revenue, compliance, and customer experience [1]. Research shows 78% of organizations used AI in 2024, creating a broad base ready to absorb this next abstraction layer [3].

While enterprise adoption accelerates, the returns tell a troubling story. According to Boston Consulting Group research, 60% of companies report minimal revenue and cost gains despite substantial investment [1]. However, leaders achieved five times the revenue increases and three times the cost reductions compared to others [1]. This massive premium for being a leader reveals what separates success from failure: not spending levels or model selection, but foundational data infrastructure capabilities.

The share of companies merely exploring agentic AI dropped from over half in August to 30% by November 2025, while nearly one-quarter reported actively piloting or fully using the technology [5]. An enterprise survey found adoption success rose to 80% with a formal strategy but fell to 37% without one [3]. The constraint is no longer model access but operating discipline and organizational readiness.

A Harvard Business Review Analytic Services report finds enthusiasm for agentic AI running well ahead of organizational readiness [4]. While data foundations are improving, AI governance, workforce readiness, and clear measures of success lag behind [4]. Few organizations have defined what success looks like or how to manage risk when AI systems act with greater autonomy.
Source: PYMNTS
Singapore introduced the world's first formal governance framework designed specifically for agentic AI, announced at the World Economic Forum in Davos [4]. Developed by the Infocomm Media Development Authority, the framework helps organizations deploy AI agents that can plan, decide and act with limited human input. It lays out practical steps including setting clear limits on autonomy, defining when human oversight is required, and monitoring systems throughout their lifecycle [4].

Research from Okta reveals that while more than nine in ten organizations use AI agents, only a small fraction believe they have strong governance strategies [4]. The core challenge is that AI agents increasingly act like digital employees without being managed as such, creating security risks around authentication, access control, and compliance [4].
Many enterprises still run on siloed content repositories, legacy systems, and fragmented integrations where AI agents can't access the full unstructured data they need [2]. The problem intensifies when considering that 80% to 90% of all enterprise data is unstructured [2]. Without cloud-native foundations and interoperable content platforms, AI agents risk acting on partial or outdated information and making flawed decisions.
Source: MIT Tech Review
Tool integration emerged as the loudest bottleneck in mid-2025 [3]. Standardized agent-to-tool patterns, with the MCP protocol frequently discussed as a practical way to connect agents to enterprise services, became essential [3]. Without a repeatable tool contract, every agent becomes a custom integration project. Orchestration patterns hardened into reusable architecture, and evaluation frameworks moved from research to operations as continuous quality control [3].

The most effective implementations blend autonomy with human oversight rather than removing people from the loop [2]. In financial services, AI agents may verify documents and draft compliance reports, but humans make the final call on high-risk cases [2]. This balance accelerates workflows without eroding trust and accountability.

Early adopters reveal three clear lessons for achieving business outcomes. First, projects work best when they begin with a clear business outcome, not fascination with technology [2]. Second, they invest early in data infrastructure and clean data, which may not grab headlines but enable headline-grabbing innovations [2]. Finally, they treat autonomy as something to scale gradually, beginning with human-in-the-loop models and expanding only once confidence and maturity grow [2].

Interest in agentic AI intensified across every core product function, with more than 86% of chief product officers reporting strong interest in using autonomous agents for customer research by November 2025 [5]. Product lifecycle management emerged as the top use case, with nearly 90% expressing high interest [5]. Goods and manufacturing firms moved from virtually no usage in August to nearly 20% active pilots by November, while services firms saw adoption jump fivefold [5]. The question facing organizations is no longer whether to adopt agentic AI, but whether they have the governance, infrastructure, and workforce readiness to avoid agentic chaos and capture the ROI that leaders are already achieving.

Summarized by Navi