7 Sources
[1]
The era of agentic chaos and how data will save us
AI agents are moving beyond coding assistants and customer service chatbots into the operational core of the enterprise. The ROI is promising, but autonomy without alignment is a recipe for chaos. Business leaders need to lay the essential foundations now.

Agents are independently handling end-to-end processes across lead generation, supply chain optimization, customer support, and financial reconciliation. A mid-sized organization could easily run 4,000 agents, each making decisions that affect revenue, compliance, and customer experience. The transformation toward an agent-driven enterprise is inevitable. The economic benefits are too significant to ignore, and the potential is becoming a reality faster than most predicted.

The problem? Most businesses and their underlying infrastructure are not prepared for this shift. Early adopters have found unlocking AI initiatives at scale to be extremely challenging. Companies are investing heavily in AI, but the returns aren't materializing. According to recent research from Boston Consulting Group, 60% of companies report minimal revenue and cost gains despite substantial investment. However, the leaders reported they achieved five times the revenue increases and three times the cost reductions. Clearly, there is a massive premium for being a leader.

What separates the leaders from the pack isn't how much they're spending or which models they're using. Before scaling AI deployment, these "future-built" companies put critical data infrastructure capabilities in place. They invested in the foundational work that enables AI to function reliably.

To understand how and where enterprise AI can fail, consider four critical quadrants: models, tools, context, and governance. Take a simple example: an agent that orders you pizza. The model interprets your request ("get me a pizza"). The tool executes the action (calling the Domino's or Pizza Hut API). Context provides personalization (you tend to order pepperoni on Friday nights at 7pm). Governance validates the outcome (did the pizza actually arrive?). Each dimension represents a potential failure point.
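To make those four quadrants concrete, here is a minimal Python sketch of the pizza agent, with each quadrant isolated as its own failure point. Everything here is illustrative: the function and field names are invented, and the model and vendor calls are simple stand-ins rather than real API integrations.

```python
from dataclasses import dataclass


@dataclass
class AgentContext:
    """Context quadrant: personalization the model alone can't supply."""
    usual_order: str = "pepperoni"
    usual_time: str = "Friday 19:00"


def interpret_request(request: str) -> str:
    """Model quadrant: turn a natural-language request into an intent
    (a stand-in for an LLM call)."""
    return "order_pizza" if "pizza" in request.lower() else "unknown"


def call_vendor_api(order: str) -> dict:
    """Tool quadrant: execute the action against an external system
    (a stand-in for a real vendor API)."""
    return {"order": order, "status": "placed", "eta_minutes": 30}


def validate_outcome(result: dict) -> bool:
    """Governance quadrant: did the pizza actually arrive?"""
    return result.get("status") == "placed"


def run_agent(request: str, ctx: AgentContext) -> dict:
    intent = interpret_request(request)        # model failure point: misread intent
    if intent != "order_pizza":
        raise ValueError("model misinterpreted the request")
    result = call_vendor_api(ctx.usual_order)  # tool failure point: API call fails
    if not validate_outcome(result):           # governance failure point: no check
        raise RuntimeError("order was not confirmed")
    return result                              # context shaped what was ordered


print(run_agent("get me a pizza", AgentContext()))
```

The point of separating the four pieces is that each can be tested, monitored, and blamed independently when something goes wrong.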
[2]
3 risks hindering enterprise-ready AI -- and how low-code workflows help
Addressing the hidden risks blocking enterprise agentic AI

You've likely heard of agentic AI by now: systems that can autonomously plan, execute, and adapt across tasks with "minimal" human oversight (quotation marks intentional). With this, we've seen a shift from AI as a mere tool to AI as a collaborator. However, there is also a growing tension: agentic AI needs deep access to your data to act autonomously, but that level of access makes it hard to deploy safely and responsibly at scale. Data access without proper guardrails makes agentic AI harder to audit and integrate into environments where trust, data governance, and even operational complexity are non-negotiable. Most enterprises don't struggle to build agentic systems -- they struggle to trust them.

In this article, we'll explore three risks holding agentic AI back from enterprise-readiness, especially when it comes to agents working with data, and then three ways intuitive, low-code workflows can turn these systems into reliable colleagues. Workflows don't limit the "intelligence" of agentic AI; rather, they act as a "safe layer" between agentic AI and your data, making it possible to operationalize agentic AI in the enterprise.

Risk #1: No transparency in decision making

Most AI agents today rely on LLMs as the planners or "brains" behind the scenes. This means most agents don't follow predefined blueprints or logic. The way they work is dynamic and always changing. Their actions are based on likelihood derived from vast datasets -- not knowledge. We've all seen it: an AI telling us what we want to hear instead of what's true, simply because we (subconsciously or not) led it in the wrong direction with our responses. Think of that famous example where someone convinced an AI that 2+2=5.

The result? These actions are difficult to inspect, explain, or trace. Without a clear, visible audit trail, enterprises cannot confidently answer a critical question: "Why did the agent do that?" Especially when the agent takes an unexpected action. Ultimately, this makes debugging challenging, and likely miserable. Instead of systematic debugging, enterprise teams face the time-consuming task of second-guessing the agent's behavior. This slow, manual, error-prone, unscalable process of "prompt forensics" is ineffective for enterprises. If you can't trace it, you can't trust it. And on the topic of trust...

Risk #2: Indeterminism means no operational trust

Agentic AI is not deterministic, which means it doesn't produce consistent, repeatable outputs. Identical tasks could yield different actions. In addition, agents could hallucinate actions that seem plausible but end up being just plain wrong. And there is often no built-in layer to enforce or constrain what an agent can or can't do. This is particularly high-risk in domains like financial systems or anything touching personal data, where data leakage is unacceptable. Especially in those cases, lack of consistency, transparency, explainability, and control ultimately leads to lack of trust.

Risk #3: No clear boundary between data and AI

In traditional enterprise systems, data and logic are clearly separated. IT teams know where the data is stored and how it's accessed (whether through permissions or trust), and an explicit set of rules governs how that data is used. Agentic systems break these rules. They blend reasoning, knowledge, and actions into an opaque process.
Drawing a clear boundary between what information the agent has access to and what the agent does can be challenging, and in some instances impossible. The lack of separation is not only high-risk -- it's a dealbreaker. Enterprises are legally required to meet compliance and governance standards, and this lack of a clear boundary discourages AI adoption. So, what can we do to mitigate these risks and (safely) benefit from agentic AI, and encourage its adoption, in business? Or better yet, how can agents reliably work with data?

Workflows as a unifying language and bridge for enterprises and agentic AI

The answer lies in transparency. Intuitive, low-code workflows bring in that transparency, acting as a clear separation between agents and your data. Workflows force agents to interact with tools, not directly with the data. While agentic systems are powerful because they can reason with minimal human input, workflows rein in that power and build trust by setting a defined, structured path for how these agentic systems can operate. Workflows bring control, clarity, and repeatability to dynamic and uncertain systems.

1. Workflows allow for auditability

Because workflows are visual in nature, each step, and each potential failure point, is more visible. The decision-making process is more clearly documented. The outputs are controllable and explainable. The visual format is also intuitive: it allows teams with varying levels of technical expertise to speak the same language, in contrast to the mess of SQL, Python, or other code that other solutions may come with. This makes debugging and monitoring much more straightforward for enterprise teams.

2. Workflows allow for trustworthy guardrails and reusability

Workflows reduce risk because they define what data and tools agentic systems can access, and in what level of detail. Decision-makers can define this explicitly, company-wide. Additionally, once these approvals and logic have been set, workflows allow for reusability and scalability. Enterprises can reuse these validated blueprints and implement workflows in other parts of the business without reinventing the wheel, or at the very least treat them as a reliable starting point for other projects.

3. Workflows allow for governance and accountability

Workflows enforce guardrails, observability, and accountability. Because they are the clear separation between data and AI (remember: what the agent knows versus what the agent does), enterprises retain complete governance. Organizations can protect data, monitor data access, and audit data lineage. Put simply: workflows make sure agentic AI uses your data properly... and doesn't abuse it!

Agentic AI is undeniably valuable in the enterprise context. Even with these risks, there doesn't have to be a tradeoff between transparency and complexity. By enforcing workflows as the safety layer for your agentic work, you allow for visual, modular, and governable ways to build intelligent agents that enterprises can trust and scale. Again: you don't give agents access to your data. You give agents access to your tools, which keeps your data protected from attacks or misuse. Agentic AI is not limited by workflows. Rather, these systems have more "freedom" to do cool things when operating within the data-safe boundaries of well-defined workflows.
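As a rough illustration of this tool-mediated pattern, here is a minimal Python sketch of a workflow layer that exposes only approved tools, filters what a tool may return, and writes an audit trail for every call. The names and the in-memory "datastore" are invented assumptions, not any particular workflow product.

```python
from datetime import datetime, timezone

AUDIT_LOG = []

# Decision-makers approve, per workflow, which tools exist and what each may expose.
APPROVED_TOOLS = {
    "lookup_customer": {"allowed_fields": {"name", "tier"}},
}


def lookup_customer(customer_id: str, fields: set) -> dict:
    """Tool: returns only approved fields, never the raw record."""
    record = {"name": "Acme Corp", "tier": "gold", "ssn": "123-45-6789"}  # stand-in datastore
    allowed = APPROVED_TOOLS["lookup_customer"]["allowed_fields"]
    return {k: v for k, v in record.items() if k in fields & allowed}


def run_tool(agent_id: str, tool: str, **kwargs) -> dict:
    """Workflow layer: checks the tool is approved and records an audit trail,
    so "Why did the agent do that?" has a traceable answer."""
    if tool not in APPROVED_TOOLS:
        raise PermissionError(f"{tool} is not approved for this workflow")
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "args": kwargs,
    })
    return {"lookup_customer": lookup_customer}[tool](**kwargs)


# The agent asks for more than it should; the workflow returns only what was approved.
print(run_tool("agent-42", "lookup_customer", customer_id="c-1", fields={"name", "ssn"}))
print(AUDIT_LOG[0]["tool"])
```

The point of the sketch is the separation: the agent supplies intent, while the workflow decides what is permitted and remembers why.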
And as a sidenote: this is nicely in line with newer trends of providing agents with sets of skills instead of detailed instructions on how to use hundreds of tools. As Pat Boone once said, "Freedom isn't the absence of boundaries. It's the ability to operate successfully and happily within boundaries."
[3]
Why agentic AI pilots stall - and how to fix them
Addressing a critical inflection point in enterprise AI - agentic AI adoption

Agentic AI is the latest buzz in boardrooms. Unlike generative AI tools, agentic AI systems act as autonomous agents that can reason, make decisions, and act across workflows to achieve goals. Done right, they promise to reduce manual work and unlock new levels of productivity. But many early adopters of AI tools are struggling. Pilot projects stumble, costs escalate, and results fail to match expectations. The problem isn't that agentic AI is overhyped; it is that businesses are moving too fast without the strategy, infrastructure, and data foundations required to make it work as intended. And this isn't surprising when you consider that 80% to 90% of all enterprise data is unstructured, based on multiple analyst reports in recent years.

As someone who has built platforms through multiple waves of 'intelligent automation,' I've seen the same patterns repeat firsthand: technology alone doesn't transform organizations; alignment, governance, and cultural readiness do. The real breakthrough comes when innovation is grounded in trust and connected to business outcomes.

Where conventional AI might sort invoices, an agentic AI could approve payments, flag anomalies, and update compliance systems. That leap demands a contextual understanding of how data, processes, and rules fit together. Too many organizations are treating agentic AI as a bolt-on upgrade, as if these systems were simply more advanced chatbots. The reality is more complex: agentic AI needs to be woven into the enterprise fabric, connected to the right data and workflows, and supported by governance. Without that foundation, autonomy quickly becomes chaos.

One of the biggest stumbling blocks is infrastructure. Many enterprises still run on siloed content repositories, legacy systems, and fragmented integrations. In these environments, agentic AI can't access the full range of unstructured data it needs to perform at its best. In government, for example, content and processes are spread across different agencies, often in decades-old applications. Asking an AI agent to make decisions without integrating those systems is like asking it to assemble a puzzle with half the pieces missing. Preparing for agentic AI requires investing in cloud-native foundations and interoperable content platforms that unify information and enable seamless connections across applications. Without this groundwork, agentic AI risks acting on partial or outdated information, and making flawed decisions as a result.

Even with the right systems in place, poor data quality is a critical flaw. Agentic AI thrives on complete, accurate, and governed information. If datasets are inconsistent or scattered, agents can't make sound decisions. Healthcare illustrates this challenge clearly. An agent supporting clinicians must pull from medical histories, lab results, and imaging data in real time. If one piece is missing or misaligned, the recommendations these systems produce could be flawed. The lesson for early adopters is clear: start with a data audit and gain a firm understanding of where your unstructured data is. Know what you have, where it lives, and how it's governed before handing decision-making power to AI.

Another misconception is that agentic AI removes people from the loop. In reality, the most effective early use cases blend autonomy with oversight. Take financial services.
Agentic AI may verify documents and draft compliance reports, but humans still make the final call on high-risk cases, or on how to proceed when a document is flagged by an agent. This balance accelerates workflows without eroding trust and accountability. Strong governance must be embedded from the outset, covering regulation, ethics, and operational control. Without it, these agents risk amplifying bias, undermining trust, and exposing organizations to compliance failures.

The experiences of early adopters reveal three clear lessons. First, projects work best when they begin with a clear business outcome, not a fascination with the technology or jumping on a trend. Organizations that take time to define the processes they want to improve and the results they need to achieve are the ones seeing value. Second, they invest early in the groundwork. Modern infrastructure and clean data may not grab headlines, but they are essential to making the headline-grabbing innovations possible. And finally, they treat autonomy as something to scale gradually. The most effective implementations begin with human-in-the-loop models and only expand to greater autonomy once confidence and maturity grow. This approach builds trust in the technology while maintaining accountability.

These early lessons are already shaping a picture of maturity. As agentic AI matures, it will move beyond isolated experiments and towards interconnected systems. The real breakthrough will come from agentic AI networks coordinating across workflows. In a hospital, for example, one agent might surface patient histories, another manage scheduling, and a third flag billing issues, all contributing to a shared context that supports clinicians.

Proof points will become non-negotiable. Businesses will expect agents to show their work: the data they used, the reasoning they followed, and the compliance checks they applied. Without this transparency, agentic AI won't be trusted to handle sensitive or high-value work. And the technology landscape itself will have to open up. Organizations will want the flexibility to integrate agentic AI powered by different models, switch providers as needs evolve, and scale across hybrid or multi-cloud environments. Flexibility and interoperability will be essential to protect long-term investments.

Far from failing, agentic AI is in its adolescence. Just as cloud computing went through a difficult transition phase before proving indispensable, agents too will require a period of adjustment. The organizations that succeed will be those that prepare best, not those that adopt the fastest. By aligning strategy, modernizing infrastructure, cleaning data, and embedding governance, enterprises can move from experimentation to transformation. With the right foundations, agentic AI can do far more than just automate tasks. It will enable genuinely intelligent systems that reshape how work gets done - and that could be the most significant shift in enterprise technology for a generation.
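To make the human-in-the-loop balance above concrete, here is a minimal Python sketch of an approval gate in the spirit of the financial-services example. The risk scoring and the threshold are invented assumptions for illustration, not a method described by the author.

```python
RISK_THRESHOLD = 0.7  # above this score, a human makes the final call


def agent_verify(document: dict) -> float:
    """Stand-in for the agent's first-pass document check; returns a risk score in [0, 1]."""
    return 0.9 if document.get("amount", 0) > 10_000 else 0.2


def process_document(document: dict) -> str:
    """The agent drafts and verifies; flagged or high-risk cases escalate to a person."""
    score = agent_verify(document)
    if score >= RISK_THRESHOLD or document.get("flagged", False):
        return f"escalated to human review (risk={score:.2f})"
    return f"auto-approved by agent (risk={score:.2f})"


print(process_document({"id": "inv-1", "amount": 450}))                  # auto-approved
print(process_document({"id": "inv-2", "amount": 50_000}))               # escalated
print(process_document({"id": "inv-3", "amount": 90, "flagged": True}))  # escalated
```

Scaling autonomy gradually then amounts to raising the threshold, or shrinking the set of flag conditions, as confidence grows.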
[4]
Why agentic AI became the breakout trend of mid-2025: By Mayuri Jain
If 2024 was the year generative AI proved it could talk, mid-2025 was when agentic AI proved it could do. The market stopped obsessing over clever outputs and started demanding completed workflows. That single shift explains why AI agents became the loudest, fastest-moving trend across enterprise AI adoption.

"The promise is not conversation. The promise is completed work."

The evidence showed up in hard numbers, not just product launches. A leading research index reported that 78% of organizations used AI in 2024, up sharply from the prior year, signaling a broad base ready to absorb the next abstraction layer. Another widely cited enterprise survey found a stark execution gap: adoption success rose to 80% with a formal strategy, but fell to 37% without one. In other words, the constraint was no longer model access. It was operating discipline.

From chat to action - what changed

For most leaders, the early phase of AI felt like an interface upgrade. People asked questions. Systems answered. Useful, yes, but bounded. Agentic AI changed the unit of value from "an answer" to "a result." That change happened because three building blocks matured at once:

* Tool use became practical in mainstream stacks.
* Orchestration patterns hardened into reusable architecture.
* Evaluation became a production requirement, not a research hobby.

A prominent enterprise survey reported that 23% of respondents were already scaling agentic systems, and another 39% were experimenting. That is not a niche. That is an early majority signal. The best mental model is simple. Traditional copilots assist a person. Agents coordinate work across systems. That includes searching, filing, updating records, routing tickets, and triggering downstream actions.

The new definition of "value" - outcomes, not demos

If you want a quick gut-check for whether a project is truly agentic, ask one question: "Does it reliably finish the last mile?" Most enterprise pilots died in the last mile. They produced drafts, summaries, or recommendations, then handed the messy work to humans. Agents aim to remove that handoff, or at least compress it into approval.

This is why "agent washing" became a real complaint. A senior technical leader described a wave of products calling themselves agents, despite behaving like chatbots with a new label. The market's response was predictable: buyers raised the bar from novelty to proof. That is also why the most credible mid-2025 narratives emphasized measurable operational results, not marketing adjectives.

"An agent without accountability is a demo. An agent with accountability is a system."

The agent stack - orchestration, MCP protocol, and evals

Agents are not one model plus a prompt. They are a stack. If you treat them like a feature, you get brittle behavior. If you treat them like software, you get compounding capability.

"The winning teams build an agent like a product, not like a prompt."

In practice, the stack has four layers. Mid-2025 was when the "tool layer" became the loudest bottleneck. People realized that capability was stranded without integration.

Tool integration is now the bottleneck

Enter the rise of standardized agent-to-tool patterns, with the MCP protocol frequently discussed as a practical way to connect agents to real enterprise services. Technical guidance from a major model lab described how agents scale better by writing code to call tools, instead of repeatedly injecting tool definitions into prompts.
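As a rough illustration of what such a repeatable tool contract can look like, here is a minimal Python sketch: each tool declares its name, input schema, and required permissions once, and every agent calls it through one generic path. The shape imitates the spirit of MCP-style tool definitions, but the classes and names are invented and are not the actual MCP specification.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ToolContract:
    """One declaration per tool: name, inputs, and required permissions."""
    name: str
    description: str
    input_schema: dict           # JSON-Schema-like description of arguments
    scopes: list                 # permissions the tool requires
    handler: Callable[..., dict]


def update_ticket(ticket_id: str, status: str) -> dict:
    return {"ticket_id": ticket_id, "status": status}


REGISTRY = {
    "update_ticket": ToolContract(
        name="update_ticket",
        description="Set the status of a service ticket",
        input_schema={"ticket_id": {"type": "string"}, "status": {"type": "string"}},
        scopes=["tickets:write"],
        handler=update_ticket,
    ),
}


def invoke(tool_name: str, granted_scopes: set, **kwargs) -> dict:
    """One generic call path for every tool, instead of a custom integration per agent."""
    tool = REGISTRY[tool_name]
    if not set(tool.scopes) <= granted_scopes:
        raise PermissionError(f"agent lacks the scopes required by {tool_name}")
    return tool.handler(**kwargs)


print(invoke("update_ticket", {"tickets:write"}, ticket_id="T-1", status="resolved"))
```

With a contract like this, adding the hundredth tool is a registry entry rather than another bespoke integration project.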
Separately, a major developer platform framed MCP as an emerging de facto integration method, while stressing that tooling and API programs still mattered as much as the protocol itself. This matters because enterprises do not run on one system. They run on hundreds. Without a repeatable tool contract, every agent becomes a custom integration project. That is a slow path. So the "latest perspective" from serious builders was not "which model is best." It was "how do we standardize safe action across our stack."

Evals move from research to operations

The second shift was cultural. Teams stopped treating evaluation as a one-time benchmark. They started treating it as continuous quality control. Production-grade AI agents fail in new ways:

* Tool calls can break silently.
* Retrieval can drift as content changes.
* Autonomy can amplify small errors into large consequences.

That is why evaluation frameworks moved closer to what SRE teams already do: define success metrics, test edge cases, monitor regressions, and enforce change control. One major reason this shift accelerated is executive expectation. A workplace trend report found that leaders increasingly expect teams to redesign processes, build multi-agent systems, and manage hybrid teams of people and agents. When leadership expects a new operating model, governance and evals become table stakes.

"Evals are the seatbelt. Autonomy is the accelerator."

Trust is the product - governance, security, and accountability

As agents gain autonomy, trust stops being a slogan and becomes a design constraint. The market is moving from "cool" to "controlled."

"If you cannot audit it, you cannot scale it."

Here is the key difference between classic automation and agentic AI. Classic automation is deterministic. Agents are probabilistic. That does not mean they are unsafe. It means they require a different control plane. Several 2025 data points underline why governance is rising:

* A 2025 governance survey found 59% of organizations had established a role or office tasked with AI governance.
* A responsible AI survey reported 61% of respondents were at strategic or embedded maturity stages for responsible AI.
* Public discussion increasingly highlighted gaps between ambition and operational readiness.

Why "agentic" multiplies risk surfaces

Agents create new risk surfaces because they connect and act:

* They can touch multiple systems in one flow.
* They can store credentials or tokens.
* They can be manipulated through tool outputs, not just prompts.

Recent reporting on vulnerabilities in an MCP server ecosystem highlighted how security issues can emerge when components are combined, even if each looks safe alone. This is not a reason to pause adoption. It is a reason to design for containment. The safest organizations adopt a few habits early:

* Assume every tool output is untrusted input.
* Scope agent permissions by job role and task.
* Log every action with human-readable rationale.
* Build an approval step for irreversible operations.

A practical control plane for autonomous work

Governance does not need to be slow. It needs to be explicit. A control plane for AI agents should answer five questions. If you can answer those questions, you can scale. If you cannot, you are gambling with operational credibility.

"The best agent is not the smartest. It is the most accountable."

Where ROI is real - the workflows that scale first

The ROI conversation matured in 2025. Leaders stopped asking, "Can it do it?" and started asking, "Can it do it every day?"
That shift favors boring, high-frequency workflows.

"Repetition is where agents earn trust."

A 2025 enterprise spending analysis estimated $37B in generative AI spend in 2025, with a large share going to application-layer products. More spend means more scrutiny. Scrutiny means ROI must be defensible. So where does value show up first?

High-frequency, low-regret automation

These are workflows with clear inputs, repeatable steps, and reversible outcomes:

* Triage and routing for service operations
* Knowledge base updates and hygiene
* Data enrichment and CRM cleanup
* Scheduling, follow-ups, and status reporting

The pattern is consistent. Start with work that humans do reluctantly, but consistently. That is where autonomy is least controversial and most measurable. Separately, agents are also emerging in commerce contexts, with industry efforts to set rules for "agentic commerce" and trusted checkout flows. Even that domain signals the same truth: trust rules must evolve with capability.

Knowledge work that finally gets operational

The second ROI zone is knowledge work that used to be "too fuzzy" to automate. Agents help by turning fuzzy tasks into structured steps:

* Research to shortlist to decision memo
* Draft to review to publish
* Incident to diagnosis to remediation runbook

A crucial nuance: humans still own risk. Agents can do first-pass work, then escalate. That hybrid mode is often the winning adoption path.

"Agents win when humans set intent and verify outcomes."

A 90-day playbook to deploy agentic AI safely

Speed matters, but sequence matters more. The fastest teams are not reckless. They are structured.

"Move fast, but instrument everything."

Here is a pragmatic 90-day plan that aligns enterprise AI adoption with AI governance.

Days 1-30 - pick the right wedge

* Choose one workflow with high volume and clear success criteria.
* Map the tools it touches and the permissions required.
* Define failure states and escalation paths.
* Establish a baseline with manual metrics.

The goal is not autonomy on day one. The goal is a reliable loop.

Days 31-60 - build the reliability loop

* Implement evals that match real tasks, not generic benchmarks.
* Add monitoring for tool failures, latency, and drift.
* Create an approval step for irreversible actions.
* Log actions for audit and learning.

This is where teams separate "agentic theater" from production behavior (see the sketch below).

Days 61-90 - scale with guardrails

* Expand to adjacent workflows that share tools and patterns.
* Standardize integration using a protocol approach, where appropriate.
* Formalize governance roles, even if lightweight.
* Train users on when to trust and when to override.

A simple heuristic helps: autonomy expands in proportion to observability.

"Scale is earned. It is not declared."

The bold prediction

By mid-2026, competitive advantage will shift from "having models" to "running an agent operating system," where orchestration, MCP-style tool contracts, and evals are managed like core infrastructure. Organizations that treat agents as products will outpace those treating them as features.
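As an illustration of the reliability loop from days 31-60, here is a minimal Python sketch combining task-level evals, an action log, and an approval gate for irreversible operations. Every name, case, and threshold is an invented assumption, not part of the playbook itself.

```python
from typing import Callable

ACTION_LOG = []                                   # every action, for audit and learning
IRREVERSIBLE = {"delete_record", "send_payment"}  # operations that need a human


def execute(action: str, approved_by: str = None) -> str:
    """Approval gate: irreversible operations are blocked until a human signs off."""
    if action in IRREVERSIBLE and approved_by is None:
        ACTION_LOG.append({"action": action, "result": "blocked"})
        return "blocked: irreversible action requires human approval"
    ACTION_LOG.append({"action": action, "result": "done", "approved_by": approved_by})
    return "done"


def eval_pass_rate(agent: Callable[[str], str], cases: list) -> float:
    """Evals that match real tasks: the fraction of task/expectation pairs the agent passes."""
    return sum(expected in agent(task) for task, expected in cases) / len(cases)


# Usage: a toy "agent" scored against two real-task cases, then gated actions.
toy_agent = lambda task: "resolved" if "ticket" in task else "unknown"
print(eval_pass_rate(toy_agent, [("close ticket T-1", "resolved"),
                                 ("refund order O-2", "refunded")]))  # 0.5
print(execute("update_record"))                                       # done
print(execute("send_payment"))                                        # blocked
print(execute("send_payment", approved_by="ops-lead"))                # done
```

The heuristic from the playbook maps directly onto this loop: the pass rate and the action log are the observability that earns each expansion of autonomy.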
[5]
Governance Helps Agentic AI Move Faster Inside Companies | PYMNTS.com
A new report from Harvard Business Review Analytic Services finds that enthusiasm for agentic AI is running well ahead of organizational readiness. Most executives expect agentic AI to transform their businesses, and many believe it will become standard across their industries. Early adopters are already seeing gains in productivity and decision-making. Yet for most organizations, real-world use remains limited. Only a minority are using agentic AI at scale, according to the report, and many struggle to translate high expectations into consistent business results.

The gap is not about belief in the technology but about preparation. The report shows that data foundations are improving, but governance, workforce skills and clear measures of success lag behind. Few organizations have defined what success looks like or how to manage risk when AI systems act with greater autonomy. Leaders that are making progress tend to focus on practical use cases, invest in workforce readiness, and tie agentic AI efforts directly to business strategy. The report concludes that agentic AI can deliver meaningful value, but only for organizations willing to rethink processes, invest in people, and put strong guardrails in place before scaling. "The gap between expectation and reality remains wide," the report reads. "Organizational readiness can help bridge the gap by giving implementation a better chance of succeeding."

Singapore Standards

Governance can also be mandated. According to Computer Weekly, Singapore has introduced what it describes as the world's first formal governance framework designed specifically for agentic AI. Announced by the country's minister for digital development and information at the World Economic Forum in Davos, the framework is intended to help organizations deploy AI agents that can plan, decide and act with limited human input. Developed by the Infocomm Media Development Authority (IMDA), the framework builds on Singapore's earlier AI governance efforts but shifts the focus from generative AI to systems that can take real-world actions, such as updating databases or processing payments. The goal is to balance productivity gains with safeguards against new operational and security risks.

The framework lays out practical steps for enterprises, including setting clear limits on how much autonomy AI agents have, defining when human approval is required and monitoring systems throughout their lifecycle. It also highlights risks such as unauthorized actions and automation bias, where people place too much trust in systems that have worked well in the past. Industry leaders welcomed the move, saying clear rules are needed as agentic AI begins to influence decisions with real-world consequences. IMDA has positioned the framework as a living document and is inviting feedback from companies as it continues to refine guidance for testing and oversight.

Identity Factors

Another report warns that enterprises are racing ahead with agentic AI adoption while falling behind on governance and security. Executives from Accenture and Okta say most companies already use AI agents across everyday business tasks, but very few have put effective oversight in place. According to Okta, while more than nine in ten organizations are using AI agents, only a small fraction believe they have strong governance strategies. Accenture's research points to the same imbalance, showing widespread use of AI agents without clear plans for managing the risks they introduce.
The core challenge, the report argues, is that AI agents are increasingly acting like digital employees without being managed as such. These agents need access to systems, data, and workflows to be useful, which creates new risks if their identities and permissions are not clearly defined. The authors recommend treating AI agents as formal digital identities, with clear rules around authentication, access, monitoring and lifecycle management. Without this structure, organizations risk creating unmanaged "identity sprawl" that could turn agentic AI from a productivity gain into a major security and compliance problem. "Agents need their own identity," the report says. "Once you accept that, everything else flows -- access control, governance, auditing and compliance."
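To make the "agents as digital identities" recommendation concrete, here is a minimal Python sketch of an agent identity directory with an accountable owner, scoped permissions, and a lifecycle state. The fields and functions are invented for illustration and do not reflect Okta's or Accenture's actual products.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentIdentity:
    """An agent managed like an employee account: owned, scoped, and revocable."""
    agent_id: str
    owner: str          # the human team accountable for this agent
    scopes: set         # what the agent may touch, defined up front
    active: bool = True
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


DIRECTORY = {}  # a central registry prevents unmanaged "identity sprawl"


def register(identity: AgentIdentity) -> None:
    DIRECTORY[identity.agent_id] = identity


def authorize(agent_id: str, scope: str) -> bool:
    """Authentication and access in one check: known, active, and in scope."""
    ident = DIRECTORY.get(agent_id)
    return bool(ident and ident.active and scope in ident.scopes)


def deactivate(agent_id: str) -> None:
    """Lifecycle management: retire an agent the way you would offboard an employee."""
    DIRECTORY[agent_id].active = False


register(AgentIdentity("agent-finance-01", "finance-ops", {"invoices:read"}))
print(authorize("agent-finance-01", "invoices:read"))   # True
print(authorize("agent-finance-01", "payments:write"))  # False
deactivate("agent-finance-01")
print(authorize("agent-finance-01", "invoices:read"))   # False
```

Once every agent is a directory entry, the rest of the report's recommendations (access control, auditing, compliance) become queries and policies over that directory.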
[6]
Agentic AI Breaks Out of the Lab and Into the Org Chart
Agentic AI isn't a futuristic concept anymore. In just a few months, more companies have shifted from testing AI to letting it do the work, changing what "fast" and "competitive" mean in product, operations and decision making.

Artificial intelligence has moved rapidly from experimentation to execution inside large enterprises. New PYMNTS Intelligence data from the latest CAIO Report shows that firms have effectively settled the debate over the use of agentic AI. What is changing now is how much authority companies are willing to give these systems and how quickly they are putting them to work. Across industries, executives are shifting from cautious interest to active deployment, with implications for how they build products, serve customers and make operational decisions. The findings point to a market that has crossed a threshold, where trust, adoption and scale are advancing in tandem rather than sequentially.

The Rise of Agentic AI Trust Shift

In just three months, resistance to granting AI systems real autonomy fell sharply. In August 2025, nearly all surveyed firms refused to give agentic AI any meaningful authority. By November, that stance had softened considerably, with nearly 40% of product leaders now willing to allow some level of autonomous access. The technology sector is driving this change, with more than half of firms open to agentic autonomy and nearly one-third prepared to grant full execution rights across functions. This shift reflects a broader recalibration of risk, where the cost of inaction increasingly outweighs concerns about control.

Agentic AI Interest Surge

Interest in agentic AI has intensified across every core product function. By November, more than 86% of chief product officers reported a strong interest in using autonomous agents for customer and user experience research, up sharply from August. Product lifecycle management emerged as the top use case, with nearly 90% expressing high interest. The breadth of this demand signals that firms no longer view agentic AI as a niche efficiency tool. Instead, they increasingly see it as a foundational capability that can support decision-making from early research through post-launch analysis.

The Widening Action Gap

The share of companies merely exploring agentic AI is shrinking, while active use is rising. In August, more than half of firms said they were only considering the technology. By November, that figure had dropped to 30%. At the same time, nearly one-quarter of companies reported they were either piloting or fully using agentic AI. This shift suggests a widening divide between organizations moving quickly to operationalize AI and those that remain stalled at the evaluation stage, with speed becoming a competitive differentiator.

Universal Demand for Agentic AI

Agentic AI adoption is converging around a standard set of use cases across industries rather than fragmenting by sector. Interest levels for core functions such as customer research, product lifecycle management and reporting rarely fall below 70%, regardless of whether firms operate in technology, goods or services. This pattern points to the emergence of a universal AI playbook, in which companies expect autonomous systems to support the entire product stack rather than just isolated tasks. The implication is that vendors and platforms must deliver breadth, not just depth.

The Mainstream Adoption Leap

Agentic AI has crossed into the physical economy.
Goods and manufacturing firms, which reported virtually no usage in August, moved to nearly 20% active pilots by November. Services firms saw adoption jump fivefold over the same period, while technology companies extended their lead. This rapid uptake across traditionally slower-moving sectors indicates that AI-driven automation is no longer confined to digital-first businesses. The gap between digital and physical industries is narrowing as agentic systems become embedded in everyday operations.
[7]
Closing the Control Gap: Mohar V on How Agentic AI is Redefining Enterprise Work
Artificial intelligence is quietly changing its personality. For years, AI has waited for instructions, responding only when asked. Now, a new kind of AI is stepping forward, one that can plan, decide, and act on its own. This shift, known as agentic AI, is reshaping how companies get work done.

In a recent episode of the Analytics Insight podcast, host Priya Dialani spoke with Mohar V, Co-Founder of TECHVED Consulting, about why this moment feels different. According to Mohar, businesses have hit a wall. Digital systems are everywhere, customers expect instant responses, and teams are stretched thin. Traditional AI helped, but only up to a point: it could automate individual tasks, but it lacked the ability to control the overall workflow. Agentic AI goes a step further. Once it is set up with an objective, the system starts working on its own, developing the procedures it needs along the way.
Despite heavy investment in agentic AI, 60% of companies report minimal returns due to inadequate governance and data infrastructure. Leaders achieve five times the revenue gains by prioritizing foundational work over rapid deployment. Singapore introduces the world's first formal AI governance framework as enterprises struggle to balance autonomy with accountability.
Agentic AI has moved beyond the experimental phase into the operational core of businesses, but the results reveal a troubling divide. According to research from Boston Consulting Group, 60% of companies report minimal revenue and cost gains despite substantial investment in enterprise AI [1]. Meanwhile, leaders in agentic AI adoption achieved five times the revenue increases and three times the cost reductions compared to laggards [1]. The difference isn't spending or model selection; it's the foundational data infrastructure and governance that separate success from failure.
A mid-sized organization could easily run 4,000 agents, each making decisions that affect revenue, compliance, and customer experience [1]. AI agents in the enterprise are independently handling end-to-end processes across lead generation, supply chain optimization, customer support, and financial reconciliation. The transformation toward an agent-driven enterprise is inevitable, but most businesses and their underlying infrastructure are not prepared for this shift. Early adopters have found unlocking AI initiatives at scale to be extremely challenging, creating what some experts call an era of agentic chaos [1].
The path to scaling agentic AI is blocked by three fundamental risks that undermine trust and operational reliability. First, most AI agents today lack transparency in decision-making because they rely on LLMs as planners rather than predefined logic [2]. Their actions are based on likelihood derived from vast datasets, not knowledge, making it difficult to answer the critical question: "Why did the agent do that?" [2]. Without a clear audit trail, enterprises face time-consuming "prompt forensics" that is ineffective and unscalable.

Second, agentic AI is not deterministic, meaning identical tasks could yield different actions [2]. Autonomous agents could hallucinate actions based on what seems plausible but is actually wrong. This lack of consistency is particularly high-risk for financial systems or anything touching personal data, where data leakage is unacceptable. There is often no built-in layer to enforce or constrain what an agent can or cannot do, creating serious security risks [2].

Third, enterprise agentic AI breaks traditional boundaries between data and logic. In conventional systems, IT teams know where data is stored and how it's accessed, with explicit rules governing its use. Agentic systems blend reasoning, knowledge, and actions into an opaque process, making it challenging to draw a clear line between what information the agent accesses and what it does [2]. This lack of separation discourages adoption because enterprises are legally required to meet compliance standards.

One of the biggest stumbling blocks is infrastructure, particularly when 80% to 90% of all enterprise data is unstructured [3]. Many enterprises still run on siloed content repositories, legacy systems, and fragmented integrations. In these environments, agentic AI cannot access the full data it needs to perform optimally. Asking an AI agent to make decisions without integrating those systems is like asking it to assemble a puzzle with half the pieces missing [3].
Poor data quality represents another critical flaw. Agentic AI thrives on complete, accurate, and governed information. If datasets are inconsistent or scattered, agents cannot make sound decisions. Healthcare illustrates this challenge clearly: an agent supporting clinicians must pull from medical histories, lab results, and imaging data in real time [3]. If one piece is missing or misaligned, the recommendations could be flawed. The lesson for early adopters is clear: start with a data audit before handing decision-making power to AI.

Tool integration has become the loudest bottleneck in mid-2025 [4]. Without a repeatable tool contract, every agent becomes a custom integration project. Enterprises do not run on one system; they run on hundreds. Standardized agent-to-tool patterns and orchestration patterns are emerging as practical ways to connect agents to real enterprise services, but capability remains stranded without proper integration [4].

Singapore has introduced what it describes as the world's first formal AI governance framework designed specifically for agentic AI [5]. Announced at the World Economic Forum in Davos, the framework is intended to help organizations deploy AI agents that can plan, decide, and act with limited human oversight. Developed by the Infocomm Media Development Authority, the framework builds on Singapore's earlier AI governance efforts but shifts focus to systems that can take real-world actions, such as updating databases or processing payments.

The framework lays out practical steps for enterprises, including setting clear limits on how much autonomy AI agents have, defining when human approval is required, and monitoring systems throughout their lifecycle [5]. It also highlights risks such as unauthorized actions and automation bias, where people place too much trust in systems that have worked well in the past. Industry leaders welcomed the move, saying clear rules are needed as agentic AI begins to influence decisions with real-world consequences.
Intuitive, low-code workflows are emerging as a critical solution to these risks by acting as a clear separation between agents and data [2]. Workflows force agents to interact with tools, not directly with data, bringing control, clarity, and repeatability to dynamic systems. The visual nature of workflows makes each step and potential failure point more visible, allowing for better accountability and transparency [2]. This approach doesn't limit the intelligence of agentic AI but acts as a safe layer that makes it possible to operationalize these systems at scale.

Another critical development involves treating AI agents as formal digital identities. According to research from Accenture and Okta, while more than nine in ten organizations are using AI agents, only a small fraction believe they have strong governance strategies [5]. The core challenge is that AI agents are increasingly acting like digital employees without being managed as such. Experts recommend treating agents as formal digital identities with clear rules around authentication, access, monitoring, and lifecycle management. Without this structure, organizations risk creating unmanaged "identity sprawl" that could turn agentic AI from a productivity gain into a major security and compliance problem [5].

A Harvard Business Review Analytic Services report finds that enthusiasm for agentic AI is running well ahead of organizational readiness [5]. Most executives expect agentic AI to transform their businesses, and many believe it will become standard across their industries. Early adopters are already seeing gains in productivity and decision-making. Yet for most organizations, real-world use remains limited. Only a minority are using agentic AI at scale, and many struggle to translate high expectations into consistent business outcomes [5].

The experiences of early adopters reveal three clear lessons. First, projects work best when they begin with clear business outcomes, not fascination with technology [3]. Organizations that define the processes they want to improve and the results they need to achieve are the ones seeing value. Second, they invest early in the groundwork: modern infrastructure and clean data may not grab headlines, but they are essential to making innovations possible. Finally, they treat autonomy as something to scale gradually, beginning with human-in-the-loop models and only expanding to greater autonomy once confidence and maturity grow [3].

If 2024 was the year generative AI proved it could talk, mid-2025 was when agentic AI proved it could do [4]. A leading research index reported that 78% of organizations used AI in 2024, signaling a broad base ready to absorb the next abstraction layer. Another enterprise survey found a stark execution gap: adoption success rose to 80% with a formal strategy but fell to 37% without one [4]. The constraint is no longer model access; it's operating discipline, governance, and the ability to measure success effectively. The most effective implementations balance autonomy with oversight, accelerating workflows without eroding trust and accountability [3].
Summarized by Navi