11 Sources
[1]
Enabling agent-first process redesign
With technology budgets for AI expected to increase more than 70% over the next two years, AI agents, powered by generative AI, are poised to fundamentally transform organizations and achieve results beyond traditional automation. These initiatives have the potential to produce significant performance gains while shifting humans toward higher-value work.

But unlocking that potential requires redesigning processes around agents rather than bolting them onto fragmented legacy workflows using traditional optimization methods. Companies must become agent-first. In an agent-first enterprise, AI systems operate processes while humans set goals, define policy constraints, and handle exceptions. "You need to shift the operating model to humans as governors and agents as operators," says Scott Rodgers, global chief architect and U.S. CTO of the Deloitte Microsoft Technology Practice.

AI is advancing so quickly that static approaches to task automation will likely produce only incremental gains. Because legacy processes aren't built for autonomous systems, AI agents require machine-readable process definitions, explicit policy constraints, and structured data flows, according to Rodgers. Further complicating matters, many organizations don't understand the full economic drivers of their business, such as cost to serve and per-transaction costs. As a result, they have trouble prioritizing the agents that could create the most value and instead focus on flashy pilots.

To achieve structural change, executives should think differently: companies must orchestrate outcomes faster than competitors. "The real risk isn't that AI won't work -- it's that competitors will redesign their operating models while you're still piloting agents and copilots," says Rodgers. "Nonlinear gains come when companies create agent-centric workflows with human governance and adaptive orchestration."

Routine and repetitive tasks are increasingly handled automatically, freeing employees to focus on higher-value, creative, and strategic work. This shift improves operational efficiency, fosters stronger collaboration, and generates faster decision-making -- helping organizations modernize the workplace without sacrificing enterprise security.
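To make the idea of machine-readable process definitions concrete, here is a minimal sketch in Python of how a workflow step and its policy constraints might be declared so an agent runtime can check a proposed action before executing it. The step, the constraint fields, and the refund example are illustrative assumptions, not Deloitte's actual framework.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyConstraint:
    """An explicit, machine-checkable limit on what an agent may do."""
    name: str
    max_transaction_value: float | None = None
    requires_human_approval: bool = False

@dataclass
class ProcessStep:
    """One step of a workflow, declared in a form an agent can read."""
    action: str                       # e.g., "issue_refund" (illustrative)
    inputs: list[str] = field(default_factory=list)
    constraints: list[PolicyConstraint] = field(default_factory=list)

def is_permitted(step: ProcessStep, value: float) -> bool:
    """Check a proposed action against the step's declared constraints."""
    for c in step.constraints:
        if c.requires_human_approval:
            return False  # route to a human governor instead
        if c.max_transaction_value is not None and value > c.max_transaction_value:
            return False
    return True

refund = ProcessStep(
    action="issue_refund",
    inputs=["order_id", "amount"],
    constraints=[PolicyConstraint("refund_cap", max_transaction_value=500.0)],
)
assert is_permitted(refund, 120.0)        # within policy: the agent may act
assert not is_permitted(refund, 5_000.0)  # exceeds the cap: escalate to a human
```

The point of the sketch is that the policy lives in data, not in prose: an orchestrator can evaluate it before every action, which is what lets humans act as governors while agents operate.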
[2]
As models converge, the enterprise edge in AI shifts to governed data and the platforms that control it
As frontier models converge, the advantage in enterprise AI is moving away from the model and toward the data it can safely access. For most enterprises, that advantage lives in unstructured data: the contracts, case files, product specifications, and internal knowledge. For enterprise leaders, the question is no longer which model to use, but which platform governs the content those models are allowed to reason over. "It's not what the model does anymore, it's the enterprise's own unstructured data -- their content, how it's organized, how it's governed, and how it's made accessible to the AI," says Yash Bhavnani, head of AI at Box. "The organizations that will lead in AI are the ones that built the governance infrastructure to make any model trustworthy, with the right permissions in place, the right content accessible, and a clear audit trail for every action taken," says Ben Kus, CTO of Box.

Enterprise AI must be grounded in secure systems of record

As the advantage in AI shifts from models to governed content, systems of record are becoming the foundation that makes enterprise AI trustworthy. Employees use frontier models to summarize documents, draft reports, and answer questions, but when those tools are disconnected from authoritative internal repositories, the results are difficult to trust, impossible to audit, and potentially dangerous. AI that cannot trace its outputs back to a governed source of record becomes a liability. "It's not a theoretical concern," Bhavnani says. "For an insurance enterprise using AI to analyze client claims, low accuracy is simply not acceptable, and untraceable output can't be acted upon." Systems of record provide authoritative, version-controlled content with permissions and compliance controls already embedded, and RAG pipelines retrieve data from live repositories at inference time, connecting responses directly to current, traceable sources. Without integration into systems of record, employees build their own workarounds, content gets duplicated across tools that don't talk to each other, and shadow knowledge stores accumulate outside the visibility of IT and compliance teams. "Customers tell us employees are uploading sensitive documents to personal accounts and running their own AI workflows, with no visibility from the enterprise into what is being shared or what is being generated," he says. "It's not just a security risk, it's an organizational one."

Permission-aware access is a requirement for agentic AI

As AI moves into agentic territory, executing multi-step tasks autonomously across documents, workflows, and enterprise systems, the risk profile changes entirely. Agents act faster than humans, often without the contextual judgment needed to decide what data they should access, making permission-aware access essential. "An AI platform without permissions-aware access is too dangerous to use," Kus says. "It's a precondition for safe enterprise AI deployment, and the more it appears to have been added after the fact rather than built into the foundation, the more it should concern the enterprise considering it." In regulated industries, frameworks like HIPAA, FedRAMP High, and SOC 2 demand audit trails, policy enforcement, and demonstrable controls over who and what has accessed sensitive data. "The audit trail should cover not only the source files but the AI session that used them, and accessed only with the same controls and the same encryption mechanism," Kus says.
"We don't want customers to end up with a compliance breach because the agent was looking at sensitive data and the agent records got stored somewhere unexpected." Content platforms are evolving into AI control planes Enterprise content platforms are evolving from repositories into orchestration layers -- an AI control plane that sits between models, agents, and enterprise data. Rather than just storing documents, the platform governs how content is accessed, routes it to the right reasoning engine, enforces permissions, and maintains a complete audit trail of every action. "An AI-ready content platform needs to support human navigation and use in the way platforms always have, and it needs its own AI agents that understand the platform's data structures deeply enough to get the best out of them," Kus says. "It also needs to be open enough that any external agent can reach into it. An open agent ecosystem is the future of how these platforms will work." When content, permissions, audit trails, and application access are all handled by the same platform, governance stays attached to the content itself. More than any capability of the models on top of it, a unified governance layer is what allows enterprise AI to scale safely. Turning unstructured content into structured intelligence Unstructured data has long been a sticking point for organizations, which had to build specialized models to handle every subtype of unstructured data. "What's changed is that general-purpose large language models now bring enough intelligence to extract structured data from unstructured content without that level of bespoke investment," Kus says. "Box Extract applies this capability at scale, automatically pulling key information from contracts, forms, claims, and reports and applying it as structured metadata within Box. The content that previously had to be read by a person to yield its value can now be processed, structured, and made queryable across an entire repository." And once that data is extracted and operational logic lives in the system, users can visualize, search, and act on that extracted information through custom dashboards and no-code tools. Box Agents take this further by enabling multi-step reasoning and task execution grounded directly in enterprise content, with persistent sessions that support iterative knowledge work with simple, natural language direction. And because agent sessions in Box are persistent, the work is not lost between interactions. The practical result is that end-to-end workflows that previously required human coordination across multiple systems can be orchestrated directly on systems of record. "When those workflows are built on Box agents and automation operating directly on governed content, the handoffs become automated, the audit trail is built in, and the system of record remains the authoritative source throughout," Bhavani says. "Nothing falls through the cracks between systems, because there is only one system." The enterprises seeing real returns are not the ones that simply plugged in a frontier model and waited for results. They are the ones that connected AI to their systems of record, governed what it can access, and built the operational layer that makes its outputs trustworthy enough to use at scale. Platforms that bring together content management, security, automation, and AI integration in a single layer are emerging as the foundation for enterprise AI, because model capability alone is not enough. 
Without governance built into the platform, the gaps between systems become the point of failure.
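As an illustration of the permission-aware retrieval pattern this piece describes, the sketch below filters retrieved documents by the caller's entitlements at inference time and writes an audit record for every access. The in-memory index, ACLs, and audit trail are stand-ins for a real content platform's APIs, not Box's actual interface.

```python
# A minimal sketch of permission-aware RAG retrieval, using in-memory
# stand-ins for a real platform's search index, ACLs, and audit log.

DOCS = [
    {"doc_id": "claim-001", "text": "Claim approved for $1,200.", "readers": {"alice"}},
    {"doc_id": "claim-002", "text": "Claim pending review.", "readers": {"alice", "bob"}},
]
AUDIT_TRAIL: list[dict] = []

def retrieve_for_user(query: str, user_id: str, k: int = 5) -> list[dict]:
    """Return only documents the caller may read, and log every access."""
    matches = [d for d in DOCS if query.lower() in d["text"].lower()]
    permitted = [d for d in matches if user_id in d["readers"]]
    for doc in permitted[:k]:
        AUDIT_TRAIL.append({"user": user_id, "doc": doc["doc_id"], "query": query})
    return permitted[:k]

docs = retrieve_for_user("claim", "bob")
print([d["doc_id"] for d in docs])  # ['claim-002'] -- bob never sees claim-001
print(AUDIT_TRAIL)                  # every retrieval is traceable after the fact
```

Because the permission check and the audit write happen at retrieval time, the model can only ground its answer in content the caller was entitled to see, and every output remains traceable to its sources.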
[3]
Why most agentic AI projects fail, and how to avoid being one of them
Data quality, governance and integration determine whether agentic systems scale successfully

As businesses grow accustomed to generative AI tools, attention is quickly turning to agentic AI. These systems are designed to plan tasks, interpret information and take action within defined guardrails. In theory, this moves AI from a tool that assists employees to one that helps run parts of the business. Investment is rising fast, with McKinsey predicting that the agentic AI market will rise from roughly $5-7 billion in 2024 to over $199 billion by 2034. But many businesses are finding it harder than expected to turn early pilots into something reliable and useful at scale. Gartner predicts that more than 40% of agentic AI projects will be cancelled by the end of 2027. Meanwhile, Qlik found that 97% of organizations have committed budget to agentic AI, but only 18% are fully deploying it. Many see the potential, yet practical deployment still proves difficult when systems are expected to operate reliably in real business environments.

When AI starts acting inside workflows

Early generative AI tools largely acted as assistants. Employees used them to answer questions, summarize documents or draft content. If the response was slightly wrong, the impact was usually limited. Agentic systems operate differently. They can interpret signals, recommend next steps and carry out tasks across enterprise systems. In practice, this might involve identifying unusual changes in financial performance, triggering a supply chain adjustment or initiating an operational workflow. Once AI interacts directly with business processes, the margin for error becomes much smaller. A generative AI recommendation can be reviewed before action is taken, but an automated workflow requires far greater confidence in the information and logic behind it. This is where many businesses discover their underlying data foundations are not ready.

Fixing the data foundations first

The most common reason agentic AI projects stall is a lack of data maturity. Agents depend on a consistent and trusted view of information across the organization, yet many businesses still operate with fragmented data, duplicated sources and unclear ownership. In these conditions, even the strongest AI models struggle to produce outputs that teams can comfortably rely on. Unstructured information adds another layer of complexity. Internal documents, emails and knowledge bases often contain useful context but rarely have clear ownership. That makes it difficult to verify whether the information is current, accurate or even still relevant when an AI agent draws on it. As agents begin interacting with operational systems, these weaknesses become more visible. If the information feeding those systems is inconsistent or outdated, the reliability of the agent's outputs quickly comes into question. Strengthening those data foundations is often the first step before agentic AI can be deployed with confidence.

Who is responsible when AI takes action

As agents take on more responsibility, governance becomes a practical issue rather than a theoretical one. Organizations need clear answers to some basic questions. Who owns the data feeding the system? Who signs off on actions an agent takes? And when should a person step in and review a decision? Clear accountability helps teams trust the systems they implement and reduces the risk of mistakes.
It also makes it possible to understand how decisions were reached, which matters when AI outputs affect revenue, compliance or business planning. Regulation can help provide structure here. Europe's AI rules, including the EU AI Act, aim to set expectations around transparency, accountability and risk early in the development of AI systems. While regulation is sometimes seen as slowing innovation, clearer rules can make it easier for organizations to use AI responsibly.

Getting AI tools to work together

Another challenge emerging with agentic AI is the growing number of assistants operating across a business. Most organizations are not relying on a single model or platform. Different teams often use different AI tools depending on their needs, from analytics platforms to internal systems and external assistants. For agents to work effectively in that environment, they need secure ways to access trusted data and interact with other systems. Without that connection, agents operate in isolation and their usefulness quickly becomes limited. This is where shared standards are starting to play a role. Technologies such as the Model Context Protocol (MCP) allow AI assistants to connect with enterprise platforms while keeping access controls and governance in place. Instead of building custom integrations for every tool, organizations can expose data and analytics through consistent interfaces that different assistants can use. As more AI tools enter the workplace, making sure they can work together and access reliable data will become increasingly important. Organizations that plan for this early will find it much easier to scale agentic systems across the business.

Building agentic AI that works

Agentic AI has the potential to completely change how organizations operate for the better. But success depends on preparing the systems underneath first, putting the right data, accountability and controls in place before scaling beyond pilots.
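To show how the MCP approach mentioned above works in practice, here is a minimal sketch of an MCP server built with the protocol's official Python SDK (the `mcp` package). The `quarterly_revenue` tool and its figures are illustrative assumptions; a real deployment would query a governed data source and enforce access controls behind the tool.

```python
# A minimal sketch of exposing enterprise data to AI assistants via MCP,
# using the Model Context Protocol Python SDK. Tool name and data are
# illustrative, not a real system.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("analytics")

@mcp.tool()
def quarterly_revenue(region: str) -> str:
    """Return revenue for a region from a governed, access-controlled source."""
    figures = {"EMEA": "$4.2M", "AMER": "$6.8M"}  # stand-in for a real query
    return figures.get(region, "unknown region")

if __name__ == "__main__":
    mcp.run()  # any MCP-capable assistant can now discover and call this tool
```

The design point is that the organization writes one governed interface instead of a custom integration per assistant: any MCP-capable tool can call it, while access control stays on the server side.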
[4]
AI agents can only be trusted as Junior Engineers
AI agents require strict governance, least privilege, and human oversight

The new generation of agentic AI tools is rewriting how software gets built and managed. As we speak, more autonomous coding assistants, workflow agents, and AI-driven DevOps systems are being embedded across tech stacks at unprecedented speed. Yet, as the pace of adoption accelerates, so too does the risk when oversight lags behind. AI code governance is no longer a compliance afterthought; it's the steering wheel that keeps AI-driven innovation on the road. This isn't theoretical. Reuters reported that organization-wide use of AI in professional services nearly doubled to 40% in 2026. IDC similarly predicts that agentic automation will enhance capabilities in over 40% of enterprise applications. These figures reflect a market transitioning from tentative trials to full operational reliance. The temptation to prioritize speed over safety will only grow, but it is governance that ensures velocity doesn't become volatility.

The December 2025 AWS incident serves as a stark example. Reports suggest that engineers used an internal AI coding agent, Kiro, but misconfigured access controls granted the agent broader permissions than intended, leading to around 13 hours of downtime. Amazon later clarified that the primary cause was user error, a human misconfiguration rather than a technical failure within Kiro, and that the tool usually requires dual human approval before acting. But the takeaway is clear: when you give AI tools the same permissions as senior engineers but none of the judgment, small misconfigurations can become serious incidents very quickly. This incident isn't a warning about AI's dangers so much as a lesson in responsibility. For engineering leaders, AI agents should be seen as extremely fast junior engineers: brilliant at pattern-matching and execution, but lacking judgment, context, and restraint. Governance systems are what ensure these digital juniors contribute safely and productively.

AI should be given the least access

The first rule of safe deployment is least privilege. In the realm of AI agents, unlimited potential should never translate to unlimited access. Agents should have restricted access to data and environments, no more than they need to fulfil a single defined task. Like a graduate software engineer, they must operate within a sandbox. This isolation ensures that the agent can iterate, hallucinate, or fail without bringing down the system. Production access is earned, not given, and only granted after outputs survive a gauntlet of tests, scans and human reviews. If a human junior isn't permitted to push code directly to a live environment without a senior's sign-off, an AI should be held to an even more rigorous standard. Bypassing this review process invites accidental privilege escalation, a quiet killer of code security. By enforcing these boundaries, you prevent a minor logic error from cascading into a critical misconfiguration. In the age of autonomous agents, rigorous oversight is essential to keeping systems safe.

Oversight is essential for AI-generated code

AI agents, while powerful, have inherent limitations that necessitate treating their contributions with caution -- analogous to the level of trust you would give a junior engineer. Their operational model relies heavily on pattern-based association, which means they lack the true system and architectural understanding of a seasoned human developer.
This reliance can lead to unexpected mistakes or the generation of code that is technically functional but introduces unforeseen complexities or security vulnerabilities, as they lack the full context of the system's long-term health and design philosophy. The degree of oversight should scale with autonomy. The more an agent can act without human initiation, the tighter its audit and traceability mechanisms must become. In mature DevOps settings, this means embedding AI logging, version control, and rollback functionality directly into the deployment pipeline, ensuring every AI action can be explained or reversed. This disciplined approach ensures that while AI agents enhance speed and efficiency, they do not compromise the integrity, security, or stability of the production environment, effectively constraining them to a junior engineer role.

Solving the visibility gap

Once multiple teams start using agents, you quickly lose track of where AI-generated code has landed and what it's doing. Without unified oversight, leaders may not know where AI-generated code is deployed, how it interacts with other systems, or whether similar agents are repeating the same flawed process across teams. Central visibility is essential: leaders need a current, portfolio-wide view of where AI-generated code is running, how secure and maintainable it is, which systems carry the most risk, and what to fix first. Modern governance frameworks recommend mapping not just what AI writes or executes, but where and why, allowing early identification of unsafe patterns before they manifest in production.

Governance is the handlebar, not the brakes

The AWS case showed what happens when automation gains authority without equivalent accountability. The next generation of organizations won't avoid AI; they'll pair autonomy with oversight, building clear permission boundaries, enforcing review pipelines, and maintaining cross-organizational visibility. AI code governance does not slow AI innovation down. It gives organizations the control to adopt AI with confidence, focus on the right risks first, and go faster -- responsibly.
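A minimal sketch of the least-privilege principle described above: the agent is granted only the sandbox capabilities a single task needs, and anything touching production is refused until it passes human review. The capability names and the `ScopedAgent` wrapper are illustrative assumptions, not a specific product's API.

```python
# A minimal sketch of least-privilege scoping for a coding agent: it
# receives only what one task needs, and production writes are never
# in the grantable set. All names here are illustrative.

ALLOWED = {"read_repo", "run_tests", "open_pull_request"}  # sandbox-only verbs

class ScopedAgent:
    def __init__(self, task: str, grants: set[str]):
        self.task = task
        self.grants = grants & ALLOWED  # never more than the sandbox allows

    def act(self, capability: str) -> str:
        if capability not in self.grants:
            raise PermissionError(f"{capability!r} not granted for task {self.task!r}")
        return f"executed {capability}"

agent = ScopedAgent("fix-flaky-test", {"read_repo", "run_tests", "deploy_to_prod"})
print(agent.act("run_tests"))  # fine: inside the sandbox
try:
    agent.act("deploy_to_prod")
except PermissionError as e:
    print(e)  # production access is earned via review, never granted by default
```

Note that `deploy_to_prod` is silently dropped at construction time: even a misconfigured grant list cannot widen the agent's reach beyond the sandbox, which is the property that contains small mistakes before they cascade.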
[5]
How to manage the employees that don't clock in
AI is rapidly shifting from a technology organizations experiment with to one they're expected to use. In many businesses, it's already part of day-to-day operations, built into the tools employees depend on and embedded within background systems. What sets this moment apart isn't only the speed at which AI is being adopted, but the extent to which it's becoming fundamental to how employees work. There's plenty of reason for optimism. A recent KPMG study found that among the 85% of organizations already integrating AI into their operations, productivity has increased by an average of 35% following the introduction of AI agents into the workforce. Teams are unlocking new opportunities to accelerate workflows, automate repetitive tasks, and surface insights that previously took far longer to uncover. However, as AI becomes more deeply embedded across the enterprise, organizations must take a more intentional approach to its management. This is especially true when it comes to keeping identities secure, where decisions made today will determine how securely AI can scale in the future.

Securing the AI workforce

So far, most of the conversation has focused on humans using AI. Assistants and copilots that sit alongside employees have dominated headlines, and for good reason. They are changing how people write content, develop code, analyze data, and communicate with others. But that is only part of the story. A quieter shift is underway in which AI is no longer just supporting the workforce, but becoming a distinct part of it. We're in the early stages of autonomous AI agents taking on tasks independently, accessing applications, pulling data, and making decisions with little or no human involvement. While it is tempting to see them simply as the next evolution of assistants, they are something fundamentally different. These agents operate as independent actors inside the environment and should be using their own credentials and permissions, which means they behave far more like digital employees than tools. This shift matters because most organizations are still treating these agents like software, even as they take on responsibilities that look a lot like human work. For example, many AI agents take the easy way out and ask the human to reuse their existing credentials and permissions.

Why identity systems are playing catch-up

For decades, identity and access management (IAM) has been designed around a simple assumption: the primary user is human. Even when organizations extended IAM to cover service accounts and machine identities, those identities were tied to predictable systems performing narrow, repetitive tasks. Autonomous agents disrupt that model. They are adaptive, work through tasks in flexible and non-uniform ways, operate at machine speed, and may touch far more systems than any single employee ever would. Despite this, many environments are trying to squeeze them into frameworks that were never built for independent, decision-making digital workers. A recent 2025 data and AI security research report shows that only 16% of organizations treat AI as its own identity class with dedicated policies. The result is a growing gap between how these agents behave and how their identities are managed, creating blind spots that attackers are ready to exploit.

There is no HR system for AI

That gap begins the moment an organization tries to onboard an autonomous agent. When a new employee joins, HR software triggers identity creation, roles are assigned, access is provisioned, and ownership is clear.
There is a record of who the person is, what they are responsible for, and who manages them. Autonomous agents arrive with none of that structure. They are created by developers, embedded into workflows, or introduced through new platforms, often without any central visibility or consistent process. There is no HR system for AI, no default manager, and no guarantee that anyone is accountable for what that agent can access or do. This is where identity governance must evolve. Organizations need to discover these agents, register them, and give them distinct identities tied to clear business ownership. Every autonomous agent should have a clear owner who understands why it exists, what it is meant to do, and which systems it should touch. Without that foundation, it becomes difficult to answer even basic questions about how many agents exist, who owns them, and whether their access is still justified. Given estimates that nearly 3 in 4 companies plan to deploy agentic AI in the next two years, while just 1 in 5 have a mature governance model for these autonomous agents, according to Deloitte, these challenges are only set to expand.

The challenge of governance at machine speed

Onboarding is only the beginning. Once agents are in the environment, the real difficulty lies in governing what they can do and when. It's easy to focus on securing models or code, but governance is ultimately about managing identities and privileges in line with business intent. If an agent can act on behalf of the organization, its identity should be governed with the same rigor as a human employee's. In many cases, it should be governed even more tightly, as AI agents operate autonomously, continuously, and across trust boundaries at machine speed and scale. That makes over-privileged access particularly dangerous. AI has fundamentally altered the identity security paradigm. Privileged actions are increasingly performed across hybrid ecosystems -- from on-prem and cloud to databases and SaaS -- and organizations have lost the centralized point of control over privileged access they once relied on. Organizations can no longer depend on standing, always-on access. They must shift toward dynamic and ephemeral models. Short-lived credentials, just-in-time access, tightly scoped permissions, and continuous monitoring help ensure agents can complete specific tasks at the moment of action without holding more power than they need. This kind of approach supports innovation while reducing the blast radius if something goes wrong.

Managing offboarding risks

Just as important as onboarding and governance is offboarding. When a human leaves the organization, access is revoked and accounts are closed. With autonomous agents, there is often no clear lifecycle event that triggers that same cleanup. An agent may be retired quietly, replaced by something new, or simply forgotten. If no one is watching, that identity can remain in place with access it no longer needs. An unmanaged agent with lingering privileges becomes an easy target and a hidden entry point into critical systems. Extending discovery and lifecycle processes to identify idle or orphaned agents, and removing them promptly, is essential to keeping the environment clean and reducing long-term risk.

Human oversight is still key

Even in a world of autonomous systems, humans remain central. Every agent should ultimately be tied back to a person or team responsible for its behavior. Sensitive actions should require human approval.
Activity should be clearly visible and auditable so teams can understand not just what happened, but why. Autonomy does not remove accountability. If anything, it raises the bar for oversight, because the pace and scale of machine-driven activity leave less room for error. Organizations that build clear ownership and human-in-the-loop controls into their identity programs will be far better positioned to earn trust in how they use AI.

IAM for an always-on workforce

The future of work isn't simply about humans using AI. It's about a blended workforce in which people and AI-native agents work alongside one another, each contributing to how the organization operates. With 62% of organizations already experimenting with AI agents, that future is rapidly becoming reality. Those that thrive will move beyond viewing autonomous agents as background software and begin managing them as digital employees. They'll establish onboarding processes aligned with HR, implement governance frameworks that can keep pace with machine-speed operations, and enforce offboarding practices that ensure no access points are left exposed. Now is the time to ready identity and access programs for a workforce that doesn't clock in, and to acknowledge that in the era of autonomous AI, identity and authorization extend far beyond people alone.
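As a concrete example of the dynamic, ephemeral credential model described above, the sketch below mints a short-lived, tightly scoped token for a single agent task using the PyJWT library. The claim names, the secret handling, and the five-minute TTL are illustrative choices, not any particular vendor's scheme.

```python
# A minimal sketch of just-in-time, short-lived credentials for an agent,
# using the PyJWT library (pip install PyJWT). Claim names and TTL are
# illustrative assumptions.
import datetime
import jwt

SECRET = "rotate-me"  # in practice, fetched from a secrets manager

def issue_agent_token(agent_id: str, owner: str, scopes: list[str],
                      ttl_seconds: int = 300) -> str:
    """Mint a token tied to a named owner, scoped to one task, expiring fast."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": agent_id,     # the agent's own identity, not a borrowed human one
        "owner": owner,      # the accountable person or team
        "scopes": scopes,    # tightly scoped permissions for this task only
        "iat": now,
        "exp": now + datetime.timedelta(seconds=ttl_seconds),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

token = issue_agent_token("invoice-bot-7", "finance-ops", ["invoices:read"])
print(jwt.decode(token, SECRET, algorithms=["HS256"]))  # expires in 5 minutes
```

Because the token names both the agent and its accountable owner and dies on its own, an orphaned agent loses its access automatically, which is exactly the offboarding failure mode the article warns about.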
[6]
Why Agentic A.I. Deployments Are Failing Before They Scale
Early enterprise deployments show that success in agentic A.I. depends less on tools and more on data, governance and operating model design.

Agentic A.I. is no longer a technology on the horizon. It is being deployed today in live enterprise environments, with real operational consequences. In 2026, the conversation in most boardrooms has already shifted from "should we pay attention to this?" to "how do we move safely and most effectively?"

The vendor landscape is not making these questions easier to answer. Incumbent software companies -- the platforms already embedded in enterprise architecture -- are racing to layer agentic capabilities onto their existing suites, repositioning products many organizations already own. Simultaneously, a new generation of companies built natively on agentic architectures is entering the market, often targeting the same workflows with different approaches. The result is a market that is genuinely moving fast and generating noise in roughly equal measure. In that environment, the promotional narrative tends to dominate. Early wins get amplified. Failure cases stay private. The gap between what vendors are projecting and what enterprises are experiencing in deployment is wider than it should be at this stage of the technology's maturity. Executives are being asked to make significant capital and operating model commitments against a signal-to-noise ratio that is, at best, unfavorable. Drawing on patterns emerging from early enterprise deployments -- about cost structures, risk exposure and operating model redesign -- and what those patterns suggest for organizations at different stages of their journey, this piece attempts to close some of that gap. The evidence base is still maturing, and these observations should be treated as informed early signals rather than settled conclusions. That said, early signals from well-observed deployments are often more useful than waiting for certainty that arrives too late to act on. This analysis is addressed to two audiences: those still weighing their first significant investment, and those already 12 to 24 months into deployment and now working through what the early returns actually look like.

The cost structure is real -- and so is the return

Early deployments point toward a pattern that experienced technology leaders will recognize: upfront costs tend to run higher and less predictably than projected, and returns take longer to materialize. What is less familiar is the nature of the prerequisite investment. This is not primarily a hardware or infrastructure question in the conventional sense. It is an architectural one. The more useful analogy is an operating system. Before agentic A.I. can function reliably, an organization needs to establish the underlying fabric on which agents and humans will work together: the data architecture that agents can navigate and trust, the policy and governance layer that defines what agents are and are not permitted to do, the orchestration layer that sequences and coordinates agent activity and the human interface layer that determines where autonomous execution stops and human judgment begins.
Without this fabric in place, agents are not deployed into a functioning environment -- they are deployed, at best, into silos. The constraint that appears most consistently in early deployments is data readiness. While the evidence is still limited, it is strong enough to be treated as a working hypothesis rather than a proven rule. Agentic systems execute multi-step tasks autonomously across enterprise systems; they require high-quality, structured and accessible data to perform reliably. What early deployments suggest is that fragmented pipelines do not merely slow implementation; they tend to corrupt it. The technology has a way of surfacing data problems faster than it solves business ones. Where deployments have succeeded, some reported figures are striking. Some early adopters report an average return of 171 percent, reaching 192 percent in the U.S., largely driven by reductions in manual processing hours. Those figures should be treated cautiously, as early averages at this stage of a technology's maturity tend to reflect the most favorable deployments, not the median. What is more useful is the underlying pattern: returns appear highly use-case dependent. Customer service automation -- where performance is measurable and failure is immediately visible -- tends to yield faster returns than back-office process automation, where errors can compound quietly before surfacing. Organizations tracking the strongest outcomes tend to share a common profile: defined use cases, measurable baselines and data that was already well governed before agents arrived. Timelines to attributable returns typically range from two to four years for complex, multi-system deployments. Narrower implementations with cleaner data can yield measurable returns within 12 months. Planning assumptions should reflect a portfolio approach: staging use cases by readiness and return profile within a shared architecture or "operating system." The cost items that most frequently surprise organizations in deployment are not the headline technology spend. They are high-frequency API calls to external systems at scale; custom connectors to legacy systems never designed for autonomous interaction; and the ongoing operational cost of agent monitoring and incident response. These are recurring costs that grow with deployment breadth, and they are worth modeling explicitly from the outset rather than treating them as implementation details. An estimated 40 percent of agentic A.I. projects will be canceled by the end of 2027. The primary driver is not technology failure but preparation failure: organizations that begin deployment before data, governance and operating model questions are resolved are building on an unstable foundation.

Risk exposure has new dimensions

Agentic A.I. introduces a category of risk that static A.I. tools do not: runtime risk. Because agentic systems act autonomously, the consequences of failure are operational rather than merely analytical. A generative A.I. model producing a flawed output requires a human to act on it before harm occurs. An agentic system can act on it independently, at speed and across multiple systems simultaneously. The risk categories that security researchers and early enterprise deployments are beginning to identify include agent hijacking, unauthorized API or data access, data exfiltration and process loops that can escalate to denial-of-service conditions within internal systems.
Some of these remain more theoretical than observed in practice; others are already documented in security research and beginning to appear in enterprise incident reporting. The direction of travel is clear enough to warrant proactive design, even where empirical evidence is still accumulating. Prompt injection -- the manipulation of an agent's behavior through crafted inputs -- is the most accessible attack vector for internal bad actors. An employee with system access and harmful intent does not need sophisticated technical capability; they need only understand how the agent processes instructions. Illustrative examples include triggering unauthorized financial transactions or accessing and exfiltrating sensitive records through a legitimately credentialed agent. The security architecture must treat agentic systems as it would any external-facing application: input validation, privilege separation and comprehensive audit logging are baseline requirements, not enhancements. The governance gap is the most underreported risk in current enterprise deployments. While many organizations report deploying A.I. agents -- McKinsey's 2025 State of A.I. survey found that 62 percent of organizations are at least experimenting with agents -- few report adequate governance and visibility into agent behavior. Organizations cannot govern what they cannot see. Full observability -- such that every action, every decision path and every external system call is captured -- is not an aspirational goal for mature deployments. It is a prerequisite for any deployment. If your current instrumentation does not meet that standard, addressing it is the highest-priority technical debt you carry.

The operating model problem

The technology decisions in agentic A.I. deployment are, in most cases, the easier ones. The harder work is redesigning how organizations structure work, accountability and talent around systems that act autonomously. The most useful framing: prior A.I. tools augmented individual human decisions. Agentic A.I. executes processes. The unit of analysis shifts from the decision to the workflow, and accountability frameworks built around human decision-makers do not transfer cleanly. Every workflow handed to an agent team requires explicit answers to questions that previously had implicit answers: who is accountable when the agent is wrong? What constitutes an error requiring human escalation? At what transaction value or risk threshold does autonomous execution require a human gate? The pattern that appears to distinguish more productive early deployments is a deliberate choice to begin with the highest-volume, most rule-governed workflows, not the most visible ones. High-volume, rule-governed processes offer faster learning cycles, lower-stakes failure environments and clearer performance baselines. The operating model lessons from a well-run claims processing deployment tend to transfer to the next use case. Those from a failed attempt to automate strategic planning typically do not. Workforce implications are real and already evident. Approximately 45 percent of firms with high agentic A.I. adoption rates anticipate reductions in middle management within the first 36 months. The mechanism is straightforward: as agent teams execute tasks previously requiring coordination layers, the managerial overhead of those layers declines. What receives less attention is that the transition is rarely clean.
Organizations that reduce management capacity before agents are operating reliably create accountability vacuums -- nobody is watching the agent, and nobody is responsible when it fails. The sequencing matters as much as the decision itself. The talent requirement is shifting from task specialists to orchestrators -- people capable of designing, directing and overseeing teams of agents to accomplish complex objectives. This is a genuinely new skill profile, sitting at the intersection of domain expertise, systems thinking and A.I. fluency. Critically, it is not primarily a technology role. The most effective orchestrators in early deployments have been people who deeply understand the business process being automated, not those who most deeply understand the model architecture doing the automating.

For organizations already in deployment

For those past the decision stage, early operational experience is pointing to several pressure points that are worth examining against your current state, with the caveat that the deployments informing these observations are still limited in number and maturity. Governance visibility tends to be the first gap that surfaces under load. The observability tooling adequate for a pilot often becomes inadequate when agent breadth expands across departments or use cases. The cost of building observability retroactively is considerably higher than designing it from the start, particularly in an agentic context, where the "operating system" that governs agent behavior needs to be fully instrumented to be trusted. If your current deployment does not give you full visibility into every agent action, every decision path and every external system call, that is the gap to close before expanding further. A second pattern concerns use case selection in the second wave. First deployments are frequently chosen for visibility -- executive sponsorship, proof-of-concept appeal, a high-profile process that tells a good story internally. Second deployments tend to benefit from being chosen for operational criteria instead: highest transaction volume, most rule-governed process, cleanest and most consistent data. The compounding effect of a well-chosen second deployment on organizational confidence and governance maturity is significant, and the reverse also appears to be true. A third observation: vendor relationships structured for a pilot are often not structured for scale. The conversations worth having now, before they become urgent, concern observability tooling capabilities, support SLAs for autonomous execution failures and contractual liability when an agent takes a costly wrong action. These are not likely to be standard terms in most current vendor agreements.

What determines the outcome

The enterprises generating the clearest early returns share a pattern, albeit from a still-limited dataset. They established the agentic operating fabric -- architecture, governance layer, policy boundaries -- before deploying agents into it, rather than attempting to construct it around agents already in flight. They chose use cases for operational clarity rather than strategic visibility. And they defined what success looked like before deployment, which meant they could actually tell whether they had achieved it. The technology will continue to advance, and deployment costs will decline. The organizations that will lead are not necessarily those that moved earliest, but those that moved with the right foundations in place.
The decisions made in the next 24 months -- about architecture, governance design, operating model and talent -- are likely to be more consequential than the technology choices themselves. The early signal from deployments, imperfect as it is, is consistent: agentic A.I. rewards preparation far more than speed.

David Stokes is a former Senior Executive, EMEA and Chief Executive, UK at IBM. He's now a Strategic Advisor at Quant, a pioneer in Agentic A.I., which develops cutting-edge digital employee technology.
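To ground the full-observability prerequisite this piece describes -- every agent action, decision path and external system call captured -- here is a minimal Python sketch in which a decorator records each call an agent makes, including its outcome. The agent and action names are hypothetical, and a real deployment would ship these records to a log store rather than a list.

```python
# A minimal sketch of full observability for agent actions: every call is
# recorded with inputs and outcome so it can later be explained or reversed.
import json, time, uuid

AUDIT_LOG: list[str] = []

def observed(agent_id: str):
    """Decorator that logs every call an agent makes, success or failure."""
    def wrap(fn):
        def inner(*args, **kwargs):
            event = {"id": str(uuid.uuid4()), "agent": agent_id,
                     "ts": time.time(), "action": fn.__name__, "args": repr(args)}
            try:
                result = fn(*args, **kwargs)
                event["outcome"] = "ok"
                return result
            except Exception as exc:
                event["outcome"] = f"error: {exc}"
                raise
            finally:
                AUDIT_LOG.append(json.dumps(event))  # ship to a log store in practice
        return inner
    return wrap

@observed("claims-agent-1")
def adjust_claim(claim_id: str, amount: float) -> str:
    return f"claim {claim_id} adjusted by {amount}"

adjust_claim("C-42", -150.0)
print(AUDIT_LOG[-1])  # one traceable record per agent action
```

The design point is that instrumentation wraps the action itself, so no code path can act without leaving a record -- the property that makes retroactive observability so much more expensive than building it in from the start.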
[7]
Agentic AI: Transforming industries and tackling the interoperability imperative
Agentic AI growth demands interoperability for effective enterprise adoption

The agentic AI buzz is more than noise. It's a sign of real transformation happening across organizations. Teams in every department are sharing stories of how intelligent agents are reshaping their daily workflows, uncovering insights, and improving their decision-making. From automating routine tasks to enabling strategic thinking, agentic AI is rapidly becoming indispensable. Teams are using this technology as a collaborative partner to resolve incidents, rebalance capacity, and surface the next best action. Salesforce research shows that UK teams are saving 3 to 10 hours per week using AI agents, a tangible productivity gain that results in operational impact. The era of experimentation is giving way to greater adoption, driven by real and measurable benefits. This is not a passing trend, but a fundamental shift in how organizations operate. People and AI now work side by side, opening a new era of productivity. What changes now is scale, and with scale comes a hard question: how will all these AI agents work together? Interoperability is the difference between a clever demo and enterprise-wide efficiency. Without a cohesive strategy, businesses risk fragmented, inefficient, and even conflicting systems.

The rise of agentic AI

Agents plan, decide and act. They coordinate with other agents and humans. Done right, they strengthen teams by removing repetitive work and improving decision accuracy. In a recent industry survey, 93% of IT executives reported plans to implement agentic AI this year. Agentic deployments are extending from edge use cases into ERP, CRM, and service operations. That momentum is already visible in the UK. According to a survey presented at Agentforce London 2025, about 78% of UK organizations have already deployed agentic AI, with another 14% planning adoption within six months. This means a vast majority of businesses are either already running agents in production or actively preparing to integrate autonomous capabilities into core operational systems. Key components of this emerging ecosystem include specialized agents for task execution, orchestration frameworks for coordination, and shared data layers for context and learning. As this architecture evolves, interoperability will determine whether agentic AI fulfils its promise or fragments under its own complexity.

The interoperability challenge

As adoption accelerates, so does the complexity of managing a diverse ecosystem of agents with distinct capabilities, data access levels, and decision logic. Without clear coordination, agents can work at cross-purposes or act on incomplete context. Effective interoperability rests on clear governance frameworks that define roles, responsibilities, and escalation paths; standardized APIs and communication protocols to enable unambiguous data exchange; and observability tools to monitor behavior, detect anomalies, and optimize performance in real time. Together, these elements establish a foundation that helps organizations avoid common pitfalls such as siloed deployments, poor coordination, and insufficient oversight -- issues that erode efficiency and diminish ROI. This operating model is built on four promises that keep agent ecosystems effective: predicting and preventing failures before they happen; unifying data into a single, accurate view; turning signals into immediate, trusted actions; and continuously optimizing resources for cost and sustainability.
Businesses report significant challenges on the road to AI adoption, with skills gaps and data readiness cited among the biggest barriers. However, most leaders believe a positive return on AI investment is achievable within 1 to 3 years. That belief puts pressure on organizations to get interoperability right early rather than treating it as a later optimization.

Integration complexities

Integrating multiple AI agents into a cohesive ecosystem is inherently complex. Conflicts can arise when agents overlap or pursue misaligned goals. Coordination is even harder in dynamic environments where agents must adapt to evolving data, user inputs and priorities. Success requires treating agentic AI as a system of systems instead of a loose collection of bots. That means designing orchestration with a central conductor to assign work, manage conflicts and enforce policy (see the sketch after this article). It means instrumenting everything -- logging every decision, tool call and outcome -- so results are transparent. And it means closing the loop by feeding outcomes back into models to make successes repeatable and failures exceptions. While the initial effort may be significant, the long-term benefits of greater resilience, efficiency, and trust are worth it.

Setting up for success

In the agentic AI era, visibility is everything. Managing modern, complex IT environments requires a 360-degree view of the tech stack. Without it, integrating new technologies with existing systems is nearly impossible. That's why observability platforms, integration hubs, and AI governance tools are indispensable. They provide the infrastructure needed to manage, monitor, and evolve an agentic AI ecosystem with confidence.

What's next for AI agents?

The future of agentic AI is still unfolding. While we can't yet predict the full scale of the AI universe, we do know that as these systems become more autonomous and interconnected, their roles will evolve. Organizations must remain agile, ready to adapt to new capabilities, standards, and risks. Agentic AI is not a passing trend. It represents a foundational shift in how work is done. Leaders who master integration will shape the future of the intelligent enterprise.
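A minimal sketch of the "central conductor" orchestration pattern referenced above: one coordinator assigns work, blocks unauthorized or conflicting assignments, and logs every decision so agents cannot work at cross-purposes. The agent names, tasks, and authorization sets are illustrative assumptions.

```python
# A minimal sketch of central-conductor orchestration: one coordinator
# owns assignment, conflict resolution, and the decision log.

class Conductor:
    def __init__(self):
        self.assignments: dict[str, str] = {}   # task -> owning agent
        self.decision_log: list[str] = []

    def assign(self, task: str, agent: str, allowed_agents: set[str]) -> bool:
        if agent not in allowed_agents:
            self.decision_log.append(f"DENY {task} -> {agent}: not authorized")
            return False
        if task in self.assignments:
            # prevents two agents pursuing the same goal at cross-purposes
            self.decision_log.append(
                f"DENY {task} -> {agent}: already owned by {self.assignments[task]}")
            return False
        self.assignments[task] = agent
        self.decision_log.append(f"ASSIGN {task} -> {agent}")
        return True

c = Conductor()
team = {"ops-agent", "finance-agent"}
c.assign("rebalance-capacity", "ops-agent", team)      # assigned
c.assign("rebalance-capacity", "finance-agent", team)  # denied: conflict
print(c.decision_log)  # every assignment and conflict is transparent
```

Because assignment, policy, and logging sit in one place, the ecosystem behaves as a system of systems rather than a loose collection of bots, which is the interoperability property the article argues for.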
[8]
The leadership dilemma: Governing the "Agentic AI" workforce
Artificial intelligence is no longer a back-office enabler or a set of isolated automation software tools. It is becoming a core component of how organizations operate, compete, and deliver value. As businesses accelerate their adoption of increasingly autonomous systems, often referred to as agentic AI, a significant leadership dilemma is emerging. The workforce is no longer exclusively human. Digital agents capable of making decisions, initiating actions, and influencing outcomes are now woven into the operational fabric of the company. This shift represents far more than a technological upgrade. It is a structural transformation that puts business leaders in uncharted territory. The World Economic Forum's Four Futures framework warns of rising technological fragmentation, declining trust, and widening governance gaps. In this context, the question for leaders is no longer whether to deploy autonomous AI, but how to govern a hybrid workforce of humans and digital agents without introducing systemic risk. For many organizations, this is becoming one of the defining leadership challenges of the decade.

The Rise of the Non-Human Workforce

Agentic AI systems differ from traditional automation in one critical way: they do not merely execute predefined tasks but interpret data, make decisions, and adapt their behavior to context. In many organizations, these systems are already performing functions once reserved for skilled employees: triaging customer requests, optimizing supply chains, generating code, or even making financial recommendations. The productivity gains are undeniable, but so is the complexity. When digital agents act with autonomy, they also introduce new forms of organizational risk. Decisions may be opaque, accountability may be unclear, and the potential for unintended consequences increases dramatically. Leaders must now grapple with a workforce that does not think, behave, or act like humans, and that cannot be governed through traditional management structures. This is where structured identity, access, and behavioral governance become essential.

The Governance Gap: A Growing Leadership Risk

The most significant challenge is not the technology itself, but the governance vacuum surrounding it. Many organizations deploy autonomous systems faster than they establish the controls and guardrails required to manage them. This creates a widening gap between capability and oversight. Several risks are already becoming visible:

1. Accountability gaps: When an AI agent makes a decision that leads to financial loss, regulatory exposure, or reputational harm, who is responsible? Without clear lines of accountability, organizations face legal and ethical uncertainty.

2. Insider-threat-like behavior: Autonomous systems often operate with high levels of privilege and can access sensitive data, trigger workflows, or interact with customers. If misconfigured or compromised, they can behave like highly privileged insider threats, an issue we frequently encounter when assessing digital identity posture.

3. Fragmentation and drift: As organizations deploy multiple AI agents across different functions, the risk of inconsistent behavior, configuration drift, and misaligned objectives increases. Without centralized governance, autonomous systems can evolve in ways that diverge from organizational intent.

4. Erosion of trust: Employees, customers, and regulators are increasingly concerned about how AI systems make decisions.
A lack of transparency and explainability can undermine confidence and impede adoption. AI adoption alone is no longer sufficient. Governance has become the true leadership mandate.

A Governance-First Mindset: The New Leadership Imperative

To navigate this new landscape, business leaders must adopt a governance-first mindset that aligns with the World Economic Forum's call for Digital Trust and systemic resilience. This requires treating agentic AI not as a standalone technology, but as a governed member of the workforce. Several principles should guide this shift.

Establish Clear Accountability Structures

Every AI agent must have an identified human owner responsible for its actions, performance, and outcomes. This includes defining escalation paths, decision boundaries, and audit requirements. Without explicit accountability, organizations risk regulatory exposure and operational ambiguity.

Apply Identity and Access Controls to Digital Agents

Just as employees have identities, permissions, and access levels, so too must AI agents. Leaders should ensure that digital agents are integrated into identity management frameworks with least-privilege access, continuous monitoring, and lifecycle management. This reduces the risk of insider-threat-like behavior and prevents privilege creep -- principles central to our approach to digital workforce governance.

Implement Behavioral Guardrails

Autonomous systems require constraints that define acceptable behavior. These guardrails may include ethical guidelines, operational limits, safety checks, and real-time monitoring. Guardrails ensure that AI agents act within organizational intent and do not drift into unsafe or unintended territory.

Build Oversight and Auditability into the System

Transparency is essential for trust. AI agents must be auditable, explainable, and observable. This includes maintaining logs of decisions, enabling post-incident analysis, and ensuring that humans can intervene when necessary. Oversight is foundational to responsible autonomy.

Foster a Culture of Digital Trust

Governance is more than a technical challenge; it is a cultural one. Leaders must champion a culture that values transparency, accountability, and responsible innovation. This includes educating employees about how AI agents operate, how decisions are made, and how risks are managed. Organizations that succeed here tend to be those that treat governance as a strategic capability, not a compliance burden.

From Liability to Advantage: Building the Hybrid Workforce of the Future

When governed effectively, agentic AI can become a powerful force multiplier. It can enhance productivity, accelerate innovation, and enable organizations to operate with greater agility and precision. But without governance, the same systems can introduce systemic vulnerabilities that undermine resilience. The role of business leaders is to ensure that autonomy does not outpace oversight. By reframing agentic AI as part of the workforce, subject to the same expectations, controls, and accountability as human employees, leaders can transform a potential liability into a strategic advantage. The future of work will be hybrid. The organizations that continue to evolve in 2026 will be those that recognize that governing AI is not a technical task delegated to IT, but a core leadership responsibility.
Leaders who embrace this governance-first approach will not only mitigate risk but also build resilient, high-performing organizations that define the future of the workplace and how businesses function.
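To make the identity-and-access principle concrete, here is a minimal illustrative sketch in Python. Everything in it is hypothetical (the AgentIdentity class, the scope names, the registry pattern) and stands in for whatever identity platform an organization actually uses; the point is that an agent gets a named human owner, explicitly enumerated permissions, a lifecycle expiry, and an audit trail, just as the principles above describe.

from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: least-privilege identity for an AI agent.

@dataclass
class AgentIdentity:
    agent_id: str
    human_owner: str               # every agent has an accountable person
    scopes: set[str]               # explicitly enumerated permissions only
    expires: date                  # lifecycle management: access is time-boxed
    audit_log: list[str] = field(default_factory=list)

    def can(self, scope: str) -> bool:
        """Least privilege: deny anything not explicitly granted, or expired."""
        allowed = scope in self.scopes and date.today() <= self.expires
        self.audit_log.append(
            f"{date.today()} {self.agent_id} {scope}: {'ALLOW' if allowed else 'DENY'}"
        )
        return allowed

# Example: a claims-triage agent may read claims but never approve payouts.
triage_bot = AgentIdentity(
    agent_id="claims-triage-01",
    human_owner="ops.manager@example.com",
    scopes={"claims:read", "tickets:create"},
    expires=date(2026, 6, 30),
)
assert triage_bot.can("claims:read")
assert not triage_bot.can("payments:approve")  # privilege creep is blocked

Every allow and deny decision lands in the audit log, which is what makes post-incident analysis and human intervention, the oversight principle above, possible at all.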
[9]
How CIOs can create a strong foundation for an AI-enabled workplace
As with any new technology, AI adoption among businesses sits on a spectrum, leaving some ahead of the curve and others much further behind as they continue to resist and delay. But what's clear is that adoption is happening with or without a formal strategy, because nearly two-thirds (65%) of employees now say they intentionally use AI for work. This shift is impacting expectations on many levels. It changes what organizations expect from their people, and it changes what people expect from their organizations. Polished-sounding, in-depth output can now be generated in minutes, meaning everyone has the ability to produce more in less time. As managers and organizations increasingly realize that this doesn't always lead to good work, what defines good work is becoming less about speed and more about who can work well alongside AI. That means having the ability to analyze and assess its output and use it to make better human decisions - not replace them. This marks a turning point for CIOs especially. The role that once centered simply on identifying and providing access to new tools to improve efficiency is now increasingly responsible for shaping an environment in which AI tools truly raise the bar.

AI is resetting the performance baseline

AI has, for some time, been accelerating routine and repeatable work across every function, from drafting documents and analyzing data to summarizing meetings and generating code. At first, many employees approached these tools with caution. AI made them faster, but they still treated its output as something to sense-check and refine. Now, as AI becomes more normalized and trusted, that caution can slip. In some cases, speed is no longer paired with scrutiny, and teams rely on confident-sounding outputs that may be incomplete, biased, or wrong if they haven't been properly reviewed. So, while managers are getting used to quicker turnaround and coming to expect it, they may also be receiving work that looks finished but hasn't been validated.

If work is easier to produce across the board, then volume alone becomes a much less reliable indicator of value. What matters more is the ability to work with AI's output, interpreting and analyzing it in context and feeding it into final outputs and decisions rather than relying on it to do that for you. Because of this, every role becomes more technical by default. This new expectation means employees need to be able not just to use AI tools but to use them well and understand their outputs. That includes framing prompts effectively, challenging assumptions, identifying bias, and translating outputs within the right commercial and organizational context. Without leaders prioritizing AI and how to use it correctly, this shift can create divergence. Some teams build confidence quickly, while others feel nervous and hesitate or over-rely on automation, which can result in uneven standards and unnecessary risk. The responsibility for avoiding that fragmentation sits with the CIO.

The foundation is capability, not just tools

The answer isn't simply introducing more technology; in fact, in many ways that may complicate things further. What employees need is better ways of working with existing tools that are embedded across the organization. This starts with being clear about where AI is genuinely helping the business.
Rather than experimenting everywhere at once, organizations need to identify the areas where AI can improve outcomes, whether that's speeding up analysis, reducing manual work, or improving decision-making. Leadership teams play an important role here by setting priorities and making sure AI initiatives stay focused on solving real business challenges rather than chasing the latest trend. But introducing tools alone isn't enough. Employees need practical training on how to use AI well and how to check and interpret its outputs. Without that support, AI risks becoming either underused or over-relied on. In many cases, the most effective approach is building confidence and competence over time through hands-on learning in the flow of work. When employees can experiment, give feedback on what's working, and refine how they use AI in real situations, organizations create a much stronger foundation for long-term progress.

Governance that enables trust and better decisions

If capability enables AI use, governance ensures it is used responsibly and consistently. Without clear guardrails, AI adoption can quickly become fragmented, with employees using different tools, handling data inconsistently, or relying on outputs that haven't been properly checked. In practice, governance means giving employees clear guidance on how AI should be used across the organization. That could include clearly outlining which AI tools or large language models are approved for work, when enterprise or paid versions must be used, and what kinds of data can or cannot be entered into these systems (a minimal illustration appears after this article). It also means making sure teams understand how to handle sensitive information and comply with local regulations. When these boundaries are clear, employees can innovate confidently and leadership can better trust their employees, their tools, and the outputs the two produce together. Without governance, the risk is unchecked, low-value outputs that affect results and increase exposure. The CIO is uniquely placed to align technology, ethics, and responsibility: embedding review mechanisms, defining who owns what, and making sure human judgment sits firmly at the center of it all.

Conclusion

AI is raising the bar across the workplace. The organizations that approach it in the right way build in clear direction on where it should be applied, practical support that helps people use it well, and a governance model that protects the integrity of decisions. For CIOs, the aim is to create an environment where experimentation is encouraged while standards stay high and accountability is clear. When capability and trust are built in tandem, AI becomes a lever for stronger outcomes over time, not just quicker output in the short term. Technology may be redefining how work is produced, but it is leadership that determines whether those higher standards translate into long-term advantage.
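To illustrate the kind of usage guidance described above, here is a minimal, hypothetical policy check in Python. The tool names, tiers, and data classifications are invented for illustration; in practice such rules would live in an identity or governance platform rather than in application code.

# Hypothetical AI usage policy: approved tools, required tiers, data rules.

TIER_RANK = {"free": 0, "standard": 1, "enterprise": 2}

APPROVED_TOOLS = {
    # tool: minimum tier required for work use
    "vendor-llm": "enterprise",
    "internal-assistant": "standard",
}

# Data classifications that may be entered into AI tools.
ALLOWED_DATA = {"public", "internal"}   # "confidential" and "pii" are not

def check_usage(tool: str, tier: str, data_class: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed AI interaction."""
    if tool not in APPROVED_TOOLS:
        return False, f"{tool} is not an approved tool"
    required = APPROVED_TOOLS[tool]
    if TIER_RANK.get(tier, -1) < TIER_RANK[required]:
        return False, f"{tool} requires at least the {required} tier for work use"
    if data_class not in ALLOWED_DATA:
        return False, f"{data_class} data may not be entered into AI tools"
    return True, "ok"

print(check_usage("vendor-llm", "enterprise", "internal"))  # (True, 'ok')
print(check_usage("vendor-llm", "free", "internal"))        # blocked: wrong tier
print(check_usage("vendor-llm", "enterprise", "pii"))       # blocked: data rule

The value of writing the policy down this explicitly, even as a sketch, is that both employees and auditors can see exactly where a boundary sits and why a request was refused.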
[10]
2026: The year enterprise AI finally gets to work
AI agents are finally redefining productivity and operational efficiency across industries

After years of hype, 2026 is shaping up to be the year AI agents finally move from experimental AI tools to trusted digital coworkers embedded across everyday business workflows. Industry forecasts now project that nearly half of enterprise applications will include task-specific AI agents within the next year, driven by breakthroughs in contextual memory, workflow automation, and local, on-device AI. What's changing is not just the intelligence, but the ability of software to move seamlessly from understanding context to taking real, accountable action within the tools where work already happens.

However, trust and security remain a critical issue for widespread adoption. According to Gartner's 2025 research, only approximately 130 of the thousands of vendors claiming to offer agentic AI are delivering real autonomous capabilities. Misleading claims could jeopardize organizations' confidence in implementing agents at scale. Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls. The difference between the failed 40% and successful deployments will come down to the ability to demonstrate business value, advanced security, and strong privacy. If organizations can demonstrate these, we will see increased activation of agents across industries in 2026. Here are five reasons why.

1. Elimination of Operational Drag

AI agents have already begun to handle the drudgery of daily work, increasing efficiency and enabling greater focus on strategic work in enterprises. They remove the small, friction-heavy tasks no one enjoys: finding files, remembering filenames, updating CRMs for salespeople, or writing product requirements documents. This automation of administrative tasks frees up humans to focus on high-value interactions or strategic initiatives.

2. The Convergence of Context and Action

Context closes the utility gap. Current agents fail because they lack deep knowledge of the user. In 2026, context will blend more seamlessly with action. Just as human employees require onboarding to be functional, agents must also be onboarded with historical context to make intelligent decisions. This will allow agents to move beyond simple responses to proactive execution, such as locating existing project documents in Notion before a user even asks. As a result, the workflow shifts from humans creating work to humans approving it, such as an agent opening a Linear help desk ticket and a human providing final approval (sketched in code after this article).

3. Privacy and Security as the Prerequisite for Trust

For an agent to be truly effective, it needs access to a user's private thoughts and history. With cloud-based agents, users withhold data for fear of training leaks and data breaches. By processing locally and keeping data on device, users can safely allow the agent full access to their digital life. This will open up adoption in highly secure and sensitive industries such as government and defense, healthcare, and financial services. For example, hedge funds and VCs can record high-stakes meetings without risking data breaches, and healthcare providers can keep sensitive doctor-patient interactions in HIPAA-compliant environments.
4. Audio-First Revolution

Users will increasingly interact with agents through voice, capturing stream-of-consciousness thoughts on desktop and mobile while walking the dog, cooking, or logging start-of-day and end-of-day actions and thoughts. Agents can then instantly structure these thoughts into formal outputs. Audio context paired with cross-platform execution can translate immediately into actions across third-party platforms: Linear generating and assigning engineering tasks, Notion creating or updating product documentation, Gamma drafting presentations, or Lovable and Devin pushing code prototypes directly from verbal descriptions.

5. Your Agent Becomes Your Central Source of Truth

A productivity tool is a stranger, but your agent is a digital coworker and partner. We have all worked in organizations where one person has a deep understanding of an industry or customer, and everyone has to go to "Jennifer" because she knows all and has all the information we need. With agents serving as your digital twin, every conversation, every meeting note, every Slack message, and every brainstorm is captured, so you don't have to wait for Jennifer to respond. This isn't about cloning personalities but about creating an assistant you've trained to work with you all the time: an AI agent that operates based on your unique perspective, historical decisions, and execution history. It's not just a tool; it's a reflection, a projection, a virtual extension of your professional self.

The future of AI agents and work isn't just about AI doing tasks. It's about AI being personalized to you across business workflows for your specific needs and industry. The question for all of us isn't whether to engage with AI, but how to ensure that when the machine learns, it serves your interests, and that the soul in the machine remains unequivocally yours.
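The shift from humans creating work to humans approving it (point 2 above) can be expressed as a simple human-in-the-loop gate. The sketch below is hypothetical: the ProposedAction structure and help desk example stand in for whatever ticketing or task system an agent is actually wired into.

from dataclasses import dataclass

# Hypothetical human-in-the-loop gate: the agent proposes, a person approves.

@dataclass
class ProposedAction:
    agent_id: str
    description: str
    approved: bool = False

def agent_propose(description: str) -> ProposedAction:
    """The agent drafts the work (e.g., a help desk ticket) but cannot ship it."""
    return ProposedAction(agent_id="helpdesk-agent", description=description)

def human_review(action: ProposedAction, approve: bool) -> None:
    """Final authority stays with a person; nothing executes without sign-off."""
    action.approved = approve

def execute(action: ProposedAction) -> str:
    if not action.approved:
        raise PermissionError("Action requires human approval before execution")
    return f"Executed: {action.description}"

ticket = agent_propose("Create help desk ticket: reset VPN access for new hire")
human_review(ticket, approve=True)
print(execute(ticket))

The design choice worth noting is that execution is impossible without the approval flag; the human checkpoint is structural, not a convention the agent can skip.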
[11]
The pilot phase is over. Here's what's next for enterprise AI automation
Enterprise AI shifts from pilots to orchestrated automation

For years, companies approached new technology cautiously. Teams ran small pilots, tested AI tools in one department, and waited to see if the investment paid off. Budgets were tight, and leaders worried about committing too much too soon, for both financial and organizational reasons. That approach made sense. Large-scale technology deployments carry risk, and incremental experimentation allowed organizations to learn without disrupting the business.

But the pace of innovation in artificial intelligence is beginning to change that model. According to new research, organizations aren't asking if the latest tool, agentic AI, can work -- they're asking how to make it work across the business right now. The conversation has moved from experimentation to execution at an uncommon pace, and that shift is quietly reshaping how work actually gets done. In many organizations, AI is no longer an experimental capability sitting on the edge of operations. It is gradually becoming embedded into the processes that power everyday work.

From experiments to everyday impact

A 2025 deep industry study from MIT found that adoption of Generative AI (GenAI) has exploded. But among the organizations exploring the technology, the number tracking measurable business outcomes remained surprisingly small. In fact, only a tiny fraction of organizations (5%) achieve sustained value, largely because AI tools aren't integrated into core workflows. This "divide" between hype and impact is real. It exists because experimentation and enterprise transformation are fundamentally different beasts. Holding a demo that wows a room is one thing; embedding a capability that changes how work is done every day -- from customer support to engineering -- is another.

Real transformation requires systems to interact with existing infrastructure, data pipelines, and operational processes. It requires teams to rethink workflows, adjust responsibilities, and establish new governance models. In short, it demands organizational change, not just technological adoption. In contrast, the latest benchmarking shows something encouraging: 78% of agentic AI automation projects are already delivering real value. Far from being trapped in pilot limbo, most organizations are seeing progress. That's reassuring at a time when headlines sometimes suggest widespread failure rates. But there's a nuance worth unpacking: the value doesn't automatically equate to deep structural change. In many cases, organizations are still in the early stages of scaling what works.

A growing digital workforce

One of the clearest signs of that change is the rise of agentic AI systems that can handle tasks across departments with minimal supervision. These systems can analyze data, trigger workflows, and make limited decisions based on defined parameters. On average, IT leaders report that their organizations now rely on around 28 of these autonomous or semi-autonomous systems, with plans to grow to 40 within the next year. Larger companies are scaling even faster. This effectively represents the emergence of a new kind of digital workforce. These systems aren't replacing people, but they are taking on repetitive or time-consuming work, freeing employees to focus on strategy, problem-solving, and creativity. Tasks like processing service requests, analyzing operational data, updating systems, or coordinating workflows can increasingly be handled by automated agents.
For teams already stretched thin, this is a transformative helping hand. But with growth comes new challenges. The more systems you deploy, the more coordination, oversight, and governance you need to manage them effectively. If you are planning to hire "digital employees" for tasks, you've also got to be prepared to become a "digital manager". That means tracking performance, ensuring systems interact correctly, and making sure automation aligns with broader business objectives.

Managing growth before it becomes chaos

Rapid adoption can introduce branching complexity. When different teams deploy agentic AI independently, it's easy for systems to operate in silos. Reporting can overlap, processes may conflict, and no one has the full picture. Organizations often refer to this phenomenon as "automation sprawl," and it's a real risk as AI capabilities expand. Without coordination, businesses may end up with dozens of tools performing similar tasks, disconnected workflows, or conflicting automated decisions. What starts as productivity improvement can slowly evolve into operational confusion.

Simply put, the solution is getting organized. Companies need clear frameworks for how these systems are used, who is accountable for outcomes, and how different systems interact. Planning for orchestration upfront saves headaches later and allows businesses to scale with confidence. Increasingly, this means treating automation as a coordinated platform rather than a collection of isolated tools. When agentic systems are designed to work together, they can share data, trigger one another's actions, and support end-to-end processes across the organization (see the sketch at the end of this article). That's where the real productivity gains begin to emerge.

Trust over cost

Interestingly, cost, long the biggest barrier to adoption, is no longer the top concern when it comes to agentic automation. Only 15% of leaders report their budget as a barrier. Today, the focus has shifted to trust. Can agentic AI systems operate safely, predictably, and transparently? Can organizations understand how decisions are made, audit outcomes, and intervene when necessary? Security, oversight, and AI accountability are now the key criteria for adoption, and the larger the enterprise, the greater that concern tends to be. This is especially true in regulated industries, where mistakes can carry significant financial, legal, or reputational consequences. Decision-makers are no longer just asking whether they can adopt the technology. They're asking whether they can adopt it responsibly, at scale, and with full confidence in the outcomes.

Agentic AI for growth

But why are organizations investing so heavily in these capabilities? While efficiency and customer experience remain important drivers, the primary motivation today is speed. Over a third of companies say their top priority is getting new products and services to market faster. This is subtle but significant. Agentic AI has evolved from a back-office efficiency tool into a competitive lever. By streamlining routine work, automating operational processes, and accelerating decision-making, these systems allow teams to move faster. Faster-moving organizations can test ideas more quickly, iterate on products more effectively, and bring new offerings to market ahead of competitors. In fast-moving industries, that advantage can be decisive.

From adoption to orchestration

As organizations expand their AI capabilities, success will depend less on how many tools they deploy and more on how well those tools work together.
Adding more automation alone doesn't guarantee progress. To succeed, C-suite and IT leaders will need to focus on aligning teams, processes, and workflows so that new capabilities reinforce each other rather than operate in silos. Success depends on coordination, transparency, and clear accountability. The technology itself isn't the hardest part -- in many ways, it's never been easier to deploy advanced automation. The real challenge lies in orchestration. Companies that master this coordination will move faster, operate more efficiently, and seize new opportunities. Those that don't risk wasted effort, fragmented systems, and missed potential.
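To make the coordinated-platform idea concrete, here is a minimal event-bus sketch in Python. It is illustrative only: the event names and agents are invented, and real orchestration platforms add scheduling, retries, shared state, and audit trails on top of this basic publish-subscribe pattern.

from collections import defaultdict
from typing import Callable

# Hypothetical event bus: agents coordinate by publishing and subscribing
# to named events instead of running in silos.

class EventBus:
    def __init__(self) -> None:
        self.subscribers = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[event].append(handler)

    def publish(self, event: str, payload: dict) -> None:
        for handler in self.subscribers[event]:
            handler(payload)

bus = EventBus()

# One agent's output becomes another agent's trigger.
def intake_agent(payload: dict) -> None:
    print(f"intake: categorized request {payload['id']}")
    bus.publish("request.categorized", payload)

def fulfillment_agent(payload: dict) -> None:
    print(f"fulfillment: processing request {payload['id']}")

bus.subscribe("request.received", intake_agent)
bus.subscribe("request.categorized", fulfillment_agent)
bus.publish("request.received", {"id": "REQ-1042"})

Because every interaction flows through one bus, the organization gets a single place to observe, log, and govern how its agents hand work to one another, which is precisely what automation sprawl lacks.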
Organizations are rapidly adopting AI agents with technology budgets expected to increase over 70% in two years. Yet Gartner predicts more than 40% of agentic AI projects will be cancelled by 2027. The challenge isn't the technology itself—it's the lack of governance frameworks, data maturity, and identity management systems needed to deploy autonomous agents safely at scale.

AI agents are moving beyond simple copilots that assist employees to autonomous systems that execute multi-step tasks, interact with enterprise systems, and make decisions within defined guardrails. With technology budgets for AI expected to increase more than 70% over the next two years, these autonomous AI agents promise to deliver significant performance gains while shifting humans toward higher-value work [1]. McKinsey predicts the agentic AI market will surge from roughly $5-7 billion in 2024 to over $199 billion by 2034 [3].

Yet this transformation introduces new challenges. Unlike generative AI tools that provide recommendations for human review, agentic systems operate directly within business processes where the margin for error becomes much smaller. When AI starts acting inside workflows—triggering supply chain adjustments, initiating operational tasks, or executing financial decisions—the risk profile changes entirely [3].
Organizations are discovering that unlocking the potential of enterprise AI requires more than deploying advanced models. Despite significant investment, most agentic AI projects struggle to move beyond pilots. Gartner predicts that more than 40% of agentic AI projects will be cancelled by the end of 2027 [3]. Meanwhile, Qlik found that 97% of organizations have committed budget to agentic AI, but only 18% are fully deploying it [3]. The disconnect reveals a critical problem: many businesses lack the governance infrastructure needed to deploy agents safely at scale.

The most common reason agentic AI projects stall is insufficient data maturity. Agents depend on consistent, trusted information across the organization, yet many businesses operate with fragmented data, duplicated sources, and unclear ownership [3]. Without reliable, governed data, even sophisticated models struggle to produce outputs teams can confidently act upon. As Ben Kus, CTO of Box, explains, "The organizations that will lead in AI are the ones that built the governance infrastructure to make any model trustworthy, with the right permissions in place, the right content accessible, and a clear audit trail for every action taken" [2].
Successful deployment demands more than bolting AI agents onto existing systems. Companies must embrace agent-first process redesign, fundamentally rethinking operating models around autonomous systems rather than traditional optimization methods. "You need to shift the operating model to humans as governors and agents as operators," says Scott Rodgers, global chief architect and U.S. CTO of the Deloitte Microsoft Technology Practice [1].

This shift means AI agents require machine-readable process definitions, explicit policy constraints, and structured data flows—capabilities legacy processes weren't built to provide [1]. The real risk isn't that AI won't work, but that competitors will redesign their workflows while others remain stuck piloting assistants. Organizations that achieve nonlinear gains create agent-centric workflows with human governance and adaptive orchestration [1].
As agents become independent actors within enterprise environments, they behave less like software tools and more like digital employees. This creates a fundamental challenge: identity and access management systems were designed around human users, not autonomous agents that adapt, operate at machine speed, and may touch far more systems than any single employee [5].

Only 16% of organizations treat AI as its own identity class with dedicated policies, creating blind spots that increase security risk [5]. Unlike human employees, who arrive through structured HR onboarding processes, agents are created by developers, embedded into workflows, or introduced through platforms—often without central visibility or consistent accountability. Every autonomous agent needs clear ownership tied to someone who understands why it exists, what it should do, and which systems it should access [5].
The December 2025 AWS incident illustrates the consequences of inadequate governance. Engineers used an internal AI coding agent, but misconfigured access controls granted broader permissions than intended, leading to approximately 13 hours of downtime [4]. While Amazon clarified that the primary cause was human error rather than technical failure, the lesson remains clear: when you give AI tools the same permissions as senior engineers but none of the judgment, small misconfigurations become serious incidents [4].

Engineering leaders should view AI agents as extremely fast junior engineers—brilliant at pattern-matching and execution, but lacking judgment, context, and restraint. This requires implementing least-privilege principles, restricting agent access to only what is needed for defined tasks [4]. Human oversight must scale with autonomy: the more an agent can act without human initiation, the tighter its audit and traceability mechanisms must become [4].
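One way to read the principle that oversight must scale with autonomy is as an explicit mapping from an agent's autonomy level to the minimum controls it requires. The sketch below is purely illustrative; the levels and control names are invented rather than drawn from any of the cited sources.

# Hypothetical mapping from autonomy level to minimum required controls.
# Higher autonomy (acting without human initiation) demands tighter oversight.

CONTROLS_BY_AUTONOMY = {
    0: {"logging"},                                        # suggest-only copilot
    1: {"logging", "human_approval"},                      # acts when asked
    2: {"logging", "human_approval", "full_audit_trail"},  # acts on triggers
    3: {"logging", "full_audit_trail", "kill_switch",
        "continuous_monitoring"},                          # acts autonomously
}

def missing_controls(autonomy_level: int, implemented: set[str]) -> set[str]:
    """Return the controls still required before this agent should run."""
    required = CONTROLS_BY_AUTONOMY[autonomy_level]
    return required - implemented

# Example: an autonomous deployment agent with only logging in place.
gaps = missing_controls(3, {"logging"})
print(sorted(gaps))  # ['continuous_monitoring', 'full_audit_trail', 'kill_switch']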
As frontier models converge in capability, the competitive advantage in enterprise AI shifts from the model itself to the data it can safely access. For most enterprises, that advantage lives in unstructured data—contracts, case files, product specifications, and internal knowledge [2]. "It's not what the model does anymore, it's the enterprise's own unstructured data - their content, how it's organized, how it's governed, and how it's made accessible to the AI," says Yash Bhavnani, head of AI at Box [2].
Enterprise content platforms are evolving into AI control planes—orchestration layers that sit between models, agents, and enterprise data. Rather than just storing documents, these platforms govern how content is accessed, route it to the right reasoning engine, enforce permissions, and maintain complete audit trails [2]. Permission-aware access becomes essential as agents execute tasks autonomously across systems, acting faster than humans and often without the contextual judgment needed to decide what data they should access [2].
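A minimal sketch of what permission-aware access can mean in practice: before any content reaches a model, retrieval is filtered by the calling agent's entitlements, and every access is logged. The document store and group model below are hypothetical, illustrating the pattern rather than any particular platform's API.

# Hypothetical permission-aware retrieval: filter content by the caller's
# entitlements before it ever reaches a model, and log every access.

DOCUMENTS = [
    {"id": "doc-1", "text": "Q3 product spec", "allowed": {"eng", "product"}},
    {"id": "doc-2", "text": "M&A legal memo", "allowed": {"legal"}},
]

AUDIT_LOG: list[str] = []

def retrieve(agent_id: str, agent_groups: set[str]) -> list[dict]:
    """Return only documents the agent is entitled to see; audit the access."""
    visible = [d for d in DOCUMENTS if d["allowed"] & agent_groups]
    for doc in visible:
        AUDIT_LOG.append(f"{agent_id} read {doc['id']}")
    return visible

# An engineering agent sees the spec but never the legal memo.
docs = retrieve("eng-agent-7", {"eng"})
print([d["id"] for d in docs])  # ['doc-1']
print(AUDIT_LOG)                # ["eng-agent-7 read doc-1"]

The filtering happens before retrieval results are handed to any model, so the agent cannot reason over content its identity was never entitled to see.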
As agents take on more responsibility, organizations need clear answers to fundamental questions about accountability: Who owns the data feeding the system? Who approves actions an agent takes? When should a person step in to review decisions [3]? Clear accountability helps teams trust deployed systems and reduces the risk of mistakes, especially when AI outputs affect revenue, compliance, or business planning.

Once multiple teams deploy agents, organizations quickly lose track of where AI-generated code has landed and what it's doing. Portfolio-level visibility becomes essential—leaders need a current, organization-wide view of where AI agents operate, which systems carry the most risk, and whether similar agents repeat flawed processes across teams [4]. Without unified oversight and integration across tools, content gets duplicated, shadow knowledge stores accumulate outside IT visibility, and employees build workarounds that create security and organizational risk [2].
With nearly three in four companies planning to deploy agentic AI in the next two years but only one in five having mature governance models, the gap between ambition and readiness continues to widen [5]. Organizations that strengthen data foundations, establish clear ownership structures, implement identity governance for autonomous agents, and build permission-aware systems will position themselves to scale AI safely. Those that continue piloting agents without addressing these fundamental governance challenges risk being left behind as competitors redesign their operating models for the agent-first era. The question for enterprise leaders is no longer whether to adopt AI agents, but whether their governance infrastructure can support the autonomous workforce they're building.
