10 Sources
[1]
Why AI agents failed to take over in 2025 - it's 'a story as old as time,' says Deloitte
Companies that succeeded were thoughtful about implementation. This past year was deemed the year of AI agents by experts and industry leaders alike, with a promise to revolutionize how people work and increase productivity. However, consultancy Deloitte's new Tech Trends report found that these autonomous AI assistants failed to take off, identifying both the obstacles preventing widespread adoption and ways to overcome them.

Every year, Deloitte releases its Tech Trends report, which examines the biggest trends of the past year to help inform enterprise leaders and workers alike about what to look out for in the upcoming year. Naturally, the main topic of the 17th report, like that of the past two years' reports, which ZDNET covered in 2023 and 2024, was AI. With investments in AI at an all-time high this year, the report offered valuable insights on how to maximize the ROI of an agentic strategy.

"Getting beyond the headlines to really focus on the so what, and now what, is the service that Tech Trends tries to do," said Bill Briggs, CTO at Deloitte. "The world's going to continue to advance and evolve, and you can't wait, or you will be left behind."

The emergence of agentic AI technology had enterprise leaders excited about the idea of expanding their workforces and upping productivity with AI assistants. As cited in the report, Gartner predicted that, by 2028, 15% of day-to-day work decisions will be made autonomously by agents, up from 0% in 2024, highlighting the momentum behind the technology.

"In a way, AI, especially three years ago, kind of triggered this wave of excitement, enthusiasm from the C-suite and the board... but it was treated as if it was something separate by itself, and the ceiling was pretty low on the impact and return from that -- and now agents are having that same [moment]," said Briggs.

Deloitte's 2025 Emerging Technology Trends study, which surveyed 500 US tech leaders, found that 30% of the surveyed organizations are exploring agentic options, with 38% piloting solutions and only 14% having solutions ready to deploy. The number of organizations actively using the systems in production is even lower, at 11%. Some companies are not yet close to deploying the technology, with 42% of organizations reporting they are still developing their agentic strategy roadmap and 35% having no strategy in place at all.

This slow rate of deployment is noteworthy because agentic AI has real potential to optimize business operations; however, many companies are not in a position to leverage the technology.

"You have to have the investments in your core systems, enterprise software, legacy systems, SaaS, to have services to consume and be able to actually get any kind of work done, because, at the end of the day, they're [AI agents] still calling the same order systems, pricing systems, finance systems, HR systems, behind the scenes, and most organizations haven't spent to have the hygiene to have them ready to participate," said Briggs.
Obstacles identified by the report included the legacy enterprise systems that many organizations still rely on, which were not designed for agentic AI operations and create bottlenecks in accessing systems, hindering the agents' ability to carry out actions and perform tasks. Similarly, the data architectures of the repositories that feed information to AI agents are not organized in a way that enables the agents to consume it. Deloitte cited a 2025 survey it conducted that found that 48% of organizations identified the searchability of data as a challenge to their AI automation strategy, and 47% cited the reusability of data as an obstacle.

Lastly, organizations often fail to create the proper governance and oversight mechanisms for agentic systems to operate autonomously, as traditional IT governance doesn't account for AI agents' ability to make their own decisions.

"You've got this layer on top, which is the orchestration/agent ops. How do we instrument, measure, put controls and thresholds, so if we got it right, the meter wouldn't be spinning out of control, kind of like we saw with the early days of cloud adoption," said Briggs.

Deloitte identified a pattern among organizations with successful implementations of AI: being thoughtful about how agents are implemented. Business processes were created to fit human needs, not those of AI agents, so the shift to automation means rethinking existing business processes. Rather than just "layer agents onto existing workflows," Deloitte said successful organizations "redesign processes" to take best advantage of AI's agentic capabilities, leaning into agents' ability to tackle a high volume of tasks collaboratively without breaks.

The human element also involves ensuring that employees in the organization are properly trained. According to the report, 93% of AI spend still goes to technology, while only 7% goes to culture change, training, and learning. Briggs said this disproportion is "out of whack, because that's the piece where almost everything is going to fall down." Yet, he said, the lack of focus on training is a "story as old as time" and a repeated pattern seen in many of the tech transformations witnessed during his 30 years in the industry.

Working with AI agents will also raise new questions: Who will manage these AI agents and teams? What will an HR team for these agents look like? Similarly, Microsoft's 2025 Work Trend Index Annual Report explored the concept of a Frontier Firm, an organization with both AI agents and humans working in tandem, and found that humans will eventually lead teams of AI agents, necessitating HR processes for these assistants.

"We've got to rethink most of our HR process in a world where we're going to increasingly have people working with algorithms and agents and robots," said Briggs.
[2]
How to rebuild the enterprise for the Age of Agentic AI
With execution and trust no longer constraints, the question is what leaders choose to build.

I've sat in over fifty AI workshops this year with some of the world's largest, most complex organizations. There's always a moment when the room goes quiet. A team watches one of their own workflows run end-to-end with a single instruction to an AI agent. Their 13-step market commentary process or a claims process that takes six weeks is reduced to minutes. The realization hits fast: agentic AI is no longer a promising capability; it's a real, on-demand collaborator working alongside your team today.

It's thrilling, but it's disorienting. Century-old enterprises weren't built for this velocity. With their rigid processes, governance designed around humans making every decision and leadership playbooks for managing high-cost execution, the enterprise starts to crack. Without a new operating model, even the most promising proofs of concept can't scale.

The urgent question now is how to rebuild the enterprise around agentic AI. This is not a technical project for your CIO. It's a full re-architecture, squarely in the hands of leadership. We've been on the ground as these transformations unfold, and a blueprint is taking shape: redesign how work gets done and build a foundation of trust that allows people and systems to collaborate at scale.

This first shift is about the work itself - what we do and how we do it. The enterprise was designed for a world where execution was the primary constraint. Today, execution is cheap, abundant and instantaneous. Now the constraint is orchestration. Work must flow simply and cleanly across teams. Here's what we see consistently working:

Grab your whiteboard and sticky notes and ask your teams: What is the true outcome of this workflow? Circle the steps that are essential and mark those that exist purely as organizational muscle memory - duplicate requests, redundant checks, legacy forms. You'll find yourself asking, "How did this survive for 20 years?"

Remove organizational drag. Once you see the real flow of work, help it move across teams without friction. The biggest delays aren't inside a task; they're between tasks. Look for the moments where work gets stuck - legal approval for a campaign, financial signoff on a contract. Replace the endless 'check-ins' with a single, accountable owner and let an agent handle the routing.

Agentic AI shifts human value beyond old-school productivity (tasks executed, hours logged). AI can do that work instantly. Roles need to evolve towards directing work. Start by identifying the tasks an agent can take on, then reorient your people around the outcome behind those tasks. That marketing analyst spending 60% of their time pulling data becomes a 'revenue impact strategist,' with agents building dashboards while they guide ad spend or surface insights.

Career ladders assume that depth in a single task equals success. In the agentic AI era, the people who thrive are systems thinkers - those who can design, direct and improve agent-driven workflows. Anchor growth to expanding into new adjacencies, not mastering one domain. Sideways becomes the new upward. Create an environment where a social media manager learning prompt design becomes a content architect or a financial analyst understanding workflow logic pivots into revenue operations.

This is the new radically simple enterprise: more adaptive, more capable, more human in the work that matters most. Redesigning your organization is only half the work. Building trust is the other.
Enterprise governance systems were built for deterministic tools with predictable rules, not agentic systems that interpret context and act. Manual oversight might work for five agents, but it won't work for five hundred. Scaling requires a new paradigm. You can't just comply your way into this. Trust is structural, human and needs to be part of your day-one architecture. This is the critical, second half of the blueprint:

We see it every day - if IT teams can't see what an agent did or why it did it, trust collapses. Put real observability and supervision in place - audit trails and behaviour logs - so every action is traceable and explainable, and business teams can innovate securely, safely and at scale. Be explicit with your vendors: transparency is non-negotiable.

Your responsibility assignment framework falls apart the moment agents take action. Get specific about boundaries - what agents can handle on their own, what needs a human green light and what remains human-only - and map clear escalation paths. Set policy rules, role-based access and permissioning so agents operate in lanes.

Your teams aren't afraid of AI; they're afraid of not knowing how they fit in. Don't turn away from discomfort; face it head-on: what work goes away, what becomes hybrid and what still depends on our uniquely human qualities, such as judgement, creativity and intuition. A five-slide deck won't cut it here. Formalize it. Create a new operating agreement with shared expectations, clear division of labour and a firm definition of what 'good' looks like. Frameworks like the Agentic Compact can help guide your strategy.

When trust is built into your systems from day one, your enterprise moves with AI - not against it. Once execution and trust are no longer barriers, the question becomes: What are you brave enough to build? What felt impossible now feels inevitable. A retailer can stand up a full storefront around a micro-trend in 72 hours. A wealth management firm can deliver personalized investment strategies to millions, not just clients with seven figures. A life sciences organization could compress three years of drug development into months, bringing life-saving medicines to patients faster than ever thought possible.

Teams finally have the space to think bigger, move faster and tackle problems they've been punting for years. Get the foundation right - the leadership, the operating model, the trust - and suddenly the whole enterprise opens up.
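To make the "agents operate in lanes" idea concrete, here is a minimal Python sketch of the pattern described above: role-based permissioning plus an append-only audit trail, so every agent action is checked against policy and logged for later review. All names here (AgentPolicy entries, AuditLog, the sample actions) are illustrative assumptions, not any vendor's actual API.

```python
import datetime
import json

# Illustrative policy: which action categories an agent may perform on its own,
# and which require a human green light. Anything unlisted is blocked.
POLICY = {
    "claims-triage-agent": {
        "autonomous": {"read_claim", "classify_claim"},
        "needs_approval": {"approve_payout"},
    },
}

class AuditLog:
    """Append-only behaviour log so every agent action is traceable."""
    def __init__(self, path="agent_audit.jsonl"):
        self.path = path

    def record(self, agent, action, allowed, reason):
        entry = {
            "ts": datetime.datetime.utcnow().isoformat(),
            "agent": agent,
            "action": action,
            "allowed": allowed,
            "reason": reason,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

def authorize(agent, action, log):
    """Check an action against policy; escalate anything outside the agent's lane."""
    lanes = POLICY.get(agent, {})
    if action in lanes.get("autonomous", set()):
        log.record(agent, action, True, "within autonomous lane")
        return "execute"
    if action in lanes.get("needs_approval", set()):
        log.record(agent, action, False, "escalated for human approval")
        return "escalate"
    log.record(agent, action, False, "outside policy - blocked")
    return "block"

log = AuditLog()
print(authorize("claims-triage-agent", "classify_claim", log))  # execute
print(authorize("claims-triage-agent", "approve_payout", log))  # escalate
print(authorize("claims-triage-agent", "delete_records", log))  # block
```

The point of the sketch is structural: the agent never decides its own boundaries, and the log exists whether or not the action was allowed, which is what makes actions explainable after the fact.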
[3]
Why most enterprise AI coding pilots underperform (Hint: It's not the model)
Gen AI in software engineering has moved well beyond autocomplete. The emerging frontier is agentic coding: AI systems capable of planning changes, executing them across multiple steps and iterating based on feedback. Yet despite the excitement around "AI agents that code," most enterprise deployments underperform. The limiting factor is no longer the model. It's context: the structure, history and intent surrounding the code being changed. In other words, enterprises are now facing a systems design problem: they have not yet engineered the environment these agents operate in.

The shift from assistance to agency

The past year has seen a rapid evolution from assistive coding tools to agentic workflows. Research has begun to formalize what agentic behavior means in practice: the ability to reason across design, testing, execution and validation rather than generate isolated snippets. Work such as dynamic action re-sampling shows that allowing agents to branch, reconsider and revise their own decisions significantly improves outcomes in large, interdependent codebases. At the platform level, providers like GitHub are now building dedicated agent orchestration environments, such as Copilot Agent and Agent HQ, to support multi-agent collaboration inside real enterprise pipelines.

But early field results tell a cautionary story. When organizations introduce agentic tools without addressing workflow and environment, productivity can decline. A randomized control study this year showed that developers who used AI assistance in unchanged workflows completed tasks more slowly, largely due to verification, rework and confusion around intent. The lesson is straightforward: autonomy without orchestration rarely yields efficiency.

Why context engineering is the real unlock

In every unsuccessful deployment I've observed, the failure stemmed from context. When agents lack a structured understanding of a codebase -- specifically its relevant modules, dependency graph, test harness, architectural conventions and change history -- they often generate output that appears correct but is disconnected from reality. Too much information overwhelms the agent; too little forces it to guess. The goal is not to feed the model more tokens. The goal is to determine what should be visible to the agent, when and in what form.

The teams seeing meaningful gains treat context as an engineering surface. They create tooling to snapshot, compact and version the agent's working memory: what is persisted across turns, what is discarded, what is summarized and what is linked instead of inlined. They design deliberation steps rather than prompting sessions. They make the specification a first-class artifact, something reviewable, testable and owned, not a transient chat history. This shift aligns with a broader trend some researchers describe as "specs becoming the new source of truth."
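As a concrete illustration of that "snapshot, compact and version" idea, here is a minimal Python sketch of a versioned working-memory store for a coding agent: small items are persisted verbatim, medium ones summarized, large ones linked instead of inlined. Every name (ContextSnapshot, compact, the size thresholds) is a hypothetical illustration under assumed budgets, not a reference to any particular framework.

```python
import hashlib
import time

class ContextSnapshot:
    """One versioned view of what the agent is allowed to 'see' this turn."""
    def __init__(self, items, parent=None):
        self.items = items            # list of (kind, key, content) tuples
        self.parent = parent          # previous snapshot, kept for replay/audit
        self.version = hashlib.sha1(
            repr([key for _, key, _ in items]).encode()
        ).hexdigest()[:8]
        self.created = time.time()

def compact(items, inline_budget=2000, summarize=None):
    """Keep small items inline, summarize medium ones, link large ones.

    `summarize` would be a model call in practice; here it just truncates.
    """
    summarize = summarize or (lambda text: text[:200] + " ...[summary]")
    compacted = []
    for kind, key, content in items:
        if len(content) <= inline_budget:
            compacted.append((kind, key, content))           # persist verbatim
        elif len(content) <= 10 * inline_budget:
            compacted.append(("summary", key, summarize(content)))
        else:
            compacted.append(("link", key, f"ref://{key}"))  # link, don't inline
    return compacted

# One turn: gather candidate context, compact it, snapshot it.
raw = [
    ("module", "billing/invoice.py", "def total(...): ..."),
    ("tests", "tests/test_invoice.py", "x" * 5_000),    # medium: summarized
    ("history", "git log billing/", "y" * 50_000),      # large: linked
]
snap = ContextSnapshot(compact(raw))
print(snap.version, [(kind, key) for kind, key, _ in snap.items])
```

Because each snapshot is content-addressed and keeps a pointer to its parent, the agent's working memory becomes something a team can diff, replay and review, rather than a transient chat history.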
Workflow must change alongside tooling

But context alone isn't enough. Enterprises must re-architect the workflows around these agents. As McKinsey's 2025 report "One Year of Agentic AI" noted, productivity gains arise not from layering AI onto existing processes but from rethinking the process itself. When teams simply drop an agent into an unaltered workflow, they invite friction: engineers spend more time verifying AI-written code than they would have spent writing it themselves. Agents can only amplify what's already structured: well-tested, modular codebases with clear ownership and documentation. Without those foundations, autonomy becomes chaos.

Security and governance, too, demand a shift in mindset. AI-generated code introduces new forms of risk: unvetted dependencies, subtle license violations and undocumented modules that escape peer review. Mature teams are beginning to integrate agentic activity directly into their CI/CD pipelines, treating agents as autonomous contributors whose work must pass the same static analysis, audit logging and approval gates as any human developer. GitHub's own documentation highlights this trajectory, positioning Copilot Agents not as replacements for engineers but as orchestrated participants in secure, reviewable workflows. The goal isn't to let an AI "write everything," but to ensure that when it acts, it does so inside defined guardrails.

What enterprise decision-makers should focus on now

For technical leaders, the path forward starts with readiness rather than hype. Monoliths with sparse tests rarely yield net gains; agents thrive where tests are authoritative and can drive iterative refinement. This is exactly the loop Anthropic calls out for coding agents. Pilot in tightly scoped domains (test generation, legacy modernization, isolated refactors), and treat each deployment as an experiment with explicit metrics (defect escape rate, PR cycle time, change failure rate, security findings burned down). As your usage grows, treat agents as data infrastructure: every plan, context snapshot, action log and test run is data that composes into a searchable memory of engineering intent, and a durable competitive advantage.

Under the hood, agentic coding is less a tooling problem than a data problem. Every context snapshot, test iteration and code revision becomes a form of structured data that must be stored, indexed and reused. As these agents proliferate, enterprises will find themselves managing an entirely new data layer: one that captures not just what was built, but how it was reasoned about. This shift turns engineering logs into a knowledge graph of intent, decision-making and validation. In time, the organizations that can search and replay this contextual memory will outpace those that still treat code as static text.

The coming year will likely determine whether agentic coding becomes a cornerstone of enterprise development or another inflated promise. The difference will hinge on context engineering: how intelligently teams design the informational substrate their agents rely on. The winners will be those who see autonomy not as magic, but as an extension of disciplined systems design: clear workflows, measurable feedback and rigorous governance.

Bottom line

Platforms are converging on orchestration and guardrails, and research keeps improving context control at inference time. The winners over the next 12 to 24 months won't be the teams with the flashiest model; they'll be the ones that engineer context as an asset and treat workflow as the product. Do that, and autonomy compounds. Skip it, and the review queue does. Context + agent = leverage. Skip the first half, and the rest collapses.

Dhyey Mavani is accelerating generative AI at LinkedIn.
[4]
The next phase of AI is agentic, and it starts with data architecture
AI's next breakthrough isn't bigger models -- it's better architecture.

If you look at the last decade of AI progress, most of it has been measured in a single dimension: bigger models and better benchmarks. That approach worked for a while, but we're now running into the limits of what "bigger" can buy. The next breakthrough isn't about cranking parameters into the billions. It's about the architecture underneath, the part most people don't see but absolutely feel when it isn't working.

That's where agentic AI comes in. Not agents as a buzzword, but as a practical shift in how intelligence is distributed. Instead of one model waiting for a prompt and producing an answer, you get groups of smaller, purpose-built agents that watch what's happening, reason about it, and act. The intelligence is in how they collaborate, not in one giant model doing everything. Once you start thinking about it that way, the conversation shifts from "What can the model do?" to "What does the system let the model do?" And that's all architecture.

Generative AI changed how people interact with software, sure. But the pattern hasn't changed much: question in, answer out, and then everything resets. Agentic systems don't operate like that. They stay alert. They respond to signals you didn't explicitly ask about, like changes in customer behavior, shifts in demand, and little anomalies that usually slip past dashboards. And the biggest difference is time. These aren't one-off tasks. Agents run loops. They observe, decide, try something, and come back when the situation shifts. It looks a lot more like how teams actually work when they're at their best.

But none of that coordination works without shared context. If you have one agent basing decisions on unified profiles and another pulling from a stale, duplicated dataset, you're going to get drift. And once agents drift, they stop being intelligent and start being unpredictable. We've all known that fragmented data is annoying. In agentic systems, it becomes dangerous. Agents operate in parallel, and they need the same understanding of customers, products, events -- everything. Otherwise, you get contradictory decisions that only show up after damage is done. A unified, identity-resolved layer becomes the shared memory. It's what keeps agents grounded and lets them collaborate instead of stepping on each other. This isn't a philosophical point. Without that shared memory, agents "learn" different realities, and your system becomes incoherent fast.

For years, enterprises gravitated toward big, do-everything platforms because they were afraid that stitching systems together would break things. Ironically, agentic AI flips that idea on its head. Instead of giant platforms, you get small, specialized agents that talk to each other, almost like microservices, except they're reasoning, not just processing. Here's the catch: it's not enough for these agents to simply exchange data. They have to interpret the data in the same way. That's where interoperability becomes a real engineering challenge. The APIs matter less than the meaning attached to them. Two agents should receive the same signal and reach the same basic understanding of what it represents. Get this wrong and you don't have autonomy -- you have chaos. But when it works, you get an environment where you can add or upgrade agents without every change turning into a rewrite. The system gets smarter over time rather than more brittle.
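To make the shared-memory point concrete, here is a minimal Python sketch, under assumed names (SharedContext, resolve_identity, the sample records), of an identity-resolved layer that two agents read from, so both reason over the same unified profile rather than divergent copies. It is an illustration of the pattern, not any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    """Identity-resolved shared memory: one profile per real-world entity."""
    profiles: dict = field(default_factory=dict)   # canonical_id -> profile
    aliases: dict = field(default_factory=dict)    # raw key -> canonical_id

    def resolve_identity(self, raw_key):
        """Map any known alias (email, CRM id, device id) to one canonical id."""
        return self.aliases.get(raw_key)

    def merge(self, raw_key, canonical_id, attrs):
        """Fold a source record into the unified profile instead of duplicating it."""
        self.aliases[raw_key] = canonical_id
        self.profiles.setdefault(canonical_id, {}).update(attrs)

    def view(self, raw_key):
        """Every agent sees the same profile, whatever key it arrived with."""
        cid = self.resolve_identity(raw_key)
        return self.profiles.get(cid, {})

ctx = SharedContext()
ctx.merge("jane@example.com", "cust-001", {"tier": "gold", "churn_risk": 0.2})
ctx.merge("crm:8841", "cust-001", {"open_tickets": 3})

# A pricing agent and a support agent query with different raw keys,
# but both ground their decisions in the identical unified record.
print(ctx.view("jane@example.com"))  # tier, churn_risk and open_tickets together
print(ctx.view("crm:8841"))          # the same profile -- no drift
```

The design choice worth noting is that identity resolution happens in the shared layer, not inside each agent, which is what prevents two agents from "learning" different realities about the same customer.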
Many teams today still treat AI as a plug-in, something you add to an existing system after everything else is in place. That approach just doesn't work with agentic systems. You need data models designed for evolving schemas, governance that can handle autonomous behavior, and infrastructure built for feedback loops, not one-time transactions.

In an AI-first architecture, intelligence isn't a feature. It's part of the plumbing. Data moves in ways that support long-running decisions. Schemas evolve. Agents need context that lasts longer than a single request. It's a different mindset from traditional software design, closer to designing ecosystems than applications.

There's always a worry that "agentic AI" means people step aside. The reality is sort of the opposite. Agents take on the minute-by-minute decision loops, but humans define the goals, priorities, boundaries, and tradeoffs that make those loops meaningful. It actually makes oversight easier. Instead of reviewing every action, people look for patterns -- drift, bias, misalignment -- and course-correct the system as a whole. One person can guide a lot of agents because the job shifts from giving instructions to refining intent. Humans bring the judgment. Agents bring the stamina.

Agentic AI isn't just the next model trend. It's a shift in how intelligence gets embedded into systems. But autonomy without the right architecture will never produce the outcomes people expect. You need unified data so that agents are aligned. You need interoperable systems so agents can communicate. And you need infrastructure designed for long-lived context and continuous learning. If generative AI was about answers, agentic AI is about ongoing intelligence, and that only works if the architecture underneath it is built for the world it's operating in.
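As a sketch of what "infrastructure built for feedback loops, not one-time transactions" can look like, here is a hypothetical Python agent loop that keeps context across iterations and folds outcomes back into it. The signal source, the decision rule and all names (watch_signals, InventoryAgent, the thresholds) are illustrative assumptions, not a production design.

```python
def watch_signals():
    """Stand-in for a real event stream (demand shifts, anomalies, etc.)."""
    yield {"kind": "demand_spike", "sku": "A-100", "delta": 0.35}
    yield {"kind": "demand_spike", "sku": "A-100", "delta": -0.05}

class InventoryAgent:
    """Observe -> decide -> act -> learn, with context that outlives a request."""
    def __init__(self):
        self.context = {}   # long-lived memory, not reset between 'prompts'

    def decide(self, signal):
        history = self.context.setdefault(signal["sku"], [])
        history.append(signal["delta"])
        trend = sum(history) / len(history)    # learn from the running record
        return "reorder" if trend > 0.1 else "hold"

    def act(self, action, signal):
        print(f"{signal['sku']}: {action} "
              f"(context so far: {self.context[signal['sku']]})")

agent = InventoryAgent()
for signal in watch_signals():   # the loop runs as long as signals arrive
    agent.act(agent.decide(signal), signal)
```

Contrast this with the question-in, answer-out pattern: nothing here resets between turns, and each decision is shaped by everything the agent has already observed.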
[5]
I lead Microsoft's enterprise AI agent strategy. Here's what every company should know about how agents will rewrite work | Fortune
Across customers and industries, I am seeing AI agents move into the workflows that matter most, and they're already beginning to transform how businesses work and lead. Agents connect AI to tools, APIs, data, and organizational knowledge, and they can operate autonomously inside critical processes. Agents run continuously, escalate to people when needed, and deliver results at a speed and scale we have not seen before. That represents a real shift, one that's already visible across finance, operations, supply chain, and customer support. Agents are improving accuracy, reducing manual effort, and boosting customer experience. They're emerging as a dependable layer inside the enterprise and are shaping how organizations will operate in the years ahead.

At Microsoft, we started by bringing AI into the workplace as an assistant, something that could help with tasks, accelerate work, and lighten the load. Then came agents that could follow human direction and keep work moving. Now we are entering the next phase: autonomous agents operating alongside people. This is not about replacing existing AI assistants; it is an expansion of what is possible. Each capability serves a different type of work.

In our 2025 Work Trend Index, 80% of leaders said their company plans to integrate agents into their AI strategy in the next 12 to 18 months, with more than one-third planning to make them central to major business processes. Agents are becoming integral to operations across organizations of all sizes and are already delivering measurable results. According to a recent IDC study, Frontier Firms use AI across an average of seven business functions. More than 70% of them leverage AI in customer service, marketing, IT, product development, and cybersecurity, while 67% are monetizing industry-specific AI use cases to drive revenue growth.

History shows that breakthrough technologies don't just slot into existing systems; they make us rethink those systems entirely. When steam power arrived, factories didn't simply replace water wheels with engines. They redesigned the entire layout. Instead of clustering machines around a single power source, they spread them out, creating assembly lines and workflows that unlocked massive productivity gains. Electricity did the same, enabling flexible layouts and lighting that extended working hours and transformed manufacturing. We are at a similar moment now for information and knowledge work.

Agents' deeper impact will come from reshaping how work itself is structured. As they grow more capable, teams will include agents working alongside people who provide oversight, coaching, and strategic direction. This shift requires rethinking how people interact with applications and how organizations use data. It points toward a workplace where agents handle routine tasks and humans focus on creativity, judgment, and innovation. New roles will emerge, from agent builders to AI strategists, and existing positions will expand to include supervising and managing digital workers. Just as the internet era created UX designers and social media managers, the agentic era will produce a new generation of professionals who thrive in hybrid human-agent teams.

Agents unlock new levels of scale. They operate without downtime or bottlenecks, enabling organizations to serve more customers, move faster, and reduce costs. As they take on more repeatable work, companies can redirect talent and budgets toward higher-value activities and AI-driven outcomes.
Security is integral to making this possible. Applying Zero Trust principles to agents, giving them only the necessary access and adjusting it as responsibilities evolve, provides a foundation for responsible innovation. With strong guardrails and a culture that treats AI security as a shared responsibility, teams can deploy and scale agents with confidence. Pairing the scale of agents with rigorous safeguards is how organizations unlock transformative impact while protecting trust, data, and people.

Many leaders want to know what it looks like to bring agents into daily work. The strongest approach begins with democratized access: making agents available broadly so every employee can experiment and find value. Start with rules-based, repetitive processes such as data entry, invoicing, customer follow-ups, and approvals. These are low-risk, high-volume tasks where agents deliver immediate impact. From there, scale by building systems where agents collaborate, escalate, and learn. This means designing workflows where agents can hand off complex cases, adapt based on feedback, and continuously improve. Over time, they move from task automation to process orchestration.

Adoption benefits from a two-pronged model. Empower people at every level to use AI daily, for bottom-up innovation, while senior leaders drive high-impact projects from the top. Pressure from both sides - top and bottom - accelerates transformation and ensures agents reach every workflow where they can add value.

And remember: this is the least capable these systems will ever be. In six months, they will do much more; in six years, they will be everywhere. Build with that trajectory in mind -- design for scale, interoperability, and governance from day one.

This shift also calls for evolving how we think about leadership. It requires humility and curiosity in equal measure, because none of us have all the answers. The leaders who excel are the ones engaging their teams, showing where agents are delivering value, and positioning AI as a tool for empowerment rather than replacement. Most importantly, use AI every day. Make it part of your daily workflow. Ground the hype in real projects.

The fundamentals of work still matter. Relationships matter in sales, ethics matter in accounting, and culture matters in HR. AI will not change that, but it will change how we deliver them.

The agentic era has begun, and it will unfold over years rather than months. IDC expects the number of companies using agentic AI to triple over the next two years. Organizations that lean in early will scale faster, operate smarter, and unlock new value. Every company will need an AI strategy. Every leader will need to rethink how work gets done. And eventually, every process will have an agent. Let's build that future together.
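As an illustration of the Zero Trust idea above, here is a small Python sketch of least-privilege agent credentials whose scopes can be expanded or revoked as responsibilities evolve. The scope names and the AgentCredential class are hypothetical, sketched for this article, not Microsoft's actual implementation.

```python
import datetime

class AgentCredential:
    """Least-privilege, expiring grant: an agent gets only the scopes it needs."""
    def __init__(self, agent_id, scopes, ttl_minutes=60):
        self.agent_id = agent_id
        self.scopes = set(scopes)
        self.expires = (datetime.datetime.utcnow()
                        + datetime.timedelta(minutes=ttl_minutes))

    def allows(self, scope):
        """Deny by default: valid only for named scopes, and only until expiry."""
        return scope in self.scopes and datetime.datetime.utcnow() < self.expires

    def adjust(self, add=(), remove=()):
        """Expand or shrink access as the agent's responsibilities evolve."""
        self.scopes |= set(add)
        self.scopes -= set(remove)

# Start the invoicing agent with the minimum needed for its routine task.
cred = AgentCredential("invoicing-agent", scopes={"invoices:read", "invoices:create"})
print(cred.allows("invoices:create"))   # True: within its grant
print(cred.allows("payments:send"))     # False: never granted

# Later, responsibilities grow (with human sign-off), so the grant grows too.
cred.adjust(add={"payments:send"})
print(cred.allows("payments:send"))     # True: explicitly expanded
```

The short time-to-live matters as much as the scope list: an agent that must continually re-establish its grant is one whose access can be adjusted, or cut off, as responsibilities change.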
[6]
Why AI agents still outrun the reality of enterprise ambition - SiliconANGLE
AI agents are fast becoming the defining force behind the enterprise shift from simple automation to true decision intelligence. If the first phase of enterprise artificial intelligence was about automation, the next is clearly about augmentation: enhancing human intelligence in knowledge work. TheCUBE Research's "Agentic AI Futures Index" shows that shift accelerating. Sixty-two percent of companies now see AI agents as a key part of decision-making, marking a decisive move from automation-focused deployments toward AI-driven decision intelligence.

But ambition is outpacing execution. Organizations are investing heavily in capabilities beyond automation, including digital coworkers that collaborate with humans, pursue goals and make judgment-based decisions. As they shift from experimentation to execution, the distance between what leaders believe AI can deliver and what their organizations can operationalize continues to widen. AI agents don't just automate tasks -- they expose weaknesses in governance, data quality and operational readiness. At the center of that tension sits a question most enterprises still can't answer: "Can we trust these systems to make decisions that matter?"

"Trust is emerging as the currency of innovation," said theCUBE Research's Scott Hebner on the Next Frontiers of AI podcast. "No trust, no ROI."

The "Agentic AI Futures Index" provides the first comprehensive benchmark of where enterprises actually stand in this transition, according to Hebner. Conducted in the third quarter of 2025, the research surveyed cross-industry AI business and technology leaders, measuring enterprise readiness across five areas. The Index also draws on insights from theCUBE's coverage of the AI Agent Builder Summit and adds real-world insights from Doozer.ai Inc. These data points reveal not just where organizations are investing, but where they're getting stuck -- and why execution keeps breaking down even as conviction remains high.

AI agent innovation cycles are accelerating faster than most enterprises can track. Leaders need more than anecdotal benchmarks to understand their position and the gaps they can't yet see. Each of the Index's five dimensions is scored on a 0-5 maturity scale based on responses to 61 questions from 625 qualified AI professionals across 13 industries. Respondents were screened for direct involvement in AI strategy, development or governance, ensuring the data reflects practitioners actively shaping the field rather than observers speculating about it.

"Together, these indices define a strategic maturity curve that helps organizations craft AI strategies, showcase solution leadership, benchmark progress against peers and anticipate the technologies that will shape the next decade of innovation," Hebner explained.

Enterprises aren't lacking conviction. More than 90% of leaders surveyed for the "Agentic AI Futures Index" see digital labor -- and agentic AI more broadly -- as inevitable, and conviction is highest among those with the most direct AI experience, according to the Index. Digital coworkers are seen as essential solutions to talent shortages, rising costs and competitive pressure, not optional innovations. This pattern shows up consistently across the Index: leaders express strong conviction in agentic AI, but organizations struggle to translate that vision into real-world execution.
The "Digital Labor Transformation Index", part of the overall "Agentic AI Futures Index," illustrates the point. Its overall maturity score sits at 3.1 -- evidence that organizations have begun to move past experimentation, but not enough to signal readiness for scaled digital coworkers. Collaboration between human resources and IT is forming, and early strategies are taking shape. Targeted use cases are emerging, but execution continues to fall short. "Right now, it's still considered a technical implementation, but this is really a business play," said Christophe Bertrand, principal analyst at theCUBE Research. "Without cross-organization collaboration, things can head to the wall very quickly because there's only so much you can ask IT to do for you." Even with that conviction, operational readiness remains out of reach. Within the "Digital Labor Transformation Index," aspirations score 4.1 on the maturity scale, strategy drops to 3.1, and execution falls to just 1.8. That pattern signals a structural gap between what organizations envision for digital labor and what they can actually deliver today, echoing the broader vision-to-value gap seen across the "Agentic AI Futures Index." No single failure drives that gap. Many organizations still treat agentic AI and digital coworkers as technical implementations owned by IT rather than as workforce transformation initiatives that demand shared accountability. Strategy scores show that investment and planning are underway, but execution scores reveal that the collaborative structures, governance and change management required to operationalize digital labor at scale remain immature. "These are the signs of initiatives in their infancy," Bertrand said. "We're still putting the foundations in place, and we're years from where we want to be." The cultural dimension is equally critical. Human resources is increasingly involved in shaping how enterprises integrate AI agents as digital coworkers rather than as replacements for human workers. This shift reframes agentic AI as workforce evolution rather than pure automation. "If you're not at the cadence of speed, culture is the game," said John Furrier, co-founder and co-chief executive officer of SiliconANGLE Media Inc., during the AI Agent Builder Summit. "It's the only game in town, because if you don't have the speed, you don't win. If you're not in the game, on the field with AI, you're going to lose to somebody else who's going to be faster." The "Agentic AI Futures Index" highlights the accelerating investment in AI reasoning and decision intelligence. Seventy-three percent of enterprises are making significant or strategic commitments to capabilities that move beyond automation into judgment-based work, and investment maturity scores 3.8 out of 5 - the highest of any dimension measured. But confidence hasn't kept pace with spending. Only 49% of leaders interviewed express high confidence that AI agents can make trustworthy, accurate decisions. The Index's trust scores, 2.4 out of 5, are the lowest across all measured dimensions. The gap between investment and confidence is where ROI stalls, according to Paul Chada, chief executive officer of Doozer.ai. "A prediction is not a decision," Chada said. "You can only trust an agent when it shows that it understands the goal, the context and the consequences of its actions." The tension between belief and doubt plays out in real deployments. 
Enterprises range from expecting agentic AI to solve everything to deep skepticism that it's ready for production, according to Michael Garas, AI partnerships leader at IBM Corp. "I think the top-of-mind concern for enterprises is how to make sure AI agents have the context that they need," Garas said during the AI Agent Builder Summit.

Generative AI identifies patterns and correlations, but decision intelligence requires systems to understand cause and effect, Chada noted. As enterprises move from prediction into judgment-based work, the trust mechanisms that supported automation no longer apply. Explainability, human collaboration and the ability to evaluate trade-offs become essential to validating an agent's decisions.

"Organizations are just wanting to run [AI agents] in parallel right now," Chada said. "As they see it make the correct decision repeatedly, trust builds naturally; that's when autonomy becomes acceptable."

These tensions are why trust is emerging as the real gatekeeper of enterprise adoption. The "Agentic AI Futures Index" revealed that organizations are willing to invest and eager to experiment, but autonomy only advances at the speed of confidence. As AI agents take on work with real consequences, the ability to understand and verify their decisions will shape how quickly enterprises move into the next phase of this shift.
[7]
What the new wave of agentic AI demands from CEOs | Fortune
For decades, technologies have largely been built as tools, extensions of human intent and control that have helped us lift, calculate, store, move, and much more. But those tools, even the most revolutionary ones, have always waited for us to 'use' them, assisting us in doing the work -- whether manufacturing a car, sending an email, or dynamically managing inventory -- rather than doing it on their own. With recent advances in AI, however, that underlying logic is shifting.

"For the very first time, technology is now able to do work," Nvidia CEO Jensen Huang recently observed. "[For example], inside every robotaxi is an invisible AI chauffeur. That chauffeur is doing the work; the tool it uses is the car."

This idea captures the transition underway today. AI is no longer just an instrument for human use: rather, it is becoming an active operator and orchestrator of "the work" itself, not only capable of predicting and generating, but also of planning, acting, and learning. This emerging class -- "agentic" AI -- represents the next wave of artificial intelligence. Agents can coordinate across workflows, make decisions, and adapt with experience. In doing so, they also blur the line between machine and teammate.

For business leaders, that means agentic AI upends the fundamental management calculation around technology deployment. Their job is no longer simply installing smarter tools but guiding organizations where entire portions of the workforce are synthetic, distributed, and continuously evolving. With agents on board, companies must rethink their very makeup: how work is designed, how decisions are made, and how value is created when AI can execute on its own. How organizations redesign themselves around these agentic capabilities will determine whether AI becomes not just a more efficient technology, but a new basis for strategic differentiation altogether.

To better understand how executives are navigating this shift, BCG and MIT Sloan Management Review conducted a global study of more than 2,000 leaders from 100+ countries. The findings show that while organizations are rapidly exploring agentic AI, most enterprises still need to define the overall strategies and operating models needed to integrate AI agents into their daily operations. Agentic AI's perceived dual identity -- as both machine and teammate -- creates tensions that traditional management frameworks cannot easily resolve. Leaders can't eliminate these tensions altogether; they must instead learn to manage them. Four organizational tensions stand out.

The companies furthest ahead aren't resolving these tensions outright. Instead, they're embracing them -- redesigning systems, governance, and roles to turn the frictions into forward momentum. They see agentic AI's complexity as a feature to harness, not a flaw to fix. For CEOs, the challenge now is figuring out how to lead an organization where technology acts alongside people. Managing this new class of systems requires different frameworks than previous waves of AI. While predictive AI helped organizations analyze faster and better and generative AI helped them create faster and better, agentic AI now enables them to operate faster and better, by planning, executing, and improving on its own. That shift upends traditional management approaches, requiring a new playbook for leadership.

Reimagine the work, not just the workflow. In predictive or generative AI, the leadership task is to insert models into workflows.
But agentic AI demands something different: it doesn't just execute a process -- it reimagines it dynamically. Because agents plan, act, and learn iteratively, they can discover new, often better ways of achieving the same goal. Historically, many work processes were designed to make humans mimic machine-like precision and predictability: each step was standardized so work could be replicated reliably. Agentic systems, however, invert that logic: leaders only need to define the inputs and desired outcomes. The work that happens in between those starting and ending points is then organic, a living system that optimizes itself in real time. But most organizations are still treating AI as a layer on top of existing workflows -- in essence, as a tool. To take advantage of agentic AI's true potential, leaders should start by identifying a few high-value, end-to-end processes -- where decision speed, cross-functional coordination, and learning feedback loops matter most -- and redesign them around how humans and agents can learn and act together. The opportunity is to create systems that can both scale predictably and adapt dynamically, not one or the other.

Guide the actions, not just the decisions. Earlier AI waves required oversight of outputs; agentic AI requires oversight of actions. These systems can act autonomously, but not all actions carry the same risk. That makes the leadership challenge broader than determining decision rights. It's defining how agents operate within an organization: what data they can see, which systems they can trigger, and how and to what extent their choices ripple through the organization. While leaders will need to decide which categories of decisions remain human-only, which can be delegated to agents, and which require collaboration between the two, the overall focus should be on setting boundaries for agent behaviors. Governance can therefore no longer be a static policy; it must flex with context and risk. And just as leaders coach people, they will also need to coach agents -- deciding what information they need, which goals they optimize for, and when to escalate uncertainty to human judgment. Companies that embrace these new approaches to governance will be able to build trust, both internally and with regulators, by making accountability transparent even when machines may be executing.

Rethink structures and talent. Generative AI changed how individuals work; agentic AI changes how organizations are structured. When agents can coordinate work and information flow, the traditional middle layer built for supervision will shrink. That's not a story of replacement -- it's a redesign. The next generation of leaders will be orchestrators, not overseers: people who can combine business judgment, technical fluency, and ethical awareness to guide hybrid teams of humans and agents. Companies should start planning now for flatter hierarchies, fewer routine roles, and new career paths that reward orchestration and innovation over task execution.

Institutionalize learning for humans and agents. Like people, agents drift, learn, and -- most critically -- improve with feedback. Every action, interaction, and correction makes them more capable. But that improvement depends on people staying engaged, not to control every step, but to help systems learn faster and better. To make that happen, leaders should create continuous learning loops connecting humans and agents.
Employees must learn how to work with agents -- how to improve them, critique them, and adapt to their evolving capabilities -- while agents improve through those same interactions, across onboarding, monitoring, retraining, and even "retirement." Organizations that treat this as a shared development process -- where people shape how agents learn and agents elevate how people work -- will see the biggest gains. Managing this loop requires viewing both humans and agents as learners, and creating structures for ongoing training, retraining, and knowledge exchange. When this process is done right, the organization itself becomes a continuously improving system, one that gets smarter every time its humans and agents interact.

Build for radical adaptability. Traditional transformation programs were designed for predictability. Agentic AI, however, moves too fast for those to keep up. Leaders need organizations that can adapt continuously -- financially, operationally, and culturally. But adaptability in the agentic era isn't just about keeping up with a faster technology cycle; it's about being ready to evolve as your organization learns alongside its agents. Each new capability can reshape responsibilities, decision flows, and even what "good performance" looks like. Leaders will need to treat adaptability not as crisis management but as an organizing principle. That means budgeting for constant reinvestment, building modular structures that allow functions to reconfigure as agents take on new roles, and cultivating cultures where experimentation is routine rather than exceptional. Agentic AI rewards organizations that can lean into continuous, radical change. This kind of "agent-centricity" means reassigning talent, updating processes, and refreshing governance in response to what the system itself learns. The most resilient companies will see adaptability not as a defensive reflex, but as a defining source of advantage.

For years, the story of AI has been one of automation -- doing the same work faster, cheaper, and with fewer people. But that era is coming to an end. Agentic AI changes the nature of value because it can reshape the organization itself: how it learns, collaborates, and evolves. The next frontier is radical redesign, not repetition. The real opportunity is to set up an enterprise that can reinvent itself continuously, where agentic AI becomes the connective tissue -- linking knowledge, decision-making, and adaptation into one living system. This is the foundation of what we call the Agentic Enterprise Operating System: a model where human creativity and machine initiative evolve together, dynamically redesigning how the company works. Companies that embrace this shift will outgrow those still chasing efficiency -- they will be the ones defining how value, capability, and competition work in the age of AI.
[8]
The race to deploy an AI workforce faces one important trust gap: What happens when an agent goes rogue? | Fortune
To err is human; to forgive, divine. But when it comes to autonomous AI "agents" that are taking on tasks previously handled by humans, what's the margin for error?

At Fortune's recent Brainstorm AI event in San Francisco, an expert roundtable grappled with that question as insiders shared how their companies are approaching security and governance -- an issue that is leapfrogging even more practical challenges such as data and compute power. Companies are in an arms race to parachute AI agents into their workflows to tackle tasks autonomously and with little human supervision. But many are facing a fundamental paradox that is slowing adoption to a crawl: moving fast requires trust, and yet building trust takes a lot of time.

Dev Rishi, general manager for AI at Rubrik, joined the security company last summer following its acquisition of his deep learning AI startup Predibase. Afterward, he spent the next four months meeting with executives from 180 companies. He used those insights to divide agentic AI adoption into four phases, he told the Brainstorm AI audience. (To level set, agentic adoption refers to businesses implementing AI systems that work autonomously, rather than responding to prompts.)

According to Rishi, the first phase is early experimentation, where companies are hard at work prototyping their agents and mapping the goals they think could be integrated into their workflows. The second phase, said Rishi, is the trickiest: that's when companies shift their agents from prototypes into formal production. The third phase involves scaling those autonomous agents across the entire company. The fourth and final stage -- which no one Rishi spoke with had achieved -- is autonomous AI.

Roughly half of the 180 companies were in the experimentation and prototyping phase, Rishi found, while 25% were hard at work formalizing their prototypes. Another 13% were scaling, and the remaining 12% hadn't started any AI projects. However, Rishi projects a dramatic change ahead: in the next two years, those in the 50% bucket anticipate moving into phase two, according to their roadmaps. "I think we're going to see a lot of adoption very quickly," Rishi told the audience.

However, there's a major risk holding companies back from going "fast and hard" when it comes to speeding up the implementation of AI agents in the workforce, he noted. That risk -- and the No. 1 blocker to broader deployment of agents -- is security and governance, he said. And because of that, companies are struggling to shift from agents being used for knowledge retrieval to being action-oriented. "Our focus actually is to accelerate the AI transformation," said Rishi. "I think the number one risk factor, the number one bottleneck to that, is risk [itself]."

Kathleen Peters, chief innovation officer at Experian, who leads product strategy, said the slowdown stems from not fully understanding the risks when AI agents overstep the guardrails that companies have put into place, and the failsafes needed when that happens. "If something goes wrong, if there's a hallucination, if there's a power outage, what can we fall back to?" she asked. "It's one of those things where some executives, depending on the industry, are wanting to understand 'How do we feel safe?'" Figuring out that piece will be different for every company and is likely to be particularly thorny for companies in highly regulated industries, she noted.
Chandhu Nair, senior vice president of data, AI, and innovation at home improvement retailer Lowe's, noted that it's "fairly easy" to build agents, but people don't understand what they are: Are they a digital employee? Is it a workforce? How will it be incorporated into the organizational fabric? "It's almost like hiring a whole bunch of people without an HR function," said Nair. "So we have a lot of agents, with no kind of ways to properly map them, and that's been the focus." The company has been working through some of these questions, including who might be responsible if something goes wrong. "It's hard to trace that back," said Nair.

Experian's Peters predicted that the next few years will see a lot of those very questions hashed out in public, even as conversations take place simultaneously behind closed doors in boardrooms and among senior compliance and strategy committees. "I actually think something bad is going to happen," Peters said. "There are going to be breaches. There are going to be agents that go rogue in unexpected ways. And those are going to make for very interesting headlines in the news." Big blowups will generate a lot of attention, Peters continued, and reputational risk will be on the line. That will force uncomfortable conversations about where liabilities reside regarding software and agents, and it will all likely add up to increased regulation, she said. "I think that's going to be part of our societal overall change management in thinking about these new ways of working," Peters said.

Still, there are concrete examples of how AI can benefit companies when it is implemented in ways that resonate with employees and customers. Nair said Lowe's has seen strong adoption and "tangible" return on investment from the AI it has embedded into the company's operations thus far. For instance, each of its 250,000 store associates has an agent companion with extensive product knowledge spanning its 100,000-square-foot stores, which sell everything from electrical equipment to paints to plumbing supplies. A lot of the newer entrants to the Lowe's workforce aren't tradespeople, said Nair, and the agent companions have become the "fastest-adopted technology" so far. "It was important to get the use cases right that really resonate back with the customer," he said. In terms of driving change management in stores, "if the product is good and can add value, the adoption just goes through the roof." But for those who work at headquarters, the change management techniques have to be different, he added, which piles on the complexity.

And many enterprises are stuck at another early-stage question: whether they should build their own agents or rely on the AI capabilities developed by major software vendors. Rakesh Jain, executive director for cloud and AI engineering at healthcare system Mass General Brigham, said his organization is taking a wait-and-see approach. With major platforms like Salesforce, Workday, and ServiceNow building their own agents, it could create redundancies if his organization builds its own agents at the same time. "If there are gaps, then we want to build our own agents," said Jain. "Otherwise, we would rely on buying the agents that the product vendors are building." In healthcare, Jain said there's a critical need for human oversight given the high stakes. "The patient complexity cannot be determined through algorithms," he said. "There has to be a human involved in it."
In his experience, agents can accelerate decision-making, but humans have to make the final judgment, with doctors validating everything before any action is taken. Still, Jain also sees enormous potential upside as the technology matures. In radiology, for example, an agent trained on the expertise of multiple doctors could catch tumors in dense tissue that a single radiologist might miss. But even with agents trained on multiple doctors, "you still have to have a human judgment in there," said Jain.

And the threat of overreach by an agent that is supposed to be a trusted entity is ever present. He compared a rogue agent to an autoimmune disease, one of the most difficult conditions for doctors to diagnose and treat because the threat is internal. If an agent inside a system "becomes corrupt," he said, "it's going to cause massive damages which people have not been able to really quantify."

Despite the open questions and looming challenges, Rishi said there's a path forward. He identified two requirements for building trust in agents. First, companies need systems that provide confidence that agents are operating within policy guardrails. Second, they need clear policies and procedures for when things inevitably go wrong -- a policy with teeth. Nair added three more factors for building trust and moving forward smartly: identity and accountability, knowing who the agent is; evaluating how consistent the quality of each agent's output is; and reviewing the post-mortem trail that can explain why and when mistakes have occurred. "Systems can make mistakes, just like humans can as well," said Nair. "But to be able to explain and recover is equally important."
[9]
The infinite digital workforce and the road from promise to practice
Autonomous agents offer a future of affordable digital workers. Enterprises are exploring these AI colleagues for tasks like ticket resolution and content drafting. While promising, successful adoption hinges on careful implementation. Teams are building safeguards and integrating humans into workflows. This approach transforms unruly interns into reliable staff, driving efficiency and innovation. Autonomous agents promise an abundant supply of low-cost digital labour. Turning that promise into reality takes more than a clever prompt.

The promise

Enterprises stand on the cusp of a familiar dream from the RPA era. This time it is agents, powered by large language models and equipped with tools that let them act rather than merely interact. Many executives picture a tireless digital workforce resolving tickets, reconciling ledgers and producing first drafts while human colleagues focus on clients and strategy. Will agents live up to that image, and if so, when?

The appeal is clear. Agents can operate around the clock at near-zero marginal cost. They do not tire, resign or switch employers. Correct one instance and, through shared prompts or fine-tuning, every sibling improves. Knowledge propagates quickly, lifting quality in minutes rather than months. Routine work benefits most: machines deliver steady competence while people apply judgement, empathy and imagination.

Interest is rising. Early pilots show modest but real gains, and surveys abound. EY's recent AIdea of India survey suggests that 24% of Indian enterprises are adopting agentic AI, with most Indian knowledge workers expressing positive sentiment about working alongside AI colleagues. Seasonal spikes are particularly ripe: retailers during festivals, auditors at quarter-end and analysts in earnings season can scale capacity without burnout. Customer support, bookkeeping, data entry and first-draft content are already shifting to digital labour.

What "Agentic" means

An agent is software that acts toward a goal with limited supervision. Given a high-level brief, it plans steps, chooses tools, takes actions, observes outcomes and adapts. It can call external APIs, query databases, search, send emails or run code to achieve an objective. Autonomy, access to a useful toolkit and a budget that constrains time or spend are the essentials. Ask for a campaign plan and an agent can research audiences, propose messages, list channels and draft copy, adjusting as it learns.

Why now

Reasoning models alone are brilliant conversationalists that can think, reason and plan but do not act. Combine them with orchestration frameworks and the picture changes. Techniques that blend reasoning and action let models plan, execute, observe and decide in a loop. Guardrails set boundaries and budgets so agents do not wander. Early open-source experiments such as AutoGPT and BabyAGI mapped the mechanics: decompose goals into tasks, perform them in sequence, stop when done or out of budget (a minimal sketch of this loop follows this piece). Recent gains in compute, models and infrastructure have turned a lab trick into a serviceable tool.

Where it breaks

A gap remains between demos and dependable production. So how does one make it work? Pragmatic teams are building guardrails and discipline, and the approach is clear: treat agents as fallible colleagues who require oversight, not as oracles.

People & Process matter more than Tools & Tech

Dropping an agent into an unreformed process rarely works.
Redesign workflows to specify what agents do, what humans decide and how hand-offs occur. Train employees to supervise, to review and to exercise judgement rather than to retype. Manage change deliberately: clear communication, realistic expectations and incentives that reward collaboration with machines. The most successful organisations start small, measure outcomes and then scale, replacing enthusiasm with disciplined adoption.

A Realistic Outlook

The infinite digital workforce is not here yet. What is here is a useful apprentice that can take on parts of real work with encouraging consistency. Invest in the plumbing, govern the risks, redesign the work and teach people how to partner. Do that and today's unruly interns begin to look like tomorrow's steady staff.

The author is Partner, Technology Consulting, EY India
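As referenced above, here is a minimal sketch of the plan-execute-observe-adapt loop with a budget guardrail. The hardcoded task decomposition and the randomized stand-in for a real tool or LLM call are illustrative assumptions only.

```python
import random

def run_agent(goal: str, max_steps: int = 6) -> list[str]:
    # plan: decompose the goal into tasks (hardcoded here for illustration)
    tasks = [f"research {goal}", f"draft {goal}", f"review {goal}"]
    log = []
    for _ in range(max_steps):          # budget guardrail: hard step limit
        if not tasks:
            break                       # stop when done
        task = tasks.pop(0)             # act: take the next task
        ok = random.random() > 0.2      # placeholder for a real tool/LLM call
        log.append(f"{task}: {'ok' if ok else 'failed'}")   # observe
        if not ok:
            tasks.insert(0, task)       # adapt: put the failed step back
    return log

print(run_agent("campaign plan"))
```

The essential discipline is the step budget: the loop halts when the work is done or the budget is exhausted, the same mechanic the AutoGPT-style experiments mapped out.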
[10]
Agentic AI in Business: From Workflows to Workforces: By Ankit Patel
For years, businesses used AI mainly to automate repetitive tasks. It handled things like sorting emails, organizing data, or suggesting responses in customer service chats. These systems were helpful, but they were limited: they followed rules, waited for instructions, and rarely acted on their own. That is changing quickly as AI adoption becomes a priority for companies looking to work faster and smarter. A new wave of technology -- often called agentic AI -- is pushing businesses into a new era. Instead of simply automating steps in a process, AI is beginning to take on roles, make decisions, and complete tasks from start to finish. This shift is transforming how companies operate and how teams work every day.

Traditional AI completes single actions. It answers a question. It analyzes a document. It predicts a number. These actions are useful, but they do not resemble how humans work. Agentic AI, on the other hand, behaves more like a digital worker. It can plan, break down goals, move between tools, and adapt when things change. It understands context, not just commands. This means companies are not only automating "workflows" anymore -- they are building digital workforces.

Businesses today deal with massive amounts of information, scattered systems, and time-consuming manual tasks. Most companies already have workflow tools, but workflows fall apart when something unexpected happens. They're rigid. Agentic AI is different because it can plan around obstacles and adapt as conditions change, and this flexibility makes it ideal for modern business environments, especially those with dynamic processes.

Many companies are now moving from experimenting with AI to actually using it across departments. Most organizations began with small projects -- like improving customer support or automating reports -- but are now exploring deeper transformation. As the technology becomes easier to deploy, more teams are learning how agentic systems can support daily operations and handle higher-value work. In large organizations, the value of agentic AI becomes even clearer. These digital agents can log into apps, gather information, make decisions, and report results with minimal human oversight. This makes them especially useful in environments with complex systems, multiple teams, and heavy data workloads. As a result, interest in AI agents in enterprise environments is growing rapidly.

In customer service, AI agents can handle queries end-to-end. Instead of offering simple replies, they can understand the issue, check the customer's account, update records, and provide solutions. They work 24/7 and stay consistent. In operations, agentic AI can monitor tasks, assign actions, update systems, send alerts, and ensure processes stay on schedule. It can handle tasks that used to require several employees. In sales, AI can research leads, create outreach messages, update CRM records, and track campaign performance, helping teams close deals faster by reducing administrative work. In finance, instead of simply scanning invoices, agentic AI can match them with purchase orders, flag exceptions, follow up with vendors, and generate end-of-month summaries (a sketch of this pattern follows at the end of this piece). In HR, AI can screen resumes, schedule interviews, prepare onboarding documents, and help employees find resources, increasing speed and improving internal efficiency.

In the past, companies built workflows that followed strict rules. These systems needed clear instructions and always depended on people to handle exceptions. Now, agentic AI brings a completely different approach.
A digital agent can act like an employee -- using tools, solving problems, and improving over time. It becomes part of the workforce instead of just a workflow component. This shift means companies can scale operations faster without hiring huge teams. It also means human workers can focus more on creativity, innovation, and relationship-driven tasks.

Agentic AI brings many advantages, but it also comes with important considerations, and adopting it takes a deliberate, incremental approach. In the coming years, almost every company will use agentic AI in some form. As the technology improves, these systems will become more capable, more reliable, and more central to daily business operations. Eventually, businesses may run with hybrid teams -- part human, part AI -- working together on equal footing. Humans will focus on strategy, creativity, and relationships. AI will handle execution, coordination, and routine decision-making. This evolution will reshape how companies grow and compete.

Agentic AI marks a major shift in how businesses operate. It goes beyond simple automation and introduces digital workers that can plan, act, and learn. Companies are moving from rigid workflows to flexible, intelligent workforces powered by AI. As adoption increases, businesses of all sizes will benefit from faster processes, smarter decisions, and more efficient operations. The businesses that embrace this shift early will gain the greatest advantage -- because the future of work is not just automated, it is agentic.
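As referenced above, here is a minimal sketch of the invoice-to-purchase-order pattern: match, flag exceptions for humans, and summarize. The data shapes, the 1% variance tolerance, and the toy records are all illustrative assumptions, not a production ERP integration.

```python
# Sketch: match invoices to purchase orders, flag exceptions, summarize.
from dataclasses import dataclass

@dataclass
class Invoice:
    po_number: str
    amount: float

purchase_orders = {"PO-1001": 250.00, "PO-1002": 990.00}   # toy ERP extract
invoices = [Invoice("PO-1001", 250.00), Invoice("PO-1002", 1040.00),
            Invoice("PO-9999", 75.00)]

matched, exceptions = [], []
for inv in invoices:
    expected = purchase_orders.get(inv.po_number)
    if expected is None:
        exceptions.append((inv, "no matching purchase order"))
    elif abs(inv.amount - expected) > 0.01 * expected:      # >1% variance
        exceptions.append((inv, f"amount {inv.amount} vs PO {expected}"))
    else:
        matched.append(inv)

print(f"Month-end summary: {len(matched)} matched, {len(exceptions)} flagged")
for inv, reason in exceptions:          # exceptions go to a human for vendor follow-up
    print(f"Flag {inv.po_number}: {reason}")
```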
2025 was supposed to be the year of AI agents, but enterprise adoption fell far short of expectations. Deloitte's Tech Trends report reveals that only 11% of organizations have deployed AI agents in production, with 42% still developing their strategy. Legacy enterprise systems, fragmented data architecture, and inadequate governance emerged as the primary obstacles preventing widespread adoption.

2025 was heralded as the breakthrough year for AI agents, with industry experts predicting these autonomous assistants would transform enterprise workflows and boost productivity across organizations. The reality proved far different. According to Deloitte's 2025 Tech Trends report, AI agents failed to achieve widespread adoption, with only 11% of surveyed organizations actively using agentic AI in production environments [1]. The gap between promise and execution reveals fundamental challenges in how enterprises approach autonomous agents.

Deloitte's 2025 Emerging Technology Trends study surveyed 500 US tech leaders and found that while 30% of organizations are exploring agentic options and 38% are piloting solutions, only 14% have solutions ready to deploy [1]. Even more concerning, 42% of organizations report they are still developing their agentic strategy roadmap, and 35% have no strategy in place at all. This sluggish deployment rate stands in stark contrast to Gartner's prediction that by 2028, 15% of day-to-day work decisions will be made autonomously by agents, up from 0% in 2024 [1].
The primary obstacle preventing productivity gains from AI agents isn't the technology itself but the infrastructure supporting it. Legacy enterprise systems that organizations still rely on were not designed for agentic AI operations, creating bottlenecks that hinder agents' ability to carry out actions and perform tasks [1]. "You have to have the investments in your core systems, enterprise software, legacy systems, SAS, to have services to consume and be able to actually get any kind of work done," explained Bill Briggs, CTO at Deloitte. "At the end of the day, they're [AI agents] still calling the same order systems, pricing systems, finance systems, HR systems, behind the scenes, and most organizations haven't spent to have the hygiene to have them ready to participate" [1].
Data architecture emerged as another critical failure point. The data repositories feeding information to autonomous agents are not organized in ways that enable effective consumption. A 2025 Deloitte survey found that 48% of organizations identified the searchability of data as a challenge to their AI automation strategy, while 47% cited the reusability of data as an obstacle [1]. Without unified, identity-resolved data layers, agents operate with a fragmented understanding, leading to contradictory decisions and system incoherence [4].
In enterprise AI coding implementations, the limiting factor is no longer the models but context engineering: the structure, history, and intent surrounding the code being changed [3]. When agents lack a structured understanding of codebases, including relevant modules, dependency graphs, test harnesses, and architectural conventions, they generate output that appears correct but is disconnected from reality. A randomized controlled study showed that developers using AI assistance in unchanged workflows completed tasks more slowly, largely due to verification, rework, and confusion around intent [3].
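What a "context package" for a coding agent might contain can be sketched concretely. The field names and the stubbed repo contents below are assumptions for illustration, not any known tool's schema; in a real system they would come from static analysis and repository history.

```python
# Sketch: assembling structured context for a coding agent.
from dataclasses import dataclass, field

@dataclass
class ContextPackage:
    change_intent: str                 # why the change is being made
    relevant_modules: list[str] = field(default_factory=list)
    dependency_edges: list[tuple[str, str]] = field(default_factory=list)
    test_commands: list[str] = field(default_factory=list)
    conventions: list[str] = field(default_factory=list)

def build_context(intent: str) -> ContextPackage:
    # Stubbed: real values would come from static analysis and repo history.
    return ContextPackage(
        change_intent=intent,
        relevant_modules=["billing/invoice.py", "billing/tests/test_invoice.py"],
        dependency_edges=[("billing/invoice.py", "core/ledger.py")],
        test_commands=["pytest billing/tests -q"],
        conventions=["no direct DB access outside core/ledger.py"],
    )

pkg = build_context("add late-fee calculation to invoices")
prompt = (f"Intent: {pkg.change_intent}\nModules: {pkg.relevant_modules}\n"
          f"Run before merge: {pkg.test_commands}\nRules: {pkg.conventions}")
print(prompt)   # the structured context, not the model, is the lever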
Governance represents another critical gap. Traditional IT governance doesn't account for AI agents' ability to make their own decisions, and organizations often fail to create proper oversight mechanisms for agentic systems to operate autonomously [1]. "You've got this layer on top, which is the orchestration/agent ops. How do we instrument, measure, put controls, and thresholds, so if we got it right, the meter wouldn't be spinning out of control," said Briggs [1]. Without real observability, audit trails, and behavior logs, trust collapses when IT teams can't see what an agent did or why [2].
Deloitte identified a clear pattern among organizations with successful implementations: being thoughtful about how agents are implemented rather than simply layering them onto existing workflows [1]. Business processes were created to fit human needs, not those of autonomous agents, so the shift to automation means fundamentally rethinking existing operations. McKinsey's 2025 report noted that productivity gains arise not from layering AI onto existing processes but from redesigning business processes themselves [3].
The enterprise was designed for a world where execution was the primary constraint. Today, execution is cheap, abundant, and instantaneous through agentic AI. The new constraint is process orchestration: ensuring work flows simply and cleanly across teams [2]. Organizations need to identify what work exists purely as organizational muscle memory (duplicate requests, redundant checks, legacy forms) and remove the organizational drag that creates delays between tasks rather than inside them.

Despite the slow start, Microsoft remains committed to advancing enterprise adoption of AI agents. According to the company's 2025 Work Trend Index, 80% of leaders said their company plans to integrate agents into their AI strategy in the next 12 to 18 months, with more than one-third planning to make them central to major business processes [5]. An IDC study found that Frontier Firms use AI across an average of seven business functions, with more than 70% leveraging AI in customer service, marketing, IT, product development, and cybersecurity [5].
Microsoft's enterprise AI strategy emphasizes starting with democratized access: making agents available broadly so every employee can experiment and find value with rules-based, repetitive processes such as data entry, invoicing, customer follow-ups, and approvals [5]. Security remains integral, with Zero Trust principles applied to agents, giving them only necessary access and adjusting it as responsibilities evolve. The strongest adoption benefits from a two-pronged model: empowering people at every level to use AI daily for bottom-up innovation while senior leaders drive high-impact projects from the top [5].
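A minimal sketch of what Zero Trust-style scoping for agents could look like follows; the credential class, scope names, and approval flow are illustrative assumptions, not Microsoft's actual mechanism.

```python
# Sketch: least-privilege access for an agent, widened only with explicit approval.
class AgentCredential:
    def __init__(self, agent_id: str, scopes: set[str]):
        self.agent_id = agent_id
        self.scopes = scopes                     # least privilege by default

    def can(self, scope: str) -> bool:
        return scope in self.scopes

    def grant(self, scope: str, approved_by: str) -> None:
        # Widening access is explicit and attributable, never automatic.
        print(f"{approved_by} granted {scope} to {self.agent_id}")
        self.scopes.add(scope)

cred = AgentCredential("invoice-agent", {"read:invoices"})
assert not cred.can("write:payments")            # denied until needed
cred.grant("write:payments", approved_by="finance-lead")
assert cred.can("write:payments")                # adjusted as responsibilities evolve
```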
The transformation requires treating agents as data infrastructure, where every plan, context snapshot, action log, and test run becomes part of an engineered environment [3]. Organizations that succeed will treat context as an engineering surface, creating tooling to snapshot, compact, and version the agent's working memory. As agentic AI matures, new roles will emerge, from agent builders to AI strategists, while existing positions expand to include supervising and managing digital workers, creating hybrid human-agent teams that redefine how the enterprise operates.
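What snapshotting, compacting, and versioning an agent's working memory could mean in code is sketched below. The storage scheme (content-addressed JSON blobs) and the crude keep-last-N compaction policy are illustrative assumptions.

```python
# Sketch: agent working memory as an engineered surface.
import json, hashlib

class AgentMemory:
    def __init__(self):
        self.events: list[dict] = []             # plans, actions, test runs
        self.versions: dict[str, str] = {}       # version id -> serialized snapshot

    def record(self, kind: str, detail: str) -> None:
        self.events.append({"kind": kind, "detail": detail})

    def compact(self, keep_last: int = 50) -> None:
        # Crude compaction: keep recent events; real systems would summarize.
        self.events = self.events[-keep_last:]

    def snapshot(self) -> str:
        blob = json.dumps(self.events, sort_keys=True)
        version_id = hashlib.sha256(blob.encode()).hexdigest()[:12]
        self.versions[version_id] = blob         # content-addressed version
        return version_id

mem = AgentMemory()
mem.record("plan", "reconcile ledger for March")
mem.record("action", "queried 412 transactions")
vid = mem.snapshot()
print(f"pinned memory version {vid}")            # replayable for audit
```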