6 Sources
[1]
Why AI agents failed to take over in 2025 - it's 'a story as old as time,' says Deloitte
Companies that succeeded were thoughtful about implementation.

This past year was deemed the year of AI agents by experts and industry leaders alike, with a promise to revolutionize how people work and increase productivity. However, consultancy Deloitte's new Tech Trends report found that these autonomous AI assistants actually failed to take off, identifying both the obstacles preventing widespread adoption and ways to overcome them.

Every year, Deloitte releases its Tech Trends report, which examines the biggest trends of the past year to help enterprise leaders and workers alike know what to look out for in the year ahead. Naturally, the main topic of the 17th report, like that of the past two years' reports, which ZDNET covered in 2023 and 2024, was AI. With investments in AI at an all-time high this year, the report offered valuable insights on how to maximize the ROI of an agentic strategy.

"Getting beyond the headlines to really focus on the so what, and now what, is the service that Tech Trends tries to do," said Bill Briggs, CTO at Deloitte. "The world's going to continue to advance and evolve, and you can't wait, or you will be left behind."

The emergence of agentic AI technology had enterprise leaders excited about the idea of expanding their workforces and upping productivity with AI assistants. As cited in the report, Gartner predicted that, by 2028, 15% of day-to-day work decisions will be made autonomously by agents, up from 0% in 2024, highlighting the momentum behind the technology.

"In a way, AI, especially three years ago, kind of triggered this wave of excitement, enthusiasm from the C-suite and the board... but it was treated as if it was something separate by itself, and the ceiling was pretty low on the impact and return from that -- and now agents are having that same [moment]," said Briggs.
Deloitte's 2025 Emerging Technology Trends study, which surveyed 500 US tech leaders, found that 30% of the surveyed organizations are exploring agentic options, with 38% piloting solutions and only 14% having solutions ready to deploy. The number of organizations actively using the systems in production is even lower, at 11%. Some companies are not yet close to deploying the technology: 42% of organizations report they are still developing their agentic strategy roadmap, and 35% have no strategy in place at all.

This slow rate of deployment is noteworthy because agentic AI has real potential to optimize business operations. However, many companies are not in a position to leverage the technology.

"You have to have the investments in your core systems, enterprise software, legacy systems, SaaS, to have services to consume and be able to actually get any kind of work done, because, at the end of the day, they're [AI agents] still calling the same order systems, pricing systems, finance systems, HR systems, behind the scenes, and most organizations haven't spent to have the hygiene to have them ready to participate," said Briggs.

Obstacles identified by the report included the legacy enterprise systems that many organizations still rely on, which were not designed for agentic AI operations and create bottlenecks in accessing systems, hindering agents' ability to carry out actions and perform tasks. Similarly, the data architectures of the repositories that feed information to AI agents are not organized in a way that lets the agents consume it.
Deloitte cited a 2025 survey it conducted that found that 48% of organizations identified the searchability of data as a challenge to their AI automation strategy, and 47% cited the reusability of data as an obstacle.

Lastly, organizations often fail to create the proper governance and oversight mechanisms for agentic systems to operate autonomously, as traditional IT governance doesn't account for AI agents' ability to make their own decisions.

"You've got this layer on top, which is the orchestration/agent ops. How do we instrument, measure, put controls, and thresholds, so if we got it right, the meter wouldn't be spinning out of control, kind of like we saw with the early days of cloud adoption," said Briggs.

Deloitte identified a pattern among organizations with successful implementations of AI: being thoughtful about how agents are implemented. Business processes were created to fit human needs, not those of AI agents, so the shift to automation means rethinking existing business processes. Rather than just "layer agents onto existing workflows," Deloitte said successful organizations "redesign processes" to take best advantage of AI's agentic capabilities, leaning into agents' ability to tackle a high volume of tasks collaboratively without breaks.

The human element also involves ensuring that employees in the organization are properly trained. According to the report, 93% of AI spend still goes to technology, while only 7% goes to culture change, training, and learning. Briggs said this imbalance is "out of whack, because that's the piece where almost everything is going to fall down."
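Briggs's point about instrumenting agent ops with controls and thresholds, so "the meter wouldn't be spinning out of control," can be made concrete with a small sketch. Everything below (the `AgentOpsMeter` class, the dollar figures) is a hypothetical illustration, not anything Deloitte describes:

```python
from dataclasses import dataclass

@dataclass
class AgentOpsMeter:
    """Meters an agent's spend and refuses work past a budget threshold."""
    budget_usd: float
    spent_usd: float = 0.0
    calls: int = 0

    def can_afford(self, cost_usd: float) -> bool:
        return self.spent_usd + cost_usd <= self.budget_usd

    def record(self, cost_usd: float) -> None:
        self.spent_usd += cost_usd
        self.calls += 1

meter = AgentOpsMeter(budget_usd=50.0)
for task_cost in [12.0, 18.5, 25.0]:     # simulated per-task model/tool costs
    if not meter.can_afford(task_cost):
        break                            # threshold hit: stop before overspending
    meter.record(task_cost)

print(meter.calls, meter.spent_usd)      # 2 30.5
```

The design choice here mirrors the cloud-cost lesson in the quote: the check happens before the spend is committed, not after the bill arrives.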
Yet, he said, the lack of focus on training is a "story as old as time" and a pattern repeated across many of the tech transformations he has witnessed during his 30 years in the industry.

Working with AI agents will also raise new questions: Who will manage these AI agents and their teams? What will an HR team for these agents look like? Similarly, Microsoft's 2025 Work Trend Index Annual Report explored the concept of a Frontier Firm -- an organization with AI agents and humans working in tandem -- and found that humans will eventually lead teams of AI agents, necessitating HR processes for these assistants.

"We've got to rethink most of our HR process in a world where we're going to increasingly have people working with algorithms and agents and robots," said Briggs.
[2]
What the new wave of agentic AI demands from CEOs | Fortune
For decades, technologies have largely been built as tools, extensions of human intent and control that have helped us lift, calculate, store, move, and much more. But those tools, even the most revolutionary ones, have always waited for us to 'use' them, assisting us in doing the work -- whether manufacturing a car, sending an email, or dynamically managing inventory -- rather than doing it on their own.

With recent advances in AI, however, that underlying logic is shifting. "For the very first time, technology is now able to do work," Nvidia CEO Jensen Huang recently observed. "[For example], inside every robotaxi is an invisible AI chauffeur. That chauffeur is doing the work; the tool it uses is the car."

This idea captures the transition underway today. AI is no longer just an instrument for human use: Rather, it is becoming an active operator and orchestrator of "the work" itself, not only capable of predicting and generating, but also planning, acting, and learning. This emerging class -- "agentic" AI -- represents the next wave of artificial intelligence. Agents can coordinate across workflows, make decisions, and adapt with experience. In doing so, they also blur the line between machine and teammate.

For business leaders, that means agentic AI upends the fundamental management calculation around technology deployment. Their job is no longer simply installing smarter tools but guiding organizations where entire portions of the workforce are synthetic, distributed, and continuously evolving. With agents on board, companies must rethink their very makeup: how work is designed, how decisions are made, and how value is created when AI can execute on its own. How organizations redesign themselves around these agentic capabilities will determine whether AI becomes not just a more efficient technology, but a new basis for strategic differentiation altogether.
To better understand how executives are navigating this shift, BCG and MIT Sloan Management Review conducted a global study of more than 2,000 leaders from 100+ countries. The findings show that while organizations are rapidly exploring agentic AI, most enterprises still need to define the overall strategies and operating models needed to integrate AI agents into their daily operations.

Agentic AI's perceived dual identity -- as both machine and teammate -- creates tensions that traditional management frameworks cannot easily resolve. Leaders can't eliminate these tensions altogether; they must instead learn to manage them. Four organizational tensions stand out. The companies furthest ahead aren't resolving these tensions outright. Instead, they're embracing them -- redesigning systems, governance, and roles to turn the frictions into forward momentum. They see agentic AI's complexity as a feature to harness, not a flaw to fix.

For CEOs, the challenge now is figuring out how to lead an organization where technology acts alongside people. Managing this new class of systems requires different frameworks than previous waves of AI. While predictive AI helped organizations analyze faster and better and generative AI helped create faster and better, agentic AI now enables them to operate faster and better, by planning, executing, and improving on its own. That shift upends traditional management approaches, requiring a new playbook for leadership.

Reimagine the work, not just the workflow.

In predictive or generative AI, the leadership task is to insert models into workflows. But agentic AI demands something different: It doesn't just execute a process -- it reimagines it dynamically. Because agents plan, act, and learn iteratively, they can discover new, often better ways of achieving the same goal.
Historically, many work processes were designed to make humans mimic machine-like precision and predictability: Each step was standardized so work could be replicated reliably. Agentic systems, however, invert that logic: Leaders only need to define the inputs and desired outcomes. The work that happens in between those starting and ending points is then organic, a living system that optimizes itself in real time. But most organizations are still treating AI as a layer on top of existing workflows -- in essence, as a tool. To take advantage of agentic AI's true potential, leaders should start by identifying a few high-value, end-to-end processes -- where decision speed, cross-functional coordination, and learning feedback loops matter most -- and redesign them around how humans and agents can learn and act together. The opportunity is to create systems that can both scale predictably and adapt dynamically, not one or the other.

Guide the actions, not just the decisions.

Earlier AI waves required oversight of outputs; agentic AI requires oversight of actions. These systems can act autonomously, but not all actions carry the same risk. That makes the leadership challenge broader than determining decision rights. It's defining how agents operate within an organization: what data they can see, which systems they can trigger, and how and to what extent their choices ripple through an organization. While leaders will need to decide which categories of decisions remain human-only, which can be delegated to agents, and which require collaboration between the two, the overall focus should be on setting boundaries for agent behaviors. Governance can therefore no longer be a static policy; it must flex with context and risk. And just as leaders coach people, they will also need to coach agents -- deciding what information they need, which goals they optimize for, and when to escalate uncertainty to human judgment.
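The boundary-setting described above (which systems an agent can trigger, which decisions stay human-only, when to escalate) might be sketched as a tiered action policy. The action names and risk tiers below are illustrative assumptions, not taken from the study:

```python
from enum import Enum

class Risk(Enum):
    LOW = "auto"        # agent may act on its own
    MEDIUM = "review"   # agent acts, a human reviews afterward
    HIGH = "escalate"   # a human must approve before the agent acts

# Hypothetical action-to-risk mapping a leadership team might define.
POLICY = {
    "read_inventory": Risk.LOW,
    "issue_refund": Risk.MEDIUM,
    "change_pricing": Risk.HIGH,
}

def route(action: str) -> str:
    """Return how an agent's proposed action should be handled."""
    risk = POLICY.get(action, Risk.HIGH)  # unknown actions escalate by default
    return risk.value

print(route("read_inventory"))  # auto
print(route("delete_ledger"))   # escalate
```

Defaulting unknown actions to the highest tier is one simple way to make governance "flex with context and risk" rather than silently permit new behavior.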
Companies that embrace these new approaches to governance will be able to build trust, both internally and with regulators, by making accountability transparent even when machines may be executing.

Rethink structures and talent.

Generative AI changed how individuals work; agentic AI changes how organizations are structured. When agents can coordinate work and information flow, the traditional middle layer built for supervision will shrink. That's not a story of replacement -- it's a redesign. The next generation of leaders will be orchestrators, not overseers: people who can combine business judgment, technical fluency, and ethical awareness to guide hybrid teams of humans and agents. Companies should start planning now for flatter hierarchies, fewer routine roles, and new career paths that reward orchestration and innovation over task execution.

Institutionalize learning for humans and agents.

Like people, agents drift, learn, and -- most critically -- improve with feedback. Every action, interaction and correction makes them more capable. But that improvement depends on people staying engaged, not to control every step, but to help systems learn faster and better. To make that happen, leaders should create continuous learning loops connecting humans and agents. Employees must learn how to work with agents -- how to improve them, critique them, and adapt to their evolving capabilities -- while agents improve through those same interactions, across onboarding, monitoring, retraining, and even "retirement." Organizations that treat this as a shared development process -- where people shape how agents learn and agents elevate how people work -- will see the biggest gains. Managing this loop requires viewing both humans and agents as learners, and creating structures for ongoing training, retraining, and knowledge exchange.
When this process is done right, the organization itself becomes a continuously improving system, one that gets smarter every time its humans and agents interact.

Build for radical adaptability.

Traditional transformation programs were designed for predictability. Agentic AI, however, moves too fast for those to keep up. Leaders need organizations that can adapt continuously -- financially, operationally, and culturally. But adaptability in the agentic era isn't just about keeping up with a faster technology cycle, it's about being ready to evolve as your organization learns alongside its agents. Each new capability can reshape responsibilities, decision flows, and even what "good performance" looks like. Leaders will need to treat adaptability not as crisis management but as an organizing principle. That means budgeting for constant reinvestment, building modular structures that allow functions to reconfigure as agents take on new roles, and cultivating cultures where experimentation is routine rather than exceptional. Agentic AI rewards organizations that can lean into continuous, radical change. This kind of "agent-centricity" means reassigning talent, updating processes, and refreshing governance in response to what the system itself learns. The most resilient companies will see adaptability not as a defensive reflex, but as a defining source of advantage.

For years, the story of AI has been one of automation -- doing the same work faster, cheaper, and with fewer people. But that era is coming to an end. Agentic AI changes the nature of value because it can reshape the organization itself: how it learns, collaborates, and evolves. The next frontier is radical redesign, not repetition. The real opportunity is to set up an enterprise that can reinvent itself continuously, where agentic AI becomes the connective tissue -- linking knowledge, decision-making, and adaptation into one living system.
This is the foundation of what we call the Agentic Enterprise Operating System: a model where human creativity and machine initiative evolve together, dynamically redesigning how the company works. Companies that embrace this shift will outgrow those still chasing efficiency -- they will be the ones defining how value, capability, and competition work in the age of AI.
[3]
Why AI agents still outrun the reality of enterprise ambition - SiliconANGLE
AI agents are fast becoming the defining force behind the enterprise shift from simple automation to true decision intelligence. If the first phase of enterprise artificial intelligence was about automation, the next is clearly about augmentation: enhancing human intelligence in knowledge work.

TheCUBE Research's "Agentic AI Futures Index" shows that shift accelerating. Sixty-two percent of companies now see AI agents as a key part of decision-making, marking a decisive move from automation-focused deployments toward AI-driven decision intelligence. But ambition is outpacing execution. Organizations are investing heavily in capabilities beyond automation, including digital coworkers that collaborate with humans, pursue goals and make judgment-based decisions. As they shift from experimentation to execution, the distance between what leaders believe AI can deliver and what their organizations can operationalize continues to widen. AI agents don't just automate tasks -- they expose weaknesses in governance, data quality and operational readiness.

At the center of that tension sits a question most enterprises still can't answer: "Can we trust these systems to make decisions that matter?" "Trust is emerging as the currency of innovation," said theCUBE Research's Scott Hebner on the Next Frontiers of AI podcast. "No trust, no ROI."

The "Agentic AI Futures Index" provides the first comprehensive benchmark of where enterprises actually stand in this transition, according to Hebner. Conducted in the third quarter of 2025, the research surveyed cross-industry AI business and technology leaders, measuring enterprise readiness across five areas. The Index also draws on insights from theCUBE's coverage of the AI Agent Builder Summit and adds real-world insights from Doozer.ai Inc. These data points reveal not just where organizations are investing, but where they're getting stuck -- and why execution keeps breaking down even as conviction remains high.
AI agent innovation cycles are accelerating faster than most enterprises can track. Leaders need more than anecdotal benchmarks to understand their position and the gaps they can't yet see. The "Agentic AI Futures Index" measures five dimensions, each scored on a 0-5 maturity scale based on responses to 61 questions from 625 qualified AI professionals across 13 industries. Respondents were screened for direct involvement in AI strategy, development or governance, ensuring the data reflects practitioners actively shaping the field rather than observers speculating about it.

"Together, these indices define a strategic maturity curve that helps organizations craft AI strategies, showcase solution leadership, benchmark progress against peers and anticipate the technologies that will shape the next decade of innovation," Hebner explained.

Enterprises aren't lacking conviction. More than 90% of leaders surveyed for the "Agentic AI Futures Index" see digital labor -- and agentic AI more broadly -- as inevitable, and conviction is highest among those with the most direct AI experience. To these leaders, digital coworkers are essential solutions to talent shortages, rising costs and competitive pressure, not optional innovations. This pattern shows up consistently across the Index: leaders express strong conviction in agentic AI, but organizations struggle to translate that vision into real-world execution.

The "Digital Labor Transformation Index," part of the overall "Agentic AI Futures Index," illustrates the point. Its overall maturity score sits at 3.1 -- evidence that organizations have begun to move past experimentation, but not far enough to signal readiness for scaled digital coworkers. Collaboration between human resources and IT is forming, and early strategies are taking shape. Targeted use cases are emerging, but execution continues to fall short.
"Right now, it's still considered a technical implementation, but this is really a business play," said Christophe Bertrand, principal analyst at theCUBE Research. "Without cross-organization collaboration, things can head to the wall very quickly because there's only so much you can ask IT to do for you."

Even with that conviction, operational readiness remains out of reach. Within the "Digital Labor Transformation Index," aspirations score 4.1 on the maturity scale, strategy drops to 3.1, and execution falls to just 1.8. That pattern signals a structural gap between what organizations envision for digital labor and what they can actually deliver today, echoing the broader vision-to-value gap seen across the "Agentic AI Futures Index."

No single failure drives that gap. Many organizations still treat agentic AI and digital coworkers as technical implementations owned by IT rather than as workforce transformation initiatives that demand shared accountability. Strategy scores show that investment and planning are underway, but execution scores reveal that the collaborative structures, governance and change management required to operationalize digital labor at scale remain immature. "These are the signs of initiatives in their infancy," Bertrand said. "We're still putting the foundations in place, and we're years from where we want to be."

The cultural dimension is equally critical. Human resources is increasingly involved in shaping how enterprises integrate AI agents as digital coworkers rather than as replacements for human workers. This shift reframes agentic AI as workforce evolution rather than pure automation. "If you're not at the cadence of speed, culture is the game," said John Furrier, co-founder and co-chief executive officer of SiliconANGLE Media Inc., during the AI Agent Builder Summit. "It's the only game in town, because if you don't have the speed, you don't win.
If you're not in the game, on the field with AI, you're going to lose to somebody else who's going to be faster."

The "Agentic AI Futures Index" highlights the accelerating investment in AI reasoning and decision intelligence. Seventy-three percent of enterprises are making significant or strategic commitments to capabilities that move beyond automation into judgment-based work, and investment maturity scores 3.8 out of 5 -- the highest of any dimension measured. But confidence hasn't kept pace with spending. Only 49% of leaders interviewed express high confidence that AI agents can make trustworthy, accurate decisions. The Index's trust scores, 2.4 out of 5, are the lowest across all measured dimensions.

The gap between investment and confidence is where ROI stalls, according to Paul Chada, chief executive officer of Doozer.ai. "A prediction is not a decision," Chada said. "You can only trust an agent when it shows that it understands the goal, the context and the consequences of its actions."

The tension between belief and doubt plays out in real deployments. Enterprises range from expecting agentic AI to solve everything to deep skepticism that it's ready for production, according to Michael Garas, AI partnerships leader at IBM Corp. "I think the top-of-mind concern for enterprises is how to make sure AI agents have the context that they need," Garas said during the AI Agent Builder Summit.

Generative AI identifies patterns and correlations, but decision intelligence requires systems to understand cause and effect, Chada noted. As enterprises move from prediction into judgment-based work, the trust mechanisms that supported automation no longer apply. Explainability, human collaboration and the ability to evaluate trade-offs become essential to validating an agent's decisions. "Organizations are just wanting to run [AI agents] in parallel right now," Chada said.
"As they see it, make the correct decision repeatedly [and] trust builds naturally; that's when autonomy becomes acceptable." These tensions are why trust is emerging as the real gatekeeper of enterprise adoption. The "Agentic AI Futures Index" revealed that organizations are willing to invest and eager to experiment, but autonomy only advances at the speed of confidence. As AI agents take on work with real consequences, the ability to understand and verify their decisions will shape how quickly enterprises move into the next phase of this shift.
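Chada's parallel-run idea, autonomy granted only after the agent repeatedly matches the correct decision, could be sketched as an agreement tracker over shadow runs. The window size and agreement threshold below are arbitrary assumptions for illustration:

```python
def autonomy_ready(agreements: list[bool], window: int = 20,
                   threshold: float = 0.95) -> bool:
    """The agent earns autonomy only after matching the human/correct
    decision in at least `threshold` of the last `window` shadow runs."""
    if len(agreements) < window:
        return False                 # not enough parallel history yet
    recent = agreements[-window:]
    return sum(recent) / window >= threshold

# 30 simulated shadow runs with a single disagreement.
history = [True] * 19 + [False] + [True] * 10
print(autonomy_ready(history))       # True: 19/20 recent agreements
print(autonomy_ready([True] * 10))   # False: history too short
```

Using a sliding window rather than lifetime accuracy means autonomy can also be revoked if recent disagreements pile up, which matches the "trust builds naturally" framing: confidence is earned continuously, not once.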
[4]
The race to deploy an AI workforce faces one important trust gap: What happens when an agent goes rogue? | Fortune
To err is human; to forgive, divine. But when it comes to autonomous AI "agents" that are taking on tasks previously handled by humans, what's the margin for error?

At Fortune's recent Brainstorm AI event in San Francisco, an expert roundtable grappled with that question as insiders shared how their companies are approaching security and governance -- an issue that is leapfrogging even more practical challenges such as data and compute power. Companies are in an arms race to parachute AI agents into their workflows to tackle tasks autonomously and with little human supervision. But many are facing a fundamental paradox that is slowing adoption to a crawl: moving fast requires trust, and yet building trust takes a lot of time.

Dev Rishi, general manager for AI at Rubrik, joined the security company last summer following its acquisition of his deep-learning AI startup Predibase. He then spent four months meeting with executives from 180 companies, and he used those insights to divide agentic AI adoption into four phases, he told the Brainstorm AI audience. (To level set, agentic adoption refers to businesses implementing AI systems that work autonomously, rather than responding to prompts.)

The first phase is early experimentation, where companies prototype their agents and map the goals they think could be integrated into their workflows. The second phase, said Rishi, is the trickiest: shifting agents from prototypes into formal production work. The third phase involves scaling those autonomous agents across the entire company. The fourth and final stage -- which no one Rishi spoke with had achieved -- is autonomous AI.

Roughly half of the 180 companies were in the experimentation and prototyping phase, Rishi found, while 25% were hard at work formalizing their prototypes.
Another 13% were scaling, and the remaining 12% hadn't started any AI projects. However, Rishi projects a dramatic change ahead: in the next two years, those in the 50% bucket anticipate moving into phase two, according to their roadmaps. "I think we're going to see a lot of adoption very quickly," Rishi told the audience.

However, there's a major risk holding companies back from going "fast and hard" when it comes to speeding up the implementation of AI agents in the workforce, he noted. That risk -- and the No. 1 blocker to broader deployment of agents -- is security and governance, he said. Because of that, companies are struggling to shift agents from knowledge retrieval to action-oriented work. "Our focus actually is to accelerate the AI transformation," said Rishi. "I think the number one risk factor, the number one bottleneck to that, is risk [itself]."

Kathleen Peters, chief innovation officer at Experian, who leads product strategy, said the slowdown comes from not fully understanding the risks when AI agents overstep the guardrails companies have put in place, and the failsafes needed when that happens. "If something goes wrong, if there's a hallucination, if there's a power outage, what can we fall back to?" she asked. "It's one of those things where some executives, depending on the industry, are wanting to understand 'How do we feel safe?'" Figuring out that piece will be different for every company and is likely to be particularly thorny in highly regulated industries, she noted.

Chandhu Nair, senior vice president of data, AI, and innovation at home improvement retailer Lowe's, noted that it's "fairly easy" to build agents, but people don't understand what they are: Are they a digital employee? A workforce? How will they be incorporated into the organizational fabric? "It's almost like hiring a whole bunch of people without an HR function," said Nair.
"So we have a lot of agents, with no kind of ways to properly map them, and that's been the focus." The company has been working through some of these questions, including who might be responsible if something goes wrong. "It's hard to trace that back," said Nair.

Experian's Peters predicted that the next few years will see many of those questions hashed out in public, even as conversations take place simultaneously behind closed doors in boardrooms and among senior compliance and strategy committees. "I actually think something bad is going to happen," Peters said. "There are going to be breaches. There are going to be agents that go rogue in unexpected ways. And those are going to make for very interesting headlines in the news." Big blowups will generate a lot of attention, Peters continued, and reputational risk will be on the line. That will force uncomfortable conversations about where liability resides for software and agents, and it will likely add up to increased regulation, she said. "I think that's going to be part of our societal overall change management in thinking about these new ways of working," Peters said.

Still, there are concrete examples of how AI can benefit companies when it is implemented in ways that resonate with employees and customers. Nair said Lowe's has seen strong adoption and "tangible" return on investment from the AI it has embedded into the company's operations thus far. For instance, each of its 250,000 store associates has an agent companion with extensive product knowledge across its 100,000-square-foot stores, which sell everything from electrical equipment to paints to plumbing supplies. Many of the newer entrants to the Lowe's workforce aren't tradespeople, said Nair, and the agent companions have become the "fastest-adopted technology" so far. "It was important to get the use cases right that really resonate back with the customer," he said.
In terms of driving change management in stores, "if the product is good and can add value, the adoption just goes through the roof." But for those who work at headquarters, the change management techniques have to be different, he added, which piles on the complexity. And many enterprises are stuck at another early-stage question, which is whether they should build their own agents or rely on the AI capabilities developed by major software vendors. Rakesh Jain, executive director for cloud and AI engineering at healthcare system Mass General Brigham, said his organization is taking a wait-and-see approach. With major platforms like Salesforce, Workday, and ServiceNow building their own agents, it could create redundancies if his organization builds its own agents at the same time. "If there are gaps, then we want to build our own agents," said Jain. "Otherwise, we would rely on buying the agents that the product vendors are building." In healthcare, Jain said there's a critical need for human oversight given the high stakes. "The patient complexity cannot be determined through algorithms," he said. "There has to be a human involved in it." In his experience, agents can accelerate decision making, but humans have to make the final judgment, with doctors validating everything before any action is taken. Still, Jain also sees enormous potential upside as the technology matures. In radiology, for example, an agent trained on the expertise of multiple doctors could catch tumors in dense tissue that a single radiologist might miss. But even with agents trained on multiple doctors, "you still have to have a human judgment in there," said Jain. And the threat of overreach by an agent that is supposed to be a trusted entity is ever present. He compared a rogue agent to an autoimmune disease, which is one of the most difficult conditions for doctors to diagnose and treat because the threat is internal. 
If an agent inside a system "becomes corrupt," he said, "it's going to cause massive damages which people have not been able to really quantify."

Despite the open questions and looming challenges, Rishi said there is a path forward. He identified two requirements for building trust in agents: first, companies need systems that provide confidence that agents are operating within policy guardrails; second, they need clear policies and procedures, with teeth, for when things inevitably go wrong. Nair added three factors for building trust and moving forward smartly: identity and accountability, knowing who the agent is; consistency, evaluating the quality of each agent's output; and a post-mortem trail that can explain why and when mistakes occurred. "Systems can make mistakes, just like humans can as well," said Nair. "But to be able to explain and recover is equally important."
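Nair's three trust factors map naturally onto an audit layer. The sketch below is purely illustrative, with invented class and field names rather than any vendor's API: every action is recorded with the agent's identity and outcome, so output quality can be scored per agent and a post-mortem trail of failures reconstructed.

```python
# Illustrative audit layer for the three trust factors described above:
# identity and accountability, output consistency, and a post-mortem trail.
# All names here are invented for the sketch.

import time
from dataclasses import dataclass, field

@dataclass
class ActionRecord:
    agent_id: str      # identity and accountability: who acted
    action: str
    ok: bool           # did the action pass its quality check?
    timestamp: float = field(default_factory=time.time)

class AuditTrail:
    def __init__(self):
        self.records: list[ActionRecord] = []

    def log(self, agent_id: str, action: str, ok: bool) -> None:
        self.records.append(ActionRecord(agent_id, action, ok))

    def quality(self, agent_id: str) -> float:
        """Consistency: share of this agent's actions that passed checks."""
        mine = [r for r in self.records if r.agent_id == agent_id]
        return sum(r.ok for r in mine) / len(mine) if mine else 0.0

    def post_mortem(self, agent_id: str) -> list[ActionRecord]:
        """Trail of failures: when and where this agent went wrong."""
        return [r for r in self.records if r.agent_id == agent_id and not r.ok]

trail = AuditTrail()
trail.log("pricing-agent-7", "update_sku_price", ok=True)
trail.log("pricing-agent-7", "bulk_discount", ok=False)
print(trail.quality("pricing-agent-7"))           # 0.5
print(len(trail.post_mortem("pricing-agent-7")))  # 1
```

The point of the design is that explainability and recovery, the properties Nair highlights, fall out of the log for free: the failure trail is just a filtered view of the same records.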
[5]
The infinite digital workforce and the road from promise to practice
Autonomous agents offer a future of affordable digital workers. Enterprises are exploring these AI colleagues for tasks like ticket resolution and content drafting. While promising, successful adoption hinges on careful implementation. Teams are building safeguards and integrating humans into workflows. This approach transforms unruly interns into reliable staff, driving efficiency and innovation. Autonomous agents promise an abundant supply of low-cost digital labour. Turning that promise into reality takes more than a clever prompt.

The promise

Enterprises stand on the cusp of a familiar dream from the RPA era. This time it is agents, powered by large language models and equipped with tools that let them act rather than merely interact. Many executives picture a tireless digital workforce of robots resolving tickets, reconciling ledgers and producing first drafts while human colleagues focus on clients and strategy. Will agents live up to that image, and if so, when?

The appeal is clear. Agents can operate around the clock at near-zero marginal cost. They do not tire, resign or switch employers. Correct one instance and, through shared prompts or fine-tuning, every sibling improves. Knowledge propagates quickly, lifting quality in minutes rather than months. Routine work benefits most: machines deliver steady competence while people apply judgement, empathy and imagination.

Interest is rising. Early pilots show modest but real gains, and surveys abound. EY's recent AIdea of India survey suggests that 24% of Indian enterprises are adopting agentic AI, with most Indian knowledge workers expressing positive sentiment about working alongside AI colleagues. Businesses with seasonal spikes are particularly ripe candidates: retailers during festivals, auditors at quarter-end and analysts in earnings season can scale capacity without burnout. Customer support, bookkeeping, data entry and first-draft content are already shifting to digital labour.
What "Agentic" means

An agent is software that acts toward a goal with limited supervision. Given a high-level brief, it plans steps, chooses tools, takes actions, observes outcomes and adapts. It can call external APIs, query databases, search, send emails or run code to achieve an objective. Autonomy, access to a useful toolkit and a budget that constrains time or spend are the essentials. Ask for a campaign plan and an agent can research audiences, propose messages, list channels and draft copy, adjusting as it learns.

Why now

Reasoning models alone are brilliant conversationalists that can think, reason and plan but do not act. Combine them with orchestration frameworks and the picture changes. Techniques that blend reasoning and action let models plan, execute, observe and decide in a loop. Guardrails set boundaries and budgets so agents do not wander. Early open-source experiments such as AutoGPT and BabyAGI mapped the mechanics: decompose goals into tasks, perform them in sequence, stop when done or out of budget. Recent gains in compute, models and infrastructure have turned a lab trick into a serviceable tool.

Where it breaks

A gap remains between demos and dependable production. So how does one make it work? Pragmatic teams are building guardrails and discipline, and the approach is clear: treat agents as fallible colleagues who require oversight, not as oracles.

People & Process matter more than Tools & Tech

Dropping an agent into an unreformed process rarely works. Redesign workflows to specify what agents do, what humans decide and how hand-offs occur. Train employees to supervise, to review and to exercise judgement rather than to retype. Manage change deliberately: clear communication, realistic expectations and incentives that reward collaboration with machines. The most successful organisations start small, measure outcomes and then scale, replacing enthusiasm with disciplined adoption.
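The plan-execute-observe loop with a budget, as described above, can be sketched in a few lines. This is an illustrative sketch under stated assumptions, not any framework's real API: `fake_model` stands in for an LLM planner, the tool registry and the budget guardrail are invented names, and a production system would call an actual model.

```python
# Minimal sketch of the plan-execute-observe loop described above.
# `fake_model` stands in for an LLM call; names are illustrative only.

from dataclasses import dataclass

@dataclass
class Budget:
    """Guardrail: cap how many steps the agent may take."""
    max_steps: int
    steps_used: int = 0

    def exhausted(self) -> bool:
        return self.steps_used >= self.max_steps

# Tool registry: the only actions the agent is allowed to invoke.
TOOLS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

def fake_model(goal, history):
    """Stand-in planner: decomposes 'compute (2+3)*4' into tool calls."""
    plan = [("add", (2, 3)), ("multiply", (5, 4)), ("done", None)]
    return plan[len(history)]

def run_agent(goal, budget):
    history = []
    while not budget.exhausted():
        action, args = fake_model(goal, history)    # plan the next step
        if action == "done":
            return history[-1][1] if history else None
        if action not in TOOLS:                     # guardrail: unknown tool
            raise ValueError(f"tool {action!r} not permitted")
        result = TOOLS[action](*args)               # act
        history.append((action, result))            # observe
        budget.steps_used += 1
    return None  # budget exhausted before the goal was met

print(run_agent("compute (2+3)*4", Budget(max_steps=5)))  # 20
```

The shape matches the AutoGPT/BabyAGI mechanics the article names: decompose the goal, perform tasks in sequence, and stop when done or out of budget; the tool allow-list and step cap are the guardrails that keep the loop from wandering.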
A Realistic Outlook

The infinite digital workforce is not here yet. What is here is a useful apprentice that can take on parts of real work with encouraging consistency. Invest in the plumbing, govern the risks, redesign the work and teach people how to partner. Do that and today's unruly interns begin to look like tomorrow's steady staff.

The author is Partner, Technology Consulting, EY India
[6]
Agentic AI in Business: From Workflows to Workforces: By Ankit Patel
For years, businesses used AI mainly to automate repetitive tasks. It handled things like sorting emails, organizing data, or suggesting responses in customer service chats. These systems were helpful, but they were limited: they followed rules, waited for instructions, and rarely acted on their own. Today, that is changing quickly as AI adoption in business becomes a priority for companies looking to work faster and smarter.

A new wave of technology, often called agentic AI, is pushing businesses into a new era. Instead of simply automating steps in a process, AI is beginning to take on roles, make decisions, and complete tasks from start to finish. This shift is transforming how companies operate and how teams work every day.

Traditional AI completes single actions. It answers a question. It analyzes a document. It predicts a number. These actions are useful, but they do not resemble how humans work. Agentic AI, on the other hand, behaves more like a digital worker. It can plan, break down goals, move between tools, and adapt when things change. It understands context, not just commands. This means companies are no longer automating only "workflows"; they are building digital workforces.

Businesses today deal with massive amounts of information, scattered systems, and time-consuming manual tasks. Most companies already have workflow tools, but workflows fall apart when something unexpected happens; they're rigid. Agentic AI is different because it can handle exceptions and adapt on the fly. This flexibility makes it ideal for modern business environments, especially those with dynamic processes.

Many companies are now moving from experimenting with AI to actually using it across departments. Most organizations began with small projects, like improving customer support or automating reports, but are now exploring deeper transformation.
As the technology becomes easier to deploy, more teams are learning how agentic systems can support daily operations and handle higher-value work. In large organizations, the value of agentic AI becomes even clearer. These digital agents can log into apps, gather information, make decisions, and report results with minimal human oversight. This makes them especially useful in environments with complex systems, multiple teams, and heavy data workloads. As a result, interest in AI agents in enterprise environments is growing rapidly.

In customer service, AI agents can handle queries end-to-end. Instead of offering simple replies, they can understand the issue, check the customer's account, update records, and provide solutions. They work 24/7 and stay consistent.

In operations, agentic AI can monitor tasks, assign actions, update systems, send alerts, and ensure processes stay on schedule. It can handle tasks that used to require several employees.

In sales, AI can research leads, create outreach messages, update CRM records, and track campaign performance. These agents help sales teams close deals faster by reducing administrative work.

In finance, instead of simply scanning invoices, agentic AI can match them with purchase orders, flag exceptions, follow up with vendors, and generate end-of-month summaries.

In HR, AI can screen resumes, schedule interviews, prepare onboarding documents, and help employees find resources. It increases speed and improves internal efficiency.

In the past, companies built workflows that followed strict rules. These systems needed clear instructions and always depended on people to handle exceptions. Now, agentic AI brings a completely different approach. A digital agent can act like an employee, using tools, solving problems, and improving over time. It becomes part of the workforce instead of just a workflow component. This shift means companies can scale operations faster without hiring huge teams.
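The invoice example above is concrete enough to sketch. This is a toy illustration under invented field names (`po_number`, `amount`) and an assumed 1% tolerance, not a real finance integration: the agent pairs each invoice with its purchase order and flags the rest as exceptions for human follow-up.

```python
# Toy sketch of the invoice-matching task described above: pair invoices
# with purchase orders and flag mismatches as exceptions for a human.
# Field names and the 1% tolerance are illustrative assumptions.

def match_invoices(invoices, purchase_orders, tolerance=0.01):
    pos = {po["po_number"]: po for po in purchase_orders}
    matched, exceptions = [], []
    for inv in invoices:
        po = pos.get(inv["po_number"])
        if po is None:
            exceptions.append((inv, "no matching purchase order"))
        elif abs(inv["amount"] - po["amount"]) > tolerance * po["amount"]:
            exceptions.append((inv, "amount mismatch"))
        else:
            matched.append(inv)
    return matched, exceptions

invoices = [
    {"po_number": "PO-1", "amount": 100.0},  # exact match
    {"po_number": "PO-2", "amount": 250.0},  # amount mismatch
    {"po_number": "PO-9", "amount": 40.0},   # no purchase order
]
purchase_orders = [
    {"po_number": "PO-1", "amount": 100.0},
    {"po_number": "PO-2", "amount": 200.0},
]
matched, exceptions = match_invoices(invoices, purchase_orders)
print(len(matched), len(exceptions))  # 1 2
```

The division of labour mirrors the article's point: the deterministic matching is automated, while the exceptions list is exactly the hand-off to human judgement.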
It also means human workers can focus more on creativity, innovation, and relationship-driven tasks.

Agentic AI brings many advantages, but it also comes with important considerations. In the coming years, almost every company will use agentic AI in some form. As the technology improves, these systems will become more capable, more reliable, and more central to daily business operations. Eventually, businesses may run with hybrid teams, part human and part AI, working together on equal footing. Humans will focus on strategy, creativity, and relationships. AI will handle execution, coordination, and routine decision-making. This evolution will reshape how companies grow and compete.

Agentic AI marks a major shift in how businesses operate. It goes beyond simple automation and introduces digital workers that can plan, act, and learn. Companies are moving from rigid workflows to flexible, intelligent workforces powered by AI. As adoption increases, businesses of all sizes will benefit from faster processes, smarter decisions, and more efficient operations. The businesses that embrace this shift early will gain the greatest advantage, because the future of work is not just automated, it is agentic.
Agentic AI was supposed to revolutionize enterprise work in 2025, but deployment remains minimal. Deloitte's latest report reveals only 11% of organizations actively use AI agents in production, while trust, governance, and legacy systems create roadblocks. The gap between executive conviction and operational readiness continues to widen.
The year 2025 was supposed to mark the breakthrough moment for AI agents, with industry experts predicting autonomous AI agents would transform how companies work and boost productivity across sectors. Instead, Deloitte's latest Tech Trends report reveals a sobering reality: these systems have largely failed to take off [1]. Despite massive investments and executive enthusiasm, only 11% of surveyed organizations are actively using agentic AI in production environments, while 42% are still developing their strategy roadmap and 35% have no strategy in place at all [1].
The disconnect between ambition and execution is stark. Deloitte's 2025 Emerging Technology Trends study found that while 30% of organizations are exploring agentic options and 38% are piloting solutions, only 14% have solutions ready to deploy [1]. This sluggish AI agent deployment stands in sharp contrast to Gartner's prediction that by 2028, 15% of day-to-day work decisions will be made autonomously by agents, up from 0% in 2024 [1].

The number one barrier preventing broader AI agent deployment isn't technical capability; it's trust and governance. Dev Rishi, general manager for AI at Rubrik, identified security and governance as the primary blocker preventing companies from shifting agents from knowledge retrieval to action-oriented tasks after meeting with executives from 180 companies [4]. Organizations struggle with fundamental questions: What happens when an agent goes rogue? Who bears responsibility when something fails? How do companies build accountability into systems that make autonomous decisions [4]?
Kathleen Peters, chief innovation officer at Experian, warned that the industry will likely see agents "go rogue in unexpected ways," creating reputational risk and forcing uncomfortable conversations about liability [4]. Chandhu Nair from Lowe's captured the organizational challenge succinctly: "It's almost like hiring a whole bunch of people without an HR function" [4]. TheCUBE Research's Scott Hebner emphasized this dynamic: "Trust is emerging as the currency of innovation. No trust, no ROI" [3].

Beyond trust, legacy systems present significant obstacles to enterprise adoption. Bill Briggs, CTO at Deloitte, explained that organizations need proper investments in core systems, enterprise software, and SaaS platforms before AI agents can function effectively [1]. These legacy systems weren't designed for agentic AI operations and create bottlenecks in accessing systems, hindering agents' ability to perform tasks [1].

Data quality and architecture compound the problem. A 2025 Deloitte survey found that 48% of organizations identified data searchability as a challenge to their AI automation strategy, while 47% cited data reusability as an obstacle [1]. The data repositories feeding information to AI agents aren't organized in ways that enable agents to consume it effectively, creating friction in workflows [1].

The organizations achieving success with agentic AI aren't simply layering agents onto existing workflows; they're fundamentally redesigning business processes. As Nvidia CEO Jensen Huang observed: "For the very first time, technology is now able to do work" [2]. This shift requires CEOs and leaders to rethink how work is designed when AI can execute autonomously rather than just assist [2].

Historically, business processes were created to fit human needs, not those of AI agents. The transition to automation means defining inputs and desired outcomes while letting the AI workforce handle what happens in between [2]. BCG and MIT Sloan Management Review's study of over 2,000 leaders from 100+ countries found that most enterprises still need to define the strategies and operating models needed to integrate agents into daily operations [2].

The transition to an AI workforce demands new approaches to human oversight and collaboration. TheCUBE Research's "Agentic AI Futures Index" shows 62% of companies now see AI agents as key to decision-making, marking a shift from automation-focused deployments toward AI-driven decision intelligence [3]. Yet enterprise readiness lags behind ambition, with the Digital Labor Transformation Index showing aspirations scoring 4.1 on a maturity scale while execution falls to just 1.8 [3].
Christophe Bertrand, principal analyst at theCUBE Research, noted that many organizations still treat agentic AI as a technical implementation rather than a business transformation requiring cross-organizational collaboration [3]. Successful implementation requires treating autonomous AI agents as fallible colleagues who need supervision, establishing clear guardrails, defining what agents do versus what humans decide, and managing hand-offs deliberately [5].

Despite current challenges, momentum is building. Rishi projects that roughly half of the 180 companies currently in experimentation and prototyping phases anticipate moving into formal production within two years [4]. EY's AIdea of India survey suggests 24% of Indian enterprises are adopting agentic AI, with most knowledge workers expressing positive sentiment about working alongside AI colleagues [5].

The path forward requires discipline over enthusiasm. Organizations must build proper governance frameworks, invest in data architecture, modernize legacy systems, and redesign workflows around human-AI collaboration. As Briggs warned: "The world's going to continue to advance and evolve, and you can't wait, or you will be left behind" [1]. The companies that solve for trust, security, and risk management while maintaining human accountability will define the next wave of enterprise AI transformation.

Summarized by Navi