2 Sources
[1]
Workday Rising 25 - Workday rejects the AI jobs apocalypse, and makes their agentic SaaS case
The impact of AI on jobs is a serious issue, worthy of vigorous discussion. But frantic 'AI jobs apocalypse' narratives do a disservice to that same discussion. No question, AI has impacted jobs in creative and service professions, as well as junior-level roles. But when enterprise vendors double down on so-called 'autonomous enterprises,' you get the impression that in a handful of years, we might need to leave a few humans to wipe the dust off of GPUs, and that's about it. That's not a great vibe for AI adoption.

At Workday Rising, we heard a decidedly different tone, starting with CEO Carl Eschenbach's opening keynote remarks. As I wrote:

In yesterday's opening keynote, Workday CEO Carl Eschenbach spoke directly to these issues. He acknowledged we have a "trust issue" with employees about AI. Though Eschenbach did talk up productivity gains, driven by a "new Workday" with AI at the center, he made clear that "we need to change the narrative," and that we need to "bring our employees along with us on this journey." He made clear that AI, in Workday's vision, "is not a threat."

In subsequent remarks, Eschenbach went further. As he told industry analysts during an executive Q&A panel:

I just don't share the same sense that AI is going to displace all human workers. Some of our peers are out there talking about 10, 20, 30 percent of the work going away because of AI. I just don't see that happening. It happens in every tectonic shift. People move on and do different roles, different tasks, different jobs. They become more strategic, and we're seeing that happen here. The World Economic Forum believes there will be 11 million jobs, along with 11 million AI jobs created in the next five years, and 78 million net new jobs globally will be added as well. So people aren't going away. We are the fabric of what we represent in the workplace. We leverage technology to become more productive. So I'm a big fan of AI and people working together.
That's why we say 'AI-powered, human-centric in the middle, ready for the future' - and that's hopefully what we've highlighted this week.

Workday's take on humans-versus-agents derives from hands-on engineering lessons. Bolstered by executive AI talent from the likes of Google, Workday pushes back on the so-called "death of SaaS" in a technically-informed way. As you might expect, these topics were very much on the minds of the analyst contingent, who are not immune to hype cycles either. Things came to a head when an analyst asked Workday's exec panel if they foresee a day "where agents are actually going to fulfill the whole end-to-end process and communicate with that back-end data, and the process layer itself disappears" - therefore ushering in the "death of software."

Gerrit Kazmaier, President, Product and Technology at Workday - and one of those Google exec recruits - pushed back. As he explained:

I know it makes for a killer headline when someone says, 'SaaS is dead,' or 'RIP SaaS.'

How far could this go? Kazmaier:

[With AI], you have a process running, and now you have the opportunity, for the first time, to take human reasoning and have a computational reasoning step running to take the next step. That's the automation opportunity that you actually see, when AI is being applied successfully. There is this frontier idea, which is really out there, which is saying that, basically, the model just figures out everything. There is no process anymore, right?

But Kazmaier's AI experience leads him to a different take:

Models are very poor still at instruction following, and it remains to be seen if they ever go past these barriers. It's evident that models are really good at single-task completions - actually only the specific category of tasks, everything which can be trained from public domain knowledge.
And it's pretty clear that, when you build sophisticated systems, regardless of what they are, it's actually very hard systems engineering to make models do the right thing as a part of a process. This is what Peter and I worked on at Google, basically engineering systems that make AI work in an enterprise context by providing the right information, by telling them what to do now - what's the next step to do? What's the business objective? Are we meeting this objective, having many smaller models around it, basically to figure out: did the model do something correctly? If it did something wrong, how do we correct this error now?

[Author's note: 'Peter' is Peter Bailis, Workday CTO, who also previously worked at Google].

However, Kazmaier sees a major fork in the road for SaaS vendors:

The bottom line to your question is: there's going to be two sets of enterprise SaaS vendors. One category is the ones who are just taking AI and running this over the legacy APIs to make the system look smarter. Usually, you see this in the form of a side panel, right? We have this great new system called 'so and so,' right? It pops up on the side, and it tries to automate with a new UI. Those are the ones who will probably never even get close to the value that AI can deliver - and those are the ones who are dead in the future.

So what does the winning type of SaaS category look like? Kazmaier:

The other category is the one who is saying, 'Let's re-engineer the system for AI' - what you heard this week. 'How do we create a uniform data foundation so that all of our AI models can be trained and powered from it?' 'How do we incorporate AI into our business process engine so that it can call AI agents, basically to take the human cost and compress it down into a software cost?' And: let's rethink - which I think is the most important part - how people interface with software in the future? You know Sana, right? What is the work experience of the future?
It's contextual and intelligent, and it helps people get work done quicker. And I think we are the one company in the second category who has been bold enough to re-engineer the core, build on it, and frankly, make some big bets.

[Author's note: see Stuart Lauchlan's piece on Workday's Sana acquisition, Workday Rising 25 - will Workday's $1.1 billion acquisition of Sana Labs really be L&D's "iOS for enterprise" moment?]

Those "big bets" include an unprecedented flurry of acquisitions, including Sana. For Workday, enterprise AI is about providing AI with better context than a consumer LLM can pull off. And yes, much of that context is thanks to Workday's SaaS platform. In the inevitable second round of "Is SaaS dead?" questions, Kazmaier explained why context matters:

The key point is that all of these agents, they need to have a software interface for which they get exposed. Chat, which is our primary way of consuming Gemini, ChatGPT, or whatever your favorite model is, that just works for a limited number of tasks. It's not really conducive for the enterprise context, right? We do have workflows, like we talked about. We do have states. We have controls, like tables, where people need to check, like Peter said, 'This is the payroll - do we want to run it like that?' So there is an interplay between app and agent.

Yep - that's why Workday is acquiring Sana. Workday intends to transform the user experience in the agentic layer:

This is going to converge in these new experiences like Sana. What it means, from a vendor perspective, is basically giving our applications either agents to automate the workflows, or providing interfaces, so that customers can basically connect their own agents. What we showed with Microsoft yesterday: build something in Microsoft Copilot, interface with Workday for the onboarding experience, and run it through Microsoft's third-party ecosystem.

SaaS and agents pitted against each other? That's not how Workday sees it.
Kazmaier adds:

But I personally think that whole AI-versus-the-core-application narrative - that's a falsehood, right? Because in the end, applications are running processes. Processes provide the underpinning for agents to run on. If you design them well together, they're like a flywheel. They make each other better. They're not trading each other off.

I've critiqued the now-infamous MIT study finding that "95 percent of AI projects don't make it out of the pilot phase." It is not the only study to find that generative AI projects really haven't given companies a productivity boost. But here's what the media frenzy missed: the report's authors are still bullish on AI, because of the five percent that are succeeding - and, more importantly, how they are doing it.

The studies that cast a grim view of generative AI productivity are based on out-of-the-box use of LLMs. Whereas vendors like Workday are putting LLMs into what I call a more "constrained" architecture, using everything from tool calls to smaller models to improve "context" - and output relevance. In an upcoming video with the Workday Evisort team, I talk with Evisort about how they provide superior context with smaller models - models that include only the customers' documents.

Along those lines, we're starting to see some results. These results look much more compelling than "AI helped me write an email." As for documenting the customer benefits of this type of contextual AI architecture at scale, it's early days, but last week in San Francisco, Workday shared a few of their own. One of the best examples came via Rob Enslin, who told analysts:

As an example, our contract negotiation agent, we know it's saving us 45,000 hours, right? We know that we can actually now take that across our master services agreements, our legal agreements, our services, statements of work - and just get super-efficient at a very, very core process, right, and actually take the lawyers out of the discussion.
And that's made our legal organization effective and focusing on things that matter...

But hold up: if Eschenbach's AI-for-growth, not-just-productivity message is valid, then shouldn't the legal team be having outsized impact? As Enslin told me:

So if you look at the things that matter, going into new countries, setting up legal systems in new countries, going into governments, building out government legal processes that are different - now, the team's focused on that kind of stuff, versus negotiations. Before, we would have to go find and hire somebody, and get a new person added and so on. We're now capable of actually doing that with the existing teams that we have. And that also allowed legal folks to start focusing on things like policy. Policy for AI - we've still got to figure out how we're going to drive that. It's not going to go away, right?

During my interview with Shane Luke, Workday's VP Product and Engineering, Head of AI & Machine Learning, I asked him: what aspects of this context approach are most compelling to him? Reasoning engines? RAG with knowledge graphs? Agents using tool verifiers? Or perhaps the impact of smaller models? Luke singled out the orchestration of smaller models:

I do think that multi-step model systems are really exciting. The typical case for somebody who's interacting with LLMs today is that they're interacting with really large scale LLMs online, right? So they're using Grok, or they're using ChatGPT, or they're using Gemini or Claude or whatever, and those are 500-billion-plus parameter models in that model family... Those are great, but they're not something that you can actually reasonably run in an enterprise setting, and stay tenanted, right? In certain cases, that's totally fine. If all you're doing is prompting, that might be totally fine. But in a lot of cases, you might need something that does more than that. It might actually need to be tuned in some fashion.
And so then you can get the benefit of these large scale models - probably interacting with them through an API from a provider - but then also get the benefit of a really specialized model that's maybe been either taught by a larger model, or tuned in some other way. That's pretty exciting. To me, that gets down to where you have a generalist at the top, and then you have the specialist model that's actually doing the particular task.

Small models are key for enterprise tasks, which have a degree of variability:

One thing that I think is really important - and this is probably under-discussed in enterprise - is that the task, when we talk about a task, and we give it the same name from one company to another... Yes, at a high level, it might be the same task: how you do a change job, or how you sign up for benefits, or how you do whatever task. But the tasks actually vary a lot, right? That's part of the Workday product. You configure it, set it up, and run it the way you need to do it, the way you want it for your particular business. And a lot of those variations are kind of hidden, and so being able to have these smaller models actually tuned down at a tenant level gives you the potential for accuracy that's much higher than a highly generalized model that doesn't know all those nuances and differences.

That gets back to my Evisort example: train a small model on a few thousand company-specific documents - and if the user wants to add new features, quickly spin up a new model on the fly. That's not viable with "frontier" models. Luke anticipates enterprise wins here:

The potential of having that, the way you could maintain tenancy while doing that, the way you could also get higher accuracy for a particular business, and have these things tuned through business, I think that's exciting. And it's not talked about that much, but I actually think that that's where the wins are going to be in enterprise.
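To make Luke's multi-step idea concrete, here is a minimal, purely illustrative sketch of the pattern he describes - a generalist model planning steps, tenant-tuned specialist models executing them, and smaller verifier models checking each result. This is not Workday's actual implementation; every function here (generalist_plan, specialist_run, verifier_ok) is a hypothetical stub standing in for what would be real model calls.

```python
# Illustrative sketch only. All "model" calls below are stand-in stubs;
# in a real system they would be API calls to a large generalist model
# and to small, per-tenant fine-tuned specialist models.
from dataclasses import dataclass

@dataclass
class Task:
    tenant_id: str   # which customer tenant this task belongs to
    kind: str        # e.g. "change_job", "benefits_enrollment"
    payload: str     # the request itself

def generalist_plan(task: Task) -> list[str]:
    """Stub: a large general model decomposes the request into steps."""
    return [f"extract_fields:{task.kind}", f"apply_tenant_rules:{task.kind}"]

def specialist_run(tenant_id: str, step: str, payload: str) -> str:
    """Stub: a small model tuned on this tenant's own data executes one step."""
    return f"[{tenant_id}] completed {step} on '{payload}'"

def verifier_ok(result: str) -> bool:
    """Stub: a separate small model checks a step's output before it is accepted."""
    return "completed" in result

def orchestrate(task: Task) -> list[str]:
    """Plan with the generalist, execute with specialists, verify each step."""
    results = []
    for step in generalist_plan(task):
        out = specialist_run(task.tenant_id, step, task.payload)
        if not verifier_ok(out):
            raise RuntimeError(f"step failed verification: {step}")
        results.append(out)
    return results

if __name__ == "__main__":
    task = Task(tenant_id="acme", kind="change_job", payload="promote J. Doe")
    for line in orchestrate(task):
        print(line)
```

The structure, not the stubs, is the point: planning, tenant-specific execution, and verification are separate model calls, which is what allows the small models to be swapped or retrained per tenant without touching the rest of the pipeline.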
[Author's note: Luke's comments on 'maintaining tenancy' are significant, because this is a way for customers to adapt/tune/retrain smaller models with their own private data, in their own 'tenant', while still being managed by Workday. To do this with internally-hosted Large Language Models would be cumbersome, expensive, and not always feasible, especially when it comes to frequently training, tuning or dynamically spinning up smaller models with new training content].

Not to mention that smaller models bring better price points - and lower energy consumption for training and inference than large models. Even though Workday has been careful about AI pricing, at some point the price of compute factors in - and LLM vendors are feeling the investor squeeze, amidst "AI bubble" talk. When LLM providers bump up pricing, it has downstream impacts across AI markets. Luke:

Unit economics have to work out. Otherwise it doesn't hold in the long run. In the early days of this wave of LLMs and AI, the funding has been so generous at the frontier level that nobody's worried about it. But that's starting to change, and it has to.

Workday seems to have turned a corner, by linking its "human-centric" AI narrative to what this tech is (currently) capable of. But it's early days for this phase of enterprise AI. We'll need many more success stories. We'll also need to see how Workday's new flex credit pricing for AI factors into customers' ROI calculations. Nor do we have significant studies about the project success of contextual AI - what I call "constrained LLMs in compound architectures." (Some have even gone so far as to call this type of architecture "neurosymbolic AI." I don't agree, but that's a debate for another time). Workday is right: a shift from productivity to productivity-driven growth is important - not just for AI job growth, but for employee morale.
Perhaps being fused to an agent is better than losing your job to one, but humans perform best when they are not just overcaffeinated productivity machines, chasing high-volume KPIs from behind. That said, you know me - I always see room for improvement. Of the new agents announced, Workday's Performance Agent is, on paper, the highest-risk agent of the group. I believe the audience could have gained from more detail on how Workday has mitigated/addressed those risks. Yes, Workday said the right things about agents not actually writing the performance reviews. But in my interviews with Workday's AI leadership, I raised other questions. I'll get more answers on those as we go. For now, the point is: if you've done good work on AI risk, as Workday has done, then by all means use that - and help customers think through this as well.

The type of AI architecture Workday uses is strong on explainability. "Reasoning" brings more explainability to LLMs themselves (though I don't believe the black box factor ever goes away). Meanwhile, Workday's retrieval context for agentic workflows includes source documents (this is the case for the Performance Agent, for example). I'd like to see even more on AI observability and agentic evaluation, though that is less of an issue when agents aren't fully autonomous, and humans are approving/supervising. But given how robust some agent evaluation tools are, I'd like to see Workday bring that into the mix, at least more than I heard this year. On we go... For now, check out the full diginomica team coverage of Workday Rising.
[2]
Deloitte's Harry Datwani On Agentic AI: Orchestrating 'Human Plus Machine'
'We view agentic AI as augmentation and orchestration. Where can you eliminate tasks instead of eliminating roles? We start by reimagining what the experience and the process might look like, how to think about the combination of humans plus AI agents, and how to orchestrate that to drive the types of behaviors in so many organizations that we spend time with,' says Harry Datwani, principal and partner at Deloitte Digital. Business adoption of AI and agentic AI continues to grow, with global management consulting firm McKinsey in March estimating that over three-quarters of $500-million-plus annual revenue companies now use AI in at least one business function. However, as Forbes reported in August, MIT estimates that 95 percent of GenAI pilots deliver no return on investment. AI, and agentic AI in particular, has the potential to take over the work currently done by many human employees. But that doesn't have to be the case, and it won't be if businesses go about deploying agentic AI in a thoughtful fashion, said Harry Datwani, principal and partner at Deloitte Digital. [Related: Cognizant CEO: 'Still At The Early Innings Of Capturing The AI Opportunity'] Datwani, in an exclusive conversation with CRN, said his company's successful approach to agentic AI is to not think about replacing human workers, but orchestrating how humans and AI agents work together to expand employees' opportunities. "It's really about orchestrating what you can use AI to do and what you use humans to do when you reimagine how work gets done," he said. "So rather than just minimal human supervision, it's about orchestrating humans and agents as you reimagine how work gets done in a given industry, sector, or function." Datwani also said the future looks bright for organizations that look at agentic AI as a way to improve productivity and not as a way to reduce headcount. 
"We will see new levels of productivity out of organizations such as new ways to think about how one person can handle this many sales calls, or one person can handle this many finance transactions," he said. "I think we will start to see productivity growth that will drive enterprise value."

There's a lot going on with agentic AI and its impact on business. To learn more, read CRN's full conversation with Datwani, which has been lightly edited for clarity.

How does Deloitte define agentic AI?

We think of agentic AI as systems, versus a single tool or application or use case, focused on driving key outcomes and metrics with intelligence from large language models and insights gleaned from data. Where we differ from others is, often in the market it's about no human supervision, or minimal human supervision. One of the hallmarks of the way we think about it is this notion of human plus machine. It's really about orchestrating what you can use AI to do and what you use humans to do when you reimagine how work gets done. So rather than just minimal human supervision, it's about orchestrating humans and agents as you reimagine how work gets done in a given industry, sector, or function.

How does that definition change how Deloitte looks at the future of agentic AI in the workplace?

When we think about the future of the workplace, we see a lot of organizations repeating their less-than-ideal methods of the past. If you think about RPA (robotic process automation) a few years back, I guess it's more than a few years. Often the idea was to apply RPA to a current business process. And the way it was often done was like pouring asphalt over an old road. It looks nice. It's great on top. It looks perfect. But you didn't address the underlying issues, the foundations, cracks, etc. You just poured another layer of asphalt on top. And you effectively hardened the bad business process or suboptimal business process that you automated.
We believe agentic AI requires taking a step back to reimagine the process, the function, and the jobs to be done with AI using the customer's lens. I grew up doing a lot of customer experience projects, and we would talk about personas and journeys. You never had a persona of an AI agent before. You solved business problems. Your employees, customers, perhaps a partner, perhaps a sales manager, a service manager, those were your personas. Now you have AI agents as additional personas. You can't just replicate the process. You have to take a step back and reimagine how work gets done. So I think the first thing is to reimagine versus try to automate or identify an existing business process.

The second one is, we see a lot of organizations focused on where they can replace a human or replace job roles. We view agentic AI as augmentation and orchestration. Where can you eliminate tasks instead of eliminating roles? We start by reimagining what the experience and the process might look like, how to think about the combination of humans plus AI agents, and how to orchestrate that to drive the types of behaviors in so many organizations that we spend time with.

There's always that proverbial backlog that you never get to. You do your annual planning process. You identify your big strategic initiatives, prioritize them, sequence them. But inevitably, there's a ton of things that you never get to because of prioritization. ... There's typically another tier or two of things that if you could, you would get to. We believe a lot of enterprise value and productivity will be unlocked when you reimagine work and look at where you can apply agentic AI to those tasks. It's not that I necessarily want to replace you. I want to give you time to talk to more clients, to deal with more complex transactions, to take on more opportunities.
AI has been in the news recently with a couple of studies by Capgemini and MIT saying that up to 95 percent of AI pilots don't reach production or don't deliver measurable benefits. So why should we believe moving forward now with agentic AI will have any better chance of success?

This is a little bit of Harry's view, plus Deloitte's view. Change for organizations has always been hard, and I suspect it will always be hard. And it's particularly hard for the large, complex companies that we deal with, because there are organizational dynamics you have to manage. Often there are awareness and fluency gaps that you must manage. Data, we all know, is challenging. The regulatory landscape, particularly in AI, is evolving rapidly. We believe we're still in the very early stages of this transformation curve. If you think of a baseball game, we still think it's early innings.

Clients started out by saying, 'Does this technology work? What are the use cases I could apply this technology to? Let me quickly go prove the technology works. Let me prove that I can extract value.' What they haven't necessarily done is, well, 'Did I get all the underlying data right? I probably will never get to all the data, but did I get to a sufficient level of the data being where I needed it? Did I build enough organizational buy-in and consensus around this being the right thing to do?' It's going to take time.

And I think as you see particular sectors or examples start to reimagine key business processes, you start to hit a tipping point. I was talking to the head of AI and transformation at a large insurance company, and they are fully committed to a multi-year roadmap. We are doing some of the work, but they are doing a lot of it themselves, reimagining the insurance underwriting process with AI agents. One of the examples she gave me was that if you have a certain number of pictures of a house, you can predict with a fairly high level of accuracy what the next claim is going to be.
The water heater? The roof? A leak? And then with generative AI, you can start to make recommendations about what the insurance underwriter can do during the sales process to drive repairs within some period of time, to improve the quality of the risk that you're underwriting, and therefore be able to give beneficial pricing. That's a huge advantage if you can offer a cheaper insurance policy because you've now figured this out.

What are some other use cases where agentic AI can be applied more quickly than, say, mainstream use cases?

Because of our size, Deloitte tends to do work in almost every industry and sector, from tech and media to health and human services, in the government, and from banking to healthcare. Each of our teams consists of people who've spent two or three decades working on the business processes and technologies unique to that sector. The set of technologies and processes at a commercial bank are very different from those of a healthcare company. So we've spent time with them to identify a whole library of use cases across the functions of an organization: sales, service, finance, marketing, HR.

And we're seeing there are many ways to categorize them. Maybe a very simple way is autonomous use cases versus [human] assist use cases. Autonomous is where complexity is low, transaction volume is high, predictability is high, and you apply AI agents to do all of that business process without a human in the loop, freeing up human time to do other transactions. And assist is where the AI serves potential recommendations or answers for a human to apply criteria to. And often the human is the one delivering the answers. The area and functions where we see the most conversation and the most things moving to production is broadly around customer service, and largely assist customer service.
For instance, how do I help the person that's either engaging on chat or on the phone have a quicker, more insightful, less frictional customer service interaction?

Given that customer service has traditionally been a people-heavy process, how do you employ agentic AI without pushing customer service agents out of a job? In other words, doesn't the adoption of agentic AI lead to a need for fewer customer service representatives?

So I don't have a number to share that we think X percent will get redeployed. But what we're finding is that, because we're still in the early stages, the conversations are much more about, 'How do I get them to take on more customer interactions? How do I get them to support more of our customer segments,' and less of, 'How do I reduce headcount.'

We were doing work for a large B2B organization. I can't talk about the actual client, but they sell their products to big box retailers like Costco, Target, Walmart. We helped them build a set of agents and capabilities to support this - it's a cute name - we call it 'WisMO,' or 'where is my order.' Inevitably, there are multiple steps that have to get checked. For instance, when you and I order something from Amazon, it's one specific product to one specific household. When a big box retailer buys from this place, they could be buying multiple different things that come from multiple different shipping places, and then go to multiple distribution centers - so lots of variability around part of the orders, but not all of the orders. Are we shipping them under one invoice? Are we shipping them separately? There's lots of manual steps and checks.

The business case was not around, 'Can we fire these people and let them go.' It was, 'Our business is growing, and can we do this without growing our headcount exponentially? We will still grow our headcount, but can we grow it at a slower rate? And can we grow it in the regions we want, to cover the time zones?'
That was the business problem we were solving, and I suspect we're going to continue to see that, particularly while we're still in the early innings of this.

Is Deloitte involved in bringing agentic AI to coding?

Yes. We have built our own set of capabilities using what's in the marketplace, plus some of our own IP, to help address productivity, quality, and consistency of code generation using AI. We're finding that it helps benefit implementation timelines and implementation costs. But there is still a human element needed from a design perspective, and from code review and quality perspectives as well.

I ask because several large organizations have actually laid off coders because of the increased use of AI in coding. Does Deloitte see that as a potential issue?

What we're seeing in our conversations is that many of our clients, particularly in IT because it's the coding question, have massive amounts of backlog that they're never able to get to. The conversations we're having with clients are much more about how to get to all that stuff in the backlog. Can we help accelerate how they get there?

Looking forward, what does Deloitte expect to see in terms of agentic AI and how agentic AI will impact the workforce in the future?

One thing I think we're going to continue to see is the need for enterprises to embed agentic AI and agentic AI tools in the way their employees work. If you look at the adoption rates - and they're publicly available for all these consumer-facing generative AI tools - the adoption rates are ridiculously high, and it happened ridiculously fast. I don't think we've ever seen that kind of adoption before. Take [me, for example]. We're actually planning our holiday. And we said [to ChatGPT], 'Give us a five-day itinerary for Mexico City. We have a 12-year-old and a 9-year-old. We are pretty adventurous eaters. We have these food allergies, and we'd like to stay in a safe neighborhood.' We got a pretty good itinerary.
That's a common use case people are doing outside of work. I'm sure there are many others. But then you come to work, and many organizations don't have such tools available for their employees to use in many places. They have not yet opened their firewalls to these types of tools. And so I think we're going to see organizations invest in building tools to help their employees be more productive in the things they do. Interdependent with that will be programs to increase fluency, awareness, and comfort with these tools, because there's a whole plethora of questions: Is it safe? Is my data okay? Is this going to replace my job? You also have people just afraid to learn how to use these things. So I think the broad enablement of these tools, as well as the broad enablement of the workforce, will be areas that we see.

Additionally, we will see new levels of productivity out of organizations, such as new ways to think about how one person can handle this many sales calls, or one person can handle this many finance transactions. I think we will start to see productivity growth that will drive enterprise value.

Is there anything else you think we need to know?

Our role as a trusted advisor to our clients is to not only think about use cases but bring them a point of view. It is easy to walk into somebody and say, 'Well, you should reimagine your entire finance function.' But we bring a point of view of what we think that re-imagined future looks like for a bank, a healthcare company, human resources, or finance. One of the reasons we believe we're well suited is that no single technology is the answer. It's not an AWS or a Salesforce or a Google or a ServiceNow answer in the complex enterprises that we play in. It's how does this whole thing come together? Where should I use Google? Where should I use Salesforce? When should I build it myself? Those are tough, complicated questions.
And because we are [often] the most strategic partner, we can play not only the trusted advisor role, but also the ecosystem orchestrator role to bring the pieces together. Often a tech platform will come in and aggressively say, 'We're the only answer, just buy all of our stuff.' We come in and say, 'Yes, but you also need this and this, and here's what we should use each for.' We're finding that clients really need that perspective.
Industry leaders from Workday and Deloitte challenge the AI job apocalypse narrative, advocating for a human-centric approach to AI integration in the workplace. They emphasize the importance of reimagining work processes and focusing on task elimination rather than job replacement.
As artificial intelligence (AI) continues to advance, concerns about its impact on employment have intensified. However, industry leaders are pushing back against the notion of an 'AI jobs apocalypse,' instead advocating for a more nuanced and human-centric approach to AI integration in the workplace.
At the recent Workday Rising event, CEO Carl Eschenbach addressed the issue head-on, acknowledging the 'trust issue' with employees regarding AI. He emphasized the need to 'change the narrative' and 'bring our employees along with us on this journey,' asserting that AI, in Workday's vision, 'is not a threat' [1].
Workday's perspective on AI integration aligns with the concept of 'agentic AI,' which focuses on orchestrating human and machine capabilities rather than replacing human workers entirely. This approach is echoed by Harry Datwani, principal and partner at Deloitte Digital, who defines agentic AI as 'systems focused on driving key outcomes and metrics with intelligence from large language models and insights gleaned from data' [2].
Datwani emphasizes the importance of 'orchestrating humans and agents' as organizations reimagine how work gets done. This perspective challenges the notion of minimal human supervision in AI-driven processes, instead promoting a collaborative model where humans and AI complement each other's strengths.
Both Workday and Deloitte stress the need to fundamentally rethink work processes when integrating AI, rather than simply automating existing procedures. Gerrit Kazmaier, President of Product and Technology at Workday, warns against the simplistic view that AI will figure out everything on its own. He notes that 'models are very poor still at instruction following' and that sophisticated systems require careful engineering to make AI work effectively in an enterprise context [1].
Datwani similarly cautions against repeating past mistakes, such as those made with robotic process automation (RPA), where automation was often applied to existing processes without addressing underlying issues. He advocates for 'taking a step back to reimagine the process, the function, and the jobs to be done with AI using the customer's lens' [2].
A key aspect of the agentic AI approach is its focus on eliminating tasks rather than entire job roles. Datwani explains that Deloitte views agentic AI as 'augmentation and orchestration,' emphasizing the question, 'Where can you eliminate tasks instead of eliminating roles?' [2]
This perspective aligns with Eschenbach's view that AI integration will lead to people moving on to 'different roles, different tasks, different jobs,' becoming more strategic in their work [1].
Both Workday and Deloitte are optimistic about the potential of agentic AI to drive significant productivity growth and create value for organizations. Eschenbach cites projections from the World Economic Forum, suggesting that 11 million AI-related jobs will be created in the next five years, alongside 78 million net new jobs globally [1].
Datwani envisions a future where organizations achieve 'new levels of productivity,' with AI enabling employees to handle increased workloads efficiently. He believes this approach will 'drive enterprise value' and transform how businesses operate across various industries and functions [2].