3 Sources
[1]
Was 2025 Really the Year of AI Agents?
On 5 January 2025, OpenAI CEO Sam Altman outlined his vision for the year in a post on his personal blog, proclaiming that "in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies." His remarks set the tenor for the AI industry through 2025. But did AI agents actually join the workforce in 2025? The answer is yes, absolutely -- or no, not at all. It depends on who you ask.

Michael Hannecke, a sovereign AI and security consultant at Bluetuple.ai, says that "everyone" is looking into how to use AI agents. "But there's also a kind of disillusionment. It's not that easy. You don't just throw AI at anything, and it just works."

While many industries have expressed interest in AI agents, programmers and software engineers have leapt toward the front of the pack. Brandon Clark, senior director of product and engineering at Digital Trends Media Group, shares this enthusiasm. He has moved his work fully into AI tools and now trusts the capabilities of AI agents in most situations. "I use Cursor as my daily driver to develop code," says Clark. He also frequently uses Anthropic's Claude Code, bouncing between the two not only because he prefers each for particular tasks, but also to get around usage caps -- an indication of just how heavily he uses agents. "Sometimes I run out of tokens on Claude Code [...] at that point I'll switch back over to Cursor and just continue my work."

As with many programmers, Clark's willingness to use agents stems in part from his background: years of experience working with integrated development environment (IDE) software. An AI-infused IDE such as Cursor presents agentic AI in a way that plugs into existing tools and workflows with relative ease.

His quick adoption also shows how well equipped AI agents are for certain software engineering tasks. Tests, for example, are code used to verify that software is operating properly by checking its behavior against inputs and outputs known to be correct. Tests are important but repetitive, and rarely require novel thinking to implement, which makes them easier for AI agents to handle. "It's at the point where I don't even need to be involved. As part of the [AI] system instructions, I say that any time it writes a new feature, make sure to also write tests for it. And while you're at it, run the tests, and if anything breaks, fix it," Clark says.

Programmers have also been empowered by new ways to integrate AI across software, such as Anthropic's Model Context Protocol (MCP) servers, introduced in November 2024, and Google's Agent2Agent protocol, introduced in April 2025. These allow agents to call on other software to complete or verify their work. Cursor, for example, has browser tools that can be called as an MCP server; an agent programming for the web can use them to check the results of its work.
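In practice, the server side of such an integration is small. Here is a minimal sketch using Anthropic's TypeScript MCP SDK; the check_page tool is a hypothetical stand-in for the kind of browser check described above, not Cursor's actual implementation.

```typescript
// Minimal sketch of an MCP tool server, using Anthropic's TypeScript SDK.
// The "check_page" tool is hypothetical -- a stand-in for the kind of
// browser check an agent might call to verify its own web output.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "browser-check", version: "0.1.0" });

// Fetch a URL and report its status code and <title>, so the agent can
// confirm that the page it just generated actually renders.
server.tool("check_page", { url: z.string().url() }, async ({ url }) => {
  const res = await fetch(url);
  const html = await res.text();
  const title = /<title>(.*?)<\/title>/i.exec(html)?.[1] ?? "(no title)";
  return {
    content: [
      { type: "text" as const, text: `status=${res.status} title=${title}` },
    ],
  };
});

// Expose the tool over stdio, where an MCP-aware client such as an IDE
// agent can discover and call it.
await server.connect(new StdioServerTransport());
```

Once registered, any MCP-aware client can list and call the tool, which is what lets an agent verify its own work without a human in the middle.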
"Most others are still in a development phase, still evaluating, still testing, due to the insecurities that can come with it." He says many organizations react with a degree of "German angst" over the risks that come with AI automation. "There are a lot of things we're not quite 100 percent sure about with AI agents." Some people believe that agents are productivity boosters with little to no downside; others see them as promising but early technology; still others see them as fundamentally dangerous. German and European regulations contribute to this reaction, to be sure, but they're not the only reason for caution. Jason Bejot, senior manager of experience design at Autodesk, which makes 3D design software, articulated a concern that will be relatable to engineers across many fields: accountability. "That's one of the big challenges. [...] How do I actually get it to work, to make it precise, so that I can get it built?" Bejot asks. Autodesk has an agentic AI tool, Assistant, that can field questions from users of Autodesk software including AutoCAD, Autodesk Fusion, and Revit. However, as it exists today, the assistant is largely designed to be only that -- an assistant. It can summarize information and provide guidance, but it's not meant to take the reins and engineer a solution autonomously. "You need to be able to have a clear through-line. If architect A has updated their sketches using the assistant, that person is still accountable for those updates," says Bejot. "So, how do you create that level of accountability across the board? It's something we're very conscious of." The varying experiences of Clark, Bejot, and Hannecke underscore the wide range of outcomes from AI agents through 2025 and into 2026. For some, agents are already working as Altman speculated. For others, there's a lot more work to be done before agents can deliver. Kiana Jafari, a postdoctoral researcher at Stanford University, has studied this gap. She co-authored a paper that found technical metrics like accuracy and task completion dominate 83 percent of AI agent assessments. These are metrics that can be verified and systematized, reflecting Clark's experience as a programmer. However, technical accuracy isn't the only metric worth attention. "Most of the agentic systems that we are working with right now are in theory doing very well in terms of accuracy," says Jafari. "But when it comes down to people using it, there are a lot of hurdles." In fields where professionals bear personal responsibility for outcomes, even AI agents that achieve a high standard of technical accuracy may not perform well enough. Jafari's interviews with medical professionals have made it clear why this is the case. "What they all say is, 'If there is a 0.001 percent chance that this could make mistakes, that is still my name. That is on me if it's wrong.'" This can result in AI agents backsliding from an active to advisory role. This can help explain the extreme divergence in the reception of AI agents. Some people believe that agents are productivity boosters with little to no downside; others see them as promising but early technology; still others see them as fundamentally dangerous. The reality is that AI agents can be all of these things, depending on the task they're set to solve. "There's still the need for the human in the loop," says Hannecke. "2025 was a lot of 'let's play with it, let's prototype it.' 
2026 will be the year we put it into production, and find out what will be the difficulties we have to deal with when we scale it."
[2]
How CEOs bringing AI agents to work are preparing customers and employees
CEOs who have overseen successful AI agent deployments say success comes from treating the tools as an addition to existing jobs, not a replacement for them.

Companies are investing heavily in AI-powered agents, pointing toward a future where a chatbot may be customers' first line of communication when seeking an answer or wanting to buy something. In October, Walmart announced a deal with OpenAI that will enable shoppers to both find and buy items without leaving ChatGPT. The retailer also now has an AI agent in its app that can answer questions and recommend products. On Walmart's earnings call in November, CEO Doug McMillon said agentic AI will be one of the growth drivers for the retailer's e-commerce business, adding that the technology will "help people save time and have more fun shopping." In January, Walmart said customers will soon be able to use Google's artificial intelligence assistant Gemini to more easily discover and buy products from the retail giant and its warehouse club, Sam's Club.

Those investments in agentic software are also aimed at helping workers send emails, summarize notes, and increase their overall productivity. All of that puts added pressure on companies to ensure this approach actually works for all stakeholders.

At the recent annual customer conference of telecommunications software and services provider Calix, CEO Michael Weening asked the room of broadband service provider executives if any of them didn't have enough to do. "And we raised the lights, and no one raised their hand. I asked if any of them were sitting around lazily, waiting for their jobs to be displaced because they didn't have anything to do. No hands," Weening said. "The message I hear from everyone all the time is 'I have way too much to do,' so my message was how do you free up time to do more and how do you add capacity so you can grow."

In October, Calix rolled out AI agents across the platforms that its broadband service provider customers use. Those include agents that help marketers generate subscriber offers, help customer service representatives improve their troubleshooting, direct subscriber questions and interactions to the right people, and help field technicians automate diagnostics and optimize installations, among other things.

From Weening's perspective, this should be welcome help, but he acknowledges that messaging from bigger technology companies' executives, suggesting agentic AI will lead to layoffs, has many scared. Artificial intelligence was cited as the reason for more than 55,000 layoffs across the U.S. in 2025, according to data from Challenger, Gray & Christmas. That included job cuts at major employers like Amazon, Microsoft, and Salesforce.

The fear is being further amplified by tech and AI sector executives touting the potential for AI to wipe out jobs across several industries. Earlier this week, Anthropic CEO Dario Amodei wrote in an essay that AI will deliver a broader shock to the labor market than other technological advances and could wipe out jobs across several industries. The technology is not replacing a single job but acting as a "general labor substitute for humans," Amodei wrote.

All of that is leading to declining worker sentiment toward AI. A January 2026 poll by Mercer found that 40% of employees are concerned about job loss due to AI, compared with 28% in 2024. Weening said the "demonization and freaking out" stemming from executive messaging around AI is a real concern and will distract from the technology's potential.
"Agentic AI is purely a workflow, and every task in a workflow is an agent," he said. Instead, he said companies need to make an effort to showcase how AI agents are "your new teammates to help you do a better job." There are ways to try to soften the introduction of AI agents. When Calix rolled out the technology across its platform, it took the step to transform the agents into what Weening called, "really non-aggressive, very friendly, Teletubby-like characters." "My view is they're becoming part of your workforce. You think of them as part of your team," he said. In fact, some companies have begin counting AI agents within their overall workforce numbers. Consulting firm McKinsey now has 25,000 personalized AI agents and 40,000 human employees, according to data shared by the firm's global managing partner at a recent live taping of the "All-In" podcast at the CES trade show in Las Vegas. That's a similar message to what Weening shared internally, where Calix was an early adopter of Microsoft's Copilot AI companion. "My thought is if we used Copilot ubiquitously, the benefit is we've got data protection, but more importantly, we can signal to the entire employee population we are serious about innovating," he said. So far, the company has had more than 700 employee-generated agents built, Weening said. Calix also identified 40 workflows that the company believed would have a significant impact on productivity if improved by AI. The company's IT team then formalized those tools and rolled them out across the organization. "Are all those change-the-world agents? No, it may be something as simple as a tool to write an email faster, but at least they're playing with it," Weening said. Weening said in all his communications around AI, he has been clear about the balance that needs to be struck around risk and speed, especially when it comes to critical data. "I see all the spectrums from those that are very concerned and the others who are so focused on moving fast they're oblivious to the risk," he said. "I think many people are struggling with finding the right way to this, and I think that comes back to having very clear guidelines with regard to protecting data, whether that's your customer data or how your partners are managing our data." Weening said he acknowledges that jobs will be impacted by these new agentic AI tools. "I heard a saying the other day and now I repeat it all the time that 80% of jobs will change 20%, 20% of jobs will change 80%," he said. That means his message internally is that these tools will allow workers to take on new tasks as Calix continues the growth trajectory that it is on. While headcount growth might not continue to double because of AI productivity gains, it will still grow exponentially. "We're in this disillusionment phase with AI right now where everyone is asking, 'Where's the ROI?'" he said. "What we have to build is a mindset of change inside the company to embrace AI and look at it pragmatically so that we can evolve where we are." "We're making great progress in that regard, but it's going to accelerate at an insanely fast pace in 2026," he added. From Everest Group CEO Jimit Arora's vantage point, there have been several enterprise-level systems that have helped to transform the way that business is done, from systems of record like ERPs, systems of engagement like CRMs, and systems of insight where insights and data start to be put into action. AI agents, as he put it, are part of the new "systems of execution" category. 
"When you use a combination of deterministic machine learning, AI, generative AI and agentic AI as currently defined, that's when value happens," Arora said. While experimentation is well underway to meet that future, Arora said he would still call this moment "pre-agentic." "We still don't have true agency with the agents; we are building agents that can do actions, and there's a difference," he said. "We've reached autonomy in some ways, but we haven't given them agency." However, Arora said that he expects to start to see efforts to do just that happen in 2026, especially in what he said are the "three biggest use cases" for true agentic AI: in the software development lifecycle; service desk applications within the HR, IT and finance functions; and customer experience. But as those efforts move along, Arora said companies should make sure they avoid what he calls "PTSD," or "process debt, tech debt, skills debt and data debt." "If you have the right data, but you're trying to identify a broken process, you're going to amplify the brokenness," he said. "You also need the right skills, because applying yesterday's skills to tomorrow's problems won't work. And through all of this, technology can be the easy part." Still, Arora said that he does caution CEOs and executives that are hoping to see significant agentic AI results next year. "I want to take some inspiration from cloud: AWS came out in 2006, Google Cloud Platform in 2008 and Azure in 2010. It took us a good 15 years to get to 50% public cloud adoption," he said. "That true unlock is going to happen in the next three to five years, but we're gong to see some meaningful progress. We have to think of it as a capex project; that's when you'll get the true unlock, otherwise you'll be stuck in the valley of incrementalism, or pilot purgatory." Bruno Guicardi, the co-founder of information technology company CI&T, said that when it comes to building out your own agentic AI, he favors a structure that "gives autonomy to the agents gradually in systems where there is a level of supervision that you can define when you actually retract the supervision." Guicardi used the example of automated client responses. Where someone would initially review every AI-created response before the response is sent, over time, if the responses were deemed acceptable, that person would start to review less of them and then allow the AI to send them automatically. "We think that this will be a way to build confidence," he said. "It's about building a system that earns control, that earns the trust to be autonomous."
[3]
Determining an agentic future - why Salesforce is evolving its AI thinking beyond the limitations of the LLM
There's been an interesting evolution in some of the AI messaging from Salesforce around the role of Large Language Models (LLMs) in recent months, as various senior execs have come out with some strong commentary. Take Sanjna Parulekar, Senior Vice President of Product Marketing at Salesforce, cited in The Information, who said bluntly:

All of us were more confident about Large Language Models a year ago.

Parulekar is not alone. CEO Marc Benioff, never knowingly underselling an AI-enabled future, recently told Business Insider that he was busy drafting Salesforce's annual strategy document and leaning on data foundations rather than AI models to assist him, citing reliability and the danger of hallucinations as an issue.

Now, Benioff warning about the threat from hallucinations and the trust erosion that ensues is not a new story - he's been consistent on that theme ever since Salesforce pivoted around its Agentforce strategy, even when faced with OpenAI CEO Sam Altman's preposterous attempts to convince him that hallucinations are a feature of the gen AI experience, not a problematic bug!

But as the firm looks to encourage its user base out of the pilot phase and into scaling up and mainstream adoption of both generative and agentic AI, Salesforce is striking a new tone of candor about the limitations of some of the previous received wisdom around LLMs, and laying out some realities for customers to get their heads around.

The principal issue is the so-called 'last mile', where AI has to be on top of its game. In key markets, such as financial services, being 95% right isn't going to fly. Reliability is critical to adoption, and frankly LLMs today aren't providing the necessary levels of confidence that they can deliver this. While LLMs do have their uses and certainly aren't going away, a more deterministic approach that can enforce non-negotiable rules and standard operating procedures is needed for enterprise AI adoption to accelerate.

Muralidhar Krishnaprasad, better known as MK, is CTO of Agentforce Platform at Salesforce. He argues that following the shift from GPTs to agents, the next evolutionary stage is here, whereby agents are not just helping, but becoming more autonomous. This demands end user trust, he notes - a key Salesforce corporate value over the past quarter of a century, long before the AI focus. An enterprise is made up of rules and regulations, he argues, so as an AI provider you need to provide guarantees that these can be met. Salesforce is approaching this by blending deterministic and non-deterministic tech:

Trust obviously has to start from the data level, so we have created a whole new data governance level at the Data 360 level. Even if you bring data from many places in with our main data, we have an AI-based Policy Governance layer that can work across any of your data, so the data feeding into agents is governed. Second, we are making sure the agent itself is governed, with determinism built around it and policies picked around it. We are expanding [that] into the app layers as well. So in Service Cloud, you will get specialized templates that are useful, say, for Customer Service, where you may have rules in a particular organization about how you can give a refund, for example.
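That refund rule captures the shape of the whole deterministic layer: the agent proposes, a hard-coded policy disposes. As a generic illustration only -- invented names, not Salesforce's actual API -- such a guardrail might look like this:

```typescript
// Illustrative sketch of a deterministic guardrail around an agent's
// proposed action, in the spirit of MK's refund example. Nothing here is
// Salesforce's actual API; all names and rules are invented.

interface RefundProposal {
  orderId: string;
  amount: number;
  reason: string;
}

interface RefundPolicy {
  maxAutoRefund: number;
  allowedReasons: Set<string>;
}

type Decision =
  | { kind: "execute" }
  | { kind: "escalate"; why: string };

// The LLM may *propose* a refund, but a hard-coded rule decides whether it
// executes or is routed to a human. Same input, same policy, same outcome,
// every time -- which is the point of the deterministic layer.
function enforceRefundPolicy(p: RefundProposal, policy: RefundPolicy): Decision {
  if (!policy.allowedReasons.has(p.reason)) {
    return { kind: "escalate", why: `reason '${p.reason}' is not covered by policy` };
  }
  if (p.amount > policy.maxAutoRefund) {
    return { kind: "escalate", why: `amount exceeds the auto-refund cap of ${policy.maxAutoRefund}` };
  }
  return { kind: "execute" };
}
```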
Another aspect to be factored in is what MK calls "a post-analytics scenario". He explains:

You're viewing everything happening in real time. The Agentforce platform can alert you if there is anything going wrong, etc, so you can take action right away. But you also want to look at the corpus of conversations that are happening and then start deriving reason over it, to see if your agents are tuned correctly. Because one conversation may go right or wrong, but when you see a cluster, then you can start really getting insights. We have invested to bring all this data together and then create our analytical dashboards so that you can go optimize your agent. So we have closed the loop in terms of making sure your data is governed securely [and] your agents are governed securely, deterministically and non-deterministically. You're able to observe and react in real time as things are evolving, and then do analysis of everything that's going on. We've created that loop, if you may, all on a unified platform. That way you can do whatever you did before in terms of actioning workflows, calling an API, doing your analysis with Tableau, or being able to use it in your Sales, Service, Marketing, or Slack - they all come together into that same loop.

As the AI strategy has evolved, there have been some important learnings en route, says Madhav Thattai, COO for Salesforce AI:

Number one, experience is nothing without context, nothing without data. We're actually using LLMs to enrich that context to make sure that the data is interpretable in the right way and that the data is being used in the right ways. LLMs play a big role in that context gathering. The second thing, probably our biggest innovation last year, was the idea of hybrid reasoning. As agents are planning and reasoning, what the industry has typically relied on is more and more and more sophisticated prompts. You've probably seen this - pages and pages of prompts and instructions to try to outline every decision that has to be made as an agent is reasoning through the experience.

But that is not a reliable reasoning loop, he warns:

What ends up happening as the instruction set gets more complex is that instructions might get skipped [or] the agent might interpret things the wrong way. Customers actually require the ability to execute deterministic process consistently. In a healthcare company, 90% [accuracy] is not enough.

He cites the example of Salesforce customer - and partner - Adecco, a flagship Agentforce use case:

They're a recruiting company; they're using it for candidate qualification. You can't have an experience that sometimes works and sometimes doesn't. This is where customers leverage what they use Salesforce for today, which is a lot of those deterministic workflows, but paired with this agentic experience.

Echoing MK's theme, Thattai emphasises the importance of observability:

One of our biggest learnings is that as customers go up the maturity curve, you go from kind of toy demo and proof of concept to, 'I want to put this thing at scale. Now I need to understand, am I hitting my KPIs? What do I need to improve?'. This is an ongoing journey for enterprises.

Thattai says:

These agents are not a fire-and-forget technology. There's a continuous improvement. You're adding skills to the agent. You're adding capabilities to the agent. How is it performing? That's really critical. So the context, the control and determinism, the observability... are really significant.
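MK's "when you see a cluster, then you can start really getting insights" is the analytical half of that loop. As a generic illustration -- not the Agentforce analytics layer, and with an invented schema -- the aggregation step might look like this:

```typescript
// Sketch of the "cluster, then reason over it" observability loop MK
// describes: individual conversations are noisy, but aggregates reveal
// where an agent needs tuning. Schema and thresholds are illustrative.

interface Conversation {
  topic: string;
  resolved: boolean;
  escalated: boolean;
}

function flagWeakTopics(log: Conversation[], minSample = 50): string[] {
  const byTopic = new Map<string, { total: number; resolved: number }>();
  for (const c of log) {
    const t = byTopic.get(c.topic) ?? { total: 0, resolved: 0 };
    t.total += 1;
    if (c.resolved && !c.escalated) t.resolved += 1;
    byTopic.set(c.topic, t);
  }
  // Surface topics with enough volume and a sub-80% resolution rate --
  // candidates for new instructions, tools, or deterministic rules.
  return [...byTopic.entries()]
    .filter(([, t]) => t.total >= minSample && t.resolved / t.total < 0.8)
    .map(([topic]) => topic);
}
```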
There's a home-grown example of this in practice in the shape of Salesforce's own help.salesforce.com support site, Thattai argues:

It's probably one of the largest agentic Customer Service experiences in the world, in terms of just the sheer number of conversations that are happening. The first two-to-three months after we launched, that was entirely about honing the experience. It was not good out of the gate. It did hallucinate, which is not acceptable. It didn't always execute on processes. So constant iteration and improvement using the observability stack is really important.

So with the messaging evolving into something that sounds more pragmatic than the blind faith in the silver bullet of LLMs pitched by so many vendors with skin in the game for making that story stick, what comes next? Thattai predicts:

I think as we get into this year, what we're really excited about is these agents are going to move from 'I perform a job, I perform a task' to 'I am now orchestrating a system, I am using an agentic interface to orchestrate across multiple different customer experiences. It's not just a [single] customer service experience'.

He points to US retailer Williams-Sonoma as a good example here:

That experience begins with product discovery and product recommendations, then it goes into fulfillment, and it goes into customer service. That's how we see things really playing out as far as the consumer experiences are concerned. Agents are going to increasingly play this orchestration role, but it's not just orchestrating across agents; it's really orchestrating a system of humans and agents together, and we think that that's really going to be a core part of the experience in the future.

And that idea of human intelligence alongside its artificial counterpart remains the beating heart of successful implementation. He notes:

Our own Engineering team can now see the ability to build and develop software is significant and profound, no doubt. But when we talk about an Agentic Enterprise, we're not just talking about build - we're talking about operate, and operating an Agentic Enterprise requires more than just the acceleration in the build. It requires the foundation of trust. It requires the connectivity to the data and the context. It requires observability and analysis.

The emphasis on the complementary roles of human and tech is a critical differentiator, concludes MK:

We believe in humans + AI. We are not like Sam Altman, saying AI is going to replace everybody. We truly believe humans are an integral part of the decision-making process, and the tools we are building really help humans, whether it be in engineering, whether in Consumer Service and Marketing, wherever they may be.

"While LLMs are amazing, they can't run your business by themselves."

That official Salesforce media comment from the tail end of last year frames the current shift in thinking and messaging well. In 2026, we're moving from theory and dabbling to practical implementation on a larger scale - or we will, if we can bring ourselves to trust the reliability of the AI tech that we're about to bet the farm on.

With that in mind, it's time to shift the focus away from the bells-and-whistles of 'Oooh, look at all the clever things ChatGPT can do!' to 'Does this stuff scale across my enterprise, and in such a way that I can rely on it to keep me in business?'. That demands an emphasis on the importance of a deterministic approach and data-driven decision-making.

The key to the latter is putting your data house in order first. That's maybe a message that many organizations are going to struggle with after spending a ton of money over the years on data warehouses, data mining, data lakes, data whatever-we're-calling-it-this-year tech that was supposed to address this.
The harsh reality is that most of them aren't fit for purpose yet to tackle scaled-up, mission-critical AI. But that can be fixed. A major asset that Salesforce has here is Data 360, formerly known as Data Cloud, which has been pitched throughout as the other side of the Agentforce coin. As Salesforce's Chief Revenue Officer Miguel Milano noted at the end of last year:

It's not only about Agentforce... If I look at my Top 10 deals [in Q3], seven of them contain Data Cloud [now Data 360] and six of them contain Agentforce.

He went on to add:

Trust me, you don't want LLMs to execute... because if we let an LLM execute, they will execute differently, even with the same data, at different times.

So the future lies this way - and the future is deterministic. As for the last mile, it's a solid enough metaphor, although this is a long, long, long journey ahead, and in many ways one that's never going to end as continuous iteration becomes the order of the day. But a journey of a million miles begins with etc etc.
OpenAI's Sam Altman predicted 2025 would be the year AI agents join the workforce, but reality proved more complex. While programmers embraced tools like Cursor and Claude Code, enterprise AI adoption faced hurdles around trust and reliability, with concerns over AI-driven job displacement mounting as artificial intelligence was cited in over 55,000 U.S. layoffs.
On January 5, 2025, OpenAI CEO Sam Altman declared that "in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies." His proclamation set expectations for what many hoped would be a transformative year for artificial intelligence in the workplace [1]. Yet as the year unfolded, the answer to whether AI agents truly joined the workforce became decidedly ambiguous -- it depends entirely on who you ask and which industry you examine.

For software engineers and programmers, 2025 delivered on the promise. Brandon Clark, senior director of product and engineering at Digital Trends Media Group, has fully integrated AI agents into his daily workflow, using Cursor as his primary development tool and frequently switching to Anthropic's Claude Code when he hits usage caps [1]. His experience highlights how AI productivity gains have materialized for those working with code, particularly for repetitive tasks like writing tests. Clark now instructs his AI system to automatically write tests for new features and fix any issues that arise, requiring minimal human intervention [1].

While programmers raced ahead, broader enterprise AI adoption encountered significant obstacles. Michael Hannecke, a sovereign AI and security consultant at Bluetuple.ai, observed that while "everyone" is exploring how to use AI agents, there's also "a kind of disillusionment" as organizations discover implementation isn't straightforward [1]. He reported seeing only three or four use cases where companies have autonomous AI agents in production, with most still in development or evaluation phases due to security concerns and uncertainty [1].

The accountability challenge looms large. Jason Bejot, senior manager of experience design at Autodesk, articulated a concern resonating across engineering fields: "How do I actually get it to work, to make it precise, so that I can get it built?" [1] Autodesk's agentic AI tool, Assistant, remains deliberately limited to providing guidance rather than autonomously engineering solutions, reflecting widespread hesitation about granting AI agents full autonomy in mission-critical workflows [1].

As enterprise AI deployments scaled, major players like Salesforce began publicly acknowledging the limitations of Large Language Models (LLMs). Sanjna Parulekar, Senior Vice President of Product Marketing at Salesforce, stated bluntly: "All of us were more confident about Large Language Models a year ago" [3]. Even CEO Marc Benioff cited reliability and hallucinations as persistent issues, emphasizing that in key markets like financial services, being 95% right isn't acceptable [3].

Muralidhar Krishnaprasad, CTO of Agentforce Platform at Salesforce, explained the company's shift toward blending deterministic and non-deterministic technology to enforce non-negotiable rules and standard operating procedures. This approach includes enhanced data governance at the Data 360 level and policy governance layers that work across all data feeding into AI agents [3]. The agentic future, according to Salesforce's evolving strategy, requires moving beyond pure LLM reliance toward systems that can provide enterprise-grade trust and reliability [3].

The promise of AI agents joining the workforce came with a darker reality: artificial intelligence was cited as the reason for more than 55,000 layoffs across the U.S. in 2025, according to Challenger, Gray & Christmas data, including job cuts at Amazon, Microsoft, and Salesforce [2]. Executive messaging around AI's potential to eliminate jobs has intensified worker fears, with Anthropic CEO Dario Amodei writing that AI will have a broader shock to the labor market than previous technological advances, acting as a "general labor substitute for humans" [2].
A January 2026 Mercer poll revealed that 40% of employees are concerned about job displacement due to AI, up from 28% in 2024 [2]. This growing anxiety has prompted some CEOs to reframe their messaging around AI agent deployments as workforce augmentation rather than replacement.
Successful AI agent deployments have required thoughtful implementation strategies. Walmart announced a deal with OpenAI in October 2025 that enables shoppers to find and buy items without leaving ChatGPT, with CEO Doug McMillon identifying agentic AI as a growth driver for e-commerce [2]. In January, Walmart added Google's Gemini assistant to help customers discover and purchase products [2].

Calix CEO Michael Weening took a different approach when rolling out AI agents across platforms used by broadband service providers in October 2025. He transformed the agents into "really non-aggressive, very friendly, Teletubby-like characters" to soften their introduction [2]. Weening emphasized that agentic AI should be viewed as "your new teammates to help you do a better job," positioning the technology as a solution to capacity constraints rather than a threat [2]. Some companies have begun counting AI agents within workforce numbers, with McKinsey now reporting 25,000 personalized AI agents alongside 40,000 human employees [2].

The programmers who experienced success with AI agents benefited from infrastructure advances like Anthropic's Model Context Protocol (MCP) servers, introduced in November 2024, and Google's Agent2Agent protocol from April 2025 [1]. These protocols allow agents to call on software to complete or verify their work, with use cases like Cursor's browser tools enabling agents to check the results of web programming [1]. The ease of integration into existing workflows and tools explains why software engineers leapt ahead in adoption while other sectors remained cautious about AI for customer service and other applications requiring absolute accuracy and accountability.
Summarized by Navi
Sources: [1] Policy and Regulation · [2] Technology · [3] Technology