Curated by THEOUTPOST
On Wed, 7 May, 8:02 AM UTC
2 Sources
[1]
AI agents promise big things. How can we support them?
Sponsored feature If you thought that having ChatGPT create recipes based on what's in your fridge was cool, wait a while - what's coming next will make that seem decidedly retro. That's the hope for AI advocates who are convinced that the next big thing is agentic technology. It's an evolution of AI that enables far more complex, powerful things, and it has the market excited. Over four in five companies told IDC that AI agents are the new enterprise apps, and they're reconsidering their software procurement plans around this new technology. All this is going to take a lot of AI models running concurrently to do well, and they'll all need managing. That's where AI-ready infrastructure from Nutanix aims to help.

So what is agentic AI? Early large language models (LLMs) focused on carrying out basic tasks that formerly only human beings could do. Transcribing text, suggesting recipes, and formatting spreadsheets are great applications, but LLMs lacked the depth to do lots of these tasks in succession for more complex outcomes.

This is where agentic AI - built atop reasoning models - comes in. Reasoning models go beyond just retrieving and remixing information, instead working through multi-step problems sequentially. They can often apply logic to novel situations, rather than only to ones they've seen before. This enables them to display what we call agentic behaviour. Instead of acting like a simple tool, an agentic model will set sub-goals in pursuit of a more complex goal, reflecting on its outputs along the way to ensure that they're correct and adapting to context in real time. It might also use tools such as software applications or online services to help it achieve its ends.

Let's say you want to analyse the fluid dynamics of an aerofoil wing and come up with some alternative designs to improve fuel efficiency. If you wanted to control every part of the project, you'd get out a calculator and allocate half a day.
An LLM is the equivalent of that calculator. If you wanted someone trusted to do the groundwork for you, you'd ask a PhD student to handle it without worrying about the details. An agentic AI is the equivalent.

An agentic AI will use multiple LLMs for its reflections, says Debo Dutta, Chief AI Officer at Nutanix. "These large language models leverage traditional databases, storage, and some newer components like vector databases," he says. The power of reasoning LLMs, combined with these underlying infrastructure tools, breathes new life into enterprise automation, he adds. "Now the large language models can do better decision-making and better planning."

Such decisions could include evaluating a customer complaint and advising on the best course of action, for example. "They're pretty good at a lot of tasks for which traditional software was hard to write," Dutta observes.

It takes considerable resources to build and deploy agentic AI, especially as it becomes more complex. Each agentic application usually employs multiple models simultaneously, tailored specifically for their respective roles, rather than using a single general-purpose model. These include general thinking and inferencing for basic decision-making and reasoning tasks, embedding (which converts existing data into a format understandable by LLMs), and re-ranking. The latter prioritises and determines the relevance of search results within agentic workflows. Agents also usually require a model guard, which prevents models from generating offensive or inappropriate outputs, he explains.

Dutta differentiates between models - the LLMs that power the AI - and endpoints. The latter are the APIs that applications access to exploit a model's capabilities. As these models proliferate, the processes involved in deploying and using them become more complex. That's compounded by the expense of running the models, which are compute-intensive, says Dutta.
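To make the multi-model pattern concrete, here is a minimal sketch of how one agentic request might flow through the specialised roles described above - embedding, re-ranking, reasoning, and a model guard. All function names are hypothetical and each model call is stubbed with a placeholder; a real deployment would route each stage to a separate LLM endpoint.

```python
# Hedged sketch: each function stands in for one specialised model role.
# In production, each stub would be a call to a dedicated model endpoint.

def embed(text):
    """Stub embedder: map text to a toy vector (a real embedding model here)."""
    return [float(len(word)) for word in text.split()]

def rerank(query, documents):
    """Stub re-ranker: order documents by naive term overlap with the query."""
    terms = set(query.lower().split())
    return sorted(documents, key=lambda d: -len(terms & set(d.lower().split())))

def reason(query, context):
    """Stub reasoning model: in practice, the planning/inference LLM."""
    return f"Answer to {query!r} using {len(context)} context document(s)"

def guard(output):
    """Stub model guard: withhold outputs containing disallowed terms."""
    blocked = {"offensive"}
    if any(term in output.lower() for term in blocked):
        return "[output withheld by model guard]"
    return output

def handle_request(query, documents):
    """One request touching all four model roles in sequence."""
    _ = embed(query)                    # embedding: vectorise for retrieval
    top = rerank(query, documents)[:2]  # re-ranking: keep most relevant docs
    draft = reason(query, top)          # reasoning: plan and answer
    return guard(draft)                 # guard: filter inappropriate output

print(handle_request("refund policy for damaged goods",
                     ["refund policy text", "shipping rates", "damaged goods process"]))
```

The point of the sketch is the shape, not the stubs: even a simple agentic request touches four distinct models, which is why Dutta counts "four to five LLMs" per application.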
Cloud service providers charge for these models on a per-token basis, and undisciplined use can quickly escalate costs.

Nutanix focuses on software for the efficient deployment of cloud technologies on customers' premises, in cloud and multi-cloud environments, and in hybrid scenarios. The company offers Nutanix Enterprise AI, a unified platform designed to simplify, secure, and scale the deployment of large language models (LLMs) and agentic workflows across private, public, and hybrid cloud environments. Nutanix Enterprise AI is the latest step in the company's journey to make its customers' workloads more manageable and portable across the entire infrastructure, from the edge to the core and the cloud.

"Enterprises are really looking for vendors and solutions that can help with an 'easy button'," he says, harking back to Staples' famous marketing campaign from the mid-2000s. Nutanix, which cut its teeth in hyperconverged infrastructure hardware, has been doing that for years since shifting its focus to cloud infrastructure software.

The move to AI, and particularly generative AI, has upped the ante for companies grappling with what can often be volatile, expensive workloads. Deploying LLMs in the cloud is easy, but you'll pay for the privilege, especially if all your developers start doing it at once. And deploying these compute- and connectivity-hungry assets on your own premises is harder still, Dutta warns. How do you spec the hardware accordingly? How do you handle capacity planning and cost analysis?

"So how do you get that 'easy button' for deploying my large language models and all the other things you need to build AI agents?" he says. This need has sharpened as we've moved from simple chatbots, to RAG-based LLMs talking with private company data, to more complex agentic systems made of multiple models. This is where Nutanix Enterprise AI comes in, Dutta explains.
It's a single control point to run all of a company's LLMs and agentic endpoints with three objectives: simplicity, full control, and predictable cost. Nutanix Enterprise AI is now part of GPT-in-a-Box 2.0, the Nutanix full-stack solution for rapid generative AI deployments. The Enterprise AI part offers day-two operations and management capabilities for LLMs after customers have set up their pre-validated generative AI tools and use cases in GPT-in-a-Box 2.0.

The simplicity comes from the product's centralised architecture. It allows administrators to deploy LLMs from NVIDIA inference microservices (NIM) and Hugging Face, with options to upload custom models of their own, even in dark sites (disconnected environments). They can install and control these from a single point, either on bare-metal hardware of their own using Nutanix Kubernetes Platform, or on CNCF-certified Kubernetes environments in the cloud such as those from Google, Amazon, and Microsoft.

The full control and cost management aspects of Nutanix Enterprise AI are linked. After deployment, administrators can use Nutanix Enterprise AI to produce a secured API access token for each developer. Instead of accessing models directly, developers use these tokens to access endpoints, which are instances of models running on GPU-enabled infrastructure and exposed via a secured API. Admins can grant developers role-based access control to these endpoints.

That's a change from traditional, less mature approaches where developers could set up their own models autonomously - and it promises big gains in cost effectiveness. "On average for any application, you'll see about four to five LLMs," Dutta says. "Now, imagine 100 of us trying to set those up. Enterprise IT has to deal with security, an extra management headache - and rising costs." Handing admins the reins for these compute-intensive resources helps them to control model usage and manage costs more efficiently.
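The token-plus-endpoint pattern described above can be sketched in a few lines. This is an illustrative toy, not the Nutanix API: the function names (issue_token, call_endpoint), the endpoint names, and the role model are all assumptions made up for the example. The idea it shows is real, though - developers never touch models directly; they present a token, and access is checked against a role before the endpoint answers.

```python
# Hedged sketch of token-gated, role-controlled model endpoints.
# All names here are hypothetical; the model call itself is a stub.
import secrets

# Endpoint registry: which roles may call which model endpoint.
ENDPOINTS = {
    "support-summariser": {"roles": {"support"}},
    "code-assistant": {"roles": {"engineering"}},
}
TOKENS = {}  # token -> role, populated by the admin

def issue_token(role):
    """Admin issues a developer a secured token tied to a role."""
    token = secrets.token_hex(8)
    TOKENS[token] = role
    return token

def call_endpoint(token, endpoint, prompt):
    """Developers hit endpoints with a token instead of deploying models."""
    role = TOKENS.get(token)
    if role is None:
        raise PermissionError("unknown token")
    if role not in ENDPOINTS[endpoint]["roles"]:
        raise PermissionError(f"role {role!r} not allowed on {endpoint!r}")
    return f"[{endpoint}] response to: {prompt}"  # stub for the model call

tok = issue_token("support")
print(call_endpoint(tok, "support-summariser", "summarise ticket 123"))
```

Because every call passes through the token check, the admin layer sees - and can meter - all model usage, which is where the cost control in this approach comes from.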
"We've seen customers really appreciate the fact that there is one layer for the enterprise IT to have full control," Dutta explains.

As a vendor-agnostic solution Nutanix works on a range of hardware, but that hasn't stopped it crafting partnerships with specific hardware partners for tighter integration. That naturally includes the 500lb gorilla in the room: NVIDIA. Nutanix supports NVIDIA across its bare metal and Kubernetes deployments. Nutanix Enterprise AI ties into NVIDIA NIM for deploying and operating generative AI models. The Nutanix software makes it easier to deploy NIMs on GPUs wherever they're needed, from data centres to public clouds. It also supports NVIDIA's Dynamo, a distributed inference engine with caching capabilities.

"These are amazing Lego blocks. But if 100 people are doing the same thing, it causes sprawl," Dutta says. Managing that sprawl via Nutanix Enterprise AI tames it for customers.

Working with NVIDIA enables Nutanix to validate and certify NIMs against its hardware partners' servers and GPUs, among other devices. That ensures the NIMs are ready for operation wherever Nutanix's customers decide to run them. Nutanix has also certified its Enterprise AI software against NVIDIA's own AI Enterprise software stack, including NVIDIA's Blueprints for common use cases and its full inference engine suite.

Dutta says that this is just the beginning for agentic AI, which he envisages evolving at a rapid pace. Reasoning models open up new possibilities as they become more capable, he says. "That kind of an analytical thinking process when applied to AI agents means that we are not very far away from creating digital minions," he says. He's quite happy with the idea of being a real-world Gru (but without the villainy, of course), directing hundreds of cute little agentic characters in his digital workforce.
Individual minions won't be good at everything, warns Dutta: "Creating a minion that's good for everything is very hard and expensive from a computational and energy point of view." Instead, he foresees each agentic minion excelling at a relatively narrow task. Perhaps an appointment-booking agent here, one that's good at summarising ticket histories there. And maybe another that is adept at performing multi-source retrieval and ranking for research.

As these agentic systems - essentially beefed-up AI-powered microservices - catch on, companies will need the ability to manage the fabric of compute-hungry services they create. So Dutta sees a bright future for Nutanix as it helps customers manage these services more efficiently for the developers who use them.
[2]
AI agents: from co-pilot to autopilot
AI is moving from "co-pilot" to "autopilot". The development of generative artificial intelligence is increasingly focused on "agentic AI": the use of AI agents that perform tasks autonomously, either within fixed parameters or to achieve goals set by the user.

AI agents are not new, but they are becoming ever more sophisticated. In their basic form they are simply tools built to carry out tasks such as answering queries according to a script, as chatbots do, or fetching information from the web. These functions are limited, requiring no follow-up action without further input. Such reactive AI systems operate solely on programmed responses. More complex AI agents, with autonomy and adaptability, have also been around for a long time. They control home thermostats and automate factory processes.

This type of technology is, however, rapidly developing capabilities beyond fetching and delivering information or performing distinct tasks. AI agents powered by large language models (LLMs) can analyse data, learn from it and make decisions based on both programmed rules and information acquired through interaction with their environment. Such adaptable AI can perform increasingly complex actions in pursuit of a goal and without taking a prescribed path. Using advanced machine learning and neural networks, it can understand context, analyse and respond to dynamic situations, learn from experience and use problem-solving and reasoning to make strategic decisions.

Predictive capabilities based on historical statistical analysis add another layer, enabling AI agents to plan, automate and execute tasks as well as to make informed decisions with specific goals in mind. They carry out their tasks after being given natural language prompts and without constant user input. They can also be designed to check each other's work in an iterative process that improves quality and reliability.
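The "agents checking each other's work" pattern mentioned above is usually implemented as a generate-critique loop: one agent drafts an answer, a second reviews it, and the draft is revised until the critic approves or a retry budget runs out. The sketch below is a minimal, hypothetical illustration of that loop - both "agents" are stubbed placeholder functions, not real model calls.

```python
# Hedged sketch of an iterative generator/critic loop between two agents.
# generate() and critique() are stubs standing in for two LLM-backed agents.

def generate(task, feedback=None):
    """Stub generator agent; a real one would prompt an LLM with the feedback."""
    suffix = " (revised)" if feedback else ""
    return f"draft answer for {task!r}{suffix}"

def critique(answer):
    """Stub critic agent: return feedback text, or None when satisfied."""
    return None if "(revised)" in answer else "please revise for accuracy"

def solve(task, max_rounds=3):
    """Iterate until the critic approves or the retry budget is exhausted."""
    feedback = None
    for _ in range(max_rounds):
        answer = generate(task, feedback)
        feedback = critique(answer)
        if feedback is None:   # critic approves; stop iterating
            return answer
    return answer              # budget exhausted; return the best effort

print(solve("summarise Q3 sales"))
```

The bounded loop is the important design choice: without a retry budget, two imperfect agents can disagree forever, which is one reason deployments keep a human supervisor in reach.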
Several developments have enabled AI agents to become more complex while at the same time becoming easier to use. Generative AI has provided a natural language interface, broadening access to AI, especially for users who are less tech-savvy. Generative AI interprets a user's prompt, then other AI fulfils the task. Google says: "Generative AI is just one piece of the AI puzzle. Other AI technologies, like predictive AI, vision AI, and conversational AI, are crucial for building sophisticated AI agents."

Advances in computing power and memory have enabled large language models and more sophisticated machine learning. The understanding of context and the ability to plan have improved as AI systems learn from more data and improve their capacity to remember interactions. These are the foundations for AI agents, with the ease of interaction accelerating development as more users gain access. At the same time AI itself is speeding up the innovation cycle, refining its outputs and creating iterative processes at ever higher speeds.

AI agents can speed up analysis and decisions as well as taking over certain functions from employees, but they still fall short of full autonomy. Cassie Kozyrkov, the founder and chief executive of Decision Intelligence and formerly chief decision scientist at Google, says AI agents' main role in an enterprise still lies in taking over repetitive tasks with "well understood and well designed processes" that do not require "creative spin".

While there is huge potential for agentic AI to perform ever more complex tasks, Pascal Bornet, an expert in automation and author of Agentic Artificial Intelligence, points to a "significant gap" between hype and reality. Even with a clear directive, systems cannot yet perform complex tasks end to end, especially in nuanced or novel situations, without some human oversight. That said, the field "is advancing rapidly".
Bornet likens development to the progression from fully manual to fully autonomous cars, which is rated from level zero to level five. Currently, autonomous cars operate at levels two to four, depending on the environment. Automation can handle many tasks but human oversight, and occasional intervention, is needed. AI agents are at a similar stage. Most operate at levels two or three, with some "specialised systems" reaching level four in tightly defined domains. Level five, where agents fully understand, plan and execute complex missions with minimal human input across any domain or corporate boundary, remains theoretical.

Given the challenges involved in folding capabilities into a coherent system, fully integrated multimodal agents are some way off, but Bornet says the building blocks are in place. He says some applications, such as that developed for veterinarians by Pets at Home, the UK FT250 company, exemplify audio processing, but multimodal systems will require a sophisticated orchestration of agents with different types of expertise.

While some sectors have adopted agentic AI more than others, as covered below, it can be put to work in functions that are common to most businesses. Bornet says the opportunity is systemic. "Agentic AI isn't coming for a [single] department, it's coming for all of them. Every workflow with friction is a use case waiting to be transformed."

Currently agents are used mostly in internal roles to gain efficiency and savings rather than top-line growth. A 2025 report from UK Finance co-authored with Accenture said: "Most near-term uses involve single-agent deployments targeting productivity and efficiency gains and improvements to customer and colleague experience". The trade body found "relatively few" examples within financial services aimed at increasing sales or revenue. It also noted that most deployments were "closely monitored by an employee acting as a competent supervisor".
Across industry, AI that can reduce the time spent on mundane work to "free up" employees for more creative or skilled tasks has been adopted faster than other applications. Bornet and his team have gathered data from 167 companies in various sectors that have deployed what he classifies as level three LLM-based agents in production environments. Customer service, internal operations, and sales and marketing functions have seen the highest adoption, with benefits ranging from time savings of 12 to 30 per cent in customer service and 30 to 90 per cent in internal operations, to revenue increases of nine to 21 per cent for sales and marketing teams.

It should be noted that the use of AI agents alongside humans does not always enhance performance. An analysis of a customer service software company by the US National Bureau of Economic Research found that AI both improved issue resolution and cut the time taken. However, it was newer staff who benefited most, with the AI electronically transferring the knowledge of experienced people. The performance of older hands did not improve.

The reverse can be true in roles that are highly skilled. Attila Kecsmar, the co-founder and chief executive of Antavo, the AI loyalty cloud programme platform, says that in more technical areas, such as programming, those who use AI without an adequate understanding of the output will struggle, while the productivity and speed of competent workers will be supercharged.

Customer service has been the most visible deployment of AI from a consumer perspective, but feedback has been mixed. Industry proponents talk up how well chatbots perform, but customer surveys suggest the opposite. Preferences could change as customer service agents develop and digital natives make up more of the consumer base. Better responses and 24/7 support may improve customer perceptions. Older agents answered queries based on set scripts that quickly ran out of road, especially with complex queries.
Newer agents, given their ability to remember and respond to dynamic inputs, are more responsive. They are able to act on up-to-date client data as well as to recall historical interactions with customers. With agentic AI, customer service interfaces have developed beyond dial-up chatbots. Google Gemini is behind Volkswagen's MyVW app, a virtual assistant that answers a driver's queries about their car.

The application of AI in coding is well documented. In a report by the McKinsey consultancy, Lenovo said that its engineers' speed and quality of code production improved by 10 per cent. Kecsmar agrees that agent-supported engineers can achieve much more, but says this in turn will lead to rising expectations for human productivity and performance.

Given natural language interfaces, it is increasingly feasible for laypeople to write code. This is the real revolution in agentic AI, Kozyrkov says. "Before, you had to go and get yourself schooled in the arcane arts of some new language and now you don't -- you speak your mother tongue and it works." While this presents an opportunity, she cautions that it is also one of the greatest risks in deploying AI in an enterprise. "Unfortunately the mother tongue is vague and not everybody knows when they're being ambiguous. Now you can program a machine without thinking it through, so it's hardly a surprise that you get unintended consequences."

As covered in our report on personalisation and marketing, AI has hugely expanded the reach of marketing departments, enabling mass communications to be targeted at ever smaller segments. AI agents can take this further. Antavo has developed an AI agent for its brand customers which helps them to devise and communicate loyalty programmes and campaigns. It can decide an appropriate approach for a brand in any sector, analyse data, and give ideas, illustrated with charts, on how to optimise and develop a programme.
It can also look inwards, finding and delivering relevant information to help customer service employees resolve consumers' queries.

In human resources, AI agents can be used in hiring, scheduling meetings, retention and management, predicting turnover and identifying where training may be required. These agents are capable of executing simple tasks with minimal supervision, such as scheduling meetings with clients, sending standard emails and handling general client communications. Claude, Anthropic's AI model, can find information from many sources on a computer so that it can complete a form.

In finance, applications include AI systems that can make trading decisions based on real-time data analysis or systems that suggest investment strategies based on a client's profile. AI can also help with identifying fraud, flagging its suspicions in real time.

In healthcare, autonomous diagnostic tools can identify problems using patient histories and images, recommend personalised treatments, monitor patient health and recommend or remind people about follow-up actions. AI agents can be deployed in robotic-assisted surgery to improve control and accuracy. Pattern recognition, deep learning and computer vision all enhance machines' ability to adjust surgery incisions in real time. Systems such as Philips' IntelliVue Guardian manage post-surgical complications by providing early warnings for those patients most at risk.

In law, in addition to simple and repetitive tasks such as contract drafting, agents can advise on cases. Based on analysis of historical data or judges' rulings they can predict potential outcomes to a suit and suggest arguments. Already A&O Shearman, the international law firm, is using an AI tool created in collaboration with Harvey, a start-up. This makes use of a business's financial information to assess in which jurisdictions a client needs to file in the event of a merger. It then identifies any missing data and drafts the information requests for each party.
While autonomous cars have yet to reach the mainstream, autonomous lorries are about to arrive. Aurora Innovation, which works with Volvo, Uber and FedEx in the US, plans to use 10 driverless lorries between Dallas and Houston.

AI agents are also used in manufacturing for monitoring and maintaining equipment and optimising processes. They can perform quality control on both inputs and outputs with greater consistency than humans. Besides the chatbots deployed in customer service, AI agents can be used along the supply chain to monitor and manage inventory levels based on historical data and to predict trends and demands.

There are various issues that enterprises need to consider when adopting AI. Companies operating with legacy tech, or which have inadequate or inconsistent data, will find it harder to make progress. Any data quality issues experienced when training agents will be exacerbated by "slop", the colloquial name for the proliferation of LLM-created content. EY says this could be solved in part by agents sourcing information from several inputs rather than relying on static scraped data. For instance, iterative AI could gather data from wearables, which would layer current and contextual data on top of historical information.

Connection within and between companies is hampered by data incompatibilities as well as the inadequacies of existing application programming interfaces. Bornet says the lack of a standard protocol presents a hurdle to multi-agent systems that might otherwise cross corporate boundaries. Kecsmar believes this problem may itself be solved by agents. "In future the agents developed around data exchange skills will be able to create their own data exchange. They will be uploaded with how their host company communicates data and they will have a tool call to interface data between different sources."
Trust is a problem in several areas, for instance in sectors where the options for reversal are limited. "'Fully automate and leave it' in the financial services industry is a terrible idea," Kozyrkov says, adding that "the golden rule of AI is that it makes mistakes". Consumers might be unwilling to let agents have autonomy over their bank accounts or credit cards. There is also a lack of trust among leaders in terms of AI performance, and among workers who face the risk of replacement. Once systems can link up across business boundaries, will companies trust external agents?

Use of untrammelled AI also adds to cyber security threats by increasing points of access and the risk of unexpected actions. Kozyrkov says: "One of the top suggestions is: limit its access. Don't give it any data that you wouldn't want leaked." Granting AI the same access as a human employee dramatically increases the attack surface, making systems more vulnerable.

Constraint on computing capacity is a further hurdle. Despite the investment in infrastructure, the competition for stretched resources is fierce. Still, no user pays what it actually costs to run an AI query, even in energy terms, a point raised at an FT Climate Capital Council round table last year. For companies using commercial services, current pricing is based on the number of employees -- but what will happen if staff levels shrink due to AI adoption?

Companies also need to consider the ethical implications of AI adoption. Research at Cambridge University notes that -- if they cannot already -- agents may soon be able to predict our habits and spending patterns and influence or manipulate them, although this is likely to be of greater concern to consumers. Accountability is another imponderable. With whom does it lie when agents are carrying out end-to-end tasks without human intervention, or with connections to other companies? As with any new technology, it is important to identify business needs first.
Bornet says the most sophisticated option is not necessarily always the best -- successful implementation lies in choosing the right level for each application. "Consider a financial services company implementing AI agents," he says. "They might choose level one or two agents for transaction processing, where predictability and audit trails are crucial. However, they might implement level three agents for customer service, where adaptability and context awareness are more valuable than strict control."

Keeping an agent's function as simple as possible means there is less scope for problems. Bornet recommends starting with repetitive tasks such as meeting documentation and follow-ups.

Transparency is also key. Bornet says his team has encountered the consequences of both a lack of control over AI adoption and an employee's unchecked enthusiasm. This ranges from "worker anxiety and resignations in a manufacturing company to reputational damage when agents made unauthorised decisions in a financial firm". They found that inadequate technical knowledge, governance, or change management stymied adoption in several cases.

Kozyrkov, while "incredibly excited for all the ways AI can be used to fuel innovation", cautions that it must be used wisely. It is vital to have safeguards and to clearly define objectives to avoid unanticipated consequences. "The future is modularisation. You wouldn't trust the smartest human to do everything, so why would you trust an AI?" She sees people having a central role, even in a future with AI. "If your goal is to remove humans as quickly as possible, you may find yourself removing key human functions without perhaps realising what you've removed." The most fruitful results, she says, will come to those who see AI agents as a way to "elevate the worker" rather than viewing the latter as "an overseer for the agentic system".
Designing processes with AI in mind will give an advantage, Kecsmar says, advising that companies should think about developing or deploying AI-native rather than AI-enabled tools. The effect of "native AI" is more meaningful than what he calls "uplift AI", where agents such as chatbots simply make jobs easier. This means building AI capabilities from the ground up, not just seeing them as a bolt-on. Companies should think of AI as a strategic capability and rethink processes to optimise the function of AI agents.

It is clear that AI is already disrupting workforces. Klarna, the Swedish fintech company, said in late 2024 that it would be able to halve its employee count by using AI, while customer services companies have been changing the mix of human and AI agents. The logistics sector has also seen the effect of AI: Amazon has used autonomous robots in its warehouses for years.

This potential for AI agents to unseat entire work teams might delay their adoption in existing businesses, which will give an advantage to start-ups that build agents into processes and systems. For such AI-native companies, agents will be integrated into workflows from day one, and they will also act as virtual workers with specialisations previously outside the range of most small companies.

Kecsmar says Antavo adopted this "AI-first" mindset in developing its agent to help customers plan their loyalty programmes. Rather than design a technology that takes step-by-step inputs to create a loyalty strategy, the agent digests a brand's goals and devises an execution plan. Kecsmar believes such tools will turn any company strategy into an executable plan. Ultimately AI might also help to devise plans to develop products and markets, shifting its contribution from cost and efficiency to top-line gains. Further advances will be possible once agents can talk to each other across data and company boundaries.
Kecsmar believes people will then be able to command specialised agents from different providers to work together via an "orchestration layer". For instance, agents from a marketing specialist could talk to those from point-of-sale and loyalty specialists to assess a customer's data and devise a campaign. This could threaten horizontal workflow managers whose selling point is interoperability, for instance in third-party logistics fulfilment or customer relationship management. In a sign of where things might head, Klarna said it would abandon its use of Workday and Salesforce and develop its own software using AI.

Not everyone agrees. Kozyrkov says many software-as-a-service companies are building their own agents. "It will likely make a lot more sense for you to use Agentforce over building your own agent unless there's some very compelling reason why you wouldn't want a company that you already trust with that data to be helping you save time using its products." Connecting that company's agents to the rest of your business is another matter.

It is clear that there is potential for the use of AI agents, but companies must have a clear, needs-based strategy and be fully aware of the risks and how to mitigate them. For companies that are early adopters of more advanced agents there will be huge benefits. These systems learn as they go along, which means they improve with time, providing even more advantages than previous, more static technologies. "AI agents create what we call 'compounding intelligence advantages'," Bornet says. "Early adopters will train agents faster, redefine business models and develop AI expertise," leaving behind any companies that delay.

"AI agents are really going to help those who know what they need done, what it looks like when it's done and have a way to limit surprises," Kozyrkov says.
AI agents are evolving from simple task performers to complex, autonomous decision-makers, promising to revolutionize enterprise applications. This development is driving changes in infrastructure needs and deployment strategies.
AI technology is rapidly evolving from "co-pilot" to "autopilot" with the emergence of more sophisticated AI agents. These agents, powered by advanced large language models (LLMs), are capable of performing complex tasks autonomously, setting sub-goals, and adapting to context in real time [1]. Unlike earlier AI models that focused on basic tasks, these new agentic AI systems can handle multi-step problems sequentially, applying logic to novel situations [1].
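The agentic loop described above (set sub-goals, act, reflect, adapt) can be sketched as pseudocode-style Python. This is a minimal illustration under stated assumptions, not a real framework: `plan`, `act`, and `reflect` are stubs where a production agent would call an LLM or external tools.

```python
# Minimal sketch of an agentic loop: plan sub-goals, act on each,
# then reflect on the output before moving on. All three functions
# are stubs; a real agent would back them with model calls.

def plan(goal):
    """Break a high-level goal into ordered sub-goals (stubbed)."""
    return [f"{goal}: step {i}" for i in range(1, 4)]


def act(subgoal):
    """Carry out one sub-goal, possibly via an external tool (stubbed)."""
    return f"result of ({subgoal})"


def reflect(subgoal, result):
    """Check the output before accepting it; retry logic would go here."""
    return subgoal in result


def run_agent(goal):
    results = []
    for subgoal in plan(goal):
        result = act(subgoal)
        if reflect(subgoal, result):  # only keep verified outputs
            results.append(result)
    return results
```

The reflect step is what separates this pattern from a simple tool call: the agent evaluates its own intermediate results rather than passing them straight through.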
AI agents are becoming increasingly adept at analyzing data, learning from it, and making decisions based on both programmed rules and acquired information. They can understand context, respond to dynamic situations, and use problem-solving and reasoning to make strategic decisions [2]. In enterprise settings, these agents are being deployed for tasks ranging from evaluating customer complaints to automating complex workflows [1].
The deployment of agentic AI presents significant challenges. Each application typically employs multiple models simultaneously, tailored for specific roles such as general thinking, inferencing, embedding, and re-ranking [1]. This complexity, coupled with the compute-intensive nature of these models, necessitates robust infrastructure support.
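One way to picture the multi-model setup described above is a small router that maps each role to a deployed model endpoint. This is an illustrative sketch only; the model names and the `ModelRouter` class are invented for the example.

```python
# Illustrative sketch (not a real framework): one agentic application
# routing requests across several role-specific model endpoints.
from dataclasses import dataclass


@dataclass
class ModelEndpoint:
    name: str
    role: str  # e.g. "reasoning", "embedding", "re-ranking"


class ModelRouter:
    def __init__(self, endpoints):
        self._by_role = {e.role: e for e in endpoints}

    def route(self, role):
        """Return the endpoint serving a given role, or raise if missing."""
        try:
            return self._by_role[role]
        except KeyError:
            raise ValueError(f"no model deployed for role {role!r}")


# Hypothetical deployment: three concurrent models behind one application.
router = ModelRouter([
    ModelEndpoint("llama-70b", "reasoning"),
    ModelEndpoint("bge-large", "embedding"),
    ModelEndpoint("bge-reranker", "re-ranking"),
])
```

Even this toy version shows the operational point: one agentic application implies several models running concurrently, each of which must be deployed, scaled, and monitored.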
Companies like Nutanix are developing solutions to address these challenges. Nutanix Enterprise AI, for instance, offers a unified platform designed to simplify, secure, and scale the deployment of LLMs and agentic workflows across various cloud environments [1].
While AI agents show immense potential, there remains a gap between hype and reality. Most current AI agents operate at levels two or three on a five-level autonomy scale, with some specialized systems reaching level four in tightly defined domains [2]. Full autonomy (level five) across any domain or corporate boundary remains theoretical.
AI agents are not limited to specific departments but have the potential to transform various business functions. They are currently used primarily for internal roles, focusing on efficiency gains and improved customer and colleague experiences [2]. However, their application for increasing sales or revenue is still relatively limited.
As AI agent technology continues to advance, we can expect to see more sophisticated, multimodal systems capable of handling increasingly complex tasks. While full integration and autonomy are still on the horizon, the building blocks are in place for significant advancements in the near future [2].
References
[1]
[2]
AI agents are gaining widespread adoption across industries, but their definition and implementation face challenges. Companies are rapidly deploying AI agents while grappling with issues of autonomy, integration, and enterprise readiness.
5 Sources
AI agents are emerging as the next frontier in artificial intelligence, promising to revolutionize how businesses operate and how technology is developed and utilized. This story explores the current state of AI agents, their potential impact, and the challenges that lie ahead.
4 Sources
A comprehensive look at the current state of AI adoption in enterprises, covering early successes, ROI challenges, and the growing importance of edge computing in AI deployments.
4 Sources
Snowflake and SAP introduce AI agents and data unification strategies, highlighting the growing importance of AI in enterprise operations and data management.
2 Sources
Agentic AI is gaining traction in enterprise software, promising autonomous decision-making capabilities. However, safety, reliability, and technical challenges temper the enthusiasm, limiting its current applications to non-critical business processes.
2 Sources
© 2025 TheOutpost.AI All rights reserved