2 Sources
[1]
Trust is the new currency in the AI agent economy
As autonomous AI proliferates, we must rethink trust across three different levels.

As Nobel Laureate Kenneth Arrow once observed, every economic transaction has an element of trust. Today, as more transactions are handled by AI agents, that trust is facing new pressures. Global trust levels are in decline, while the presence of AI agents in our daily lives and business systems is rapidly increasing. In a pessimistic scenario, this could erode confidence. In an optimistic one, it opens pathways to reimagine trust and to fuel economic growth.

The connection between societal trust and economic performance is well documented. According to Deloitte Insights, a 10-percentage-point increase in the share of trusting people within a country should raise annual per capita real GDP growth by about 0.5 percentage points. However, that relationship is evolving as we move beyond human-to-human interactions toward agentic exchanges. Trust will continue to shape outcomes in the AI-powered economy. The real question is: what kind of trust will matter most - and how do we build it?

The digital economy is becoming agentic. AI agents are moving from assistive tools to autonomous entities, executing transactions, allocating resources and making decisions. AI has matured over decades, but today marks a tipping point. According to Gartner's Hype Cycle for Artificial Intelligence, AI agents are at the very peak of expectations, with an expected implementation period of 2-5 years. By 2028, roughly 33% of enterprise software applications will include agentic AI, with at least 15% of day-to-day work decisions made autonomously by AI agents. The AI agent economy will be a fully fledged reality, requiring entirely new forms of accountability, collaboration and trust.

At the heart of trust are two foundational components: competence (the ability to execute) and intent (the purpose behind actions). While few now question the competence of advanced technologies, intent remains a foggy frontier. Research shows that trust in AI varies significantly across regions and demographics. According to research by KPMG and the University of Melbourne, people in advanced economies are less trusting of AI (39% vs. 57%) and less accepting of it (65% vs. 84%) than people in emerging economies. From a sociological perspective, trust in these environments remains grounded in interpersonal relationships or traditional institutions.

The latest Edelman Trust Barometer Global Report describes the current environment as a "crisis of grievance". The greater the sense of grievance, the deeper the suspicion toward AI. Individuals who feel a heightened sense of injustice or discontent are significantly less likely to trust AI - and are notably more uneasy with its use by businesses. As trust in institutions erodes, so too does comfort with AI's growing role in business and governance. Understanding how trust is formed will be essential. As autonomous agents proliferate, we must rethink how trust functions across three key domains.

One of the greatest challenges to trust in AI remains a lack of clarity around agent intent. For example, autonomous vehicles may be statistically safer than human drivers, yet many people still distrust them because of uncertainty about the values guiding their decisions. This points to a broader need for transparent, explainable intent within AI systems - not just capabilities, but motivations.
From a systems perspective, we also face technical challenges: how to ensure seamless and secure data exchange, how to verify agent identity across platforms, and how to create common protocols that transmit not just information, but trust itself. Perhaps the most difficult challenge, however, lies in mindset. As the venture firm Sequoia Capital has observed, success in the agent economy requires more than new technology - it demands a new kind of leadership, one that understands what AI agents can and cannot do, and how they should be governed.

The next five years offer a narrow but critical window to shape how trust functions in a world of autonomous agents. The global AI agent market is projected to reach $50.31 billion by 2030, according to Grand View Research. As the agent economy evolves, the stakes will be higher than ever. Fraud and security threats could multiply exponentially unless robust trust frameworks are established. Experts called 2024 "The Year of the Deepfake"; as autonomous AI agents develop further, the opportunities for fakes and fraud will only expand, threatening both trust and economic performance. Deloitte's Center for Financial Services, for example, predicts that GenAI could enable fraud losses to reach $40 billion in the United States alone. At the scale of the global economy, such losses would exceed the size of the AI agent market itself.

We can choose to let mistrust grow, driven by confusion, manipulation and digital overload. Or we can build new trust architectures, grounded in clarity, consistency and shared human values, augmented by intelligent agents.
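Neither the article nor any source here prescribes a mechanism for verifying agent identity across platforms. One minimal sketch of the idea is a signed identity assertion checked against a registry of trusted public keys; the function and agent names below are hypothetical, and the use of Ed25519 signatures via the third-party cryptography package is an illustrative assumption, not something the article specifies.

# Illustrative sketch only: not a mechanism described in the source article.
# Requires the third-party "cryptography" package (pip install cryptography).
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_assertion(private_key: Ed25519PrivateKey, agent_id: str) -> dict:
    # Agent side: produce a signed, timestamped identity assertion.
    claims = {"agent_id": agent_id, "issued_at": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims, "signature": private_key.sign(payload).hex()}


def verify_assertion(registry: dict[str, Ed25519PublicKey],
                     assertion: dict,
                     max_age_s: int = 300) -> bool:
    # Counterparty side: check the signature against a registry of trusted keys.
    claims = assertion["claims"]
    public_key = registry.get(claims["agent_id"])
    if public_key is None:                               # unknown agent: no basis for trust
        return False
    if time.time() - claims["issued_at"] > max_age_s:    # stale assertion
        return False
    payload = json.dumps(claims, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(assertion["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# Usage: a platform registers an agent's public key, then verifies its assertion.
key = Ed25519PrivateKey.generate()
registry = {"travel-booking-agent": key.public_key()}    # hypothetical agent name
assertion = sign_assertion(key, "travel-booking-agent")
print(verify_assertion(registry, assertion))             # expected: True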
[2]
The great paradox of AI: trust is falling as its value soars
A new Capgemini report projects AI agents could generate $450 billion in economic value by 2028, but trust in them is collapsing.

Agentic AI systems are expected to generate immense economic value, yet widespread organizational adoption is being held back by a significant and growing trust deficit. According to new research from Capgemini, AI agents are poised to generate a total economic value of $450 billion by 2028 in surveyed countries, a figure that includes both revenue uplift and cost savings. Despite this potential, the report, titled Rise of agentic AI: How trust is the key to human-AI collaboration, reveals that trust in autonomous systems is declining sharply, and most organizations are still in the earliest stages of implementation.

While the benefits of agentic AI are clear -- including elevated operational efficiency and accelerated business growth -- the current state of adoption shows that the industry is still in its infancy. Only 2% of organizations have implemented AI agents at scale, with another 12% having achieved partial-scale implementation. The majority are much further behind: 23% are in the process of piloting some use cases, 30% have just started exploring the potential, and another 31% are only considering experimenting within the next six to twelve months.

When organizations do choose to implement this technology, most prefer to use agents that are already available within existing enterprise solutions. The research indicates that 62% of organizations prefer to partner with solution providers like SAP or Salesforce and use their in-built agents. A smaller but significant group, 53%, opts for a hybrid approach that combines in-house development with external solutions. The business functions expected to see the greatest adoption of AI agents are customer service, IT, and sales. Looking ahead, organizations expect the role of these autonomous systems to grow, with the percentage of processes handled by them projected to increase from 15% in the next year to 25% within one to three years.

Despite the clear potential, several significant barriers are impeding the widespread adoption of agentic AI. The most critical of these is a sharp decline in trust. The share of organizations expressing trust in fully autonomous AI agents has plummeted from 43% to just 27% in the past year alone. This erosion of confidence is coupled with significant ethical concerns; nearly two in five executives believe the risks of implementing AI agents may outweigh the potential benefits.

This trust deficit is compounded by a lack of internal knowledge and technical readiness. Only half of the organizations surveyed report having adequate knowledge and understanding of AI agents and their capabilities. The technological foundation required to support these systems is also lagging. Fewer than one in five organizations report having high levels of data readiness, and a vast majority -- over 80% -- lack a mature AI infrastructure.

To overcome these issues, the research suggests focusing on human-agent collaboration as the key to building trust. Organizations report that integrating human involvement with processes handled by AI agents delivers significant benefits, including a 65% increase in employee engagement on high-value tasks and a 53% increase in creativity. The dominant model for this collaboration is also expected to evolve. In the next 12 months, most organizations envision AI agents augmenting human team members.
However, within one to three years, the prevailing model is expected to shift toward AI agents acting as integrated members within human-supervised teams.
As AI agents are poised to generate $450 billion in economic value by 2028, a growing trust deficit threatens widespread adoption, highlighting the need for new trust architectures in the evolving AI-powered economy.
The AI-powered economy is rapidly evolving, with AI agents moving from assistive tools to autonomous entities capable of executing transactions, allocating resources, and making decisions. According to Gartner's Hype Cycle for Artificial Intelligence, AI agents are at the peak of expectations, with an expected implementation period of 2-5 years [1]. By 2028, it's projected that about 33% of enterprise software applications will include agentic AI, with at least 15% of day-to-day work decisions made autonomously through AI agents.
A new report from Capgemini projects that AI agents could generate a staggering $450 billion in economic value by 2028 in surveyed countries, encompassing both revenue uplift and cost savings [2]. Despite this enormous potential, the current state of adoption remains in its infancy. Only 2% of organizations have implemented AI agents at scale, with another 12% achieving partial-scale implementation. The majority of organizations are still in the early stages, with 23% piloting use cases, 30% just starting to explore the potential, and 31% considering experimentation in the near future.
While the economic potential of AI agents is clear, a significant trust deficit is emerging as a major barrier to widespread adoption. The share of organizations expressing trust in fully autonomous AI agents has plummeted from 43% to just 27% in the past year alone [2]. This erosion of confidence is coupled with ethical concerns, with nearly two in five executives believing that the risks of implementing AI agents may outweigh the potential benefits.
Several factors contribute to the slow adoption and trust issues surrounding AI agents:
Lack of internal knowledge: Only half of the organizations surveyed report having adequate knowledge and understanding of AI agents and their capabilities [2].
Technical readiness: Fewer than one in five organizations report having high levels of data readiness, and over 80% lack a mature AI infrastructure [2].
Uncertainty about agent intent: The lack of clarity around AI agent intent remains one of the greatest challenges to trust in AI [1].
To overcome these challenges and unlock the full potential of the AI agent economy, several key areas need to be addressed:
Transparent and explainable intent: There is a need for AI systems to have clear, explainable motivations, not just capabilities [1] (a rough illustration follows this list).
Technical solutions: Ensuring seamless and secure data exchange, verifying agent identity across platforms, and creating common protocols for transmitting trust itself are crucial [1].
New leadership mindset: Success in the agent economy requires leadership that understands what AI agents can and cannot do, and how they should be governed [1].
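Neither source defines what a machine-readable statement of intent would look like in practice. As a rough illustration only, the sketch below pairs each agent action with a declared goal and constraints that a human reviewer or auditing service could check; the IntentDeclaration and AgentAction structures, their field names, and the budget-style constraint check are assumptions made for the example, not mechanisms described in the reports.

# Illustrative sketch only: neither report defines an intent format.
from dataclasses import dataclass, field


@dataclass
class IntentDeclaration:
    # Hypothetical machine-readable statement of why an agent is acting.
    agent_id: str
    goal: str                                    # stated purpose of the action
    constraints: dict[str, float] = field(default_factory=dict)  # e.g. spending caps


@dataclass
class AgentAction:
    # An action paired with the intent it claims to serve.
    description: str
    effects: dict[str, float]                    # measurable effects, e.g. {"spend_usd": 120.0}
    intent: IntentDeclaration


def within_declared_intent(action: AgentAction) -> bool:
    # Reviewer-side check: do the action's effects stay inside declared constraints?
    for key, limit in action.intent.constraints.items():
        if action.effects.get(key, 0.0) > limit:
            return False
    return True


# Usage: a purchasing agent declares its goal and a spending cap up front,
# so a reviewer can verify that each action is consistent with that intent.
intent = IntentDeclaration(
    agent_id="procurement-agent-7",              # hypothetical agent name
    goal="restock office supplies below budget",
    constraints={"spend_usd": 500.0},
)
action = AgentAction(
    description="order 10 boxes of paper",
    effects={"spend_usd": 120.0},
    intent=intent,
)
print(within_declared_intent(action))            # expected: True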
Despite the current trust deficit, the research suggests that focusing on human-agent collaboration could be key to building trust and realizing the benefits of AI agents. Organizations report that integrating human involvement with processes handled by AI agents delivers significant benefits, including a 65% increase in employee engagement on high-value tasks and a 53% increase in creativity [2].
The model for this collaboration is expected to evolve. In the next 12 months, most organizations envision AI agents augmenting human team members. However, within one to three years, the prevailing model is expected to shift toward AI agents acting as integrated members within human-supervised teams [2].
Summarized by Navi