2 Sources
[1]
MCP Doesn't Stand For Many Critical Problems...But Maybe It Should For CISOs
A2A and MCP: What They Are

The emerging agentic AI market is experiencing its infrastructure inflection point. Enterprise builders are already exhausted by the prospect of hard-coding all of the tools and data an agent needs to use. This hard coding creates fragile systems that are difficult to secure and inflexible to change. Today, communication and interoperability standards are emerging at two foundational layers: intra-agent with the Model Context Protocol (MCP) and inter-agent with Agent-to-Agent (A2A) protocols. MCP controls how agents manage and share structured memory, task state, and environmental assumptions across sessions and models. A2A protocols specify the rules for inter-agent communication, including negotiation, delegation, and task synchronization. Though MCP and A2A can enable enterprise agent interoperability, they also create new vulnerabilities and challenges in security, performance, and governance.

What They Aren't

Knowing what A2A and MCP are is only half the picture; it's just as important to clarify what these protocols aren't, because some security pros have misinterpreted each of them. These protocols don't orchestrate agents; they enable interoperation. Think of A2A like RPC or Kafka in a microservices architecture -- it's a transport and serialization layer, not a scheduler or a source of truth. Similarly, MCP isn't a governance layer. It's more like a distributed cache or a shared memory abstraction, akin to how systems like Apache Ignite or Memcached provide fast, ephemeral access to state but don't enforce business logic or access policy. If you treat MCP like a control plane, you'll end up with brittle coupling and security blind spots. One common joke is that the "S" in MCP stands for security. Hat tip to our colleague Carlos Casanova for the title, based on his comment that MCP should stand for "Many Critical Problems."
The real control plane for agents (when one exists) will likely emerge as a higher-order construct. It will be layered on top of these protocols, with its own lifecycle, observability, and trust models.

As Always, Security Forces Tradeoffs

Security is never free in distributed systems. It taxes performance, flexibility, and (sometimes) reliability. The same is true for agentic architectures. Modifying the output of an LLM to meet a new security standard might result in significantly higher token use because of a prompt change. In A2A systems, introducing authentication and authorization mirrors adding TLS to microservices: you gain confidentiality and trust at the expense of latency and certificate-management overhead.

MCP faces similar constraints. Imagine it as a distributed cache or shared state layer used by agents to store and retrieve context. If that context must be signed, versioned, and verified for integrity, the design suddenly resembles a blockchain-light architecture: you gain tamper resistance, but you pay in throughput and latency. Stale or poisoned context can propagate errors across the agent mesh unless strong validation and rollback mechanisms exist. In scenarios where two agents operate within separate execution environments and collaborate on a task without a shared trust anchor or federated identity, they typically need to 1) negotiate credentials, 2) validate scopes, and 3) establish secure channels. This process is similar to service mesh architectures such as Istio, in which mutual TLS (mTLS) secures communication between pods but introduces additional complexity for routing, observability, and debugging.

MCP Security Flaws Identified

The Model Context Protocol (MCP) is rapidly becoming a standard and critical layer in agentic systems, but it's also emerging as a surface for exploitation. Several recently discovered CVEs showcase this.
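The signed-and-verified context described above can be sketched in a few lines. This is only an illustration of the integrity trade-off, not an MCP mechanism; the shared key and field names are invented:

```python
import hashlib
import hmac
import json

SECRET = b"shared-agent-key"  # hypothetical shared secret, for illustration only

def sign_context(entry: dict, version: int) -> dict:
    """Wrap a context entry with a version number and an HMAC over its canonical JSON form."""
    payload = json.dumps({"version": version, "entry": entry}, sort_keys=True)
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"version": version, "entry": entry, "sig": tag}

def verify_context(wrapped: dict) -> bool:
    """Recompute the HMAC before an agent trusts shared context --
    this verification step is the latency cost the text describes."""
    payload = json.dumps(
        {"version": wrapped["version"], "entry": wrapped["entry"]}, sort_keys=True
    )
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, wrapped["sig"])

ctx = sign_context({"task": "refund", "customer_id": "c-42"}, version=3)
assert verify_context(ctx)        # untampered context passes
ctx["entry"]["task"] = "wire-transfer"
assert not verify_context(ctx)    # poisoned context is rejected
```

Every write now pays a signing step and every read a verification step -- the throughput-for-tamper-resistance trade described above.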
In addition, Trend Micro discovered 492 MCP servers exposed to the internet, and Knostic.AI found over 1,800, reminding security leaders of the unsecured S3 buckets in AWS of the not-so-distant past. Because MCP governs how agents share and retrieve context, it becomes a prime target for context poisoning, impersonation, and unauthorized inference. If an agent can inject misleading or malicious context into the shared memory, it can manipulate downstream behavior, similar to how poisoned DNS entries or corrupted configuration maps can destabilize distributed systems. Worse, many current MCP implementations lack strong guarantees around context provenance. Without cryptographic signatures or verifiable lineage, agents have no way to determine whether a piece of context is authentic, recent, or even relevant. This is the equivalent of a distributed system relying on unsigned messages in a gossip protocol: fast, but far too trusting. And because MCP often operates beneath the application layer, these flaws are hard to detect and even harder to remediate. There's 1) no centralized audit trail, 2) no rollback mechanism, and 3) no standard for revocation. In effect, we're building shared memory for autonomous systems without the isolation or integrity guarantees we take for granted in container orchestration or distributed databases.

Static Security Models Don't Fit The Needs Of Ephemeral Autonomous Agents

Securing agentic systems will require a redesign of 1) trust, 2) identity, and 3) control. It requires dynamic trust that enables temporary, scoped identities, context-aware permissions, and cryptographically verifiable provenance. Some potential approaches are beginning to emerge.

In Agentic Systems, Failure Isn't A Crash...It's A Cascade

One agent misinterprets context, another acts on flawed assumptions, and a third amplifies the error. By the time a human notices, the trail is cold.
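One way to picture the verifiable lineage and audit trail that current MCP implementations lack is a hash-linked chain of context records, where each entry commits to its predecessor. This is an illustrative sketch only, not an MCP mechanism; all names are invented:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash marking the start of a lineage

def append_context(chain: list, agent_id: str, entry: dict) -> list:
    """Add a context record whose hash also covers the previous record's hash,
    so every entry carries a verifiable lineage back to the origin."""
    prev = chain[-1]["hash"] if chain else GENESIS
    record = {"agent": agent_id, "entry": entry, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return chain + [record]

def verify_lineage(chain: list) -> bool:
    """Walk the chain and recompute every link; any retroactive edit breaks it,
    which is also what makes the chain usable as an audit trail."""
    prev = GENESIS
    for record in chain:
        body = {k: record[k] for k in ("agent", "entry", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev or digest != record["hash"]:
            return False
        prev = record["hash"]
    return True

chain = append_context([], "planner", {"goal": "book travel"})
chain = append_context(chain, "booking", {"flight": "XY123"})
assert verify_lineage(chain)
chain[0]["entry"]["goal"] = "wire funds"   # poisoning attempt
assert not verify_lineage(chain)
```

The same chain that proves provenance doubles as the cold trail's antidote: an investigator can walk it backwards, record by record, to find where a cascade began.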
That's why we need a new kind of root cause analysis (RCA) designed for autonomous, distributed decisions. The system must include full traceability for every agent interaction -- not just the WHAT, but the WHY and HOW. Each decision could carry a cryptographic breadcrumb: a signed reference to the context it used, the agent that provided it, and the logic path it followed.

The Securing Agents And Agentic Gold Rush: Picks And Shovels

Every emerging technology has its infrastructure moment. For the cloud era, it was containers, observability, and CI/CD pipelines. For this instance of the AI era, it's GPUs, vector databases, and fine-tuning frameworks. For agentic systems, the next frontier isn't just smarter agents; it's the tooling that makes them secure, testable, and trustworthy. This is the "picks-and-shovels" phase of the agent economy. The real opportunity lies in building the scaffolding: agent debuggers, context validators, permission brokers, simulation environments, and trust observability layers.

Performance and Capability: Why Testing Comes First

To understand agentic systems, we have to test and trust them (and their supply chains). And that testing must share the characteristics of agentic systems themselves: it will need to be 1) relentless, 2) systematic, and 3) scalable.

Making Good Choices Now Sets Us Up For The Future

We're standing at the edge of a new human-AI hybrid computing paradigm. Agents will do more than execute code. They will make decisions, collaborate, and evolve. The protocols, tests, and security measures we design today will shape how these agents interact, how they're trusted, and how they're held accountable. With that in mind, we need to make trust a first-class primitive for AI agents and agentic AI.

Let's Connect

Forrester clients who have questions about implementing or securing AI agents and agentic AI can request an inquiry or guidance session with either of us.
See Jeff and Rowan at Technology & Innovation Summit, taking place in Austin, TX from November 2-5 and at Security & Risk Summit, taking place from November 5-7.
[2]
No integration = no intelligence - why MCP matters for agent-scale automation
Unless you've been living under a rock for the last few months, there's a good chance you'll have heard tech vendors talking about the Model Context Protocol (MCP). Originally proposed by Anthropic back in November 2024, MCP is a lightweight emerging standard for enabling AI agents to interact with enterprise systems. By connecting to MCP-enabled servers -- which are quickly being developed to wrap everything from CRMs and ERPs to internal APIs and document stores -- agents can orchestrate complex, cross-functional tasks without hardcoded integrations or brittle workflows.

But as with every previous form of integration, complexity is rising fast as an initially simple proposal hits the reality of enterprise IT. More servers are entering the picture, use cases are growing more ambitious, and a new wave of security, operational, and scaling challenges is emerging. Once again, there's a risk that rapidly expanding scope will turn what initially feels like a simple concept into a tangled mess. Yet the potential of MCP is real. The question now is whether the protocol can navigate these growing pains to build a durable integration layer for the agentic era.

In framing the problem that MCP is meant to solve, we return to a perennial headache in enterprise technology -- integration. To have any kind of impact, agents need to be intelligent -- capable of automating processes on our behalf and replacing us in the swivel chair as orchestrators of systems. But intelligence without the ability to act is useless. That's why every model provider began building their own framework for enabling models to access the information and actions managed by enterprise systems -- leading to a potentially exponential hot mess in which enterprises would be left endlessly building, patching, and rebuilding 1:1 custom integration code between their agents and an ever-shifting landscape of underlying systems.
Faced with these emerging barriers, someone had to take the first step and propose a more sustainable approach -- something which could make agent integration simpler, more consistent, and easier to scale. That's what Anthropic did in 2024 with the release of the Model Context Protocol (MCP) -- a lightweight, language-agnostic standard designed to help agents interact with external tools and systems.

Unlike traditional APIs -- each with their own endpoints, formats, and authentication models -- MCP offers a single, self-describing interface that allows agents to discover and use external systems in real time -- without requiring custom integration code. It's designed specifically for agentic interaction, supporting ongoing context across multi-step workflows rather than treating each request as a separate, stateless call. That context can also evolve dynamically, enabling agents to request or update shared resources as part of an ongoing process.

This shift eliminates much of the 'glue code' that would otherwise be needed to connect models to real-world tools. Instead of writing custom wrappers for each system -- each with its own quirks, payloads, and error handling -- developers can simply connect to an MCP-compliant server. And the protocol's simplicity is a feature, not a flaw. MCP isn't trying to be a comprehensive integration framework. It's closer to a kind of HTTP for agent integration -- low-level, composable, interoperable by design, and with a simple but clear interaction model consisting of four main primitives: tools, resources, prompts, and sampling.

Tools represent executable functions that allow agents to take action on behalf of users, such as creating a calendar event, updating a CRM record, or triggering a workflow. These are the verbs of the agentic world -- the means by which agents move from reasoning to execution. Resources provide structured, read-only data that agents can use to build context and guide their decisions.
This might include product catalogs, customer records, or system status information -- anything the agent might need to know about the current state of the environment. Prompts, in contrast, offer reusable instructions -- templates that encode patterns of interaction or behavior. These give servers a way to guide the model toward consistent outputs -- whether that's formatting a response, summarizing a policy, or following a specific reasoning style -- without hardcoding behavior into the model itself. Finally, sampling reverses the flow of interaction -- allowing the server to ask the model for input as part of its own logic in order to create a feedback loop. This supports dynamic, cooperative workflows where the model plays an active role in decision-making -- suggesting options, evaluating inputs, or completing partially defined tasks before the server proceeds.

Taken together, these four primitives give MCP the expressive range needed for agents to operate in complex, multi-step environments -- combining action, data, modular guidance, and cooperative logic into a single, interoperable interface.

If MCP started as an Anthropic initiative, it didn't stay that way for long. What began with an example implementation for Claude quickly gained traction -- and support from a broader cast of players. Today, all the major AI vendors support the protocol, alongside others including a wide range of enterprise and cloud providers such as Microsoft, Amazon, Salesforce, Intercom, and Atlassian; payment vendors like PayPal and Stripe; and -- somewhat ironically -- integration vendors including Boomi and Workato. Equally importantly, development tool vendors such as Cursor and Replit have also embraced MCP -- embedding support directly into developer platforms to make server creation and deployment easier. As a result, MCP has gained rapid traction since its initial launch, with more than 15,000 deployments globally.
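Stepping back to the mechanics for a moment: on the wire, MCP requests are JSON-RPC 2.0 messages, and the four primitives described earlier map onto distinct methods. The sketch below builds one request of each type; the method names track the published spec, while the tool, resource, and prompt identifiers are invented for illustration:

```python
import json

def rpc(method: str, params: dict, req_id: int) -> str:
    """Build a JSON-RPC 2.0 request, the envelope MCP messages travel in."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

# Tool call -- an executable action, the "verbs" of the agentic world
call = rpc("tools/call", {"name": "create_event",
                          "arguments": {"title": "Demo", "when": "2025-06-01T10:00"}}, 1)

# Resource read -- structured, read-only context for the agent
read = rpc("resources/read", {"uri": "crm://customers/42"}, 2)

# Prompt fetch -- a reusable instruction template served to the model
prompt = rpc("prompts/get", {"name": "summarize_policy", "arguments": {}}, 3)

# Sampling -- the server asks the client's model for a completion,
# reversing the usual flow of interaction
sample = rpc("sampling/createMessage",
             {"messages": [{"role": "user",
                            "content": {"type": "text",
                                        "text": "Rank these travel options"}}]}, 4)

assert json.loads(call)["method"] == "tools/call"
```

The uniformity is the point: one envelope, four verbs, and any compliant client can talk to any compliant server.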
Salesforce is a good example of the resulting vendor commitment, with the company making MCP a cornerstone of its AgentForce platform and introducing a gateway that lets agents connect with third-party MCP servers through the AppExchange ecosystem. And by that measure, MCP is off to a strong start. Adoption is growing, use cases are expanding, and infrastructure is beginning to stabilise.

But as we've seen many times, any fast-growing ecosystem brings equally fast-growing complexity. As more capabilities, vendors, and servers are added to the MCP ecosystem, it becomes harder to control risk, manage operations, and preserve coherence. Which is why early enthusiasm doesn't always translate into long-term success.

Most MCP deployments today are designed around a simple, local setup. Anthropic's own server for Claude Desktop set the pattern -- a single agent, a lightweight tool, and data passed between local processes on the same machine. For prototyping, it's elegant -- developers can simply run tools locally, test agent interactions, and iterate quickly without building any real infrastructure.

But that setup was never designed to operate at enterprise scale. Once MCP moves into production, the assumptions break down fast. Servers are suddenly used by multiple agents acting on behalf of large numbers of users, run on shared infrastructure, and support workflows spanning multiple systems. As a result, MCP servers are no longer standalone processes -- they're distributed components that require security, oversight, and management. And none of that comes for free.

Without a shared identity model, most implementations fall back on static credentials -- often stored in plain configuration files or injected manually into servers, creating 'keys to the kingdom' scenarios in which compromised MCP servers can compromise the wider IT landscape in turn.
In one analysis, security researchers found a significant number of MCP servers with open endpoints or the ability to execute arbitrary commands -- not as a result of malice but simply as a side effect of immaturity, mistakes or carelessness.

Without clear operational oversight, servers may crash silently under load, return invalid data, or hang mid-process -- derailing entire agent workflows. And since most current deployments are still treated as developer-maintained scripts, there's rarely a platform team watching for issues or enforcing consistency across environments. Without formal lifecycle management, small updates to servers -- or the underlying systems they depend on -- can have ripple effects that are difficult to detect or roll back, especially when those changes are made ad-hoc by the original developer -- often outside the view of operational or security teams.

And at the ecosystem level, discovery becomes a limiting factor. Most agents today are manually configured to talk to known MCP servers. But as adoption grows, that model becomes unmanageable. Without shared registries, metadata standards, or dynamic resolution, organizations can't see what tools already exist -- let alone manage, secure, or rationalize them. What was once liberating modularity can quickly become spiralling entropy.

Each of these issues is a symptom of the same structural problem -- adoption is scaling faster than capability. The protocol works -- but the infrastructure that needs to surround it -- to ensure safe and scalable operational deployment within sophisticated, distributed enterprise environments -- simply doesn't yet exist.
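To make the missing discovery layer concrete, a shared registry could start as simply as a capability index mapping tools to owned, documented servers. The sketch below is purely hypothetical -- no such standard registry exists today, and every name in it is invented:

```python
from dataclasses import dataclass, field

@dataclass
class ServerRecord:
    """Minimal metadata an organization would want per MCP server."""
    name: str
    endpoint: str
    owner: str                               # an accountable team, not just a developer
    tools: list = field(default_factory=list)

class Registry:
    """In-memory stand-in for a shared MCP server registry."""
    def __init__(self) -> None:
        self._servers: dict = {}

    def register(self, record: ServerRecord) -> None:
        self._servers[record.name] = record

    def find_by_tool(self, tool: str) -> list:
        """Dynamic resolution: which registered servers expose a capability?"""
        return [r for r in self._servers.values() if tool in r.tools]

reg = Registry()
reg.register(ServerRecord("crm", "https://mcp.example.internal/crm",
                          owner="sales-platform", tools=["update_record"]))
matches = reg.find_by_tool("update_record")
assert [r.name for r in matches] == ["crm"]
```

Even this toy version answers the questions the text says organizations currently can't: what exists, who owns it, and what it can do.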
The risk is therefore that while the promise is great, overly ambitious adoption at this point could lead to a growing sprawl of unmonitored, unowned, and often insecure MCP servers scattered across the enterprise -- wasting resources, multiplying threat vectors, and embedding fragility into the core integration layer that agents rely on to function. A true perfect storm of issues for the unwary.

In the context of these challenges, it's worth asking why so many smart people are pushing so hard on MCP. Why all this momentum -- despite the immaturity, missing scaffolding, and fragility? It's because, ultimately, those same smart people aren't just positioning MCP as a tool for better enterprise integration -- but as the first step toward a new computing paradigm built on a new kind of web. One where agents are the primary actors -- navigating sites, reasoning across data, and getting real work done on behalf of their users.

MCP is a key first step on this journey, enabling agents to connect to and use existing enterprise data sources and systems -- but the vision doesn't stop at just connecting AI models to external tools. Instead, the vision implies a world where software is no longer driven by clicks and screens, but rather by context and conversation -- a world in which agents coordinate with each other, learn from feedback, manage ambiguity, and act across organizational boundaries on behalf of their users. A world where workflows emerge dynamically -- shaped by context rather than code.

That's why there are a host of other protocols emerging in parallel to MCP, aiming to build the complementary scaffolding necessary to support this wider vision -- including Agent to Agent (A2A) for cross-agent coordination, Agent User Interaction Protocol (AG-UI) for real-time collaboration between humans and AI, and Layered Orchestration for Knowledgeful Agents (LOKA) for decentralized identity and trust within agent ecosystems. But let's not get ahead of ourselves.
There's a long list of challenges still to be solved for this grand vision to become reality -- and nearly all of the protocols needed to form a complete and functional ecosystem are currently volatile and immature. Hence the flashing-red risk classification of the 'Choreography' quadrant within my recent agent taxonomy.

Yet the whole vision rests on one non-negotiable foundation -- agents being able to do something useful. And that starts with MCP. Because no matter how intelligent or willing to collaborate they are, agents that can't access systems, retrieve data, or trigger actions simply aren't useful. No integration, no intelligence.

MCP shows a lot of future promise -- but also solves a pressing real-world problem for today's agentic experiments and implementations. That's why it's a foundational capability specifically called out in the 'Instruction' quadrant of my agentic taxonomy -- the most immediately practical quadrant for most enterprises today. But to scale up to enterprise- and then web-scale usage, MCP needs its own growth arc, powered by new patterns, new infrastructure, and -- eventually -- a set of complementary protocols to provide the scaffolding necessary for global-scale agentic ecosystems.

From an enterprise perspective, therefore, I'd give MCP a score of around 40 out of 100 right now. The foundations are strong -- vendor momentum, rapid iteration, and strong community growth. But the current weaknesses are structural -- inconsistent security, no multi-tenancy, immature operational tooling, and an over-reliance on developers to manage what should ultimately be shared infrastructure. None of these weaknesses are sustainable at scale.

The good news? It's moving fast. The development pace -- combined with realistic assessments of maturity from vendors and researchers -- suggests we'll see meaningful improvements in the next 12-18 months, particularly around security, observability, and manageability.
Until then, enterprises should tread carefully. MCP is a powerful tool for pilots and controlled experiments -- ideally through managed services offered by vendors with the focus, resources, and incentives to close the current gaps. In the meantime, build your security and operational posture as if the protocol offers nothing out of the box. And -- equally importantly -- keep one eye on the horizon. Because if MCP continues to evolve -- and if discovery, identity, and policy layers follow -- it may yet become the backbone of something much bigger.
The emergence of Model Context Protocol (MCP) and Agent-to-Agent (A2A) protocols is transforming AI integration in enterprises, but also introducing new security vulnerabilities that CISOs must address.
In the rapidly evolving landscape of artificial intelligence, two emerging protocols are reshaping how AI agents interact with enterprise systems: the Model Context Protocol (MCP) and Agent-to-Agent (A2A) protocols. These innovations are addressing a critical need in the agentic AI market, which is experiencing an infrastructure inflection point [1].
Source: Forrester
MCP, originally proposed by Anthropic in November 2024, serves as a lightweight standard enabling AI agents to interact with enterprise systems. It controls how agents manage and share structured memory, task state, and environmental assumptions across sessions and models. A2A protocols, on the other hand, specify the rules for inter-agent communication, including negotiation, delegation, and task synchronization [1].
The potential of MCP lies in its ability to simplify and scale agent integration. Unlike traditional APIs, MCP offers a single, self-describing interface that allows agents to discover and use external systems in real time without requiring custom integration code. This shift eliminates much of the 'glue code' that would otherwise be needed to connect models to real-world tools [2].
MCP's design is based on four main primitives: tools, resources, prompts, and sampling. These primitives provide the expressive range needed for agents to operate in complex, multi-step environments, combining action, data, modular guidance, and cooperative logic into a single, interoperable interface [2].
Since its initial launch, MCP has gained significant traction, with more than 15,000 deployments globally. Major AI vendors, enterprise and cloud providers, payment vendors, and integration vendors have all embraced the protocol. This widespread support has contributed to MCP's rapid adoption and its potential to become a standard in the industry [2].
While MCP and A2A protocols offer significant benefits, they also introduce new security vulnerabilities that CISOs must address. These protocols create new attack surfaces for context poisoning, impersonation, and unauthorized inference [1].
Several critical vulnerabilities have already been identified.
These issues are compounded by the fact that MCP often operates beneath the application layer, making flaws hard to detect and remediate [1].
Securing agentic systems requires a redesign of trust, identity, and control. Traditional static security models are inadequate for the needs of ephemeral autonomous agents. Instead, dynamic trust models that enable temporary, scoped identities, context-aware permissions, and cryptographically verifiable provenance are needed [1].
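The temporary, scoped identities described above can be illustrated with short-lived, scope-carrying tokens. This is a sketch under invented names and a toy HMAC scheme, not a prescribed MCP or A2A mechanism:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"trust-broker-key"  # hypothetical key held by a trust broker

def issue_token(agent_id: str, scopes: list, ttl: float, now: float) -> dict:
    """Mint a temporary, scoped identity: it names exactly what the agent may
    do and expires on its own, leaving no long-lived credential to steal."""
    body = {"sub": agent_id, "scopes": scopes, "exp": now + ttl}
    sig = hmac.new(ISSUER_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {**body, "sig": sig}

def allowed(token: dict, scope: str, now: float) -> bool:
    """Authorize only if the token is authentic, unexpired, and explicitly
    carries the requested scope."""
    body = {k: token[k] for k in ("sub", "scopes", "exp")}
    sig = hmac.new(ISSUER_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, token["sig"])
            and now < token["exp"]
            and scope in token["scopes"])

token = issue_token("billing-agent", ["crm:read"], ttl=300.0, now=0.0)
assert allowed(token, "crm:read", now=10.0)       # in scope, in time
assert not allowed(token, "crm:write", now=10.0)  # out of scope
assert not allowed(token, "crm:read", now=400.0)  # expired
```

Because the grant dies on its own, a stolen token is worth minutes rather than months -- the core argument for dynamic over static trust.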
As with any distributed system, security in agentic architectures comes with trade-offs. Implementing security measures can impact performance, flexibility, and reliability. For instance, introducing authentication and authorization in A2A systems is akin to adding TLS to microservices, gaining confidentiality and trust at the expense of latency and overhead 1.
As MCP and A2A protocols continue to evolve, they promise to revolutionize how AI agents interact with enterprise systems. However, the success of these protocols will depend on how effectively they can navigate the growing pains of security, operational, and scaling challenges. The potential for a new era of agent-scale automation is clear, but it must be balanced with robust security measures to ensure the integrity and trustworthiness of agentic systems in enterprise environments.