- Authors: Anthropic and Mahesh Murag
- Sources:
Key Themes and Concepts
- Motivation and Philosophy: MCP is designed to standardize how AI applications interact with external systems. The current landscape is fragmented, with each team often building custom integrations to connect AI apps to data. “What we were seeing across the industry, but also even inside of the companies we were speaking to, was a ton of fragmentation about how to build AI systems in the right way.”
- MCP aims to solve the “N times M problem”: the many possible permutations of client applications talking to servers, each of which would otherwise need its own custom integration.
- MCP is positioned as a layer between application developers and tool/API developers, simplifying access to data for LLMs.
- The core concept is that models are only as good as the context we provide to them.
- MCP as a Standard: MCP standardizes AI development, analogous to how APIs standardized how web applications interact with backends and how LSP (the Language Server Protocol) standardized how IDEs interact with language-specific tools.
- It provides a “standard interface” for client applications to connect to servers with minimal additional work.
- According to Anthropic, “MCP standardizes how AI applications interact with external systems”.
- Key Components: MCP consists of three primary interfaces: tools, resources, and prompts (see the sketch after this list). Tools: Model-controlled functionality exposed by the server; the LLM can “choose when the best time to invoke those tools is”. Examples include read/write operations, database updates, and file-system interactions.
- Resources: Data exposed to the application, controlled by the application. “Resources are data exposed to the application, and they’re application-controlled.” These can be static or dynamic files, images, JSON data, etc.; the application decides how to use them.
- Prompts: User-controlled, predefined templates for common interactions with a server, such as slash commands or standardized ways of doing document Q&A. “Prompts are user-controlled; we like to think of them as the tools that the user invokes, as opposed to something that the model invokes.”
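As a concrete illustration of the three primitives, here is a minimal server sketch. It assumes the official MCP Python SDK's FastMCP helper; the server name, URIs, and function bodies are hypothetical.

```python
# Minimal sketch of the three MCP primitives, assuming the official
# MCP Python SDK's FastMCP helper; server name, URIs, and bodies are
# illustrative, not part of the talk.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes-demo")  # hypothetical server name


@mcp.tool()
def add_note(title: str, body: str) -> str:
    """Tool: model-controlled -- the LLM decides when to invoke it."""
    return f"Saved note '{title}' ({len(body)} characters)"


@mcp.resource("notes://{title}")
def get_note(title: str) -> str:
    """Resource: application-controlled data the client can attach as context."""
    return f"Contents of note '{title}'"


@mcp.prompt()
def summarize_notes() -> str:
    """Prompt: user-controlled template, e.g. surfaced as a slash command."""
    return "Summarize all of my notes, grouping them by topic."


if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```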
- Value Proposition: Application Developers: “Once your client is MCP-compatible, you can connect it to any server with zero additional work.” This simplifies integration with various tools and data sources (see the client sketch after this list).
- Tool/API Providers: “If you’re a tool or API provider, or someone that wants to give LLMs access to the data that matters, you can build your MCP server once and see adoption of it everywhere across all of these different AI applications.” Build a single MCP server and see adoption across many AI applications.
- End Users: Access to more powerful and context-rich AI applications.
- Enterprises: A clear way to separate concerns, allowing different teams to own and maintain specific services (like a vector-DB interface) without requiring constant communication. “There’s now a clear way to separate concerns between different teams that are building different things.”
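To make the “zero additional work” point concrete, here is a client-side sketch, assuming the official MCP Python SDK's stdio transport; the server script it launches is hypothetical.

```python
# Client-side sketch, assuming the official MCP Python SDK's stdio transport.
# The server script it launches ("notes_server.py") is hypothetical; any
# MCP-compatible server is reached through the same calls below.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    server = StdioServerParameters(command="python", args=["notes_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            prompts = await session.list_prompts()
            resources = await session.list_resources()
            print([tool.name for tool in tools.tools])
            print([prompt.name for prompt in prompts.prompts])
            print([str(resource.uri) for resource in resources.resources])


asyncio.run(main())
```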
- Adoption and Examples: Growing adoption, with roughly 1,100 community-built servers already developed.
- Examples of MCP clients include Anthropic’s Claude for Desktop, IDEs such as Cursor and Windsurf, and agents such as Goose by Block.
- Servers are being built by a range of companies that have published official integrations for hooking into their systems.
- Tools and resources are available from open-source contributors, including various libraries from Anthropic.
- MCP for Agents: MCP is seen as foundational for building agents. “MCP will be the foundational protocol for agents broadly.”
- Facilitates the “augmented LLM” concept, allowing agents to query/write data, invoke tools, and maintain state.
- Enables agents to expand capabilities after initialization.
- Allows agent builders to focus on the core agentic loop and context management.
- Agent Development Frameworks: MCP complements agent frameworks such as LangGraph, which has released connectors/adapters for connecting to MCP servers (see the sketch after this list).
- MCP lets agent builders expose those servers to the agent without changing the system itself, as long as the adapter is installed.
- MCP may replace the parts of agent frameworks concerned with bringing in context and calling tools, but frameworks remain valuable for knowledge management and orchestrating the agentic loop.
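A rough sketch of that adapter pattern, assuming the community langchain-mcp-adapters package and its MultiServerMCPClient/get_tools() interface (check the package docs for the exact API); the server config, model choice, and prompt are illustrative.

```python
# Sketch of wiring MCP servers into a LangGraph agent, assuming the
# langchain-mcp-adapters package's MultiServerMCPClient and get_tools()
# interface; the server config, model, and prompt are illustrative.
import asyncio

from langchain_anthropic import ChatAnthropic
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent


async def main() -> None:
    client = MultiServerMCPClient(
        {
            # Hypothetical local server, reached over stdio.
            "notes": {
                "command": "python",
                "args": ["notes_server.py"],
                "transport": "stdio",
            },
        }
    )
    tools = await client.get_tools()  # MCP tools exposed as LangChain tools

    # The agentic loop itself is unchanged; only the tool source differs.
    agent = create_react_agent(ChatAnthropic(model="claude-3-5-sonnet-latest"), tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Summarize my notes on MCP."}]}
    )
    print(result["messages"][-1].content)


asyncio.run(main())
```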
- Protocol Capabilities for Agents: Sampling: Allows an MCP server to request completions (LLM inference calls) from the client, delegating LLM interactions to the client (see the sketch after this list). “Sampling allows an MCP server to request completions, a.k.a. LLM inference calls, from the client.”
- Composability: Any application, API, or agent can be both an MCP client and an MCP server, enabling chaining and complex architectures. “Any application or API or agent can be both an MCP client and an MCP server.”
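Here is a server-side sampling sketch, assuming the official MCP Python SDK's Context object and its session.create_message call; the tool name and prompt are hypothetical.

```python
# Server-side sampling sketch, assuming the official MCP Python SDK's
# Context object and its session.create_message call; the tool name and
# prompt are hypothetical.
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP("summarizer-demo")


@mcp.tool()
async def summarize(text: str, ctx: Context) -> str:
    """The server asks the *client* to run the LLM call (sampling)."""
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=f"Summarize:\n\n{text}"),
            )
        ],
        max_tokens=200,
    )
    # The client stays in control of which model runs, permissions, and cost.
    return result.content.text if result.content.type == "text" else ""


if __name__ == "__main__":
    mcp.run()
```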
- Roadmap and Future Directions: Remote Servers and OAuth: Support for remotely hosted servers using OAuth 2.0 for authentication, currently in the implementation phase. “This will enable remotely hosted servers. This means servers that live on a public URL. You don’t have to mess with standard IO. You as a user don’t even need to know what MCP is.”
- MCP Registry API: A unified, hosted metadata service for discovering and publishing MCP servers, currently under development; it addresses the lack of a centralized discovery mechanism. “We are working on an official MCP registry API; this is a unified and hosted metadata service owned by the MCP team itself.”
- Versioning: Support for versioning MCP servers and tracking changes to APIs and tool descriptions.
- Stateful vs. Stateless Connections: Exploring options for both long-lived and short-lived connections.
- Streaming: Supporting streaming data from the server to the client.
- Namespacing: Addressing potential tool name conflicts when multiple servers are installed.
- Proactive Server Behavior: Enabling servers to proactively request user input or send notifications.
- Trust and Security: “You should be judicious about what servers you connect to”; it is important that users are aware of what access their servers have.
- Developers are encouraged to enforce data governance on their own servers.
Open Questions and Considerations
- Security and Trust: How to establish trust in MCP servers, especially those from unknown sources.
- Data Governance: Best practices for data governance and access control.
- Debugging and Observability: Need for better debugging tools and observability patterns for complex MCP-based systems.
- Evaluation: How to evaluate and test MCP servers, including handling regressions as servers evolve.
- Best Practices: Still developing; the community is actively converging on best practices.
Conclusion
Anthropic’s MCP represents a significant step towards a more connected and scalable AI ecosystem. By standardizing the way AI applications interact with external systems, MCP promises to unlock the full potential of AI assistants and agents, enabling them to access and leverage data more effectively. The ongoing development of features like remote servers, OAuth support, and the MCP Registry will further enhance the protocol’s usability and adoption.