AI Infrastructure

Model Context Protocol (MCP)

Model Context Protocol (MCP) is an open protocol for connecting language models to external tools, data sources, and capabilities through a standard wire format.

MCP defines how a language model (or an agent wrapping one) talks to external systems. It standardizes the shape of tool definitions, the way tools are invoked, the way results come back, and the way data sources expose their content for retrieval. Before MCP, every integration between a model and an external system was a custom adapter. With MCP, any compliant client can talk to any compliant server.
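Concretely, MCP rides on JSON-RPC 2.0. A minimal sketch of a tool invocation as it crosses the wire, assuming a hypothetical tool name and arguments (the envelope shape, `tools/call` method, and content-block result follow the MCP specification; everything else here is invented for illustration):

```python
import json

# A JSON-RPC 2.0 request invoking a tool over MCP. The tool name
# ("search_tickets") and its arguments are hypothetical; the envelope
# (jsonrpc / id / method / params) is the standard MCP shape.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_tickets",
        "arguments": {"query": "refund", "limit": 5},
    },
}

# A matching response: MCP tool results carry a list of content blocks,
# which the client hands back to the model.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # echoes the request id so the client can pair them up
    "result": {
        "content": [{"type": "text", "text": "3 tickets matched 'refund'"}]
    },
}

wire = json.dumps(request)  # what actually crosses the transport
```

Because both sides speak this one envelope, the client needs no per-integration adapter code: it serializes the same request shape whether the server fronts a CRM, a filesystem, or a database.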

The protocol covers three main surfaces: tools, functions the model can call to take actions; resources, data the model can read on demand; and prompts, templates the model can instantiate. A server exposes any combination of these. A client (typically an agent runtime or an IDE extension) discovers what a server offers and negotiates access.
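The discovery step can be sketched as a toy in-memory server that answers the three listing methods. The method names (`tools/list`, `resources/list`, `prompts/list`) follow the MCP specification; the specific tool, resource, and prompt entries below are invented for illustration:

```python
import json

# Hypothetical catalog for each of the three MCP surfaces.
CATALOG = {
    "tools/list": {"tools": [{
        "name": "create_ticket",
        "description": "Open a support ticket",
        "inputSchema": {"type": "object"},
    }]},
    "resources/list": {"resources": [{
        "uri": "crm://accounts/recent",
        "name": "Recent accounts",
    }]},
    "prompts/list": {"prompts": [{"name": "summarize_account"}]},
}

def handle(message: str) -> str:
    """Dispatch one JSON-RPC request to the matching discovery handler."""
    req = json.loads(message)
    result = CATALOG.get(req["method"])
    if result is None:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601,
                                     "message": "Method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A client discovers what the server offers before calling anything:
reply = json.loads(handle(json.dumps(
    {"jsonrpc": "2.0", "id": 7, "method": "tools/list"})))
```

The same dispatch pattern serves all three surfaces, which is why a single compliant client can enumerate and use whatever mix a given server happens to expose.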

MCP is the wire format by which agents call external capabilities. It is doing for agent-tool integration what HTTP did for document exchange and what Language Server Protocol did for IDE tooling. The value grows with adoption. Every new compliant server expands what every compliant client can do.

The Amdahl view

Every serious B2B SaaS will have an MCP surface within 12 months. The ones that do not will become invisible to AI-native workflows, because agents will simply route around them. Amdahl was MCP-first before MCP was a category. Our bet is that the long-term interface for customer intelligence is not a dashboard; it is an MCP server that any agent can query, with the dashboard as one of several clients.

See customer intelligence running on your own customer conversations.