Model Context Protocol
What is Model Context Protocol?
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
MCP helps you build agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides:
- A growing list of pre-built integrations that your LLM can directly plug into
- The flexibility to switch between LLM providers and vendors
- Best practices for securing your data within your infrastructure
General Architecture
- MCP Hosts: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
- MCP Clients: Protocol clients that maintain 1:1 connections with servers
- MCP Servers: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
- Local Data Sources: Your computer’s files, databases, and services that MCP servers can securely access
- Remote Services: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
How it Works: Client-Server Architecture
MCP operates on a client-server model:
- MCP Server: This is the application you will build or host. It provides a set of tools and resources that the LLM can use. Here are some pre-built MCP servers for your reference.
- MCP Client: The LLM application (e.g., Cursor, Claude Desktop) acts as a client that connects to your server to access the tools you've provided.
Communication between the client and server typically uses one of two transport methods:
- Standard input/output (stdio): The client runs the server as a local command-line process and communicates with it over standard input/output.
- HTTP with Server-Sent Events (SSE): The server runs as a web service, and the client connects to it over HTTP. This is more flexible for remote servers.
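As a concrete sketch of the stdio transport's framing, the helpers below (hypothetical names `encode_message`/`decode_message`, not part of any SDK) serialize a JSON-RPC message as one newline-delimited JSON line, which is how stdio-transport messages are typically framed:

```python
import json

def encode_message(msg: dict) -> bytes:
    # One JSON-RPC message per line: serialize compactly and append "\n"
    return (json.dumps(msg, separators=(",", ":")) + "\n").encode("utf-8")

def decode_message(line: bytes) -> dict:
    # Parse one received line back into a JSON-RPC message
    return json.loads(line.decode("utf-8"))
```

The real SDKs handle this framing for you; the point is only that each side reads and writes whole JSON-RPC messages, one per line, over stdin/stdout.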
Server Features
Overview
Servers provide the fundamental building blocks for adding context to language models via MCP. These primitives enable rich interactions between clients, servers, and language models:
- Prompts: Pre-defined templates or instructions that guide language model interactions
- Resources: Structured data or content that provides additional context to the model
- Tools: Executable functions that allow models to perform actions or retrieve information
Each primitive can be summarized by who controls it:
Primitive | Controlled by | Description | Example |
---|---|---|---|
Prompts | User | Interactive templates invoked by user choice | Slash commands, menu options |
Resources | Application | Contextual data attached and managed by the client | File contents, git history |
Tools | Model | Functions exposed to the LLM so it can take actions | API POST requests, file writes |
Prompts
The Model Context Protocol (MCP) provides a standardized way for servers to expose prompt templates to clients. Prompts allow servers to provide structured messages and instructions for interacting with language models. Clients can:
- Discover available prompt templates
- Retrieve the contents of a prompt template
- Supply arguments to customize a prompt template
1. User Interaction Model
Prompts are designed to be user-controlled: the server exposes them to the client, and the user explicitly chooses to use them. Typically, prompts are triggered through commands in the user interface, which lets users naturally discover and invoke the available prompts, for example as slash commands.
Example: prompts exposed as slash commands
However, implementers are free to expose prompts through any interface pattern that suits their needs; the protocol itself does not mandate a specific user interaction model.
2. Capabilities
Servers that support prompts MUST declare the prompts capability during initialization:
{
"capabilities": {
"prompts": {
"listChanged": true
}
}
}
listChanged indicates whether the server will emit notifications when the list of available prompts changes.
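For illustration, a client might inspect the initialization result for this capability as follows (`supports_prompt_list_notifications` is a hypothetical helper, not an SDK function):

```python
def supports_prompt_list_notifications(init_result: dict) -> bool:
    # Look up capabilities.prompts from the server's initialize result
    prompts = init_result.get("capabilities", {}).get("prompts")
    if prompts is None:
        return False  # server does not support prompts at all
    # listChanged defaults to "no notifications" when absent
    return bool(prompts.get("listChanged", False))
```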
3. Protocol Messages
3.1 Listing Prompts
To retrieve the available prompts, clients send a prompts/list request. This operation supports pagination.
Example request:
{
"jsonrpc": "2.0",
"id": 1,
"method": "prompts/list",
"params": {
"cursor": "optional-cursor-value"
}
}
Example response:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"prompts": [
{
"name": "code_review",
"title": "Request Code Review",
"description": "Asks the LLM to analyze code quality and suggest improvements",
"arguments": [
{
"name": "code",
"description": "The code to review",
"required": true
}
],
"icons": [
{
"src": "https://example.com/review-icon.svg",
"mimeType": "image/svg+xml",
"sizes": "any"
}
]
}
],
"nextCursor": "next-page-cursor"
}
}
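Because prompts/list is paginated, a client keeps requesting with the returned nextCursor until the server stops returning one. A minimal sketch, assuming a `send_request` callable (hypothetical) that issues a prompts/list request with the given cursor and returns the `result` object:

```python
from typing import Callable, Optional

def list_all_prompts(send_request: Callable[[Optional[str]], dict]) -> list:
    # Accumulate prompts across pages by following nextCursor
    prompts = []
    cursor = None
    while True:
        result = send_request(cursor)
        prompts.extend(result.get("prompts", []))
        cursor = result.get("nextCursor")
        if cursor is None:  # no more pages
            return prompts
```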
3.2 Getting a Prompt
To retrieve a specific prompt, clients send a prompts/get request. Arguments may be auto-completed through the completion API.
Example request:
{
"jsonrpc": "2.0",
"id": 2,
"method": "prompts/get",
"params": {
"name": "code_review",
"arguments": {
"code": "def hello():\n print('world')"
}
}
}
Example response:
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"description": "Code review prompt",
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "Please review this Python code:\ndef hello():\n print('world')"
}
}
]
}
}
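On the server side, handling prompts/get amounts to validating the arguments and substituting them into a message template. A sketch for the code_review prompt above (the helper name is hypothetical, not part of any SDK):

```python
def get_code_review_prompt(arguments: dict) -> dict:
    # Servers should reject requests missing required arguments (-32602)
    if "code" not in arguments:
        raise ValueError("missing required argument: code")
    return {
        "description": "Code review prompt",
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    # Substitute the caller's argument into the template
                    "text": "Please review this Python code:\n" + arguments["code"],
                },
            }
        ],
    }
```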
3.3 List Changed Notification
When the list of available prompts changes, servers that declared the listChanged capability SHOULD send a notification:
{
"jsonrpc": "2.0",
"method": "notifications/prompts/list_changed"
}
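A client that caches the prompt list can treat this notification as a cache-invalidation signal and re-fetch on next use. A minimal sketch:

```python
class PromptCache:
    """Client-side cache invalidated by prompts/list_changed notifications."""

    def __init__(self):
        self.prompts = None  # None means "must re-fetch via prompts/list"

    def handle_notification(self, msg: dict) -> None:
        # Drop the cached list when the server reports a change
        if msg.get("method") == "notifications/prompts/list_changed":
            self.prompts = None
```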
4. Message Flow
5. Data Types
5.1 Prompt
A prompt definition includes:
- name: unique identifier for the prompt
- title: optional human-readable name for display
- description: optional description of the prompt
- arguments: optional list of arguments for customization
5.2 PromptMessage
A prompt message contains:
- role: user or assistant, indicating the speaker
- content: one of the content types below
All content types support optional annotations (metadata), including audience, priority, and last-modified time.
5.2.1 Text Content
Text content is rendered as plain text:
{
"type": "text",
"text": "The text content of the message"
}
This is the most common content type for natural-language interaction.
5.2.2 Image Content
{
"type": "image",
"data": "base64-encoded-image-data",
"mimeType": "image/png"
}
Image data MUST be base64-encoded and include a valid MIME type, enabling multimodal interaction.
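Constructing an image content block is just base64-encoding the raw bytes and attaching the MIME type. A small hypothetical helper:

```python
import base64

def make_image_content(data: bytes, mime_type: str = "image/png") -> dict:
    # Wrap raw image bytes as an MCP image content block
    return {
        "type": "image",
        "data": base64.b64encode(data).decode("ascii"),
        "mimeType": mime_type,
    }
```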
5.2.3 Audio Content
{
"type": "audio",
"data": "base64-encoded-audio-data",
"mimeType": "audio/wav"
}
Audio data MUST be base64-encoded and include a valid MIME type, enabling multimodal interaction.
5.2.4 Embedded Resources
{
"type": "resource",
"resource": {
"uri": "resource://example",
"name": "example",
"title": "My Example Resource",
"mimeType": "text/plain",
"text": "Resource content"
}
}
- May reference text or binary resources managed by the server.
- Must include a valid URI and MIME type, plus either text or base64-encoded binary data.
- Seamlessly embed documents, code samples, or other reference material into the conversation flow.
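Building an embedded text resource block is similarly mechanical; a hypothetical helper:

```python
def make_embedded_resource(uri: str, text: str, mime_type: str = "text/plain") -> dict:
    # Wrap server-managed text as an MCP embedded-resource content block
    return {
        "type": "resource",
        "resource": {"uri": uri, "mimeType": mime_type, "text": text},
    }
```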
6. Error Handling
- Invalid prompt name: -32602 (Invalid params)
- Missing required arguments: -32602 (Invalid params)
- Internal errors: -32603 (Internal error)
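These are standard JSON-RPC error codes, so a server reports them with an ordinary JSON-RPC error response. A sketch (the constant and function names are my own, not SDK API):

```python
INVALID_PARAMS = -32602
INTERNAL_ERROR = -32603

def error_response(request_id, code: int, message: str) -> dict:
    # Build a JSON-RPC error response for a failed prompts request
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "error": {"code": code, "message": message},
    }
```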
7. Implementation Considerations
- Servers SHOULD validate prompt arguments
- Clients SHOULD handle pagination for large prompt lists
- Both parties SHOULD respect capability negotiation
8. Security
Implementers MUST carefully validate all prompt inputs and outputs to prevent injection attacks or unauthorized access to resources.
Server
Init Project
Install uv
$ curl -LsSf https://astral.sh/uv/install.sh | sh
Run the following create-mcp-server command; a wizard will guide you through creating your project:
$ uvx create-mcp-server
Code Your Server
The weather.py sketch below initializes the server, exposes one illustrative tool against the NWS active-alerts endpoint, and runs over stdio:
from typing import Any
import httpx
from mcp.server.fastmcp import FastMCP

# Initialize FastMCP server
mcp = FastMCP("weather")

# Constants
NWS_API_BASE = "https://api.weather.gov"
USER_AGENT = "weather-app/1.0"

@mcp.tool()
async def get_alerts(state: str) -> str:
    """Fetch active weather alerts for a US state (two-letter code, e.g. CA)."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{NWS_API_BASE}/alerts/active/area/{state}", headers={"User-Agent": USER_AGENT})
        return resp.text

if __name__ == "__main__":
    mcp.run(transport="stdio")  # stdio transport, so clients like Claude Desktop can launch it
Claude App Configuration
- Open the Claude App, go to Settings -> Developer, and click Edit Config
- Add the following JSON configuration to claude_desktop_config.json:
{
"mcpServers": {
"weather": {
"command": "uv",
"args": [
"--directory",
"/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather",
"run",
"weather.py"
]
}
}
}
Note
You may need to put the full path to the uv executable in the command field. You can get this by running which uv on macOS/Linux or where uv on Windows.
Client
MCP Inspector
$ npx @modelcontextprotocol/inspector <command> <arg1> <arg2>
For instance, if your MCP server project is named my-mcp-server and located in the root directory:
$ npx @modelcontextprotocol/inspector uv --directory /my-mcp-server run mymcp