
MCP Introduction 2026: How Model Context Protocol Connects AI Agents to External Tools

swiftwand

Until recently, connecting AI agents to external tools required writing custom integration code for each tool. Model Context Protocol (MCP) is an open protocol published by Anthropic in 2024 to solve this problem. Often described as “the USB-C for AI and tools,” MCP was donated to the Agentic AI Foundation (AAIF) under the Linux Foundation by the end of 2025, evolving into a true open standard.

The AI agent market reached approximately $7.6–7.8 billion in 2025 and is projected to surge to $52.6 billion by 2030 (CAGR 46.3%). Gartner predicts that by 2026, 40% of enterprise applications will embed task-specific AI agents. MCP is gaining attention as the foundational technology supporting this explosive growth.

This article comprehensively covers MCP’s core architecture, transport layer evolution, security, ecosystem status, and a practical implementation guide—all based on the official specification (2025-11-25 edition).


What Is MCP: A Standard Protocol Connecting AI Agents to External Tools

Model Context Protocol (MCP) is an open protocol for connecting AI applications to external data sources and tools in a standardized way. It is now managed by the Agentic AI Foundation (AAIF), co-founded by Anthropic, OpenAI, and Block, with support from Google, Microsoft, AWS, Cloudflare, and Bloomberg.

The MCP ecosystem is expanding rapidly. By late 2025, the Smithery registry listed over 2,880 MCP servers, MCP.so indexed over 3,000 tools, over 300 client apps support MCP, and SDK monthly downloads exceeded 97 million. Claude Desktop, VS Code, Cursor, Gemini CLI, and GitHub Copilot—virtually all major AI development tools—now support MCP.

Client-Server Model Overview

MCP uses a client-server architecture. An AI application (host) connects to one or more MCP servers, and each connection is managed by a dedicated MCP client.

Two-Layer Architecture

  • Protocol Layer: A JSON-RPC 2.0-based protocol defining tool execution and resource retrieval. Request, response, and notification formats are standardized, enabling common message formats across any client-server pair.
  • Transport Layer: Manages the actual communication channel between client and server. The standard choices are stdio (standard I/O) for local environments and Streamable HTTP for remote environments.
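At the protocol layer, every exchange is a JSON-RPC 2.0 message. A minimal sketch of what a `tools/call` request and response look like on the wire (the tool name and arguments are made up; the field names follow the spec):

```python
import json

# A JSON-RPC 2.0 request asking the server to run a (hypothetical) "get_weather" tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Tokyo"},
    },
}

# The server's response carries the result under the same id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Sunny, 18°C"}],
        "isError": False,
    },
}

wire = json.dumps(request)          # what actually travels over stdio or HTTP
print(json.loads(wire)["method"])   # → tools/call
```

Because both transports carry these same messages, a server written once works unchanged over stdio or Streamable HTTP.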

Three Primitives Provided by Servers

  • Tools: Executable functions the AI can invoke—file operations, API calls, database queries, etc. The design philosophy requires user approval for tool execution.
  • Resources: Data sources providing context to the AI—file contents, database records, API responses in structured form. Read-only data controlled by the application side.
  • Prompts: Templates for structuring LLM interactions. Server-defined prompt templates are presented to users by the client to guide workflows.
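The three primitives can be pictured as a small registry that the server exposes to clients. The following is a stdlib-only conceptual sketch, not the real SDK API:

```python
# Conceptual model of an MCP server's three primitives (illustrative, not the SDK).
server = {
    "tools": {},       # model-controlled: functions the AI may invoke (with user approval)
    "resources": {},   # application-controlled: read-only context data
    "prompts": {},     # user-controlled: templates that guide workflows
}

def add_tool(name, description, fn):
    server["tools"][name] = {"description": description, "fn": fn}

def add_resource(uri, reader):
    server["resources"][uri] = reader

def add_prompt(name, template):
    server["prompts"][name] = template

add_tool("count_lines", "Count lines in a text blob", lambda text: text.count("\n") + 1)
add_resource("file:///notes.txt", lambda: "meeting notes...")
add_prompt("summarize", "Summarize the following:\n{input}")

# A client lists tools, then invokes one (in real MCP, only after user approval):
print(list(server["tools"]))                          # ['count_lines']
print(server["tools"]["count_lines"]["fn"]("a\nb"))   # 2
```

The comments reflect the spec's control model: tools are chosen by the model, resources by the host application, and prompts by the user.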

MCP vs. Competing Technologies

Difference from OpenAI Function Calling

OpenAI’s Function Calling embeds tool definitions directly into LLM requests. It’s simple and immediately usable but vendor-specific. MCP’s client-server separation allows a server built once to be reused across different clients—Claude Desktop, Cursor, VS Code, etc. Function Calling suits low-latency in-app automation; MCP excels at cross-runtime portability and shared integrations.

Relationship with Google A2A Protocol

Google’s A2A (Agent-to-Agent) protocol enables direct communication and task delegation between agents. While MCP standardizes “agent-to-tool connections,” A2A standardizes “agent-to-agent coordination.” They’re complementary, not competing: use MCP for tool access and A2A for inter-agent collaboration.

Transport Layer Evolution: From stdio to Streamable HTTP

stdio (Standard I/O) Transport

stdio is the simplest transport—it launches the MCP server as a local process and exchanges JSON-RPC messages via standard I/O. Easy to set up and ideal for local clients like Claude Desktop and Cursor. However, it’s limited to same-machine communication and cannot connect to remote services.
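For example, a local stdio server is wired into Claude Desktop by adding an entry to its `claude_desktop_config.json` (the server name and directory path below are illustrative; `@modelcontextprotocol/server-filesystem` is the official filesystem server):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

On restart, the client launches the command as a child process and speaks JSON-RPC over its stdin/stdout.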

Streamable HTTP Transport

Introduced in the 2025-03-26 specification revision, Streamable HTTP replaces the legacy HTTP+SSE transport. Clients POST JSON-RPC messages to a single HTTP endpoint, and the server can answer with a plain JSON response or stream messages back over Server-Sent Events on the same connection. Improved connection stability, resumability, and error recovery make it the recommended choice for enterprise remote MCP server deployments.
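The single-endpoint pattern can be demonstrated end to end with Python's standard library: a toy server answers JSON-RPC POSTs on one route, much as a Streamable HTTP server would, minus SSE streaming and session management. The server name and response shape are illustrative:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy server: a single /mcp endpoint that answers JSON-RPC POSTs.
# (Real Streamable HTTP servers may also stream responses back as SSE.)
class MCPHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        req = json.loads(self.rfile.read(length))
        resp = {"jsonrpc": "2.0", "id": req["id"],
                "result": {"serverInfo": {"name": "toy-server"}}}
        body = json.dumps(resp).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), MCPHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: everything rides on HTTP POSTs to the one endpoint.
url = f"http://127.0.0.1:{server.server_port}/mcp"
init = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
req = urllib.request.Request(url, data=json.dumps(init).encode(),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as r:
    reply = json.loads(r.read())
print(reply["result"]["serverInfo"]["name"])  # → toy-server
server.shutdown()
```

Because the transport is plain HTTP, remote MCP servers sit comfortably behind standard load balancers and gateways.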

MCP Security: OAuth 2.1 and Tool Poisoning Countermeasures

OAuth 2.1 Authentication and Authorization

The June 2025 MCP spec update (2025-06-18) bases authorization for HTTP transports on OAuth 2.1. PKCE (Proof Key for Code Exchange) is required for all authorization code flows, and short-lived access tokens are recommended. JIT (Just-In-Time) access tokens that expire once a task completes further shrink the window for unauthorized access.
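PKCE itself is only a few lines: the client keeps a random secret and sends just its SHA-256 hash with the authorization request, so an intercepted authorization code is useless without the secret. A stdlib sketch following RFC 7636:

```python
import base64
import hashlib
import secrets

# PKCE (RFC 7636): generate a one-time "code_verifier" and derive the
# "code_challenge" sent in the authorization request. The token endpoint later
# checks that the presented verifier hashes to the challenge it saw earlier.
def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

code_verifier = b64url(secrets.token_bytes(32))   # kept secret by the client
code_challenge = b64url(hashlib.sha256(code_verifier.encode()).digest())

print(len(code_verifier))  # 43 (within RFC 7636's 43–128 character range)
```

The challenge travels with the authorization request; the verifier is revealed only at token exchange, over TLS, directly to the token endpoint.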

Tool Poisoning Risks and Countermeasures

In multi-server environments, a malicious MCP server can manipulate its tool manifest and descriptions to trick an AI agent into executing harmful commands. This "tool poisoning" attack creates a classic confused-deputy problem: the agent's legitimate permissions are abused on the attacker's behalf. Countermeasures include running MCP server commands in sandboxed environments with minimal default permissions, deploying untrusted servers in Docker containers with restricted filesystem and network access, and requiring explicit user approval before any tool execution (a fundamental MCP security principle).
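The user-approval principle can be sketched as a thin gate in front of tool execution; this is an illustrative pattern, not SDK code:

```python
# Illustrative approval gate: tools run only after an explicit user decision.
APPROVED = set()

def approve(tool_name: str) -> None:
    """Record the user's explicit consent (in a real client, via a dialog)."""
    APPROVED.add(tool_name)

def call_tool(name: str, fn, *args):
    """Refuse to execute any tool the user has not approved."""
    if name not in APPROVED:
        raise PermissionError(f"tool '{name}' has not been approved by the user")
    return fn(*args)

try:
    call_tool("delete_file", lambda p: f"deleted {p}", "/tmp/x")
except PermissionError as e:
    print(e)  # blocked: no approval yet

approve("delete_file")
print(call_tool("delete_file", lambda p: f"deleted {p}", "/tmp/x"))  # deleted /tmp/x
```

Real clients such as Claude Desktop implement this gate as a confirmation dialog per tool invocation; a poisoned description can still mislead the model, but it cannot bypass the human in the loop.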

Top 6 MCP Servers

  • GitHub MCP Server: Create issues, approve pull requests, and execute Git operations via AI agents. Used for automated code reviews and repository management.
  • Slack MCP Server: List channels, post messages, reply to threads, add reactions, and retrieve history through AI agents. Used for incident response and standup summary automation.
  • Filesystem MCP Server: File read/write and directory operations with access controls. A foundational server for delegating local file search and editing to AI.
  • Database MCP Servers: Support for SQLite, PostgreSQL, MySQL, MariaDB, Oracle, and MS-SQL. AI agents can execute SQL queries directly for data analysis and reporting.
  • Playwright MCP Server: Browser-level automation for web scraping, form filling, and test automation via AI agents.
  • Salesforce MCP Server: Streamlines AI access to CRM data for customer lookup and automated sales report generation.

Enterprise Use Cases

  • DevOps Incident Response: Search keywords across Slack channels and auto-post status updates to incident channels
  • Automated Code Review: Combine GitHub MCP and Slack MCP to read PR content, search related messages, and generate review summaries
  • Project Standup Summaries: Auto-extract decisions, questions, and action items from project channel messages
  • Data Analysis Pipelines: Execute queries via Database MCP and share results to Slack in automated workflows

Key MCP Milestones

  • November 2024: Anthropic announces MCP as an open standard
  • March 2025: OpenAI officially adopts MCP and integrates it into ChatGPT Desktop; the 2025-03-26 spec revision introduces the Streamable HTTP transport
  • April 2025: Google DeepMind announces MCP support for Gemini models
  • May 2025: Microsoft announces Windows 11 MCP support at Build 2025
  • June 2025: Spec revision 2025-06-18 strengthens security with OAuth 2.1-based authorization and introduces the Elicitation feature
  • December 2025: Anthropic donates MCP to AAIF under the Linux Foundation, co-founded by Anthropic, OpenAI, and Block

Getting Started with MCP Development: Practical Guide

Preparing Your Environment

Official MCP SDKs are available for TypeScript, Python, Java, Kotlin, C#, plus expanding support for Go, PHP, Ruby, Rust, and Swift. TypeScript and Python SDKs have the most maturity with abundant documentation and sample code. Use stdio transport for local environments and Streamable HTTP for remote—that’s the basic decision rule.

3 Steps to Build an MCP Server

  • Step 1: Project Initialization — Install the official SDK and create the MCP server skeleton. Use npm for TypeScript, pip for Python. With FastMCP (Python) or the TypeScript MCP SDK, a working server takes only a few dozen lines.
  • Step 2: Define Tools — Define the tools your server provides: tool name, description, input parameter schema, and execution logic. The description is crucial as it’s what the AI uses to decide when to select the tool—write it clearly and specifically.
  • Step 3: Test and Debug — Use MCP Inspector to verify and test server behavior in the browser. Tool invocation, response verification, and error handling testing can all be done interactively.
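Step 2's tool definition reduces to a name, a description, and a JSON Schema for inputs. A stdlib-only sketch of roughly what the SDKs serialize (field names follow the spec's `tools/list` shape; the `add_numbers` tool is made up):

```python
import json

# What an MCP tool definition looks like once serialized; the description is the
# text the model reads when deciding whether to call this tool.
tool = {
    "name": "add_numbers",
    "description": "Add two integers and return the sum. "
                   "Use when the user asks for exact arithmetic.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            "b": {"type": "integer"},
        },
        "required": ["a", "b"],
    },
}

def add_numbers(a: int, b: int) -> int:
    return a + b

args = {"a": 2, "b": 3}
# A well-behaved server validates arguments against inputSchema before executing.
assert all(k in args for k in tool["inputSchema"]["required"])
print(add_numbers(**args))  # 5
print(json.dumps(tool["name"]))
```

With FastMCP or the TypeScript SDK, the schema above is generated for you from the function signature and docstring, which is why a clear, specific description matters so much.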

Latest Draft Features

The MCP specification includes draft-stage features. Tasks enables management of long-running operations with polling and deferred result retrieval. Elicitation allows MCP servers to request user input during tool execution—enabling confirmation dialogs, forms, and multi-step workflows. When formally adopted, these features will significantly expand MCP’s expressiveness.

FAQ

Q1. Do I need programming knowledge to use MCP?

Building MCP servers requires development knowledge, but if you’re just using existing servers, you can get started by simply adding server information to Claude Desktop or Cursor’s configuration file.

Q2. Is MCP server communication secure?

The MCP specification requires OAuth 2.1 authentication/authorization with PKCE and short-lived tokens recommended. However, security implementation is the server developer’s responsibility—only use trusted servers.

Q3. Does MCP compete with LangChain?

MCP and LangChain are complementary, not competing. LangChain is an agent orchestration framework; MCP is a tool connection protocol. A hybrid approach connecting LangChain/LangGraph to MCP servers via adapters is common.

Q4. Which transport should I choose?

stdio is best for local CLI tools and IDE integration. Streamable HTTP is the choice for web applications and cloud service integration. stdio offers easy setup; Streamable HTTP provides superior scalability.

Q5. Can individual developers use MCP?

Absolutely. Just connect official MCP servers (Filesystem, GitHub, SQLite, etc.) to Claude Desktop and you can delegate file operations and Git operations to your AI agent. It’s widely used as a tool to dramatically boost individual developer productivity.

Q6. Where can I find MCP servers?

Thousands of servers are registered on registries like Smithery and MCP.so. The official GitHub repository also publishes reference implementations of major servers. Search for a server matching your needs and start using it immediately.

Q7. What’s the outlook for MCP?

All major AI companies—Anthropic, OpenAI, Google, and Microsoft—support MCP, and it’s managed under the Linux Foundation. Cross-vendor adoption has progressed at unprecedented speed compared to standards like OpenAPI, OAuth 2.0, and HTML. MCP is highly likely to become the de facto standard for AI agent integration.

Conclusion

MCP is rapidly becoming the “common language” for AI agents to interact with the outside world. Just one year after Anthropic’s announcement in late 2024, the entire industry—including OpenAI, Google, and Microsoft—has adopted it, growing into an ecosystem with over 5,800 servers and 300+ client apps.

With Streamable HTTP transport, OAuth 2.1 security enhancements, and proposed features like Elicitation and Tasks, the specification is steadily maturing. While still a relatively new protocol, it will undoubtedly become foundational technology for AI agent development. Start by connecting an official MCP server to Claude Desktop and experience it firsthand.
