Model Context Protocol (MCP): Build a Tool Server
Build a Model Context Protocol server that exposes your company's tools and data — then connect a LangChain agent to it. Learn how MCP decouples tools from agents, when to use MCP vs Anthropic Skills vs native @tool, and why MCP is the emerging standard for AI tool interop.
What you'll learn
1. Why MCP Exists: The N×M Problem. Every LLM agent needs tools — look up a user, query a database, call an API. Without a standard, every agent re-implements every tool.
2. Build Your First MCP Server. The official mcp Python SDK is correct but low-level. FastMCP is the high-level framework built on top of it — decorator-based, no boilerplate. If you've used FastAPI, FastMCP will feel familiar.
3. Connect a Client (Direct Call, No Agent). An MCP client and server exchange JSON-RPC messages over a transport; with stdio, the client spawns the server as a subprocess.
4. Full Server with Tools + Resources. A minimal server with one tool is enough to teach the protocol, but production MCP servers typically expose multiple tools alongside read-only resources.
5. Agent + MCP: End-to-End. Everything so far has been infrastructure. Now the agent actually uses the MCP server — discovers tools at startup, the LLM decides when to call them, results flow back through the protocol.
6. Decision Framework: MCP vs Skills vs @tool. You now have hands-on experience with MCP. But MCP isn't the only option — and it isn't always the right one. Production agent engineers routinely choose between three patterns.
Prerequisites
- Completed Lab 1 (ReAct Agent with NIM) or equivalent
- Comfort with Python decorators and async/await
- Familiarity with JSON-RPC or stdio-based protocols helps (not required)
What you'll build in this MCP tool server lab
Model Context Protocol is the tool-interop standard that Claude Desktop, Claude Code, Cursor, Windsurf, LangChain agents, and OpenAI's Agents SDK all speak — and MCP server skills are becoming a hiring-relevant signal for platform engineers building AI infrastructure in 2026. This lab builds an MCP server from scratch on FastMCP, connects direct and LangChain clients to it, then pits MCP against Anthropic Skills and native @tool in a decision framework you can point at in an architecture review. You finish with a working server, a client that speaks the raw protocol, a LangChain agent consuming MCP tools alongside native ones, and concrete intuition for when each abstraction fits. Everything runs against NVIDIA NIM endpoints we provision.
The substance is tool decoupling. You internalise why MCP flips the N×M problem (M agents re-implementing N tools) into N+M (write each tool once, every client plugs in). You build server_basic.py with a @mcp.tool-decorated check_account(email) served over stdio, then write a direct client that spawns the server as a subprocess, completes the initialize handshake, lists tools via tools/list, and calls tools/call with no LLM in the loop so the protocol is visible. You expand to server_full.py with three tools plus a read-only Resource and an HTTP transport, then wire a LangChain agent via langchain-mcp-adapters. You also internalise Tools versus Resources (Tools are invocable with side effects, Resources are read-only data surfaces authorised differently) and stdio versus HTTP (stdio when colocated, HTTP the moment you share across hosts or teams).
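The three round-trips the direct client makes are plain JSON-RPC 2.0 frames, which is why the protocol stays visible with no LLM in the loop. A sketch of what travels over stdio — the method names come from the MCP spec, while the `id` values and `check_account` arguments are illustrative:

```python
import json

def request(id_, method, params):
    """Build a JSON-RPC 2.0 request frame as MCP sends it over the transport."""
    return {"jsonrpc": "2.0", "id": id_, "method": method, "params": params}

# 1. Handshake: the client announces its protocol version and capabilities.
init = request(1, "initialize", {
    "protocolVersion": "2024-11-05",   # a published MCP spec revision
    "capabilities": {},
    "clientInfo": {"name": "direct-client", "version": "0.1"},
})

# 2. Discovery: ask the server which tools it exposes (typed schemas come back).
list_tools = request(2, "tools/list", {})

# 3. Invocation: call one tool by name with JSON arguments -- no LLM involved.
call = request(3, "tools/call", {
    "name": "check_account",
    "arguments": {"email": "ada@example.com"},
})

for frame in (init, list_tools, call):
    print(json.dumps(frame))
```

Each frame is one line of JSON on the server's stdin; responses come back the same way on stdout, matched to requests by `id`.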
Prerequisites: Python decorators and async/await, the react-agent-nim lab or equivalent LangChain exposure, and a rough sense of JSON-RPC. The hosted environment ships with mcp, fastmcp, langchain-mcp-adapters, and the LangChain NIM integration preinstalled, running against our managed NIM proxy — no keys, no GPU provisioning. About 40 minutes of focused work. You leave with an MCP server, a direct client that round-trips tools/call, an MCP-consuming LangChain agent, a hybrid agent mixing MCP tools with native @tool, and a decision framework (Skills vs MCP vs native tools) you can cite when the next design debate lands.
Frequently asked questions
How is MCP different from just exposing tools as a REST API?
Unlike an ad-hoc REST API, MCP is one protocol every client already implements: any conforming server handles initialize, tools/list, tools/call, resources/list, resources/read, and prompts/list uniformly. You also get typed tool schemas, capability negotiation, and a notification channel built in. Think of it as 'LSP for agent tools': the same reason every editor can use every language's analyser once they all speak the Language Server Protocol.
When should I pick stdio transport versus HTTP?
Use stdio when client and server are colocated — the client spawns the server as a subprocess and exchanges JSON-RPC over its stdin/stdout. Switch to HTTP the moment the server needs to be shared across hosts or teams.
What's the difference between an MCP Tool and an MCP Resource?
Tools are operations the agent invokes, usually with side effects: create_ticket(subject, body), update_record(id, fields). Resources are read-only data surfaces the agent can fetch by URI — a policy document, a product catalogue, a team roster. The distinction matters because clients authorise them differently: Claude Desktop will happily load Resources into context automatically but prompts the user before calling Tools. The lab includes one Resource alongside the three Tools so you see both halves of the protocol and the authorisation consequences.
Do MCP servers have to be written in Python?
No — SDKs exist for TypeScript, Python (mcp / fastmcp, used here), Rust, Go, C#, Swift, and others. The lab uses FastMCP because its decorator API is the fastest path to a working server and because Python is the default for agent teams, but the exact same client code in Step 3 would talk to a TypeScript server with zero changes. That portability is the whole point — it's why Anthropic pushed a protocol rather than yet another library.
Should I use MCP or LangChain's native @tool decorator?
Use @tool when the tool is tightly coupled to one agent and implemented in the same repo — it's zero-overhead, type-safe, and trivial. Reach for MCP when the tool needs to be shared across multiple agents, especially agents owned by other teams or written in other languages. The hybrid-agent step is deliberate: real production systems use both. Inner-loop tools that only this agent calls stay as @tool; company-wide capabilities (account lookup, KB search, ticketing) become an MCP server everyone plugs into. The final decision framework step formalises the split.