# Mastering MCP Servers with LangChain and LangGraph: A Beginner’s Guide
Welcome to this hands-on tutorial on integrating MCP servers with LangChain and LangGraph! If you’re new to these technologies, don’t worry – we’ll break everything down step by step. MCP (Model Context Protocol) servers are a powerful way to build workflows that let language models interact with custom tools. We’ll use LangChain to create agents that can call these tools, and we’ll touch on LangGraph for more advanced setups. By the end, you’ll have a working example of a math-focused MCP server, plus tips on using pre-built ones.
For a visual walkthrough, check out the accompanying YouTube video: Mastering MCP Servers with LangChain and LangGraph. It complements this guide with live demos and explanations – click to watch and follow along!
This guide is based on real code you can copy and run. We’ll cover setup, tool creation, running the server, and integrating it all with a LangChain agent. You’ll need Python installed, along with the `langchain`, `langgraph`, `langchain-openai` (or an alternative like `langchain-ollama`), `langchain-mcp-adapters`, and `mcp` packages. Install them via pip, for example: `pip install mcp langchain langgraph langchain-openai langchain-ollama langchain-community langchain-mcp-adapters`.
## Introduction to MCP Servers
MCP servers act as backends that expose tools for language models to use, making it easier to build intelligent agents. They’re especially useful in programming environments where you need to handle tasks like calculations, data fetching, or custom logic. In this tutorial, we’ll work with two key files: a custom MCP server script (math_server.py) for basic math operations, and a main workflow script that connects to it, adds more tools, and runs a LangChain agent.
Think of MCP servers as modular plugins – you define tools with descriptions so the language model (LM) knows when to use them. We’ll start with a simple custom server called “Math Server” and later explore pre-made options for efficiency.
## Setting Up a Simple Math Server
Let’s kick things off by creating our custom Math MCP Server. This server will handle basic arithmetic: addition, subtraction, multiplication, division, exponentiation, and square roots. We’ll use the `FastMCP` class from the `mcp` package, which you can install with `pip install mcp`.
Create a file called `math_server.py` and add the following code. This script initializes the server and defines the tools as decorated functions.
"""
Math MCP Server - provides basic arithmetic operations
Run this as: python math_server.py
"""
from mcp.server.fastmcp import FastMCP
############################ Server Initialization ############################
# Create MCP server instance
mcp = FastMCP("Math Server")
############################## Tool Definitions ##############################
@mcp.tool() # Register the function as a callable tool
def add(a: int, b: int) -> int:
"""Add two numbers together."""
return a + b
@mcp.tool()
def subtract(a: int, b: int) -> int:
"""Subtract second number from first number."""
return a - b
@mcp.tool()
def multiply(a: int, b: int) -> int:
"""Multiply two numbers together."""
return a * b
@mcp.tool()
def divide(a: float, b: float) -> float:
"""Divide first number by second number."""
# Prevent division by zero
if b == 0:
raise ValueError("Cannot divide by zero")
return a / b
@mcp.tool()
def power(base: float, exponent: float) -> float:
"""Raise base to the power of exponent."""
return base ** exponent
@mcp.tool()
def square_root(number: float) -> float:
"""Calculate square root of a number."""
# Prevent taking the square root of a negative number
if number < 0:
raise ValueError("Cannot calculate square root of negative number")
return number ** 0.5
############################## Server Execution ##############################
if __name__ == "__main__":
print("🧮 Starting Math MCP Server...")
mcp.run(transport='stdio') # Use standard I/O for communication
To run this server, simply execute `python math_server.py` in your terminal. It starts a process that listens for connections via standard input/output (stdio). The tools are now ready – each has a description that helps the LM decide when to call it, like “Add two numbers together” for the `add` function.
In the code, notice how we handle edge cases, such as preventing division by zero or square roots of negative numbers. This makes the server robust for real-world use.
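Before wiring the server into an agent, you can sanity-check it with a small standalone client. The sketch below (a hypothetical `quick_check.py`, assumed to live next to `math_server.py`) uses the same `mcp` client primitives our main script will rely on, and simply lists the tools the server exposes:

```python
# quick_check.py - minimal sketch to verify the Math Server exposes its tools
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_tools():
    # Launch math_server.py as a subprocess and talk to it over stdio
    params = StdioServerParameters(command='python', args=['math_server.py'])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()  # Perform the MCP handshake
            result = await session.list_tools()
            print('Exposed tools:', [t.name for t in result.tools])

if __name__ == '__main__':
    asyncio.run(list_tools())
```

If all six tool names print, the server is working and ready for the agent.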
## Creating and Adding Tools
With the Math Server set up, let’s integrate it into a larger workflow. We’ll create a main script that connects to the server, loads its tools, and adds some custom local tools. These custom tools will include getting the current time and calculating percentages, which aren’t part of the math server but enhance our agent’s capabilities.
We’ll also combine these with LangChain-compatible tools. The key here is using `langchain_mcp_adapters` to bridge MCP tools into the LangChain format.
Here’s the main script (save it as something like `main.py`). It uses asyncio for asynchronous operations, starts the MCP server as a subprocess, and sets up a client session.
```python
import asyncio
import datetime

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI  # Or use another LLM like ChatGoogleGenerativeAI
from langchain_ollama import ChatOllama
from langchain_core.tools import tool
from langchain_community.tools import DuckDuckGoSearchRun

################################ Configure MCP Server ################################

# Define how to start the external MCP server process
server_params = StdioServerParameters(
    command='python',
    args=['math_server.py'],  # Path to the server script
    env=None,
)

# Example using the pre-made Fetch server (install with: pip install mcp-server-fetch)
# server_params = StdioServerParameters(
#     command='python',
#     args=['-m', 'mcp_server_fetch'],
#     env=None,
# )

################################ Define Custom Local Tools ################################

@tool
def get_current_time() -> str:
    """Get the current date and time."""
    return datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')

@tool
def calculate_percentage(value: float, percentage: float) -> float:
    """Calculate what a given percentage of a value is.

    Args:
        value: The base value
        percentage: The percentage to calculate (e.g., 20 for 20%)
    """
    return (value * percentage) / 100

################################ Main Agent Logic ################################

async def main():
    # Start the MCP server as a subprocess
    async with stdio_client(server_params) as (read, write):
        # Establish a client session with the running server
        async with ClientSession(read, write) as session:
            await session.initialize()  # Finalize the connection and handshake

            # Load tools exposed by the remote MCP server
            mcp_tools = await load_mcp_tools(session)
            print('MCP tools:', [t.name for t in mcp_tools])

            # Define additional tools available in this local script
            custom_tools = [
                get_current_time,
                calculate_percentage,
            ]

            # Combine remote and local tools into a single list for the agent
            all_tools = mcp_tools + custom_tools
            print('All available tools:', [t.name for t in all_tools])

            # Configure the Large Language Model
            llm = ChatOpenAI(model='gpt-4o', temperature=0)
            # llm = ChatOllama(model='llama3.2', temperature=0)  # Or use a local Ollama model

            # Create a ReAct agent that can use the combined toolset
            agent = create_react_agent(llm, all_tools)

            # Send a complex, multi-tool query to the agent
            response = await agent.ainvoke({
                'messages': [
                    {
                        'role': 'user',
                        'content': "What's the current time? Also calculate (3 + 5) * 12 and then find 15% of that result.",
                    }
                ]
            })

            # Example query for the web fetch tool
            # response = await agent.ainvoke({
            #     'messages': [
            #         {
            #             'role': 'user',
            #             'content': 'fetch the website https://langchain-ai.github.io/langgraph/agents/mcp/ and summarize it',
            #         }
            #     ]
            # })

            # Print the agent's final response
            print('Agent response:', response['messages'][-1].content)

################################ Run the Application ################################

if __name__ == '__main__':
    asyncio.run(main())
```
In this script, the `main` function handles the handshake with the server using `ClientSession`. It loads the MCP tools asynchronously, then adds the custom ones: `get_current_time` (which uses Python’s `datetime` module) and `calculate_percentage` (a simple math function decorated as a LangChain tool). Everything is combined into `all_tools` for the agent.
Run this with `python main.py`. You’ll see the available tools printed, and the agent will process a sample query that chains multiple tools – adding numbers, multiplying, and then calculating a percentage.
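The exact sequence of tool calls depends on the model, but for the sample query a typical chain looks like this (the arithmetic is easy to verify by hand):

```python
# A typical tool chain for the sample query (model-dependent):
add(3, 5)                     # -> 8
multiply(8, 12)               # -> 96
calculate_percentage(96, 15)  # -> 14.4 (15% of 96)
```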
## Running the MCP Server
Once everything is set up, running the server is seamless. The main script launches the math server as a subprocess, connects via stdio, and initializes the session. You can also swap in a different LLM – for local runs with Ollama, uncomment the `ChatOllama` line and comment out the `ChatOpenAI` one.
Test it with the provided query: it gets the time, performs math using the MCP tools, and computes the percentage locally. The agent’s response will show how it reasons step by step, calling tools as needed. If you see output like the list of tools and a final answer, you’re good! This demonstrates the server’s power in handling real queries.
## Using Pre-Made MCP Servers
Custom servers are great for control, but pre-made ones save time. For instance, install the Fetch server with `pip install mcp-server-fetch`. In the main script, comment out the math server params and uncomment the fetch ones. Now your agent can fetch web content – try the commented-out example query to summarize a URL.
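You may also have noticed that `main.py` imports `MultiServerMCPClient` without using it. That class lets you connect to several MCP servers at once instead of swapping `server_params` by hand. Here’s a minimal sketch – note that this client’s API has changed across `langchain-mcp-adapters` releases (older versions used it as an async context manager), so check it against the version you have installed:

```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient

async def load_all_tools():
    # Each key names a server; each config mirrors StdioServerParameters
    client = MultiServerMCPClient({
        'math': {
            'command': 'python',
            'args': ['math_server.py'],
            'transport': 'stdio',
        },
        'fetch': {
            'command': 'python',
            'args': ['-m', 'mcp_server_fetch'],
            'transport': 'stdio',
        },
    })
    tools = await client.get_tools()  # Aggregate tools from every server
    print('All MCP tools:', [t.name for t in tools])
    return tools

if __name__ == '__main__':
    asyncio.run(load_all_tools())
```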
This expands your toolkit without writing extra code. Tools like `DuckDuckGoSearchRun` (imported in the code) can be added similarly for even more functionality, as shown below.
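As a quick sketch of that idea: `DuckDuckGoSearchRun` is a regular LangChain tool, so it slots straight into the combined list inside `main()` (it typically needs an extra package, e.g. `pip install duckduckgo-search`):

```python
from langchain_community.tools import DuckDuckGoSearchRun

# Instantiate the search tool and append it alongside the MCP and custom tools
search = DuckDuckGoSearchRun()
all_tools = mcp_tools + custom_tools + [search]
```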
## Conclusion
You’ve now mastered the basics of MCP servers with LangChain, from building a custom Math Server to integrating tools and running agents. We focused on LangChain here, but in future tutorials, we’ll dive into LangGraph for more complex workflows like multi-agent systems. Experiment with the code, tweak the tools, and try different LLMs. If you run into issues, check your package installations or the console output for errors. Happy coding – what’s your first project idea with this setup?