So you’ve started looking into MCP servers and you’ve hit a wall almost immediately. The docs mention stdio, SSE, and HTTP Streamable, and unless you’ve spent time in the web development world, it’s not obvious what any of these mean or why you’d pick one over another.
Here’s the thing: the transport layer is one of the most important decisions you’ll make when building an MCP server. Get it wrong and you’ll either build something that only works on your laptop, or you’ll over-engineer a production deployment when a two-liner would have done the job.
In this post I’m going to break down all three transport types, show you exactly how to configure each one in FastMCP, and give you a clear decision guide so you know which one to reach for.
A Quick Primer: What Is a “Transport” in MCP?
Before we get into the types, let’s make sure we’re on the same page.
When your AI client (Claude Desktop, a Python agent, a VS Code extension) talks to your MCP server, they need to agree on how messages get sent back and forth. That's the transport layer. It has nothing to do with the tools you've built or what your server does; it's purely about the communication channel between client and server.
MCP currently supports three transports:
| Transport | Where It Runs | Best For |
|---|---|---|
| stdio | Local machine only | Claude Desktop, local scripts |
| SSE | Over a network | Remote servers, shared team tools |
| HTTP Streamable | Over a network | Production, modern clients |
Let’s dig into each one.
Transport 1: stdio (Standard Input/Output)
What Is It?
stdio is the simplest transport MCP has. When Claude Desktop launches your MCP server, it literally spawns it as a child process and communicates through standard input and standard output, the same pipes your terminal uses when you pipe commands together like cat file.txt | grep error.
Claude writes a JSON message to your server’s stdin. Your server reads it, does its thing, and writes a JSON response to stdout. Back and forth, for the life of the session. When you close Claude Desktop, the process dies.
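To make that exchange concrete, here's a sketch of the JSON-RPC 2.0 framing MCP uses over stdio — one JSON object per line. The tool name and arguments below are made up for illustration, not from a real server:

```python
import json

# A minimal sketch of a client-side "tools/call" request.
# The tool name and arguments are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "query_netbox", "arguments": {"endpoint": "dcim/devices"}},
}

# The client writes one JSON object per line to the server's stdin...
wire_format = json.dumps(request) + "\n"

# ...and the server parses it back off the pipe before dispatching.
parsed = json.loads(wire_format)
print(parsed["method"])  # → tools/call
```

That newline-delimited framing is exactly why stray output on stdout is fatal: anything that isn't one JSON object per line breaks the parser on the other end.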
How to Configure It in FastMCP
This is the default. You don’t have to do anything special:
```python
# server.py
from fastmcp import FastMCP

mcp = FastMCP(name="Network Automation MCP")

# ... register your tools ...

if __name__ == "__main__":
    mcp.run()  # stdio is the default transport
```
And in your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "network-automation": {
      "command": "python",
      "args": ["/full/path/to/server.py"]
    }
  }
}
```
Claude Desktop spawns the process, talks to it over stdio, and cleans it up when done. You never open a port. You never touch a firewall rule.
The Gotchas
stdout is sacred. Your MCP server communicates over stdout, which means anything else that writes to stdout will corrupt the protocol. This is a big one. If you have print() statements in your tools for debugging, they will break your server silently. Use stderr for any debug logging:
```python
import sys

# This will corrupt the stdio transport
print("Connecting to NetBox...")

# This is safe
print("Connecting to NetBox...", file=sys.stderr)
```
Or better yet, use Python’s logging module pointed at a log file.
It’s local only. stdio only works when the client can launch your server as a subprocess on the same machine. That means no remote clients, no sharing with your team, and no running your server on a central automation host.
When to Use stdio
- You’re connecting to Claude Desktop on your own machine
- You’re building and testing a new server
- Your tools only need to be accessible from one machine
- You want zero infrastructure overhead
For a lot of network engineers building personal automation tools, stdio is all you’ll ever need.
Transport 2: SSE (Server-Sent Events)
What Is It?
SSE is an HTTP-based transport where your MCP server runs as a persistent web server and clients connect to it over the network. The name comes from the underlying web technology: Server-Sent Events, a standard that lets a server push a stream of messages to a connected client over a long-lived HTTP connection.
The connection flow looks like this:
- Client opens an HTTP connection to your server's /sse endpoint
- Server keeps that connection open and streams messages back as events
- Client sends messages back via a separate HTTP POST to a /messages endpoint
- The connection stays alive until the client disconnects
This is how you get an MCP server that multiple clients can connect to, running on a remote host, accessible across your network.
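For a feel of what's on the wire, each message the server pushes is a plain-text SSE event. A simplified sketch of parsing one (real clients also handle reconnection and multi-line data fields, and the payload here is invented):

```python
import json

# A raw chunk as it might appear on the long-lived /sse connection.
raw_event = 'event: message\ndata: {"jsonrpc": "2.0", "id": 1, "result": {}}\n\n'

def parse_sse_event(chunk: str) -> dict:
    """Pull the event name and JSON payload out of one SSE event block."""
    fields = {}
    for line in chunk.strip().splitlines():
        key, _, value = line.partition(": ")
        fields[key] = value
    return {"event": fields.get("event"), "data": json.loads(fields["data"])}

parsed = parse_sse_event(raw_event)
print(parsed["event"])  # → message
```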
How to Configure It in FastMCP
```python
# server.py
from fastmcp import FastMCP

mcp = FastMCP(name="Network Automation MCP")

# ... register your tools ...

if __name__ == "__main__":
    mcp.run(transport="sse", host="0.0.0.0", port=8000)
```
Start the server:
```shell
python server.py
```
Your server is now listening on http://your-server-ip:8000/sse.
To connect a client, point it at the SSE endpoint. For a Python-based agent using the MCP SDK:
```python
from mcp import ClientSession
from mcp.client.sse import sse_client

async with sse_client("http://your-server-ip:8000/sse") as (read, write):
    async with ClientSession(read, write) as session:
        await session.initialize()

        # Now call your tools
        result = await session.call_tool("query_netbox", {
            "endpoint": "dcim/devices",
            "params": {"site": "nyc-dc1"}
        })
```
Running It as a Service
If you’re deploying this on a Linux host (Ubuntu, RHEL, etc.), you’ll want it running as a systemd service so it survives reboots:
```ini
# /etc/systemd/system/network-mcp.service
[Unit]
Description=Network Automation MCP Server
After=network.target

[Service]
Type=simple
User=automation
WorkingDirectory=/opt/network-mcp
ExecStart=/opt/network-mcp/venv/bin/python server.py
Restart=on-failure
EnvironmentFile=/opt/network-mcp/.env

[Install]
WantedBy=multi-user.target
```

```shell
sudo systemctl daemon-reload
sudo systemctl enable network-mcp
sudo systemctl start network-mcp
```
The Gotchas
SSE is being deprecated in the MCP spec. I’ll say it plainly: Anthropic and the MCP working group have marked SSE as a legacy transport. It still works, current clients still support it, and it’s fine for team-internal tooling right now, but if you’re building something you want to maintain long-term, you should be aware that HTTP Streamable is the direction the spec is heading. More on that in a moment.
Two endpoints, two connections. SSE uses /sse for server-to-client messages and /messages for client-to-server posts. This split can cause headaches with proxies, load balancers, and firewalls that aren’t expecting a long-lived GET alongside regular POSTs. If you’re routing this through Nginx or HAProxy, you’ll need to tune your proxy timeout settings so it doesn’t kill the SSE connection.
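For reference, the relevant Nginx knobs look something like this — the values are starting points, not gospel:

```nginx
location /sse {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;

    # Don't let Nginx kill the long-lived SSE connection after the
    # default 60s of silence
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;

    # Streamed events must reach the client immediately, not in batches
    proxy_buffering off;
    proxy_cache off;
}
```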
No built-in auth. FastMCP’s SSE transport doesn’t ship with authentication out of the box. If you’re exposing this beyond localhost, put it behind a reverse proxy with TLS and at minimum a shared secret header check. Don’t put an unauthenticated MCP server with access to your network devices on an open port.
When to Use SSE
- You need multiple clients connecting to a single server instance
- You’re sharing tools with a small team on an internal network
- Your AI agent runs remotely and needs to call your MCP tools
- You’re not ready to move to HTTP Streamable yet but need network access
Transport 3: HTTP Streamable
What Is It?
HTTP Streamable is the current recommended transport in the MCP spec and the one you should be building toward for anything production-facing. It solves the main architectural headache with SSE: instead of maintaining two separate connections, everything happens over a single HTTP endpoint using a smarter content negotiation approach.
The client POSTs a request to your server. Your server responds with either:
- A standard JSON response (for quick, single-response tools)
- A streaming response using SSE within a single HTTP response body (for tools that stream results back progressively)
The key difference from standalone SSE is that there's no persistent connection. Each request is independent. This makes it play nicely with every piece of HTTP infrastructure you already have (load balancers, API gateways, reverse proxies, service meshes) without any special configuration.
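You can see the negotiation in the response headers: the client decides how to consume the body based on the Content-Type the server chose. A sketch of that branching logic — the helper name is mine, not from any SDK:

```python
def is_streamed(content_type: str) -> bool:
    """True if the server chose to stream SSE events inside this response."""
    # A response body is either plain JSON or an event stream;
    # the media type tells the client which parser to use.
    return content_type.split(";")[0].strip() == "text/event-stream"

print(is_streamed("application/json"))                  # → False
print(is_streamed("text/event-stream; charset=utf-8"))  # → True
```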
How to Configure It in FastMCP
```python
# server.py
from fastmcp import FastMCP

mcp = FastMCP(name="Network Automation MCP")

# ... register your tools ...

if __name__ == "__main__":
    mcp.run(transport="http", host="0.0.0.0", port=8000)
```
That’s it. FastMCP handles the content negotiation internally. Clients that support HTTP Streamable will get streaming responses where needed. Clients that just want JSON get JSON.
Your single endpoint is: http://your-server-ip:8000/mcp
Putting It Behind Nginx (The Right Way)
For production, you want TLS termination and a reverse proxy in front. Here’s a minimal Nginx config:
```nginx
server {
    listen 443 ssl;
    server_name mcp.yourdomain.internal;

    ssl_certificate /etc/ssl/certs/mcp.crt;
    ssl_certificate_key /etc/ssl/private/mcp.key;

    location /mcp {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;

        # Required for streaming responses
        proxy_buffering off;
        proxy_cache off;

        # Pass the real client IP through
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;

        # Auth header check, basic shared secret
        # For production use OAuth or mTLS instead
        if ($http_x_mcp_token != "your-shared-secret") {
            return 403;
        }
    }
}
```
proxy_buffering off is not optional here. If Nginx buffers the response, your client won't receive streamed content until the buffer flushes, which defeats the purpose of streaming. This one will waste an hour of your life if you miss it.
Connecting a Python Agent
```python
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async with streamablehttp_client(
    "https://mcp.yourdomain.internal/mcp",
    headers={"X-MCP-Token": "your-shared-secret"}
) as (read, write, _):
    async with ClientSession(read, write) as session:
        await session.initialize()

        result = await session.call_tool("query_netbox", {
            "endpoint": "dcim/devices",
            "params": {"site": "nyc-dc1"}
        })
```
When to Use HTTP Streamable
- You’re building something that needs to be production-grade
- You need to run behind a load balancer or API gateway
- Multiple services or teams will be consuming your MCP server
- You want something that will still work when MCP client support matures
- You’re starting a new server from scratch and don’t have a reason to use SSE
The Decision Guide
Still not sure which one to pick? Here’s the short version:
```
Are you connecting Claude Desktop on your own machine?
└─ YES → Use stdio. Don't overthink it.

Do you need remote or multi-client access?
└─ YES → Is this internal/experimental?
   ├─ YES → SSE is fine for now, just know it's legacy
   └─ NO (production / long-term) → HTTP Streamable

Are you building something from scratch and want to do it right?
└─ HTTP Streamable, always.
```
Putting It All Together: Supporting Multiple Transports
Here’s a trick worth knowing: you don’t have to pick just one. You can expose the same FastMCP server over multiple transports using a simple environment variable to switch between them. This is useful when you want to run locally over stdio during development and deploy over HTTP Streamable in production: same codebase, no changes.
```python
# server.py
import os

from fastmcp import FastMCP

mcp = FastMCP(name="Network Automation MCP")

# ... register your tools ...

if __name__ == "__main__":
    transport = os.getenv("MCP_TRANSPORT", "stdio")

    if transport == "stdio":
        mcp.run()
    elif transport == "sse":
        mcp.run(transport="sse", host="0.0.0.0", port=int(os.getenv("MCP_PORT", 8000)))
    elif transport == "http":
        mcp.run(transport="http", host="0.0.0.0", port=int(os.getenv("MCP_PORT", 8000)))
    else:
        raise ValueError(f"Unknown transport: {transport}")
```
Run it locally with stdio (default):
```shell
python server.py
```
Run it as an HTTP Streamable server:
```shell
MCP_TRANSPORT=http MCP_PORT=8000 python server.py
```
Summary
MCP gives you three ways to get your tools to an AI. stdio is for local use: zero overhead, zero infrastructure, perfect for Claude Desktop. SSE gets you onto the network with minimal effort but is being phased out and comes with proxy headaches. HTTP Streamable is where the spec is heading: standard HTTP, production-friendly, and the right choice for anything you want to run beyond your own machine.
If you haven’t already built your first MCP server, check out my previous post on building a network automation MCP with FastMCP, where we built the three core network tools that power all the examples above.
Next up: Skills vs. MCP, do you even need a server at all?
Questions? Hit me up in the comments or on LinkedIn.
David Henderson | Network Doodles, Decoding Tech, One Doodle at a Time