Chapter 3b - Tools and MCP¶
Companion to book/ch03b_tools_mcp.md. Runs top-to-bottom in Google Colab in mock mode with no API key required.
# Clone the repo (skip if already present - Colab keeps files across runs in one session)
import os
if not os.path.exists("crafting-agentic-swarms"):
    !git clone https://github.com/TheAiSingularity/crafting-agentic-swarms.git
%cd crafting-agentic-swarms
!pip install -e ".[dev]" --quiet
!pip install matplotlib plotly ipywidgets --quiet
import os
try:
    from google.colab import userdata
    os.environ["ANTHROPIC_API_KEY"] = userdata.get("ANTHROPIC_API_KEY")
    print("Using real API (key from Colab secrets).")
except Exception:
    os.environ.setdefault("SWARM_MOCK", "true")
    print("Running in mock mode (no API key needed).")
What you'll build here¶
- Register a Python function as an internal tool with swarm.tools.registry.
- Spawn an MCP server from an inline Python source string, then connect to it via stdio.
- Call list_tools() and call_tool(...), printing the actual JSON-RPC round-trip.
- Extend the server's toolset and reconnect to show the tool list growing.
- Render the handshake as a simple text sequence diagram.
1. An internal Python tool¶
from swarm.tools.registry import GLOBAL_REGISTRY, register
@register(
    name="reverse",
    description="Reverse a string",
    schema={
        "type": "object",
        "properties": {"s": {"type": "string"}},
        "required": ["s"],
    },
)
async def reverse(s: str) -> str:
    return s[::-1]
print("Registered:", [t["name"] for t in GLOBAL_REGISTRY.list_tools()])
print("Dispatch result:", await GLOBAL_REGISTRY.dispatch("reverse", {"s": "hello"}))
Two moving parts: the schema (JSON Schema - tells the model how to call the tool) and the function (actual work). dispatch handles sync/async, timeouts, and error wrapping.
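To see how those parts fit together, here is a minimal sketch of what a registry of this kind does internally. This is a hypothetical MiniRegistry, not the actual swarm.tools.registry implementation (which also adds timeouts and error wrapping); it only shows the schema-plus-function pairing and the sync/async handling.

```python
import asyncio
import inspect

class MiniRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, description, schema):
        # Decorator factory: stores the schema alongside the callable.
        def deco(fn):
            self._tools[name] = {"name": name, "description": description,
                                 "input_schema": schema, "fn": fn}
            return fn
        return deco

    def list_tools(self):
        # Expose only the model-facing fields, not the callable itself.
        return [{k: v for k, v in t.items() if k != "fn"}
                for t in self._tools.values()]

    async def dispatch(self, name, args):
        result = self._tools[name]["fn"](**args)
        if inspect.isawaitable(result):  # handle sync and async tools alike
            result = await result
        return result

reg = MiniRegistry()

@reg.register("reverse", "Reverse a string",
              {"type": "object", "properties": {"s": {"type": "string"}},
               "required": ["s"]})
def reverse(s):
    return s[::-1]

print(asyncio.run(reg.dispatch("reverse", {"s": "hello"})))  # olleh
```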
2. Write an MCP server from inside this notebook¶
An MCP server is any process that speaks JSON-RPC 2.0 over stdio. Below is a minimal one: it initializes, lists one tool, and answers get_time with the current UTC timestamp. We write it to /tmp/mcp_demo_server.py so we can spawn it.
from pathlib import Path
SERVER_V1 = """import json, sys, datetime

TOOLS = [
    {
        "name": "get_time",
        "description": "Return the current UTC time as an ISO string.",
        "inputSchema": {"type": "object", "properties": {}},
    }
]

def send(obj):
    sys.stdout.write(json.dumps(obj) + "\\n")
    sys.stdout.flush()

def handle(msg):
    method = msg.get("method")
    if method == "initialize":
        return {"protocolVersion": msg["params"].get("protocolVersion", "2024-11-05"),
                "capabilities": {"tools": {}}, "serverInfo": {"name": "demo", "version": "1"}}
    if method == "tools/list":
        return {"tools": TOOLS}
    if method == "tools/call":
        if msg["params"]["name"] == "get_time":
            now = datetime.datetime.utcnow().isoformat()
            return {"content": [{"type": "text", "text": now}]}
    return None

for raw in sys.stdin:
    try:
        msg = json.loads(raw)
    except Exception:
        continue
    if "id" not in msg:
        continue
    result = handle(msg)
    if result is None:
        send({"jsonrpc": "2.0", "id": msg["id"],
              "error": {"code": -32601, "message": "Method not found"}})
    else:
        send({"jsonrpc": "2.0", "id": msg["id"], "result": result})
"""
Path("/tmp/mcp_demo_server.py").write_text(SERVER_V1)
print("Wrote /tmp/mcp_demo_server.py:", len(SERVER_V1), "bytes")
3. Connect, list, call¶
import sys
from swarm.tools.mcp_client import MCPClient
client = MCPClient()
await client.connect(sys.executable, ["/tmp/mcp_demo_server.py"])
print(f"Connected - protocol={client.protocol_version} server={client.server_info}")
tools_v1 = await client.list_tools()
print(f"\ntools/list returned {len(tools_v1)} tool(s):")
for t in tools_v1:
    print(f" - {t['name']}: {t['description']}")
now_text = await client.call_tool("get_time", {})
print(f"\ntools/call get_time -> {now_text!r}")
await client.close()
4. Extend the server, reconnect, see the list grow¶
SERVER_V2 = """import json, sys, datetime

TOOLS = [
    {
        "name": "get_time",
        "description": "Return the current UTC time as an ISO string.",
        "inputSchema": {"type": "object", "properties": {}},
    },
    {
        "name": "add",
        "description": "Add two numbers.",
        "inputSchema": {
            "type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
            "required": ["a", "b"],
        },
    },
]

def send(obj):
    sys.stdout.write(json.dumps(obj) + "\\n")
    sys.stdout.flush()

def handle(msg):
    method = msg.get("method")
    if method == "initialize":
        return {"protocolVersion": msg["params"].get("protocolVersion", "2024-11-05"),
                "capabilities": {"tools": {}}, "serverInfo": {"name": "demo", "version": "2"}}
    if method == "tools/list":
        return {"tools": TOOLS}
    if method == "tools/call":
        name = msg["params"]["name"]
        args = msg["params"].get("arguments", {})
        if name == "get_time":
            return {"content": [{"type": "text", "text": datetime.datetime.utcnow().isoformat()}]}
        if name == "add":
            total = args["a"] + args["b"]
            return {"content": [{"type": "text", "text": str(total)}]}
    return None

for raw in sys.stdin:
    try:
        msg = json.loads(raw)
    except Exception:
        continue
    if "id" not in msg:
        continue
    result = handle(msg)
    if result is None:
        send({"jsonrpc": "2.0", "id": msg["id"],
              "error": {"code": -32601, "message": "Method not found"}})
    else:
        send({"jsonrpc": "2.0", "id": msg["id"], "result": result})
"""
Path("/tmp/mcp_demo_server.py").write_text(SERVER_V2)
client = MCPClient()
await client.connect(sys.executable, ["/tmp/mcp_demo_server.py"])
tools_v2 = await client.list_tools()
print(f"After upgrade: {len(tools_v2)} tool(s)")
for t in tools_v2:
    print(f" - {t['name']}: {t['description']}")
sum_text = await client.call_tool("add", {"a": 2, "b": 40})
print(f"\ntools/call add({{a:2,b:40}}) -> {sum_text!r}")
await client.close()
5. Handshake - textual sequence diagram¶
print("Client Server")
print("------ ------")
print(" |-- initialize(protocolVersion) ---------->|")
print(" |<-- result{protocolVersion, serverInfo} --|")
print(" |-- notifications/initialized ------------>|")
print(" |")
print(" |-- tools/list --------------------------->|")
print(" |<-- result{tools: [...]} -----------------|")
print(" |")
print(" |-- tools/call{name, arguments} ---------->|")
print(" |<-- result{content: [...]} ---------------|")
print(" |")
print(" |-- stdin close -------------------------->|")
That's the whole protocol. Any process speaking that dance on stdio is an MCP server. Your internal tools (Section 1) and MCP tools both end up as dicts with the same {name, description, input_schema} shape, which is why run_loop can consume them interchangeably.
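Because it is just line-delimited JSON-RPC, the whole exchange can be hand-rolled with nothing but subprocess and json. The sketch below talks to a throwaway one-tool server; mini_mcp_server.py, the ping tool, and the rpc helper are all invented here for illustration and are not part of the swarm library.

```python
import json
import os
import subprocess
import sys
import tempfile

SERVER = '''import json, sys
for raw in sys.stdin:
    msg = json.loads(raw)
    if "id" not in msg:
        continue
    if msg.get("method") == "initialize":
        result = {"protocolVersion": "2024-11-05",
                  "serverInfo": {"name": "mini", "version": "1"}}
    elif msg.get("method") == "tools/list":
        result = {"tools": [{"name": "ping", "description": "pong",
                             "inputSchema": {"type": "object"}}]}
    else:
        result = {}
    sys.stdout.write(json.dumps({"jsonrpc": "2.0", "id": msg["id"],
                                 "result": result}) + "\\n")
    sys.stdout.flush()
'''

path = os.path.join(tempfile.gettempdir(), "mini_mcp_server.py")
with open(path, "w") as f:
    f.write(SERVER)

proc = subprocess.Popen([sys.executable, path], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)

def rpc(req_id, method, params):
    # One JSON-RPC round-trip: write a request line, read a response line.
    proc.stdin.write(json.dumps({"jsonrpc": "2.0", "id": req_id,
                                 "method": method, "params": params}) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

init = rpc(1, "initialize", {"protocolVersion": "2024-11-05"})
tools = rpc(2, "tools/list", {})
proc.stdin.close()  # closing stdin ends the server's read loop
proc.wait()

print("server:", init["result"]["serverInfo"]["name"])
print("tools:", [t["name"] for t in tools["result"]["tools"]])
```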
6. Bridge MCP tools into the loop¶
client = MCPClient()
await client.connect(sys.executable, ["/tmp/mcp_demo_server.py"])
mcp_tool_schemas = await client.list_tools()
print("MCP-provided tools ready for run_loop:")
for t in mcp_tool_schemas:
print(f" {t['name']:10s} {t['description']}")
await client.close()
# The shape is identical to internal tools - drop them into tools= on run_loop.
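One detail worth hedging: on the wire, MCP spells the schema key inputSchema (camelCase). If the client you are using hands back raw wire-format descriptors rather than the normalized shape, a small adapter bridges the gap. normalize_mcp_tool below is a hypothetical helper, not part of the swarm library.

```python
def normalize_mcp_tool(tool: dict) -> dict:
    # Map the MCP wire format (camelCase "inputSchema") onto the
    # snake_case shape the internal registry uses.
    return {
        "name": tool["name"],
        "description": tool.get("description", ""),
        "input_schema": tool.get("inputSchema") or tool.get("input_schema", {}),
    }

raw = {"name": "add", "description": "Add two numbers.",
       "inputSchema": {"type": "object",
                       "properties": {"a": {"type": "number"},
                                      "b": {"type": "number"}},
                       "required": ["a", "b"]}}
print(normalize_mcp_tool(raw))
```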
7. Real-API gate¶
No gate needed - everything above uses the subprocess, not the LLM. MCP can be tested fully offline.
8. Tool dispatch timing¶
Internal Python tools dispatch in-process through GLOBAL_REGISTRY.dispatch. MCP tools pay a subprocess round-trip: JSON serialize -> write to stdin -> read from stdout -> JSON parse. Here's the per-call overhead for both paths.
import time
N = 50
# Internal dispatch
t0 = time.monotonic()
for _ in range(N):
    await GLOBAL_REGISTRY.dispatch("reverse", {"s": "hello"})
internal_ms = (time.monotonic() - t0) * 1000 / N
# MCP dispatch (reuse a single connection)
client = MCPClient()
await client.connect(sys.executable, ["/tmp/mcp_demo_server.py"])
t0 = time.monotonic()
for _ in range(N):
    await client.call_tool("add", {"a": 1, "b": 2})
mcp_ms = (time.monotonic() - t0) * 1000 / N
await client.close()
print(f"Internal dispatch: {internal_ms:.2f} ms/call ({N} calls)")
print(f"MCP round-trip: {mcp_ms:.2f} ms/call ({N} calls)")
print(f"Overhead ratio: {mcp_ms / max(internal_ms, 0.01):.1f}x")
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(6, 3.5))
bars = ax.bar(["internal", "MCP stdio"], [internal_ms, mcp_ms], color=["#3b82f6", "#f59e0b"])
for bar, val in zip(bars, [internal_ms, mcp_ms]):
    ax.text(bar.get_x() + bar.get_width() / 2, val, f"{val:.2f} ms", ha="center", va="bottom")
ax.set_ylabel("ms / call (mean)")
ax.set_title("Dispatch overhead - internal vs MCP")
plt.tight_layout()
plt.show()
MCP adds stdio round-trip latency - a fraction of a millisecond locally, potentially more over slower process boundaries. That's typically acceptable, but if you have a hot loop calling a tool hundreds of times, internal registration wins.
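If you are stuck with a hot loop, one mitigation short of internal registration is memoizing deterministic tools by (name, arguments). This is a sketch only, safe for pure tools like add but never for stateful ones like get_time; fake_call_tool below is a stand-in for client.call_tool.

```python
import asyncio
import json

def memoized(call_tool):
    # Cache results keyed by tool name + canonicalized arguments.
    cache = {}
    async def wrapper(name, arguments):
        key = (name, json.dumps(arguments, sort_keys=True))
        if key not in cache:
            cache[key] = await call_tool(name, arguments)
        return cache[key]
    return wrapper

calls = 0

async def fake_call_tool(name, arguments):  # stand-in for client.call_tool
    global calls
    calls += 1
    return str(arguments["a"] + arguments["b"])

async def demo():
    cached = memoized(fake_call_tool)
    first = await cached("add", {"a": 1, "b": 2})
    second = await cached("add", {"b": 2, "a": 1})  # same args, different order
    return first, second

results = asyncio.run(demo())
print(results, "round-trips:", calls)  # ('3', '3') round-trips: 1
```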
9. Failure modes - broken server¶
What if the server crashes mid-call, or never responds? MCPClient raises MCPError with actionable context. Let's simulate each.
from swarm.tools.mcp_client import MCPError
# Scenario 1: server does not exist.
client = MCPClient()
try:
    await client.connect("/nonexistent/binary", [])
except Exception as exc:
    print(f"Missing binary: {type(exc).__name__}: {exc}")
finally:
    await client.close()
# Scenario 2: server exits before handshake completes.
bad_server = "/tmp/bad_mcp_server.py"
Path(bad_server).write_text("import sys; sys.exit(1)")
client = MCPClient(handshake_timeout_s=2.0)
try:
    await client.connect(sys.executable, [bad_server])
except MCPError as exc:
    print(f"Crashed during handshake: {exc}")
finally:
    await client.close()
Both failures return quickly with a clear error. In an agent, you wrap client.connect(...) in a try and fall back to internal tools when the server is unavailable.
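That fallback pattern can be sketched as follows. connect_mcp and internal_tools are hypothetical stand-ins for MCPClient.connect and GLOBAL_REGISTRY.list_tools; the broken_connect below simulates an unavailable server.

```python
import asyncio

async def load_tools(connect_mcp, internal_tools):
    # Try the MCP server first; on any failure, degrade to internal tools.
    try:
        client = await connect_mcp()
        return await client.list_tools(), client
    except Exception:
        return internal_tools(), None

async def broken_connect():  # simulates a server that is down
    raise ConnectionError("server unavailable")

def internal_tools():
    return [{"name": "reverse", "description": "Reverse a string",
             "input_schema": {"type": "object"}}]

tools, client = asyncio.run(load_tools(broken_connect, internal_tools))
print("fallback tools:", [t["name"] for t in tools], "client:", client)
```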
Takeaways¶
- An MCP server is a subprocess that speaks JSON-RPC 2.0 on stdio.
- Three-step handshake: initialize, notifications/initialized, then request/response.
- Both internal tools and MCP tools reduce to a common {name, description, input_schema} dict.
- MCP adds stdio round-trip overhead - negligible for most workloads, painful in hot loops.
- Broken servers surface as MCPError; wrap connect in a try and degrade to internal tools.