AI Agents · MCP

How Model Context Protocol changes agent design

Most agent architectures are a while loop with an LLM in it. The interesting design question isn't the loop — it's the interface between the model and the rest of the system. MCP is Anthropic's answer to that question. This piece covers what the protocol specifies and how SysSimulator maps onto it.

3 capability types · 2 transport options · 5 simulation tools exposed · JSON-RPC 2.0 protocol

01 — Protocol

What MCP Actually Specifies

MCP defines a client-server protocol over JSON-RPC 2.0. The MCP host (the application managing the LLM — Claude Desktop, a custom app, an IDE plugin) runs one or more MCP clients. Each client connects to an MCP server that exposes some set of capabilities.

Those capabilities fall into three categories. The protocol separates two concerns that usually get conflated: what capabilities exist (the MCP server knows) and when to use them (the model decides, with the host mediating).

Tools

Functions the model can call

A tool has a name, a description, and a JSON Schema-defined input spec. The model decides when to call a tool; the MCP server executes it.

get_weather
search_codebase
create_github_issue
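As a concrete sketch, here is roughly what one of these tool definitions looks like in a tools/list response. The top-level field names (name, description, inputSchema) follow the MCP spec; the get_weather parameters themselves are invented for illustration.

```python
# Hypothetical tool definition, shaped like an entry in an MCP tools/list
# response. The weather-specific schema is an assumption for illustration.
get_weather_tool = {
    "name": "get_weather",
    "description": "Return current weather conditions for a location.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"},
            "units": {"type": "string", "enum": ["metric", "imperial"]},
        },
        "required": ["location"],
    },
}
# The model reads this schema and emits call arguments that conform to it;
# the server validates and executes.
```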

Resources

Data the model can read

A resource has a URI and a MIME type. Resources can be static (a file) or dynamic (current state of a database query). The model doesn't poll resources; the host exposes them as context.

simulation://current/topology
simulation://current/leader
simulation://current/metrics
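A resources/read exchange for the topology resource might look like the following. The JSON-RPC envelope and the method name follow the spec; the mimeType and the payload contents are assumptions for illustration.

```python
import json

# Hypothetical resources/read request/response pair for the topology resource.
# The envelope is JSON-RPC 2.0; the payload shape is invented.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "resources/read",
    "params": {"uri": "simulation://current/topology"},
}

response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "contents": [
            {
                "uri": "simulation://current/topology",
                "mimeType": "application/json",
                "text": json.dumps({"nodes": ["n1", "n2", "n3"],
                                    "edges": [["n1", "n2"], ["n2", "n3"]]}),
            }
        ]
    },
}

# The host injects the resource text into model context; the model never polls.
topology = json.loads(response["result"]["contents"][0]["text"])
```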

Prompts

Reusable prompt templates

Parameterized templates that the server exposes. Less commonly used, but useful for standardizing how a server wants to be invoked.

explain_scenario
summarize_metrics
suggest_fixes

02 — Architecture

The Architecture Shift This Enables

Before MCP, if you wanted an agent that could search the web, query a database, and write to a CRM, you wrote three separate integrations, embedded the tool definitions in your prompt, and hoped the model called them correctly. If you wanted a different application to have the same capabilities, you wrote the integrations again.

With MCP, the integrations live in MCP servers. Any MCP host can connect to them. The protocol is the reuse layer. Tool definitions stop being application configuration and become infrastructure.

Without MCP
App A defines tools inline
  → connects to API
  → handles auth

App B defines same tools inline
  → connects to same API
  → handles auth again

Duplication everywhere.
Auth complexity everywhere.

With MCP
MCP Server defines tools
  → handles auth
  → exposes protocol

App A connects as MCP client
App B connects as MCP client

Duplication gone.
Auth lives in one place.

The key insight

Once tool definitions are infrastructure rather than application configuration, the capability you built for a local Claude Desktop setup is the same server your production agent application uses. You test the integration once.

03 — SysSimulator

MCP and the Simulation Use Case

SysSimulator models distributed systems: nodes, message passing, network conditions, protocol behavior. An LLM connected to the simulator's MCP tools can run architecture experiments conversationally — "What happens to Raft consensus latency if I add a 100ms delay between the leader and one follower?" becomes a tool-call sequence the model can execute, observe, and reason about.

Tool / Resource                  Type      What it does
create_scenario                  Tool      Initialize a simulation with a topology and protocol config
step_simulation                  Tool      Advance the simulation by N events and return the event log
inject_fault                     Tool      Drop a node, partition a network segment, or introduce latency
get_node_state                   Tool      Read the current state of a specific node (leader, replica, etc.)
get_event_log                    Tool      Retrieve the sequence of events since the last call
simulation://current/topology    Resource  Live network graph — nodes, edges, connection state
simulation://current/leader      Resource  Current consensus leader (if applicable to the scenario)
simulation://current/metrics     Resource  Throughput, latency percentiles, message counts

This is the pattern MCP makes tractable. The simulation logic stays in the simulator. The reasoning stays in the model. The protocol connects them without either side needing to know much about the other.
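The Raft latency question above might unfold as a call sequence like the following sketch. The tool names come from the table; the argument names and the stub server are assumptions, there only to show the shape of the loop the host runs.

```python
# Hypothetical transcript of the tool calls a model might emit for the Raft
# latency experiment. Argument names are invented for illustration.
calls = [
    ("create_scenario", {"topology": "raft-5", "protocol": "raft"}),
    ("inject_fault", {"kind": "latency", "src": "leader", "dst": "follower-3", "ms": 100}),
    ("step_simulation", {"events": 500}),
    ("get_event_log", {}),
]

# A minimal dispatch loop: the host forwards each call to the MCP server and
# feeds the result back into the model's context. Here the server is a stub.
def fake_server(name, args):
    return {"tool": name, "ok": True}

results = [fake_server(name, args) for name, args in calls]
```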

04 — Transport

Transport Options and What to Use When

MCP supports two transports. The choice depends on whether your server and client share a machine, and whether the server needs to serve multiple clients simultaneously.

stdio

Client spawns server as subprocess

Communication over stdin/stdout. Simple, secure (no network exposure), easy to debug with logging.

Right for: local tooling, developer environments, any case where the server and client run on the same machine.

HTTP + SSE

Server runs as HTTP service

Client connects via Server-Sent Events for the server-to-client stream. Right for shared infrastructure, remote servers, or servers accessible to multiple clients.

Right for: production agents, browser-accessible state, multiple simultaneous clients.

SysSimulator uses the HTTP/SSE transport because the simulator state needs to be accessible to a browser-based UI. The Rust WASM core can't easily serve as a stdio MCP server from inside a browser tab. The HTTP server runs separately, accepts MCP connections, and forwards them to the simulation state.
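For the stdio side, the framing is simple enough to sketch: each JSON-RPC message travels as a single line of JSON over stdin/stdout, with no embedded newlines. A minimal framing helper, using the spec's ping utility method as the example message:

```python
import json

# stdio transport framing: one JSON-RPC message per newline-delimited line.
# Assumes serialized messages contain no embedded newlines, per the transport.
def frame(msg: dict) -> bytes:
    return (json.dumps(msg) + "\n").encode()

def unframe(line: bytes) -> dict:
    return json.loads(line)

ping = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
assert unframe(frame(ping)) == ping
```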

05 — Sampling

Sampling: The Part Most Implementations Skip

MCP includes a sampling capability that most people overlook. It allows the MCP server to ask the host to run inference — the server can request that the model generate text as part of a tool execution.

The practical use case: a server that needs to summarize or classify data before returning it as a tool result. Instead of the server calling an LLM API directly (requiring its own API key, its own model selection logic), it delegates to the host that's already running a model.

Simulation example

For SysSimulator, this enables a tool like explain_event_sequence that takes a raw event log and asks the model to narrate what happened in human terms, then returns that narration as the tool result. The summary stays in context. The raw event log — which might be thousands of events — doesn't consume tokens.
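A server executing such a tool would send the host a sampling request shaped roughly like this. The method name sampling/createMessage comes from the MCP spec; the event log, the prompt text, and the token limit are invented for illustration.

```python
# Hypothetical sampling request a server sends mid-tool-execution, asking the
# host's model to narrate a raw event log. Payload contents are invented.
event_log = (
    "[t=0] n1 becomes candidate\n"
    "[t=3] n1 elected leader\n"
    "[t=103] heartbeat to n3 delayed"
)

sampling_request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Narrate this event log in plain language:\n" + event_log,
                },
            }
        ],
        "maxTokens": 300,
    },
}
# The host runs inference and returns the narration; the server returns only
# that narration as the tool result, keeping the raw log out of context.
```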

06 — Scope

What MCP Doesn't Solve

MCP handles the interface between model and tools. It doesn't handle everything around it — those remain your application's responsibility.

Orchestration

How the model decides which tools to call, in what order, with what error handling — that's your application logic. MCP doesn't specify an agent loop.

State management

MCP doesn't carry application state between tool calls. If your tool sequence requires state across calls (step N depends on step N-1), either the server manages that state or you pass it in each call.
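One common server-side pattern: the creating tool returns an identifier, and later tools take it as an argument. A minimal sketch, with all names invented for illustration:

```python
import uuid

# Server-managed state keyed by scenario id. create_scenario hands the id back
# to the model; subsequent tool calls pass it in. All names are illustrative.
scenarios: dict = {}

def create_scenario(topology: str) -> str:
    sid = str(uuid.uuid4())
    scenarios[sid] = {"topology": topology, "step": 0}
    return sid

def step_simulation(sid: str, events: int) -> int:
    scenarios[sid]["step"] += events
    return scenarios[sid]["step"]

sid = create_scenario("raft-5")
step_simulation(sid, 100)
assert step_simulation(sid, 50) == 150
```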

Tool selection at scale

When a server exposes 50+ tools, the model's ability to choose correctly degrades. MCP doesn't solve tool retrieval — that's a separate problem usually addressed with tool embeddings and vector search.
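The retrieval idea can be sketched in miniature: embed the user's request and each tool description, rank tools by cosine similarity, and expose only the top k to the model. Real systems use learned embeddings; the 3-d vectors here are invented toy values.

```python
import math

# Toy tool retrieval: rank tools by cosine similarity to the request embedding
# and surface only the top k. Vectors are invented for illustration.
tool_vecs = {
    "create_scenario": [0.9, 0.1, 0.0],
    "inject_fault":    [0.2, 0.9, 0.1],
    "get_node_state":  [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query_vec, k=2):
    ranked = sorted(tool_vecs, key=lambda t: cosine(query_vec, tool_vecs[t]),
                    reverse=True)
    return ranked[:k]

# A "partition the network" request should rank fault injection highest.
assert top_k([0.1, 0.95, 0.0], k=1) == ["inject_fault"]
```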

Security boundaries

MCP standardizes the interface, not the permissions model. A tool that writes to a database is dangerous if called incorrectly. That's the application's problem, not the protocol's.

The MCP specification is at modelcontextprotocol.io. The reference server implementations in the official repo are the fastest way to understand how tool definitions translate to protocol messages. If you want to see the Rust and WASM architecture the simulation tools are built on, see /how-it-works.