What MCP Actually Specifies
MCP defines a client-server protocol over JSON-RPC 2.0. The MCP host (the application managing the LLM — Claude Desktop, a custom app, an IDE plugin) runs one or more MCP clients. Each client connects to an MCP server that exposes some set of capabilities.
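To make the wire format concrete, here is what a single JSON-RPC 2.0 exchange might look like. The `tools/call` method name and message envelope follow the MCP spec; the tool name and payload values are illustrative.

```python
import json

# A JSON-RPC 2.0 request as an MCP client would send it.
# "tools/call" is a real MCP method; the tool name is illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_codebase",
        "arguments": {"query": "raft leader election"},
    },
}

# The matching response carries the same id and a result payload.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "3 matches found"}],
    },
}

wire = json.dumps(request)  # what actually crosses the transport
```

Every interaction in the protocol (listing tools, reading resources, calling tools) is a request/response pair or notification in this shape.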
Those capabilities fall into three categories. The protocol separates two concerns that usually get conflated: what capabilities exist (the MCP server declares them) and when to use them (the model decides, with the host mediating).
Functions the model can call
A tool has a name, a description, and a JSON Schema-defined input spec. The model decides when to call a tool; the MCP server executes it.
search_codebase
create_github_issue
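A tool definition, as a server might advertise it in a `tools/list` response, is just those three pieces: name, description, and a JSON Schema input spec. A sketch (the `inputSchema` field name follows MCP convention; the tool and its fields are hypothetical):

```python
# One entry from a hypothetical tools/list result. The "inputSchema"
# field is standard JSON Schema describing the tool's arguments.
tool = {
    "name": "create_github_issue",
    "description": "Open an issue in the configured GitHub repository.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "body": {"type": "string"},
            "labels": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title"],
    },
}
```

The schema is what lets the host validate arguments before execution and lets the model see exactly what the tool accepts.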
Data the model can read
A resource has a URI and a MIME type. Resources can be static (a file) or dynamic (current state of a database query). The model doesn't poll resources; the host exposes them as context.
simulation://current/leader
simulation://current/metrics
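Resource descriptors are even simpler. A sketch of what a `resources/list` result might contain for the URIs above (the `mimeType` field name follows MCP convention; the display names are assumptions):

```python
# Resource descriptors from a hypothetical resources/list result.
# Each has a URI and a MIME type; the host decides when to read them.
resources = [
    {
        "uri": "simulation://current/leader",
        "name": "Current consensus leader",
        "mimeType": "application/json",
    },
    {
        "uri": "simulation://current/metrics",
        "name": "Live simulation metrics",
        "mimeType": "application/json",
    },
]

uris = [r["uri"] for r in resources]
```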
Reusable prompt templates
Parameterized templates that the server exposes. Less commonly used but useful for standardizing how a tool server wants to be addressed.
summarize_metrics
suggest_fixes
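A prompt template declares its parameters alongside its name, so the host can fill them in before the text reaches the model. A hypothetical descriptor for the first template above:

```python
# A hypothetical prompt template as a server might advertise it in
# a prompts/list result. The argument list is an assumption.
prompt = {
    "name": "summarize_metrics",
    "description": "Summarize current simulation metrics for a human reader.",
    "arguments": [
        {
            "name": "window",
            "description": "Time window to summarize, e.g. '5m'",
            "required": False,
        },
    ],
}
```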
The Architecture Shift This Enables
Before MCP, if you wanted an agent that could search the web, query a database, and write to a CRM, you wrote three separate integrations, embedded the tool definitions in your prompt, and hoped the model called them correctly. If you wanted a different application to have the same capabilities, you wrote the integrations again.
With MCP, the integrations live in MCP servers. Any MCP host can connect to them. The protocol is the reuse layer. Tool definitions stop being application configuration and become infrastructure.
App A defines tools inline → connects to API → handles auth
App B defines same tools inline → connects to same API → handles auth again
Duplication everywhere. Auth complexity everywhere.

MCP Server defines tools → handles auth → exposes protocol
App A connects as MCP client
App B connects as MCP client
Duplication gone. Auth lives in one place.
When tool definitions stop being application configuration and become infrastructure, the capability you built for your local Claude Desktop setup is the same server your production agent application uses. You test the integration once.
MCP and the Simulation Use Case
SysSimulator models distributed systems: nodes, message passing, network conditions, protocol behavior. An LLM connected to the simulator's MCP tools can run architecture experiments conversationally — "What happens to Raft consensus latency if I add a 100ms delay between the leader and one follower?" becomes a tool-call sequence the model can execute, observe, and reason about.
| Tool / Resource | Type | What it does |
|---|---|---|
| create_scenario | Tool | Initialize a simulation with a topology and protocol config |
| step_simulation | Tool | Advance the simulation by N events and return the event log |
| inject_fault | Tool | Drop a node, partition a network segment, or introduce latency |
| get_node_state | Tool | Read the current state of a specific node (leader, replica, etc.) |
| get_event_log | Tool | Retrieve the sequence of events since the last call |
| simulation://current/topology | Resource | Live network graph — nodes, edges, connection state |
| simulation://current/leader | Resource | Current consensus leader (if applicable to the scenario) |
| simulation://current/metrics | Resource | Throughput, latency percentiles, message counts |
This is the pattern MCP makes tractable. The simulation logic stays in the simulator. The reasoning stays in the model. The protocol connects them without either side needing to know much about the other.
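The Raft-latency question from above decomposes into a tool-call sequence roughly like the following. This is a stubbed sketch: the tool names come from the table, but the call signatures and return shapes are assumptions, with each function standing in for one `tools/call` round-trip.

```python
# Stubbed MCP tool calls: each function stands in for one tools/call
# round-trip to the simulator server. Return values are illustrative.
def create_scenario(topology, protocol):
    return {"scenario_id": "s1", "nodes": topology["nodes"]}

def inject_fault(scenario_id, kind, **params):
    return {"applied": True, "kind": kind, **params}

def step_simulation(scenario_id, events):
    return {"events_run": events, "log": ["election", "append_entries"]}

def get_node_state(scenario_id, node):
    return {"node": node, "role": "leader", "commit_latency_ms": 142}

# "What happens to Raft latency with a 100ms leader→follower delay?"
scenario = create_scenario({"nodes": ["n1", "n2", "n3"]}, "raft")
inject_fault(scenario["scenario_id"], "latency",
             src="n1", dst="n2", delay_ms=100)
step_simulation(scenario["scenario_id"], events=500)
state = get_node_state(scenario["scenario_id"], "n1")
```

The model issues this sequence itself, observing each result before deciding the next call; the sketch just makes the shape of the conversation visible.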
Transport Options and What to Use When
MCP supports two transports. The choice depends on whether your server and client share a machine, and whether the server needs to serve multiple clients simultaneously.
Client spawns server as subprocess
Communication over stdin/stdout. Simple, secure (no network exposure), easy to debug with logging.
Right for: local tooling, developer environments, any case where the server and client run on the same machine.
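The stdio handshake can be sketched in a few lines. The `initialize` method and newline-delimited framing follow the MCP spec; the server command, client name, and protocol version string are assumptions for illustration.

```python
import json
import subprocess

# Build the client's first message. "initialize" is a real MCP method;
# the protocolVersion and clientInfo values here are illustrative.
def build_initialize(request_id=0):
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "demo-client", "version": "0.1"},
        },
    }

# Sketch of the transport itself, assuming a hypothetical server
# binary: spawn it, then exchange newline-delimited JSON-RPC
# messages over its stdin/stdout pipes.
def spawn_and_initialize(command=("my-mcp-server",)):
    proc = subprocess.Popen(
        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    proc.stdin.write(json.dumps(build_initialize()) + "\n")
    proc.stdin.flush()
    # First line back is the server's initialize response.
    return proc, json.loads(proc.stdout.readline())
```

Because everything flows over the subprocess pipes, debugging is as simple as logging each line in both directions.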
Server runs as HTTP service
Communication over HTTP, with Server-Sent Events carrying the server-to-client stream.
Right for: production agents, shared infrastructure, remote servers, browser-accessible state, and multiple simultaneous clients.
SysSimulator uses the HTTP/SSE transport because the simulator state needs to be accessible to a browser-based UI. The Rust WASM core can't easily serve as a stdio MCP server from inside a browser tab. The HTTP server runs separately, accepts MCP connections, and forwards them to the simulation state.
Sampling: The Part Most Implementations Skip
MCP includes a sampling capability that most people overlook. It allows the MCP server to ask the host to run inference — the server can request that the model generate text as part of a tool execution.
The practical use case: a server that needs to summarize or classify data before returning it as a tool result. Instead of the server calling an LLM API directly (requiring its own API key, its own model selection logic), it delegates to the host that's already running a model.
For SysSimulator, this enables a tool like explain_event_sequence that takes a raw event log and asks the model to narrate what happened in human terms, then returns that narration as the tool result. The summary stays in context. The raw event log — which might be thousands of events — doesn't consume tokens.
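The sampling request the server would send to the host might look like this. `sampling/createMessage` is the method name from the MCP spec; the message shape is close to the spec's, but the prompt text and event log are illustrative.

```python
# A sampling request as the server sends it to the host: the server
# asks the host's model to do work on the server's behalf.
def build_sampling_request(event_log, request_id):
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "sampling/createMessage",
        "params": {
            "messages": [{
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Narrate these simulation events in plain "
                            "English:\n" + "\n".join(event_log),
                },
            }],
            "maxTokens": 400,
        },
    }

req = build_sampling_request(
    ["n2 timeout", "n3 votes for n2", "n2 becomes leader"], 7
)
```

Note the inversion: this is the one place in the protocol where the server is the requester and the host's model does the work.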
What MCP Doesn't Solve
MCP handles the interface between model and tools. It doesn't handle everything around it — those remain your application's responsibility.
Orchestration
How the model decides which tools to call, in what order, with what error handling — that's your application logic. MCP doesn't specify an agent loop.
State management
Each MCP connection is stateless at the protocol level. If your tool sequence requires state across calls (step N depends on step N-1), the server manages that state, or you pass it in each call.
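One common pattern for the server-managed option is to key state by an id the server mints and the client echoes back. A minimal sketch (the class and field names are hypothetical, not part of MCP):

```python
import uuid

# Server-side session state: the protocol is stateless, so the server
# keys simulation state by a scenario id it hands back to the client
# and expects in every subsequent tool call.
class ScenarioStore:
    def __init__(self):
        self._scenarios = {}

    def create(self, config):
        scenario_id = str(uuid.uuid4())
        self._scenarios[scenario_id] = {"config": config, "clock": 0}
        return scenario_id

    def step(self, scenario_id, events):
        state = self._scenarios[scenario_id]  # KeyError → unknown id
        state["clock"] += events
        return state["clock"]

store = ScenarioStore()
sid = store.create({"protocol": "raft"})
store.step(sid, 100)
```

The alternative, passing full state in every call, keeps the server stateless but bloats each request; for something like a simulation with a large event history, server-held state is usually the only practical choice.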
Tool selection at scale
When a server exposes 50+ tools, the model's ability to choose correctly degrades. MCP doesn't solve tool retrieval — that's a separate problem usually addressed with tool embeddings and vector search.
Security boundaries
MCP standardizes the interface, not the permissions model. A tool that writes to a database is dangerous if called incorrectly. That's the application's problem, not the protocol's.
The MCP specification is at modelcontextprotocol.io. The reference server implementations in the official repo are the fastest way to understand how tool definitions translate to protocol messages. If you want to see the Rust and WASM architecture the simulation tools are built on, see /how-it-works.