Technical Deep Dive

The Rust, WASM, and Flutter stack

Running a distributed-systems simulation in a browser sounds like it should be slow. JavaScript GC fires mid-run. Event loops block. Tabs freeze. Here's exactly how SysSimulator avoids all of that — and why those specific technology choices were made.

~120KB — WASM binary (gzipped)
60fps — render loop
200ms — wall time for a 60s scenario
0 — GC pauses in simulation
01 — Engine

The Simulation Core: Rust and a Discrete Event Scheduler

The heart of the system is a discrete event simulation (DES) engine written in pure Rust. No async runtime. No threads (WASM is single-threaded in browsers). Just a priority queue of timestamped events and a tight loop that processes them.

Rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

pub struct Simulator {
    event_queue: BinaryHeap<Reverse<SimEvent>>,
    clock: SimTime,
    nodes: HashMap<NodeId, Node>,
    network: NetworkTopology,
}

impl Simulator {
    pub fn step(&mut self) -> Option<SimEvent> {
        let event = self.event_queue.pop()?.0;
        self.clock = event.time;
        self.dispatch(event.clone());
        Some(event)
    }
}

SimTime is a u64 representing nanoseconds of simulated time — not wall time. The simulation can run a full 60-second scenario in 200 milliseconds of real time, or deliberately slow down to let you watch a Raft leader election happen in human-perceivable steps.

The BinaryHeap<Reverse<SimEvent>> pattern is the standard DES trick — Reverse flips the max-heap into a min-heap so the earliest event always pops first. Events carry a type, a target node ID, a payload, and a timestamp.

Why not async Rust?

Async Rust compiled to WASM introduces runtimes like Tokio that add ~150KB to the binary and — more importantly — introduce their own scheduling logic that fights with the deterministic event ordering required for reproducible simulations. When you replay a scenario with the same seed, you need identical results. Async executors make that guarantee hard to keep.

02 — Reproducibility

Determinism as a Feature

Every random value in the simulator comes from a seeded PRNG — specifically SmallRng from the rand crate, seeded at scenario start. Network latency jitter, packet loss events, node failure timing — all seeded.

This means a scenario is fully replayable. You can serialize the seed and the event log, send it to a colleague, and they run the exact same simulation. The "share scenario" feature works exactly this way: it's a seed and a config JSON, not a recording of every event.

It also means the test suite can run the simulator and assert on specific event sequences. If a Raft implementation regresses, the test fails deterministically — not flakily.

What determinism enables
Share scenarios
A seed + config JSON is all that's needed to reproduce any simulation exactly — no recording of event streams required
Regression tests
The full DES engine is tested with cargo test — assertions run against exact event sequences, not probabilistic outcomes
Debug replay
A bug triggered under specific traffic conditions can be reproduced exactly by re-running with the same seed
03 — FFI

Compiling to WASM: The wasm-bindgen Bridge

The Rust core compiles to a .wasm binary via wasm-pack. The interface surface exposed to the outside world is defined with #[wasm_bindgen] macros:

Rust — WASM interface
#[wasm_bindgen]
pub struct SimulatorHandle {
    inner: Rc<RefCell<Simulator>>,
}

#[wasm_bindgen]
impl SimulatorHandle {
    #[wasm_bindgen(constructor)]
    pub fn new(config_json: &str) -> Result<SimulatorHandle, JsValue> {
        let config: ScenarioConfig = serde_json::from_str(config_json)
            .map_err(|e| JsValue::from_str(&e.to_string()))?;
        Ok(SimulatorHandle {
            inner: Rc::new(RefCell::new(Simulator::from_config(config))),
        })
    }

    pub fn step(&mut self) -> JsValue {
        match self.inner.borrow_mut().step() {
            Some(event) => serde_wasm_bindgen::to_value(&event).unwrap(),
            None => JsValue::NULL,
        }
    }
}

The Rc<RefCell<>> wrapping is there because JavaScript can hold any number of aliases to the same handle, so Rust's compile-time borrow checking can't police the FFI boundary. Rc gives shared ownership; RefCell moves the mutability check to runtime, so you take the borrow at call time. It's not pretty. It's the standard pattern.

Serialization crosses the boundary as JSON strings (for config) or via serde_wasm_bindgen (for event data). The latter avoids an intermediate JSON parse/stringify round-trip for the hot path — the step() call that fires potentially thousands of times per second.

Binary size

The compiled .wasm binary is ~380KB uncompressed, ~120KB gzipped. That's with opt-level = "z" and lto = true in the release profile. The wasm-opt post-processing step from Binaryen shaves another ~15KB. For a full simulation engine with several implemented protocols, that's acceptable.
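Those numbers come from the release profile settings already named above. As a Cargo.toml fragment (only the two stated settings; anything beyond them would be an assumption):

```toml
[profile.release]
opt-level = "z"   # optimize for binary size rather than speed
lto = true        # link-time optimization across the whole crate graph
```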

04 — Rendering

The Flutter Shell: Why Not Just a Web App?

The question that comes up every time: why Flutter? The honest answer: the node graph canvas. Rendering 500+ nodes with animated edges, real-time packet-in-flight visualization, and smooth 60fps interaction is genuinely difficult in the browser DOM. SVG doesn't scale. Canvas 2D requires careful manual dirty-region tracking. WebGL is correct but means writing a mini rendering engine.

Flutter's CustomPainter with Canvas gives an immediate-mode drawing API backed by Skia (or Impeller on newer targets), so the node graph renderer is a single CustomPainter subclass. The Dart side reaches the WASM module through a small bridge:

Dart — WASM bridge
class SimulatorBridge {
  late js.JSObject _handle;

  Future<void> initialize(ScenarioConfig config) async {
    final configJson = jsonEncode(config.toJson());
    _handle = SimulatorWasm.create(configJson.toJS);
  }

  SimEvent? step() {
    final result = _handle.callMethod('step'.toJS);
    if (result.isNull) return null;
    return SimEvent.fromJson(result.dartify() as Map);
  }
}

This is deliberately thin. The bridge does type conversion and nothing else. Business logic stays in Rust. Rendering logic stays in Flutter. The bridge is not the place for either.

The node graph renderer transforms world coordinates to screen coordinates using a Matrix4 maintained by the pan/zoom gesture handler, draws edges first (z-order), then nodes, then in-flight packet animations — and only repaints when the simulation emits a new event or the user interacts.

05 — Performance

The Render Loop

Flutter's animation system drives the simulation step rate. A Ticker fires every frame (targeting 60fps). Each tick, the Flutter layer calls step() on the Rust core some number of times — controlled by a "simulation speed" multiplier.

Simulation speed multipliers
1× speed
One Rust step() call per frame. Real-time — watch Raft elections unfold at human-perceivable pace.
100× speed
100 steps per frame — a second of simulated time passes in a fraction of a real second. Good for long-running capacity scenarios.
10,000× speed
Serialization overhead at the bridge becomes the bottleneck. Fast enough for most use cases; not for parameter sweeps.

Events returned by step() flow into a stream that the UI layer subscribes to. State changes — node status, message queues, leader election outcomes — are applied to Flutter ChangeNotifier objects. Widgets rebuild only when their specific state changes. Even at 100× speed, the Dart side isn't doing work proportional to the number of simulation events — it's batching state deltas and applying them once per frame.

06 — Architecture

What This Architecture Gets You

Reproducibility

Deterministic by construction

The Rust core is deterministic and has no externally visible side effects. Test it with cargo test entirely without a browser — assertions run on exact event sequences.

GC isolation

Simulation state lives outside Dart heap

Simulation state lives in WASM linear memory. A Dart GC pause can stall rendering for a frame, but it cannot corrupt simulation state or reorder events: the two runtimes don't share memory.

Concurrency model

No real concurrency in the core

Events are processed one at a time in timestamp order. The "concurrency" being simulated is modeled, not real — which is the only way to get observable, controllable behavior.

Privacy

No data leaves your machine

The entire simulation runs in WASM memory in your browser tab. You can model topologies containing real service names and proprietary configurations without any data reaching an external server.

Known tradeoff

The FFI boundary has a cost. Crossing from Dart into WASM and back is not free. At very high simulation speeds (10,000×), the serialization overhead at the bridge becomes the bottleneck, not the simulation itself. For most use cases — learning, architecture validation, interview prep — it doesn't matter. For genuinely large-scale parameter sweeps, a native Rust binary would be faster.

Specific dependencies worth naming

  • wasm-bindgen 0.2.x: the FFI glue. Macro expansion is verbose but the output is correct.
  • serde + serde_wasm_bindgen: serialization across the boundary. serde_wasm_bindgen avoids the JSON string intermediary for hot-path calls.
  • rand (SmallRng): fast, seedable, portable PRNG. Not cryptographic, which is fine here.
  • wasm-pack: build toolchain that handles wasm-opt post-processing and generates the JS/TS glue code.
  • priority-queue: used where event priorities need to change (Dijkstra-style). The standard BinaryHeap doesn't support priority updates.
  • flutter_riverpod: state management on the Flutter side — ChangeNotifier providers per feature slice.