Cloud Infrastructure

Wasm vs Docker Startup Latency Benchmark Guide [2026]

Dillip Chowdary
Tech Entrepreneur & Innovator · May 05, 2026 · 9 min read

Bottom Line

If you want a fair startup-latency comparison, benchmark the same handler shape on the same host and measure time-to-first-successful-response. In that setup, Wasm often trims cold-start overhead, but Docker still wins on ecosystem maturity and deployment reach.

Key Takeaways

  • Measure startup as launch-to-first-200 response, not process spawn alone.
  • Use the same route behavior for both artifacts to avoid fake wins.
  • Hyperfine 1.19.0 gives repeatable multi-run summaries and JSON export.
  • Wasmtime 43.0.1 can serve WASI HTTP components directly with serve.
  • Local results isolate runtime overhead; cloud scheduler latency is a separate layer.

Most Wasm versus Docker debates mix packaging, runtime, framework, and cloud-provider effects into one noisy number. A better approach is to isolate one variable: startup path length for the same tiny HTTP function on the same machine. In this tutorial, you will build two comparable baselines, measure cold starts as time-to-first-successful-response, and leave with a benchmark harness you can reuse before making a serverless platform decision.

Dimension               | Wasm                                               | Docker                                               | Edge
Artifact startup path   | Runtime loads a WebAssembly component directly     | Engine creates a container, networking, and process | Wasm
Operational portability | Improving fast, but platform support still varies  | Near-universal packaging and hosting support        | Docker
Cold-start focus        | Often lower local overhead for tiny handlers       | Usually higher due to container lifecycle work      | Wasm
Tooling maturity        | Younger ecosystem around components and hosting    | Deep CI/CD, registry, scan, and debugging support   | Docker
Benchmark fairness risk | Easy to over-optimize with unrealistic toy modules | Easy to inflate overhead with large base images     | Tie

Set up the benchmark

Prerequisites

  • Docker Engine or Docker Desktop installed and working locally.
  • Rust toolchain installed via rustup.
  • Wasmtime 43.0.1 or newer. The official CLI docs show wasmtime serve for WASI HTTP components.
  • cargo-component installed so you can build the Wasm sample.
  • hyperfine 1.19.0 for repeatable multi-run timing and JSON export.
  • curl and bash available on your machine.

Bottom Line

Benchmark startup as launch-to-first-HTTP-200 on the same host. That strips out most platform noise and exposes the real difference between a Wasm runtime path and a container runtime path.
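
The core of that measurement is nothing more than a poll-until-200 loop around whatever server you just launched. A minimal sketch of the idea, using the same port the Wasm harness later in this guide uses; the full harness scripts below wrap this loop with process startup and cleanup:

# keep polling until the first successful HTTP 200 response, then stop
until curl -fsS http://127.0.0.1:18081/ >/dev/null 2>&1; do
  sleep 0.01
done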

Install the toolchain

The commands below follow each project's official documentation: Wasmtime installs via its shell script, Docker publishes container ports with -p, and Hyperfine supports --warmup, --runs, and JSON export.

# Wasmtime CLI (provides wasmtime serve)
curl https://wasmtime.dev/install.sh -sSf | bash
# Component tooling for building the Rust Wasm sample
cargo install cargo-component --locked
cargo install --locked wkg
# Benchmark runner
sudo apt install hyperfine
# Confirm the exact versions you are benchmarking with
wasmtime -V
hyperfine --version
docker --version
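
If you are not on a Debian-based system, Hyperfine is also published as a Rust crate and a Homebrew formula, so either of the following works in place of the apt line above:

# alternative installs when apt is unavailable
cargo install hyperfine
# or, with Homebrew:
brew install hyperfine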

This tutorial intentionally measures local runtime overhead, not end-to-end cloud cold starts. Provider scheduler queues, registry pulls, and multi-tenant throttling can dominate in production, so keep those as a separate benchmark later.

Build matching baselines

The easiest way to stay honest is to keep both handlers tiny and behaviorally similar. We will use the Bytecode Alliance WASI HTTP sample for Wasm and a minimal Rust HTTP server in a container for Docker.

1. Build the Wasm baseline

git clone https://github.com/bytecodealliance/sample-wasi-http-rust.git
cd sample-wasi-http-rust
cargo component build --release
WASM_FILE=$(find target -name '*.wasm' | head -n 1)
echo "$WASM_FILE"

The sample exposes a root route and a /wait route, which is useful later if you want to benchmark not just readiness but simple request handling under cold start.
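
Before wiring up the timing harness, it is worth a quick smoke test so any build problem surfaces now rather than inside a benchmark run. A rough check, assuming the component built above and that port 18081 is free:

# serve the component briefly in the background, probe it once, then stop it
wasmtime serve --addr=127.0.0.1:18081 "$WASM_FILE" &
SERVE_PID=$!
sleep 1
curl -fsS http://127.0.0.1:18081/   # should return the sample's hello world body
kill "$SERVE_PID"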

2. Build the Docker baseline

Create a sibling directory named docker-fn and add a tiny server that returns the same hello world body on /. It needs three files: src/main.rs, Cargo.toml, and a Dockerfile.

mkdir ../docker-fn
cd ../docker-fn
mkdir src

src/main.rs:

use std::{
    io::{Read, Write},
    net::TcpListener,
    thread,
    time::Duration,
};

// A deliberately minimal blocking server: one connection at a time, no framework,
// so the container baseline stays as lean as the Wasm sample.
fn main() {
    let listener = TcpListener::bind("0.0.0.0:3000").unwrap();

    for stream in listener.incoming() {
        let mut stream = stream.unwrap();
        let mut buf = [0_u8; 1024];
        let n = stream.read(&mut buf).unwrap_or(0);
        let req = String::from_utf8_lossy(&buf[..n]);

        let body = if req.starts_with("GET /wait ") {
            thread::sleep(Duration::from_secs(1));
            "wait done\n"
        } else {
            "hello world\n"
        };

        let resp = format!(
            "HTTP/1.1 200 OK\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
            body.len(),
            body
        );

        let _ = stream.write_all(resp.as_bytes());
    }
}

Cargo.toml:

[package]
name = "docker-fn"
version = "0.1.0"
edition = "2021"

Dockerfile:

FROM rust:slim AS build
WORKDIR /app
COPY Cargo.toml Cargo.toml
COPY src src
RUN cargo build --release

FROM debian:bookworm-slim
WORKDIR /app
COPY --from=build /app/target/release/docker-fn /app/docker-fn
EXPOSE 3000
CMD ["/app/docker-fn"]

Build the image:

docker build -t docker-fn:latest .

Watch out: Do not compare a tiny Wasm artifact against a fat container image full of language toolchains and debug packages. That measures packaging sloppiness, not runtime design.
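
A quick way to catch that mistake before timing anything is to put the two artifact sizes side by side. The exact numbers do not matter much, but an order-of-magnitude gap driven by a bloated base image means the comparison is already unfair:

# rough size comparison of the Wasm component and the container image
ls -lh "$(find ../sample-wasi-http-rust/target -name '*.wasm' | head -n 1)"
docker image ls docker-fn:latest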

Measure cold starts

Now create two small harness scripts. Each script starts the runtime, polls until the first successful response, and then exits. Hyperfine times the whole script, which gives you a clean startup-to-ready metric.

3. Create the Wasm harness

Save the following as bench-wasm.sh:

#!/usr/bin/env bash
set -euo pipefail
PORT="${PORT:-18081}"
LOG=/tmp/wasm-bench.log
WASM_FILE=$(find ../sample-wasi-http-rust/target -name '*.wasm' | head -n 1)

wasmtime serve --addr="127.0.0.1:${PORT}" "$WASM_FILE" >"$LOG" 2>&1 &
PID=$!

cleanup() {
  kill "$PID" 2>/dev/null || true
  wait "$PID" 2>/dev/null || true
}
trap cleanup EXIT

for _ in $(seq 1 200); do
  if curl -fsS "http://127.0.0.1:${PORT}/" >/dev/null 2>&1; then
    exit 0
  fi
  sleep 0.01
done

echo "wasm server did not become ready" 1>&2
exit 1

4. Create the Docker harness

Save the following as bench-docker.sh:

#!/usr/bin/env bash
set -euo pipefail
PORT="${PORT:-18082}"
NAME=docker-fn-bench

docker run --name "$NAME" --rm -d -p "${PORT}:3000" docker-fn:latest >/dev/null

cleanup() {
  docker stop "$NAME" >/dev/null 2>&1 || true
}
trap cleanup EXIT

for _ in $(seq 1 200); do
  if curl -fsS "http://127.0.0.1:${PORT}/" >/dev/null 2>&1; then
    exit 0
  fi
  sleep 0.01
done

echo "docker server did not become ready" 1>&2
exit 1

Make both scripts executable:

chmod +x bench-wasm.sh bench-docker.sh

5. Run the benchmark

hyperfine \
  --warmup 3 \
  --runs 20 \
  --export-json startup-results.json \
  './bench-wasm.sh' \
  './bench-docker.sh'
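
If you plan to paste the comparison into a pull request or a team doc, Hyperfine can emit a Markdown table from the same run; startup-results.md below is just an example output name:

hyperfine \
  --warmup 3 \
  --runs 20 \
  --export-json startup-results.json \
  --export-markdown startup-results.md \
  './bench-wasm.sh' \
  './bench-docker.sh'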

If you want cleaner shell snippets before sharing them with your team, run them through TechBytes’ Code Formatter. It is a small step, but benchmark harnesses tend to grow messy once you add logging, retry logic, and provider-specific flags.

Pro tip: Keep the benchmark host quiet. Background package updates, browser tabs, and antivirus scans can add more jitter than the runtime difference you are trying to measure.
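
One rough way to check that the host really is quiet, assuming a Linux machine with the usual procps tools installed, is to glance at load and CPU activity right before a run:

# load averages near zero and a mostly idle CPU column are what you want to see
uptime
vmstat 1 5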

Verify and interpret

Verification checklist

  • curl http://127.0.0.1:18081/ returns hello world when the Wasm server is up.
  • curl http://127.0.0.1:18082/ returns hello world when the container is up.
  • startup-results.json exists and contains two benchmark entries.
  • Hyperfine prints a summary comparing both commands by relative speed.

Expected output shape

Benchmark 1: ./bench-wasm.sh
  Time (mean ± σ):   lower-ms-range

Benchmark 2: ./bench-docker.sh
  Time (mean ± σ):   higher-ms-range

Summary
  './bench-wasm.sh' ran faster than './bench-docker.sh'

Do not fixate on one run. Look at mean, standard deviation, and the relative comparison. If Wasm wins but the variance overlaps heavily, your environment is noisy and the result is weaker than it looks.
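
The JSON export is the easiest place to pull those numbers from. A small sketch, assuming jq is installed, using the mean and stddev fields Hyperfine writes for each command:

# print mean, standard deviation, and a rough mean +/- stddev range per command
jq -r '.results[] | "\(.command): mean=\(.mean)s stddev=\(.stddev)s range=\(.mean - .stddev)s..\(.mean + .stddev)s"' startup-results.json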

How to read the result

  • A lower mean suggests a shorter cold-start path on your machine.
  • A lower standard deviation suggests the runtime is more predictable under repeat launches.
  • A small delta means startup overhead is probably not your main architecture driver.
  • A large delta on local tests does not automatically predict the same win on a managed serverless platform.

When to choose each

Startup latency is important, but it is not the whole decision. Use the result as one input, then weigh platform fit.

Choose Wasm when:

  • You care about shaving local cold-start overhead for small, short-lived functions.
  • You want strong sandboxing with small deployable artifacts.
  • Your workload fits a narrow interface like HTTP request in, HTTP response out.
  • You control the runtime or are targeting a platform with first-class WASI support.

Choose Docker when:

  • You need broad hosting compatibility across CI, staging, and production.
  • You rely on mature container tooling for scanning, debugging, and observability.
  • Your function depends on native libraries, custom OS packages, or existing base-image workflows.
  • You are optimizing team throughput more than raw startup time.

Troubleshooting and what's next

Top 3 troubleshooting fixes

  1. Port already in use: change PORT in each script or stop the previous benchmark process. Reused ports are the fastest way to get false failures.
  2. Wasm component will not start: confirm you built a WASI HTTP component and not a plain Wasm module. The official Wasmtime CLI documents serve for the wasi:http/proxy world.
  3. Docker result looks wildly slower on first run only: make sure the image is already built locally. An image build or pull is not part of startup latency and should not be timed with the harness; a pre-flight check is sketched below.
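
A pre-flight check keeps an accidental rebuild or pull out of the timed path; the tag matches the image built earlier in this guide:

# confirm the image already exists locally before any timed run
docker image inspect docker-fn:latest >/dev/null 2>&1 \
  && echo "docker-fn:latest is present" \
  || echo "docker-fn:latest is missing: build it before benchmarking"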

What's next

  • Add a second benchmark for the /wait route to separate startup from per-request behavior (a rough sketch follows this list).
  • Repeat the test with a larger container image to quantify image bloat penalties.
  • Run the same harness inside your CI runners to see whether developer laptops are misleading you.
  • After the local benchmark, move to a managed platform benchmark and measure scheduler latency separately.
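
For the /wait follow-up in the first item above, one low-effort approach is to copy both harnesses and change only the probed path; the bench-*-wait.sh names and the sed-based copy are just one way to do it:

# derive /wait variants of both harnesses and time them the same way
sed 's|${PORT}/|${PORT}/wait|' bench-wasm.sh > bench-wasm-wait.sh
sed 's|${PORT}/|${PORT}/wait|' bench-docker.sh > bench-docker-wait.sh
chmod +x bench-wasm-wait.sh bench-docker-wait.sh
hyperfine --warmup 3 --runs 10 './bench-wasm-wait.sh' './bench-docker-wait.sh'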

Frequently Asked Questions

How do I benchmark Wasm vs Docker cold starts fairly?
Use the same host, the same route behavior, and the same readiness test for both artifacts. Measure from runtime launch to the first successful 200 OK response so you capture real startup work instead of just process creation.
Is a local Wasm vs Docker benchmark the same as a cloud serverless benchmark?
No. A local harness isolates packaging and runtime overhead, which is useful, but managed platforms add scheduler delay, image pull behavior, network setup, and multi-tenant noise. Treat local numbers as the runtime baseline, not the full production story.
Why use Hyperfine instead of a hand-rolled loop with time?
hyperfine handles repeated runs, warmups, statistical summaries, and JSON export in one tool. That reduces measurement mistakes and makes it easier to compare results across machines and CI environments.
Does Wasm always start faster than Docker for serverless functions?
Not always. Tiny Wasm artifacts often have a shorter local startup path, but the result depends on the runtime, the host, framework weight, and whether the container image is already present. You still need to measure your actual workload.
