Wasm vs Docker Startup Latency Benchmark Guide [2026]
Bottom Line
If you want a fair startup-latency comparison, benchmark the same handler shape on the same host and measure time-to-first-successful-response. In that setup, Wasm often trims cold-start overhead, but Docker still wins on ecosystem maturity and deployment reach.
Key Takeaways
- Measure startup as launch-to-first-200 response, not process spawn alone.
- Use the same route behavior for both artifacts to avoid fake wins.
- Hyperfine 1.19.0 gives repeatable multi-run summaries and JSON export.
- Wasmtime 43.0.1 can serve WASI HTTP components directly with wasmtime serve.
- Local results isolate runtime overhead; cloud scheduler latency is a separate layer.
Most Wasm versus Docker debates mix packaging, runtime, framework, and cloud-provider effects into one noisy number. A better approach is to isolate one variable: startup path length for the same tiny HTTP function on the same machine. In this tutorial, you will build two comparable baselines, measure cold starts as time-to-first-successful-response, and leave with a benchmark harness you can reuse before making a serverless platform decision.
| Dimension | Wasm | Docker | Edge |
|---|---|---|---|
| Artifact startup path | Runtime loads a WebAssembly component directly | Engine creates a container, networking, and process | Wasm |
| Operational portability | Improving fast, but platform support still varies | Near-universal packaging and hosting support | Docker |
| Cold-start focus | Often lower local overhead for tiny handlers | Usually higher due to container lifecycle work | Wasm |
| Tooling maturity | Younger ecosystem around components and hosting | Deep CI/CD, registry, scan, and debugging support | Docker |
| Benchmark fairness risk | Easy to over-optimize with unrealistic toy modules | Easy to inflate overhead with large base images | Tie |
Set up the benchmark
Prerequisites
- Docker Engine or Docker Desktop installed and working locally.
- Rust toolchain installed via rustup.
- Wasmtime 43.0.1 or newer. The official CLI docs show wasmtime serve for WASI HTTP components.
- cargo-component installed so you can build the Wasm sample.
- hyperfine 1.19.0 for repeatable multi-run timing and JSON export.
- curl and bash available on your machine.
Bottom Line
Benchmark startup as launch-to-first-HTTP-200 on the same host. That strips out most platform noise and exposes the real difference between a Wasm runtime path and a container runtime path.
Install the toolchain
The commands below come from official project documentation: Wasmtime installs via its shell script, Docker publishes ports with -p, and Hyperfine supports --warmup, --runs, and JSON export.
```bash
curl https://wasmtime.dev/install.sh -sSf | bash
cargo install cargo-component --locked
cargo install --locked wkg
sudo apt install hyperfine
wasmtime -V
hyperfine --version
docker --version
```

This tutorial intentionally measures local runtime overhead, not end-to-end cloud cold starts. Provider scheduler queues, registry pulls, and multi-tenant throttling can dominate in production, so keep those as a separate benchmark later.
Build matching baselines
The easiest way to stay honest is to keep both handlers tiny and behaviorally similar. We will use the Bytecode Alliance WASI HTTP sample for Wasm and a minimal Rust HTTP server in a container for Docker.
1. Build the Wasm baseline
```bash
git clone https://github.com/bytecodealliance/sample-wasi-http-rust.git
cd sample-wasi-http-rust
cargo component build --release
WASM_FILE=$(find target -name '*.wasm' | head -n 1)
echo "$WASM_FILE"
```

The sample exposes a root route and a /wait route, which is useful later if you want to benchmark not just readiness but simple request handling under cold start.
2. Build the Docker baseline
Create a sibling directory named docker-fn and add a tiny server that returns the same hello world body on /.
```bash
mkdir ../docker-fn
cd ../docker-fn
mkdir src
```

Save the following as src/main.rs:

```rust
use std::{
    io::{Read, Write},
    net::TcpListener,
    thread,
    time::Duration,
};

fn main() {
    let listener = TcpListener::bind("0.0.0.0:3000").unwrap();
    for stream in listener.incoming() {
        let mut stream = stream.unwrap();
        // Read just enough of the request to inspect the request line.
        let mut buf = [0_u8; 1024];
        let n = stream.read(&mut buf).unwrap_or(0);
        let req = String::from_utf8_lossy(&buf[..n]);
        let body = if req.starts_with("GET /wait ") {
            // Simulate slow request handling on /wait with a 1-second delay.
            thread::sleep(Duration::from_secs(1));
            "wait done\n"
        } else {
            "hello world\n"
        };
        let resp = format!(
            "HTTP/1.1 200 OK\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
            body.len(),
            body
        );
        let _ = stream.write_all(resp.as_bytes());
    }
}
```

Add a matching Cargo.toml in the project root:

```toml
[package]
name = "docker-fn"
version = "0.1.0"
edition = "2021"
```

Then a two-stage Dockerfile that builds the binary and ships it on a slim base:

```dockerfile
FROM rust:slim AS build
WORKDIR /app
COPY Cargo.toml Cargo.toml
COPY src src
RUN cargo build --release

FROM debian:bookworm-slim
WORKDIR /app
COPY --from=build /app/target/release/docker-fn /app/docker-fn
EXPOSE 3000
CMD ["/app/docker-fn"]
```

Build the image:

```bash
docker build -t docker-fn:latest .
```
Measure cold starts
Now create two small harness scripts. Each script starts the runtime, polls until the first successful response, and then exits. Hyperfine times the whole script, which gives you a clean startup-to-ready metric.
3. Create the Wasm harness
Save this as bench-wasm.sh:

```bash
#!/usr/bin/env bash
set -euo pipefail

PORT="${PORT:-18081}"
LOG=/tmp/wasm-bench.log
WASM_FILE=$(find ../sample-wasi-http-rust/target -name '*.wasm' | head -n 1)

# Launch the component server in the background and remember its PID.
wasmtime serve --addr="127.0.0.1:${PORT}" "$WASM_FILE" >"$LOG" 2>&1 &
PID=$!

cleanup() {
  kill "$PID" 2>/dev/null || true
  wait "$PID" 2>/dev/null || true
}
trap cleanup EXIT

# Poll until the first successful response, up to ~2 seconds.
for _ in $(seq 1 200); do
  if curl -fsS "http://127.0.0.1:${PORT}/" >/dev/null 2>&1; then
    exit 0
  fi
  sleep 0.01
done

echo "wasm server did not become ready" 1>&2
exit 1
```

4. Create the Docker harness
Save this as bench-docker.sh:

```bash
#!/usr/bin/env bash
set -euo pipefail

PORT="${PORT:-18082}"
NAME=docker-fn-bench

# Launch the container detached; --rm removes it once stopped.
docker run --name "$NAME" --rm -d -p "${PORT}:3000" docker-fn:latest >/dev/null

cleanup() {
  docker stop "$NAME" >/dev/null 2>&1 || true
}
trap cleanup EXIT

# Poll until the first successful response, up to ~2 seconds.
for _ in $(seq 1 200); do
  if curl -fsS "http://127.0.0.1:${PORT}/" >/dev/null 2>&1; then
    exit 0
  fi
  sleep 0.01
done

echo "docker server did not become ready" 1>&2
exit 1
```

Make both harnesses executable:

```bash
chmod +x bench-wasm.sh bench-docker.sh
```

5. Run the benchmark
```bash
hyperfine \
  --warmup 3 \
  --runs 20 \
  --export-json startup-results.json \
  './bench-wasm.sh' \
  './bench-docker.sh'
```

If you want cleaner shell snippets before sharing them with your team, run them through TechBytes’ Code Formatter. It is a small step, but benchmark harnesses tend to grow messy once you add logging, retry logic, and provider-specific flags.
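Hyperfine's JSON export also makes the comparison scriptable. A minimal sketch, assuming jq is installed; command, mean, and stddev are fields in Hyperfine's documented JSON output:

```bash
# Print the mean and standard deviation (in seconds) per benchmarked command.
jq -r '.results[] | "\(.command): mean=\(.mean)s stddev=\(.stddev)s"' \
  startup-results.json
```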
Verify and interpret
Verification checklist
- curl http://127.0.0.1:18081/ returns hello world when the Wasm server is up.
- curl http://127.0.0.1:18082/ returns hello world when the container is up.
- startup-results.json exists and contains two benchmark entries.
- Hyperfine prints a summary comparing both commands by relative speed.
Expected output shape
```text
Benchmark 1: ./bench-wasm.sh
  Time (mean ± σ):  lower-ms-range

Benchmark 2: ./bench-docker.sh
  Time (mean ± σ):  higher-ms-range

Summary
  './bench-wasm.sh' ran faster than './bench-docker.sh'
```

Do not fixate on one run. Look at mean, standard deviation, and the relative comparison. If Wasm wins but the variance overlaps heavily, your environment is noisy and the result is weaker than it looks; a quick overlap check is sketched after the list below.
How to read the result
- A lower mean suggests a shorter cold-start path on your machine.
- A lower standard deviation suggests the runtime is more predictable under repeat launches.
- A small delta means startup overhead is probably not your main architecture driver.
- A large delta on local tests does not automatically predict the same win on a managed serverless platform.
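One way to make the variance-overlap warning concrete is to compare the one-sigma intervals from the JSON export. A rough heuristic sketch, not a statistical test, assuming jq and awk are available:

```bash
# Heuristic: do the two mean ± stddev intervals overlap?
read -r M1 S1 M2 S2 < <(jq -r \
  '[.results[0].mean, .results[0].stddev, .results[1].mean, .results[1].stddev] | @tsv' \
  startup-results.json)
if awk -v m1="$M1" -v s1="$S1" -v m2="$M2" -v s2="$S2" \
    'BEGIN { exit !((m1 + s1) >= (m2 - s2) && (m2 + s2) >= (m1 - s1)) }'; then
  echo "intervals overlap: treat the ranking as weak evidence"
else
  echo "intervals separated: the ranking is more trustworthy"
fi
```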
When to choose each
Startup latency is important, but it is not the whole decision. Use the result as one input, then weigh platform fit.
Choose Wasm when:
- You care about shaving local cold-start overhead for small, short-lived functions.
- You want strong sandboxing with small deployable artifacts.
- Your workload fits a narrow interface like HTTP request in, HTTP response out.
- You control the runtime or are targeting a platform with first-class WASI support.
Choose Docker when:
- You need broad hosting compatibility across CI, staging, and production.
- You rely on mature container tooling for scanning, debugging, and observability.
- Your function depends on native libraries, custom OS packages, or existing base-image workflows.
- You are optimizing team throughput more than raw startup time.
Troubleshooting and what's next
Top 3 troubleshooting fixes
- Port already in use: change PORT in each script or stop the previous benchmark process. Reused ports are the fastest way to get false failures (a port-probe sketch follows this list).
- Wasm component will not start: confirm you built a WASI HTTP component and not a plain Wasm module. The official Wasmtime CLI documents serve for the wasi:http/proxy world.
- Docker result looks wildly slower on first run only: make sure the image is already built locally. An image build or pull is not part of startup latency and should not be timed with the harness.
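For the port fix above, you can probe for an existing listener before launching either harness. A small sketch, assuming bash with /dev/tcp support:

```bash
# Fail fast if something is already listening on the benchmark port.
PORT="${PORT:-18081}"
if bash -c "exec 3<>/dev/tcp/127.0.0.1/${PORT}" 2>/dev/null; then
  echo "port ${PORT} is already in use; rerun with a different PORT" >&2
  exit 1
fi
```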
What's next
- Add a second benchmark for the /wait route to separate startup from per-request behavior (see the sketch after this list).
- Repeat the test with a larger container image to quantify image bloat penalties.
- Run the same harness inside your CI runners to see whether developer laptops are misleading you.
- After the local benchmark, move to a managed platform benchmark and measure scheduler latency separately.
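A low-effort way to run the /wait comparison from the first item is to parameterize the probe path. This assumes you edit both harness scripts so the curl loop targets "http://127.0.0.1:${PORT}${ROUTE}" with ROUTE defaulting to /:

```bash
# Time launch-to-first-response on /wait instead of the root route,
# assuming both harnesses honor a ROUTE environment variable.
hyperfine \
  --warmup 3 \
  --runs 20 \
  --export-json wait-results.json \
  'ROUTE=/wait ./bench-wasm.sh' \
  'ROUTE=/wait ./bench-docker.sh'
```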
Frequently Asked Questions
How do I benchmark Wasm vs Docker cold starts fairly?
Run the same minimal handler on the same host for both artifacts and time each launch to its first 200 OK response so you capture real startup work instead of just process creation.

Is a local Wasm vs Docker benchmark the same as a cloud serverless benchmark?
No. Local results isolate runtime overhead. Managed platforms add scheduler queues, registry pulls, and multi-tenant throttling, which can dominate end-to-end cold starts and deserve a separate benchmark.

Why use Hyperfine instead of a hand-rolled loop with time?
Hyperfine handles warmup runs, repeated timed runs, statistical summaries, and JSON export out of the box, which makes results repeatable and easy to compare or post-process.

Does Wasm always start faster than Docker for serverless functions?
No. Wasm often trims local cold-start overhead for tiny handlers, but noisy variance, larger workloads, or platform-level latency can erase the win, so treat startup as one input among several.