Cloud Infrastructure

Node.js Cold Starts [2026]: Snapshotting + Prefetching

Dillip Chowdary
Tech Entrepreneur & Innovator · April 26, 2026 · 8 min read

Bottom Line

Use snapshotting to serialize pure, always-needed bootstrap state inside your Node.js process, then use platform pre-initialization for predictable traffic. The first cuts work at startup; the second cuts how often startup happens at all.

Key Takeaways

  • Node 24 marks v8.startupSnapshot stable, but the CLI snapshot flags remain experimental.
  • AWS Lambda supports nodejs24.x and nodejs22.x as of April 26, 2026.
  • Snapshot blobs must match the exact Node version, platform, architecture, and compatible V8 flags.
  • Provisioned concurrency only applies to a published version or alias, never $LATEST.

Cold starts are no longer just a Java problem or a traffic-spike problem. In 2026, the most effective Node.js strategy is layered: reduce the amount of work your process performs during bootstrap, then reduce how often the platform has to create a fresh environment at all. For Node.js, that means combining snapshotting inside the runtime with selective prefetching during function initialization.

Prerequisites

What you need

  • Node.js 24 for the cleanest path. In Node 24, the v8.startupSnapshot API is stable.
  • A function runtime where you control the Node process entrypoint, such as a containerized function, custom runtime, or self-hosted worker.
  • If you deploy on AWS Lambda, a published function version plus an alias. Provisioned concurrency does not work on $LATEST.
  • A bundler that can emit one file. Node's runtime snapshot support still expects a single snapshotted user-land file.

Bottom Line

Snapshot what is deterministic and always used, then pre-initialize only the hot dependencies your first request always needs. That split keeps startup fast without turning init into a giant speculative load phase.

One nuance matters up front: Node 24 makes the Startup Snapshot API stable, but the CLI flags you use to build and load runtime snapshots, including --build-snapshot, --build-snapshot-config, and --snapshot-blob, are still marked experimental. Treat this as a controlled optimization, not a blind default for every service.

Step 1: Measure the Baseline

Before snapshotting anything, isolate what your cold path actually does. In most Node.js functions, the expensive work is not the handler body. It is the startup graph: imports, JSON parsing, regex compilation, client construction, and config loading.

Create a tiny measurable function

// src/boot.mjs
import fs from 'node:fs';

export function buildBoot() {
  return {
    featureFlags: JSON.parse(
      fs.readFileSync(new URL('./flags.json', import.meta.url), 'utf8')
    ),
    routes: {
      health: /^\/health$/,
      user: /^\/users\/[a-z0-9-]+$/i
    }
  };
}

// src/handler.mjs
import { buildBoot } from './boot.mjs';

let boot = globalThis.__BOOT ?? null;

function getBoot() {
  if (!boot) boot = buildBoot();
  return boot;
}

export async function handle(event) {
  const state = getBoot();

  return {
    ok: true,
    featureCount: Object.keys(state.featureFlags).length,
    userMatch: state.routes.user.test(event.path)
  };
}

// runner.mjs
import { performance } from 'node:perf_hooks';

const started = performance.now();
const { handle } = await import('./src/handler.mjs');
const event = JSON.parse(process.argv[2] ?? '{"path":"/users/demo-user"}');
const result = await handle(event);

console.log(JSON.stringify({
  initMs: Number((performance.now() - started).toFixed(2)),
  result
}));

Run it a few times and write down the first-run number. That is your local cold-start baseline.

Pro tip: Snapshot only data and code you need on nearly every invocation: parsed config, compiled regexes, lookup tables, and reusable module state. Rare-path objects should stay lazy.
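One way to enforce that split is a small cached-factory helper, so rare-path objects are built on first use and never during init or snapshot construction. This is an illustrative sketch; the `lazy` helper and the `getReportBuilder` example are hypothetical names, not part of the code above:

```javascript
// lazy.mjs — defer rare-path construction until first use, then cache it
export function lazy(factory) {
  let value;
  let built = false;
  return () => {
    if (!built) {
      value = factory();
      built = true;
    }
    return value;
  };
}

// Example: an expensive object only some requests touch.
// It stays out of the snapshot and out of init entirely.
export const getReportBuilder = lazy(() => ({
  build: (rows) => rows.map((r) => `row:${r}`).join('\n')
}));
```

Hot-path state goes in the snapshot; anything behind a `lazy()` wrapper costs nothing until the first request that actually needs it.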

Step 2: Build a Startup Snapshot

Node's runtime snapshot support still has an important limitation: only one user-land file can be snapshotted directly. The practical fix is to bundle your snapshot entry into one file, then generate a blob that later boots the process with that state already materialized.

Bundle the snapshot entry

npm install --save-dev esbuild

npx esbuild src/snapshot-entry.mjs \
  --bundle \
  --platform=node \
  --target=node24 \
  --outfile=dist/snapshot-entry.bundle.mjs

// src/snapshot-entry.mjs
import { buildBoot } from './boot.mjs';

globalThis.__BOOT = buildBoot();

Generate the snapshot blob

{
  "builder": "./dist/snapshot-entry.bundle.mjs",
  "withoutCodeCache": false
}

node \
  --snapshot-blob ./dist/function.blob \
  --build-snapshot-config ./snapshot.config.json

Why use --build-snapshot-config instead of only --build-snapshot? Because the config file makes the single-file builder explicit, and the optional withoutCodeCache switch lets you trade snapshot size against function compilation time.

At runtime, load the blob before your normal entrypoint:

node --snapshot-blob ./dist/function.blob ./runner.mjs

If your boot state is pure and deterministic, the handler will see globalThis.__BOOT immediately instead of rebuilding it during the first request.

Watch out: Snapshot blobs are strict artifacts. The running Node binary must match the exact version, platform, and architecture that built the blob, and V8 flag compatibility also matters.
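A cheap guardrail is to record the build environment next to the blob and refuse to start on a mismatch, so an incompatible blob fails loudly at deploy time instead of mysteriously at runtime. This is a sketch; the manifest file name and helper names are hypothetical:

```javascript
// snapshot-manifest.mjs — record and verify the environment a blob was built in
import fs from 'node:fs';

export function buildManifest() {
  return {
    node: process.version,       // must match the loading binary exactly
    platform: process.platform,  // 'linux', 'darwin', ...
    arch: process.arch           // 'x64', 'arm64', ...
  };
}

// Call at build time, right after generating the blob.
export function writeManifest(path) {
  fs.writeFileSync(path, JSON.stringify(buildManifest(), null, 2));
}

// Call at startup, before trusting snapshotted state.
export function assertCompatible(manifest) {
  const now = buildManifest();
  for (const key of ['node', 'platform', 'arch']) {
    if (manifest[key] !== now[key]) {
      throw new Error(
        `Snapshot blob mismatch: ${key} was ${manifest[key]}, runtime has ${now[key]}`
      );
    }
  }
}
```

Wiring `writeManifest` into the build step and `assertCompatible` into the entrypoint turns the "exact version, platform, architecture" rule into an automated check.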

Step 3: Prefetch Hot State

Snapshotting reduces in-process bootstrap work. It does not change platform behavior when a fresh environment is still required. That is where prefetching comes in: start the hot-path work during init, then keep rare-path work lazy.

Prefetch what the first request always needs

// index.mjs
import { SSMClient, GetParameterCommand } from '@aws-sdk/client-ssm';

const ssm = new SSMClient();
const hotConfigPromise = ssm.send(
  new GetParameterCommand({ Name: process.env.FLAG_PARAM })
);

let hotConfig;

export const handler = async (event) => {
  hotConfig ??= JSON.parse((await hotConfigPromise).Parameter?.Value ?? '{}');

  if (event.path === '/health') {
    return { statusCode: 200, body: 'ok' };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ flags: hotConfig })
  };
};

This pattern starts the fetch during initialization instead of waiting for the first real request. It aligns with AWS guidance: move reusable initialization into static init, but lazily load objects that only appear on specific execution paths.
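One caveat: prefetching during init means init can now stall on the network. A defensive variant races the prefetch against a deadline and degrades to a safe default. The `withTimeout` helper below is an illustrative sketch, not an AWS SDK API:

```javascript
// with-timeout.mjs — cap how long a prefetch can hold up the first request
export function withTimeout(promise, ms, fallback) {
  let timer;
  const deadline = new Promise((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  // Whichever settles first wins; the timer is cleared either way.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}
```

In the handler above you would wrap the awaited `hotConfigPromise` so a slow parameter fetch resolves to a default document instead of blocking the first invocation indefinitely.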

Pre-initialize Lambda environments for predictable traffic

VERSION=$(aws lambda publish-version \
  --function-name edge-api \
  --query 'Version' \
  --output text)

aws lambda create-alias \
  --function-name edge-api \
  --name LIVE \
  --function-version "$VERSION"

aws lambda put-provisioned-concurrency-config \
  --function-name edge-api \
  --qualifier LIVE \
  --provisioned-concurrent-executions 10

On AWS Lambda, provisioned concurrency gives you pre-initialized execution environments. That is effectively platform-level prefetching: the environment is ready before traffic lands. If you already know your traffic peaks, start here. AWS also recommends estimating concurrency as:

concurrency = average requests per second * average request duration in seconds

Then add a small buffer. AWS documentation explicitly suggests about 10% on top of typical concurrency needs.
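That estimate is simple enough to encode directly in a deploy script. A minimal sketch of the formula plus the roughly 10% buffer (function name is illustrative):

```javascript
// provisioned-estimate.mjs — size provisioned concurrency with a 10% buffer
export function estimateProvisionedConcurrency(avgRps, avgDurationSeconds, buffer = 0.10) {
  const base = avgRps * avgDurationSeconds;
  // Round before ceiling so float noise (10 * 1.1 === 11.000000000000002)
  // does not overshoot by a whole environment.
  const buffered = Number((base * (1 + buffer)).toFixed(6));
  return Math.ceil(buffered);
}
```

At 50 requests per second with a 200 ms average duration, the base concurrency is 10, so with the buffer you would provision 11.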

Verification and Expected Output

You are looking for two separate wins: lower local init time from snapshotting, and fewer cold environments from pre-initialization.

Local verification

  1. Run the baseline entry without a blob.
  2. Run the same entry with --snapshot-blob.
  3. Compare the initMs field, not just total wall clock.

node ./runner.mjs '{"path":"/users/demo-user"}'
node --snapshot-blob ./dist/function.blob ./runner.mjs '{"path":"/users/demo-user"}'

A typical expected pattern looks like this:

{"initMs":142.61,"result":{"ok":true,"featureCount":3,"userMatch":true}}
{"initMs":49.34,"result":{"ok":true,"featureCount":3,"userMatch":true}}

The exact numbers vary by dependency graph, CPU, and I/O, but the shape should be consistent: same result payload, lower init cost.

Lambda verification

  • Confirm the alias, not $LATEST, is receiving traffic.
  • Check that ProvisionedConcurrencyInvocations is non-zero.
  • Use get-provisioned-concurrency-config to confirm allocation.
  • Inspect the AWS_LAMBDA_INITIALIZATION_TYPE environment variable when debugging mixed behavior. Expected values are provisioned-concurrency or on-demand.

aws lambda get-provisioned-concurrency-config \
  --function-name edge-api \
  --qualifier LIVE

Troubleshooting and What's Next

Top 3 troubleshooting checks

  1. The blob fails after deploy. Rebuild it with the exact same Node version, platform, and architecture as production. Snapshot blobs are not portable build artifacts.
  2. The snapshot gives little or no improvement. You probably snapshotted too little, or your real bottleneck is network init, not module init. Move hot config fetches and client construction into init, and keep rare paths lazy.
  3. You still see cold starts on Lambda. Most often the trigger is hitting $LATEST or an alias without provisioned concurrency, or traffic is exceeding the configured provisioned capacity and spilling into on-demand environments.

What's next

  • Add v8.startupSnapshot serialize and deserialize callbacks if you need to transform state before snapshot write or after restore.
  • Split always-hot bootstrap from rare-path bootstrap so you can snapshot one and lazy-load the other.
  • Bundle only the certificates you need on modern Lambda runtimes instead of restoring broad CA loading behavior, which can hurt cold-start performance.
  • If your traffic pattern is highly predictable, put provisioned concurrency behind scheduled or target-tracking scaling instead of leaving it static.
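For the serialize and deserialize callbacks mentioned above, a guarded sketch looks like the following. The module and state names are illustrative; the `isBuildingSnapshot()` guard lets the same file also run normally outside a snapshot build:

```javascript
// snapshot-hooks.mjs — transform state around snapshot write and restore
import v8 from 'node:v8';

const state = { flags: { beta: true }, restoredAt: null };

if (v8.startupSnapshot.isBuildingSnapshot()) {
  // Runs just before the blob is written: drop anything non-serializable
  // (sockets, timers, open handles) from the snapshotted state.
  v8.startupSnapshot.addSerializeCallback(() => {
    state.restoredAt = null;
  });
  // Runs when a process boots from the blob: re-derive runtime-only values.
  v8.startupSnapshot.addDeserializeCallback(() => {
    state.restoredAt = Date.now();
  });
}

globalThis.__BOOT_STATE = state;
```

The split mirrors the rule from earlier steps: only pure, deterministic state goes into the blob, and anything tied to the live process gets rebuilt in the deserialize callback.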

As of April 26, 2026, AWS Lambda supports nodejs24.x and nodejs22.x. That makes this a good year to standardize on a modern Node runtime, snapshot the pure startup state you fully control, and let the platform pre-initialize only the capacity you actually need.

Frequently Asked Questions

Does Node.js snapshotting work with AWS Lambda zip functions?
Not in the same direct way you would use it in a custom Node process, because managed Lambda runtimes abstract the node startup command. If you fully control the process entrypoint, such as in a containerized or custom runtime flow, snapshotting is practical. For managed Lambda cold-start control, provisioned concurrency is the native lever.
Why does my snapshot blob break after a Node upgrade?
Because Node checks strict compatibility when loading a blob. The runtime must match the exact Node version, platform, architecture, and compatible V8 flags used when the snapshot was created. If any of those change, rebuild the blob as part of your deploy pipeline.
Should I prefetch every dependency during function init?
No. Prefetch only the state your first real request almost always needs, such as feature flags, clients, or a hot config document. AWS explicitly notes that static init is useful, but rare-path objects are better left lazy so you do not bloat init time.
Is the Startup Snapshot API production-ready in Node 24?
The v8.startupSnapshot API itself is stable in Node 24. However, the runtime snapshot CLI flags such as --build-snapshot and --snapshot-blob are still marked experimental, so you should treat them as controlled infrastructure features with rollout checks and rebuild automation.
