Node.js 26 Parallelism Cheat Sheet [Deep Dive 2026]
Bottom Line
Use worker threads for CPU-bound JavaScript, SharedArrayBuffer for zero-copy shared state, and Atomics only around the smallest critical sections. If the workload is mostly I/O, stay on the event loop and skip the extra coordination cost.
Key Takeaways
- worker_threads are for CPU-bound JavaScript; Node docs explicitly say they do not help much with I/O.
- SharedArrayBuffer is shared, not transferred, and must never go in a transferList.
- Use Atomics.load/store/add/compareExchange for correctness; reserve wait/notify for parked worker threads.
- Tune worker memory with resourceLimits and process memory with --max-old-space-size and --max-semi-space-size.
- Profile real contention with worker.performance.eventLoopUtilization(), startCpuProfile(), and startHeapProfile().
Node.js parallelism is finally broad enough that most teams can stop guessing. The practical stack in the v26 line is still the same core trio: worker_threads for CPU-bound work, SharedArrayBuffer for zero-copy shared state, and Atomics for correctness at the handoff points. This cheat sheet is optimized for scanning: commands by purpose, config knobs, advanced patterns, and the sharp edges that usually cause the first production incident.
Commands
Checked against the official Node.js v26 worker_threads docs, v26 globals docs, current CLI docs, the Node.js releases page, and the Release Working Group schedule. As of April 30, 2026, the release plan scheduled v26 for April 22, 2026, while the latest indexed releases snapshot still showed v25.9.0 as Current and v24.15.0 as LTS, so confirm node -v locally before rolling flags into production.
Spawn And Lifecycle
import { Worker } from 'node:worker_threads';
const worker = new Worker(new URL('./hash-worker.mjs', import.meta.url), {
name: 'hash',
workerData: { batchSize: 1000 }
});
worker.once('online', () => console.log('worker online'));
worker.on('message', (msg) => console.log('result', msg));
worker.once('error', console.error);
worker.once('exit', (code) => {
if (code !== 0) console.error(`worker stopped with code ${code}`);
});
Dedicated Channel For Ongoing Traffic
import { MessageChannel, Worker } from 'node:worker_threads';
const worker = new Worker(new URL('./worker.mjs', import.meta.url));
const { port1, port2 } = new MessageChannel();
worker.postMessage({ port: port1 }, [port1]);
port2.postMessage({ jobId: 42, payload: 'start' });
Shared Counter
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
const counter = new Int32Array(sab);
Atomics.add(counter, 0, 1);
const current = Atomics.load(counter, 0);
console.log(current);
Per-Worker Memory Limits
import { Worker } from 'node:worker_threads';
new Worker(new URL('./worker.mjs', import.meta.url), {
resourceLimits: {
maxOldGenerationSizeMb: 256,
maxYoungGenerationSizeMb: 32,
stackSizeMb: 4
}
});
Process-Wide Memory Flags
NODE_OPTIONS='--max-old-space-size=1536 --max-semi-space-size=64' node app.mjs
Default Config File
{
"nodeOptions": {
"max-old-space-size": 1536,
"max-semi-space-size": 64
}
}
node --experimental-default-config-file app.mjs
Profile The Worker Instead Of Guessing
const util = worker.performance.eventLoopUtilization();
const cpuHandle = await worker.startCpuProfile();
const cpuProfile = await cpuHandle.stop();
const heapHandle = await worker.startHeapProfile();
const heapProfile = await heapHandle.stop();
Worker Threads Core API
What They Are Good At
- Node's official docs say Workers are useful for CPU-intensive JavaScript operations.
- The same docs say they do not help much with I/O-intensive work, because Node's built-in async I/O is usually more efficient.
- Unlike child_process or cluster, workers can share memory by transferring ArrayBuffer instances or sharing SharedArrayBuffer instances.
- Use workerData for one-shot startup payloads and postMessage() for long-lived traffic.
Core Methods And Signals
- new Worker(): spawn a JS thread with optional name, argv, env, and resourceLimits.
- worker.on('message'): receive parent or child messages over the built-in channel.
- worker.once('error'): surface uncaught worker exceptions.
- worker.once('exit'): verify non-zero exit codes explicitly.
- worker.threadName: expose readable thread labels for logs and profiling views.
import {
Worker,
isMainThread,
parentPort,
workerData
} from 'node:worker_threads';
if (isMainThread) {
const worker = new Worker(new URL(import.meta.url), {
workerData: { n: 10_000 }
});
worker.on('message', console.log);
} else {
parentPort.postMessage(workerData.n * 2);
}
Message Topology Rules
- Use a dedicated MessageChannel when you want a clean, purpose-specific pipe instead of overloading the default worker channel.
- Use BroadcastChannel for one-to-many coordination like drain signals or cache invalidation across a pool (see the sketch after this list).
- Use postMessageToThread() only when threads are not in a direct parent-child relationship.
- Use setEnvironmentData() and getEnvironmentData() when every new worker should receive a cloned config snapshot automatically.
- Use SHARE_ENV only when you intentionally want process.env mutations to be visible across threads.
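A minimal drain-signal sketch, assuming an illustrative channel name ('pool-signals') and message shape ({ type: 'drain' }):
import { BroadcastChannel, isMainThread } from 'node:worker_threads';
const channel = new BroadcastChannel('pool-signals');
if (isMainThread) {
  // one postMessage() reaches every worker subscribed to 'pool-signals'
  channel.postMessage({ type: 'drain' });
  channel.close();
} else {
  channel.onmessage = (event) => {
    if (event.data.type === 'drain') {
      // stop pulling new jobs, let in-flight work finish
      channel.close();
    }
  };
}
Close or unref() the channel when you are done with it; otherwise it keeps the thread alive.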
import {
setEnvironmentData,
getEnvironmentData,
Worker,
isMainThread
} from 'node:worker_threads';
if (isMainThread) {
setEnvironmentData('region', 'us-east-1');
new Worker(new URL(import.meta.url));
} else {
console.log(getEnvironmentData('region'));
}
Shared Memory And Atomics
SharedArrayBuffer Vs ArrayBuffer
- ArrayBuffer can be copied or transferred.
- SharedArrayBuffer is shared memory and stays accessible from both threads.
- Official Node docs are explicit: if a message contains a SharedArrayBuffer, it cannot be listed in a transferList.
- Transferring a normal ArrayBuffer detaches other views that point at the same underlying memory, as shown below.
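A quick detach sketch, reusing the worker handle from the earlier lifecycle snippet:
const buf = new ArrayBuffer(1024);
const view = new Uint8Array(buf);
worker.postMessage({ buf }, [buf]);
console.log(buf.byteLength);  // 0: detached in the sender after the transfer
console.log(view.byteLength); // 0: sibling views over the same memory detach too
The shared-memory variant below hands the same bytes to the worker without detaching anything.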
const sab = new SharedArrayBuffer(1024);
const bytes = new Uint8Array(sab);
bytes[0] = 7;
worker.postMessage({ sab });
// No transferList here. SharedArrayBuffer is shared, not transferred.
Atomics Cheat Sheet
- Atomics.load() and Atomics.store(): visibility and ordering for shared values.
- Atomics.add(), sub(), and(), or(), xor(): simple counters and flags.
- Atomics.compareExchange(): lock-free state transition when one writer must win (see the sketch after this list).
- Atomics.wait() and Atomics.notify(): park and wake worker threads around queue state.
- Keep shared layouts small and explicit: one typed array for state, one for payload indexes, one for ring-buffer offsets.
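A minimal compareExchange() sketch for the one-writer-wins case; the slot layout and tryClaim() helper are illustrative:
// int32[0] = slot state: 0 free, 1 claimed
const claimSab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
const slot = new Int32Array(claimSab);
function tryClaim() {
  // compareExchange returns the previous value, so exactly one caller observes 0 and wins
  return Atomics.compareExchange(slot, 0, 0, 1) === 0;
}
if (tryClaim()) {
  // do the exclusive work, then release the slot
  Atomics.store(slot, 0, 0);
}
Reach for wait() and notify(), as in the producer-consumer pattern below, only once a worker genuinely needs to sleep.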
// int32[0] = state: 0 idle, 1 ready
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 2);
const state = new Int32Array(sab, 0, 1);
const payload = new Int32Array(sab, Int32Array.BYTES_PER_ELEMENT, 1);
// producer
Atomics.store(payload, 0, 99);
Atomics.store(state, 0, 1);
Atomics.notify(state, 0, 1);
// consumer worker
while (true) {
Atomics.wait(state, 0, 0);
const value = Atomics.load(payload, 0);
console.log(value);
Atomics.store(state, 0, 0);
}
Buffer Footguns
- Node docs warn that Buffer.from() and Buffer.allocUnsafe() often use the internal Buffer pool (see the sketch after this list).
- Those pooled buffers are cloned instead of transferred, which can copy more memory than expected.
- That behavior can increase memory usage and create security concerns when you assume only a tiny slice moved.
- If you know a buffer must never transfer, mark its backing store with markAsUntransferable().
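A small sketch of the pooling behavior; exact numbers depend on Buffer.poolSize, which defaults to 8192 bytes:
// Small allocUnsafe() buffers are carved out of a shared internal pool,
// so the backing ArrayBuffer is far larger than the Buffer itself.
const small = Buffer.allocUnsafe(16);
console.log(small.length);            // 16
console.log(small.buffer.byteLength); // typically 8192: the whole pool, not just your 16 bytes
console.log(small.byteOffset);        // often non-zero: the slice starts mid-pool
markAsUntransferable(), shown next, is the guard rail for buffers that must never leave that pool.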
import { MessageChannel, markAsUntransferable } from 'node:worker_threads';
const pooled = new ArrayBuffer(8);
markAsUntransferable(pooled);
const { port1 } = new MessageChannel();
try {
port1.postMessage(new Uint8Array(pooled), [pooled]);
} catch (err) {
console.error(err.name); // DataCloneError
}
Configuration And Debugging
Commands Grouped By Purpose
- Cap process heap: use --max-old-space-size when the whole process can overgrow.
- Adjust young generation: use --max-semi-space-size when allocation churn dominates.
- Cap one worker: use resourceLimits in the Worker constructor.
- Inspect workers: use --experimental-worker-inspection for Chrome DevTools worker inspection support.
- Centralize defaults: use NODE_OPTIONS or node.config.json with --experimental-default-config-file.
# Process-wide V8 heap tuning
node --max-old-space-size=1536 --max-semi-space-size=64 app.mjs
# Worker inspection in DevTools
node --experimental-worker-inspection app.mjs
# Heap snapshots near memory pressure
node --max-old-space-size=100 --heapsnapshot-near-heap-limit=3 app.mjs
Config File Pattern
The current official CLI docs support a JSON config file behind --experimental-default-config-file. The docs also note that configuration priority is: NODE_OPTIONS and CLI, then configuration file, then dotenv NODE_OPTIONS.
{
"nodeOptions": {
"max-old-space-size": 1536,
"max-semi-space-size": 64
}
}
node --experimental-default-config-file app.mjs
Runtime Signals Worth Watching
- worker.performance.eventLoopUtilization(): see whether a worker is actually saturated.
- worker.startCpuProfile(): capture CPU hot paths from the parent.
- worker.startHeapProfile(): inspect object retention inside one worker.
- worker.resourceLimits: confirm what limits were applied, especially in shared pool factories.
- navigator.hardwareConcurrency: quick upper bound for initial pool sizing, not the final answer.
console.log(navigator.hardwareConcurrency);
console.log(worker.performance.eventLoopUtilization());
console.log(worker.resourceLimits);
Advanced Patterns And Footguns
Advanced APIs Worth Knowing
- BroadcastChannel: simple one-to-many signaling inside a process.
- postMessageToThread(): direct messaging to non-parent threads; still marked active development in official docs.
- locks or navigator.locks: experimental lock manager for named shared resources.
- worker[Symbol.asyncDispose](): terminate workers automatically when an await using scope exits (see the sketch after this list).
- isInternalThread: useful when debugging loaders or internal worker behavior.
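A minimal await using sketch; the worker file path is illustrative and the syntax assumes a runtime with explicit resource management support:
import { Worker } from 'node:worker_threads';
async function runOnce() {
  // when this scope exits, worker[Symbol.asyncDispose]() runs and terminates the thread
  await using worker = new Worker(new URL('./one-shot-worker.mjs', import.meta.url));
  worker.postMessage({ jobId: 1 });
  // await the result here before the scope closes
}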
import { locks } from 'node:worker_threads';
await locks.request('cache-rebuild', async () => {
// only one worker rebuilds the cache at a time
});
import { postMessageToThread } from 'node:worker_threads';
await postMessageToThread(targetThreadId, { type: 'drain' }, [], 1000);
Pool Design Rules That Age Well
- Start with a fixed-size pool, not one worker per request (see the sketch after this list).
- Backpressure belongs in the queue, not in ad hoc retries between threads.
- Use AsyncResource for worker pools so async stack traces and diagnostics stay correlated.
- Prefer coarse task messages and tiny shared state; do not atomically coordinate every field.
- Benchmark with realistic payload sizes because small messages and large messages fail for different reasons.
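A compact sketch of that shape, assuming a hypothetical job-worker.mjs that replies with exactly one message per job; error handling and crashed-worker replacement are trimmed for space:
import { Worker } from 'node:worker_threads';
const poolSize = Math.min(4, navigator.hardwareConcurrency);
const idle = Array.from({ length: poolSize }, () =>
  new Worker(new URL('./job-worker.mjs', import.meta.url)));
const queue = [];
function runJob(payload) {
  return new Promise((resolve, reject) => {
    // backpressure lives here: queue.length is the signal callers can watch
    queue.push({ payload, resolve, reject });
    pump();
  });
}
function pump() {
  if (queue.length === 0 || idle.length === 0) return;
  const worker = idle.pop();
  const job = queue.shift();
  worker.once('message', (result) => {
    job.resolve(result);
    idle.push(worker);
    pump();
  });
  worker.postMessage(job.payload);
}
The AsyncResource wrapper below is the piece that keeps async context and diagnostics correlated for each queued callback.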
import { AsyncResource } from 'node:async_hooks';
class WorkerTask extends AsyncResource {
constructor() {
super('WorkerTask');
}
done(callback, err, result) {
this.runInAsyncScope(callback, null, err, result);
this.emitDestroy();
}
}
Common Failure Modes
- Using workers for I/O-bound HTTP handlers and seeing no win.
- Sharing too much mutable state and spending more time in coordination than compute.
- Transferring a buffer view without realizing sibling views detach.
- Over-sizing the pool and increasing run-queue contention, GC pressure, and context switching.
- Publishing logs, snapshots, or memory dumps without stripping secrets first. Use the TechBytes Data Masking Tool before sharing production diagnostics outside the team.
Frequently Asked Questions
When should I use worker threads instead of child_process or cluster in Node.js?
Use worker_threads when the bottleneck is CPU-bound JavaScript and the threads benefit from sharing memory through ArrayBuffer transfers or SharedArrayBuffer. Use child_process or process-level isolation when you need separate OS processes, fault isolation, or different runtimes. For most compute-heavy JavaScript, workers are the first option to test.
Is SharedArrayBuffer faster than postMessage in Node.js?
It can be for large or frequently updated payloads because nothing is copied, but the price is Atomics and more complex invariants. For one-shot jobs with moderate payloads, plain postMessage() is often simpler and fast enough.
Can Atomics.wait block the Node.js event loop?
Yes. Atomics.wait() blocks the calling thread, so calling it on the main thread freezes the event loop. It is much safer inside a dedicated worker that exists to wait on shared state, queues, or a ring buffer.
How many worker threads should a Node.js pool have?
Start near navigator.hardwareConcurrency for pure CPU workloads, then benchmark down instead of assuming more is better. Real pool size depends on payload size, GC pressure, native add-ons, and how much time workers spend blocked on shared state. Measure throughput, queue latency, and eventLoopUtilization() before locking the number in.