Developer Reference

WebGPU OIT Reference [2026 Cheat Sheet & Patterns]

Dillip Chowdary
Tech Entrepreneur & Innovator · May 01, 2026 · 14 min read

Bottom Line

For most WebGPU apps, weighted blended OIT is the default shipping choice: one transparent pass, one composite pass, and predictable memory. Reach for per-pixel linked lists or depth peeling only when your scene quality target justifies the extra bandwidth, atomics, and debug cost.

Key Takeaways

  • Weighted blended OIT is usually the best WebGPU default for dense transparent scenes.
  • WebGPU still requires HTTPS and is marked Limited availability on MDN.
  • Use rgba16float accumulation plus a separate revealage target for the cleanest starter layout.
  • Gate timestamp-query and other optional paths behind runtime feature checks.
  • Wrap risky setup in pushErrorScope() and tune weights per scene, not per renderer.

Order-independent transparency is still one of the first places a serious WebGPU renderer stops being simple. The good news is that the implementation surface is now stable enough to treat OIT as an engineering choice instead of a research project. This reference is optimized for shipping: what to pick first, which WebGPU calls matter, how to configure the common weighted blended path, and where advanced exact methods start to hurt.

Support and Scope

Bottom Line

If you need transparent 3D visualization on the web today, start with weighted blended OIT. It maps cleanly to current WebGPU pipelines and avoids the memory and contention risks of exact per-pixel data structures.

What WebGPU gives you right now

  • WebGPU is available only in secure contexts and can run in Web Workers, which is useful when your renderer should stay off the main thread.
  • MDN still marks WebGPU as Limited availability, so production rollouts should keep a fallback path or explicit capability gate.
  • The core rendering pieces you need for OIT are stable: GPUDevice, GPUTexture, GPUCanvasContext.configure(), render passes, blend states, and WGSL fragment outputs.
  • timestamp-query is optional and must be requested only when the adapter exposes it.

What this reference covers

  • Weighted blended OIT as the practical default.
  • Depth peeling, per-pixel linked lists, and stochastic transparency as escalation paths.
  • JS and WGSL snippets you can drop into a renderer.

Technique Map

Which OIT family fits which scene

| Technique | Pass profile | Memory profile | Strengths | Risks |
| --- | --- | --- | --- | --- |
| Weighted blended OIT | Transparent pass + composite pass | Fixed-size render targets | Fast, simple, stable for particles, UI glass, scientific overlays | Approximate; overlaps can wash out or darken if weights are poor |
| Depth peeling | Many geometry passes | Moderate, but bandwidth-heavy | Deterministic layer order, easy mental model | Performance drops fast as layer count rises |
| Per-pixel linked lists | Build + resolve | Node pool plus head-pointer storage | Closest to exact transparency without repeated peeling | Atomic contention, overflow handling, harder debugging |
| Stochastic transparency | Sample-driven | Usually fixed-size | Good for foliage and noisy volumetric-looking content | Noise management and temporal stability are non-trivial |

Ship weighted blended first when

  • Your transparent objects are numerous but visually forgiving.
  • You need predictable memory per frame.
  • You are building dashboards, CAD previews, medical overlays, particles, or annotation-heavy 3D views.
  • You want an implementation that fits standard WebGPU blending and fullscreen composition.

Escalate past weighted blended when

  • You must preserve thin intersecting layers exactly.
  • You have frequent deep overlap of refractive or near-opaque transparent surfaces.
  • Your users will zoom far enough in to catch approximation artifacts.
  • You can budget extra passes, atomics, and failure handling.

Watch out: Weighted blended OIT is not exact transparency. Treat the weight function as scene configuration, not a universal constant.

Commands by Purpose

Core WebGPU calls you actually touch

| Purpose | Command | Why it matters for OIT |
| --- | --- | --- |
| Feature detect | navigator.gpu | Hard gate before any OIT path exists. |
| Adapter selection | navigator.gpu.requestAdapter() | Gets the adapter that exposes limits and optional features. |
| Device creation | adapter.requestDevice() | Requests the logical device; include only features you truly need. |
| Canvas setup | context.configure() | Sets the presentation format and alpha mode for final composite output. |
| Attachment creation | device.createTexture() | Allocates accumulation and revealage targets. |
| Pipeline creation | device.createRenderPipeline() | Defines transparent-pass blend rules and composite-pass output. |
| Transparent pass | commandEncoder.beginRenderPass() | Renders transparent geometry into OIT attachments. |
| Submit | device.queue.submit() | Commits both the transparent and composite passes. |
| Validation capture | device.pushErrorScope("validation") | Catches setup mistakes before they turn into vague downstream failures. |
| Profiling | device.createQuerySet({ type: "timestamp", ... }) | Useful when the adapter supports timestamp-query. |
| Visibility sampling | beginOcclusionQuery() / endOcclusionQuery() | Helps estimate whether costly transparent work is actually contributing. |
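The occlusion-query row is the only one without an example later in this reference, so here is a minimal sketch. The descriptor field is standard WebGPU; `querySet`, `drawIndex`, and the draw bracketing are assumptions about your own pass code.

```javascript
// Sketch: attach an occlusion query set to a render pass descriptor so
// individual transparent draws can be checked for visible samples.
// `querySet` is assumed to come from
// device.createQuerySet({ type: "occlusion", count: drawCount }).
function withOcclusionQueries(passDescriptor, querySet) {
  return { ...passDescriptor, occlusionQuerySet: querySet };
}

// Inside the encoded pass, bracket each draw you want measured:
//   pass.beginOcclusionQuery(drawIndex);
//   pass.draw(vertexCount);
//   pass.endOcclusionQuery();
// then resolve the query set into a buffer after the pass ends.
```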

Minimal bootstrap

if (!navigator.gpu) {
  throw new Error("WebGPU is unavailable in this browser/context.");
}

const adapter = await navigator.gpu.requestAdapter();
if (!adapter) {
  throw new Error("No GPU adapter was returned.");
}

const requiredFeatures = [];
if (adapter.features.has("timestamp-query")) {
  requiredFeatures.push("timestamp-query");
}

const device = await adapter.requestDevice({ requiredFeatures });
const canvas = document.querySelector("canvas");
const context = canvas.getContext("webgpu");

context.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
  alphaMode: "premultiplied",
});

Configuration

Recommended starter layout for weighted blended OIT

  • Use one floating-point accumulation target, typically rgba16float.
  • Use one revealage target that preserves transmittance across the pass.
  • Clear accumulation to (0, 0, 0, 0).
  • Clear revealage to 1 in every channel you plan to sample.
  • Composite into the canvas in a final fullscreen pass.

const size = {
  width: canvas.width,
  height: canvas.height,
};

const accumTex = device.createTexture({
  size,
  format: "rgba16float",
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});

const revealTex = device.createTexture({
  size,
  format: "rgba8unorm",
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});

const transparentPass = {
  colorAttachments: [
    {
      view: accumTex.createView(),
      clearValue: { r: 0, g: 0, b: 0, a: 0 },
      loadOp: "clear",
      storeOp: "store",
    },
    {
      view: revealTex.createView(),
      clearValue: { r: 1, g: 1, b: 1, a: 1 },
      loadOp: "clear",
      storeOp: "store",
    },
  ],
};

Blend state pattern

const accumBlend = {
  color: { srcFactor: "one", dstFactor: "one", operation: "add" },
  alpha: { srcFactor: "one", dstFactor: "one", operation: "add" },
};

const revealBlend = {
  color: {
    srcFactor: "zero",
    dstFactor: "one-minus-src-alpha",
    operation: "add",
  },
  alpha: {
    srcFactor: "zero",
    dstFactor: "one-minus-src-alpha",
    operation: "add",
  },
};
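To wire these blend states into `device.createRenderPipeline()`, pair each one with the matching attachment format in the fragment stage's `targets` array. A sketch, repeating the two blend states so it stands alone; `shaderModule` and the entry-point names are placeholders for your own module:

```javascript
const accumBlend = {
  color: { srcFactor: "one", dstFactor: "one", operation: "add" },
  alpha: { srcFactor: "one", dstFactor: "one", operation: "add" },
};
const revealBlend = {
  color: { srcFactor: "zero", dstFactor: "one-minus-src-alpha", operation: "add" },
  alpha: { srcFactor: "zero", dstFactor: "one-minus-src-alpha", operation: "add" },
};

// Order must match the @location indices in the transparent fragment shader:
// target 0 = accumulation, target 1 = revealage.
const transparentTargets = [
  { format: "rgba16float", blend: accumBlend },
  { format: "rgba8unorm", blend: revealBlend },
];

// const pipeline = device.createRenderPipeline({
//   layout: "auto",
//   vertex: { module: shaderModule, entryPoint: "vs_main" },
//   fragment: {
//     module: shaderModule,
//     entryPoint: "fs_transparent",
//     targets: transparentTargets,
//   },
// });
```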

Transparent pass shader shape

struct TransparentOut {
  @location(0) accum: vec4f,
  @location(1) reveal: vec4f,
};

@fragment
fn fs_transparent(@location(0) color: vec3f, @location(1) alphaIn: f32) -> TransparentOut {
  let alpha = clamp(alphaIn, 0.0, 1.0);
  let weight = max(1e-2, pow(alpha + 0.01, 4.0));

  var out: TransparentOut;
  out.accum = vec4f(color * alpha * weight, alpha * weight);
  out.reveal = vec4f(0.0, 0.0, 0.0, alpha);
  return out;
}

Composite pass shader shape

@group(0) @binding(0) var accumTex: texture_2d<f32>;
@group(0) @binding(1) var revealTex: texture_2d<f32>;

@fragment
fn fs_composite(@builtin(position) pos: vec4f) -> @location(0) vec4f {
  let xy = vec2i(pos.xy);
  let accum = textureLoad(accumTex, xy, 0);
  let reveal = textureLoad(revealTex, xy, 0).r;

  let avgColor = accum.rgb / max(accum.a, 1e-4);
  let outAlpha = 1.0 - reveal;

  return vec4f(avgColor, outAlpha);
}

Pro tip: Tune the weight function against your worst overlap case first. If labels, fog cards, and particle clouds all use different alpha behavior, expose material-class presets instead of one global weight.
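One way to act on that tip is a CPU-side mirror of the shader math, so weight candidates can be evaluated against recorded layer lists without a GPU round trip. A sketch: `oitWeight` copies the WGSL weight above, and `resolveWeightedBlended` reproduces the accumulate-and-composite math for an array of `{ color, alpha }` layers.

```javascript
// CPU mirror of the WGSL weight function used in the transparent pass.
function oitWeight(alphaIn) {
  const alpha = Math.min(Math.max(alphaIn, 0), 1);
  return Math.max(1e-2, Math.pow(alpha + 0.01, 4));
}

// Reproduces the transparent pass + composite pass for a list of layers:
// accum sums color*alpha*weight (rgb) and alpha*weight (a); reveal is the
// running product of (1 - alpha), exactly as the blend states compute it.
function resolveWeightedBlended(layers) {
  const accum = [0, 0, 0, 0];
  let reveal = 1;
  for (const { color, alpha } of layers) {
    const w = oitWeight(alpha);
    accum[0] += color[0] * alpha * w;
    accum[1] += color[1] * alpha * w;
    accum[2] += color[2] * alpha * w;
    accum[3] += alpha * w;
    reveal *= 1 - alpha;
  }
  const denom = Math.max(accum[3], 1e-4);
  return {
    color: accum.slice(0, 3).map((c) => c / denom),
    alpha: 1 - reveal,
  };
}
```

Because the resolve uses only sums and products, reversing the layer order should give the same result up to floating-point noise, which is a cheap sanity check for any weight function you try.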

Advanced Usage

When weighted blended is no longer enough

  • Depth peeling is the simplest exact-ish fallback conceptually, but it burns passes quickly.
  • Per-pixel linked lists are better when overdraw is deep and exact ordering matters more than simplicity.
  • Stochastic transparency is attractive for foliage and noisy content, especially when temporal accumulation already exists.

Per-pixel linked list planning notes

  • Store head pointers in a storage buffer indexed by pixel.
  • Store nodes in a second storage buffer with color, depth, alpha, and next-pointer fields.
  • Allocate nodes with WGSL atomics on a global counter.
  • Plan for overflow explicitly; a silent node-pool overflow is a correctness bug, not a perf footnote.
  • Resolve in a later pass by sorting or partial ordering per pixel.
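The planning notes above can be turned into a small sizing helper before any buffers are allocated. A sketch under an assumed node layout: packed RGBA8 color as a u32, f32 depth, f32 alpha, and a u32 next pointer, i.e. 16 bytes per node; the average-layers figure is a budget you choose, not something the API reports.

```javascript
// Assumed node layout (16 bytes): color u32 (packed RGBA8), depth f32,
// alpha f32, next u32. Adjust NODE_BYTES if your node carries more fields.
const NODE_BYTES = 16;

function linkedListBufferSizes(width, height, avgLayersPerPixel) {
  const pixelCount = width * height;
  return {
    headPointerBytes: pixelCount * 4, // one u32 head index per pixel
    nodePoolBytes: pixelCount * avgLayersPerPixel * NODE_BYTES,
    counterBytes: 4, // single u32 for the global atomic allocator
  };
}
```

Checking `nodePoolBytes` against the device's `maxStorageBufferBindingSize` limit up front turns a silent overflow into a visible configuration error instead of a rendering bug.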

Debug and profiling hooks

device.pushErrorScope("validation");

const pipeline = device.createRenderPipeline(descriptor);
const pipelineError = await device.popErrorScope();
if (pipelineError) {
  console.error("Pipeline validation failed:", pipelineError.message);
}

const querySet = device.features.has("timestamp-query")
  ? device.createQuerySet({ type: "timestamp", count: 2 })
  : null;

Official references worth keeping open

  • MDN WebGPU API documentation, including the browser-compatibility tables.
  • The W3C WebGPU specification.
  • The W3C WGSL specification.

Live Filter and Shortcuts

Drop-in JS filter for a long OIT reference table

<input id="ref-filter" type="search" placeholder="Filter by API, pass, or keyword" />
<table id="ref-table">
  <tbody>
    <tr><td>requestAdapter</td><td>startup</td></tr>
    <tr><td>createTexture</td><td>attachments</td></tr>
    <tr><td>beginRenderPass</td><td>transparent pass</td></tr>
  </tbody>
</table>

<script type="module">
  const input = document.querySelector("#ref-filter");
  const rows = [...document.querySelectorAll("#ref-table tbody tr")];

  function applyFilter() {
    const q = input.value.trim().toLowerCase();
    for (const row of rows) {
      row.hidden = !row.textContent.toLowerCase().includes(q);
    }
  }

  input.addEventListener("input", applyFilter);

  window.addEventListener("keydown", (event) => {
    if (event.key === "/" && document.activeElement !== input) {
      event.preventDefault();
      input.focus();
      input.select();
    }

    if (event.key === "Escape" && document.activeElement === input) {
      input.value = "";
      applyFilter();
      input.blur();
    }
  });
</script>

Suggested shortcuts for your OIT sandbox

| Shortcut | Bind to | Why it helps |
| --- | --- | --- |
| / | Focus reference filter | Fastest way to search API calls or techniques. |
| Esc | Clear filter | Resets the cheat sheet without touching the mouse. |
| 1 | Opaque-only pass | Separates base-scene issues from transparency issues. |
| 2 | Transparent accumulation pass | Lets you inspect weighted contribution directly. |
| 3 | Composite pass | Confirms whether artifacts are born in blend or resolve. |
| O | Overdraw overlay | Quickly reveals where exact OIT may become necessary. |
| T | Toggle timestamp instrumentation | Shows real pass cost when timestamp-query is available. |
| R | Reload shaders | Shortens weight-function tuning loops. |

Frequently Asked Questions

How do I implement order-independent transparency in WebGPU without sorting every transparent object?
Start with weighted blended OIT. Render transparent geometry into an accumulation target and a revealage target, then run a fullscreen composite pass that reconstructs average color and output alpha. It is approximate, but it avoids per-object sorting and maps cleanly to standard GPURenderPipeline blending.
Is weighted blended OIT accurate enough for CAD, medical, or scientific 3D views?
Often yes, but not always. It performs well for labels, particles, volume-like overlays, and moderately overlapping shells, but dense intersections of near-opaque surfaces can expose artifacts. When those artifacts matter, move to depth peeling or a per-pixel linked list path for the affected materials.
What WebGPU features are required for a practical OIT renderer?
The baseline weighted blended path only needs standard render passes, blend states, textures, samplers, and WGSL fragment outputs. timestamp-query is optional for profiling and should be requested only if the adapter exposes it. Exact OIT variants may also need substantial storage-buffer capacity and heavier atomic usage.
Why is my WebGPU transparency pipeline failing validation?
The usual causes are mismatched attachment formats, fragment outputs that do not line up with color targets, and invalid blend-state assumptions. Wrap pipeline creation in pushErrorScope("validation") and inspect the message returned by popErrorScope(). Also verify that your revealage attachment clear value and fragment output alpha match the blend equation you intended.
