CI/CD Performance Optimization: Caching, Parallelism, and Docker Layers
Bottom Line
The fastest CI pipelines do three things consistently: reuse dependency state, run independent work in parallel, and structure Docker builds so unchanged layers stay cached. Most teams get the biggest gain by fixing cache keys and Dockerfile order before buying more runners.
Key Takeaways
- Use lockfile-based cache keys so dependency restores stay correct and reproducible.
- Set `max-parallel` deliberately to balance speed against runner saturation.
- Copy dependency manifests before source files so Docker can reuse install layers.
- Use Buildx `cache-to`/`cache-from` with `type=gha` for remote CI layer reuse.
- Verify wins with wall-clock time, cache-hit rate, and Docker layers marked CACHED.
Slow pipelines are usually not a compute problem first; they are a reuse problem. If every run reinstalls dependencies, rebuilds identical Docker layers, and serializes work that could run concurrently, CI time grows with every commit. This tutorial shows a practical path to faster pipelines in GitHub Actions: measure the baseline, add deterministic dependency caching, parallelize the job graph, and restructure Docker builds so BuildKit can reuse expensive layers across runs.
Prerequisites
- A repository already running CI in GitHub Actions.
- A package manager with a lockfile such as `package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`.
- A Docker build that currently runs inside CI, or one you want to add.
- Basic familiarity with workflow YAML and Dockerfiles.
- If you need to clean up long YAML or shell snippets before sharing them in docs or PRs, the Code Formatter is a useful quick pass.
Examples below assume a Node.js service, but the same ideas apply to Python, Go, Java, and polyglot monorepos.
Step 1: Measure the Baseline
Do not optimize blind. First, capture where time actually goes in a representative run. For most teams, the slowest stages are dependency installation, tests, and container image builds.
What to record
- Total pipeline duration from workflow start to finish.
- Per-job duration for lint, unit tests, integration tests, and image build.
- Whether jobs wait on each other because of unnecessary `needs` edges.
- Whether Docker logs show repeated rebuilds of dependency layers.
Start with a simple workflow that exposes timing clearly:
```yaml
name: ci

on:
  push:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-node@v6
        with:
          node-version: 22
      - run: npm ci
      - run: npm test

  docker:
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v6
      - run: docker build -t app:ci .
```
This is intentionally conservative. It gives you a baseline before you introduce caching or parallel execution.
Step 2: Cache Dependencies
The easiest win in many pipelines is dependency caching. In actions/setup-node@v6, package-manager caching is built in for supported managers. For Node projects, prefer that first because it keeps the workflow simpler and keys the cache from your dependency files.
Use built-in package-manager caching
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-node@v6
        with:
          node-version: 22
          cache: npm
          cache-dependency-path: package-lock.json
      - run: npm ci
      - run: npm test
```
- `cache` enables package-manager cache reuse.
- `cache-dependency-path` ties invalidation to the lockfile, not to arbitrary branch names.
- `npm ci` remains reproducible because it installs from the lockfile rather than mutating it.
Avoid caching `node_modules` directly unless you have measured a clear gain and understand the portability risks across runners and native dependencies.

If you are using a toolchain that does not offer a built-in cache integration, use `actions/cache` directly and keep keys as narrow as possible. Good keys are deterministic and derived from lockfiles, tool versions, and OS. Bad keys are broad enough to return stale or incompatible artifacts.
What good cache keys include
- Runner OS such as Linux or macOS.
- Language or runtime version.
- Hash of dependency lockfiles.
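Putting those three ingredients together, a narrow, deterministic key for a generic `actions/cache` step might look like the following sketch. The cache path and lockfile name are placeholders for your toolchain; the `node22` component is an example of pinning the runtime version into the key:

```yaml
      - uses: actions/cache@v4
        with:
          # Cache the package manager's download cache, not node_modules.
          path: ~/.npm
          # OS + runtime version + lockfile hash: changes to any of these miss.
          key: ${{ runner.os }}-node22-npm-${{ hashFiles('package-lock.json') }}
          # Fall back to the newest cache for the same OS and runtime, so a
          # lockfile change still starts from a mostly warm cache.
          restore-keys: |
            ${{ runner.os }}-node22-npm-
```

The `restore-keys` fallback is safe here precisely because the cached data is the download cache, not installed modules: `npm ci` still reconciles against the lockfile on every run.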
Step 3: Parallelize Safely
Once caching reduces repeated setup work, attack pipeline structure. A common anti-pattern is putting lint, tests, and packaging into one linear job. Independent tasks should start at the same time.
Split independent jobs
```yaml
name: ci

on:
  push:
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-node@v6
        with:
          node-version: 22
          cache: npm
      - run: npm ci
      - run: npm run lint

  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      max-parallel: 2
      matrix:
        node: [20, 22]
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-node@v6
        with:
          node-version: ${{ matrix.node }}
          cache: npm
      - run: npm ci
      - run: npm test

  docker:
    runs-on: ubuntu-latest
    needs: [lint, test]
    steps:
      - uses: actions/checkout@v6
      - run: docker build -t app:ci .
```
How to choose max-parallel
- Increase it when test shards are CPU-bound and runners are available.
- Lower it when jobs compete for the same external resource, such as a rate-limited package registry or shared database.
- Keep `fail-fast` disabled for test matrices when you need full failure visibility across versions.
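The same matrix mechanism extends to sharding a single large suite. Here is a sketch assuming a test runner that accepts a shard flag (Jest's `--shard` is one such example); the shard count of 4 is arbitrary and should track how CPU-bound your suite is:

```yaml
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-node@v6
        with:
          node-version: 22
          cache: npm
      - run: npm ci
      # Each matrix job runs one quarter of the suite concurrently.
      - run: npx jest --shard=${{ matrix.shard }}/4
```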
Step 4: Optimize Docker Layers
Docker performance is mostly about layer invalidation. If you copy your entire repository before installing dependencies, any source change can invalidate the install layer and force a full rebuild. Reorder the Dockerfile so stable inputs are earlier and fast-changing inputs are later.
Use layer-friendly Dockerfile ordering
```dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build

FROM node:22-alpine AS runtime
WORKDIR /app
COPY --from=build /app/package.json ./package.json
COPY --from=build /app/package-lock.json ./package-lock.json
RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```
- Dependency manifests are copied before application source, so the install layer survives ordinary code edits.
- `RUN --mount=type=cache` allows BuildKit to reuse package-manager cache data during image builds.
- A multi-stage build keeps the runtime image smaller and avoids shipping build-only dependencies.
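Layer ordering only helps if the build context itself is stable: files that change on every run (logs, local builds, VCS metadata) will churn the `COPY . .` layer even when application code is untouched. A minimal `.dockerignore` sketch; the entries are typical examples, adjust to your repository:

```
# Keep fast-changing or irrelevant files out of the build context.
node_modules
dist
.git
*.log
.env
```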
Persist Buildx cache between CI runs
```yaml
jobs:
  docker:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v6
      - uses: docker/setup-buildx-action@v4
      - uses: docker/build-push-action@v7
        with:
          context: .
          push: false
          tags: app:ci
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
This is the CI equivalent of not throwing away your builder state after every run. The first build is still expensive; later builds should reuse layers unless dependency manifests or build inputs changed.
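If your runners cannot use the GitHub Actions cache backend, the same persistence idea works with a registry-backed cache. A sketch assuming a hypothetical image at `ghcr.io/acme/app`; the `buildcache` tag is just a naming convention for where layer cache data is stored:

```yaml
      - uses: docker/build-push-action@v7
        with:
          context: .
          push: true
          tags: ghcr.io/acme/app:ci
          # Pull previously exported layer cache from the registry...
          cache-from: type=registry,ref=ghcr.io/acme/app:buildcache
          # ...and export all layers (mode=max) back after the build.
          cache-to: type=registry,ref=ghcr.io/acme/app:buildcache,mode=max
```

The registry backend trades some network transfer for portability: any builder that can reach the registry can warm itself from the same cache.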
Verify, Troubleshoot, and What’s Next
Verification and expected output
After the changes, compare three successive runs instead of one. You are looking for consistency, not a one-off best case.
- Dependency install steps should shrink after the first warm-cache run.
- Matrix jobs should begin together instead of waiting behind a monolithic setup phase.
- Docker build logs should show previously built steps as CACHED or complete much faster.
- Total wall-clock time should fall even if aggregate compute minutes stay similar.
Troubleshooting: top 3 issues
- Cache never hits. Check that the lockfile path is correct and committed. If the key depends on a file that changes every run, you have effectively disabled reuse.
- Parallel jobs overwhelm shared services. Lower max-parallel, add retries around flaky network steps, or isolate services per shard.
- Docker layers keep rebuilding. Inspect the Dockerfile order. If `COPY . .` appears before dependency installation, small source changes will invalidate expensive layers.
What’s next
- Shard large test suites by timing data instead of file count.
- Split monorepo workflows so only changed packages build and test.
- Publish reusable workflow templates for cache policy and Docker build conventions.
- Add CI observability so cache hit rate and median pipeline duration are visible over time.
The highest-leverage habit is to treat CI like production infrastructure: measure it, model dependencies explicitly, and optimize invalidation boundaries. Once that mindset is in place, caching and parallelization stop being hacks and become part of normal engineering design.
Frequently Asked Questions
What is the safest thing to cache in GitHub Actions?
Package-manager caches keyed from lockfiles, via the built-in `cache` option in setup actions or a narrow `actions/cache` key. Caching `node_modules` can work, but it is more sensitive to platform differences and native modules.

Why did parallelizing my CI make it slower?
Usually contention: parallel jobs can saturate available runners or compete for shared resources such as a rate-limited package registry or a shared database. Lower `max-parallel`, or isolate services per shard.

How do I make Docker builds reuse layers in CI?
Order the Dockerfile so dependency manifests are copied before source, then persist builder state with `cache-from: type=gha` and `cache-to: type=gha,mode=max` in docker/build-push-action. Without both Dockerfile ordering and persisted cache state, reuse will be limited.

Should I use actions/cache or setup-node cache?
Prefer the built-in setup-node cache for supported package managers; it keeps the workflow simpler and keys the cache from your dependency files. Use `actions/cache` directly for toolchains without a built-in integration.