
Fix: Docker Build Cache Invalidated — Slow Builds on Every Run

FixDevs

Quick Answer

Reorder the Dockerfile so rarely-changing steps come first: copy dependency manifests and install packages before copying application code, add a thorough .dockerignore, keep dynamic ARGs at the bottom, use BuildKit cache mounts for package-manager caches, and explicitly save/restore the layer cache in CI.

The Problem

A Docker build that should use the cache rebuilds all layers on every run:

docker build -t myapp .

# Expected: Using cache for slow layers
Step 4/12 : RUN npm install
 ---> Using cache

# Actual: All layers rebuild every time
Step 4/12 : RUN npm install
 ---> Running in a1b2c3d4e5f6
added 1234 packages from 5678 contributors  # Full reinstall every build

Or the cache is invalidated by an unrelated file change:

COPY . .            ← Copies everything including README.md
RUN npm install     ← Invalidated every time any file changes, even non-package files

Or in CI/CD, the cache is never used because the build runs on a fresh runner:

# GitHub Actions — no cache between runs by default
Step 5/15: RUN pip install -r requirements.txt
Downloading flask-2.3.2-py3-none-any.whl  # Re-downloads every time

Why This Happens

Docker builds images layer by layer. Each instruction (RUN, COPY, ADD) creates a layer. Docker caches each layer and reuses it if:

  1. The instruction string hasn’t changed
  2. The parent layer hash is the same
  3. For COPY/ADD: the file content hasn’t changed

Once one layer is invalidated, all subsequent layers are also invalidated — they can’t use cache because their parent layer changed. This makes instruction order critical.
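This chaining can be sketched with a toy model (the layer_key helper and its inputs are illustrative, not Docker's real hashing): each layer's cache key mixes the parent key, the instruction text, and any copied content, so changing one input changes every key downstream.

```shell
#!/bin/sh
# Toy model: a layer's cache key chains parent key + instruction + content hash
layer_key() {
  printf '%s|%s|%s' "$1" "$2" "$3" | sha256sum | cut -d' ' -f1
}

base=$(layer_key "scratch" "FROM node:20-alpine" "")
copy_v1=$(layer_key "$base" "COPY package.json ." "content-v1")
run_v1=$(layer_key "$copy_v1" "RUN npm ci" "")

# Change only the copied file's content: the COPY key changes...
copy_v2=$(layer_key "$base" "COPY package.json ." "content-v2")
# ...so the RUN key changes too, even though the RUN instruction did not
run_v2=$(layer_key "$copy_v2" "RUN npm ci" "")

[ "$run_v1" != "$run_v2" ] && echo "downstream layer invalidated"
```

The RUN layer's key changed without its instruction changing — that is exactly why every layer below an invalidated one rebuilds.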

Common cache-busting mistakes:

  • COPY . . before RUN npm install — any file change (even README.md) invalidates the COPY layer, causing npm install to re-run.
  • ARG BUILD_DATE or timestamps — dynamic values invalidate the layer on every build.
  • Not using .dockerignore — COPY . . includes node_modules, .git, and other artifacts that shouldn’t affect the cache but do.
  • apt-get update without pinned versions — package list updates invalidate the cache unexpectedly.
  • CI running on fresh machines — layer cache is per-machine unless explicitly saved/restored.

Fix 1: Order Instructions from Least to Most Frequently Changing

The most impactful change — put dependency installation before copying application code:

# WRONG — any file change busts the npm install cache
FROM node:20-alpine

WORKDIR /app

COPY . .                    # ← Everything copied first — changes every build
RUN npm ci                  # ← Reinstalls on every file change
RUN npm run build
# CORRECT — dependencies cached until package.json changes
FROM node:20-alpine

WORKDIR /app

# Step 1: Copy ONLY dependency files first (changes rarely)
COPY package.json package-lock.json ./

# Step 2: Install dependencies (uses cache if package files unchanged)
RUN npm ci

# Step 3: Copy application code (changes frequently — only invalidates layers below)
COPY . .

# Step 4: Build (depends on app code — runs when app code changes)
RUN npm run build

The principle: Instructions that change frequently go at the bottom. Anything above an unchanged instruction can be cached.

For Python projects:

# CORRECT order for Python
FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt ./             # Only requirements file first
RUN pip install --no-cache-dir -r requirements.txt  # Cached until requirements.txt changes

COPY . .                             # App code (changes often — nothing expensive runs after this)
CMD ["uvicorn", "main:app", "--host", "0.0.0.0"]

For Go projects:

# CORRECT order for Go
FROM golang:1.22-alpine AS builder

WORKDIR /app

COPY go.mod go.sum ./               # Module files first
RUN go mod download                 # Cache modules — rarely changes

COPY . .                            # Source files
RUN go build -o server ./cmd/server

Fix 2: Create a Thorough .dockerignore

Without .dockerignore, COPY . . includes files that shouldn’t affect the image but change frequently, busting the cache:

# .dockerignore
# Version control
.git
.gitignore

# Dependencies (built inside Docker)
node_modules
vendor
__pycache__
*.pyc
*.pyo

# Build output
dist
build
.next
target

# Test files
.pytest_cache
coverage
.nyc_output
*.test.js

# Development files
.env
.env.local
*.env.development
docker-compose.override.yml

# Editor files
.vscode
.idea
*.swp
*.swo

# OS files
.DS_Store
Thumbs.db

# Documentation
*.md
docs

# CI/CD
.github
.circleci

# Logs
*.log
logs

Verify .dockerignore is working:

# Check the size of the build context Docker sends (printed before any steps run)
docker build -t test . 2>&1 | head -5
# Classic builder: "Sending build context to Docker daemon  1.234MB"
# BuildKit: look for the "transferring context" line instead
# If the size is large (hundreds of MB), .dockerignore is insufficient
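To see exactly which files survive .dockerignore, a throwaway Dockerfile (the name context-check.Dockerfile is just a suggestion) can dump the context:

```dockerfile
# context-check.Dockerfile — copies the whole build context so it can be listed
FROM busybox
COPY . /ctx
CMD ["find", "/ctx"]
```

Build and run it with docker build -f context-check.Dockerfile -t ctx-test . followed by docker run --rm ctx-test — every file it prints is one that can invalidate a COPY . . layer.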

Fix 3: Use BuildKit Cache Mounts for Package Managers

Docker BuildKit’s --mount=type=cache persists package manager caches between builds without storing them in the image layer:

# Enable BuildKit (default in Docker 23.0+)
export DOCKER_BUILDKIT=1
# Or add to daemon.json: { "features": { "buildkit": true } }
# syntax=docker/dockerfile:1

FROM node:20-alpine

WORKDIR /app
COPY package.json package-lock.json ./

# Cache mount — npm cache persists between builds, not in the image
RUN --mount=type=cache,target=/root/.npm \
    npm ci

COPY . .
RUN --mount=type=cache,target=/root/.npm \
    npm run build
# Python with pip cache
FROM python:3.12-slim

WORKDIR /app
COPY requirements.txt ./

RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

COPY . .
# Go with module cache
FROM golang:1.22

WORKDIR /app
COPY go.mod go.sum ./

# Official golang images set GOPATH=/go, so the module cache lives there
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download

COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go build -o /app/server ./...
# apt-get with cache
FROM ubuntu:22.04

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y \
    curl \
    git
# No rm -rf /var/lib/apt/lists/* needed — the lists live in the cache mount,
# not in the image layer, and deleting them would throw the cache away

Cache mounts don’t affect the layer hash — they persist between builds on the same machine regardless of whether other layers changed.

Fix 4: Avoid Dynamic Values That Bust Cache

Variables or values that change every build invalidate all layers that depend on them:

# WRONG — timestamp changes every build, busts all subsequent layers
ARG BUILD_DATE
RUN echo "Built on $BUILD_DATE" >> /app/build-info.txt

# WRONG — git SHA from ARG causes rebuild every commit
ARG GIT_SHA
RUN echo $GIT_SHA > /app/version.txt
# CORRECT — use ARG AFTER all package installation layers
FROM node:20-alpine
WORKDIR /app

# All expensive layers first (no ARGs yet)
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# ARG/ENV near the end — only invalidates layers below this line
ARG GIT_SHA=unknown
ARG BUILD_DATE=unknown
RUN echo "{\"sha\": \"$GIT_SHA\", \"date\": \"$BUILD_DATE\"}" > /app/public/version.json

CMD ["node", "server.js"]

Pass dynamic values at runtime instead of build time:

# Better: use environment variables at runtime
ENV APP_VERSION=unknown

# Pass at runtime:
# docker run -e APP_VERSION=$(git rev-parse HEAD) myapp

Fix 5: Pin apt-get Packages to Prevent Cache Invalidation

The apt-get layers interact with the cache in two bad ways: Docker reuses a stale apt-get update layer indefinitely, and when the cache does miss, unpinned installs pull whatever versions are current at that moment:

# WRONG — update and install in separate layers: the cached update layer
# goes stale, so a later install works from outdated package lists
RUN apt-get update
RUN apt-get install -y curl

# STILL FRAGILE — one layer now, but unpinned versions mean a rebuild of
# this layer can pull different package versions than the cached one did
RUN apt-get update && apt-get install -y curl

# CORRECT — one layer, pinned versions, cleanup in the same layer
RUN apt-get update && apt-get install -y --no-install-recommends \
    # Keep comments on their own lines — text after a trailing \ breaks continuation
    curl=7.88.1-10+deb12u5 \
    git=1:2.39.2-1.1 \
    && rm -rf /var/lib/apt/lists/*   # Removing the lists in the same layer keeps the image small

For development where pinning is inconvenient, use a base image that already ships the tools:

# Debian-based node:20 already includes curl and git — no apt-get layer needed
FROM node:20

# Alpine images do NOT ship curl/git, but apk installs are small and fast
FROM node:20-alpine
RUN apk add --no-cache curl git

Fix 6: Cache Docker Layers in CI/CD

On CI/CD, each run starts with a fresh environment — no local layer cache. Explicit cache management is required:

GitHub Actions with docker/build-push-action:

# .github/workflows/build.yml
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: myapp:latest
    cache-from: type=gha          # Pull cache from GitHub Actions Cache
    cache-to: type=gha,mode=max   # Push cache to GitHub Actions Cache
    # mode=max caches all layers, not just the final stage

Registry-based cache (works across CI systems):

- name: Build with registry cache
  uses: docker/build-push-action@v5
  with:
    context: .
    cache-from: type=registry,ref=ghcr.io/myorg/myapp:buildcache
    cache-to: type=registry,ref=ghcr.io/myorg/myapp:buildcache,mode=max
    push: true
    tags: ghcr.io/myorg/myapp:latest
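Outside GitHub Actions, the same registry cache can be driven from any CI shell step with docker buildx (the builder name and image refs here are illustrative):

```shell
# One-time per runner: create a builder that supports cache export
docker buildx create --use --name ci-builder

# Pull layer cache from the registry, push refreshed cache back
docker buildx build \
  --cache-from type=registry,ref=ghcr.io/myorg/myapp:buildcache \
  --cache-to type=registry,ref=ghcr.io/myorg/myapp:buildcache,mode=max \
  --push -t ghcr.io/myorg/myapp:latest .
```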

Inline cache (simpler but less effective):

- name: Build
  run: |
    docker pull myapp:latest || true   # Pull previous image for cache
    docker build \
      --cache-from myapp:latest \
      --build-arg BUILDKIT_INLINE_CACHE=1 \
      -t myapp:latest .
    docker push myapp:latest

Fix 7: Use Multi-Stage Builds to Minimize Cache Impact

Multi-stage builds keep build tools out of the final image and allow independent caching of each stage:

# syntax=docker/dockerfile:1

# Stage 1: Dependencies (cached unless package.json changes)
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm npm ci

# Stage 2: Builder (cached unless source code changes)
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Stage 3: Production (minimal image — only runtime files)
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production

COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules

EXPOSE 3000
CMD ["node", "dist/server.js"]

Each stage is cached independently — a source code change only rebuilds the builder and runner stages, not the deps stage.
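A single stage can also be built on its own with --target, which pre-warms the dependency cache (stage names match the multi-stage Dockerfile above):

```shell
# Build only through the deps stage — caches the npm ci layer
docker build --target deps -t myapp-deps .

# A later full build reuses those cached layers
docker build -t myapp .
```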

Still Not Working?

Force a full rebuild — to verify cache is working, first confirm a clean build works correctly, then make a change and verify cache is used:

# Force no-cache rebuild
docker build --no-cache -t myapp .

# Next run — should use cache for unchanged layers
docker build -t myapp .
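When a layer unexpectedly misses the cache, BuildKit's plain progress output shows per-step status (CACHED vs. executing), which pinpoints the first invalidated step:

```shell
# Line-per-step output — find the first step that is not marked CACHED
docker build --progress=plain -t myapp . 2>&1 | grep -E '^#[0-9]+'
```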

BuildKit not enabled — classic Docker builder (without BuildKit) has less effective caching. Enable BuildKit:

DOCKER_BUILDKIT=1 docker build -t myapp .

# Or set permanently in /etc/docker/daemon.json:
{ "features": { "buildkit": true } }

Image from a different registry not usable as a cache source — --cache-from requires the image to have been built with --build-arg BUILDKIT_INLINE_CACHE=1 or pushed with the type=registry cache exporter; a plain image carries no cache metadata.

.dockerignore not in the right directory — .dockerignore must sit at the root of the build context (the directory passed to docker build), which is not necessarily the directory containing the Dockerfile.

For related Docker issues, see Fix: Docker Multi-Stage Build Failed and Fix: Docker Compose Networking Not Working.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
