Fix: Docker Container Keeps Restarting
Quick Answer
How to fix a Docker container that keeps restarting — reading exit codes, debugging CrashLoopBackOff, fixing entrypoint errors, missing env vars, out-of-memory kills, and restart policy misconfiguration.
The Error
A Docker container exits immediately after starting and keeps restarting in a loop:
```shell
docker ps
# CONTAINER ID   IMAGE   COMMAND      STATUS                         PORTS
# a1b2c3d4e5f6   myapp   "node ..."   Restarting (1) 5 seconds ago
```

Or in Docker Compose:

```
myapp | Error: Cannot find module '/app/dist/server.js'
myapp exited with code 1
myapp | Error: Cannot find module '/app/dist/server.js'
myapp exited with code 1
```

The container starts, crashes, Docker restarts it due to the restart policy, it crashes again — indefinitely.
Why This Happens
Docker’s restart policy (--restart always or restart: always in Compose) automatically restarts containers that exit. When the app crashes on startup, this creates a restart loop:
- Application error on startup — a missing file, bad environment variable, or uncaught exception crashes the app before it becomes healthy.
- Wrong entrypoint or command — the CMD or ENTRYPOINT points to a file that doesn’t exist in the image, or the command syntax is wrong.
- Missing required environment variables — the app reads an env var at startup and throws if it’s undefined.
- Port already in use — the app tries to bind a port that’s occupied, fails, and exits.
- Out-of-memory kill — the container hits its memory limit and the kernel kills the process (exit code 137).
- Signal handling — the app doesn’t handle SIGTERM properly and exits with a non-zero code, triggering a restart even during intentional shutdown.
- Dependency not ready — the app tries to connect to a database or external service at startup before it’s available.
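The signal-handling case deserves a concrete sketch. If PID 1 in the container is a wrapper shell script that does not forward signals, docker stop waits out the grace period and then SIGKILLs the app (exit code 137). Below is a minimal POSIX sh entrypoint that forwards SIGTERM/SIGINT to the real process; the run_with_signals name is illustrative, not a Docker convention:

```shell
#!/bin/sh
# Minimal signal-forwarding entrypoint sketch: start the app as a
# child process and pass SIGTERM/SIGINT through to it, so that
# `docker stop` leads to a clean shutdown instead of a SIGKILL.
run_with_signals() {
  "$@" &                             # start the app in the background
  pid=$!
  trap 'kill -TERM "$pid"' TERM INT  # forward the signal to the app
  wait "$pid"                        # exit with the app's exit status
}

# In a real entrypoint.sh you would end with: run_with_signals "$@"
# and wire it up with ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile.
```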
Step 1: Read the Exit Code
The exit code tells you the category of failure:
```shell
docker inspect <container_name_or_id> --format='{{.State.ExitCode}}'
```

Common exit codes:

| Exit Code | Meaning |
|---|---|
| 1 | Application error (check your app's logs) |
| 2 | Misuse of shell command |
| 125 | Docker run itself failed |
| 126 | Command found but not executable |
| 127 | Command not found (wrong path in CMD/ENTRYPOINT) |
| 137 | Killed by signal 9 (OOM kill or docker kill) |
| 139 | Segmentation fault |
| 143 | Killed by signal 15 (SIGTERM — graceful shutdown) |

Exit code 127 → fix your CMD path. Exit code 137 → increase the memory limit. Exit code 1 → read the application logs.
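As a quick triage aid, the table above can be folded into a small shell helper (decode_exit is an illustrative name, not a Docker command):

```shell
# decode_exit — rough triage of a container exit code.
# Codes above 128 mean "killed by signal (code - 128)":
# 137 = 128 + 9 (SIGKILL), 143 = 128 + 15 (SIGTERM).
decode_exit() {
  code=$1
  if [ "$code" -gt 128 ]; then
    echo "killed by signal $((code - 128))"
  elif [ "$code" -eq 127 ]; then
    echo "command not found (check CMD/ENTRYPOINT path)"
  elif [ "$code" -eq 126 ]; then
    echo "command found but not executable (chmod +x?)"
  elif [ "$code" -eq 125 ]; then
    echo "docker run itself failed"
  else
    echo "application exit code $code (read the logs)"
  fi
}

decode_exit 137   # → killed by signal 9
```

Combine it with the inspect command above, e.g. `decode_exit "$(docker inspect myapp --format='{{.State.ExitCode}}')"`.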
Step 2: Read the Logs
```shell
# Show logs from the last run (even after restart)
docker logs <container_name>

# Follow logs in real time
docker logs -f <container_name>

# Show last 50 lines
docker logs --tail 50 <container_name>

# For Docker Compose
docker compose logs myapp
docker compose logs --tail 50 myapp
```

The log output almost always contains the specific error. Read it before changing anything else.
Fix 1: Fix Application Startup Errors
The most common cause is the application crashing during initialization. The log will show the specific error:
Node.js — module not found:

```
Error: Cannot find module '/app/dist/server.js'
```

The build step wasn't run before building the image, so dist/ doesn't exist:

```dockerfile
# Ensure build runs inside the Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build   # ← Must be here
CMD ["node", "dist/server.js"]
```

Python — import error:

```
ModuleNotFoundError: No module named 'fastapi'
```

Dependencies weren't installed in the image:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # ← Required
COPY . .
CMD ["python", "main.py"]
```

Fix 2: Fix Wrong CMD or ENTRYPOINT
Exit code 127 means Docker ran the command but the shell couldn’t find the binary:
```shell
# Check what command the container tried to run
docker inspect <container> --format='{{.Config.Cmd}}'
docker inspect <container> --format='{{.Config.Entrypoint}}'
```

Run the container interactively to debug:

```shell
# Override the entrypoint to get a shell
docker run -it --entrypoint /bin/sh myapp

# Inside the container, check if the file exists
ls -la /app/dist/server.js
which node
```

Fix the Dockerfile CMD:

```dockerfile
# Wrong — file doesn't exist at this path
CMD ["node", "server.js"]

# Correct — use the actual path
CMD ["node", "/app/dist/server.js"]

# Or use the working directory
WORKDIR /app
CMD ["node", "dist/server.js"]
```

Common mistake: using a shell script as the entrypoint but forgetting chmod +x:

```dockerfile
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh   # ← Required, otherwise exit code 126
ENTRYPOINT ["/entrypoint.sh"]
```
Fix 3: Provide Missing Environment Variables
If the app reads a required env var at startup and it’s missing, it crashes with exit code 1. The log will show something like:
```
Error: DATABASE_URL environment variable is required
TypeError: Cannot read properties of undefined (reading 'split')
```

Pass environment variables to the container:

```shell
# Single variable
docker run -e DATABASE_URL=postgres://... myapp

# From a .env file
docker run --env-file .env myapp
```

In Docker Compose:

```yaml
services:
  myapp:
    image: myapp
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/myapp
      REDIS_URL: redis://redis:6379
      NODE_ENV: production
    # Or load from a file
    env_file:
      - .env.production
```

Make the app fail clearly when required variables are missing:

```javascript
// Node.js — validate env at startup
const requiredEnv = ['DATABASE_URL', 'JWT_SECRET', 'PORT'];
for (const key of requiredEnv) {
  if (!process.env[key]) {
    console.error(`Missing required environment variable: ${key}`);
    process.exit(1);
  }
}
```

Fix 4: Fix Out-of-Memory Kills (Exit Code 137)
Exit code 137 means the kernel killed the process because it exceeded the container’s memory limit:
```shell
# Check if it was OOM killed
docker inspect <container> --format='{{.State.OOMKilled}}'
# true = OOM killed
```

Increase the memory limit:

```shell
docker run -m 512m myapp   # 512 MB limit
docker run -m 1g myapp     # 1 GB limit
```

In Docker Compose:

```yaml
services:
  myapp:
    image: myapp
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
```

Find what's consuming memory — run without a limit and check usage:

```shell
docker stats <container>
```

If memory grows without bound, the app has a memory leak. Fix the leak rather than just raising the limit.
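To confirm growth from inside the container, here is a minimal Linux-only sketch that samples a process's resident set size from /proc (sample_rss is an illustrative helper, not a standard tool; in most containers the app runs as PID 1):

```shell
# sample_rss PID COUNT — print a process's RSS (kB) once per second.
# A value that climbs steadily across samples points to a leak.
sample_rss() {
  pid=$1; count=$2
  i=0
  while [ "$i" -lt "$count" ]; do
    awk '/VmRSS/ {print $2}' "/proc/$pid/status"   # RSS in kB (Linux)
    i=$((i + 1))
    sleep 1
  done
}

# Example: sample_rss 1 30   # watch PID 1 for 30 seconds
```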
Fix 5: Wait for Dependencies
If the app connects to a database or other service at startup, and that service isn’t ready yet, the connection fails and the app exits:
```
Error: connect ECONNREFUSED 127.0.0.1:5432
```

Use depends_on with health checks in Docker Compose:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  myapp:
    image: myapp
    depends_on:
      db:
        condition: service_healthy   # Wait until db passes health check
    environment:
      DATABASE_URL: postgres://postgres:secret@db:5432/myapp
```

Or use a wait script inside the container:

```shell
# wait-for-it.sh (common utility)
./wait-for-it.sh db:5432 --timeout=30 -- node dist/server.js
```

Implement retry logic in the app itself — this is the most resilient approach:

```javascript
async function connectWithRetry(maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await db.connect();
      console.log('Database connected');
      return;
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      console.log(`Connection attempt ${attempt} failed, retrying in 5s...`);
      await new Promise(r => setTimeout(r, 5000));
    }
  }
}
```

Fix 6: Adjust the Restart Policy
If the app exits intentionally (e.g., a one-time migration task), restart: always keeps restarting it unnecessarily. Use the right policy:
```yaml
services:
  # Long-running server — restart on failure, not on clean exit
  myapp:
    image: myapp
    restart: unless-stopped   # or "on-failure"

  # One-time task — never restart
  migrate:
    image: myapp
    command: ["node", "migrate.js"]
    restart: "no"   # Default — don't restart after exit
```

Restart policies:

| Policy | Behavior |
|---|---|
| no | Never restart (default) |
| on-failure | Restart only on non-zero exit code |
| unless-stopped | Restart unless manually stopped |
| always | Always restart, even on clean exit |

restart: always combined with a crashing app creates an infinite loop. Switch to on-failure with a max count:

```shell
docker run --restart on-failure:5 myapp   # Max 5 restarts
```

Still Not Working?
Run the container without the restart policy to inspect the exit more carefully:
```shell
docker run --rm myapp
# Container exits once, you see the full output
```

Run with a shell to explore the container filesystem:

```shell
docker run -it --rm --entrypoint /bin/sh myapp
# Inside: ls, env, cat /app/dist/server.js, etc.
```

Check system-level OOM kills (outside Docker):

```shell
dmesg | grep -i "out of memory"
dmesg | grep -i "oom"
journalctl -k | grep -i "killed process"
```

Check if a port conflict is causing the crash:

```shell
# See what's using port 3000 on the host
ss -tlnp | grep :3000
lsof -i :3000
```

For related issues, see Fix: Docker Container Already in Use and Fix: Kubernetes CrashLoopBackOff.