Dockerizing a Node.js Application
A beginner-friendly tutorial on containerizing your Node APIs, writing multi-stage Dockerfiles, and ensuring production readiness.
TL;DR
Using multi-stage Docker builds with Alpine base images reduces Node.js container sizes from 1GB to under 100MB while significantly shrinking the attack surface for security vulnerabilities.
Prerequisites
- Docker Desktop installed (or Docker Engine on Linux)
- A Node.js application (Express, Fastify, or NestJS)
- Basic terminal/command line familiarity
- Node.js 18+ installed locally for development
Step 1: Create a Basic Dockerfile
Start with a simple Dockerfile to understand the fundamentals, then optimize it.
# Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "dist/main.js"]
Understanding Each Instruction
- FROM node:20-alpine -- uses the Alpine Linux variant of Node.js, which is roughly 50MB instead of 1GB for the default Debian-based image
- WORKDIR /app -- sets the working directory inside the container
- COPY package*.json ./ -- copies only the package files first for better layer caching
- RUN npm ci -- installs exact versions from package-lock.json (more reliable than npm install in CI)
- COPY . . -- copies the rest of the application code
- EXPOSE 3000 -- documents the port (does not actually publish it)
- CMD -- the default command to run when the container starts
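With the Dockerfile in place, you can build and run the image. A quick sketch (the image name my-app is arbitrary):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-app .

# Run it, publishing container port 3000 on the host
docker run --rm -p 3000:3000 my-app
```

The EXPOSE instruction only documents the port; it's the -p flag on docker run that actually publishes it.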
Step 2: Write a Multi-Stage Dockerfile
Multi-stage builds separate the build environment from the runtime environment. The build stage installs dev dependencies and compiles TypeScript. The production stage copies only the compiled output and production dependencies.
# Dockerfile
# ---- Build stage ----
FROM node:20-alpine AS builder
WORKDIR /app
# Install all dependencies (including devDependencies for building)
COPY package*.json ./
RUN npm ci
# Copy source code and build
COPY tsconfig.json ./
COPY src/ ./src/
RUN npm run build
# ---- Production stage ----
FROM node:20-alpine AS production
# Create a non-root user for security
RUN addgroup --system --gid 1001 appgroup && \
adduser --system --uid 1001 --ingroup appgroup appuser
WORKDIR /app
# Copy only production dependencies
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
# Copy compiled output from builder stage
COPY --from=builder /app/dist ./dist
# Switch to non-root user
USER appuser
EXPOSE 3000
CMD ["node", "dist/main.js"]
Why Multi-Stage?
The builder stage includes TypeScript, webpack, and all dev dependencies -- potentially hundreds of megabytes. The production stage only gets the compiled JavaScript and production dependencies. This means:
- Smaller image size (often 5-10x smaller)
- No dev tools or source code in production
- Fewer packages means fewer potential security vulnerabilities
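You can verify the size difference yourself by building each stage separately and comparing the results (the tags here are arbitrary):

```shell
# Build only the builder stage, then the final production stage
docker build --target builder -t my-app:builder .
docker build --target production -t my-app:prod .

# Compare the resulting image sizes
docker images my-app
```

The builder image carries TypeScript and all dev dependencies; the production image should come in several times smaller.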
Step 3: Create a .dockerignore File
The .dockerignore file prevents unnecessary files from being sent to the Docker daemon during builds.
# .dockerignore
node_modules
dist
.git
.gitignore
.env
.env.*
*.md
.vscode
.idea
coverage
.nyc_output
docker-compose*.yml
Dockerfile
.dockerignore
tests
__tests__
*.test.ts
*.spec.ts
Without a .dockerignore, the COPY . . instruction copies node_modules (which gets overwritten by npm ci anyway), .git history, test files, and other unnecessary content into the build context. This slows down builds and bloats the image.
Step 4: Set Up Docker Compose for Development
Docker Compose orchestrates multiple services (your app, database, cache) with a single command.
# docker-compose.yml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: builder # Use the builder stage for development
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://postgres:postgres@db:5432/myapp
      REDIS_URL: redis://cache:6379
    volumes:
      - ./src:/app/src # Mount source for hot reload
      - ./package.json:/app/package.json
    command: npm run dev # Override CMD for development
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
  db:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"
volumes:
  postgres_data:
Start the entire development stack:
docker compose up
Volume Mounts for Hot Reload
The volumes section mounts your local src/ directory into the container. When you edit files locally, the changes are reflected inside the container immediately. Combined with a watch-mode command like npm run dev (using nodemon or tsx), you get hot reload inside Docker.
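For reference, the npm run dev script used above might look like this in package.json; tsx is one option, nodemon works equally well:

```json
{
  "scripts": {
    "dev": "tsx watch src/main.ts",
    "build": "tsc",
    "start": "node dist/main.js"
  }
}
```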
Step 5: Production Docker Compose
Create a separate compose file for production-like environments.
# docker-compose.prod.yml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
      DATABASE_URL: ${DATABASE_URL}
      REDIS_URL: ${REDIS_URL}
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "0.5"
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
Run it with:
docker compose -f docker-compose.prod.yml up -d
Step 6: Add Health Check Endpoints
Docker and orchestrators like Kubernetes use health checks to determine if a container is ready to serve traffic.
// src/health.ts
import { Router } from "express";
// Assumes `db` (Postgres client) and `redis` (Redis client) are initialized elsewhere and imported here
const healthRouter = Router();
healthRouter.get("/health", (req, res) => {
res.status(200).json({ status: "ok", timestamp: new Date().toISOString() });
});
healthRouter.get("/health/ready", async (req, res) => {
try {
// Check database connectivity
await db.query("SELECT 1");
// Check Redis connectivity
await redis.ping();
res.status(200).json({
status: "ready",
checks: {
database: "connected",
cache: "connected",
},
});
} catch (error) {
res.status(503).json({
status: "not ready",
error: (error as Error).message,
});
}
});
export { healthRouter };
// src/main.ts
import express from "express";
import { healthRouter } from "./health";
const app = express();
app.use(healthRouter);
// ... rest of your application routes
app.listen(3000, () => console.log("Server running on port 3000"));
The /health endpoint is a simple liveness check (is the process running?). The /health/ready endpoint is a readiness check (can the application handle requests, including its dependencies?).
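If you later deploy to Kubernetes, these two endpoints map directly onto probes. A sketch of the relevant container spec fragment (names and timings are illustrative):

```yaml
# Fragment of a Kubernetes Deployment container spec
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /health/ready
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
```

A failing liveness probe restarts the container; a failing readiness probe only removes it from the load balancer until its dependencies recover.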
Step 7: Production Optimizations
Graceful Shutdown
Handle SIGTERM signals so Docker can stop your container cleanly.
// src/main.ts
import express from "express";
import { createServer } from "http";
const app = express();
const server = createServer(app);
server.listen(3000, () => console.log("Server running on port 3000"));
function gracefulShutdown(signal: string) {
console.log(`Received ${signal}. Starting graceful shutdown...`);
server.close(async () => {
console.log("HTTP server closed");
// Close database connections
await db.end();
// Close Redis connection
await redis.quit();
console.log("All connections closed");
process.exit(0);
});
// Force shutdown after 10 seconds
setTimeout(() => {
console.error("Forced shutdown after timeout");
process.exit(1);
}, 10000);
}
process.on("SIGTERM", () => gracefulShutdown("SIGTERM"));
process.on("SIGINT", () => gracefulShutdown("SIGINT"));
Use dumb-init or tini
Node.js does not handle signals correctly as PID 1 inside containers. Use a lightweight init system.
FROM node:20-alpine AS production
# Install dumb-init
RUN apk add --no-cache dumb-init
# ... rest of your Dockerfile
# Use dumb-init as entrypoint
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/main.js"]
dumb-init forwards signals properly to the Node.js process, ensuring graceful shutdown works as expected.
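You can check that shutdown behaves correctly by stopping a running container and inspecting its logs; docker stop sends SIGTERM, waits 10 seconds (the default grace period), then sends SIGKILL (container and image names here are arbitrary):

```shell
# Start the container in the background
docker run -d --name shutdown-test my-app

# Stop it: SIGTERM first, SIGKILL after the grace period
docker stop shutdown-test

# The logs should end with the graceful shutdown messages
docker logs shutdown-test
docker rm shutdown-test
```

If the logs stop abruptly without the "All connections closed" message, signals are not reaching your process, which is exactly what dumb-init fixes.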
Security Scanning
Scan your built images for known vulnerabilities:
# Using Docker Scout (built into Docker Desktop)
docker scout cves my-app:latest
# Using Trivy
trivy image my-app:latest
Run these scans in your CI pipeline to catch vulnerabilities before deployment.
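As one example of a CI integration, a GitHub Actions step using aquasecurity/trivy-action might look like this (the surrounding workflow and the image tag are assumptions):

```yaml
# Fragment of a GitHub Actions job, placed after the image build step
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: my-app:latest
    format: table
    exit-code: "1"          # fail the build when vulnerabilities are found
    severity: CRITICAL,HIGH
```

Failing the build on CRITICAL/HIGH findings keeps vulnerable images from ever reaching your registry.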
Putting It All Together
A production-ready Docker setup for Node.js has these layers:
- Multi-stage Dockerfile -- build stage for compilation, production stage with minimal runtime
- .dockerignore -- keeps the build context clean and the image small
- Docker Compose (dev) -- orchestrates your app, database, and cache with volume mounts for hot reload
- Docker Compose (prod) -- resource limits, restart policies, and health checks
- Health endpoints -- liveness and readiness checks for orchestrator integration
- Graceful shutdown -- clean connection cleanup when the container stops
The result is a container that starts fast, runs lean, and behaves correctly in production orchestration systems.
Next Steps
- Add CI/CD -- build and push images to a container registry in your GitHub Actions pipeline
- Implement layer caching in CI -- use Docker BuildKit cache mounts for faster CI builds
- Set up logging -- pipe structured JSON logs to stdout for collection by Docker or Kubernetes
- Add secrets management -- use Docker secrets or external vaults instead of environment variables for sensitive data
- Kubernetes deployment -- write Kubernetes manifests with readiness probes pointing at your health endpoints
FAQ
Why should you use multi-stage Docker builds for Node.js?
Multi-stage builds let you use a full Node image for building (with dev dependencies) and then copy only the production artifacts to a minimal Alpine image, dramatically reducing final image size and security vulnerabilities.
What base image should you use for Node.js Docker containers?
Use Alpine-based Node images (node:alpine) for production. They are under 100MB compared to the default images at over 1GB, with a much smaller attack surface for security.
How does Docker ensure consistent environments?
Docker packages your application with its exact dependencies, runtime, and OS configuration into a container that runs identically on any machine, eliminating environment-specific bugs.