Deploy a Next.js App to AWS with Docker
A step-by-step guide to deploying a Next.js application on AWS using Docker, ECR, ECS Fargate, and an Application Load Balancer with health checks.
In this tutorial, you will containerize a Next.js application with Docker, push the image to Amazon Elastic Container Registry (ECR), deploy it on ECS Fargate, and put it behind an Application Load Balancer with health checks. By the end, you will have a production-grade deployment that supports zero-downtime updates and horizontal scaling.
Deploying to AWS with Docker gives you full control over your infrastructure. Unlike platform-as-a-service solutions, you can run WebSocket servers, background workers, and long-running processes alongside your Next.js application. Docker also ensures your application runs identically in development and production.
TL;DR
Write a multi-stage Dockerfile that builds your Next.js app in one stage and creates a minimal production image in the next. Push the image to ECR. Create an ECS Fargate task definition that references the image. Deploy behind an ALB with health checks on /api/health. Configure environment variables through AWS Secrets Manager.
Prerequisites
- AWS account with IAM permissions for ECR, ECS, and EC2
- AWS CLI v2 installed and configured
- Docker installed locally
- A Next.js 14+ application ready for deployment
- Basic familiarity with AWS services
# Verify your tools are installed
docker --version
aws --version
aws sts get-caller-identity  # Verify AWS credentials
Step 1: Create the Dockerfile
A multi-stage Dockerfile produces the smallest possible production image by separating the build dependencies from the runtime.
# Stage 1: Install dependencies
FROM node:20-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json* ./
# Install the full dependency tree for the build. The runner stage uses
# the Next.js standalone output, so a separate production-only
# node_modules is not needed.
RUN npm ci
# Stage 2: Build the application
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Disable Next.js telemetry during build
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build
# Stage 3: Production runtime
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
# Create a non-root user for security
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 nextjs
# Copy only the files needed to run the application
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]
This Dockerfile has three stages. The deps stage installs all dependencies. The builder stage compiles your Next.js application. The runner stage creates a minimal image with only the production artifacts. The final image is typically under 150 MB, compared to over 1 GB for an unoptimized image.
For the standalone output to work, enable it in your Next.js configuration:
// next.config.ts
import type { NextConfig } from "next";
const nextConfig: NextConfig = {
output: "standalone",
};
export default nextConfig;
The standalone output mode bundles your application and its dependencies into a self-contained directory that can run with just node server.js, without needing node_modules.
Step 2: Add a Health Check Endpoint
AWS load balancers and container orchestrators need a health check endpoint to determine if your application is ready to receive traffic.
// app/api/health/route.ts
import { NextResponse } from "next/server";
export async function GET() {
try {
// Add checks for your critical dependencies
// For example, verify database connectivity:
// await prisma.$queryRaw`SELECT 1`;
return NextResponse.json(
{
status: "healthy",
timestamp: new Date().toISOString(),
uptime: process.uptime(),
},
{ status: 200 }
);
} catch (error) {
return NextResponse.json(
{
status: "unhealthy",
timestamp: new Date().toISOString(),
error: "Health check failed",
},
{ status: 503 }
);
}
}
The health check endpoint should verify that your application can serve requests and that critical dependencies like the database are accessible. Return a 200 status for healthy and 503 for unhealthy. Keep the check fast because the load balancer calls it every few seconds.
Step 3: Add a Docker Ignore File
Prevent unnecessary files from being included in the Docker build context.
# .dockerignore
node_modules
.next
.git
.gitignore
*.md
docker-compose*.yml
.env*.local
.vscode
coverage
.husky
A well-configured .dockerignore speeds up builds significantly by reducing the size of the build context that Docker sends to the daemon.
Step 4: Build and Test Locally
Build the Docker image and verify it works before pushing to AWS.
# Build the image
docker build -t my-nextjs-app .
# Run the container locally
docker run -p 3000:3000 \
-e DATABASE_URL="postgresql://user:pass@host:5432/db" \
my-nextjs-app
# Test the health endpoint
curl http://localhost:3000/api/health
Verify that your application loads correctly at http://localhost:3000 and the health check returns a 200 response. Fix any issues before proceeding to AWS deployment.
Step 5: Push to Amazon ECR
Create an ECR repository and push your Docker image.
# Set your AWS region and account ID
AWS_REGION=us-east-1
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
# Create the ECR repository
aws ecr create-repository \
--repository-name my-nextjs-app \
--region $AWS_REGION
# Authenticate Docker with ECR
aws ecr get-login-password --region $AWS_REGION | \
docker login --username AWS \
--password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
# Tag the image for ECR
docker tag my-nextjs-app:latest \
${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/my-nextjs-app:latest
# Push the image
docker push \
${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/my-nextjs-app:latest
ECR stores your Docker images privately within your AWS account. Each push overwrites the latest tag. For production, use version-specific tags like v1.2.3 or the git commit SHA to enable reliable rollbacks.
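As a sketch of that tagging approach, the commands below derive an immutable tag from the current commit. They assume a git checkout and the AWS_ACCOUNT_ID and AWS_REGION variables set earlier in this step:

```shell
# Derive an immutable tag from the current commit
GIT_SHA=$(git rev-parse --short HEAD)
IMAGE_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/my-nextjs-app:${GIT_SHA}"

# Tag and push alongside (or instead of) :latest
docker tag my-nextjs-app:latest "$IMAGE_URI"
docker push "$IMAGE_URI"
```

Referencing this SHA tag in the task definition pins each deployment to an exact build, so rolling back is just registering a revision that points at a previous SHA.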
Step 6: Create the ECS Task Definition
The task definition tells ECS how to run your container, including resource limits, environment variables, and logging configuration.
{
"family": "my-nextjs-app",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "256",
"memory": "512",
"executionRoleArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/ecsTaskExecutionRole",
"taskRoleArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/ecsTaskRole",
"containerDefinitions": [
{
"name": "nextjs",
"image": "YOUR_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/my-nextjs-app:latest",
"portMappings": [
{
"containerPort": 3000,
"protocol": "tcp"
}
],
"environment": [
{
"name": "NODE_ENV",
"value": "production"
}
],
"secrets": [
{
"name": "DATABASE_URL",
"valueFrom": "arn:aws:secretsmanager:us-east-1:YOUR_ACCOUNT_ID:secret:prod/database-url"
}
],
"healthCheck": {
        "command": ["CMD-SHELL", "wget -q --spider http://localhost:3000/api/health || exit 1"],
"interval": 30,
"timeout": 5,
"retries": 3,
"startPeriod": 60
},
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/my-nextjs-app",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs"
}
},
"essential": true
}
]
}
Register the task definition:
aws ecs register-task-definition \
--cli-input-json file://task-definition.json
Key configuration choices explained:
- 256 CPU / 512 memory: The smallest Fargate size, suitable for low-traffic applications. Scale up as needed.
- secrets: References AWS Secrets Manager for sensitive values. Never put database URLs or API keys in the environment section.
- healthCheck: Runs inside the container. Note that node:20-alpine does not ship curl, so the command must use a tool present in the image (busybox wget) or you must install curl in the Dockerfile.
- healthCheck.startPeriod: Gives the container 60 seconds to start before health checks begin, preventing premature restarts during cold starts.
- awslogs: Sends container logs to CloudWatch for monitoring and debugging.
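The task definition references resources that must exist before the first task can start. A hedged sketch of creating them, with a placeholder connection string:

```shell
# Create the secret referenced in the task definition
# (placeholder value -- substitute your real connection string)
aws secretsmanager create-secret \
  --name prod/database-url \
  --secret-string "postgresql://user:pass@host:5432/db"

# The awslogs driver does not create the log group by default;
# create it up front so tasks do not fail on startup
aws logs create-log-group --log-group-name /ecs/my-nextjs-app
```

The ecsTaskExecutionRole also needs permission to read the secret (secretsmanager:GetSecretValue), or the container will fail to start.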
Step 7: Configure the Application Load Balancer
The ALB distributes incoming traffic across your ECS tasks and handles SSL termination.
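The commands in this step reference a VPC and security group IDs, which are placeholders. One possible setup is a public-facing group for the ALB and a second group that accepts traffic only from it:

```shell
# ALB security group: accept HTTP/HTTPS from the internet
ALB_SG=$(aws ec2 create-security-group \
  --group-name my-nextjs-alb-sg \
  --description "ALB for my-nextjs-app" \
  --vpc-id vpc-your-vpc-id \
  --query GroupId --output text)
aws ec2 authorize-security-group-ingress \
  --group-id "$ALB_SG" --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id "$ALB_SG" --protocol tcp --port 443 --cidr 0.0.0.0/0

# Task security group: accept port 3000 only from the ALB
TASK_SG=$(aws ec2 create-security-group \
  --group-name my-nextjs-task-sg \
  --description "ECS tasks for my-nextjs-app" \
  --vpc-id vpc-your-vpc-id \
  --query GroupId --output text)
aws ec2 authorize-security-group-ingress \
  --group-id "$TASK_SG" --protocol tcp --port 3000 --source-group "$ALB_SG"
```

Use the ALB group when creating the load balancer below and the task group in the ECS service's network configuration in Step 8.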
# Create a target group for the ECS service
aws elbv2 create-target-group \
--name my-nextjs-tg \
--protocol HTTP \
--port 3000 \
--vpc-id vpc-your-vpc-id \
--target-type ip \
--health-check-path /api/health \
--health-check-interval-seconds 30 \
--healthy-threshold-count 2 \
--unhealthy-threshold-count 3
# Create the load balancer
aws elbv2 create-load-balancer \
--name my-nextjs-alb \
--subnets subnet-xxx subnet-yyy \
--security-groups sg-your-sg-id \
--scheme internet-facing
# Create a listener on port 443 (HTTPS)
aws elbv2 create-listener \
--load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/app/my-nextjs-alb/xxx \
--protocol HTTPS \
--port 443 \
--certificates CertificateArn=arn:aws:acm:...:certificate/xxx \
--default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/my-nextjs-tg/xxx
# Redirect HTTP to HTTPS
aws elbv2 create-listener \
--load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/app/my-nextjs-alb/xxx \
--protocol HTTP \
--port 80 \
--default-actions Type=redirect,RedirectConfig='{Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'
The health check on /api/health ensures the ALB only sends traffic to healthy containers. The HTTP-to-HTTPS redirect ensures all traffic is encrypted.
Step 8: Create the ECS Service
The ECS service maintains the desired number of running tasks and integrates with the load balancer.
# Create the ECS cluster
aws ecs create-cluster --cluster-name my-nextjs-cluster
# Create the service
aws ecs create-service \
--cluster my-nextjs-cluster \
--service-name my-nextjs-service \
--task-definition my-nextjs-app \
--desired-count 2 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={subnets=[subnet-xxx,subnet-yyy],securityGroups=[sg-xxx],assignPublicIp=ENABLED}" \
--load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/my-nextjs-tg/xxx,containerName=nextjs,containerPort=3000" \
--deployment-configuration "maximumPercent=200,minimumHealthyPercent=100"
The desired-count of 2 runs two instances for high availability. The deployment configuration ensures zero-downtime updates: ECS launches new tasks before draining old ones (maximumPercent=200), and at least 100% of tasks remain healthy during the deployment.
Step 9: Deploy Updates
When you push a new version of your application, update the ECS service to pull the new image.
# Build and push the new image
docker build -t my-nextjs-app .
docker tag my-nextjs-app:latest \
${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/my-nextjs-app:latest
docker push \
${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/my-nextjs-app:latest
# Force a new deployment (pulls the latest image)
aws ecs update-service \
--cluster my-nextjs-cluster \
--service my-nextjs-service \
--force-new-deployment
The --force-new-deployment flag tells ECS to launch new tasks with the latest image, even if the task definition has not changed. ECS performs a rolling update, replacing old tasks while maintaining the minimum healthy percentage.
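Rather than watching the console, you can block until the rollout completes using the CLI's built-in waiter:

```shell
# Wait until the service reaches a steady state
# (the waiter polls periodically and times out after about 10 minutes)
aws ecs wait services-stable \
  --cluster my-nextjs-cluster \
  --services my-nextjs-service

# Inspect the rollout: a single deployment entry means the update finished
aws ecs describe-services \
  --cluster my-nextjs-cluster \
  --services my-nextjs-service \
  --query "services[0].deployments"
```

This is also the natural final step in a CI/CD job, so the pipeline fails if the new tasks never become healthy.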
Summary
The complete deployment pipeline consists of:
- Dockerfile: Multi-stage build that produces a minimal production image using Next.js standalone output
- Health check endpoint: Enables the ALB and ECS to monitor container health
- ECR: Private Docker registry within your AWS account
- ECS Task Definition: Specifies resource limits, environment variables, secrets, and logging
- Application Load Balancer: Distributes traffic, terminates SSL, and enforces HTTPS
- ECS Service: Maintains desired task count with zero-downtime rolling deployments
This architecture is production-ready and scales horizontally by increasing the desired task count or adding auto-scaling policies based on CPU or request metrics.
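As a sketch of the auto-scaling option, the commands below register the service as a scalable target and attach a target-tracking policy that holds average CPU near 60%. The policy name and thresholds are illustrative:

```shell
# Allow Application Auto Scaling to adjust the service's desired count
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-nextjs-cluster/my-nextjs-service \
  --min-capacity 2 \
  --max-capacity 10

# Scale out when average CPU rises above ~60%, scale back in when it drops
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-nextjs-cluster/my-nextjs-service \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 60.0,
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
    }
  }'
```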
Next Steps
- Set up auto-scaling with target tracking policies based on CPU utilization or request count
- Add a CI/CD pipeline using GitHub Actions to automate the build, push, and deploy workflow
- Configure CloudWatch alarms for monitoring container health, error rates, and latency
- Implement blue/green deployments using AWS CodeDeploy for even safer rollouts
- Add a CDN with CloudFront in front of the ALB for global edge caching of static assets
- Set up distributed tracing with AWS X-Ray to follow requests across services
FAQ
Why use Docker for deploying Next.js instead of Vercel?
Docker gives you full control over your infrastructure and avoids vendor lock-in. It supports features that serverless platforms cannot provide, such as persistent WebSocket connections, background workers, cron jobs, and long-running processes. Docker deployments are also more cost-effective at scale because you pay for compute capacity rather than per-invocation pricing.
How much does it cost to run a Next.js app on ECS Fargate?
Costs depend on your task size and traffic patterns. A minimal setup with 0.25 vCPU and 0.5 GB memory running continuously costs roughly $10-15 per month. Running two tasks for high availability doubles that. You pay only for the compute time your containers use, and Fargate Spot can reduce costs further for fault-tolerant workloads.
Can I use ECS with EC2 instead of Fargate?
Yes. The EC2 launch type gives you more control over the underlying instances and can be cheaper for steady-state workloads using Reserved Instances or Savings Plans. Fargate is simpler because AWS manages provisioning, patching, and scaling the server infrastructure. Choose Fargate to start and migrate to EC2 if you need to optimize costs at scale.
How do I handle environment variables in ECS?
Store sensitive values like database URLs, API keys, and secrets in AWS Secrets Manager or SSM Parameter Store. Reference them in the secrets section of your task definition. Non-sensitive configuration like NODE_ENV can go in the environment section. Never bake secrets into your Docker image because anyone with access to the image can extract them.
How do I set up a CI/CD pipeline for this deployment?
Use GitHub Actions with a workflow that triggers on pushes to your main branch. The workflow should build the Docker image, push it to ECR, and update the ECS service using the AWS CLI. Store your AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) as GitHub repository secrets. Use OIDC federation with IAM roles for a more secure alternative to long-lived credentials.