Set Up CI/CD with GitHub Actions and Docker
A hands-on guide to setting up a CI/CD pipeline with GitHub Actions and Docker — covering multi-stage builds, testing, caching, and deployment to AWS ECR/ECS.
In this tutorial, you will build a complete CI/CD pipeline using GitHub Actions that tests your code, builds a Docker image using multi-stage builds, and deploys it to AWS ECR and ECS. You will implement caching strategies to keep builds fast, manage environment variables securely, and set up separate workflows for staging and production deployments. By the end, every push to your main branch triggers an automated pipeline that gets your code from commit to production.
TL;DR
Create a GitHub Actions workflow that runs tests on every push, builds a multi-stage Docker image, pushes it to AWS ECR, and deploys to ECS. Use Docker layer caching and dependency caching to keep builds under five minutes. Manage secrets through GitHub's encrypted secrets and use environment-specific configurations for staging and production.
Prerequisites
- A GitHub repository with a Node.js or TypeScript application
- Docker installed locally for testing builds
- An AWS account with ECR and ECS set up (or any container registry/orchestrator)
- Basic familiarity with Docker and YAML syntax
Step 1: Create a Multi-Stage Dockerfile
A multi-stage Dockerfile separates the build environment from the production image, resulting in a smaller and more secure final image.
# Stage 1: Install dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && \
    cp -R node_modules /prod_modules && \
    npm ci
# Stage 2: Build the application
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# Stage 3: Production image
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 appuser
COPY --from=deps /prod_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY package.json ./
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1
CMD ["node", "dist/main.js"]

This Dockerfile has three stages. The deps stage installs all dependencies and saves a separate copy of production-only dependencies. The builder stage copies the full dependencies and source code, then runs the build. The runner stage starts from a clean Alpine image, copies only the production dependencies and built output, and runs as a non-root user.
The HEALTHCHECK instruction lets Docker and container orchestrators verify the application is responding. The non-root appuser follows the security principle of least privilege.
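The health check assumes your application serves a /health route on port 3000. A minimal sketch of such an endpoint using Node's built-in http module — the route name and port are assumptions, so keep them in sync with the wget command in your HEALTHCHECK instruction:

```typescript
import http from "node:http";

// Minimal /health endpoint for the Dockerfile's HEALTHCHECK (and later the
// ECS health check) to probe. Returns 200 with a small JSON body when the
// process is alive, 404 for everything else.
const server = http.createServer((req, res) => {
  if (req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok", uptime: process.uptime() }));
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(3000);
```

In a real service you would typically also check downstream dependencies (database connectivity, queue reachability) before returning 200, so the orchestrator restarts containers that are up but not actually serving.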
Build and test it locally:
docker build -t my-app:latest .
docker run -p 3000:3000 my-app:latest

Step 2: Create the GitHub Actions Workflow
Create your workflow file at .github/workflows/ci-cd.yml:
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: my-app
  ECS_SERVICE: my-app-service
  ECS_CLUSTER: my-cluster
  ECS_TASK_DEFINITION: .aws/task-definition.json

jobs:
  test:
    name: Run Tests
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
          POSTGRES_DB: testdb
        ports:
          - 5432:5432
        options: >-
          --health-cmd="pg_isready -U testuser"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=5
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - name: Install dependencies
        run: npm ci
      - name: Run linter
        run: npm run lint
      - name: Run type check
        run: npm run type-check
      - name: Run unit tests
        run: npm run test
        env:
          DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb
      - name: Run e2e tests
        run: npm run test:e2e
        env:
          DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb
      - name: Upload coverage
        uses: actions/upload-artifact@v4
        with:
          name: coverage
          path: coverage/
          retention-days: 7

The services section spins up a PostgreSQL container alongside your tests, giving you a real database to test against. The actions/setup-node@v4 step with cache: "npm" caches the npm cache directory, speeding up dependency installation on subsequent runs.
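The workflow injects DATABASE_URL as a plain environment variable. A hypothetical test-setup helper that reads it and falls back to the same credentials the postgres service container uses, so the tests run identically in CI and on a developer machine:

```typescript
// Hypothetical test-setup helper: read the DATABASE_URL the workflow injects,
// falling back to the local defaults matching the CI service container.
const databaseUrl = new URL(
  process.env.DATABASE_URL ??
    "postgresql://testuser:testpass@localhost:5432/testdb"
);

console.log(
  `tests run against ${databaseUrl.hostname}:${databaseUrl.port}${databaseUrl.pathname}`
);
```

Parsing the URL up front also fails fast with a clear error if the secret is missing or malformed, instead of surfacing as a cryptic driver timeout mid-suite.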
Step 3: Add the Build and Push Job
Add a second job that builds the Docker image and pushes it to AWS ECR:
  build-and-push:
    name: Build and Push Docker Image
    runs-on: ubuntu-latest
    needs: test
    if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop'
    permissions:
      id-token: write
      contents: read
    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Generate image metadata
        id: meta
        run: |
          IMAGE_TAG="${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ github.sha }}"
          echo "tags=$IMAGE_TAG" >> "$GITHUB_OUTPUT"
          echo "Image tag: $IMAGE_TAG"
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ steps.meta.outputs.tags }}
            ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
          build-args: |
            NODE_ENV=production
      - name: Scan image for vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ steps.meta.outputs.tags }}
          format: "sarif"
          output: "trivy-results.sarif"
          severity: "CRITICAL,HIGH"

A few details about this job: needs: test ensures it only runs after the tests pass, and the if condition limits builds to pushes on the main and develop branches (not pull requests). AWS authentication uses OIDC (role-to-assume), which is more secure than storing long-lived access keys as secrets. Docker Buildx enables advanced features like multi-platform builds and the GitHub Actions cache backend (type=gha). The Trivy scan checks the built image for known vulnerabilities.
The cache-from and cache-to directives use the GitHub Actions cache to store Docker layers between workflow runs. Unchanged layers are reused, which can reduce build times from minutes to seconds for incremental changes.
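For role-to-assume to work, the IAM role's trust policy must trust GitHub's OIDC provider and restrict which repository may assume it. A sketch of such a trust policy — the account ID (reusing the 123456789012 placeholder from the task definition example) and the repo:my-org/my-app:* subject are placeholders you must replace with your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-app:*"
        }
      }
    }
  ]
}
```

Tightening the sub condition (for example to repo:my-org/my-app:ref:refs/heads/main) limits which branches can assume the deploy role at all.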
Step 4: Add the Deployment Job
Add a third job that deploys to AWS ECS:
  deploy:
    name: Deploy to ECS
    runs-on: ubuntu-latest
    needs: build-and-push
    if: github.ref == 'refs/heads/main'
    permissions:
      id-token: write
      contents: read
    environment:
      name: production
      url: https://myapp.example.com
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Render ECS task definition
        id: render-task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ env.ECS_TASK_DEFINITION }}
          container-name: my-app
          image: ${{ needs.build-and-push.outputs.image-tag }}
          environment-variables: |
            NODE_ENV=production
            DATABASE_URL=${{ secrets.DATABASE_URL }}
      - name: Deploy to Amazon ECS
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.render-task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
          wait-for-minutes: 10
      - name: Notify deployment
        if: always()
        run: |
          if [ "${{ job.status }}" == "success" ]; then
            echo "Deployment successful: ${{ github.sha }}"
          else
            echo "Deployment failed: ${{ github.sha }}"
          fi

The environment: production setting enables GitHub's environment protection rules: you can require manual approval, restrict which branches can deploy, and add environment-specific secrets. The wait-for-service-stability flag makes the job wait until ECS confirms the new containers are healthy before marking the deployment as successful.
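After a deploy, you can confirm from a terminal which task definition the service actually ended up running. This assumes the AWS CLI is configured with credentials for the same account; the cluster and service names match the workflow's env block:

```shell
# Show the task definition behind the service's PRIMARY deployment —
# after a successful deploy it should reference the new revision.
aws ecs describe-services \
  --cluster my-cluster \
  --services my-app-service \
  --query "services[0].deployments[?status=='PRIMARY'].taskDefinition" \
  --output text
```

If the PRIMARY deployment still points at the old revision, check the service events (`aws ecs describe-services` includes an events list) for failed health checks or image pull errors.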
Step 5: Create the ECS Task Definition
Create the task definition template that the workflow will render with the correct image tag:
{
  "family": "my-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "PLACEHOLDER",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "healthCheck": {
        "command": [
          "CMD-SHELL",
          "wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1"
        ],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 10
      },
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}

Save this file at .aws/task-definition.json. The image field will be replaced by the workflow with the actual ECR image URI. The health check mirrors the one in your Dockerfile to keep the two in sync.
Step 6: Manage Environment Variables and Secrets
GitHub Actions provides two mechanisms for sensitive configuration: secrets and environment variables.
Add secrets in your repository settings under Settings > Secrets and variables > Actions:
- AWS_ROLE_ARN: The IAM role ARN for OIDC authentication
- DATABASE_URL: Your production database connection string
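You can also set these from the command line with the GitHub CLI (assumes gh is authenticated against the repository; the values shown are placeholders):

```shell
# Repository-level secret, available to every workflow run
gh secret set AWS_ROLE_ARN --body "arn:aws:iam::123456789012:role/github-actions-deploy"

# Environment-scoped secret, only visible to jobs targeting `production`
gh secret set DATABASE_URL --env production --body "postgresql://user:pass@prod-host:5432/app"
```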
For non-sensitive configuration that varies by environment, use GitHub environments:
  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build-and-push
    if: github.ref == 'refs/heads/develop'
    environment:
      name: staging
      url: https://staging.myapp.example.com
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ env.AWS_REGION }}
      # Checkout, ECR login, and the render-task-def step (as in the
      # production deploy job) are omitted here for brevity.
      - name: Deploy to staging ECS
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.render-task-def.outputs.task-definition }}
          service: my-app-staging
          cluster: my-cluster-staging
          wait-for-service-stability: true

Each environment (staging, production) can have its own secrets and variables. This keeps production credentials isolated from development workflows.
Step 7: Implement Caching Strategies
Effective caching is the difference between a 10-minute pipeline and a 2-minute one. Here are the caching layers in this pipeline:
npm dependency cache is handled by actions/setup-node with the cache: "npm" option. It caches the ~/.npm directory and restores it based on a hash of package-lock.json.
Docker layer cache uses the GitHub Actions cache backend. Add this to your build step:
- name: Build and push Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ${{ steps.meta.outputs.tags }}
    cache-from: type=gha
    cache-to: type=gha,mode=max

The mode=max setting caches all layers, not just the final image layers. This means intermediate stages (like dependency installation) are cached too.
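One caveat: the GitHub Actions cache has a per-repository size limit (currently around 10 GB), after which older entries are evicted. If your layers outgrow it, a registry-backed cache is a drop-in alternative — a sketch, where the buildcache tag name is an arbitrary choice:

```yaml
- name: Build and push Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ${{ steps.meta.outputs.tags }}
    # ECR requires OCI-compliant cache manifests, hence image-manifest=true
    cache-from: type=registry,ref=${{ steps.login-ecr.outputs.registry }}/my-app:buildcache
    cache-to: type=registry,ref=${{ steps.login-ecr.outputs.registry }}/my-app:buildcache,mode=max,image-manifest=true
```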
Test result caching for Jest can be configured with a custom cache key:
- name: Cache Jest
  uses: actions/cache@v4
  with:
    path: /tmp/jest_rt
    key: jest-${{ runner.os }}-${{ hashFiles('**/*.test.ts') }}
    restore-keys: |
      jest-${{ runner.os }}-

Together, these caching strategies ensure that only the changed parts of the pipeline run from scratch.
Step 8: Add a Pull Request Workflow
Create a separate workflow for pull requests that runs tests and builds the image without deploying:
# .github/workflows/pr-check.yml
name: PR Check

on:
  pull_request:
    branches: [main, develop]

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - run: npm ci
      - run: npm run lint
      - run: npm run type-check
      - run: npm run test
  build:
    name: Docker Build (no push)
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: false
          cache-from: type=gha
          cache-to: type=gha,mode=max

This workflow validates that the Docker image builds successfully without pushing it to any registry, catching Dockerfile errors before they reach the main branch.
The Complete Pipeline Flow
Here is how the full CI/CD pipeline works:
- A developer pushes code or opens a pull request
- The test job runs: linting, type checking, unit tests, and e2e tests against a real PostgreSQL instance
- If tests pass and the push is to main or develop, the build job creates a Docker image using multi-stage builds
- The image is pushed to AWS ECR with both a commit SHA tag and a latest tag
- The image is scanned for vulnerabilities with Trivy
- If the push is to main, the deploy job updates the ECS task definition and deploys the new image
- ECS performs a rolling update, routing traffic to new containers only after health checks pass
- If the push is to develop, the staging deployment job runs instead
Next Steps
- Rollback automation: Add a workflow that reverts to the previous ECS task definition if health checks fail
- Slack notifications: Add a notification step that posts deployment status to a Slack channel
- Database migrations: Run Drizzle or Prisma migrations as a step before deployment
- Feature flags: Integrate a feature flag service to decouple deployments from releases
- Multi-region deployment: Extend the pipeline to deploy to multiple AWS regions for redundancy
- Performance testing: Add a load testing step using k6 or Artillery before production deployment
FAQ
What is CI/CD and why does it matter?
CI/CD stands for Continuous Integration and Continuous Deployment. Continuous Integration automatically runs tests and builds whenever code is pushed, catching bugs early. Continuous Deployment automatically releases tested code to production. Together they reduce manual deployment steps, catch issues before they reach users, and enable teams to ship faster with confidence.
What are multi-stage Docker builds?
Multi-stage Docker builds use multiple FROM statements in a single Dockerfile, each creating a separate build stage. You can copy artifacts from one stage to another while discarding unnecessary build tools and dependencies. This produces smaller, more secure production images because the final image only contains the runtime and your built application, not the build toolchain.
How do GitHub Actions secrets work?
GitHub Actions secrets are encrypted environment variables stored at the repository or organization level. They are available to workflow runs as environment variables but are masked in logs. You add them in the repository settings under Secrets and Variables. Secrets are commonly used for API keys, Docker registry credentials, and deployment tokens.
How do you cache Docker layers in GitHub Actions?
Docker layer caching in GitHub Actions uses the actions/cache action or the built-in cache support in docker/build-push-action. The cache stores Docker layers from previous builds, so unchanged layers are reused instead of rebuilt. This can reduce build times significantly, especially for steps that install dependencies which change less frequently than application code.
How do you deploy Docker containers to AWS ECS?
Deploying to AWS ECS involves pushing your Docker image to Amazon ECR (Elastic Container Registry), then updating your ECS service to use the new image. In GitHub Actions, you authenticate with AWS using OIDC or access keys, build and push the image to ECR, render a new task definition with the updated image tag, and deploy it to ECS which handles rolling updates automatically.