[feat] Added new documentation for Node.js guide (#23607)

This PR introduces a comprehensive, language-specific guide for
containerizing Node.js applications using Docker, aimed at helping
developers streamline development, testing, and deployment workflows. It
includes hands-on steps and real-world examples to configure CI/CD
pipelines using GitHub Actions, following modern DevOps best practices.

**What’s Included**

- Step-by-step instructions to containerize Node.js applications using
Docker.
- Configuration for a local development environment inside containers
with automatic reloads.
- Guidance on running unit and integration tests within Docker
containers.
- Full CI/CD pipeline setup using GitHub Actions for automated builds,
tests, and deployments.
- Deployment instructions for a local Kubernetes cluster to validate
production readiness.

**Credits**
[Kristiyan Velkov](https://www.linkedin.com/in/kristiyanvelkov/), Docker
Captain.

## Reviews

<!-- Notes for reviewers here -->
<!-- List applicable reviews (optionally @tag reviewers) -->

- [x] Technical review
- [x] Editorial review
- [ ] Product review
This commit is contained in:
Kristiyan Velkov
2025-11-21 20:25:26 +02:00
committed by GitHub
parent d211ce0784
commit b11f7a5a52
7 changed files with 2288 additions and 875 deletions

View File

@@ -15,12 +15,41 @@ params:
time: 20 minutes
---
The Node.js language-specific guide teaches you how to containerize a Node.js application using Docker. In this guide, you'll learn how to:
[Node.js](https://nodejs.org/en) is a JavaScript runtime for building web applications. This guide shows you how to containerize a TypeScript Node.js application with a React frontend and PostgreSQL database.
- Containerize and run a Node.js application
- Set up a local environment to develop a Node.js application using containers
- Run tests for a Node.js application using containers
- Configure a CI/CD pipeline for a containerized Node.js application using GitHub Actions
- Deploy your containerized Node.js application locally to Kubernetes to test and debug your deployment
The sample application is a modern full-stack Todo application featuring:
Start by containerizing an existing Node.js application.
- **Backend**: Express.js with TypeScript, PostgreSQL database, and RESTful API
- **Frontend**: React.js with Vite and Tailwind CSS 4
> **Acknowledgment**
>
> Docker extends its sincere gratitude to [Kristiyan Velkov](https://www.linkedin.com/in/kristiyan-velkov-763130b3/) for authoring this guide. As a Docker Captain and experienced Full-stack engineer, his expertise in Docker, DevOps, and modern web development has made this resource invaluable for the community, helping developers navigate and optimize their Docker workflows.
---
## What will you learn?
In this guide, you will learn how to:
- Containerize and run a Node.js application using Docker.
- Run tests inside a Docker container.
- Set up a development container environment.
- Configure GitHub Actions for CI/CD with Docker.
- Deploy your Dockerized Node.js app to Kubernetes.
To begin, you'll containerize an existing Node.js application.
---
## Prerequisites
Before you begin, make sure you're familiar with the following:
- Basic understanding of [JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript) and [TypeScript](https://www.typescriptlang.org/).
- Basic knowledge of [Node.js](https://nodejs.org/en), [npm](https://docs.npmjs.com/about-npm), and [React](https://react.dev/) for modern web development.
- Understanding of Docker concepts such as images, containers, and Dockerfiles. If you're new to Docker, start with the [Docker basics](/get-started/docker-concepts/the-basics/what-is-a-container.md) guide.
- Familiarity with [Express.js](https://expressjs.com/) for backend API development.
Once you've completed the Node.js getting started modules, you'll be ready to containerize your own Node.js application using the examples and instructions provided in this guide.
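Later sections of this guide build `test` and production stages of a multi-stage Dockerfile. As a minimal sketch of what such a file might look like (stage names match the build targets referenced in the CI workflow; the base image, commands, and paths are assumptions, not the sample app's actual Dockerfile):

```dockerfile
# Hypothetical multi-stage Dockerfile; stage names match the CI build targets.
FROM node:22-alpine AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci

# "test" stage: built with --target test, runs the test suite
FROM base AS test
COPY . .
CMD ["npm", "test"]

# "production" stage: built with --target production, runs the compiled app
FROM base AS production
ENV NODE_ENV=production
COPY . .
RUN npm run build
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
```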

View File

@@ -1,140 +0,0 @@
---
title: Configure CI/CD for your Node.js application
linkTitle: Configure CI/CD
weight: 40
keywords: ci/cd, github actions, node.js, node
description: Learn how to configure CI/CD using GitHub Actions for your Node.js application.
aliases:
- /language/nodejs/configure-ci-cd/
- /guides/language/nodejs/configure-ci-cd/
---
## Prerequisites
Complete all the previous sections of this guide, starting with [Containerize a Node.js application](containerize.md). You must have a [GitHub](https://github.com/signup) account and a [Docker](https://hub.docker.com/signup) account to complete this section.
## Overview
In this section, you'll learn how to set up and use GitHub Actions to build and test your Docker image as well as push it to Docker Hub. You will complete the following steps:
1. Create a new repository on GitHub.
2. Define the GitHub Actions workflow.
3. Run the workflow.
## Step one: Create the repository
Create a GitHub repository, configure the Docker Hub credentials, and push your source code.
1. [Create a new repository](https://github.com/new) on GitHub.
2. Open the repository **Settings**, and go to **Secrets and variables** >
**Actions**.
3. Create a new **Repository variable** named `DOCKER_USERNAME`, with your Docker ID as the value.
4. Create a new [Personal Access Token (PAT)](/manuals/security/access-tokens.md#create-an-access-token) for Docker Hub. You can name this token `docker-tutorial`. Make sure access permissions include Read and Write.
5. Add the PAT as a **Repository secret** in your GitHub repository, with the name
`DOCKERHUB_TOKEN`.
6. In your local repository on your machine, run the following command to change
the origin to the repository you just created. Make sure you change
`your-username` to your GitHub username and `your-repository` to the name of
the repository you created.
```console
$ git remote set-url origin https://github.com/your-username/your-repository.git
```
7. Run the following commands to stage, commit, and push your local repository to GitHub.
```console
$ git add -A
$ git commit -m "my commit"
$ git push -u origin main
```
## Step two: Set up the workflow
Set up your GitHub Actions workflow for building, testing, and pushing the image
to Docker Hub.
1. Go to your repository on GitHub and then select the **Actions** tab.
2. Select **set up a workflow yourself**.
This takes you to a page for creating a new GitHub Actions workflow file in
your repository, under `.github/workflows/main.yml` by default.
3. In the editor window, copy and paste the following YAML configuration.
```yaml
name: ci

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and test
        uses: docker/build-push-action@v6
        with:
          target: test
          load: true
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          target: prod
          tags: ${{ vars.DOCKER_USERNAME }}/${{ github.event.repository.name }}:latest
```
For more information about the YAML syntax for `docker/build-push-action`,
refer to the [GitHub Action README](https://github.com/docker/build-push-action/blob/master/README.md).
## Step three: Run the workflow
Save the workflow file and run the job.
1. Select **Commit changes...** and push the changes to the `main` branch.
After pushing the commit, the workflow starts automatically.
2. Go to the **Actions** tab. It displays the workflow.
Selecting the workflow shows you the breakdown of all the steps.
3. When the workflow is complete, go to your
[repositories on Docker Hub](https://hub.docker.com/repositories).
If you see the new repository in that list, it means the GitHub Actions
successfully pushed the image to Docker Hub.
## Summary
In this section, you learned how to set up a GitHub Actions workflow for your Node.js application.
Related information:
- [Introduction to GitHub Actions](/guides/gha.md)
- [Docker Build GitHub Actions](/manuals/build/ci/github-actions/_index.md)
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
## Next steps
Next, learn how you can locally test and debug your workloads on Kubernetes before deploying.

View File

@@ -0,0 +1,344 @@
---
title: Automate your builds with GitHub Actions
linkTitle: Automate your builds with GitHub Actions
weight: 50
keywords: CI/CD, GitHub Actions, Node.js, Docker
description: Learn how to configure CI/CD using GitHub Actions for your Node.js application.
aliases:
- /language/nodejs/configure-ci-cd/
- /guides/language/nodejs/configure-ci-cd/
---
## Prerequisites
Complete all the previous sections of this guide, starting with [Containerize a Node.js application](containerize.md).
You must also have:
- A [GitHub](https://github.com/signup) account.
- A [Docker Hub](https://hub.docker.com/signup) account.
---
## Overview
In this section, you'll set up a **CI/CD pipeline** using [GitHub Actions](https://docs.github.com/en/actions) to automatically:
- Build your Node.js application inside a Docker container.
- Run unit and integration tests to verify your application behaves as expected.
- Push production-ready images to [Docker Hub](https://hub.docker.com).
---
## Connect your GitHub repository to Docker Hub
To enable GitHub Actions to build and push Docker images, you'll securely store your Docker Hub credentials in your new GitHub repository.
### Step 1: Set up credentials and repositories
1. Create a Personal Access Token (PAT) from [Docker Hub](https://hub.docker.com).
1. From your Docker Hub account, go to **Account Settings → Security**.
2. Generate a new Access Token with **Read/Write** permissions.
3. Name it something like `docker-nodejs-sample`.
4. Copy and save the token — you'll need it in Step 4.
2. Create a repository in [Docker Hub](https://hub.docker.com/repositories/).
1. From your Docker Hub account, select **Create a repository**.
2. For the Repository Name, use something descriptive — for example: `nodejs-sample`.
3. Once created, copy and save the repository name — you'll need it in Step 4.
3. Create a new [GitHub repository](https://github.com/new) for your Node.js project.
4. Add Docker Hub credentials as GitHub repository secrets.
In your newly created GitHub repository:
1. From **Settings**, go to **Secrets and variables → Actions → New repository secret**.
2. Add the following secrets:
| Name | Value |
| ------------------------ | ------------------------------------------------ |
| `DOCKER_USERNAME` | Your Docker Hub username |
| `DOCKERHUB_TOKEN` | Your Docker Hub access token (created in Step 1) |
| `DOCKERHUB_PROJECT_NAME` | Your Docker Hub repository name (created in Step 2) |
These secrets let GitHub Actions authenticate securely with Docker Hub during automated workflows.
5. Connect your local project to GitHub.
Link your local project `docker-nodejs-sample` to the GitHub repository you just created by running the following command from your project root:
```console
$ git remote set-url origin https://github.com/{your-username}/{your-repository-name}.git
```
> [!IMPORTANT]
> Replace `{your-username}` and `{your-repository-name}` with your actual GitHub username and repository name.
To confirm that your local project is correctly connected to the remote GitHub repository, run:
```console
$ git remote -v
```
You should see output similar to:
```console
origin https://github.com/{your-username}/{your-repository-name}.git (fetch)
origin https://github.com/{your-username}/{your-repository-name}.git (push)
```
This confirms that your local repository is properly linked and ready to push your source code to GitHub.
6. Push your source code to GitHub.
Follow these steps to commit and push your local project to your GitHub repository:
1. Stage all files for commit.
```console
$ git add -A
```
This command stages all changes — including new, modified, and deleted files — preparing them for commit.
2. Commit your changes.
```console
$ git commit -m "Initial commit with CI/CD pipeline"
```
This command creates a commit that snapshots the staged changes with a descriptive message.
3. Push the code to the `main` branch.
```console
$ git push -u origin main
```
This command pushes your local commits to the `main` branch of the remote GitHub repository and sets the upstream branch.
Once completed, your code will be available on GitHub, and any GitHub Actions workflow you've configured will run automatically.
> [!NOTE]
> Learn more about the Git commands used in this step:
>
> - [Git add](https://git-scm.com/docs/git-add): Stage changes (new, modified, deleted) for commit
> - [Git commit](https://git-scm.com/docs/git-commit): Save a snapshot of your staged changes
> - [Git push](https://git-scm.com/docs/git-push): Upload local commits to your GitHub repository
> - [Git remote](https://git-scm.com/docs/git-remote): View and manage remote repository URLs
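If you want to rehearse the stage/commit/push flow without touching your real repository, you can run it against a throwaway local bare "remote" (paths, file contents, and the commit message are illustrative; `--initial-branch` requires Git 2.28 or later):

```bash
# Rehearse the add/commit/push flow against a local throwaway "remote"
tmp=$(mktemp -d)
git init --quiet --bare "$tmp/remote.git"
git init --quiet --initial-branch=main "$tmp/work"
cd "$tmp/work"
git config user.email "you@example.com"
git config user.name "Your Name"
git remote add origin "$tmp/remote.git"

echo "console.log('hello');" > app.js
git add -A                                  # stage everything
git commit --quiet -m "Initial commit with CI/CD pipeline"
git push --quiet -u origin main             # push and set the upstream branch
git remote -v                               # shows the fetch/push URLs
```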
---
### Step 2: Set up the workflow
Now you'll create a GitHub Actions workflow that builds your Docker image, runs tests, and pushes the image to Docker Hub.
1. From your repository on GitHub, select the **Actions** tab in the top menu.
2. When prompted, select **Set up a workflow yourself**.
This opens an inline editor to create a new workflow file. By default, it will be saved to:
`.github/workflows/main.yml`
3. Add the following workflow configuration to the new file:
```yaml
name: CI/CD Node.js Application with Docker

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
    types: [opened, synchronize, reopened]

jobs:
  test:
    name: Run Node.js Tests
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_DB: todoapp_test
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Cache npm dependencies
        uses: actions/cache@v4
        with:
          path: ~/.npm
          key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
          restore-keys: ${{ runner.os }}-npm-

      - name: Build test image
        uses: docker/build-push-action@v6
        with:
          context: .
          target: test
          tags: nodejs-app-test:latest
          platforms: linux/amd64
          load: true
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache,mode=max

      - name: Run tests inside container
        run: |
          docker run --rm \
            --network host \
            -e NODE_ENV=test \
            -e POSTGRES_HOST=localhost \
            -e POSTGRES_PORT=5432 \
            -e POSTGRES_DB=todoapp_test \
            -e POSTGRES_USER=postgres \
            -e POSTGRES_PASSWORD=postgres \
            nodejs-app-test:latest
        env:
          CI: true
        timeout-minutes: 10

  build-and-push:
    name: Build and Push Docker Image
    runs-on: ubuntu-latest
    needs: test

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Cache Docker layers
        uses: actions/cache@v4
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: ${{ runner.os }}-buildx-

      - name: Extract metadata
        id: meta
        run: |
          echo "REPO_NAME=${GITHUB_REPOSITORY##*/}" >> "$GITHUB_OUTPUT"
          echo "SHORT_SHA=${GITHUB_SHA::7}" >> "$GITHUB_OUTPUT"

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push multi-arch production image
        uses: docker/build-push-action@v6
        with:
          context: .
          target: production
          push: true
          platforms: linux/amd64,linux/arm64
          tags: |
            ${{ secrets.DOCKER_USERNAME }}/${{ secrets.DOCKERHUB_PROJECT_NAME }}:latest
            ${{ secrets.DOCKER_USERNAME }}/${{ secrets.DOCKERHUB_PROJECT_NAME }}:${{ steps.meta.outputs.SHORT_SHA }}
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache,mode=max
```
This workflow performs the following tasks for your Node.js application:
- Triggers on every `push` or `pull request` to the `main` branch.
- Builds a test Docker image using the `test` stage.
- Runs tests in a containerized environment.
- Stops the workflow if any test fails.
- Caches Docker build layers and npm dependencies for faster runs.
- Authenticates with Docker Hub using GitHub secrets.
- Builds an image using the `production` stage.
- Tags and pushes the image to Docker Hub with `latest` and short SHA tags.
> [!NOTE]
> For more information about `docker/build-push-action`, refer to the [GitHub Action README](https://github.com/docker/build-push-action/blob/master/README.md).
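The `Extract metadata` step relies on two standard Bash parameter expansions. You can try them locally with example values (GitHub Actions sets `GITHUB_REPOSITORY` and `GITHUB_SHA` for real runs):

```bash
# Try the parameter expansions from the "Extract metadata" step locally.
# Example values only; the real values come from the Actions environment.
GITHUB_REPOSITORY="your-username/docker-nodejs-sample"
GITHUB_SHA="b11f7a5a52d211ce0784aabbccddeeff00112233"

echo "${GITHUB_REPOSITORY##*/}"   # drops everything through the last "/": docker-nodejs-sample
echo "${GITHUB_SHA::7}"           # first 7 characters: b11f7a5
```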
---
### Step 3: Run the workflow
After adding your workflow file, trigger the CI/CD process.
1. Commit and push your workflow file.
From the GitHub editor, select **Commit changes…**. This push automatically triggers the GitHub Actions pipeline.
2. Monitor the workflow execution
1. From your GitHub repository, go to the **Actions** tab.
2. Select the workflow run to follow each job: **Run Node.js Tests** and, if the tests pass, **Build and Push Docker Image**.
3. Verify the Docker image on Docker Hub
- After a successful workflow run, visit your [Docker Hub repositories](https://hub.docker.com/repositories).
- You should see a new image under your repository with:
- Repository name: `${your-repository-name}`
- Tags include:
- `latest`: the most recent successful build, ideal for quick testing or deployment.
- `<short-sha>`: a unique identifier based on the commit hash, useful for version tracking, rollbacks, and traceability.
> [!TIP] Protect your main branch
> To maintain code quality and prevent accidental direct pushes, enable branch protection rules:
>
> - From your GitHub repository, go to **Settings → Branches**.
> - Under Branch protection rules, select **Add rule**.
> - Specify `main` as the branch name.
> - Enable options like:
> - _Require a pull request before merging_.
> - _Require status checks to pass before merging_.
>
> This ensures that only tested and reviewed code is merged into the `main` branch.
---
## Summary
In this section, you set up a comprehensive CI/CD pipeline for your containerized Node.js application using GitHub Actions.
What you accomplished:
- Created a new GitHub repository specifically for your project.
- Generated a Docker Hub access token and added it as a GitHub secret.
- Created a GitHub Actions workflow that:
- Builds your application in a Docker container.
- Runs tests in a containerized environment.
- Pushes an image to Docker Hub if tests pass.
- Verified the workflow runs successfully.
Your Node.js application now has automated testing and deployment.
---
## Related resources
Deepen your understanding of automation and best practices for containerized apps:
- [Introduction to GitHub Actions](/guides/gha.md): Learn how GitHub Actions automate your workflows
- [Docker Build GitHub Actions](/manuals/build/ci/github-actions/_index.md): Set up container builds with GitHub Actions
- [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions): Full reference for writing GitHub workflows
- [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry): Learn about GHCR features and usage
- [Best practices for writing Dockerfiles](/develop/develop-images/dockerfile_best-practices/): Optimize your image for performance and security
---
## Next steps
Next, learn how you can deploy your containerized Node.js application to Kubernetes with production-ready configuration. This helps you ensure your application behaves as expected in a production-like environment, reducing surprises during deployment.

File diff suppressed because it is too large

View File

@@ -1,9 +1,9 @@
---
title: Test your Node.js deployment
linkTitle: Test your deployment
title: Deploy your Node.js application
linkTitle: Deploy your app
weight: 50
keywords: deploy, kubernetes, node, node.js
description: Learn how to deploy locally to test and debug your Kubernetes deployment
keywords: deploy, kubernetes, node, node.js, production
description: Learn how to deploy your containerized Node.js application to Kubernetes with production-ready configuration
aliases:
- /language/nodejs/deploy/
- /guides/language/nodejs/deploy/
@@ -16,128 +16,579 @@ aliases:
## Overview
In this section, you'll learn how to use Docker Desktop to deploy your
application to a fully-featured Kubernetes environment on your development
machine. This allows you to test and debug your workloads on Kubernetes locally
before deploying.
In this section, you'll learn how to deploy your containerized Node.js application to Kubernetes using Docker Desktop. This deployment uses production-ready configurations including security hardening, auto-scaling, persistent storage, and high availability features.
## Create a Kubernetes YAML file
You'll deploy a complete stack including:
In the cloned repository's directory, create a file named
`docker-node-kubernetes.yaml`. Open the file in an IDE or text editor and add
the following contents. Replace `DOCKER_USERNAME/REPO_NAME` with your Docker
username and the name of the repository that you created in [Configure CI/CD for
your Node.js application](configure-ci-cd.md).
- Node.js Todo application with 3 replicas.
- PostgreSQL database with persistent storage.
- Auto-scaling based on CPU and memory usage.
- Ingress configuration for external access.
- Security settings.
## Create a Kubernetes deployment file
Create a new file called `nodejs-sample-kubernetes.yaml` in your project root:
```yaml
# ========================================
# Node.js Todo App - Kubernetes Deployment
# ========================================
apiVersion: v1
kind: Namespace
metadata:
name: todoapp
labels:
app: todoapp
---
# ========================================
# ConfigMap for Application Configuration
# ========================================
apiVersion: v1
kind: ConfigMap
metadata:
name: todoapp-config
namespace: todoapp
data:
NODE_ENV: 'production'
ALLOWED_ORIGINS: 'https://yourdomain.com'
POSTGRES_HOST: 'todoapp-postgres'
POSTGRES_PORT: '5432'
POSTGRES_DB: 'todoapp'
POSTGRES_USER: 'todoapp'
---
# ========================================
# Secret for Database Credentials
# ========================================
apiVersion: v1
kind: Secret
metadata:
name: todoapp-secrets
namespace: todoapp
type: Opaque
data:
postgres-password: dG9kb2FwcF9wYXNzd29yZA== # base64 encoded "todoapp_password"
---
# ========================================
# PostgreSQL PersistentVolumeClaim
# ========================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
namespace: todoapp
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: standard
---
# ========================================
# PostgreSQL Deployment
# ========================================
apiVersion: apps/v1
kind: Deployment
metadata:
name: docker-nodejs-demo
namespace: default
name: todoapp-postgres
namespace: todoapp
labels:
app: todoapp-postgres
spec:
replicas: 1
selector:
matchLabels:
todo: web
app: todoapp-postgres
template:
metadata:
labels:
todo: web
app: todoapp-postgres
spec:
containers:
- name: todo-site
image: DOCKER_USERNAME/REPO_NAME
imagePullPolicy: Always
- name: postgres
image: postgres:16-alpine
ports:
- containerPort: 5432
name: postgres
env:
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: todoapp-config
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: todoapp-config
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: todoapp-secrets
key: postgres-password
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
livenessProbe:
exec:
command:
- pg_isready
- -U
- todoapp
- -d
- todoapp
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
exec:
command:
- pg_isready
- -U
- todoapp
- -d
- todoapp
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: postgres-storage
persistentVolumeClaim:
claimName: postgres-pvc
---
# ========================================
# PostgreSQL Service
# ========================================
apiVersion: v1
kind: Service
metadata:
name: todo-entrypoint
namespace: default
name: todoapp-postgres
namespace: todoapp
labels:
app: todoapp-postgres
spec:
type: NodePort
selector:
todo: web
type: ClusterIP
ports:
- port: 3000
- port: 5432
targetPort: 5432
protocol: TCP
name: postgres
selector:
app: todoapp-postgres
---
# ========================================
# Application Deployment
# ========================================
apiVersion: apps/v1
kind: Deployment
metadata:
name: todoapp-deployment
namespace: todoapp
labels:
app: todoapp
spec:
replicas: 3
selector:
matchLabels:
app: todoapp
template:
metadata:
labels:
app: todoapp
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1001
fsGroup: 1001
containers:
- name: todoapp
image: ghcr.io/your-username/docker-nodejs-sample:latest
imagePullPolicy: Always
ports:
- containerPort: 3000
name: http
protocol: TCP
env:
- name: NODE_ENV
valueFrom:
configMapKeyRef:
name: todoapp-config
key: NODE_ENV
- name: ALLOWED_ORIGINS
valueFrom:
configMapKeyRef:
name: todoapp-config
key: ALLOWED_ORIGINS
- name: POSTGRES_HOST
valueFrom:
configMapKeyRef:
name: todoapp-config
key: POSTGRES_HOST
- name: POSTGRES_PORT
valueFrom:
configMapKeyRef:
name: todoapp-config
key: POSTGRES_PORT
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: todoapp-config
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: todoapp-config
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: todoapp-secrets
key: postgres-password
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 5
periodSeconds: 5
resources:
requests:
memory: '256Mi'
cpu: '250m'
limits:
memory: '512Mi'
cpu: '500m'
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
---
# ========================================
# Application Service
# ========================================
apiVersion: v1
kind: Service
metadata:
name: todoapp-service
namespace: todoapp
labels:
app: todoapp
spec:
type: ClusterIP
ports:
- name: http
port: 80
targetPort: 3000
protocol: TCP
selector:
app: todoapp
---
# ========================================
# Ingress for External Access
# ========================================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: todoapp-ingress
namespace: todoapp
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
cert-manager.io/cluster-issuer: 'letsencrypt-prod'
spec:
tls:
- hosts:
- yourdomain.com
secretName: todoapp-tls
rules:
- host: yourdomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: todoapp-service
port:
number: 80
---
# ========================================
# HorizontalPodAutoscaler
# ========================================
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: todoapp-hpa
namespace: todoapp
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: todoapp-deployment
minReplicas: 1
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
---
# ========================================
# PodDisruptionBudget
# ========================================
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: todoapp-pdb
namespace: todoapp
spec:
minAvailable: 1
selector:
matchLabels:
app: todoapp
```
In this Kubernetes YAML file, there are two objects, separated by the `---`:
## Configure the deployment
- A Deployment, describing a scalable group of identical pods. In this case,
you'll get just one replica, or copy of your pod. That pod, which is
described under `template`, has just one container in it. The container is
created from the image built by GitHub Actions in [Configure CI/CD for your
Node.js application](configure-ci-cd.md).
- A NodePort service, which will route traffic from port 30001 on your host to
port 3000 inside the pods it routes to, allowing you to reach your app
from the network.
Before deploying, you need to customize the deployment file for your environment:
To learn more about Kubernetes objects, see the [Kubernetes documentation](https://kubernetes.io/docs/home/).
1. **Image reference**: Replace `your-username` with your GitHub username or Docker Hub username:
## Deploy and check your application
```yaml
image: ghcr.io/your-username/docker-nodejs-sample:latest
```
1. In a terminal, navigate to where you created `docker-node-kubernetes.yaml`
and deploy your application to Kubernetes.
2. **Domain name**: Replace `yourdomain.com` with your actual domain in two places:
```yaml
# In ConfigMap
ALLOWED_ORIGINS: "https://yourdomain.com"
# In Ingress
- host: yourdomain.com
```
3. **Database password** (optional): The default password is already base64 encoded. To change it:
```console
$ kubectl apply -f docker-node-kubernetes.yaml
$ echo -n "your-new-password" | base64
```
You should see output that looks like the following, indicating your Kubernetes objects were created successfully.
Then update the Secret:
```shell
deployment.apps/docker-nodejs-demo created
service/todo-entrypoint created
```yaml
data:
postgres-password: <your-base64-encoded-password>
```
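As a sanity check, you can round-trip the sample default through `base64` to confirm it matches the value already in the Secret (the password shown is the sample's default, not a recommendation for production):

```bash
# Round-trip the sample default password through base64
encoded=$(printf '%s' "todoapp_password" | base64)
echo "$encoded"                      # dG9kb2FwcF9wYXNzd29yZA== (the value in the Secret)
printf '%s' "$encoded" | base64 -d   # prints: todoapp_password
```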
2. Make sure everything worked by listing your deployments.
4. **Storage class**: Adjust based on your cluster (current: `standard`)
## Understanding the deployment
The deployment file creates a complete application stack with multiple components working together.
### Architecture
The deployment includes:
- **Node.js application**: Runs 3 replicas of your containerized Todo app
- **PostgreSQL database**: Single instance with 10Gi of persistent storage
- **Services**: Kubernetes services handle load balancing across application replicas
- **Ingress**: External access through an ingress controller with SSL/TLS support
### Security
The deployment uses several security features:
- Containers run as a non-root user (UID 1001)
- Read-only root filesystem prevents unauthorized writes
- Linux capabilities are dropped to minimize attack surface
- Sensitive data like database passwords are stored in Kubernetes secrets
### High availability
To keep your application running reliably:
- Three application replicas ensure service continues if one pod fails
- Pod disruption budget maintains at least one available pod during updates
- Rolling updates allow zero-downtime deployments
- Health checks on the `/health` endpoint ensure only healthy pods receive traffic
### Auto-scaling
The Horizontal Pod Autoscaler scales your application based on resource usage:
- Scales between 1 and 5 replicas automatically
- Triggers scaling when CPU usage exceeds 70%
- Triggers scaling when memory usage exceeds 80%
- Resource limits: 256Mi-512Mi memory, 250m-500m CPU per pod
### Data persistence
PostgreSQL data is stored persistently:
- 10Gi persistent volume stores database files
- Database initializes automatically on first startup
- Data persists across pod restarts and updates
## Deploy your application
### Step 1: Deploy to Kubernetes
Deploy your application to the local Kubernetes cluster:
```console
$ kubectl apply -f nodejs-sample-kubernetes.yaml
```
You should see output confirming all resources were created:
```shell
namespace/todoapp created
secret/todoapp-secrets created
configmap/todoapp-config created
persistentvolumeclaim/postgres-pvc created
deployment.apps/todoapp-postgres created
service/todoapp-postgres created
deployment.apps/todoapp-deployment created
service/todoapp-service created
ingress.networking.k8s.io/todoapp-ingress created
poddisruptionbudget.policy/todoapp-pdb created
horizontalpodautoscaler.autoscaling/todoapp-hpa created
```
### Step 2: Verify the deployment
Check that your deployments are running:
```console
$ kubectl get deployments -n todoapp
```
Expected output:
```shell
NAME READY UP-TO-DATE AVAILABLE AGE
todoapp-deployment 3/3 3 3 30s
todoapp-postgres 1/1 1 1 30s
```
Verify your services are created:
```console
$ kubectl get services -n todoapp
```
Expected output:
```shell
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
todoapp-service ClusterIP 10.111.101.229 <none> 80/TCP 45s
todoapp-postgres ClusterIP 10.111.102.130 <none> 5432/TCP 45s
```
Check that persistent storage is working:
```console
$ kubectl get pvc -n todoapp
```
Expected output:
```shell
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-pvc Bound pvc-12345678-1234-1234-1234-123456789012 10Gi RWO standard 1m
```
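The `postgres-pvc` claim shown above corresponds to a standard PersistentVolumeClaim. A sketch matching the name, capacity, and access mode in that output (the storage class is left to the cluster default):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: todoapp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```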
### Step 3: Access your application
For local testing, use port forwarding to access your application:
```console
$ kubectl port-forward -n todoapp service/todoapp-service 8080:80
```
Open your browser and visit [http://localhost:8080](http://localhost:8080) to see your Todo application running in Kubernetes.
### Step 4: Test the deployment
Test that your application is working correctly:
1. **Add some todos** through the web interface
2. **Check application pods**:
```console
$ kubectl get pods -n todoapp -l app=todoapp
```
You should see three pods in the `Running` state, one for each replica.
3. **View application logs**:
```console
$ kubectl logs -f deployment/todoapp-deployment -n todoapp
```
4. **Check database connectivity**:
```console
$ kubectl get pods -n todoapp -l app=todoapp-postgres
```
5. **Monitor auto-scaling**:
```console
$ kubectl describe hpa todoapp-hpa -n todoapp
```
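If you want to script the pod check from step 2, for example in CI, you can count `Running` pods from the `kubectl` output. A sketch using sample output; the pod names are hypothetical, and in a real cluster you would pipe `kubectl get pods -n todoapp -l app=todoapp` instead of the here-string:

```shell
# Sample `kubectl get pods` output; replace with the real command in CI.
sample_output='NAME                                  READY   STATUS    RESTARTS   AGE
todoapp-deployment-6d4b9c7f8-abcde    1/1     Running   0          45s
todoapp-deployment-6d4b9c7f8-fghij    1/1     Running   0          45s
todoapp-deployment-6d4b9c7f8-klmno    1/1     Running   0          45s'

# Count data rows whose STATUS column is Running (skip the header row).
running=$(printf '%s\n' "$sample_output" | awk 'NR > 1 && $3 == "Running" { c++ } END { print c + 0 }')
echo "Running pods: $running"

# Fail the script if fewer than the 3 expected replicas are up.
[ "$running" -ge 3 ] || { echo "expected 3 running pods, got $running" >&2; exit 1; }
```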
### Step 5: Clean up
When you're done testing, remove the deployment:
```console
$ kubectl delete -f nodejs-sample-kubernetes.yaml
```
## Summary
You've deployed your containerized Node.js application to a fully-featured Kubernetes environment on your development machine. You learned how to:
- Create a comprehensive Kubernetes deployment file with security hardening
- Deploy a multi-tier application (Node.js + PostgreSQL) with persistent storage
- Configure auto-scaling, health checks, and high availability features
- Test and monitor your deployment locally using Docker Desktop's Kubernetes
Your application is now running in a production-like environment with enterprise-grade features including security contexts, resource management, and automatic scaling.
---
## Related resources
Explore official references and best practices to sharpen your Kubernetes deployment workflow:
- [Kubernetes documentation](https://kubernetes.io/docs/home/) Learn about core concepts, workloads, services, and more.
- [Deploy on Kubernetes with Docker Desktop](/manuals/desktop/use-desktop/kubernetes.md) Use Docker Desktop's built-in Kubernetes support for local testing and development.
- [`kubectl` CLI reference](https://kubernetes.io/docs/reference/kubectl/) Manage Kubernetes clusters from the command line.
- [Kubernetes Deployment resource](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) Understand how to manage and scale applications using Deployments.
- [Kubernetes Service resource](https://kubernetes.io/docs/concepts/services-networking/service/) Learn how to expose your application to internal and external traffic.


---
title: Use containers for Node.js development
linkTitle: Develop your app
weight: 30
keywords: node, node.js, development
description: Learn how to develop your Node.js application locally using containers.
aliases:
In this section, you'll learn how to set up a development environment for your containerized application.
## Add a local database and persist data
The application uses PostgreSQL for data persistence. Add a database service to your Docker Compose configuration.
### Add database service to Docker Compose
If you haven't already created a `compose.yml` file in the previous section, or if you need to add the database service, update your `compose.yml` file to include the PostgreSQL database service:
```yaml {collapse=true,title=compose.yml}
services:
  # ... existing app services ...

  # ========================================
  # PostgreSQL Database Service
  # ========================================
  db:
    image: postgres:16-alpine
    container_name: todoapp-db
    environment:
      POSTGRES_DB: '${POSTGRES_DB:-todoapp}'
      POSTGRES_USER: '${POSTGRES_USER:-todoapp}'
      POSTGRES_PASSWORD: '${POSTGRES_PASSWORD:-todoapp_password}'
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - '${DB_PORT:-5432}:5432'
    restart: unless-stopped
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U ${POSTGRES_USER:-todoapp} -d ${POSTGRES_DB:-todoapp}']
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 5s
    networks:
      - todoapp-network

# ========================================
# Volume Configuration
# ========================================
volumes:
  postgres_data:
    name: todoapp-postgres-data
    driver: local

# ========================================
# Network Configuration
# ========================================
networks:
  todoapp-network:
    name: todoapp-network
    driver: bridge
```
### Update your application service
Make sure your application service in `compose.yml` is configured to connect to the database:
```yaml {hl_lines="18-20,42-44",collapse=true,title=compose.yml}
services:
app-dev:
build:
context: .
dockerfile: Dockerfile
target: development
container_name: todoapp-dev
ports:
- '${APP_PORT:-3000}:3000' # API server
- '${VITE_PORT:-5173}:5173' # Vite dev server
- '${DEBUG_PORT:-9229}:9229' # Node.js debugger
environment:
NODE_ENV: development
DOCKER_ENV: 'true'
POSTGRES_HOST: db
POSTGRES_PORT: 5432
POSTGRES_DB: todoapp
POSTGRES_USER: todoapp
POSTGRES_PASSWORD: '${POSTGRES_PASSWORD:-todoapp_password}'
ALLOWED_ORIGINS: '${ALLOWED_ORIGINS:-http://localhost:3000,http://localhost:5173}'
volumes:
- ./src:/app/src:ro
- ./package.json:/app/package.json
- ./vite.config.ts:/app/vite.config.ts:ro
- ./tailwind.config.js:/app/tailwind.config.js:ro
- ./postcss.config.js:/app/postcss.config.js:ro
depends_on:
db:
condition: service_healthy
develop:
watch:
- action: sync
path: ./src
target: /app/src
ignore:
- '**/*.test.*'
- '**/__tests__/**'
- action: rebuild
path: ./package.json
- action: sync
path: ./vite.config.ts
target: /app/vite.config.ts
- action: sync
path: ./tailwind.config.js
target: /app/tailwind.config.js
- action: sync
path: ./postcss.config.js
target: /app/postcss.config.js
restart: unless-stopped
networks:
- todoapp-network
db:
image: postgres:16-alpine
container_name: todoapp-db
environment:
POSTGRES_DB: '${POSTGRES_DB:-todoapp}'
POSTGRES_USER: '${POSTGRES_USER:-todoapp}'
POSTGRES_PASSWORD: '${POSTGRES_PASSWORD:-todoapp_password}'
volumes:
- postgres_data:/var/lib/postgresql/data
ports:
- '${DB_PORT:-5432}:5432'
restart: unless-stopped
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U ${POSTGRES_USER:-todoapp} -d ${POSTGRES_DB:-todoapp}']
interval: 10s
timeout: 5s
retries: 5
start_period: 5s
networks:
- todoapp-network
volumes:
postgres_data:
name: todoapp-postgres-data
driver: local
networks:
todoapp-network:
name: todoapp-network
driver: bridge
```
The PostgreSQL database is created and initialized automatically when the application starts, with data persisted using the `postgres_data` volume.
1. Configure your environment by copying the example file:
```console
$ cp .env.example .env
```
Update the `.env` file with your preferred settings:
```env
# Application Configuration
NODE_ENV=development
APP_PORT=3000
VITE_PORT=5173
DEBUG_PORT=9230
# Database Configuration
POSTGRES_HOST=db
POSTGRES_PORT=5432
POSTGRES_DB=todoapp
POSTGRES_USER=todoapp
POSTGRES_PASSWORD=todoapp_password
# Security Configuration
ALLOWED_ORIGINS=http://localhost:3000,http://localhost:5173
```
1. Run the following command to start your application in development mode:
```console
$ docker compose up app-dev --build
```
1. Open a browser and verify that the application is running at [http://localhost:5173](http://localhost:5173) for the frontend or [http://localhost:3000](http://localhost:3000) for the API. The React frontend is served by the Vite dev server on port 5173, with API calls proxied to the Express server on port 3000.
1. Add some items to the todo list to test data persistence.
1. After adding some items to the todo list, press `CTRL + C` in the terminal to stop your application.
1. Run the application again:
```console
$ docker compose up app-dev
```
1. Refresh [http://localhost:5173](http://localhost:5173) in your browser and verify that the todo items persisted, even after the container was stopped and started again.
## Configure and run a development container
In addition to adding a bind mount, you can configure your Dockerfile and `compose.yml` file to run a development container.
### Update your Dockerfile for development
Your Dockerfile should be configured as a multi-stage build with separate stages for development, production, and testing. If you followed the previous section, your Dockerfile already includes a development stage that installs all development dependencies and runs the application with hot reload enabled.
Here's the development stage from your multi-stage Dockerfile:
```dockerfile {collapse=true,title=Dockerfile}
# syntax=docker/dockerfile:1

# ========================================
# Development Stage
# ========================================
FROM build-deps AS development

# Set environment
ENV NODE_ENV=development \
    NPM_CONFIG_LOGLEVEL=warn

# Copy source files
COPY . .

# Ensure all directories have proper permissions
RUN mkdir -p /app/node_modules/.vite && \
    chown -R nodejs:nodejs /app && \
    chmod -R 755 /app

# Switch to non-root user
USER nodejs

# Expose ports
EXPOSE 3000 5173 9229

# Start development server
CMD ["npm", "run", "dev:docker"]
```
The development stage:
- Installs all dependencies, including dev dependencies
- Exposes ports for the API server (3000), Vite dev server (5173), and Node.js debugger (9229)
- Runs `npm run dev:docker`, which starts both the Express server and Vite dev server concurrently
- Includes health checks for monitoring container status
To learn more about multi-stage builds, see [Multi-stage builds](/manuals/build/building/multi-stage.md).
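For reference, the npm scripts behind this look roughly like the following sketch. Only the `dev:server` command is quoted verbatim later in this guide; the other entries are assumptions about how the sample wires `concurrently`, `tsx`, and Vite together:

```json
{
  "scripts": {
    "dev": "concurrently \"npm run dev:server\" \"npm run dev:client\"",
    "dev:server": "tsx watch --inspect=0.0.0.0:9230 src/server/index.ts",
    "dev:client": "vite",
    "dev:docker": "concurrently \"npm run dev:server\" \"npm run dev:client\""
  }
}
```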
Next, you'll need to update your Compose file to use the new stage.
### Update your Compose file for development
Update your `compose.yml` file to run the development stage with bind mounts for hot reloading:
```yaml {hl_lines=[5,8-10,20-27],collapse=true,title=compose.yml}
services:
  app-dev:
    build:
      context: .
      dockerfile: Dockerfile
      target: development
    container_name: todoapp-dev
    ports:
      - '${APP_PORT:-3000}:3000' # API server
      - '${VITE_PORT:-5173}:5173' # Vite dev server
      - '${DEBUG_PORT:-9229}:9229' # Node.js debugger
    environment:
      NODE_ENV: development
      DOCKER_ENV: 'true'
      POSTGRES_HOST: db
      POSTGRES_PORT: 5432
      POSTGRES_DB: todoapp
      POSTGRES_USER: todoapp
      POSTGRES_PASSWORD: '${POSTGRES_PASSWORD:-todoapp_password}'
      ALLOWED_ORIGINS: '${ALLOWED_ORIGINS:-http://localhost:3000,http://localhost:5173}'
    volumes:
      - ./src:/app/src:ro
      - ./package.json:/app/package.json
      - ./vite.config.ts:/app/vite.config.ts:ro
      - ./tailwind.config.js:/app/tailwind.config.js:ro
      - ./postcss.config.js:/app/postcss.config.js:ro
    depends_on:
      db:
        condition: service_healthy
    develop:
      watch:
        - action: sync
          path: ./src
          target: /app/src
          ignore:
            - '**/*.test.*'
            - '**/__tests__/**'
        - action: rebuild
          path: ./package.json
        - action: sync
          path: ./vite.config.ts
          target: /app/vite.config.ts
        - action: sync
          path: ./tailwind.config.js
          target: /app/tailwind.config.js
        - action: sync
          path: ./postcss.config.js
          target: /app/postcss.config.js
    restart: unless-stopped
    networks:
      - todoapp-network
```
Key features of the development configuration:
- **Multi-port exposure**: API server (3000), Vite dev server (5173), and debugger (9229)
- **Comprehensive bind mounts**: Source code, configuration files, and package files for hot reloading
- **Environment variables**: Configurable through `.env` file or defaults
- **PostgreSQL database**: Production-ready database with persistent storage
- **Docker Compose watch**: Automatic file synchronization and container rebuilds
- **Health checks**: Database health monitoring with automatic dependency management
### Run your development container and debug your application
Run the following command to run your application with the development configuration:
```console
$ docker compose up app-dev --build
```
Or with file watching for automatic updates:
```console
$ docker compose up app-dev --watch
```
For local development without Docker:
```console
$ npm run dev:with-db
```
Or start services separately:
```console
$ npm run db:start # Start PostgreSQL container
$ npm run dev # Start both server and client
```
### Using Task Runner (alternative)
The project includes a Taskfile.yml for advanced workflows:
```console
# Development
$ task dev # Start development environment
$ task dev:build # Build development image
$ task dev:run # Run development container
# Production
$ task build # Build production image
$ task run # Run production container
$ task build-run # Build and run in one step
# Testing
$ task test # Run all tests
$ task test:unit # Run unit tests with coverage
$ task test:lint # Run linting
# Kubernetes
$ task k8s:deploy # Deploy to Kubernetes
$ task k8s:status # Check deployment status
$ task k8s:logs # View pod logs
# Utilities
$ task clean # Clean up containers and images
$ task health # Check application health
$ task logs # View container logs
```
The application will start with both the Express API server and Vite development server:
- **API Server**: [http://localhost:3000](http://localhost:3000) - Express.js backend with REST API
- **Frontend**: [http://localhost:5173](http://localhost:5173) - Vite dev server with hot module replacement
- **Health Check**: [http://localhost:3000/health](http://localhost:3000/health) - Application health status
Any changes to the application's source files on your local machine will now be immediately reflected in the running container thanks to the bind mounts.
Try making a change to test hot reloading:
1. Open `src/client/components/TodoApp.tsx` in an IDE or text editor.
1. Update the main heading text:
```diff
- <h1 className="text-3xl font-bold text-gray-900 mb-8">
- Modern Todo App
- </h1>
+ <h1 className="text-3xl font-bold text-gray-900 mb-8">
+ My Todo App
+ </h1>
```
1. Save the file and the Vite dev server will automatically reload the page with your changes.
**Debugging support:**
You can connect a debugger to your application on port 9229. The Node.js inspector is enabled with `--inspect=0.0.0.0:9230` in the development script (`dev:server`).
### VS Code debugger setup
1. Create a launch configuration in `.vscode/launch.json`:
```json
{
"version": "0.2.0",
"configurations": [
{
"name": "Attach to Docker Container",
"type": "node",
"request": "attach",
"port": 9229,
"address": "localhost",
"localRoot": "${workspaceFolder}",
"remoteRoot": "/app",
"protocol": "inspector",
"restart": true,
"sourceMaps": true,
"skipFiles": ["<node_internals>/**"]
}
]
}
```
1. Start your development container:
```console
$ docker compose up app-dev --build
```
1. Attach the debugger:
- Open VS Code
- From the Debug panel (Ctrl/Cmd + Shift + D), select **Attach to Docker Container** from the drop-down
- Select the green play button or press F5
### Chrome DevTools (alternative)
You can also use Chrome DevTools for debugging:
1. Start your container (if not already running):
```console
$ docker compose up app-dev --build
```
1. Open Chrome and go to `chrome://inspect`.
1. From the **Configure** option, add:
```text
localhost:9229
```
1. When your Node.js target appears, select **inspect**.
### Debugging configuration details
The debugger configuration:
- **Container port**: 9230 (internal debugger port)
- **Host port**: 9229 (mapped external port)
- **Script**: `tsx watch --inspect=0.0.0.0:9230 src/server/index.ts`
The debugger listens on all interfaces (`0.0.0.0`) inside the container on port 9230 and is accessible on port 9229 from your host machine.
### Troubleshooting debugger connection
If the debugger doesn't connect:
1. Check if the container is running:
```console
$ docker ps
```
1. Check if the port is exposed:
```console
$ docker port todoapp-dev
```
1. Check container logs:
```console
$ docker compose logs app-dev
```
You should see a message like:
```text
Debugger listening on ws://0.0.0.0:9230/...
```
Now you can set breakpoints in your TypeScript source files and debug your containerized Node.js application.
For more details about Node.js debugging, see the [Node.js documentation](https://nodejs.org/en/docs/guides/debugging-getting-started).
## Summary
You've set up your Compose file with a PostgreSQL database and data persistence. You also created a multi-stage Dockerfile and configured bind mounts for development.
Related information:


Complete all the previous sections of this guide, starting with containerizing a Node.js application.
## Overview
Testing is a core part of building reliable software. Whether you're writing unit tests, integration tests, or end-to-end tests, running them consistently across environments matters. Docker makes this easy by giving you the same setup locally, in CI/CD, and during image builds.
## Run tests when developing locally
The sample application uses Vitest for testing, and it already includes tests for React components, custom hooks, API routes, database operations, and utility functions.
### Run tests locally (without Docker)
Run the test script from the `package.json` file directly on your machine:
```console
$ npm run test
```
### Add test service to Docker Compose
To run tests in a containerized environment, you need to add a dedicated test service to your `compose.yml` file. Add the following service configuration:
```yaml
services:
# ... existing services ...
# ========================================
# Test Service
# ========================================
app-test:
build:
context: .
dockerfile: Dockerfile
target: test
container_name: todoapp-test
environment:
NODE_ENV: test
POSTGRES_HOST: db
POSTGRES_PORT: 5432
POSTGRES_DB: todoapp_test
POSTGRES_USER: todoapp
POSTGRES_PASSWORD: '${POSTGRES_PASSWORD:-todoapp_password}'
depends_on:
db:
condition: service_healthy
command: ['npm', 'run', 'test:coverage']
networks:
- todoapp-network
profiles:
- test
```
This test service configuration:
- **Builds from test stage**: Uses the `test` target from your multi-stage Dockerfile
- **Isolated test database**: Uses a separate `todoapp_test` database for testing
- **Profile-based**: Uses the `test` profile so it only runs when explicitly requested
- **Health dependency**: Waits for the database to be healthy before starting tests
### Run tests in a container
You can run tests using the dedicated test service:
```console
$ docker compose up app-test --build
```
Or run tests against the development service:
```console
$ docker compose run --rm app-dev npm run test
```
For a one-off test run with coverage:
```console
$ docker compose run --rm app-dev npm run test:coverage
```
### Run tests with coverage
To generate a coverage report:
```console
$ npm run test:coverage
```
You should see output like the following:
```console
> docker-nodejs-sample@1.0.0 test
> vitest --run
✓ src/server/__tests__/routes/todos.test.ts (5 tests) 16ms
✓ src/shared/utils/__tests__/validation.test.ts (15 tests) 6ms
✓ src/client/components/__tests__/LoadingSpinner.test.tsx (8 tests) 67ms
✓ src/server/database/__tests__/postgres.test.ts (13 tests) 136ms
✓ src/client/components/__tests__/ErrorMessage.test.tsx (8 tests) 127ms
✓ src/client/components/__tests__/TodoList.test.tsx (8 tests) 147ms
✓ src/client/components/__tests__/TodoItem.test.tsx (8 tests) 218ms
✓ src/client/__tests__/App.test.tsx (13 tests) 259ms
✓ src/client/components/__tests__/AddTodoForm.test.tsx (12 tests) 323ms
✓ src/client/hooks/__tests__/useTodos.test.ts (11 tests) 569ms
Test Files 9 passed (9)
Tests 88 passed (88)
Start at 20:57:19
Duration 4.41s (transform 1.79s, setup 2.66s, collect 5.38s, tests 4.61s, environment 14.07s, prepare 4.34s)
```
### Test structure
The test suite covers:
- **Client Components** (`src/client/components/__tests__/`): React component testing with React Testing Library
- **Custom Hooks** (`src/client/hooks/__tests__/`): React hooks testing with proper mocking
- **Server Routes** (`src/server/__tests__/routes/`): API endpoint testing
- **Database Layer** (`src/server/database/__tests__/`): PostgreSQL database operations testing
- **Utility Functions** (`src/shared/utils/__tests__/`): Validation and helper function testing
- **Integration Tests** (`src/client/__tests__/`): Full application integration testing
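The utility tests are the simplest place to start. The sketch below shows the kind of pure function those tests exercise, using inline assertions instead of Vitest so it stands alone. The `validateTitle` helper and its rules are hypothetical, not the sample's actual API:

```typescript
// Hypothetical validation helper, similar in spirit to the utilities
// under src/shared/utils. Returns an error message for invalid input,
// or null when the title is valid.
function validateTitle(title: string): string | null {
  const trimmed = title.trim();
  if (trimmed.length === 0) {
    return "Title is required";
  }
  if (trimmed.length > 255) {
    return "Title must be 255 characters or fewer";
  }
  return null;
}

// Inline checks mirroring what a Vitest spec would assert with expect():
console.assert(validateTitle("Buy milk") === null);
console.assert(validateTitle("   ") === "Title is required");
console.assert(validateTitle("x".repeat(256)) !== null);
```

A Vitest spec for this helper would wrap each of these checks in a `test()` block and run inside the same container as the rest of the suite.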
## Run tests when building
To run tests during the Docker build process, you need to add a dedicated test stage to your Dockerfile. If you haven't already added this stage, add the following to your multi-stage Dockerfile:
```dockerfile
# ========================================
# Test Stage
# ========================================
FROM build-deps AS test

# Set environment
ENV NODE_ENV=test \
    CI=true

# Copy source files
COPY --chown=nodejs:nodejs . .

# Switch to non-root user
USER nodejs

# Run tests with coverage
CMD ["npm", "run", "test:coverage"]
```
Note that the `CMD` instruction runs when the container starts, not while the image builds. If you want the image build itself to fail when tests fail, use a `RUN npm run test` instruction instead, because `RUN` executes at build time.
This test stage:
- **Test environment**: Sets `NODE_ENV=test` and `CI=true` for proper test execution
- **Non-root user**: Runs tests as the `nodejs` user for security
- **Flexible execution**: Uses `CMD` instead of `RUN` to allow running tests during build or as a separate container
- **Coverage support**: Configured to run tests with coverage reporting
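If you prefer the build itself to fail when tests fail, a variant of this stage could use `RUN` instead of `CMD`. This is a sketch, not part of the sample's Dockerfile:

```dockerfile
FROM build-deps AS test

ENV NODE_ENV=test \
    CI=true

COPY --chown=nodejs:nodejs . .

USER nodejs

# RUN executes at build time, so a failing test aborts the build
RUN npm run test
```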
### Build and run tests during image build
To build an image from the test stage, target it explicitly:
```console
$ docker build --target test -t node-docker-image-test .
```
### Run tests in a dedicated test container
The recommended approach is to use the test service defined in `compose.yml`:
```console
$ docker compose --profile test up app-test --build
```
Or run it as a one-off container:
```console
$ docker compose run --rm app-test
```
### Run tests with coverage in CI/CD
For continuous integration, you can run tests with coverage:
```console
$ docker build --target test --progress=plain --no-cache -t test-image .
$ docker run --rm test-image npm run test:coverage
```
You should see output containing the following:
```console
✓ src/server/__tests__/routes/todos.test.ts (5 tests) 16ms
✓ src/shared/utils/__tests__/validation.test.ts (15 tests) 6ms
✓ src/client/components/__tests__/LoadingSpinner.test.tsx (8 tests) 67ms
✓ src/server/database/__tests__/postgres.test.ts (13 tests) 136ms
✓ src/client/components/__tests__/ErrorMessage.test.tsx (8 tests) 127ms
✓ src/client/components/__tests__/TodoList.test.tsx (8 tests) 147ms
✓ src/client/components/__tests__/TodoItem.test.tsx (8 tests) 218ms
✓ src/client/__tests__/App.test.tsx (13 tests) 259ms
✓ src/client/components/__tests__/AddTodoForm.test.tsx (12 tests) 323ms
✓ src/client/hooks/__tests__/useTodos.test.ts (11 tests) 569ms
Test Files 9 passed (9)
Tests 88 passed (88)
Start at 20:57:19
Duration 4.41s (transform 1.79s, setup 2.66s, collect 5.38s, tests 4.61s, environment 14.07s, prepare 4.34s)
```
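Those two commands translate directly into a CI job. In GitHub Actions, for example, a job could look like the following. This is a minimal sketch; the workflow name, trigger, and runner are assumptions, and the guide's CI/CD section covers a complete pipeline:

```yaml
name: test

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build test image
        run: docker build --target test --progress=plain --no-cache -t test-image .
      - name: Run tests with coverage
        run: docker run --rm test-image npm run test:coverage
```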
## Summary
In this section, you learned how to run tests when developing locally using Docker Compose and how to run tests when building your image.
Related information:
- [Dockerfile reference](/reference/dockerfile/): Understand all Dockerfile instructions and syntax.
- [Best practices for writing Dockerfiles](/develop/develop-images/dockerfile_best-practices/): Write efficient, maintainable, and secure Dockerfiles.
- [Compose file reference](/compose/compose-file/): Learn the full syntax and options available for configuring services in `compose.yaml`.
- [`docker compose run` CLI reference](/reference/cli/docker/compose/run/): Run one-off commands in a service container.
## Next steps