Deployment Guide
This comprehensive guide covers deployment strategies, configurations, and best practices for the Awesome NestJS Boilerplate across different environments and platforms.
Deployment Overview
The Awesome NestJS Boilerplate supports multiple deployment strategies:
- Docker Deployment: Containerized deployment with Docker and Docker Compose
- Traditional Deployment: Direct server deployment with PM2 process management
- Cloud Platforms: Managed deployment on AWS, GCP, Heroku, and other platforms
- CI/CD Integration: Automated deployment with GitHub Actions and GitLab CI
Environment Preparation
Production Environment Variables
Create a production .env file with the following variables:
# Application
NODE_ENV=production
PORT=3000
# Database
DB_TYPE=postgres
DB_HOST=your-db-host
DB_PORT=5432
DB_USERNAME=your-db-user
DB_PASSWORD=your-secure-password
DB_DATABASE=your-db-name
ENABLE_ORM_LOGS=false
# JWT Authentication (RSA key pair — RS256 algorithm)
# Generate with: openssl genpkey -algorithm RSA -out private.pem && openssl rsa -pubout -in private.pem -out public.pem
JWT_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
JWT_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"
# CORS
CORS_ORIGINS=https://yourdomain.com,https://www.yourdomain.com
# API Documentation (disable in production)
ENABLE_DOCUMENTATION=false
# Throttling
THROTTLER_TTL=60
THROTTLER_LIMIT=100
# NATS (if using microservices)
NATS_ENABLED=false
NATS_HOST=your-nats-host
NATS_PORT=4222
# AWS S3 (if using file uploads)
AWS_S3_BUCKET_NAME=your-bucket-name
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_REGION=us-east-1
Database Configuration
Ensure your production database is properly configured:
-- Create database and user
CREATE DATABASE your_db_name;
CREATE USER your_db_user WITH ENCRYPTED PASSWORD 'your_secure_password';
GRANT ALL PRIVILEGES ON DATABASE your_db_name TO your_db_user;
-- Grant schema permissions
GRANT ALL ON SCHEMA public TO your_db_user;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO your_db_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO your_db_user;
Docker Deployment
Prerequisites
- Docker Engine 20.10+
- Docker Compose 2.0+
- Access to a container registry (optional)
Production Docker Setup
Dockerfile (multi-stage build, from the actual Dockerfile in the repo):
# Stage 1: base — enable pnpm via corepack
FROM node:24-slim AS base
RUN corepack enable
# Stage 2: build — install all deps, compile TypeScript
FROM base AS build
WORKDIR /app
COPY pnpm-lock.yaml package.json ./
RUN pnpm install --frozen-lockfile
COPY . .
RUN pnpm run build:prod
# Stage 3: prod-deps — production-only dependencies
FROM base AS prod-deps
WORKDIR /app
COPY pnpm-lock.yaml package.json ./
RUN pnpm install --frozen-lockfile --prod
# Stage 4: runtime — lean final image
FROM node:24-slim AS runtime
WORKDIR /app
ENV NODE_ENV=production
ENV NODE_OPTIONS="--max-old-space-size=8192"
ARG PORT=3000
ARG secret_manager_arn
EXPOSE $PORT
COPY --from=build /app/dist ./dist
COPY --from=prod-deps /app/node_modules ./node_modules
CMD ["node", "dist/main.js"]
Docker Compose Deployment
The repo's docker-compose.yml provides the development stack. For production, adapt it or build your own. The development compose includes:
- app: Builds from the multi-stage Dockerfile; depends on postgres (healthy) and meilisearch
- postgres: Standard Postgres image with a pg_isready health check and an init-data.sh volume mount
- pgadmin: dpage/pgadmin4, available at http://localhost:8080
- meilisearch: getmeili/meilisearch, available at http://localhost:7701
Example production-focused override (docker-compose.prod.yml):
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DB_HOST=postgres
      - DB_PORT=5432
      - DB_USERNAME=${DB_USERNAME}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_DATABASE=${DB_DATABASE}
      - JWT_PRIVATE_KEY=${JWT_PRIVATE_KEY}
      - JWT_PUBLIC_KEY=${JWT_PUBLIC_KEY}
      - CORS_ORIGINS=${CORS_ORIGINS}
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_DATABASE}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USERNAME} -d ${DB_DATABASE}"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
volumes:
  postgres_data:
Deployment commands:
# Build and start services
docker-compose -f docker-compose.prod.yml up -d --build
# View logs
docker-compose -f docker-compose.prod.yml logs -f
# Scale the application
docker-compose -f docker-compose.prod.yml up -d --scale app=3
# Stop services
docker-compose -f docker-compose.prod.yml down
Container Registry
Build and push to registry:
# Build image
docker build -t your-registry/nest-boilerplate:latest .
# Tag for versioning
docker tag your-registry/nest-boilerplate:latest your-registry/nest-boilerplate:v1.0.0
# Push to registry
docker push your-registry/nest-boilerplate:latest
docker push your-registry/nest-boilerplate:v1.0.0
Traditional Server Deployment
Prerequisites
- Node.js 24+ LTS
- pnpm 10.26+
- PostgreSQL 14+
- PM2 process manager
- Nginx (recommended)
Build and Deploy
# 1. Clone repository
git clone https://github.com/your-username/your-nest-app.git
cd your-nest-app
# 2. Install dependencies
pnpm install --frozen-lockfile
# 3. Build application
pnpm build:prod
# 4. Set up environment
cp .env.example .env
# Edit .env with production values
# 5. Run database migrations
pnpm migration:run
# 6. Start application
pnpm start:prod
Process Management
PM2 Configuration (ecosystem.config.js):
module.exports = {
  apps: [
    {
      name: 'nest-boilerplate',
      script: 'dist/main.js',
      instances: 'max',
      exec_mode: 'cluster',
      env: {
        NODE_ENV: 'production',
        PORT: 3000,
      },
      env_production: {
        NODE_ENV: 'production',
        PORT: 3000,
      },
      error_file: './logs/err.log',
      out_file: './logs/out.log',
      log_file: './logs/combined.log',
      time: true,
      max_memory_restart: '1G',
      node_args: '--max-old-space-size=1024',
    },
  ],
};
PM2 Commands:
# Install PM2 globally
npm install -g pm2
# Start application
pm2 start ecosystem.config.js --env production
# Monitor processes
pm2 monit
# View logs
pm2 logs nest-boilerplate
# Restart application
pm2 restart nest-boilerplate
# Stop application
pm2 stop nest-boilerplate
# Save PM2 configuration
pm2 save
# Setup PM2 startup script
pm2 startup
Nginx Configuration (/etc/nginx/sites-available/nest-boilerplate):
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
Cloud Platform Deployment
AWS Deployment
AWS Elastic Beanstalk
.elasticbeanstalk/config.yml:
branch-defaults:
  main:
    environment: nest-boilerplate-prod
global:
  application_name: nest-boilerplate
  default_platform: Node.js 24
  default_region: us-east-1
  sc: git
Deployment commands:
# Install EB CLI
pip install awsebcli
# Initialize EB application
eb init
# Create environment
eb create production
# Deploy
eb deploy
# Open application
eb open
AWS ECS with Fargate
task-definition.json:
{
  "family": "nest-boilerplate",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::account:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "nest-app",
      "image": "your-account.dkr.ecr.region.amazonaws.com/nest-boilerplate:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "environment": [
        {
          "name": "NODE_ENV",
          "value": "production"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/nest-boilerplate",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
Memory Requirements:
- Minimum: 512MB (for basic applications without image processing)
- Recommended: 1GB (for applications with image processing, file uploads, or heavy workloads)
- Production: 2GB+ (for high-traffic applications with complex operations)
The Dockerfile sets NODE_OPTIONS="--max-old-space-size=8192", which caps the Node.js heap at 8GB. Choose a value below the container's total memory so system overhead and native modules have headroom, and scale it with the container size (e.g., --max-old-space-size=12288 for a 16GB container, roughly 75% of the allocation).
Google Cloud Platform
app.yaml (App Engine):
runtime: nodejs24
env_variables:
  NODE_ENV: production
  DB_HOST: /cloudsql/project:region:instance
  JWT_PRIVATE_KEY: your-rsa-private-key
  JWT_PUBLIC_KEY: your-rsa-public-key
automatic_scaling:
  min_instances: 1
  max_instances: 10
  target_cpu_utilization: 0.6
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
Deployment:
# Deploy to App Engine
gcloud app deploy
# View logs
gcloud app logs tail -s default
Heroku Deployment
Procfile:
web: node dist/main.js
release: pnpm migration:run
package.json scripts:
{
  "scripts": {
    "heroku-postbuild": "pnpm build:prod"
  }
}
Deployment commands:
# Login to Heroku
heroku login
# Create application
heroku create your-app-name
# Add PostgreSQL addon
heroku addons:create heroku-postgresql:essential-0
# Set environment variables
heroku config:set NODE_ENV=production
heroku config:set JWT_PRIVATE_KEY="$(cat private.pem)"
heroku config:set JWT_PUBLIC_KEY="$(cat public.pem)"
# Deploy
git push heroku main
# Run migrations
heroku run pnpm migration:run
# View logs
heroku logs --tail
DigitalOcean App Platform
.do/app.yaml:
name: nest-boilerplate
services:
  - name: api
    source_dir: /
    github:
      repo: your-username/nest-boilerplate
      branch: main
    run_command: pnpm start:prod
    environment_slug: node-js
    instance_count: 1
    instance_size_slug: basic-xxs
    envs:
      - key: NODE_ENV
        value: production
      - key: JWT_PRIVATE_KEY
        value: your-rsa-private-key
        type: SECRET
      - key: JWT_PUBLIC_KEY
        value: your-rsa-public-key
        type: SECRET
databases:
  - name: postgres-db
    engine: PG
    version: "14"
    size: db-s-dev-database
CI/CD Pipeline
GitHub Actions
.github/workflows/deploy.yml:
name: Deploy to Production
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 10
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '24'
          cache: 'pnpm'
      - name: Install dependencies
        run: pnpm install --frozen-lockfile
      - name: Run tests
        run: pnpm test:cov
      - name: Run e2e tests
        run: pnpm test:e2e
  build-and-deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 10
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '24'
          cache: 'pnpm'
      - name: Install dependencies
        run: pnpm install --frozen-lockfile
      - name: Build application
        run: pnpm build:prod
      - name: Build Docker image
        run: |
          docker build -t ${{ secrets.DOCKER_REGISTRY }}/nest-boilerplate:${{ github.sha }} .
          docker tag ${{ secrets.DOCKER_REGISTRY }}/nest-boilerplate:${{ github.sha }} ${{ secrets.DOCKER_REGISTRY }}/nest-boilerplate:latest
      - name: Login to Docker Registry
        run: echo ${{ secrets.DOCKER_PASSWORD }} | docker login ${{ secrets.DOCKER_REGISTRY }} -u ${{ secrets.DOCKER_USERNAME }} --password-stdin
      - name: Push Docker image
        run: |
          docker push ${{ secrets.DOCKER_REGISTRY }}/nest-boilerplate:${{ github.sha }}
          docker push ${{ secrets.DOCKER_REGISTRY }}/nest-boilerplate:latest
      - name: Deploy to production
        run: |
          # Add your deployment script here
          # e.g., kubectl apply, docker-compose pull && docker-compose up -d, etc.
GitLab CI
.gitlab-ci.yml:
stages:
  - test
  - build
  - deploy
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
test:
  stage: test
  image: node:24-slim
  before_script:
    - corepack enable
    - pnpm config set store-dir .pnpm-store
  cache:
    paths:
      - .pnpm-store/
  script:
    - pnpm install --frozen-lockfile
    - pnpm test:cov
    - pnpm test:e2e
build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
  only:
    - main
deploy:
  stage: deploy
  image: alpine:latest
  script:
    - apk add --no-cache curl
    - curl -X POST $DEPLOY_WEBHOOK_URL
  only:
    - main
Database Migration
Production migration strategy:
# 1. Backup database
pg_dump -h localhost -U username -d database_name > backup.sql
# 2. Run migrations
pnpm migration:run
# 3. Verify migration
pnpm migration:show
# 4. Rollback if needed (be careful!)
pnpm migration:revert
Zero-downtime migration approach:
- Deploy new version alongside old version
- Run migrations on new version
- Switch traffic to new version
- Remove old version
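The old version can only be removed cleanly if it drains in-flight requests on SIGTERM. A minimal sketch of graceful shutdown in plain Node (the 10-second deadline is an assumption, not a boilerplate default; NestJS apps can get similar behavior via app.enableShutdownHooks()):

```typescript
import { createServer, Server } from 'node:http';

// Build an HTTP server that drains in-flight requests on SIGTERM,
// which orchestrators send before routing traffic away from an instance.
function createGracefulServer(): Server {
  const server = createServer((req, res) => res.end('ok'));

  process.on('SIGTERM', () => {
    // Stop accepting new connections; exit once in-flight requests finish.
    server.close(() => process.exit(0));
    // Hard deadline so a stuck request cannot block the rollout forever.
    setTimeout(() => process.exit(1), 10_000).unref();
  });

  return server;
}

// In a real deployment: createGracefulServer().listen(3000);
```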
Security Considerations
Environment Security
- Use environment variables for sensitive data
- Never commit secrets to version control
- Use secret management services (AWS Secrets Manager, etc.)
- Implement proper RBAC for deployment access
Application Security
- Enable HTTPS with valid SSL certificates
- Configure proper CORS settings
- Implement rate limiting
- Use security headers (helmet.js)
- Regular security audits (pnpm audit)
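For the CORS bullet above, the comma-separated CORS_ORIGINS variable from the environment section has to be split into an array before being handed to enableCors. A small sketch (the helper name is illustrative):

```typescript
// Turn "https://a.com, https://b.com" into a clean origin allowlist,
// ignoring stray whitespace and empty entries.
function parseCorsOrigins(raw: string | undefined): string[] {
  return (raw ?? '')
    .split(',')
    .map((origin) => origin.trim())
    .filter((origin) => origin.length > 0);
}

// In bootstrap:
// app.enableCors({ origin: parseCorsOrigins(process.env.CORS_ORIGINS) });
```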
Infrastructure Security
- Use private networks for database connections
- Implement proper firewall rules
- Regular security updates
- Monitor for vulnerabilities
Monitoring and Logging
Application Monitoring
Health Check Endpoint:
// src/health-check.ts
// Probes the running application instead of bootstrapping a second instance
// (which would fail to bind the port the live app already holds).
// Adjust the path to an endpoint your application actually exposes.
import * as http from 'node:http';
const request = http.get('http://localhost:3000/health', (res) => {
  process.exit(res.statusCode === 200 ? 0 : 1);
});
request.on('error', (error) => {
  console.error('Health check failed:', error);
  process.exit(1);
});
request.setTimeout(3000, () => request.destroy());
Logging Configuration:
// src/main.ts
import { Logger } from '@nestjs/common';
const logger = new Logger('Bootstrap');
// Log application startup
logger.log(`Application is running on: ${await app.getUrl()}`);
External Monitoring
Sentry Integration:
pnpm add @sentry/node
// src/main.ts
import * as Sentry from '@sentry/node';
if (process.env.NODE_ENV === 'production') {
  Sentry.init({
    dsn: process.env.SENTRY_DSN,
  });
}
Performance Optimization
Application Optimization
- Enable compression middleware
- Implement caching strategies
- Optimize database queries
- Use connection pooling
- Enable HTTP/2
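To see why the compression bullet pays off, compare the raw and gzipped sizes of a typical JSON payload (the exact numbers vary; in the app itself this is usually handled by compression middleware wired up in main.ts):

```typescript
import { gzipSync } from 'node:zlib';

// A repetitive JSON payload, like a paginated user list, compresses very well.
const payload = JSON.stringify(
  Array.from({ length: 1000 }, (_, i) => ({ id: i, name: `user-${i}` })),
);
const compressed = gzipSync(payload);

console.log(`raw: ${payload.length} bytes, gzipped: ${compressed.length} bytes`);
```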
Memory Management
Node.js Memory Configuration: The application includes memory management to prevent Out-of-Memory (OOM) crashes:
Dockerfile Configuration: Sets NODE_OPTIONS="--max-old-space-size=8192" (suitable for high-memory containers; adjust per your container allocation):
- Limits the Node.js heap to the specified size in MB
- Leaves headroom for system overhead and native modules
Memory Monitoring: Application logs memory usage:
- On startup
- Every 5 minutes in production (configurable)
- Includes RSS, heap used/total, external memory, and array buffers
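A sketch of what such a memory logger can look like (the 5-minute interval mirrors the behavior described above; the field names come from Node's process.memoryUsage()):

```typescript
// Snapshot current process memory in MB, matching the fields listed above.
function snapshotMemoryMb() {
  const usage = process.memoryUsage();
  const toMb = (bytes: number) => Math.round(bytes / 1024 / 1024);
  return {
    rss: toMb(usage.rss),
    heapUsed: toMb(usage.heapUsed),
    heapTotal: toMb(usage.heapTotal),
    external: toMb(usage.external),
    arrayBuffers: toMb(usage.arrayBuffers),
  };
}

// Log once on startup, then periodically in production.
console.log('memory (MB):', snapshotMemoryMb());
if (process.env.NODE_ENV === 'production') {
  setInterval(() => console.log('memory (MB):', snapshotMemoryMb()), 5 * 60 * 1000).unref();
}
```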
Container Memory Recommendations:
- 1GB: Minimum for basic applications
- 2GB: Recommended for applications with image processing or S3 uploads
- 4GB+: For high-traffic applications with complex operations
Adjusting Memory Limits: Update the NODE_OPTIONS environment variable in the Dockerfile or at runtime:
# For 2GB container (~75% of available memory)
ENV NODE_OPTIONS="--max-old-space-size=1536"
# For 4GB container
ENV NODE_OPTIONS="--max-old-space-size=3072"
Memory Optimization Best Practices:
- Monitor memory usage logs to identify memory growth patterns
- Review image processing code to ensure buffers are properly released
- Use streaming for large file operations when possible
- Implement connection pooling to limit database connection memory
- Set appropriate Bull queue job retention policies
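The "roughly 75% of container memory" sizing used in the examples above can be written down as a tiny helper (the function name and ratio are illustrative, not part of the boilerplate):

```typescript
// --max-old-space-size value for a container, leaving ~25% headroom
// for system overhead, native modules, and non-heap memory.
function heapLimitMb(containerMb: number, ratio = 0.75): number {
  return Math.floor(containerMb * ratio);
}

console.log(heapLimitMb(2048)); // 1536, matching the 2GB example above
console.log(heapLimitMb(4096)); // 3072, matching the 4GB example above
```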
Troubleshooting OOM Issues:
- Check application logs for memory usage patterns
- Review container memory limits in your orchestration platform (ECS, Kubernetes, etc.)
- Increase container memory allocation if consistently hitting limits
- Investigate memory leaks using Node.js memory profiling tools
Infrastructure Optimization
- Use CDN for static assets
- Implement load balancing
- Auto-scaling configuration
- Database read replicas
- Caching layers (Redis)
Backup and Recovery
Database Backup
Automated backup script:
#!/bin/bash
# backup.sh
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups"
DB_NAME="your_database"
# Create backup
pg_dump -h $DB_HOST -U $DB_USER -d $DB_NAME > $BACKUP_DIR/backup_$DATE.sql
# Compress backup
gzip $BACKUP_DIR/backup_$DATE.sql
# Remove old backups (keep last 7 days)
find $BACKUP_DIR -name "backup_*.sql.gz" -mtime +7 -delete
Cron job for automated backups:
# Add to crontab
0 2 * * * /path/to/backup.sh
Application Backup
- Regular code repository backups
- Configuration file backups
- SSL certificate backups
- Log file archival
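The 7-day retention rule in backup.sh above can be sanity-checked against throwaway files before pointing it at real backups (GNU touch -d is assumed, as on most Linux servers):

```shell
# Create one fresh and one stale dummy backup, then apply the retention rule
BACKUP_DIR=$(mktemp -d)
touch "$BACKUP_DIR/backup_new.sql.gz"
touch -d "10 days ago" "$BACKUP_DIR/backup_old.sql.gz"
find "$BACKUP_DIR" -name "backup_*.sql.gz" -mtime +7 -delete
ls "$BACKUP_DIR"
```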
Scaling Strategies
Horizontal Scaling
- Load balancer configuration
- Multiple application instances
- Database read replicas
- Microservices architecture
Vertical Scaling
- Increase server resources
- Optimize application performance
- Database performance tuning
- Memory and CPU optimization
Auto-scaling Configuration
AWS Auto Scaling:
{
  "AutoScalingGroupName": "nest-boilerplate-asg",
  "MinSize": 1,
  "MaxSize": 10,
  "DesiredCapacity": 2,
  "TargetGroupARNs": ["arn:aws:elasticloadbalancing:..."],
  "HealthCheckType": "ELB",
  "HealthCheckGracePeriod": 300
}
Troubleshooting
Common Issues
Application won't start:
- Check environment variables
- Verify database connectivity
- Check port availability
- Review application logs
Database connection issues:
- Verify database credentials
- Check network connectivity
- Confirm database server status
- Review connection pool settings
Performance issues:
- Monitor resource usage
- Check database query performance
- Review application logs
- Analyze network latency
Debugging Tools
# Check application status
pm2 status
# View real-time logs
pm2 logs --lines 100
# Monitor resource usage
htop
iostat
netstat -tulpn
# Database performance
EXPLAIN ANALYZE SELECT ...;
Log Analysis
# Search for errors
grep -i error /var/log/app.log
# Monitor access patterns
tail -f /var/log/nginx/access.log
# Check system logs
journalctl -u your-service -f