# DevOps and CI/CD Pipeline: Building Automated Deployment Workflows

A complete guide to implementing DevOps practices and CI/CD pipelines for modern web applications.

Hello! I’m Ahmet Zeybek, a full stack developer with extensive experience in DevOps and automated deployment systems. In today’s fast-paced development environment, manual deployments are a thing of the past. In this comprehensive guide, I’ll show you how to build robust CI/CD pipelines that ensure code quality, security, and rapid deployments.

## Why DevOps and CI/CD Matter

Modern software development demands:

- **Faster time to market**: Deploy multiple times per day
- **Higher code quality**: Automated testing and code review
- **Better security**: Automated vulnerability scanning
- **Improved reliability**: Consistent deployments across environments
- **Reduced costs**: Less manual intervention and fewer errors

## CI/CD Pipeline Architecture

### 1. Pipeline Stages Overview

A typical CI/CD pipeline consists of these stages:

```mermaid
graph LR
  A[Code Commit] --> B[Static Analysis]
  B --> C[Unit Tests]
  C --> D[Integration Tests]
  D --> E[Build & Package]
  E --> F[Security Scan]
  F --> G[Deploy to Staging]
  G --> H[E2E Tests]
  H --> I[Deploy to Production]
  I --> J[Monitoring & Alerting]
```

### 2. GitHub Actions Workflow

Let’s implement a complete CI/CD pipeline using GitHub Actions:

```yaml
# .github/workflows/ci-cd.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  NODE_ENV: ${{ secrets.NODE_ENV }}
  DOCKER_REGISTRY: ${{ secrets.DOCKER_REGISTRY }}

jobs:
  # Static code analysis
  lint-and-format:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint
      - name: Run Prettier check
        run: npm run format:check
      - name: Type checking
        run: npm run type-check

  # Unit and integration tests
  test:
    needs: lint-and-format
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20]
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test_db
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis:
        image: redis:7-alpine
        ports:
          - 6379:6379
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run unit tests
        run: npm run test:unit
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
          REDIS_URL: redis://localhost:6379
      - name: Run integration tests
        run: npm run test:integration
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage/lcov.info

  # Build and security scanning
  build-and-security:
    needs: test
    runs-on: ubuntu-latest
    outputs:
      # Use the single `version` output here: `tags` can be a
      # newline-separated list, which would break `kubectl set image`
      image-tag: ${{ steps.meta.outputs.version }}
      image-digest: ${{ steps.build.outputs.digest }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Docker Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.DOCKER_REGISTRY }}
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.DOCKER_REGISTRY }}/myapp
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha,prefix={{branch}}-
            type=raw,value=latest,enable={{is_default_branch}}
      - name: Build and push Docker image
        id: build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.DOCKER_REGISTRY }}/myapp:${{ steps.meta.outputs.version }}
          format: 'sarif'
          output: 'trivy-results.sarif'
      - name: Upload Trivy scan results
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: 'trivy-results.sarif'

  # Deploy to staging
  deploy-staging:
    needs: build-and-security
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/develop'
    environment: staging
    steps:
      - uses: actions/checkout@v4
      - name: Configure kubectl
        uses: azure/k8s-set-context@v3
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG_STAGING }}
      - name: Deploy to staging
        run: |
          # Update the deployment with the new image
          kubectl set image deployment/myapp app=${{ env.DOCKER_REGISTRY }}/myapp:${{ needs.build-and-security.outputs.image-tag }}
          # Wait for the rollout to complete
          kubectl rollout status deployment/myapp --timeout=300s
      - name: Run smoke tests
        run: |
          # Wait for pods to be ready
          kubectl wait --for=condition=available --timeout=300s deployment/myapp
          # Run basic health checks
          curl -f https://staging.myapp.com/health || exit 1

  # End-to-end tests
  e2e-tests:
    needs: deploy-staging
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/develop'
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright
        run: npx playwright install --with-deps
      - name: Run E2E tests
        run: npm run test:e2e
        env:
          BASE_URL: https://staging.myapp.com
      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 30

  # Deploy to production
  deploy-production:
    # E2E tests gate the develop -> staging flow above. Since those jobs are
    # skipped on main, depending on them here would skip this job too, so
    # production depends only on the build and security stage.
    needs: build-and-security
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
      - uses: actions/checkout@v4
      - name: Configure kubectl
        uses: azure/k8s-set-context@v3
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG_PRODUCTION }}
      - name: Deploy to production
        run: |
          # Roll out the new image
          kubectl set image deployment/myapp app=${{ env.DOCKER_REGISTRY }}/myapp:${{ needs.build-and-security.outputs.image-tag }}
          # Wait for the rolling update to finish
          kubectl rollout status deployment/myapp --timeout=600s
      - name: Verify deployment
        run: |
          # Wait for pods to be ready
          kubectl wait --for=condition=available --timeout=300s deployment/myapp
          # Run health checks
          curl -f https://myapp.com/health || exit 1
          curl -f https://myapp.com/api/health || exit 1
      - name: Post-deployment notification
        if: success()
        run: |
          curl -X POST -H 'Content-type: application/json' \
            --data '{"text":"🚀 Production deployment successful!"}' \
            ${{ secrets.SLACK_WEBHOOK_URL }}
```

## Containerization Strategy

### 3. Multi-stage Docker Build

Optimize your Docker images for production:

```dockerfile
# Dockerfile
# Multi-stage build for optimal image size

# Stage 1: production dependencies only
FROM node:20-alpine AS deps
WORKDIR /app

# Copy package manifests (npm ci reads package-lock.json)
COPY package*.json ./

# Install production dependencies only
RUN npm ci --omit=dev && npm cache clean --force

# Stage 2: builder
FROM node:20-alpine AS builder
WORKDIR /app

COPY package*.json ./

# Install all dependencies (including dev)
RUN npm ci

# Copy source code and build the application
COPY . .
RUN npm run build

# Stage 3: production
FROM node:20-alpine AS production
WORKDIR /app

# Create non-root user for security
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

COPY package*.json ./

# Copy production dependencies from deps stage
COPY --from=deps /app/node_modules ./node_modules

# Copy built application from builder stage
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/public ./public

# Set correct ownership and drop root privileges
RUN chown -R nextjs:nodejs /app
USER nextjs

EXPOSE 3000

# Health check (wget ships with Alpine's busybox; curl does not)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

# Start the application
CMD ["npm", "start"]
```

### 4. Docker Compose for Local Development

```yaml
# docker-compose.yml
version: '3.8'

services:
  # Main application
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: development # requires a matching dev stage in the Dockerfile
    ports:
      - '3000:3000'
    volumes:
      - .:/app
      - /app/node_modules
      - /app/.next
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp_dev
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    networks:
      - myapp-network

  # PostgreSQL database
  db:
    image: postgres:15-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: myapp_dev
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - myapp-network

  # Redis cache
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    ports:
      - '6379:6379'
    volumes:
      - redis_data:/data
    networks:
      - myapp-network

  # Redis Commander (GUI for Redis)
  redis-commander:
    image: rediscommander/redis-commander:latest
    restart: unless-stopped
    environment:
      REDIS_HOSTS: local:redis:6379
    ports:
      - '8081:8081'
    depends_on:
      - redis
    networks:
      - myapp-network

volumes:
  postgres_data:
  redis_data:

networks:
  myapp-network:
    driver: bridge
```

## Infrastructure as Code

### 5. Kubernetes Deployment

k8s/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
labels:
app: myapp
spec:
replicas: 3
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myregistry/myapp:latest
ports:
- containerPort: 3000
env:
- name: NODE_ENV
value: 'production'
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: app-secrets
key: database-url
- name: REDIS_URL
valueFrom:
secretKeyRef:
name: app-secrets
key: redis-url
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/health
port: 3000
initialDelaySeconds: 5
periodSeconds: 5
resources:
requests:
memory: '128Mi'
cpu: '100m'
limits:
memory: '512Mi'
cpu: '500m'
---
# k8s/service.yml
apiVersion: v1
kind: Service
metadata:
name: myapp-service
spec:
selector:
app: myapp
ports:
- name: http
port: 80
targetPort: 3000
protocol: TCP
type: ClusterIP
---
# k8s/ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myapp-ingress
annotations:
kubernetes.io/ingress.class: 'nginx'
cert-manager.io/cluster-issuer: 'letsencrypt-prod'
nginx.ingress.kubernetes.io/ssl-redirect: 'true'
nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
spec:
tls:
- hosts:
- myapp.com
- www.myapp.com
secretName: myapp-tls
rules:
- host: myapp.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-service
port:
number: 80
- host: www.myapp.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-service
port:
number: 80

## Monitoring and Observability

### 6. Application Monitoring Setup

```typescript
// src/monitoring/metrics.ts
import { Request, Response, NextFunction } from 'express'
import promClient from 'prom-client'
// App-specific readiness checks, implemented elsewhere (hypothetical module path)
import { checkDatabaseConnection, checkExternalServices } from './checks'

// Create a Registry and add a default label to all metrics
const register = new promClient.Registry()
register.setDefaultLabels({ app: 'myapp' })

// Custom metrics
const httpRequestDuration = new promClient.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status_code'],
  buckets: [0.1, 0.5, 1, 2.5, 5, 10],
})

const activeConnections = new promClient.Gauge({
  name: 'active_connections',
  help: 'Number of active connections',
})

const errorCounter = new promClient.Counter({
  name: 'errors_total',
  help: 'Total number of errors',
  labelNames: ['type', 'route'],
})

// Register metrics
register.registerMetric(httpRequestDuration)
register.registerMetric(activeConnections)
register.registerMetric(errorCounter)

// Middleware to collect request metrics
export const metricsMiddleware = (req: Request, res: Response, next: NextFunction) => {
  const start = Date.now()
  res.on('finish', () => {
    const duration = (Date.now() - start) / 1000
    const route = req.route?.path || req.path
    httpRequestDuration.labels(req.method, route, res.statusCode.toString()).observe(duration)
    if (res.statusCode >= 400) {
      errorCounter.labels('http_error', route).inc()
    }
  })
  next()
}

// Health check endpoint
export const healthCheck = async (req: Request, res: Response) => {
  try {
    // Check database connection and external services
    await checkDatabaseConnection()
    await checkExternalServices()
    res.status(200).json({
      status: 'healthy',
      timestamp: new Date().toISOString(),
      uptime: process.uptime(),
      version: process.env.npm_package_version,
    })
  } catch (error) {
    res.status(503).json({
      status: 'unhealthy',
      error: error instanceof Error ? error.message : String(error),
      timestamp: new Date().toISOString(),
    })
  }
}
```

### 7. Logging Strategy

Implement structured logging with correlation IDs:

```typescript
// src/utils/logger.ts
import { Request, Response, NextFunction } from 'express'
import winston from 'winston'
import { v4 as uuidv4 } from 'uuid'

export interface LogContext {
  correlationId?: string
  userId?: string
  requestId?: string
  [key: string]: any
}

class Logger {
  private logger: winston.Logger

  constructor() {
    this.logger = winston.createLogger({
      level: process.env.LOG_LEVEL || 'info',
      format: winston.format.combine(
        winston.format.timestamp(),
        winston.format.errors({ stack: true }),
        // printf produces the final JSON line, so a separate json() format is not needed
        winston.format.printf(({ timestamp, level, message, correlationId, ...meta }) => {
          return JSON.stringify({ timestamp, level, message, correlationId, ...meta })
        })
      ),
      defaultMeta: { service: 'myapp' },
      transports: [
        new winston.transports.Console({
          level: 'info',
          format: winston.format.combine(winston.format.colorize(), winston.format.simple()),
        }),
        new winston.transports.File({ filename: 'logs/error.log', level: 'error' }),
        new winston.transports.File({ filename: 'logs/combined.log' }),
      ],
    })
  }

  private generateCorrelationId(): string {
    return uuidv4()
  }

  info(message: string, context?: LogContext): void {
    this.logger.info(message, {
      correlationId: context?.correlationId || this.generateCorrelationId(),
      ...context,
    })
  }

  error(message: string, error?: Error, context?: LogContext): void {
    this.logger.error(message, {
      correlationId: context?.correlationId || this.generateCorrelationId(),
      error: error?.message,
      stack: error?.stack,
      ...context,
    })
  }

  warn(message: string, context?: LogContext): void {
    this.logger.warn(message, {
      correlationId: context?.correlationId || this.generateCorrelationId(),
      ...context,
    })
  }
}

export const logger = new Logger()

// Extend Express's Request type so `correlationId` is known to TypeScript
declare global {
  namespace Express {
    interface Request {
      correlationId?: string
    }
  }
}

// Middleware to attach a correlation ID to every request
export const correlationIdMiddleware = (req: Request, res: Response, next: NextFunction) => {
  const correlationId = (req.headers['x-correlation-id'] as string) || uuidv4()
  req.correlationId = correlationId
  res.setHeader('x-correlation-id', correlationId)
  next()
}
```

## Database Migration Strategy

### 8. Automated Database Migrations

```typescript
// src/database/migrations/001_initial_schema.ts
import { MigrationInterface, QueryRunner } from 'typeorm'

export class InitialSchema1640995200000 implements MigrationInterface {
  name = 'InitialSchema1640995200000'

  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`
      CREATE TABLE "users" (
        "id" SERIAL NOT NULL,
        "email" character varying NOT NULL,
        "password_hash" character varying NOT NULL,
        "created_at" TIMESTAMP NOT NULL DEFAULT now(),
        "updated_at" TIMESTAMP NOT NULL DEFAULT now(),
        CONSTRAINT "UQ_users_email" UNIQUE ("email"),
        CONSTRAINT "PK_users" PRIMARY KEY ("id")
      )
    `)
    await queryRunner.query(`
      CREATE INDEX "IDX_users_email" ON "users" ("email")
    `)
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`DROP INDEX "IDX_users_email"`)
    await queryRunner.query(`DROP TABLE "users"`)
  }
}

// src/database/migration-runner.ts
import { DataSource } from 'typeorm'
import { logger } from '../utils/logger'

export class MigrationRunner {
  constructor(private dataSource: DataSource) {}

  async runMigrations(): Promise<void> {
    try {
      logger.info('Starting database migrations')
      await this.dataSource.initialize()
      await this.dataSource.runMigrations()
      logger.info('Database migrations completed successfully')
    } catch (error) {
      logger.error('Failed to run database migrations', error as Error)
      throw error
    }
  }

  async rollbackMigrations(): Promise<void> {
    try {
      logger.info('Rolling back database migrations')
      await this.dataSource.initialize()
      await this.dataSource.undoLastMigration()
      logger.info('Database migrations rolled back successfully')
    } catch (error) {
      logger.error('Failed to rollback database migrations', error as Error)
      throw error
    }
  }
}
```

## Security Integration

### 9. Security Scanning in CI/CD

```yaml
# .github/workflows/security.yml
name: Security Scanning

on:
  push:
    branches: [main]
  schedule:
    - cron: '0 2 * * 1' # Weekly on Monday at 2 AM

jobs:
  security-scan:
    runs-on: ubuntu-latest
    permissions:
      security-events: write # required for CodeQL to upload results
    steps:
      - uses: actions/checkout@v4
      - name: Run npm audit
        run: npm audit --audit-level=moderate
        continue-on-error: true
      - name: Run Snyk to check for vulnerabilities
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: javascript
      - name: Autobuild
        uses: github/codeql-action/autobuild@v3
      # The analyze step uploads its own SARIF results automatically
      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3
```

## Performance Optimization

### 10. Performance Testing Integration

```typescript
// src/tests/performance/load-test.ts
import autocannon from 'autocannon'

export class LoadTester {
  async runLoadTest(): Promise<void> {
    const instance = autocannon({
      url: 'https://myapp.com',
      connections: 100, // 100 concurrent connections
      duration: 60, // test duration in seconds
      requests: [
        {
          method: 'GET',
          path: '/api/posts',
          headers: {
            'Content-Type': 'application/json',
          },
        },
        {
          method: 'POST',
          path: '/api/posts',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({
            title: 'Test Post',
            content: 'Test content',
          }),
        },
      ],
    })

    // autocannon's 'response' event passes the client as the first argument
    instance.on('response', (client, statusCode, resBytes, responseTime) => {
      console.log(`Status: ${statusCode}, Bytes: ${resBytes}, Time: ${responseTime}ms`)
    })

    instance.on('done', (results) => {
      console.log('Load test completed:', results)
    })

    // Render live progress in the terminal
    autocannon.track(instance)
  }
}
```

## Best Practices for DevOps

### 1. Infrastructure as Code

- Use Terraform, CloudFormation, or CDK
- Version control your infrastructure
- Automate provisioning and teardown
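
The core idea can be sketched without committing to any particular tool: keep environment differences in typed, reviewable code and generate the actual manifests from it. The following TypeScript sketch (illustrative only; the environment values and names are invented for the example) renders a Kubernetes Deployment per environment:

```typescript
// Illustrative sketch: generate environment-specific Kubernetes manifests
// from one typed template, so infrastructure changes go through code review.
type Environment = 'staging' | 'production'

interface DeploymentConfig {
  replicas: number
  imageTag: string
}

// Hypothetical per-environment settings; real values live in your repo
const environments: Record<Environment, DeploymentConfig> = {
  staging: { replicas: 1, imageTag: 'develop' },
  production: { replicas: 3, imageTag: 'latest' },
}

export function renderDeployment(env: Environment) {
  const cfg = environments[env]
  return {
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    metadata: { name: `myapp-${env}` },
    spec: {
      replicas: cfg.replicas,
      template: {
        spec: {
          containers: [{ name: 'myapp', image: `myregistry/myapp:${cfg.imageTag}` }],
        },
      },
    },
  }
}
```

Serializing the output of `renderDeployment` to YAML and piping it to `kubectl apply` gives you reproducible, diffable infrastructure changes.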

### 2. Configuration Management

- Environment-specific configurations
- Secret management with tools like Vault
- Configuration validation
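
Configuration validation is worth a concrete example: fail fast at startup when a required variable is missing. This is a hand-rolled sketch (libraries like zod or convict do this more robustly); the variable names mirror the ones used elsewhere in this post:

```typescript
// Minimal config-validation sketch: crash at startup on missing variables
// rather than failing mysteriously at runtime.
interface AppConfig {
  databaseUrl: string
  redisUrl: string
  logLevel: string
}

export function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const missing = ['DATABASE_URL', 'REDIS_URL'].filter((key) => !env[key])
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`)
  }
  return {
    databaseUrl: env.DATABASE_URL!,
    redisUrl: env.REDIS_URL!,
    logLevel: env.LOG_LEVEL ?? 'info', // optional, with a sensible default
  }
}
```

Call `loadConfig(process.env)` once at boot and pass the typed result around instead of reading `process.env` throughout the codebase.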

### 3. Backup and Disaster Recovery

- Automated backups
- Cross-region replication
- Disaster recovery testing
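
Automated backups need an equally automated retention policy. As a minimal sketch (a simple "keep the most recent N" rule; real setups often layer daily/weekly/monthly tiers), here is the pruning decision in TypeScript:

```typescript
// Illustrative retention sketch: given backup timestamps, return the ones
// to prune, keeping only the most recent `keepCount` backups.
export function backupsToPrune(timestamps: string[], keepCount: number): string[] {
  const sorted = [...timestamps].sort() // ISO-8601 timestamps sort chronologically
  return sorted.slice(0, Math.max(0, sorted.length - keepCount))
}
```

A scheduled job would run this against the backup store's listing and delete whatever it returns, which keeps storage costs bounded without manual cleanup.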

### 4. Cost Optimization

- Auto-scaling based on demand
- Resource tagging and monitoring
- Unused resource cleanup
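
Demand-based auto-scaling typically follows the formula the Kubernetes HorizontalPodAutoscaler documents: desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). A small sketch with assumed min/max bounds (the clamping values here are arbitrary examples):

```typescript
// Sketch of the HPA-style scaling formula with min/max clamping.
export function desiredReplicas(
  current: number,
  currentUtilization: number, // e.g. average CPU percent across pods
  targetUtilization: number, // the utilization you want to hold
  min = 1,
  max = 10
): number {
  const desired = Math.ceil(current * (currentUtilization / targetUtilization))
  // Clamp so the cluster neither scales to zero nor runs away on a spike
  return Math.min(max, Math.max(min, desired))
}
```

For example, 4 replicas at 90% CPU against a 60% target scale up to 6; the same 4 replicas at 10% scale down to the minimum.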

## Conclusion

DevOps and CI/CD pipelines are essential for modern software development. They enable:

- Faster deployments and reduced time to market
- Higher code quality through automated testing
- Better security with integrated scanning
- Improved reliability with proper monitoring
- Reduced operational costs through automation

Start small, iterate, and continuously improve your DevOps practices. The investment in automation pays dividends in productivity and reliability.

What DevOps challenges are you facing? Which tools and practices work best for your team? Share your experiences!

This post reflects my experience as of October 2025. DevOps tools and practices evolve rapidly, so always evaluate the latest solutions for your specific needs.