A comprehensive guide to implementing CI/CD with FastAPI, Django, React, Next.js, and Docker

This manual walks through configuring development environments, implementing CI/CD pipelines, and deploying modern applications with popular technologies such as FastAPI, Django, React, and Next.js.

Note: This manual assumes basic knowledge of web development, Docker, and version control systems such as Git.

Organize your project with a modular structure that scales easily:
```
project/
├── backend/                  # Backend (FastAPI or Django)
│   ├── app/                  # Application code
│   ├── requirements.txt      # Python dependencies
│   └── Dockerfile
├── frontend/                 # Frontend (React or Next.js)
│   ├── public/               # Static assets
│   ├── src/                  # Source code
│   └── Dockerfile
├── nginx/                    # Nginx configuration
│   └── nginx.conf
├── docker-compose.yml        # Development configuration
├── docker-compose.prod.yml   # Production configuration
└── .github/                  # GitHub Actions workflows
    └── workflows/
```
Create `.env` files to manage environment variables:
```
# backend/.env
DEBUG=True
SECRET_KEY=your_secret_key
DATABASE_URL=postgresql://user:password@db:5432/dbname
ALLOWED_HOSTS=localhost,127.0.0.1
```

```
# frontend/.env
NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_ENV=development
```
Warning: never commit `.env` files containing sensitive information. Make sure to add them to your `.gitignore`.
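For small local tools you can read such files without extra dependencies. A minimal sketch (the `parse_env` helper is illustrative, not part of any framework; real projects usually use python-dotenv or pydantic settings instead):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# backend/.env
DEBUG=True
DATABASE_URL=postgresql://user:password@db:5432/dbname
"""
config = parse_env(sample)
print(config["DEBUG"])  # -> True
```

Note that this sketch ignores quoting and multi-line values, which a full dotenv implementation handles.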
Create a production-optimized Dockerfile for the FastAPI backend:

```dockerfile
# backend/Dockerfile
FROM python:3.9-slim AS builder
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc python3-dev && \
    rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt

FROM python:3.9-slim
WORKDIR /app
COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .
RUN pip install --no-cache-dir /wheels/*
COPY . .
RUN useradd -m myuser && chown -R myuser:myuser /app
USER myuser
# FastAPI is an ASGI app, so gunicorn needs the uvicorn worker class
# (make sure uvicorn is listed in requirements.txt)
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "--worker-class", "uvicorn.workers.UvicornWorker", "app.main:app"]
```
A similar configuration, adapted for Django:

```dockerfile
# backend/Dockerfile (Django)
FROM python:3.9-slim AS builder
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc python3-dev libpq-dev && \
    rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt

FROM python:3.9-slim
WORKDIR /app
# libpq5 is the runtime counterpart of libpq-dev, needed if psycopg2
# was built from source in the builder stage
RUN apt-get update && \
    apt-get install -y --no-install-recommends libpq5 && \
    rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .
RUN pip install --no-cache-dir /wheels/*
COPY . .
# collectstatic runs at build time, so the settings module must not
# require secrets that are only available at runtime
RUN python manage.py collectstatic --noinput
RUN useradd -m myuser && chown -R myuser:myuser /app
USER myuser
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "project.wsgi:application"]
```
Configuration for a React application:

```dockerfile
# frontend/Dockerfile (React)
FROM node:16-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
# The build context is ./frontend, so this assumes nginx/nginx.conf
# exists inside the frontend directory
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
An optimized configuration for Next.js:

```dockerfile
# frontend/Dockerfile (Next.js)
FROM node:16-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/public ./public
USER nextjs
EXPOSE 3000
CMD ["npm", "start"]
```
The `docker-compose.yml` file for development:
```yaml
version: '3.8'

services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    volumes:
      - ./backend:/app
    ports:
      - "8000:8000"
    env_file:
      - ./backend/.env
    depends_on:
      - db
    restart: unless-stopped

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    volumes:
      - ./frontend:/app
      - /app/node_modules
    ports:
      - "3000:3000"
    env_file:
      - ./frontend/.env
    depends_on:
      - backend
    restart: unless-stopped

  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: ${DB_USER:-postgres}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-postgres}
      POSTGRES_DB: ${DB_NAME:-appdb}
    ports:
      - "5432:5432"
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
    depends_on:
      - backend
      - frontend
    restart: unless-stopped

volumes:
  postgres_data:
```
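The backend reads `DATABASE_URL` from `backend/.env`. If a health check or migration script ever needs the individual connection parameters, the URL can be split with the standard library. A sketch (the `db_params` helper is illustrative):

```python
from urllib.parse import urlparse

def db_params(url: str) -> dict:
    """Split a postgresql:// URL into its connection parameters."""
    parsed = urlparse(url)
    return {
        "user": parsed.username,
        "password": parsed.password,
        "host": parsed.hostname,
        "port": parsed.port,
        "dbname": parsed.path.lstrip("/"),
    }

params = db_params("postgresql://user:password@db:5432/dbname")
print(params["host"], params["port"], params["dbname"])  # db 5432 dbname
```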
The `docker-compose.prod.yml` file for production:
```yaml
version: '3.8'

services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    image: myapp-backend
    ports:
      - "8000:8000"
    env_file:
      - ./backend/.env.prod
    depends_on:
      - db
    restart: unless-stopped
    networks:
      - app-network

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    image: myapp-frontend
    ports:
      - "3000:3000"
    env_file:
      - ./frontend/.env.prod
    depends_on:
      - backend
    restart: unless-stopped
    networks:
      - app-network

  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: ${DB_USER:-postgres}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-postgres}
      POSTGRES_DB: ${DB_NAME:-appdb}
    restart: unless-stopped
    networks:
      - app-network

  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - backend
      - frontend
    restart: unless-stopped
    networks:
      - app-network

volumes:
  postgres_data:

networks:
  app-network:
    driver: bridge
```
Automate tests, builds, and deployments with GitHub Actions. Configure self-hosted runners for greater control, and implement separate strategies for staging, pre-production, and production. Configure a workflow for testing and deployment:
```yaml
# .github/workflows/fastapi-ci.yml
name: FastAPI CI/CD

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13-alpine
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: testdb
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python 3.9
        uses: actions/setup-python@v5
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r backend/requirements.txt
          pip install pytest pytest-cov
      - name: Run tests
        working-directory: ./backend
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/testdb
        run: |
          pytest --cov=app --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v4

  deploy:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_TOKEN }}
      - name: Build and push backend
        working-directory: ./backend
        run: |
          docker build -t ${{ secrets.DOCKER_HUB_USERNAME }}/myapp-backend:latest .
          docker push ${{ secrets.DOCKER_HUB_USERNAME }}/myapp-backend:latest
      - name: SSH and deploy
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd /var/www/myapp
            docker-compose -f docker-compose.prod.yml pull backend
            docker-compose -f docker-compose.prod.yml up -d --no-deps backend
```
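The workflow above tags every image `:latest`. Many teams also tag images by commit SHA so a deployment can always be traced back (and rolled back) to an exact commit; in GitHub Actions the values would come from `github.repository` and `github.sha`. A hedged sketch of building such a tag:

```python
def image_tag(repo: str, name: str, sha: str, short: int = 7) -> str:
    """Build an immutable image tag from a commit SHA, e.g. user/app:a1b2c3d."""
    return f"{repo}/{name}:{sha[:short]}"

print(image_tag("myuser", "myapp-backend", "a1b2c3d4e5f6a7b8"))
# -> myuser/myapp-backend:a1b2c3d
```

Pushing both the SHA tag and `:latest` gives you immutable history plus a convenient moving pointer.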
The equivalent `.gitlab-ci.yml` configuration:
```yaml
# .gitlab-ci.yml
stages:
  - test
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  POSTGRES_DB: testdb
  POSTGRES_USER: runner
  POSTGRES_PASSWORD: ""
  POSTGRES_HOST_AUTH_METHOD: trust

test:
  stage: test
  image: python:3.9
  services:
    - postgres:13-alpine
  variables:
    # Point the tests at the service container (hostname "postgres")
    DATABASE_URL: postgresql://runner@postgres:5432/testdb
  before_script:
    - pip install -r backend/requirements.txt
    - pip install pytest pytest-cov
  script:
    - cd backend
    - pytest --cov=app --cov-report=xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: backend/coverage.xml

build_backend:
  stage: build
  image: docker:19.03.12
  services:
    - docker:19.03.12-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - echo "$DOCKER_REGISTRY_PASSWORD" | docker login -u "$DOCKER_REGISTRY_USER" --password-stdin
  script:
    - cd backend
    - docker build -t $CI_REGISTRY_IMAGE/backend:latest .
    - docker push $CI_REGISTRY_IMAGE/backend:latest
  only:
    - main

deploy_production:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client rsync
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  script:
    - ssh $SSH_USER@$SSH_HOST "cd /var/www/myapp && docker-compose -f docker-compose.prod.yml pull backend && docker-compose -f docker-compose.prod.yml up -d --no-deps backend"
  only:
    - main
```
A workflow to build and deploy the frontend:
```yaml
# .github/workflows/frontend-ci.yml
name: Frontend CI/CD

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Use Node.js 16.x
        uses: actions/setup-node@v4
        with:
          node-version: '16.x'
      - name: Install dependencies
        working-directory: ./frontend
        run: npm ci
      - name: Run tests
        working-directory: ./frontend
        run: npm test
      - name: Build
        working-directory: ./frontend
        run: npm run build

  deploy:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_TOKEN }}
      - name: Build and push frontend
        working-directory: ./frontend
        run: |
          docker build -t ${{ secrets.DOCKER_HUB_USERNAME }}/myapp-frontend:latest .
          docker push ${{ secrets.DOCKER_HUB_USERNAME }}/myapp-frontend:latest
      - name: SSH and deploy
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd /var/www/myapp
            docker-compose -f docker-compose.prod.yml pull frontend
            docker-compose -f docker-compose.prod.yml up -d --no-deps frontend
```
Steps to prepare an Ubuntu/Debian server:
```bash
# Update the system
sudo apt update && sudo apt upgrade -y

# Install basic dependencies
sudo apt install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

# Add the Docker repository
# (apt-key is deprecated on recent Ubuntu releases; see Docker's
# install docs for the keyring-based alternative)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

# Install Docker and docker-compose
# (standalone docker-compose 1.29.2 shown; newer setups use the
# docker-compose-plugin package and the `docker compose` subcommand)
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Allow running Docker without root
sudo usermod -aG docker $USER
newgrp docker

# Install Nginx (if not run in a container)
sudo apt install -y nginx
sudo systemctl enable nginx

# Configure the firewall
sudo ufw allow OpenSSH
sudo ufw allow http
sudo ufw allow https
sudo ufw enable
```
A script to automate deployment on the server:
```bash
#!/bin/bash

# Variables
APP_DIR="/var/www/myapp"
DOCKER_COMPOSE_FILE="docker-compose.prod.yml"

# Create the directory if it does not exist
sudo mkdir -p $APP_DIR
sudo chown -R $USER:$USER $APP_DIR
cd $APP_DIR

# Clone the repository, or update it if it already exists
if [ ! -d "$APP_DIR/.git" ]; then
    git clone https://github.com/tu-usuario/tu-repo.git .
else
    git pull origin main
fi

# Copy environment files if they do not exist
if [ ! -f "$APP_DIR/backend/.env.prod" ]; then
    cp $APP_DIR/backend/.env.example $APP_DIR/backend/.env.prod
fi
if [ ! -f "$APP_DIR/frontend/.env.prod" ]; then
    cp $APP_DIR/frontend/.env.example $APP_DIR/frontend/.env.prod
fi

# Pull images and bring up the services
docker-compose -f $DOCKER_COMPOSE_FILE pull
docker-compose -f $DOCKER_COMPOSE_FILE up -d --build

# Run migrations (Django); -T disables TTY allocation for scripted use
docker-compose -f $DOCKER_COMPOSE_FILE exec -T backend python manage.py migrate

# Collect static files (Django)
docker-compose -f $DOCKER_COMPOSE_FILE exec -T backend python manage.py collectstatic --noinput

echo "Deployment completed successfully"
```
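After `up -d` the backend can take a few seconds to accept connections, so deployment scripts often poll a health endpoint before declaring success. A sketch of such a probe with an exponential backoff schedule (the `/health` URL is an assumption; your app may expose a different endpoint):

```python
import time
import urllib.request

def backoff_delays(base: float = 1.0, factor: float = 2.0,
                   cap: float = 30.0, attempts: int = 6) -> list:
    """Exponential backoff delays, capped at `cap` seconds."""
    return [min(base * factor ** i, cap) for i in range(attempts)]

def wait_until_healthy(url: str) -> bool:
    """Poll `url` until it returns HTTP 200 or the delays are exhausted."""
    for delay in backoff_delays():
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:  # connection refused, timeout, DNS failure...
            pass
        time.sleep(delay)
    return False

# Example (assumes the backend exposes a health endpoint):
# wait_until_healthy("http://localhost:8000/health")
```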
The Nginx configuration file for production:
```nginx
# nginx/nginx.conf
upstream backend {
    server backend:8000;
}

upstream frontend {
    server frontend:3000;
}

server {
    listen 80;
    server_name tu-dominio.com www.tu-dominio.com;

    location / {
        proxy_pass http://frontend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /api {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /admin {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # These paths must be shared with the backend container,
    # e.g. via a Docker volume mounted into the nginx container
    location /static/ {
        alias /app/staticfiles/;
    }

    location /media/ {
        alias /app/media/;
    }
}
```
Automate obtaining SSL certificates:
```bash
# Install Certbot
sudo apt install -y certbot python3-certbot-nginx

# Obtain a certificate (if Nginx runs on the host)
sudo certbot --nginx -d tu-dominio.com -d www.tu-dominio.com

# Or obtain/renew with Docker (using the certbot container)
docker run -it --rm --name certbot \
    -v "/etc/letsencrypt:/etc/letsencrypt" \
    -v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
    -p 80:80 \
    certbot/certbot certonly --standalone \
    -d tu-dominio.com -d www.tu-dominio.com \
    --non-interactive --agree-tos \
    --email tu-email@example.com

# Set up automatic renewal
(crontab -l 2>/dev/null; echo "0 0 * * * docker run --rm -v /etc/letsencrypt:/etc/letsencrypt -v /var/lib/letsencrypt:/var/lib/letsencrypt certbot/certbot renew --quiet && docker kill --signal=HUP nginx") | crontab -
```
Update the Nginx configuration for SSL:
```nginx
server {
    listen 443 ssl;
    server_name tu-dominio.com www.tu-dominio.com;

    ssl_certificate /etc/letsencrypt/live/tu-dominio.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/tu-dominio.com/privkey.pem;

    # Recommended SSL settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    # OCSP stapling also requires a `resolver` directive to work
    ssl_stapling on;
    ssl_stapling_verify on;

    # Remaining configuration (locations, etc.) same as above...
}

server {
    listen 80;
    server_name tu-dominio.com www.tu-dominio.com;
    return 301 https://$host$request_uri;
}
```
Add monitoring containers in a `docker-compose.monitoring.yml` file:
```yaml
version: '3.8'

services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    depends_on:
      - cadvisor
    restart: unless-stopped
    networks:
      - app-network

  cadvisor:
    image: gcr.io/cadvisor/cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - "8080:8080"
    restart: unless-stopped
    networks:
      - app-network

  grafana:
    image: grafana/grafana
    ports:
      - "3001:3000"
    volumes:
      - grafana-storage:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=secret
    depends_on:
      - prometheus
    restart: unless-stopped
    networks:
      - app-network

  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    volumes:
      - ./monitoring/loki-config.yml:/etc/loki/local-config.yaml
    command: -config.file=/etc/loki/local-config.yaml
    restart: unless-stopped
    networks:
      - app-network

  promtail:
    image: grafana/promtail:latest
    volumes:
      - ./monitoring/promtail-config.yml:/etc/promtail/config.yml
      - /var/log:/var/log
    command: -config.file=/etc/promtail/config.yml
    restart: unless-stopped
    networks:
      - app-network

volumes:
  grafana-storage:

networks:
  app-network:
    external: true
```
The `monitoring/prometheus.yml` file:
```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  - job_name: 'backend'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['backend:8000']

  # Assumes a node-exporter service is also running on the network
  - job_name: 'node'
    static_configs:
      - targets: ['node-exporter:9100']
```
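The `backend` job assumes the app exposes `/metrics` in the Prometheus text exposition format. With FastAPI or Django you would normally use a client library such as prometheus-client for this; the stdlib-only helper below merely illustrates what that wire format looks like (`render_metrics` is a hypothetical name):

```python
def render_metrics(metrics: dict) -> str:
    """Render counters in the Prometheus text exposition format.

    `metrics` maps a metric name to a (help_text, value) pair.
    """
    lines = []
    for name, (help_text, value) in sorted(metrics.items()):
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

print(render_metrics({"http_requests_total": ("Total HTTP requests.", 42)}))
```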
A script to back up the database and Docker volumes:
```bash
#!/bin/bash

# Configuration
BACKUP_DIR="/backups"
DATE=$(date +%Y-%m-%d_%H-%M-%S)
# DB_SERVICE must match the running container's name (check `docker ps`)
DB_SERVICE="db"
DB_NAME="appdb"
DB_USER="postgres"
DB_PASS="postgres"

# Create the backup directory if it does not exist
mkdir -p $BACKUP_DIR

# PostgreSQL dump
docker exec -e PGPASSWORD=$DB_PASS $DB_SERVICE pg_dump -U $DB_USER -d $DB_NAME > $BACKUP_DIR/db_backup_$DATE.sql

# Compress the dump
gzip $BACKUP_DIR/db_backup_$DATE.sql

# Back up the Docker volume
docker run --rm --volumes-from ${DB_SERVICE} -v $BACKUP_DIR:/backup alpine \
    tar czf /backup/volume_backup_$DATE.tar.gz /var/lib/postgresql/data

# Delete old backups (older than 7 days)
find $BACKUP_DIR -type f -mtime +7 -name "*.gz" -delete

echo "Backup completed: db_backup_$DATE.sql.gz and volume_backup_$DATE.tar.gz in $BACKUP_DIR"
```
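The 7-day retention step above can also be expressed as a pure function, which is easier to unit-test than a `find` invocation. A sketch (file names are illustrative; ages in days are passed in explicitly):

```python
def expired_backups(files: dict, max_age_days: int = 7) -> list:
    """Return backup file names whose age in days exceeds max_age_days."""
    return sorted(name for name, age in files.items() if age > max_age_days)

ages = {
    "db_backup_2024-01-01.sql.gz": 10,
    "db_backup_2024-01-08.sql.gz": 3,
    "volume_backup_2024-01-02.tar.gz": 9,
}
print(expired_backups(ages))
# -> ['db_backup_2024-01-01.sql.gz', 'volume_backup_2024-01-02.tar.gz']
```

This mirrors `find ... -mtime +7`: a file exactly 7 days old is kept, anything strictly older is deleted.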
Schedule it with cron:

```bash
# Run daily at 2 AM
0 2 * * * /path/to/backup_script.sh
```
Common maintenance tasks:
```bash
# Remove unused containers and images
docker system prune -f

# Update all containers
docker-compose -f docker-compose.prod.yml pull
docker-compose -f docker-compose.prod.yml up -d

# View the logs of a service
docker-compose -f docker-compose.prod.yml logs -f backend

# Scale services (requires removing the fixed host port mapping
# for the scaled service, otherwise the ports will conflict)
docker-compose -f docker-compose.prod.yml up -d --scale backend=4

# View resource usage
docker stats

# Inspect the network
docker network inspect app-network
```
This manual has covered the full process, from initial configuration to production deployment, for modern applications built with FastAPI, Django, React, and Next.js, using Docker and CI/CD systems.

Congratulations! You now have a complete development and deployment pipeline that lets you ship web applications efficiently, scalably, and maintainably.