Docker Setup Guide
This guide explains how to use Docker to run the fullstack-bun monorepo in both development and production environments.
Architecture
This monorepo contains:
- Frontend: React app with Vite and React Router
- API: Hono server running on Bun
- PostgreSQL: Database service
- Redis: Cache/session store
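For orientation, the four services map roughly to the following docker-compose.yml skeleton. This is a simplified sketch: image tags and build layout are illustrative assumptions, and the compose files in the repo are authoritative.

services:
  frontend:                  # React + Vite (http://localhost:5173)
    build:
      context: .
      dockerfile: apps/frontend/Dockerfile
    ports:
      - "5173:5173"
  api:                       # Hono server on Bun (http://localhost:3001)
    build:
      context: .
      dockerfile: apps/api/Dockerfile
    ports:
      - "3001:3001"
    depends_on:
      - postgres
      - redis
  postgres:                  # database service
    image: postgres:16       # illustrative tag -- pin your own version
  redis:                     # cache / session store
    image: redis:7           # illustrative tag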
Prerequisites
- Docker and Docker Compose installed
- Bun installed (for local development without Docker)
Development Setup
Quick Start with npm scripts
# Start all services in development mode
bun run docker:dev
# Start with rebuild (after dependency changes)
bun run docker:dev:build
# Stop all services
bun run docker:dev:down
# Stop and remove volumes (clears database data)
bun run docker:dev:clean

Manual Docker Compose commands
# Start all services
docker-compose up
# Start with rebuild
docker-compose up --build

This will start:
- Frontend on http://localhost:5173 (HMR on :5174)
- API on http://localhost:3001
- PostgreSQL on localhost:5432
- Redis on localhost:6379
Start specific services
# Start only the API
docker-compose up api
# Start only the frontend
docker-compose up frontend
# Start backend services (postgres, redis, api)
docker-compose up postgres redis api

View logs
# All services
docker-compose logs -f
# Specific service
docker-compose logs -f api
docker-compose logs -f frontend

Stop services
# Stop all services
docker-compose down
# Stop and remove volumes (clears database data)
docker-compose down -v

Production Setup
Quick Start with npm scripts
# Start production environment
bun run docker:prod
# Start with rebuild
bun run docker:prod:build
# Stop production services
bun run docker:prod:down
# Stop and remove volumes
bun run docker:prod:clean

Manual Docker Compose commands
# Build and run production images
docker-compose -f docker-compose.prod.yml up -d --build
# Stop production services
docker-compose -f docker-compose.prod.yml down

This will:
- Build optimized production images
- Run frontend on http://localhost:5173
- Run API on http://localhost:3001
- Keep PostgreSQL and Redis isolated on the internal Docker network (access via docker-compose exec)

To connect for maintenance, exec into the containers:
docker-compose -f docker-compose.prod.yml exec postgres psql -U $POSTGRES_USER $POSTGRES_DB
docker-compose -f docker-compose.prod.yml exec redis redis-cli -a $REDIS_PASSWORD
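In practice this isolation usually means the production compose file simply does not publish host ports for postgres and redis, so they are reachable only by other services on the compose network. A sketch of that shape (an assumption about the file's layout, not the literal contents):

services:
  postgres:
    image: postgres:16      # no "ports:" mapping -- internal network only
    expose:
      - "5432"              # visible to other compose services, not to the host
  redis:
    image: redis:7
    expose:
      - "6379"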
Environment Variables
- Copy the example file:
cp .env.example .env
- Update .env with your production values:
# PostgreSQL Configuration
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your_secure_password_here
POSTGRES_DB=mydatabase
POSTGRES_PORT=5432
# Redis Configuration
REDIS_PORT=6379
REDIS_PASSWORD=your_secure_redis_password_here
# Application Ports
API_PORT=3001
FRONTEND_PORT=5173
# Production Configuration
CORS_ALLOWLISTED_ORIGINS=https://yourdomain.com,https://www.yourdomain.com
VITE_API_BASE_URL=https://api.yourdomain.com

Important: Always use strong passwords and secure values in production! Redis is configured to require authentication: set REDIS_PASSWORD and keep it secret (e.g., load it from your secret manager).
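One common way to enforce the Redis password in compose looks like this. This is a sketch assuming the official redis image; adjust it to the repo's actual production compose file.

services:
  redis:
    image: redis:7
    command: ["redis-server", "--requirepass", "${REDIS_PASSWORD}"]
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5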
Development Features
Hot Reload
Both frontend and API support hot reload:
- Changes to apps/frontend/src/* will reload the frontend
- Changes to apps/api/src/* will reload the API
- Changes to packages/* will be picked up by both apps
- Config file changes (vite.config.ts, tsconfig.json, etc.) are also mounted
Volume Mounts
Development containers mount source code and config as volumes:
Frontend:
- ./apps/frontend/src → /app/apps/frontend/src
- ./apps/frontend/public → /app/apps/frontend/public
- ./apps/frontend/locales → /app/apps/frontend/locales
- ./apps/frontend/vite.config.ts → /app/apps/frontend/vite.config.ts (read-only)
- ./apps/frontend/react-router.config.ts → /app/apps/frontend/react-router.config.ts (read-only)
- ./apps/frontend/tsconfig*.json → /app/apps/frontend/tsconfig*.json (read-only)
API:
- ./apps/api/src → /app/apps/api/src
- ./apps/api/tsconfig*.json → /app/apps/api/tsconfig*.json (read-only)
Shared:
- ./packages → /app/packages
- ./package.json → /app/package.json (read-only)
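Put together, the API service's development mounts might look roughly like this in docker-compose.yml (a sketch using the paths listed above; the node_modules handling is explained next):

services:
  api:
    volumes:
      - ./apps/api/src:/app/apps/api/src
      - ./apps/api/tsconfig.json:/app/apps/api/tsconfig.json:ro
      - ./packages:/app/packages
      - ./package.json:/app/package.json:ro
      - /app/node_modules            # anonymous volume -- keeps installed deps
      - /app/apps/api/node_modules   # out of the way of the bind mounts above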
Node modules are preserved in separate anonymous volumes so the host bind mounts don't overwrite them. If you add or change dependencies and the container still can't find them, remove only the API node_modules volumes (keep the database volumes intact) and rebuild:
docker-compose down
docker volume rm myapp_api_node_modules myapp_api_app_node_modules
docker-compose build --no-cache api
docker-compose up api

Dockerfile Details
Frontend Dockerfile
Multi-stage build:
- deps: Installs dependencies
- dev: Development server with Vite
- build: Builds production assets
- production: Serves built app with Hono
API Dockerfile
Multi-stage build:
- deps: Installs dependencies
- dev: Development server with Bun watch mode
- production: Production server
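The compose files can select which stage to run via build.target. A sketch using the stage names above (the service shape is assumed, not copied from the repo):

services:
  api:
    build:
      context: .
      dockerfile: apps/api/Dockerfile
      target: dev          # docker-compose.yml runs the watch-mode stage
# In docker-compose.prod.yml the same service would instead set:
#       target: production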
Email Testing with Mailpit
Mailpit is a lightweight email testing tool that captures all outgoing emails and provides a web UI to view them. Perfect for local development without sending real emails.
Quick Start with Docker
Run Mailpit as a standalone container:
docker run -d \
--name=mailpit \
--restart unless-stopped \
-v /path/to/your/data:/data \
-e MP_DATABASE=/data/mailpit.db \
-e MP_SMTP_AUTH_ACCEPT_ANY=1 \
-e MP_SMTP_AUTH_ALLOW_INSECURE=1 \
-p 8025:8025 \
-p 1025:1025 \
axllent/mailpit

Replace /path/to/your/data with an actual path on your machine where you want to store email data (e.g., ~/mailpit-data).
This will start:
- SMTP server on localhost:1025 (for your app to send emails)
- Web UI on http://localhost:8025 (to view captured emails)
For more options/details see https://mailpit.axllent.org/docs/configuration/runtime-options/
Integration with Docker Compose
For better integration with your development environment, add Mailpit to docker-compose.yml:
services:
  # ... existing services ...

  mailpit:
    image: axllent/mailpit
    container_name: mailpit
    restart: unless-stopped
    ports:
      - "8025:8025" # Web UI
      - "1025:1025" # SMTP
    environment:
      MP_DATABASE: /data/mailpit.db
      MP_SMTP_AUTH_ACCEPT_ANY: 1
      MP_SMTP_AUTH_ALLOW_INSECURE: 1
    volumes:
      - mailpit_data:/data

volumes:
  # ... existing volumes ...
  mailpit_data:

Then start it with your other services:
# Start with all services
bun run docker:dev
# Or start just Mailpit
docker-compose up mailpit

Configure Your Application
Update your API's email configuration to use Mailpit:
# .env or .env.local
SMTP_HOST=localhost # Use host.docker.internal if the API itself runs inside a Docker container (e.g., Docker Desktop on macOS)
SMTP_PORT=1025
SMTP_USER=any_user # Accepts any credentials
SMTP_PASSWORD=any_password # Accepts any credentials
SMTP_FROM=noreply@yourapp.local

If using Docker Compose, the API should reference the service name:
SMTP_HOST=mailpit # Service name instead of localhost
SMTP_PORT=1025
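If you prefer to wire this in compose rather than in .env, the api service can carry the SMTP settings directly. A sketch, assuming the variable names above match what the API reads:

services:
  api:
    environment:
      SMTP_HOST: mailpit   # resolves via the compose network's service DNS
      SMTP_PORT: 1025
    depends_on:
      - mailpit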
Using Mailpit
- Send emails from your app - All emails will be captured instead of actually being sent
- View emails - Open http://localhost:8025 in your browser
- Test email flows - View HTML/text versions, check links, verify content
- Search and filter - Search by recipient, subject, or content
Managing Mailpit
# View logs
docker logs mailpit
# Stop Mailpit
docker stop mailpit
# Start Mailpit
docker start mailpit
# Remove Mailpit and data
docker stop mailpit
docker rm mailpit
docker volume rm <project>_mailpit_data # If using Docker Compose

Benefits for Development
- No real emails sent during development/testing
- Inspect email content and formatting
- Test email workflows without external services
- No configuration needed (accepts all authentication)
- Lightweight and fast
Troubleshooting
Port conflicts
If ports are already in use, modify docker-compose.yml:
services:
  frontend:
    ports:
      - "5174:5173" # Use port 5174 on the host instead

Container won't start
Check logs:
docker-compose logs api
docker-compose logs frontend

Dependencies not installing
Rebuild without cache:
docker-compose build --no-cache

Database connection issues
Ensure the API waits for PostgreSQL:
docker-compose up postgres # Wait for healthy
docker-compose up api # Then start the API

Or use the healthcheck dependencies (already configured).
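The healthcheck-based startup ordering looks roughly like this in compose (a sketch of the pattern, not the literal file):

services:
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 10
  api:
    depends_on:
      postgres:
        condition: service_healthy   # api starts only after postgres reports healthy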
Useful Commands
NPM Scripts (Recommended)
# Development
bun run docker:dev # Start dev environment
bun run docker:dev:build # Rebuild and start
bun run docker:dev:down # Stop dev services
bun run docker:dev:clean # Stop and remove volumes
# Production
bun run docker:prod # Start production
bun run docker:prod:build # Rebuild and start
bun run docker:prod:down # Stop production
bun run docker:prod:clean # Stop and remove volumes

Docker Compose Commands
# Enter a running container
docker-compose exec api sh
docker-compose exec frontend sh
# Run commands inside containers
docker-compose exec api bun run lint
docker-compose exec frontend bun test
# Remove all containers and volumes
docker-compose down -v
# Rebuild specific service
docker-compose build api
docker-compose build frontend
# View running containers
docker-compose ps
# Check resource usage
docker stats

CI/CD Integration
For CI/CD pipelines, you can:
- Build images:
docker build -f apps/api/Dockerfile -t myapp-api:latest .
docker build -f apps/frontend/Dockerfile -t myapp-frontend:latest .
- Run tests in containers:
docker-compose run --rm frontend bun test
- Push to registry:
docker tag myapp-api:latest registry.example.com/myapp-api:latest
docker push registry.example.com/myapp-api:latest
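As one possible end-to-end example, here is a sketch that combines those steps in a pipeline. GitHub Actions, the registry URL, and the secret names are assumptions for illustration, not part of this repo:

# .github/workflows/docker.yml (illustrative)
name: docker
on: [push]
jobs:
  build-test-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build images
        run: |
          docker build -f apps/api/Dockerfile -t myapp-api:latest .
          docker build -f apps/frontend/Dockerfile -t myapp-frontend:latest .
      - name: Run tests in containers
        run: docker-compose run --rm frontend bun test
      - name: Push to registry
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.example.com -u "${{ secrets.REGISTRY_USER }}" --password-stdin
          docker tag myapp-api:latest registry.example.com/myapp-api:latest
          docker push registry.example.com/myapp-api:latest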
Best Practices
- Don't commit .env files - Use .env.example as a template
- Use specific versions - Pin Docker image versions in production
- Health checks - Already configured for all services
- Resource limits - Add memory/CPU limits for production (see the sketch below)
- Security - Don't run as root; use read-only filesystems where possible
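A compose sketch illustrating the version pinning, resource limit, and security items above. The values are examples to adapt, not recommendations from this repo:

services:
  api:
    image: registry.example.com/myapp-api:1.4.2   # pinned version, not :latest
    user: "1000:1000"                             # don't run as root
    read_only: true                               # read-only root filesystem
    tmpfs:
      - /tmp                                      # writable scratch space if needed
    deploy:
      resources:
        limits:                                   # honored by recent Docker Compose versions
          cpus: "1.0"
          memory: 512M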