initial commit 6d593b4554, 2025-10-19 22:09:35 +03:00 (114 changed files with 23622 additions and 0 deletions)

# Docker Setup
Learn how to configure and run the FastAPI Boilerplate using Docker Compose. The project includes a complete containerized setup with PostgreSQL, Redis, background workers, and optional services.
## Docker Compose Architecture
The boilerplate includes these core services:
```yaml
services:
  web:     # FastAPI application (uvicorn or gunicorn)
  worker:  # ARQ background task worker
  db:      # PostgreSQL 13 database
  redis:   # Redis Alpine for caching/queues

  # Optional services (commented out by default):
  # pgadmin:          # Database administration
  # nginx:            # Reverse proxy
  # create_superuser: # One-time superuser creation
  # create_tier:      # One-time tier creation
```
## Basic Docker Compose
### Main Configuration
The main `docker-compose.yml` includes:
```yaml
version: '3.8'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    # Development mode (reload enabled)
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
    # Production mode (uncomment for production)
    # command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
    env_file:
      - ./src/.env
    ports:
      - "8000:8000"
    depends_on:
      - db
      - redis
    volumes:
      - ./src/app:/code/app
      - ./src/.env:/code/.env

  worker:
    build:
      context: .
      dockerfile: Dockerfile
    command: arq app.core.worker.settings.WorkerSettings
    env_file:
      - ./src/.env
    depends_on:
      - db
      - redis
    volumes:
      - ./src/app:/code/app
      - ./src/.env:/code/.env

  db:
    image: postgres:13
    env_file:
      - ./src/.env
    volumes:
      - postgres-data:/var/lib/postgresql/data
    expose:
      - "5432"

  redis:
    image: redis:alpine
    volumes:
      - redis-data:/data
    expose:
      - "6379"

volumes:
  postgres-data:
  redis-data:
```
### Environment File Loading
All services automatically load environment variables from `./src/.env`:
```yaml
env_file:
  - ./src/.env
```
The Docker services rely on these environment variables:
- `POSTGRES_USER`, `POSTGRES_PASSWORD`, and `POSTGRES_DB` for the database
- `REDIS_*_HOST` variables, which must be set to the Docker service names (e.g. `redis`) rather than `localhost`
- All other application settings from your `.env` file
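To illustrate how these variables fit together, here is a hedged sketch (not the boilerplate's actual code; the `postgres_dsn` helper is our own name) of composing them into the async SQLAlchemy connection string:

```python
# Sketch: compose the PostgreSQL DSN from the same variables that
# Docker Compose loads from ./src/.env. Inside Compose, POSTGRES_SERVER
# is the service name "db", not localhost.
def postgres_dsn(env: dict) -> str:
    return (
        f"postgresql+asyncpg://{env['POSTGRES_USER']}:{env['POSTGRES_PASSWORD']}"
        f"@{env['POSTGRES_SERVER']}:{env.get('POSTGRES_PORT', '5432')}/{env['POSTGRES_DB']}"
    )

env = {
    "POSTGRES_USER": "postgres",
    "POSTGRES_PASSWORD": "secret",
    "POSTGRES_SERVER": "db",  # Compose service name
    "POSTGRES_DB": "app",
}
print(postgres_dsn(env))  # postgresql+asyncpg://postgres:secret@db:5432/app
```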
## Service Details
### Web Service (FastAPI Application)
The web service runs your FastAPI application:
```yaml
web:
  build:
    context: .
    dockerfile: Dockerfile
  # Development: uvicorn with reload
  command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
  # Production: gunicorn with multiple workers (commented out)
  # command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
  env_file:
    - ./src/.env
  ports:
    - "8000:8000"  # Direct access in development
  volumes:
    - ./src/app:/code/app  # Live code reloading
    - ./src/.env:/code/.env
```
**Key Features:**
- **Development mode**: Uses uvicorn with `--reload` for automatic code reloading
- **Production mode**: Switch to gunicorn with multiple workers (commented out)
- **Live reloading**: Source code mounted as volume for development
- **Port exposure**: Direct access on port 8000 (can be disabled for nginx)
### Worker Service (Background Tasks)
Handles background job processing with ARQ:
```yaml
worker:
  build:
    context: .
    dockerfile: Dockerfile
  command: arq app.core.worker.settings.WorkerSettings
  env_file:
    - ./src/.env
  depends_on:
    - db
    - redis
  volumes:
    - ./src/app:/code/app
    - ./src/.env:/code/.env
```
**Features:**
- Runs ARQ worker for background job processing
- Shares the same codebase and environment as web service
- Automatically connects to Redis for job queues
- Live code reloading in development
### Database Service (PostgreSQL 13)
```yaml
db:
  image: postgres:13
  env_file:
    - ./src/.env
  volumes:
    - postgres-data:/var/lib/postgresql/data
  expose:
    - "5432"  # Internal network only
```
**Configuration:**
- Uses environment variables: `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DB`
- Data persisted in named volume `postgres-data`
- Only exposed to internal Docker network (no external port)
- To enable external access, uncomment the ports section
### Redis Service
```yaml
redis:
  image: redis:alpine
  volumes:
    - redis-data:/data
  expose:
    - "6379"  # Internal network only
```
**Features:**
- Lightweight Alpine Linux image
- Data persistence with named volume
- Used for caching, job queues, and rate limiting
- Internal network access only
## Optional Services
### Database Administration (pgAdmin)
Uncomment to enable web-based database management:
```yaml
pgadmin:
  container_name: pgadmin4
  image: dpage/pgadmin4:latest
  restart: always
  ports:
    - "5050:80"
  volumes:
    - pgadmin-data:/var/lib/pgadmin
  env_file:
    - ./src/.env
  depends_on:
    - db
```
**Usage:**
- Access at `http://localhost:5050`
- Requires `PGADMIN_DEFAULT_EMAIL` and `PGADMIN_DEFAULT_PASSWORD` in `.env`
- Connect to database using service name `db` and port `5432`
### Reverse Proxy (Nginx)
Uncomment for production-style reverse proxy:
```yaml
nginx:
  image: nginx:latest
  ports:
    - "80:80"
  volumes:
    - ./default.conf:/etc/nginx/conf.d/default.conf
  depends_on:
    - web
```
**Configuration:**
The included `default.conf` provides:
```nginx
server {
    listen 80;

    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
**When using nginx:**
1. Uncomment the nginx service
2. Comment out the `ports` section in the web service
3. Uncomment `expose: ["8000"]` in the web service
### Initialization Services
#### Create First Superuser
```yaml
create_superuser:
  build:
    context: .
    dockerfile: Dockerfile
  env_file:
    - ./src/.env
  depends_on:
    - db
    - web
  command: python -m src.scripts.create_first_superuser
  volumes:
    - ./src:/code/src
```
#### Create First Tier
```yaml
create_tier:
  build:
    context: .
    dockerfile: Dockerfile
  env_file:
    - ./src/.env
  depends_on:
    - db
    - web
  command: python -m src.scripts.create_first_tier
  volumes:
    - ./src:/code/src
```
**Usage:**
- These are one-time setup services
- Uncomment when you need to initialize data
- Run once, then comment out again
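The idempotent pattern behind these one-time services can be sketched as follows (illustrative names only; the real scripts live under `src/scripts/`, and the in-memory store here is a stand-in for the database):

```python
import asyncio

# Illustrative stand-in for the database layer the real scripts use.
class FakeUserStore:
    def __init__(self) -> None:
        self.users: dict = {}

    async def get(self, email: str):
        return self.users.get(email)

    async def create(self, email: str, **fields) -> None:
        self.users[email] = {"email": email, **fields}

async def create_first_superuser(store: FakeUserStore, email: str) -> bool:
    # Idempotent: running the container a second time is a no-op
    if await store.get(email):
        return False
    await store.create(email, is_superuser=True)
    return True

store = FakeUserStore()
print(asyncio.run(create_first_superuser(store, "admin@example.com")))  # True
print(asyncio.run(create_first_superuser(store, "admin@example.com")))  # False
```

Because the operation checks for existing data first, forgetting to comment the service out again does no harm beyond an extra container start.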
## Dockerfile Details
The project uses a multi-stage Dockerfile with `uv` for fast Python package management:
### Builder Stage
```dockerfile
FROM ghcr.io/astral-sh/uv:python3.11-bookworm-slim AS builder

ENV UV_COMPILE_BYTECODE=1
ENV UV_LINK_MODE=copy

WORKDIR /app

# Install dependencies (cached layer)
RUN --mount=type=cache,target=/root/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv sync --locked --no-install-project

# Copy and install project
COPY . /app
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --locked --no-editable
```
### Final Stage
```dockerfile
FROM python:3.11-slim-bookworm

# Create non-root user for security
RUN groupadd --gid 1000 app \
    && useradd --uid 1000 --gid app --shell /bin/bash --create-home app

# Copy virtual environment from builder
COPY --from=builder --chown=app:app /app/.venv /app/.venv
ENV PATH="/app/.venv/bin:$PATH"

USER app
WORKDIR /code

# Default command (can be overridden)
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
```
**Security Features:**
- Non-root user execution
- Multi-stage build for smaller final image
- Cached dependency installation
## Common Docker Commands
### Development Workflow
```bash
# Start all services
docker compose up
# Start in background
docker compose up -d
# Rebuild and start (after code changes)
docker compose up --build
# View logs
docker compose logs -f web
docker compose logs -f worker
# Stop services
docker compose down
# Stop and remove volumes (reset data)
docker compose down -v
```
### Service Management
```bash
# Start specific services
docker compose up web db redis
# Scale workers
docker compose up --scale worker=3
# Execute commands in running containers
docker compose exec web bash
docker compose exec db psql -U postgres
docker compose exec redis redis-cli
# View service status
docker compose ps
```
### Production Mode
To switch to production mode:
1. **Enable Gunicorn:**
```yaml
# Comment out uvicorn line
# command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
# Uncomment gunicorn line
command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
```
2. **Enable Nginx** (optional):
```yaml
# Uncomment the nginx service
nginx:
  image: nginx:latest
  ports:
    - "80:80"

# In the web service, comment out ports and uncomment expose
# ports:
#   - "8000:8000"
expose:
  - "8000"
```
3. **Remove development volumes:**
```yaml
# Remove or comment out for production
# volumes:
# - ./src/app:/code/app
# - ./src/.env:/code/.env
```
## Environment Configuration
### Service Communication
Services communicate using service names:
```env
# In your .env file for Docker
POSTGRES_SERVER=db          # Not localhost
REDIS_CACHE_HOST=redis      # Not localhost
REDIS_QUEUE_HOST=redis
REDIS_RATE_LIMIT_HOST=redis
```
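Docker's embedded DNS resolves these service names from inside the containers. A minimal sketch (our own helper, not part of the boilerplate) for checking that a dependency is reachable before starting:

```python
import socket

def wait_for(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures and refused connections
        return False

# Inside a Compose container you would call, e.g., wait_for("db", 5432)
print(wait_for("host.invalid", 5432))  # False: name does not resolve
```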
### Port Management
**Development (default):**
- Web: `localhost:8000` (direct access)
- Database: `localhost:5432` (uncomment ports to enable)
- Redis: `localhost:6379` (uncomment ports to enable)
- pgAdmin: `localhost:5050` (if enabled)
**Production with Nginx:**
- Web: `localhost:80` (through nginx)
- Database: Internal only
- Redis: Internal only
## Troubleshooting
### Common Issues
**Container won't start:**
```bash
# Check logs
docker compose logs web
# Rebuild image
docker compose build --no-cache web
# Check environment file
docker compose exec web env | grep POSTGRES
```
**Database connection issues:**
```bash
# Check if db service is running
docker compose ps db
# Test connection from web container
docker compose exec web ping db
# Check database logs
docker compose logs db
```
**Port conflicts:**
```bash
# Check what's using the port
lsof -i :8000
```

If the port is taken, map a different host port in `docker-compose.yml`:

```yaml
ports:
  - "8001:8000"  # Use port 8001 instead
```
### Development vs Production
**Development features:**
- Live code reloading with volume mounts
- Direct port access
- uvicorn with `--reload`
- Exposed database/redis ports for debugging
**Production optimizations:**
- No volume mounts (code baked into image)
- Nginx reverse proxy
- Gunicorn with multiple workers
- Internal service networking only
- Resource limits and health checks
## Best Practices
### Development
- Use volume mounts for live code reloading
- Enable direct port access for debugging
- Use uvicorn with reload for fast development
- Enable optional services (pgAdmin) as needed
### Production
- Switch to gunicorn with multiple workers
- Use nginx for reverse proxy and load balancing
- Remove volume mounts and bake code into images
- Use internal networking only
- Set resource limits and health checks
### Security
- Containers run as non-root user
- Use internal networking for service communication
- Don't expose database/redis ports externally
- Use Docker secrets for sensitive data in production
### Monitoring
- Use `docker compose logs` to monitor services
- Set up health checks for all services
- Monitor resource usage with `docker stats`
- Use structured logging for better observability
The Docker setup provides everything you need for both development and production. Start with the default configuration and customize as your needs grow!

# Environment-Specific Configuration
Learn how to configure your FastAPI application for different environments (development, staging, production) with appropriate security, performance, and monitoring settings.
## Environment Types
The boilerplate supports three environment types:
- **`local`** - Development environment with full debugging
- **`staging`** - Pre-production testing environment
- **`production`** - Production environment with security hardening
Set the environment type with:
```env
ENVIRONMENT="local" # or "staging" or "production"
```
## Development Environment
### Local Development Settings
Create `src/.env.development`:
```env
# ------------- environment -------------
ENVIRONMENT="local"
DEBUG=true
# ------------- app settings -------------
APP_NAME="MyApp (Development)"
APP_VERSION="0.1.0-dev"
# ------------- database -------------
POSTGRES_USER="dev_user"
POSTGRES_PASSWORD="dev_password"
POSTGRES_SERVER="localhost"
POSTGRES_PORT=5432
POSTGRES_DB="myapp_dev"
# ------------- crypt -------------
SECRET_KEY="dev-secret-key-not-for-production-use"
ALGORITHM="HS256"
ACCESS_TOKEN_EXPIRE_MINUTES=60 # Longer for development
REFRESH_TOKEN_EXPIRE_DAYS=30 # Longer for development
# ------------- redis -------------
REDIS_CACHE_HOST="localhost"
REDIS_CACHE_PORT=6379
REDIS_QUEUE_HOST="localhost"
REDIS_QUEUE_PORT=6379
REDIS_RATE_LIMIT_HOST="localhost"
REDIS_RATE_LIMIT_PORT=6379
# ------------- caching -------------
CLIENT_CACHE_MAX_AGE=0 # Disable caching for development
# ------------- rate limiting -------------
DEFAULT_RATE_LIMIT_LIMIT=1000 # Higher limits for development
DEFAULT_RATE_LIMIT_PERIOD=3600
# ------------- admin -------------
ADMIN_NAME="Dev Admin"
ADMIN_EMAIL="admin@localhost"
ADMIN_USERNAME="admin"
ADMIN_PASSWORD="admin123"
# ------------- tier -------------
TIER_NAME="dev_tier"
# ------------- logging -------------
DATABASE_ECHO=true # Log all SQL queries
```
### Development Features
```python
# Development-specific features
if settings.ENVIRONMENT == "local":
    # Enable detailed error pages
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["*"],  # Allow all origins in development
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

    # Enable API documentation
    app.openapi_url = "/openapi.json"
    app.docs_url = "/docs"
    app.redoc_url = "/redoc"
```
### Docker Development Override
`docker-compose.override.yml`:
```yaml
version: '3.8'

services:
  web:
    environment:
      - ENVIRONMENT=local
      - DEBUG=true
      - DATABASE_ECHO=true
    volumes:
      - ./src:/code/src:cached
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
    ports:
      - "8000:8000"

  db:
    environment:
      - POSTGRES_DB=myapp_dev
    ports:
      - "5432:5432"

  redis:
    ports:
      - "6379:6379"

  # Development tools
  adminer:
    image: adminer
    ports:
      - "8080:8080"
    depends_on:
      - db
```
## Staging Environment
### Staging Settings
Create `src/.env.staging`:
```env
# ------------- environment -------------
ENVIRONMENT="staging"
DEBUG=false
# ------------- app settings -------------
APP_NAME="MyApp (Staging)"
APP_VERSION="0.1.0-staging"
# ------------- database -------------
POSTGRES_USER="staging_user"
POSTGRES_PASSWORD="complex_staging_password_123!"
POSTGRES_SERVER="staging-db.example.com"
POSTGRES_PORT=5432
POSTGRES_DB="myapp_staging"
# ------------- crypt -------------
SECRET_KEY="staging-secret-key-different-from-production"
ALGORITHM="HS256"
ACCESS_TOKEN_EXPIRE_MINUTES=30
REFRESH_TOKEN_EXPIRE_DAYS=7
# ------------- redis -------------
REDIS_CACHE_HOST="staging-redis.example.com"
REDIS_CACHE_PORT=6379
REDIS_QUEUE_HOST="staging-redis.example.com"
REDIS_QUEUE_PORT=6379
REDIS_RATE_LIMIT_HOST="staging-redis.example.com"
REDIS_RATE_LIMIT_PORT=6379
# ------------- caching -------------
CLIENT_CACHE_MAX_AGE=300 # 5 minutes
# ------------- rate limiting -------------
DEFAULT_RATE_LIMIT_LIMIT=100
DEFAULT_RATE_LIMIT_PERIOD=3600
# ------------- admin -------------
ADMIN_NAME="Staging Admin"
ADMIN_EMAIL="admin@staging.example.com"
ADMIN_USERNAME="staging_admin"
ADMIN_PASSWORD="secure_staging_password_456!"
# ------------- tier -------------
TIER_NAME="staging_tier"
# ------------- logging -------------
DATABASE_ECHO=false
```
### Staging Features
```python
# Staging-specific features
if settings.ENVIRONMENT == "staging":
    # Restricted CORS
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["https://staging.example.com"],
        allow_credentials=True,
        allow_methods=["GET", "POST", "PUT", "DELETE"],
        allow_headers=["*"],
    )

    # API docs available to superusers only
    @app.get("/docs", include_in_schema=False)
    async def custom_swagger_ui(current_user: User = Depends(get_current_superuser)):
        return get_swagger_ui_html(openapi_url="/openapi.json", title="API docs")
```
### Docker Staging Configuration
`docker-compose.staging.yml`:
```yaml
version: '3.8'

services:
  web:
    environment:
      - ENVIRONMENT=staging
      - DEBUG=false
    deploy:
      replicas: 2
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M
    restart: always

  db:
    environment:
      - POSTGRES_DB=myapp_staging
    volumes:
      - postgres_staging_data:/var/lib/postgresql/data
    restart: always

  redis:
    restart: always

  worker:
    deploy:
      replicas: 2
    restart: always

volumes:
  postgres_staging_data:
```
## Production Environment
### Production Settings
Create `src/.env.production`:
```env
# ------------- environment -------------
ENVIRONMENT="production"
DEBUG=false
# ------------- app settings -------------
APP_NAME="MyApp"
APP_VERSION="1.0.0"
CONTACT_NAME="Support Team"
CONTACT_EMAIL="support@example.com"
# ------------- database -------------
POSTGRES_USER="prod_user"
POSTGRES_PASSWORD="ultra_secure_production_password_789!"
POSTGRES_SERVER="prod-db.example.com"
POSTGRES_PORT=5433 # Custom port for security
POSTGRES_DB="myapp_production"
# ------------- crypt -------------
SECRET_KEY="ultra-secure-production-key-generated-with-openssl-rand-hex-32"
ALGORITHM="HS256"
ACCESS_TOKEN_EXPIRE_MINUTES=15 # Shorter for security
REFRESH_TOKEN_EXPIRE_DAYS=3 # Shorter for security
# ------------- redis -------------
REDIS_CACHE_HOST="prod-redis.example.com"
REDIS_CACHE_PORT=6380 # Custom port for security
REDIS_QUEUE_HOST="prod-redis.example.com"
REDIS_QUEUE_PORT=6380
REDIS_RATE_LIMIT_HOST="prod-redis.example.com"
REDIS_RATE_LIMIT_PORT=6380
# ------------- caching -------------
CLIENT_CACHE_MAX_AGE=3600 # 1 hour
# ------------- rate limiting -------------
DEFAULT_RATE_LIMIT_LIMIT=100
DEFAULT_RATE_LIMIT_PERIOD=3600
# ------------- admin -------------
ADMIN_NAME="System Administrator"
ADMIN_EMAIL="admin@example.com"
ADMIN_USERNAME="sysadmin"
ADMIN_PASSWORD="extremely_secure_admin_password_with_symbols_#$%!"
# ------------- tier -------------
TIER_NAME="production_tier"
# ------------- logging -------------
DATABASE_ECHO=false
```
### Production Security Features
```python
# Production-specific features
if settings.ENVIRONMENT == "production":
    # Strict CORS
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["https://example.com", "https://www.example.com"],
        allow_credentials=True,
        allow_methods=["GET", "POST", "PUT", "DELETE"],
        allow_headers=["Authorization", "Content-Type"],
    )

    # Disable API documentation
    app.openapi_url = None
    app.docs_url = None
    app.redoc_url = None

    # Add security headers
    @app.middleware("http")
    async def add_security_headers(request: Request, call_next):
        response = await call_next(request)
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["X-Frame-Options"] = "DENY"
        response.headers["X-XSS-Protection"] = "1; mode=block"
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response
```
### Docker Production Configuration
`docker-compose.prod.yml`:
```yaml
version: '3.8'

services:
  web:
    environment:
      - ENVIRONMENT=production
      - DEBUG=false
    deploy:
      replicas: 3
      resources:
        limits:
          memory: 2G
          cpus: '1'
        reservations:
          memory: 1G
          cpus: '0.5'
    restart: always
    ports: []  # No direct exposure

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/ssl:/etc/nginx/ssl
      - ./nginx/htpasswd:/etc/nginx/htpasswd
    depends_on:
      - web
    restart: always

  db:
    environment:
      - POSTGRES_DB=myapp_production
    volumes:
      - postgres_prod_data:/var/lib/postgresql/data
    ports: []  # No external access
    deploy:
      resources:
        limits:
          memory: 4G
        reservations:
          memory: 2G
    restart: always

  redis:
    volumes:
      - redis_prod_data:/data
    ports: []  # No external access
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M
    restart: always

  worker:
    deploy:
      replicas: 2
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M
    restart: always

volumes:
  postgres_prod_data:
  redis_prod_data:
```
## Environment Detection
### Runtime Environment Checks
```python
# src/app/core/config.py
class Settings(BaseSettings):
    @computed_field
    @property
    def IS_DEVELOPMENT(self) -> bool:
        return self.ENVIRONMENT == "local"

    @computed_field
    @property
    def IS_PRODUCTION(self) -> bool:
        return self.ENVIRONMENT == "production"

    @computed_field
    @property
    def IS_STAGING(self) -> bool:
        return self.ENVIRONMENT == "staging"

# Use in application
if settings.IS_DEVELOPMENT:
    # Development-only code
    pass

if settings.IS_PRODUCTION:
    # Production-only code
    pass
```
### Environment-Specific Validation
```python
@model_validator(mode="after")
def validate_environment_config(self) -> "Settings":
    if self.ENVIRONMENT == "production":
        # Production validation
        if self.DEBUG:
            raise ValueError("DEBUG must be False in production")
        if len(self.SECRET_KEY) < 32:
            raise ValueError("SECRET_KEY must be at least 32 characters in production")
        if "dev" in self.SECRET_KEY.lower():
            raise ValueError("Production SECRET_KEY cannot contain 'dev'")

    if self.ENVIRONMENT == "local":
        # Development warnings
        if not self.DEBUG:
            logger.warning("DEBUG is False in development environment")

    return self
```
## Configuration Management
### Environment File Templates
Create template files for each environment:
```bash
# Create environment templates
cp src/.env.example src/.env.development
cp src/.env.example src/.env.staging
cp src/.env.example src/.env.production
# Use environment-specific files
ln -sf .env.development src/.env # For development
ln -sf .env.staging src/.env # For staging
ln -sf .env.production src/.env # For production
```
### Configuration Validation
```python
# src/scripts/validate_config.py
import asyncio

from src.app.core.config import settings
from src.app.core.db.database import async_get_db

async def validate_configuration():
    """Validate configuration for the current environment."""
    print(f"Validating configuration for {settings.ENVIRONMENT} environment...")

    # Basic settings validation
    assert settings.APP_NAME, "APP_NAME is required"
    assert settings.SECRET_KEY, "SECRET_KEY is required"
    assert len(settings.SECRET_KEY) >= 32, "SECRET_KEY must be at least 32 characters"

    # Environment-specific validation
    if settings.ENVIRONMENT == "production":
        assert not settings.DEBUG, "DEBUG must be False in production"
        assert "dev" not in settings.SECRET_KEY.lower(), "Production SECRET_KEY invalid"
        assert settings.POSTGRES_PORT != 5432, "Use a custom PostgreSQL port in production"

    # Test database connection
    try:
        db = await anext(async_get_db())
        print("✓ Database connection successful")
        await db.close()
    except Exception as e:
        print(f"✗ Database connection failed: {e}")
        return False

    print("✓ Configuration validation passed")
    return True

if __name__ == "__main__":
    asyncio.run(validate_configuration())
```
### Environment Switching
```bash
#!/bin/bash
# scripts/switch_env.sh

ENV=$1

if [ -z "$ENV" ]; then
    echo "Usage: $0 <development|staging|production>"
    exit 1
fi

case $ENV in
    development)
        ln -sf .env.development src/.env
        echo "Switched to development environment"
        ;;
    staging)
        ln -sf .env.staging src/.env
        echo "Switched to staging environment"
        ;;
    production)
        ln -sf .env.production src/.env
        echo "Switched to production environment"
        echo "WARNING: Make sure to review all settings before deployment!"
        ;;
    *)
        echo "Invalid environment: $ENV"
        echo "Valid options: development, staging, production"
        exit 1
        ;;
esac

# Validate configuration
python -c "from src.app.core.config import settings; print(f'Current environment: {settings.ENVIRONMENT}')"
```
## Security Best Practices
### Environment-Specific Security
```python
# Different security levels per environment
SECURITY_CONFIGS = {
    "local": {
        "token_expire_minutes": 60,
        "enable_cors_origins": ["*"],
        "enable_docs": True,
        "log_level": "DEBUG",
    },
    "staging": {
        "token_expire_minutes": 30,
        "enable_cors_origins": ["https://staging.example.com"],
        "enable_docs": True,  # For testing
        "log_level": "INFO",
    },
    "production": {
        "token_expire_minutes": 15,
        "enable_cors_origins": ["https://example.com"],
        "enable_docs": False,
        "log_level": "WARNING",
    },
}

config = SECURITY_CONFIGS[settings.ENVIRONMENT]
```
### Secrets Management
```bash
# Use secrets management in production
# instead of plain-text environment variables
POSTGRES_PASSWORD_FILE="/run/secrets/postgres_password"
SECRET_KEY_FILE="/run/secrets/jwt_secret"
```

```yaml
# Docker secrets
services:
  web:
    secrets:
      - postgres_password
      - jwt_secret
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
      - SECRET_KEY_FILE=/run/secrets/jwt_secret

secrets:
  postgres_password:
    external: true
  jwt_secret:
    external: true
```
## Monitoring and Logging
### Environment-Specific Logging
```python
LOGGING_CONFIG = {
    "local": {
        "level": "DEBUG",
        "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        "handlers": ["console"],
    },
    "staging": {
        "level": "INFO",
        "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        "handlers": ["console", "file"],
    },
    "production": {
        "level": "WARNING",
        "format": "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s",
        "handlers": ["file", "syslog"],
    },
}
```
### Health Checks by Environment
```python
@app.get("/health")
async def health_check():
    health_info = {
        "status": "healthy",
        "environment": settings.ENVIRONMENT,
        "version": settings.APP_VERSION,
    }

    # Add detailed info in non-production
    if not settings.IS_PRODUCTION:
        health_info.update({
            "database": await check_database_health(),
            "redis": await check_redis_health(),
            "worker_queue": await check_worker_health(),
        })

    return health_info
```
## Best Practices
### Security
- Use different secret keys for each environment
- Disable debug mode in staging and production
- Use custom ports in production
- Implement proper CORS policies
- Remove API documentation in production
### Performance
- Configure appropriate resource limits per environment
- Use caching in staging and production
- Set shorter token expiration in production
- Use connection pooling in production
### Configuration
- Keep environment files in version control (except production)
- Use validation to prevent misconfiguration
- Document all environment-specific settings
- Test configuration changes in staging first
### Monitoring
- Use appropriate log levels per environment
- Monitor different metrics in each environment
- Set up alerts for production only
- Use health checks for all environments
Environment-specific configuration ensures your application runs securely and efficiently in each deployment stage. Start with development settings and progressively harden for production!

# Configuration Guide
This guide covers all configuration options available in the FastAPI Boilerplate, including environment variables, settings classes, and advanced deployment configurations.
## Configuration Overview
The boilerplate uses a layered configuration approach:
- **Environment Variables** (`.env` file) - Primary configuration method
- **Settings Classes** (`src/app/core/config.py`) - Python-based configuration
- **Docker Configuration** (`docker-compose.yml`) - Container orchestration
- **Database Configuration** (`alembic.ini`) - Database migrations
## Environment Variables Reference
All configuration is managed through environment variables defined in the `.env` file located in the `src/` directory.
### Application Settings
Basic application metadata displayed in API documentation:
```env
# ------------- app settings -------------
APP_NAME="Your App Name"
APP_DESCRIPTION="Your app description here"
APP_VERSION="0.1.0"
CONTACT_NAME="Your Name"
CONTACT_EMAIL="your.email@example.com"
LICENSE_NAME="MIT"
```
**Variables Explained:**
- `APP_NAME`: Displayed in API documentation and responses
- `APP_DESCRIPTION`: Shown in OpenAPI documentation
- `APP_VERSION`: API version for documentation and headers
- `CONTACT_NAME`: Contact information for API documentation
- `CONTACT_EMAIL`: Support email for API users
- `LICENSE_NAME`: License type for the API
### Database Configuration
PostgreSQL database connection settings:
```env
# ------------- database -------------
POSTGRES_USER="your_postgres_user"
POSTGRES_PASSWORD="your_secure_password"
POSTGRES_SERVER="localhost"
POSTGRES_PORT=5432
POSTGRES_DB="your_database_name"
```
**Variables Explained:**
- `POSTGRES_USER`: Database user with appropriate permissions
- `POSTGRES_PASSWORD`: Strong password for database access
- `POSTGRES_SERVER`: Hostname or IP of PostgreSQL server
- `POSTGRES_PORT`: PostgreSQL port (default: 5432)
- `POSTGRES_DB`: Name of the database to connect to
**Environment-Specific Values:**
```env
# Local development
POSTGRES_SERVER="localhost"
# Docker Compose
POSTGRES_SERVER="db"
# Production
POSTGRES_SERVER="your-prod-db-host.com"
```
### Security & Authentication
JWT and password security configuration:
```env
# ------------- crypt -------------
SECRET_KEY="your-super-secret-key-here"
ALGORITHM="HS256"
ACCESS_TOKEN_EXPIRE_MINUTES=30
REFRESH_TOKEN_EXPIRE_DAYS=7
```
**Variables Explained:**
- `SECRET_KEY`: Used for JWT token signing (generate with `openssl rand -hex 32`)
- `ALGORITHM`: JWT signing algorithm (HS256 recommended)
- `ACCESS_TOKEN_EXPIRE_MINUTES`: How long access tokens remain valid
- `REFRESH_TOKEN_EXPIRE_DAYS`: How long refresh tokens remain valid
!!! danger "Security Warning"
    Never use default values in production. Generate a strong secret key:

    ```bash
    openssl rand -hex 32
    ```
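For intuition, HS256 signing is simply an HMAC-SHA256 over the base64url-encoded header and payload using `SECRET_KEY`. A self-contained educational sketch follows (hand-rolled for illustration only; real code should use a JWT library):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> bytes:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_token(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signature = hmac.new(secret.encode(), header + b"." + body, hashlib.sha256).digest()
    return b".".join([header, body, b64url(signature)]).decode()

# ACCESS_TOKEN_EXPIRE_MINUTES=30 becomes an "exp" claim 30 minutes ahead
token = sign_token({"sub": "user", "exp": int(time.time()) + 30 * 60}, "change-me")
print(token.count("."))  # 2: header.payload.signature
```

Note that anyone holding `SECRET_KEY` can forge valid tokens, which is why it must differ per environment and never appear in version control.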
### Redis Configuration
Redis is used for caching, job queues, and rate limiting:
```env
# ------------- redis cache -------------
REDIS_CACHE_HOST="localhost" # Use "redis" for Docker Compose
REDIS_CACHE_PORT=6379
# ------------- redis queue -------------
REDIS_QUEUE_HOST="localhost" # Use "redis" for Docker Compose
REDIS_QUEUE_PORT=6379
# ------------- redis rate limit -------------
REDIS_RATE_LIMIT_HOST="localhost" # Use "redis" for Docker Compose
REDIS_RATE_LIMIT_PORT=6379
```
**Best Practices:**
- **Development**: Use the same Redis instance for all services
- **Production**: Use separate Redis instances for better isolation
```env
# Production example with separate instances
REDIS_CACHE_HOST="cache.redis.example.com"
REDIS_QUEUE_HOST="queue.redis.example.com"
REDIS_RATE_LIMIT_HOST="ratelimit.redis.example.com"
```
### Caching Settings
Client-side and server-side caching configuration:
```env
# ------------- redis client-side cache -------------
CLIENT_CACHE_MAX_AGE=30 # seconds
```
**Variables Explained:**
- `CLIENT_CACHE_MAX_AGE`: How long browsers should cache responses
### Rate Limiting
Default rate limiting configuration:
```env
# ------------- default rate limit settings -------------
DEFAULT_RATE_LIMIT_LIMIT=10 # requests per period
DEFAULT_RATE_LIMIT_PERIOD=3600 # period in seconds (1 hour)
```
**Variables Explained:**
- `DEFAULT_RATE_LIMIT_LIMIT`: Number of requests allowed per period
- `DEFAULT_RATE_LIMIT_PERIOD`: Time window in seconds
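Conceptually this is a fixed-window counter. An in-memory sketch (the boilerplate keeps these counters in Redis via `REDIS_RATE_LIMIT_HOST`; the class and method names here are illustrative):

```python
import time

# Illustrative stand-in for the Redis-backed limiter: a fixed-window counter.
class FixedWindowLimiter:
    def __init__(self, limit: int, period: int) -> None:
        self.limit, self.period = limit, period
        self.windows: dict = {}  # (key, window index) -> request count

    def allow(self, key: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        window = (key, int(now // self.period))
        count = self.windows.get(window, 0)
        if count >= self.limit:
            return False  # limit reached for this window
        self.windows[window] = count + 1
        return True

limiter = FixedWindowLimiter(limit=10, period=3600)
print(all(limiter.allow("user:1", now=0) for _ in range(10)))  # True
print(limiter.allow("user:1", now=0))     # False: 11th request denied
print(limiter.allow("user:1", now=3600))  # True: new window
```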
### Admin User
First superuser account configuration:
```env
# ------------- admin -------------
ADMIN_NAME="Admin User"
ADMIN_EMAIL="admin@example.com"
ADMIN_USERNAME="admin"
ADMIN_PASSWORD="secure_admin_password"
```
**Variables Explained:**
- `ADMIN_NAME`: Display name for the admin user
- `ADMIN_EMAIL`: Email address for the admin account
- `ADMIN_USERNAME`: Username for admin login
- `ADMIN_PASSWORD`: Initial password (change after first login)
### User Tiers
Initial tier configuration:
```env
# ------------- first tier -------------
TIER_NAME="free"
```
**Variables Explained:**
- `TIER_NAME`: Name of the default user tier
### Environment Type
Controls API documentation visibility and behavior:
```env
# ------------- environment -------------
ENVIRONMENT="local" # local, staging, or production
```
**Environment Types:**
- **local**: Full API docs available publicly at `/docs`
- **staging**: API docs available to superusers only
- **production**: API docs completely disabled
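A sketch of the gating logic this implies (illustrative, not the boilerplate's exact code; in `staging` the docs routes are additionally guarded by a superuser dependency at the route level):

```python
def docs_urls(environment: str) -> dict:
    """Map the ENVIRONMENT setting to FastAPI documentation URLs."""
    if environment == "production":
        # Docs fully disabled
        return {"openapi_url": None, "docs_url": None, "redoc_url": None}
    # "local" and "staging" expose the docs routes
    return {
        "openapi_url": "/openapi.json",
        "docs_url": "/docs",
        "redoc_url": "/redoc",
    }

print(docs_urls("production")["docs_url"])  # None
```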
## Docker Compose Configuration
### Basic Setup
Docker Compose automatically loads the `.env` file:
```yaml
# In docker-compose.yml
services:
  web:
    env_file:
      - ./src/.env
```
### Development Overrides
Create `docker-compose.override.yml` for local customizations:
```yaml
version: '3.8'

services:
  web:
    ports:
      - "8001:8000"  # Use a different port
    environment:
      - DEBUG=true
    volumes:
      - ./custom-logs:/code/logs
```
### Service Configuration
Understanding each Docker service:
```yaml
services:
web: # FastAPI application
db: # PostgreSQL database
redis: # Redis for caching/queues
worker: # ARQ background task worker
nginx: # Reverse proxy (optional)
```
## Python Settings Classes
Advanced configuration is handled in `src/app/core/config.py`:
### Settings Composition
The main `Settings` class inherits from multiple setting groups:
```python
class Settings(
AppSettings,
PostgresSettings,
CryptSettings,
FirstUserSettings,
RedisCacheSettings,
ClientSideCacheSettings,
RedisQueueSettings,
RedisRateLimiterSettings,
DefaultRateLimitSettings,
EnvironmentSettings,
):
pass
```
### Adding Custom Settings
Create your own settings group:
```python
class CustomSettings(BaseSettings):
CUSTOM_API_KEY: str = ""
CUSTOM_TIMEOUT: int = 30
ENABLE_FEATURE_X: bool = False
# Add to main Settings class
class Settings(
AppSettings,
# ... other settings ...
CustomSettings,
):
pass
```
### Opting Out of Services
Remove unused services by excluding their settings:
```python
# Minimal setup without Redis services
class Settings(
AppSettings,
PostgresSettings,
CryptSettings,
FirstUserSettings,
# Removed: RedisCacheSettings
# Removed: RedisQueueSettings
# Removed: RedisRateLimiterSettings
EnvironmentSettings,
):
pass
```
## Database Configuration
### Alembic Configuration
Database migrations are configured in `src/alembic.ini`:
```ini
[alembic]
script_location = migrations
sqlalchemy.url = postgresql://%(POSTGRES_USER)s:%(POSTGRES_PASSWORD)s@%(POSTGRES_SERVER)s:%(POSTGRES_PORT)s/%(POSTGRES_DB)s
```
### Connection Pooling
SQLAlchemy connection pool settings in `src/app/core/db/database.py`:
```python
engine = create_async_engine(
DATABASE_URL,
pool_size=20, # Number of connections to maintain
max_overflow=30, # Additional connections allowed
pool_timeout=30, # Seconds to wait for connection
pool_recycle=1800, # Seconds before connection refresh
)
```
### Database Best Practices
**Connection Pool Sizing:**
- Start with `pool_size=20`, `max_overflow=30`
- Monitor connection usage and adjust based on load
- Use connection pooling monitoring tools
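When sizing the pool, remember that every application process opens its own pool, so the database must accept up to `(pool_size + max_overflow) × workers` connections:

```python
def peak_db_connections(pool_size: int, max_overflow: int, workers: int) -> int:
    """Worst-case simultaneous connections the database must accept."""
    return (pool_size + max_overflow) * workers

# The defaults above with 4 gunicorn workers:
print(peak_db_connections(20, 30, 4))  # 200 -- keep below Postgres max_connections
```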
**Migration Strategy:**
- Always backup database before running migrations
- Test migrations on staging environment first
- Use `alembic revision --autogenerate` for model changes
## Security Configuration
### JWT Token Configuration
Customize JWT behavior in `src/app/core/security.py`:
```python
def create_access_token(data: dict, expires_delta: timedelta | None = None):
    to_encode = data.copy()
    if expires_delta:
        expire = datetime.utcnow() + expires_delta
    else:
        expire = datetime.utcnow() + timedelta(
            minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES
        )
    to_encode.update({"exp": expire})
    return jwt.encode(to_encode, settings.SECRET_KEY, algorithm=settings.ALGORITHM)
### CORS Configuration
Configure Cross-Origin Resource Sharing in `src/app/main.py`:
```python
app.add_middleware(
CORSMiddleware,
allow_origins=["http://localhost:3000"], # Specify allowed origins
allow_credentials=True,
allow_methods=["GET", "POST"], # Specify allowed methods
allow_headers=["*"],
)
```
**Production CORS Settings:**
```python
# Never use wildcard (*) in production
allow_origins=[
"https://yourapp.com",
"https://www.yourapp.com"
],
```
### Security Headers
Add security headers middleware:
```python
from starlette.middleware.base import BaseHTTPMiddleware
class SecurityHeadersMiddleware(BaseHTTPMiddleware):
async def dispatch(self, request, call_next):
response = await call_next(request)
response.headers["X-Frame-Options"] = "DENY"
response.headers["X-Content-Type-Options"] = "nosniff"
response.headers["X-XSS-Protection"] = "1; mode=block"
return response
```
## Logging Configuration
### Basic Logging Setup
Configure logging in `src/app/core/logger.py`:
```python
import logging
from logging.handlers import RotatingFileHandler
# Set log level
LOGGING_LEVEL = logging.INFO
# Configure file rotation
file_handler = RotatingFileHandler(
    'logs/app.log',
    maxBytes=10485760,  # 10MB
    backupCount=5       # Keep 5 backup files
)
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)
# Attach the handler so log records actually reach the file
logging.basicConfig(level=LOGGING_LEVEL, handlers=[file_handler])
```
### Structured Logging
Use structured logging for better observability:
```python
import structlog
structlog.configure(
processors=[
structlog.stdlib.filter_by_level,
structlog.stdlib.add_logger_name,
structlog.stdlib.add_log_level,
structlog.processors.JSONRenderer()
],
logger_factory=structlog.stdlib.LoggerFactory(),
)
```
### Log Levels by Environment
```python
# Environment-specific log levels
LOG_LEVELS = {
"local": logging.DEBUG,
"staging": logging.INFO,
"production": logging.WARNING
}
LOGGING_LEVEL = LOG_LEVELS.get(settings.ENVIRONMENT, logging.INFO)
```
## Environment-Specific Configurations
### Development (.env.development)
```env
ENVIRONMENT="local"
POSTGRES_SERVER="localhost"
REDIS_CACHE_HOST="localhost"
SECRET_KEY="dev-secret-key-not-for-production"
ACCESS_TOKEN_EXPIRE_MINUTES=60 # Longer for development
DEBUG=true
```
### Staging (.env.staging)
```env
ENVIRONMENT="staging"
POSTGRES_SERVER="staging-db.example.com"
REDIS_CACHE_HOST="staging-redis.example.com"
SECRET_KEY="staging-secret-key-different-from-prod"
ACCESS_TOKEN_EXPIRE_MINUTES=30
DEBUG=false
```
### Production (.env.production)
```env
ENVIRONMENT="production"
POSTGRES_SERVER="prod-db.example.com"
REDIS_CACHE_HOST="prod-redis.example.com"
SECRET_KEY="ultra-secure-production-key-generated-with-openssl"
ACCESS_TOKEN_EXPIRE_MINUTES=15
DEBUG=false
REDIS_CACHE_PORT=6380 # Custom port for security
POSTGRES_PORT=5433 # Custom port for security
```
## Advanced Configuration
### Custom Middleware
Add custom middleware in `src/app/core/setup.py`:
```python
def create_application(router, settings, **kwargs):
app = FastAPI(...)
# Add custom middleware
app.add_middleware(CustomMiddleware, setting=value)
app.add_middleware(TimingMiddleware)
app.add_middleware(RequestIDMiddleware)
return app
```
### Feature Toggles
Implement feature flags:
```python
class FeatureSettings(BaseSettings):
ENABLE_ADVANCED_CACHING: bool = False
ENABLE_ANALYTICS: bool = True
ENABLE_EXPERIMENTAL_FEATURES: bool = False
ENABLE_API_VERSIONING: bool = True
# Use in endpoints
if settings.ENABLE_ADVANCED_CACHING:
# Advanced caching logic
pass
```
### Health Checks
Configure health check endpoints:
```python
@app.get("/health")
async def health_check():
return {
"status": "healthy",
"database": await check_database_health(),
"redis": await check_redis_health(),
"version": settings.APP_VERSION
}
```
## Configuration Validation
### Environment Validation
Add validation to prevent misconfiguration:
```python
def validate_settings():
if not settings.SECRET_KEY:
raise ValueError("SECRET_KEY must be set")
if settings.ENVIRONMENT == "production":
if settings.SECRET_KEY == "dev-secret-key":
raise ValueError("Production must use secure SECRET_KEY")
if settings.DEBUG:
raise ValueError("DEBUG must be False in production")
```
### Runtime Checks
Add validation to application startup:
```python
@app.on_event("startup")
async def startup_event():
validate_settings()
await check_database_connection()
await check_redis_connection()
logger.info(f"Application started in {settings.ENVIRONMENT} mode")
```
## Configuration Troubleshooting
### Common Issues
**Environment Variables Not Loading:**
```bash
# Check file location and permissions
ls -la src/.env
# Check file format (no spaces around =)
cat src/.env | grep "=" | head -5
# Verify environment loading in Python
python -c "from src.app.core.config import settings; print(settings.APP_NAME)"
```
**Database Connection Failed:**
```bash
# Test connection manually
psql -h localhost -U postgres -d myapp
# Check if PostgreSQL is running
systemctl status postgresql
# or on macOS
brew services list | grep postgresql
```
**Redis Connection Failed:**
```bash
# Test Redis connection
redis-cli -h localhost -p 6379 ping
# Check Redis status
systemctl status redis
# or on macOS
brew services list | grep redis
```
### Configuration Testing
Test your configuration with a simple script:
```python
# test_config.py
import asyncio
from src.app.core.config import settings
from src.app.core.db.database import async_get_db
async def test_config():
print(f"App: {settings.APP_NAME}")
print(f"Environment: {settings.ENVIRONMENT}")
# Test database
try:
db = await anext(async_get_db())
print("✓ Database connection successful")
await db.close()
except Exception as e:
print(f"✗ Database connection failed: {e}")
# Test Redis (if enabled)
try:
from src.app.core.utils.cache import redis_client
await redis_client.ping()
print("✓ Redis connection successful")
except Exception as e:
print(f"✗ Redis connection failed: {e}")
if __name__ == "__main__":
asyncio.run(test_config())
```
Run with:
```bash
uv run python test_config.py
```

# Configuration
Learn how to configure your FastAPI Boilerplate application for different environments and use cases. Everything is configured through environment variables and Python settings classes.
## What You'll Learn
- **[Environment Variables](environment-variables.md)** - Configure through `.env` files
- **[Settings Classes](settings-classes.md)** - Python-based configuration management
- **[Docker Setup](docker-setup.md)** - Container and service configuration
- **[Environment-Specific](environment-specific.md)** - Development, staging, and production configs
## Quick Start
The boilerplate uses environment variables as the primary configuration method:
```bash
# Copy the example file
cp src/.env.example src/.env
# Edit with your values
nano src/.env
```
Essential variables to set:
```env
# Application
APP_NAME="My FastAPI App"
SECRET_KEY="your-super-secret-key-here"
# Database
POSTGRES_USER="your_user"
POSTGRES_PASSWORD="your_password"
POSTGRES_DB="your_database"
# Admin Account
ADMIN_EMAIL="admin@example.com"
ADMIN_PASSWORD="secure_password"
```
## Configuration Architecture
The configuration system has three layers:
```
Environment Variables (.env files)
            ↓
Settings Classes (Python validation)
            ↓
Application Configuration (Runtime)
```
### Layer 1: Environment Variables
Primary configuration through `.env` files:
```env
POSTGRES_USER="myuser"
POSTGRES_PASSWORD="mypassword"
REDIS_CACHE_HOST="localhost"
SECRET_KEY="your-secret-key"
```
### Layer 2: Settings Classes
Python classes that validate and structure configuration:
```python
class PostgresSettings(BaseSettings):
POSTGRES_USER: str
POSTGRES_PASSWORD: str = Field(min_length=8)
POSTGRES_SERVER: str = "localhost"
POSTGRES_PORT: int = 5432
POSTGRES_DB: str
```
### Layer 3: Application Use
Configuration injected throughout the application:
```python
from app.core.config import settings
# Use anywhere in your code
DATABASE_URL = f"postgresql+asyncpg://{settings.POSTGRES_USER}:{settings.POSTGRES_PASSWORD}@{settings.POSTGRES_SERVER}:{settings.POSTGRES_PORT}/{settings.POSTGRES_DB}"
```
## Key Configuration Areas
### Security Settings
```env
SECRET_KEY="your-super-secret-key-here"
ALGORITHM="HS256"
ACCESS_TOKEN_EXPIRE_MINUTES=30
REFRESH_TOKEN_EXPIRE_DAYS=7
```
### Database Configuration
```env
POSTGRES_USER="your_user"
POSTGRES_PASSWORD="your_password"
POSTGRES_SERVER="localhost"
POSTGRES_PORT=5432
POSTGRES_DB="your_database"
```
### Redis Services
```env
# Cache
REDIS_CACHE_HOST="localhost"
REDIS_CACHE_PORT=6379
# Background jobs
REDIS_QUEUE_HOST="localhost"
REDIS_QUEUE_PORT=6379
# Rate limiting
REDIS_RATE_LIMIT_HOST="localhost"
REDIS_RATE_LIMIT_PORT=6379
```
### Application Settings
```env
APP_NAME="Your App Name"
APP_VERSION="1.0.0"
ENVIRONMENT="local" # local, staging, production
DEBUG=true
```
### Rate Limiting
```env
DEFAULT_RATE_LIMIT_LIMIT=100
DEFAULT_RATE_LIMIT_PERIOD=3600 # 1 hour in seconds
```
### Admin User
```env
ADMIN_NAME="Admin User"
ADMIN_EMAIL="admin@example.com"
ADMIN_USERNAME="admin"
ADMIN_PASSWORD="secure_password"
```
## Environment-Specific Configurations
### Development
```env
ENVIRONMENT="local"
DEBUG=true
POSTGRES_SERVER="localhost"
REDIS_CACHE_HOST="localhost"
ACCESS_TOKEN_EXPIRE_MINUTES=60 # Longer for development
```
### Staging
```env
ENVIRONMENT="staging"
DEBUG=false
POSTGRES_SERVER="staging-db.example.com"
REDIS_CACHE_HOST="staging-redis.example.com"
ACCESS_TOKEN_EXPIRE_MINUTES=30
```
### Production
```env
ENVIRONMENT="production"
DEBUG=false
POSTGRES_SERVER="prod-db.example.com"
REDIS_CACHE_HOST="prod-redis.example.com"
ACCESS_TOKEN_EXPIRE_MINUTES=15
# Use custom ports for security
POSTGRES_PORT=5433
REDIS_CACHE_PORT=6380
```
## Docker Configuration
### Basic Setup
Docker Compose automatically loads your `.env` file:
```yaml
services:
web:
env_file:
- ./src/.env
environment:
- DATABASE_URL=postgresql+asyncpg://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
```
### Service Overview
```yaml
services:
web: # FastAPI application
db: # PostgreSQL database
redis: # Redis for caching/queues
worker: # Background task worker
```
## Common Configuration Patterns
### Feature Flags
```python
# In settings class
class FeatureSettings(BaseSettings):
ENABLE_CACHING: bool = True
ENABLE_ANALYTICS: bool = False
ENABLE_BACKGROUND_JOBS: bool = True
# Use in code
if settings.ENABLE_CACHING:
cache_result = await get_from_cache(key)
```
### Environment Detection
```python
@app.get("/docs", include_in_schema=False)
async def custom_swagger_ui():
if settings.ENVIRONMENT == "production":
raise HTTPException(404, "Documentation not available")
return get_swagger_ui_html(openapi_url="/openapi.json")
```
### Health Checks
```python
@app.get("/health")
async def health_check():
return {
"status": "healthy",
"environment": settings.ENVIRONMENT,
"version": settings.APP_VERSION,
"database": await check_database_health(),
"redis": await check_redis_health()
}
```
## Quick Configuration Tasks
### Generate Secret Key
```bash
# Generate a secure secret key
openssl rand -hex 32
```
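If `openssl` isn't available, Python's standard library produces an equivalent key:

```python
import secrets

# 32 random bytes, hex-encoded -> 64-character key, same as `openssl rand -hex 32`
print(secrets.token_hex(32))
```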
### Test Configuration
```python
# test_config.py
from app.core.config import settings
print(f"App: {settings.APP_NAME}")
print(f"Environment: {settings.ENVIRONMENT}")
print(f"Database: {settings.POSTGRES_DB}")
```
### Environment File Templates
```bash
# Development
cp src/.env.example src/.env.development
# Staging
cp src/.env.example src/.env.staging
# Production
cp src/.env.example src/.env.production
```
## Best Practices
### Security
- Never commit `.env` files to version control
- Use different secret keys for each environment
- Disable debug mode in production
- Use secure passwords and keys
### Performance
- Configure appropriate connection pool sizes
- Set reasonable token expiration times
- Use Redis for caching in production
- Configure proper rate limits
### Maintenance
- Document all custom environment variables
- Use validation in settings classes
- Test configurations in staging first
- Monitor configuration changes
### Testing
- Use separate test environment variables
- Mock external services in tests
- Validate configuration on startup
- Test with different environment combinations
## Getting Started
Follow this path to configure your application:
### 1. **[Environment Variables](environment-variables.md)** - Start here
Learn about all available environment variables, their purposes, and recommended values for different environments.
### 2. **[Settings Classes](settings-classes.md)** - Validation layer
Understand how Python settings classes validate and structure your configuration with type hints and validation rules.
### 3. **[Docker Setup](docker-setup.md)** - Container configuration
Configure Docker Compose services, networking, and environment-specific overrides.
### 4. **[Environment-Specific](environment-specific.md)** - Deployment configs
Set up configuration for development, staging, and production environments with best practices.
## What's Next
Each guide provides practical examples and copy-paste configurations:
1. **[Environment Variables](environment-variables.md)** - Complete reference and examples
2. **[Settings Classes](settings-classes.md)** - Custom validation and organization
3. **[Docker Setup](docker-setup.md)** - Service configuration and overrides
4. **[Environment-Specific](environment-specific.md)** - Production-ready configurations
The boilerplate provides sensible defaults - just customize what you need!

# Settings Classes
Learn how Python settings classes validate, structure, and organize your application configuration. The boilerplate uses Pydantic's `BaseSettings` (provided by the `pydantic-settings` package in Pydantic v2) for type-safe configuration management.
## Settings Architecture
The main `Settings` class inherits from multiple specialized setting groups:
```python
# src/app/core/config.py
class Settings(
AppSettings,
PostgresSettings,
CryptSettings,
FirstUserSettings,
RedisCacheSettings,
ClientSideCacheSettings,
RedisQueueSettings,
RedisRateLimiterSettings,
DefaultRateLimitSettings,
EnvironmentSettings,
):
pass
# Single instance used throughout the app
settings = Settings()
```
## Built-in Settings Groups
### Application Settings
Basic app metadata and configuration:
```python
class AppSettings(BaseSettings):
APP_NAME: str = "FastAPI"
APP_DESCRIPTION: str = "A FastAPI project"
APP_VERSION: str = "0.1.0"
CONTACT_NAME: str = "Your Name"
CONTACT_EMAIL: str = "your.email@example.com"
LICENSE_NAME: str = "MIT"
```
### Database Settings
PostgreSQL connection configuration:
```python
class PostgresSettings(BaseSettings):
POSTGRES_USER: str
POSTGRES_PASSWORD: str
POSTGRES_SERVER: str = "localhost"
POSTGRES_PORT: int = 5432
POSTGRES_DB: str
@computed_field
@property
def DATABASE_URL(self) -> str:
return (
f"postgresql+asyncpg://{self.POSTGRES_USER}:"
f"{self.POSTGRES_PASSWORD}@{self.POSTGRES_SERVER}:"
f"{self.POSTGRES_PORT}/{self.POSTGRES_DB}"
)
```
### Security Settings
JWT and authentication configuration:
```python
class CryptSettings(BaseSettings):
SECRET_KEY: str
ALGORITHM: str = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES: int = 30
REFRESH_TOKEN_EXPIRE_DAYS: int = 7
@field_validator("SECRET_KEY")
@classmethod
def validate_secret_key(cls, v: str) -> str:
if len(v) < 32:
raise ValueError("SECRET_KEY must be at least 32 characters")
return v
```
### Redis Settings
Separate Redis instances for different services:
```python
class RedisCacheSettings(BaseSettings):
REDIS_CACHE_HOST: str = "localhost"
REDIS_CACHE_PORT: int = 6379
class RedisQueueSettings(BaseSettings):
REDIS_QUEUE_HOST: str = "localhost"
REDIS_QUEUE_PORT: int = 6379
class RedisRateLimiterSettings(BaseSettings):
REDIS_RATE_LIMIT_HOST: str = "localhost"
REDIS_RATE_LIMIT_PORT: int = 6379
```
### Rate Limiting Settings
Default rate limiting configuration:
```python
class DefaultRateLimitSettings(BaseSettings):
DEFAULT_RATE_LIMIT_LIMIT: int = 10
DEFAULT_RATE_LIMIT_PERIOD: int = 3600 # 1 hour
```
### Admin User Settings
First superuser account creation:
```python
class FirstUserSettings(BaseSettings):
ADMIN_NAME: str = "Admin"
ADMIN_EMAIL: str
ADMIN_USERNAME: str = "admin"
ADMIN_PASSWORD: str
@field_validator("ADMIN_EMAIL")
@classmethod
def validate_admin_email(cls, v: str) -> str:
if "@" not in v:
raise ValueError("ADMIN_EMAIL must be a valid email")
return v
```
## Creating Custom Settings
### Basic Custom Settings
Add your own settings group:
```python
class CustomSettings(BaseSettings):
CUSTOM_API_KEY: str = ""
CUSTOM_TIMEOUT: int = 30
ENABLE_FEATURE_X: bool = False
MAX_UPLOAD_SIZE: int = 10485760 # 10MB
@field_validator("MAX_UPLOAD_SIZE")
@classmethod
def validate_upload_size(cls, v: int) -> int:
if v < 1024: # 1KB minimum
raise ValueError("MAX_UPLOAD_SIZE must be at least 1KB")
if v > 104857600: # 100MB maximum
raise ValueError("MAX_UPLOAD_SIZE cannot exceed 100MB")
return v
# Add to main Settings class
class Settings(
AppSettings,
PostgresSettings,
# ... other settings ...
CustomSettings, # Add your custom settings
):
pass
```
### Advanced Custom Settings
Settings with complex validation and computed fields:
```python
class EmailSettings(BaseSettings):
SMTP_HOST: str = ""
SMTP_PORT: int = 587
SMTP_USERNAME: str = ""
SMTP_PASSWORD: str = ""
SMTP_USE_TLS: bool = True
EMAIL_FROM: str = ""
EMAIL_FROM_NAME: str = ""
@computed_field
@property
def EMAIL_ENABLED(self) -> bool:
return bool(self.SMTP_HOST and self.SMTP_USERNAME)
@model_validator(mode="after")
def validate_email_config(self) -> "EmailSettings":
if self.SMTP_HOST and not self.EMAIL_FROM:
raise ValueError("EMAIL_FROM required when SMTP_HOST is set")
if self.SMTP_USERNAME and not self.SMTP_PASSWORD:
raise ValueError("SMTP_PASSWORD required when SMTP_USERNAME is set")
return self
```
### Feature Flag Settings
Organize feature toggles:
```python
class FeatureSettings(BaseSettings):
# Core features
ENABLE_CACHING: bool = True
ENABLE_RATE_LIMITING: bool = True
ENABLE_BACKGROUND_JOBS: bool = True
# Optional features
ENABLE_ANALYTICS: bool = False
ENABLE_EMAIL_NOTIFICATIONS: bool = False
ENABLE_FILE_UPLOADS: bool = False
# Experimental features
ENABLE_EXPERIMENTAL_API: bool = False
ENABLE_BETA_FEATURES: bool = False
@model_validator(mode="after")
def validate_feature_dependencies(self) -> "FeatureSettings":
if self.ENABLE_EMAIL_NOTIFICATIONS and not self.ENABLE_BACKGROUND_JOBS:
raise ValueError("Email notifications require background jobs")
return self
```
## Settings Validation
### Field Validation
Validate individual fields:
```python
class DatabaseSettings(BaseSettings):
DB_POOL_SIZE: int = 20
DB_MAX_OVERFLOW: int = 30
DB_TIMEOUT: int = 30
@field_validator("DB_POOL_SIZE")
@classmethod
def validate_pool_size(cls, v: int) -> int:
if v < 1:
raise ValueError("Pool size must be at least 1")
if v > 100:
raise ValueError("Pool size should not exceed 100")
return v
@field_validator("DB_TIMEOUT")
@classmethod
def validate_timeout(cls, v: int) -> int:
if v < 5:
raise ValueError("Timeout must be at least 5 seconds")
return v
```
### Model Validation
Validate across multiple fields:
```python
class SecuritySettings(BaseSettings):
ENABLE_HTTPS: bool = False
SSL_CERT_PATH: str = ""
SSL_KEY_PATH: str = ""
FORCE_SSL: bool = False
@model_validator(mode="after")
def validate_ssl_config(self) -> "SecuritySettings":
if self.ENABLE_HTTPS:
if not self.SSL_CERT_PATH:
raise ValueError("SSL_CERT_PATH required when HTTPS enabled")
if not self.SSL_KEY_PATH:
raise ValueError("SSL_KEY_PATH required when HTTPS enabled")
if self.FORCE_SSL and not self.ENABLE_HTTPS:
raise ValueError("Cannot force SSL without enabling HTTPS")
return self
```
### Environment-Specific Validation
Different validation rules per environment:
```python
class EnvironmentSettings(BaseSettings):
ENVIRONMENT: str = "local"
DEBUG: bool = True
@model_validator(mode="after")
def validate_environment_config(self) -> "EnvironmentSettings":
if self.ENVIRONMENT == "production":
if self.DEBUG:
raise ValueError("DEBUG must be False in production")
if self.ENVIRONMENT not in ["local", "staging", "production"]:
raise ValueError("ENVIRONMENT must be local, staging, or production")
return self
```
## Computed Properties
### Dynamic Configuration
Create computed values from other settings:
```python
class StorageSettings(BaseSettings):
STORAGE_TYPE: str = "local" # local, s3, gcs
# Local storage
LOCAL_STORAGE_PATH: str = "./uploads"
# S3 settings
AWS_ACCESS_KEY_ID: str = ""
AWS_SECRET_ACCESS_KEY: str = ""
AWS_BUCKET_NAME: str = ""
AWS_REGION: str = "us-east-1"
@computed_field
@property
def STORAGE_ENABLED(self) -> bool:
if self.STORAGE_TYPE == "local":
return bool(self.LOCAL_STORAGE_PATH)
elif self.STORAGE_TYPE == "s3":
return bool(self.AWS_ACCESS_KEY_ID and self.AWS_SECRET_ACCESS_KEY and self.AWS_BUCKET_NAME)
return False
@computed_field
@property
def STORAGE_CONFIG(self) -> dict:
if self.STORAGE_TYPE == "local":
return {"path": self.LOCAL_STORAGE_PATH}
elif self.STORAGE_TYPE == "s3":
return {
"bucket": self.AWS_BUCKET_NAME,
"region": self.AWS_REGION,
"credentials": {
"access_key": self.AWS_ACCESS_KEY_ID,
"secret_key": self.AWS_SECRET_ACCESS_KEY,
}
}
return {}
```
## Organizing Settings
### Service-Based Organization
Group settings by service or domain:
```python
# Authentication service settings
class AuthSettings(BaseSettings):
JWT_SECRET_KEY: str
JWT_ALGORITHM: str = "HS256"
ACCESS_TOKEN_EXPIRE: int = 30
REFRESH_TOKEN_EXPIRE: int = 7200
PASSWORD_MIN_LENGTH: int = 8
# Notification service settings
class NotificationSettings(BaseSettings):
EMAIL_ENABLED: bool = False
SMS_ENABLED: bool = False
PUSH_ENABLED: bool = False
# Email settings
SMTP_HOST: str = ""
SMTP_PORT: int = 587
# SMS settings (example with Twilio)
TWILIO_ACCOUNT_SID: str = ""
TWILIO_AUTH_TOKEN: str = ""
# Main settings
class Settings(
AppSettings,
AuthSettings,
NotificationSettings,
# ... other settings
):
pass
```
### Conditional Settings Loading
Load different settings based on environment:
```python
import os

class BaseAppSettings(BaseSettings):
APP_NAME: str = "FastAPI App"
DEBUG: bool = False
class DevelopmentSettings(BaseAppSettings):
DEBUG: bool = True
LOG_LEVEL: str = "DEBUG"
DATABASE_ECHO: bool = True
class ProductionSettings(BaseAppSettings):
DEBUG: bool = False
LOG_LEVEL: str = "WARNING"
DATABASE_ECHO: bool = False
def get_settings() -> BaseAppSettings:
environment = os.getenv("ENVIRONMENT", "local")
if environment == "production":
return ProductionSettings()
else:
return DevelopmentSettings()
settings = get_settings()
```
## Removing Unused Services
### Minimal Configuration
Remove services you don't need:
```python
# Minimal setup without Redis services
class MinimalSettings(
AppSettings,
PostgresSettings,
CryptSettings,
FirstUserSettings,
# Removed: RedisCacheSettings
# Removed: RedisQueueSettings
# Removed: RedisRateLimiterSettings
EnvironmentSettings,
):
pass
```
### Service Feature Flags
Use feature flags to conditionally enable services:
```python
class ServiceSettings(BaseSettings):
ENABLE_REDIS: bool = True
ENABLE_CELERY: bool = True
ENABLE_MONITORING: bool = False
import os

def build_settings_class(enable_redis: bool) -> type:
    """Compose the Settings class from only the groups that are enabled."""
    bases = [AppSettings, PostgresSettings, CryptSettings, ServiceSettings]
    if enable_redis:
        bases.append(RedisCacheSettings)
    return type("ConditionalSettings", tuple(bases), {})

Settings = build_settings_class(
    enable_redis=os.getenv("ENABLE_REDIS", "true").lower() == "true"
)
settings = Settings()
```
## Testing Settings
### Test Configuration
Create separate settings for testing:
```python
class TestSettings(BaseSettings):
# Override database for testing
POSTGRES_DB: str = "test_database"
# Disable external services
ENABLE_REDIS: bool = False
ENABLE_EMAIL: bool = False
# Speed up tests
ACCESS_TOKEN_EXPIRE_MINUTES: int = 5
# Test-specific settings
TEST_USER_EMAIL: str = "test@example.com"
TEST_USER_PASSWORD: str = "testpassword123"
# Use in tests
@pytest.fixture
def test_settings():
return TestSettings()
```
### Settings Validation Testing
Test your custom settings:
```python
def test_custom_settings_validation():
# Test valid configuration
settings = CustomSettings(
CUSTOM_API_KEY="test-key",
CUSTOM_TIMEOUT=60,
MAX_UPLOAD_SIZE=5242880 # 5MB
)
assert settings.CUSTOM_TIMEOUT == 60
# Test validation error
with pytest.raises(ValueError, match="MAX_UPLOAD_SIZE cannot exceed 100MB"):
CustomSettings(MAX_UPLOAD_SIZE=209715200) # 200MB
def test_settings_computed_fields():
settings = StorageSettings(
STORAGE_TYPE="s3",
AWS_ACCESS_KEY_ID="test-key",
AWS_SECRET_ACCESS_KEY="test-secret",
AWS_BUCKET_NAME="test-bucket"
)
assert settings.STORAGE_ENABLED is True
assert settings.STORAGE_CONFIG["bucket"] == "test-bucket"
```
## Best Practices
### Organization
- Group related settings in dedicated classes
- Use descriptive names for settings groups
- Keep validation logic close to the settings
- Document complex validation rules
### Security
- Validate sensitive settings like secret keys
- Never set default values for secrets in production
- Use computed fields to derive connection strings
- Separate test and production configurations
### Performance
- Use `@computed_field` for expensive calculations
- Cache settings instances appropriately
- Avoid complex validation in hot paths
- Use model validators for cross-field validation
### Testing
- Create separate test settings classes
- Test all validation rules
- Mock external service settings in tests
- Use dependency injection for settings in tests
The settings system provides type safety, validation, and organization for your application configuration. Start with the built-in settings and extend them as your application grows!