Introduction
The previous parts of this guide covered everything needed to build a Django application: project structure, models, queries, forms, authentication, APIs, testing, static files, security, and database configuration. Your application works on your development machine and is configured securely.
Now it needs to run somewhere users can access it.
This final part addresses deployment: getting your application onto servers, keeping it running, knowing when things go wrong, and making it fast enough. These are operational concerns — less about writing code and more about running software reliably.
Deployment Options
Django applications can deploy almost anywhere Python runs. The choice depends on your needs: simplicity versus control, cost versus flexibility, operational burden versus customization. There’s no universally correct answer, but understanding the tradeoffs helps you choose.
Platform as a Service (PaaS)
Services like Heroku, Railway, Render, and Fly.io handle infrastructure for you. You push code; they handle servers, scaling, SSL certificates, and maintenance.
A typical Heroku deployment needs a Procfile declaring how to run your application:
web: gunicorn myproject.wsgi
And a runtime.txt specifying the Python version:
python-3.11.0
You’ll also need gunicorn in your requirements:
gunicorn==21.2.0
Deployment becomes git push heroku main. The platform builds your application, installs dependencies, runs migrations if configured, and routes traffic to it. SSL certificates generate automatically. Scaling means moving a slider or changing a number.
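Running migrations on deploy is typically wired up through Heroku's release phase, which runs a command before each new version receives traffic. A hedged sketch of a Procfile extended this way:

```
release: python manage.py migrate
web: gunicorn myproject.wsgi
```

If the release command fails, the deploy is aborted and the previous version keeps serving traffic.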
PaaS platforms simplify deployment enormously. The tradeoff is cost at scale — you pay for convenience — and less control over the environment. You can’t tune the operating system or install arbitrary software. For small to medium applications, especially with small teams, they’re often the right choice. You’re paying to not think about servers.
The hidden benefit is focus. Time spent configuring Nginx is time not spent building features. For startups and small teams, this tradeoff often makes sense.
Virtual Private Servers (VPS)
Services like DigitalOcean, Linode, Hetzner, or AWS EC2 provide virtual machines you control completely. You get a Linux server with root access. Everything else — Python, web server, database, SSL — you install and configure yourself.
A typical production stack:
- Nginx: Reverse proxy handling HTTPS termination, serving static files, routing requests to Django
- Gunicorn: WSGI server running your Django application
- Supervisor or systemd: Process management ensuring Gunicorn stays running after crashes or reboots
- PostgreSQL: Database, either on the same server or a managed service
- Let’s Encrypt: Free SSL certificates, automatically renewed via Certbot
Setting this up takes hours the first time. You’ll learn about firewall rules, systemd unit files, Nginx configuration syntax, and certificate renewal. Each piece is documented, but combining them requires understanding how they interact.
This approach offers maximum control and can be cost-effective at scale. A $20/month VPS handles significant traffic. The tradeoff is operational burden. You’re responsible for security updates, monitoring, backups, recovery, and scaling. When the server goes down at 3 AM, you fix it.
Small teams often underestimate this burden. “We’ll just run our own server” becomes “we spent the weekend debugging a kernel update that broke networking.”
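As one concrete piece of that stack, a minimal systemd unit keeps Gunicorn running across crashes and reboots. This is a sketch — the user, paths, and project name are placeholders for your own setup:

```ini
[Unit]
Description=Gunicorn for myproject
After=network.target

[Service]
User=deploy
WorkingDirectory=/srv/myproject
ExecStart=/srv/myproject/venv/bin/gunicorn myproject.wsgi:application --bind 127.0.0.1:8000 --workers 3
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Saved as /etc/systemd/system/gunicorn.service, it's enabled with systemctl enable --now gunicorn; Restart=on-failure is what brings the process back after a crash.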
Docker and Containers
Containerization packages your application with its dependencies into an isolated unit. A Dockerfile defines the environment:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN python manage.py collectstatic --noinput
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi"]
Build and run:
docker build -t myproject .
docker run -p 8000:8000 myproject
Containers solve the “it works on my machine” problem. The container includes Python, dependencies, and configuration. What runs in development runs identically in production. New developers set up their environment with docker compose up instead of following a 20-step installation guide.
For single containers, Docker alone suffices. For multiple containers — application, database, cache, background workers — Docker Compose orchestrates them:
version: '3.8'

services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
  db:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
For large-scale deployments across multiple machines, Kubernetes manages container clusters. Kubernetes adds substantial complexity but enables automatic scaling, rolling deployments, and self-healing infrastructure.
Containers add complexity upfront. Learning Docker takes time. Debugging containerized applications requires understanding container networking and volumes. For a simple Django application with one developer, containers might be overkill. For teams managing multiple services or requiring consistent environments, containers pay off substantially.
Choosing Your Path
For your first deployment: use a PaaS. Heroku’s free tier is gone, but Railway and Render offer generous free tiers. Get your application online, learn what production means, then decide if you need more control.
For cost-sensitive projects at scale: VPS becomes attractive. The operational knowledge required is worth acquiring eventually, but not while you’re still learning Django.
For teams and complex applications: containers provide consistency and reproducibility that matter more as projects grow.
WSGI and ASGI Servers
Django needs a server to run. The development server (python manage.py runserver) is explicitly for development — it reloads automatically on code changes, shows detailed error pages, and was never designed for production security or performance. Never use it in production.
Understanding WSGI
WSGI (Web Server Gateway Interface) is the standard protocol between Python web applications and servers. Your Django application speaks WSGI; a WSGI server translates between WSGI and HTTP.
Gunicorn is the most common choice. It’s simple, well-documented, and handles most use cases:
pip install gunicorn
gunicorn myproject.wsgi:application
By default, Gunicorn runs one worker process. For production, run multiple workers to handle concurrent requests:
gunicorn myproject.wsgi:application --workers 3 --bind 0.0.0.0:8000
The rule of thumb: (2 × CPU cores) + 1 workers. A 2-core server runs 5 workers. This formula balances CPU utilization against memory usage.
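That rule of thumb is easy to compute at startup rather than hard-code. A small sketch — the helper name is mine, not a Gunicorn API:

```python
import multiprocessing

def gunicorn_workers():
    """Gunicorn's suggested worker count: (2 x CPU cores) + 1."""
    return 2 * multiprocessing.cpu_count() + 1

print(gunicorn_workers())
```

One common place to use this is a gunicorn.conf.py file, where a line like `workers = gunicorn_workers()` sets the count for every environment without editing the command line.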
uWSGI offers more features and configuration options: built-in process management, statistics, multiple protocols. The tradeoff is complexity. Configuration isn’t intuitive, and the documentation assumes knowledge you might not have. Use uWSGI when Gunicorn’s simplicity isn’t enough.
ASGI for Async
ASGI (Asynchronous Server Gateway Interface) supports async Python. If you’re using Django’s async views, WebSockets (via Django Channels), or other real-time features, you need an ASGI server.
Uvicorn is fast and simple:
pip install uvicorn
uvicorn myproject.asgi:application --workers 3
Daphne is the original ASGI server, developed alongside Django Channels:
pip install daphne
daphne myproject.asgi:application
If you’re not using async features, stick with WSGI. It’s simpler, more mature, and thoroughly tested. ASGI becomes necessary when you need real-time functionality — chat applications, live notifications, collaborative editing.
Nginx as Reverse Proxy
Production deployments typically place Nginx in front of Gunicorn. Nginx handles:
- HTTPS termination (SSL/TLS)
- Serving static files directly (faster than Python)
- Load balancing across multiple Gunicorn processes
- Request buffering and connection management
- Basic security (rate limiting, request filtering)
A minimal Nginx configuration:
server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location /static/ {
        alias /path/to/staticfiles/;
    }

    location /media/ {
        alias /path/to/media/;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The first server block redirects HTTP to HTTPS. The second handles HTTPS requests: static and media files served directly by Nginx, everything else proxied to Gunicorn.
PaaS platforms handle this automatically. On VPS, you configure it yourself.
Logging and Monitoring
Production applications need visibility. When something breaks, you need to know what happened and why. When performance degrades, you need to identify the cause. Logging captures what your application does. Monitoring alerts you to problems.
Configuring Django Logging
Django uses Python’s logging module. Configure it in settings.py:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {message}',
            'style': '{',
        },
    },
    'handlers': {
        'file': {
            'level': 'WARNING',
            'class': 'logging.FileHandler',
            'filename': '/var/log/django/app.log',
            'formatter': 'verbose',
        },
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'verbose',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file', 'console'],
            'level': 'WARNING',
            'propagate': True,
        },
        'blog': {
            'handlers': ['file', 'console'],
            'level': 'INFO',
            'propagate': True,
        },
    },
}
This configuration captures warnings and errors from Django itself, plus info-level messages from your application. Messages go to both a file and the console (useful for containerized deployments where logs go to stdout).
Use logging throughout your code:
import logging

logger = logging.getLogger(__name__)

def create_post(request):
    logger.info(f"User {request.user} attempting to create post")
    try:
        # ... create post ...
        logger.info(f"Post created successfully: {post.pk}")
    except Exception as e:
        logger.error(f"Post creation failed: {e}", exc_info=True)
        raise
The exc_info=True parameter includes the full stack trace — invaluable for debugging production errors. Without it, you see only the error message, not where it came from.
Log meaningful events: user actions, external API calls, significant state changes, performance-relevant operations. Don’t log every function call; the noise obscures important information. Finding the right balance takes practice.
Error Tracking with Sentry
Log files capture errors but require you to check them. Sentry provides proactive error tracking: when errors occur, you get notified immediately with full context.
pip install sentry-sdk
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="your-sentry-dsn",
    integrations=[DjangoIntegration()],
    traces_sample_rate=0.1,  # capture 10% of transactions for performance monitoring
)
When an error occurs, Sentry captures:
- Full stack trace
- Request data (URL, method, headers)
- User information (if authenticated)
- Local variables at each stack frame
- Breadcrumbs (recent events leading to the error)
Sentry groups similar errors together, tracks their frequency over time, and alerts you to new issues. The difference between checking logs and using Sentry is the difference between “I hope I notice” and “I’ll be notified.”
Sentry’s free tier handles substantial volume. For most Django applications, it’s sufficient.
Uptime Monitoring
Sentry catches application errors. Uptime monitoring catches infrastructure problems: server crashes, network outages, certificate expirations.
Services like UptimeRobot, Pingdom, or Better Uptime periodically request your site and alert you when requests fail. Basic checks verify the site responds. More sophisticated checks verify specific content appears or API endpoints return expected data.
Configure alerts to reach you where you’ll notice them: email, SMS, Slack. An alert that nobody sees is worthless.
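The core of such a check fits in a few lines of pure Python. This is an illustrative sketch, not what any particular monitoring service runs — real services add retries, alerting, and checks from multiple regions:

```python
import urllib.error
import urllib.request

def site_is_up(url, expected_text=None, timeout=10):
    """Return True if url answers 200 OK (and optionally contains a marker string)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return False
            if expected_text is not None:
                body = resp.read().decode("utf-8", errors="replace")
                return expected_text in body
            return True
    except (urllib.error.URLError, TimeoutError):
        # DNS failure, connection refused, certificate error, timeout: all count as down
        return False
```

The expected_text parameter implements the "verify specific content appears" style of check: a 200 response that no longer contains your marker string still counts as a failure.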
Performance Optimization
Performance problems rarely appear during development. Your laptop is fast. Your test database has 50 records. Everything responds instantly.
Production is different. Real databases contain thousands or millions of records. Multiple users make concurrent requests. Network latency adds up. Performance problems emerge when real users arrive with real data.
Measuring Before Optimizing
The cardinal rule of performance work: measure first. Intuition about performance is frequently wrong. The function you assume is slow might be instant; the innocent-looking query might be catastrophically expensive.
Django Debug Toolbar shows query counts, execution times, and template rendering during development:
pip install django-debug-toolbar
Add to installed apps and middleware, then watch the query count on each page. Sudden jumps from 5 queries to 50 indicate N+1 problems.
Django Silk records requests and queries for later analysis, useful for profiling specific flows.
In production, APM (Application Performance Monitoring) tools like Sentry’s performance monitoring, New Relic, or Datadog trace requests through your application, identifying slow endpoints and bottlenecks.
Measure. Identify the actual bottleneck. Fix it. Measure again. Premature optimization wastes time solving problems you don’t have.
Caching
Caching stores expensive results for reuse. Instead of computing something on every request, compute it once and serve the cached result.
Django provides a flexible caching framework. Start with a simple backend:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    }
}
Use caching for expensive operations:
from django.core.cache import cache
from django.db.models import Count

def get_popular_posts():
    posts = cache.get('popular_posts')
    if posts is None:
        posts = list(Post.objects.annotate(
            comment_count=Count('comments')
        ).order_by('-comment_count')[:10])
        cache.set('popular_posts', posts, 3600)  # cache for 1 hour
    return posts
For pages that change rarely, cache entire views:
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)  # cache for 15 minutes
def post_list(request):
    ...
The local memory backend works for development and single-server deployments. For multiple servers, use Redis or Memcached so all servers share the cache:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379',
    }
}
Caching introduces complexity: cache invalidation. When data changes, cached results become stale. Simple time-based expiration works for many cases. More sophisticated applications invalidate caches explicitly when relevant data changes.
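Both strategies are easy to see in miniature. This standalone toy (not Django's cache API) shows time-based expiration plus an explicit invalidate() escape hatch:

```python
import time

class TTLCache:
    """Toy cache: values expire after a TTL, or can be invalidated explicitly."""

    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # stale entry: drop it and report a miss
            return None
        return value

    def invalidate(self, key):
        # Explicit invalidation: call this when the underlying data changes
        self._store.pop(key, None)
```

Django's cache.set(key, value, timeout) corresponds to the time-based path; cache.delete(key) is the explicit one.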
Database Optimization
Database queries are usually the bottleneck. Optimization strategies from earlier parts apply here:
select_related and prefetch_related prevent N+1 queries. If your view loops over objects and accesses related objects, you probably need these.
Database indexes speed up filtering and ordering. Django creates indexes for primary keys and ForeignKey fields automatically. For fields you frequently filter or order by, add indexes:
class Post(models.Model):
    title = models.CharField(max_length=200, db_index=True)
    created_at = models.DateTimeField(auto_now_add=True, db_index=True)
Query optimization reduces data transfer. Use values() or values_list() when you need only specific fields. Use count() instead of len(queryset). Use exists() instead of checking if a queryset is truthy.
# Instead of
if len(Post.objects.filter(author=user)) > 0:
    ...

# Use
if Post.objects.filter(author=user).exists():
    ...
The database counts or checks existence far faster than fetching all objects for Python to count.
Conclusion
This guide covered Django from first concepts through production deployment. The journey from startproject to a running public application involves many decisions, and we’ve tried to explain not just what to do but why.
A few parting thoughts:
Start simple. PaaS for early deployment, basic logging before complex monitoring, no caching until you’ve measured. Add complexity when you have evidence it’s needed, not before. Premature optimization — including premature architectural complexity — wastes time.
Expect problems. Production will surprise you. Users do unexpected things. Traffic spikes occur at inconvenient times. Dependencies fail. The goal isn’t preventing all problems; it’s detecting them quickly and recovering gracefully.
Learn operations gradually. You don’t need to master Kubernetes before deploying your first Django application. Start with a PaaS, learn what breaks, understand why, then take on more operational complexity as your needs demand it.
Read errors carefully. Django’s error messages are unusually helpful. The traceback tells you exactly where things went wrong. Sentry shows you what users were doing when errors occurred. Most problems are solvable by reading what your tools tell you.
Keep learning. This guide covered fundamentals. Django has more: class-based views in depth, custom middleware, management commands, signals, multiple databases, GeoDjango, and more. The official documentation is comprehensive. The community is welcoming. Whatever you’re building, someone has probably solved similar problems.
Django has powered applications for nearly two decades. Instagram, Pinterest, Mozilla, and countless smaller applications run on it. The framework is mature, well-documented, and actively developed.
Build something. Deploy it. Learn from what breaks. Iterate.
Good luck.
The Frontek.dev Team