Load Balancing Strategies
Load Balancing: Distributing Traffic Intelligently
Load balancers distribute requests across multiple servers. Learn the different strategies and when to use each.
Why Load Balancing?
Without a load balancer:
┌──────────┐
│  Client  │───────────────────► Server (overloaded!)
└──────────┘
With a load balancer:
┌──────────┐      ┌───────────────┐      ┌──────────┐
│  Client  │─────►│ Load Balancer │─────►│ Server 1 │
└──────────┘      └───────┬───────┘      ├──────────┤
                          ├─────────────►│ Server 2 │
                          │              ├──────────┤
                          └─────────────►│ Server 3 │
                                         └──────────┘
Load Balancing Algorithms
| Algorithm | Description | Best suited for |
|---|---|---|
| Round Robin | Distributes requests in turn | Servers of equal capacity |
| Weighted Round Robin | Round robin with weighting | Servers of different capacities |
| Least Connections | Server with the fewest open connections | Long-running requests |
| IP Hash | Client IP determines the server | Session affinity |
| Least Response Time | Fastest-responding server | Performance-critical workloads |
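To make the table concrete, here is a minimal Python sketch of the four most common strategies; the Balancer class and its method names are illustrative, not a real library API.

import itertools
import zlib

class Balancer:
    """Minimal sketch of common selection strategies (illustrative only)."""

    def __init__(self, servers, weights=None):
        self.servers = servers                  # e.g. ["10.0.0.1:8080", ...]
        self.weights = weights or [1] * len(servers)
        self._rr = itertools.cycle(servers)     # round-robin iterator
        self._current = [0] * len(servers)      # state for smooth weighted RR
        self.active = {s: 0 for s in servers}   # open-connection counts

    def round_robin(self):
        return next(self._rr)

    def weighted_round_robin(self):
        # Smooth weighted round robin: every pick raises each score by its
        # weight, then the winner is penalized by the total weight.
        for i, w in enumerate(self.weights):
            self._current[i] += w
        best = max(range(len(self.servers)), key=self._current.__getitem__)
        self._current[best] -= sum(self.weights)
        return self.servers[best]

    def least_connections(self):
        # Pick the server with the fewest open connections.
        return min(self.servers, key=self.active.__getitem__)

    def ip_hash(self, client_ip):
        # Stable hash: the same client IP always lands on the same server.
        return self.servers[zlib.crc32(client_ip.encode()) % len(self.servers)]

With Balancer(["10.0.0.1:8080", "10.0.0.2:8080"], weights=[3, 1]), weighted_round_robin() sends three out of every four picks to the first server.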
Nginx as a Load Balancer
# /etc/nginx/nginx.conf

# Define the upstream group
upstream backend {
    # Round robin (default)
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# With weighting
upstream backend_weighted {
    server 10.0.0.1:8080 weight=3;  # receives 3x as much traffic
    server 10.0.0.2:8080 weight=2;
    server 10.0.0.3:8080 weight=1;
}

# Least connections
upstream backend_least {
    least_conn;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# IP hash (session affinity)
upstream backend_sticky {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
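After changing the configuration, validate it with nginx -t and apply it with nginx -s reload (or systemctl reload nginx), which picks up the new upstream definitions without dropping active connections.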
Health Checks
Open-source nginx performs passive health checks: max_fails counts failed real requests, and a failing server is then skipped for fail_timeout.
upstream backend {
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:8080 backup;  # only used when the others fail
}
# NGINX Plus (commercial) - active health checks
upstream backend {
    zone backend 64k;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    location / {
        proxy_pass http://backend;
        # health_check is only valid in a location context
        health_check interval=5s fails=3 passes=2;
    }
}
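On the open-source version, active probing can be approximated with an external watchdog. A minimal Python sketch, assuming every backend exposes a /health endpoint:

import time
import urllib.request

def is_healthy(server, path="/health", timeout=2):
    """Probe one backend; True if it answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(f"http://{server}{path}", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor(servers, interval=5, fails=3):
    """Report a backend as down after `fails` consecutive failed probes."""
    failures = dict.fromkeys(servers, 0)
    while True:
        for s in servers:
            failures[s] = 0 if is_healthy(s) else failures[s] + 1
            if failures[s] == fails:
                print(f"{s} is down ({fails} consecutive failures)")
        time.sleep(interval)

monitor(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])  # runs until interrupted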
HAProxy Configuration
# /etc/haproxy/haproxy.cfg
global
    maxconn 50000
    log /dev/log local0

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    option httplog

frontend http_front
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/cert.pem
    redirect scheme https if !{ ssl_fc }
    default_backend servers

backend servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    server server1 10.0.0.1:8080 check weight 3
    server server2 10.0.0.2:8080 check weight 2
    server server3 10.0.0.3:8080 check weight 1 backup

# Stats page
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats auth admin:password
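The configuration can be checked before a reload with haproxy -c -f /etc/haproxy/haproxy.cfg; once running, the stats page defined above is reachable at http://<host>:8404/stats with the credentials from the config.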
Docker Swarm Load Balancing
# docker-compose.yml
version: '3.8'

services:
  web:
    image: nginx
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"

# Swarm load-balances automatically across the replicas,
# internally via the ingress network (routing mesh)
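A deployment of this file could look like docker stack deploy -c docker-compose.yml mystack, after which docker service scale mystack_web=5 adjusts the replica count (mystack is a placeholder stack name); the routing mesh keeps distributing connections on port 80 across all replicas.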
Kubernetes Services
# ClusterIP: internal load balancing
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP

# LoadBalancer: external load balancing
apiVersion: v1
kind: Service
metadata:
  name: my-service-external
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
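Both manifests are applied with kubectl apply -f service.yaml and inspected with kubectl get svc. With type LoadBalancer, the cloud provider provisions an external load balancer and publishes its address in the EXTERNAL-IP column; inside the cluster, kube-proxy distributes traffic across the pods matched by the selector.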
Session Persistence (Sticky Sessions)
# NGINX Plus - cookie-based (the sticky directive is not available in open-source nginx)
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    sticky cookie srv_id expires=1h;
}
# HAProxy - cookie-based
backend servers
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server server1 10.0.0.1:8080 cookie s1
    server server2 10.0.0.2:8080 cookie s2

# Better: stateless design!
# Keep sessions in Redis/Memcached instead of on individual servers
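As a sketch of that stateless approach in Python, assuming the redis-py client is installed and a hypothetical central session host named sessions.internal:

import json
import uuid

import redis  # assumes the redis-py client is installed

# Hypothetical central session store reachable from every backend server.
r = redis.Redis(host="sessions.internal", port=6379, decode_responses=True)

SESSION_TTL = 3600  # seconds

def create_session(user_id):
    """Store session data centrally so any backend can serve the next request."""
    sid = uuid.uuid4().hex
    r.setex(f"session:{sid}", SESSION_TTL, json.dumps({"user_id": user_id}))
    return sid  # handed to the client as a cookie value

def load_session(sid):
    raw = r.get(f"session:{sid}")
    return json.loads(raw) if raw else None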
SSL Termination
# The load balancer terminates SSL
┌────────┐  HTTPS  ┌───────────────┐  HTTP  ┌────────┐
│ Client │────────►│ Load Balancer │───────►│ Server │
└────────┘         └───────────────┘        └────────┘
    └────── SSL here ──────┘
# Nginx SSL termination
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;

    location / {
        proxy_pass http://backend;  # plain HTTP internally
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
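A consequence of this setup: the backend only ever sees plain HTTP, so the application has to read the original scheme from X-Forwarded-Proto. A minimal standard-library WSGI sketch (the app itself is hypothetical):

from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Behind SSL termination the request arrives as plain HTTP, so the
    # original scheme set by the load balancer is read from the header.
    scheme = environ.get("HTTP_X_FORWARDED_PROTO", "http")
    body = f"original scheme: {scheme}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

make_server("", 8080, app).serve_forever()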
💡 Tip:
Monitor the availability of your load-balanced servers with the Enjyn Status Monitor.