Kubernetes Deployments & Services
Deployments and Services are at the heart of Kubernetes. Learn how to deploy applications reliably and make them reachable.
Creating a Deployment
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 3
Applying the Deployment
# Create the deployment
kubectl apply -f deployment.yaml

# Check status
kubectl get deployments
kubectl get pods

# Show details
kubectl describe deployment my-app

# Logs of a pod
kubectl logs -f deployment/my-app

# Open a shell in a pod
kubectl exec -it deployment/my-app -- /bin/sh
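Before applying changes to a running cluster, it can be worth validating the manifest first. A small sketch using standard kubectl options:

# Validate the manifest on the server without persisting it
kubectl apply -f deployment.yaml --dry-run=server

# Show what would change compared to the live object
kubectl diff -f deployment.yaml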
Service Types
| Type | Description | Reachable from |
|---|---|---|
| ClusterIP | Internal service | Only from within the cluster |
| NodePort | Port on every node | Externally via node IP:port |
| LoadBalancer | Cloud load balancer | Externally via the LB IP |
| ExternalName | DNS alias | CNAME to an external service (see the sketch below) |
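ExternalName is the only type without a pod selector; it simply maps the Service name to an external DNS name via a CNAME record. A minimal sketch, with external-db and db.example.com as illustrative placeholders:

# externalname-service.yaml (example names are placeholders)
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com  # in-cluster lookups of external-db resolve to this name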
Creating a Service
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80          # service port
      targetPort: 8080  # container port
  type: ClusterIP
# NodePort service
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # allowed range: 30000-32767
  type: NodePort
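Once applied, the application is reachable on port 30080 of every node, for example (replace <node-ip> with the address of any cluster node):

curl http://<node-ip>:30080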
# LoadBalancer service
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
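The external address is provisioned by the cloud provider and appears in the EXTERNAL-IP column once it is ready (it shows <pending> until then):

kubectl get service my-app-lb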
Service Commands
# Create the service
kubectl apply -f service.yaml

# List services
kubectl get services

# Service details
kubectl describe service my-app-service

# Test the service (from inside a pod)
kubectl run test --rm -it --image=busybox -- wget -qO- my-app-service:80
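For quick testing from your workstation, a ClusterIP Service can also be forwarded locally instead of spinning up a test pod:

# Forward local port 8080 to the service's port 80
kubectl port-forward service/my-app-service 8080:80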
Scaling
# Scale manually
kubectl scale deployment my-app --replicas=5

# Autoscaling (HPA)
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80

# HPA status
kubectl get hpa
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
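CPU-based autoscaling only works if resource metrics are available in the cluster, typically via the metrics-server. A quick check:

# Verify that resource metrics are being collected
kubectl top pods
kubectl top nodes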
Rolling Updates
# Update strategy in the deployment
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # max. number of additional pods during the update
      maxUnavailable: 0  # keep all replicas available at all times
# Update the image
kubectl set image deployment/my-app my-app=my-app:2.0.0

# Rollout status
kubectl rollout status deployment/my-app

# Rollout history
kubectl rollout history deployment/my-app

# Rollback
kubectl rollout undo deployment/my-app
kubectl rollout undo deployment/my-app --to-revision=2
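To make the rollout history more informative, the reason for a change can be recorded with the kubernetes.io/change-cause annotation, and a rollout can be paused while you inspect it. A small sketch:

# Record the change cause (shown by kubectl rollout history)
kubectl annotate deployment/my-app kubernetes.io/change-cause="upgrade to my-app:2.0.0"

# Pause and resume a rollout
kubectl rollout pause deployment/my-app
kubectl rollout resume deployment/my-app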
ConfigMaps and Secrets
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  DATABASE_HOST: "postgres"
  LOG_LEVEL: "info"
  config.json: |
    {
      "setting": "value"
    }
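The same ConfigMap can also be created imperatively, for example from literals and a local file (config.json must exist in the current directory):

kubectl create configmap my-config \
  --from-literal=DATABASE_HOST=postgres \
  --from-literal=LOG_LEVEL=info \
  --from-file=config.json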
# Secret
apiVersion: v1
kind: Secret
metadata:
  name: my-secrets
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQxMjM=  # base64-encoded
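Values under data must be base64-encoded; alternatively, kubectl can create the Secret directly from plain text. A quick sketch:

# Base64-encode a value for the data field
echo -n 'password123' | base64

# Or create the secret imperatively (kubectl handles the encoding)
kubectl create secret generic my-secrets --from-literal=DB_PASSWORD=password123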
# Using them in the deployment
spec:
  containers:
    - name: my-app
      envFrom:
        - configMapRef:
            name: my-config
        - secretRef:
            name: my-secrets
      volumeMounts:
        - name: config-volume
          mountPath: /app/config
  volumes:
    - name: config-volume
      configMap:
        name: my-config
Ingress (HTTP Routing)
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
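The referenced myapp-tls Secret must exist in the same namespace. Assuming you already have a certificate and key file (tls.crt and tls.key are placeholder names), it can be created like this:

kubectl create secret tls myapp-tls --cert=tls.crt --key=tls.key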
💡 Tip:
Monitor your Kubernetes clusters with the Enjyn Status Monitor.