Database Connection Pooling
Connection Pooling: Efficient DB Connections
Connection pools keep database connections ready for reuse. This avoids the overhead of repeatedly establishing new connections.
The Problem
Without a pool:
Request 1 → Connect → Query → Disconnect  (~50ms overhead)
Request 2 → Connect → Query → Disconnect  (~50ms overhead)
Request 3 → Connect → Query → Disconnect  (~50ms overhead)

With a pool:
Pool: [Conn1, Conn2, Conn3, Conn4, Conn5]
Request 1 → Conn1 → Query → Return to Pool  (~1ms)
Request 2 → Conn2 → Query → Return to Pool  (~1ms)
Request 3 → Conn3 → Query → Return to Pool  (~1ms)
Node.js: pg Pool
const { Pool } = require('pg');

const pool = new Pool({
  host: process.env.DB_HOST,
  port: 5432,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  // Pool configuration
  max: 20,                       // max connections
  min: 5,                        // min connections (kept warm)
  idleTimeoutMillis: 30000,      // close idle connections after 30s
  connectionTimeoutMillis: 5000, // timeout when acquiring a new connection
});

// Event listeners
pool.on('connect', () => console.log('New connection'));
pool.on('error', (err) => console.error('Pool error', err));

// Query (the pool checks connections out and back in automatically)
async function getUsers() {
  const result = await pool.query('SELECT * FROM users LIMIT 10');
  return result.rows;
}

// Or with a dedicated client for transactions
async function transferMoney(from, to, amount) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from]);
    await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to]);
    await client.query('COMMIT');
  } catch (e) {
    await client.query('ROLLBACK');
    throw e;
  } finally {
    client.release(); // back into the pool!
  }
}

// Clean up on shutdown
process.on('SIGTERM', () => pool.end());
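For context, here is one typical way to call getUsers() from a request handler; Express and the route path are assumptions for illustration, not part of the setup above:

const express = require('express');
const app = express();

app.get('/users', async (req, res) => {
  try {
    res.json(await getUsers()); // the pool hands out and reclaims the connection internally
  } catch (err) {
    res.status(500).json({ error: 'database error' });
  }
});

app.listen(3000);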
PHP: PDO Persistent Connections
<?php
// PDO with persistent connections
$pdo = new PDO(
    'mysql:host=localhost;dbname=mydb',
    'user',
    'password',
    [
        PDO::ATTR_PERSISTENT => true, // persistent connection
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
    ]
);

// Laravel: config/database.php
'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'options' => [
        PDO::ATTR_PERSISTENT => true,
    ],
],
Python: SQLAlchemy Pool
from sqlalchemy import create_engine, text
from sqlalchemy.pool import QueuePool

engine = create_engine(
    'postgresql://user:pass@localhost/db',
    poolclass=QueuePool,
    pool_size=5,         # permanent connections
    max_overflow=10,     # extra connections on demand
    pool_timeout=30,     # max wait for a free connection
    pool_recycle=1800,   # recycle connections every 30 min
    pool_pre_ping=True,  # test connections before use
)

# Usage
with engine.connect() as conn:
    result = conn.execute(text("SELECT * FROM users"))
    for row in result:
        print(row)
# The connection goes back into the pool automatically
Calculating Pool Size
# Rule of thumb:
# connections = (core_count * 2) + effective_spindle_count
#
# For SSDs:
# effective_spindle_count ≈ 0
#
# Example: 4 cores
# connections = (4 * 2) + 0 = 8
#
# Keep in mind:
# - Too few: requests wait for a connection
# - Too many: DB overhead, memory pressure
# - Usually 10-20 connections per app instance
# - Respect the database server's own connection limits!
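The rule of thumb translates into a small helper; a minimal sketch in Node.js, where the function name and the default of 0 effective spindles (SSD) are assumptions:

const os = require('os');

// Suggested starting pool size: (core_count * 2) + effective_spindle_count
function suggestedPoolSize(effectiveSpindleCount = 0) { // ~0 for SSDs
  return os.cpus().length * 2 + effectiveSpindleCount;
}

console.log(suggestedPoolSize()); // e.g. 8 on a 4-core machine

Treat the result as a starting point and adjust it based on the monitoring numbers below.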
PgBouncer (PostgreSQL Proxy)
# Central pool for multiple app instances
[App 1] ──┐
[App 2] ──┼──> PgBouncer ──> PostgreSQL
[App 3] ──┘    (single connection pool)
# pgbouncer.ini
[databases]
mydb = host=localhost dbname=mydb
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
# pool_mode: session, transaction, or statement
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
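On the application side, the client pool simply points at PgBouncer instead of PostgreSQL; a sketch reusing the pg Pool from above, where the host name is an assumption and the port matches listen_port in the config:

const { Pool } = require('pg');

const pool = new Pool({
  host: 'pgbouncer.internal', // hypothetical PgBouncer host
  port: 6432,                 // PgBouncer's listen_port
  database: 'mydb',
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  max: 20, // per-instance client pool; PgBouncer multiplexes onto default_pool_size server connections
});

Note that with pool_mode = transaction, session state such as prepared statements, SET variables, or advisory locks does not survive across transactions, so the application must not rely on it.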
Monitoring
// Node.js: pool statistics
setInterval(() => {
  console.log({
    total: pool.totalCount,     // all connections
    idle: pool.idleCount,       // available
    waiting: pool.waitingCount, // requests waiting for a connection
  });
}, 10000);
-- PostgreSQL: active connections
SELECT count(*) FROM pg_stat_activity;

-- MySQL: connections
SHOW STATUS LIKE 'Threads_connected';
SHOW VARIABLES LIKE 'max_connections';
Best Practices
✅ Do:
- Always return connections to the pool (finally/using)
- Log pool events (errors, timeouts)
- Configure timeouts
- Enable health checks (see the sketch below)
❌ Don't:
- Forget to release connections (connection leak!)
- Oversize the pool
- Hold a single connection in a global variable
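A minimal health-check sketch for the Node.js pool shown earlier; the Express app object and the route path are assumptions:

app.get('/health', async (req, res) => {
  try {
    await pool.query('SELECT 1'); // verifies a connection can be acquired and the DB responds
    res.json({ status: 'ok', idle: pool.idleCount, waiting: pool.waitingCount });
  } catch (err) {
    res.status(503).json({ status: 'unavailable' });
  }
});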