Every database query needs a connection. Open connection, run query, close connection. Simple.

Except opening a connection is expensive. TCP handshake. TLS negotiation. Authentication. Protocol setup. For MySQL, this can take 20-50ms. Your query might only take 2ms.

You’re spending 10x more time connecting than querying.

The Pool#

A connection pool keeps connections open and ready. Instead of open-query-close, you borrow-query-return.

%%{init: {'theme':'base', 'themeVariables': { 'primaryColor':'#000000','primaryTextColor':'#00ff00','primaryBorderColor':'#00ff00','lineColor':'#00ff00','secondaryColor':'#000000','tertiaryColor':'#000000'}}}%%
sequenceDiagram
    autonumber
    participant App as Application
    participant Pool as Connection Pool
    participant DB as Database
    Note over Pool: 10 connections ready
    App->>Pool: Borrow connection
    Pool-->>App: Here's connection #3
    App->>DB: SELECT * FROM users
    DB-->>App: Results
    App->>Pool: Return connection #3
    Note over Pool: Connection #3 available again

The pool handles the lifecycle. Connections stay warm. Your code just borrows and returns.
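The borrow-query-return cycle can be sketched with a blocking queue. This is a toy pool for illustration only (the class and labels are mine); real pools like HikariCP add validation, timeouts, and leak detection on top of this idea:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy pool: pre-creates "connections" (here just labels) and hands them out.
class ToyPool {
    private final BlockingQueue<String> available;

    ToyPool(int size) {
        available = new ArrayBlockingQueue<>(size);
        for (int i = 1; i <= size; i++) {
            available.add("connection #" + i);  // in real life: open a DB connection here
        }
    }

    String borrow() throws InterruptedException {
        return available.take();  // blocks if every connection is borrowed
    }

    void giveBack(String conn) {
        available.add(conn);      // back in the queue, ready for the next borrower
    }
}
```

Borrow, run the query, return. The connection itself stays open the whole time; only ownership moves.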

In Java with HikariCP:

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
config.setMaximumPoolSize(10);
config.setMinimumIdle(5);

HikariDataSource ds = new HikariDataSource(config);

// Borrow, use, return (try-with-resources handles return)
try (Connection conn = ds.getConnection()) {
    // run queries
}

Pool Exhaustion#

You have 10 connections. 11 requests arrive simultaneously. Request #11 waits.

If your queries are slow, or you have a traffic spike, the pool empties. Requests queue up. Latency spikes. Timeouts fire. Users see errors.

This is pool exhaustion. I’ve seen it bring down services during traffic spikes.

// HikariCP log you don't want to see
Connection is not available, request timed out after 30000ms

The fix isn’t always “add more connections.” Sometimes it’s “why are queries taking so long?”
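The waiting behavior is easy to simulate with a semaphore standing in for the pool (illustrative only; HikariCP's internals differ):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

class ExhaustionDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore pool = new Semaphore(10);   // 10 permits = 10 connections

        pool.acquire(10);                     // 10 requests hold every connection

        // Request #11: wait up to 100ms for a free connection, then give up.
        boolean got = pool.tryAcquire(100, TimeUnit.MILLISECONDS);
        System.out.println("request #11 got a connection: " + got);  // false

        pool.release();                       // one request finishes
        got = pool.tryAcquire(100, TimeUnit.MILLISECONDS);
        System.out.println("after a return: " + got);                // true
    }
}
```

The `tryAcquire` timeout plays the role of `connectionTimeout`: the request doesn't fail because the database is down, it fails because nobody returned a connection in time.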

Sizing the Pool#

The classic question: how many connections?

Too small: Pool exhaustion under load. Requests wait.

Too large: Wasted resources. Database has connection limits too. 1000 app servers with 50 connections each = 50,000 database connections. Your database will not be happy.

A starting formula from the HikariCP wiki:

connections = (core_count * 2) + effective_spindle_count

For SSDs, effective spindle count is roughly 1. An 8-core server? Start with (8 * 2) + 1 = 17 connections.

But this is just a starting point. Measure under realistic load.
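As a sketch, the formula translates directly to code (the helper name is mine, not from the HikariCP wiki):

```java
class PoolSizing {
    // connections = (core_count * 2) + effective_spindle_count
    static int suggestedPoolSize(int coreCount, int effectiveSpindles) {
        return coreCount * 2 + effectiveSpindles;
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // For SSDs, treat effective spindle count as roughly 1.
        System.out.println("starting pool size: " + suggestedPoolSize(cores, 1));
    }
}
```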

Key Parameters#

maximumPoolSize: Upper limit. Don’t set this too high. Database connections consume memory on both sides.

minimumIdle: Connections to keep ready even when idle. Avoids cold start on traffic spike.

connectionTimeout: How long to wait for a connection before giving up. Too short and you fail requests that would have succeeded after a brief wait. Too long and requests pile up while the pool stays exhausted.

idleTimeout: How long an unused connection stays in the pool before being closed. Keeps the pool fresh, avoids stale connections.

maxLifetime: Maximum age of a connection. Connections get recycled even if healthy. Helps with network equipment that kills idle connections, and database failovers.

config.setMaximumPoolSize(10);
config.setMinimumIdle(5);
config.setConnectionTimeout(30000);  // 30 seconds
config.setIdleTimeout(600000);       // 10 minutes
config.setMaxLifetime(1800000);      // 30 minutes

The Leak#

Borrowed a connection but never returned it? That’s a leak.

// BAD: Connection never returned
Connection conn = ds.getConnection();
conn.prepareStatement("SELECT ...").executeQuery();
// forgot to close

// GOOD: try-with-resources ensures return
try (Connection conn = ds.getConnection()) {
    conn.prepareStatement("SELECT ...").executeQuery();
}

Leaks are silent killers. Your pool slowly drains. Eventually, exhaustion. HikariCP has leak detection:

config.setLeakDetectionThreshold(60000); // warn if held > 60 seconds

At Oracle, a connection leak took us three hours to find. The pool drained over 20 minutes, then everything failed at once. We added leak detection the next day.

Beyond Databases#

Connection pooling applies everywhere:

  • HTTP clients: Don’t open a new TCP connection for every request. Apache HttpClient and OkHttp both pool connections.
  • Redis: Jedis uses a connection pool; Lettuce can share one thread-safe connection or pool them.
  • gRPC: A channel multiplexes many calls over long-lived HTTP/2 connections, so you create it once and reuse it.

The pattern is the same. Opening is expensive. Reuse is cheap.
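With the JDK’s built-in HTTP client, for example, the pattern means creating one client and reusing it for every request — it maintains its own internal connection pool (a sketch; actual reuse depends on the server and protocol):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class HttpReuse {
    // One client for the whole application. Creating a client per request
    // throws away its pooled connections every time.
    static final HttpClient CLIENT = HttpClient.newHttpClient();

    static String fetch(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        // Repeated requests to the same host can reuse an already-open connection.
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```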

What I’m Learning#

Connection pooling is one of those “boring” infrastructure topics that silently determines whether your service survives a traffic spike.

The insight that stuck with me: pool size isn’t about maximizing throughput. It’s about matching your application’s concurrency to your database’s capacity. More connections doesn’t mean faster. Past a point, it means contention, memory pressure, and slower queries.

Start small. Measure. Increase only when you have evidence.

Have you ever debugged a connection pool exhaustion issue? What gave it away?