Why HikariCP Throws Timeout & How to Fix It



Originally published on Medium

TL;DR

If you’re seeing SQLTransientConnectionException from HikariCP, it’s not always the database. Business logic inside a DB connection scope can silently block the pool. Keep non-DB work outside the connection scope to avoid timeouts and cascading failures.

🔍 What’s the Error?

If you’ve worked with HikariCP for connection pooling in Java or Scala, you may have hit this error:

java.sql.SQLTransientConnectionException: your-pool-name -
Connection is not available, request timed out after 30000ms.

In my article How Slow Queries Lead to HikariCP Connection Timeouts, I attributed the issue to slow SQL queries.

But recently, I discovered a different culprit:

non-database logic holding the connection longer than it should.

🧠 Understanding the Core Problem

HikariCP times out and throws SQLTransientConnectionException when:

  • All connections in the pool are in use.
  • New requests for a connection are queued.
  • No connection becomes available within the timeout window.

Diagram showing how HikariCP manages query queue and JDBC connections

⚠ This is a simplified conceptual diagram — not an exact representation.

HikariCP doesn’t time out unless:

  • One thread is holding a connection, and
  • Another thread is waiting for one that never becomes free within the timeout window.
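Two HikariCP settings govern that window: maximumPoolSize (how many connections can be handed out at once) and connectionTimeout (how long getConnection waits before giving up). Here is a minimal configuration sketch; the JDBC URL and the values are illustrative, not taken from the original setup:

import com.zaxxer.hikari.{HikariConfig, HikariDataSource}

val config = new HikariConfig()
config.setJdbcUrl("jdbc:postgresql://localhost:5432/demo") // hypothetical connection string
config.setPoolName("your-pool-name")   // appears in the timeout message
config.setMaximumPoolSize(10)          // max connections handed out concurrently
config.setConnectionTimeout(30000)     // getConnection waits up to 30s, then throws
                                       // SQLTransientConnectionException (matches the 30000ms above)
val ds = new HikariDataSource(config)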

⚠ Hidden Trap: Non-DB Code Blocking a Connection

As mentioned earlier, it wasn’t the query that was slow; it was extra logic running inside the DB connection block that delayed returning the connection to the pool.

Here’s a minimal reproducible Scala example:

import com.google.common.util.concurrent.ThreadFactoryBuilder
import com.zaxxer.hikari.{HikariConfig, HikariDataSource}
import org.scalatest.BeforeAndAfterAll
import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AnyWordSpec

import java.sql.SQLTransientConnectionException
import java.util.concurrent.Executors
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, ExecutionContext, Future}

class HikariCPDemoSpec extends AnyWordSpec with Matchers with BeforeAndAfterAll {

  private val connectionTimeoutInMillis = 3000

  private val config = new HikariConfig()
  ...
  config.setMaximumPoolSize(1) // a single connection, so one slow caller blocks everyone else
  config.setConnectionTimeout(connectionTimeoutInMillis) // wait at most 3s for a free connection

  private val ds = new HikariDataSource(config)

  private val executorService = Executors.newFixedThreadPool(2, new ThreadFactoryBuilder().setNameFormat(s"app-thread-pool-%d").build())
  private implicit val executionContext: ExecutionContext = ExecutionContext.fromExecutor(executorService)

  private def executeQuery(sleepSeconds: Int, callback: () => Unit): Future[Unit] = Future {
    val connection = ds.getConnection // get connection from a pool
    try {
      val stmt = connection.prepareStatement(s"SELECT pg_sleep($sleepSeconds)")
      stmt.execute()
      callback() // non-DB work could block connection if it's slow
    } finally {
      connection.close() // Always return connection to pool
    }
  }

  private def simulateSlowNonDBWork(): Unit = {
    Thread.sleep(3000)
  }

  "Demo" should {
    "throw an exception when long running non-DB work block a connection" in {
      intercept[SQLTransientConnectionException] {

        val computation1 = executeQuery(1, () => simulateSlowNonDBWork())
        val computation2 = executeQuery(1, () => println("Quick task."))

        val r = (for {
          _ <- computation1
          _ <- computation2
        } yield ())
        Await.result(r, Duration.Inf)
      }
    }
  }
}

❗ What happens?

  • The first future’s pg_sleep(1) completes in about a second.
  • Thread.sleep(3000) then keeps the connection checked out for 3 more seconds.
  • Meanwhile, the second future waits for a connection that never frees up (pool size = 1).
  • After the 3-second connection timeout, SQLTransientConnectionException is thrown.

Recommended Pattern

Don’t hold a DB connection while doing unrelated work.

Good Practice

Move all non-DB work outside the DB block:

def executeQuery(): Future[ResultType] = Future {
  val conn = ds.getConnection()
  try {
    val stmt = conn.prepareStatement("SELECT something FROM table")
    val rs = stmt.executeQuery()
    extractResult(rs) // materialize the result while the connection is still open
  } finally {
    conn.close() // connection returns to the pool before any business logic runs
  }
}

val resultsFromDB = executeQuery()
resultsFromDB.map { dbResult =>
  someBusinessLogic(dbResult) // outside the DB scope
}

This ensures that the connection is held only during the actual database operation, not during additional computation, networking, or delays.
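One way to make this discipline hard to break is a small loan-pattern helper that owns the connection lifecycle. This is only a sketch under the same setup as above (ds, ResultType, extractResult and someBusinessLogic are the placeholders already used); withConnection is a hypothetical helper, not a HikariCP API:

// Hypothetical helper: acquire a connection, run only DB work inside it,
// and always return the connection to the pool before the caller sees the result.
def withConnection[A](work: java.sql.Connection => A): Future[A] = Future {
  val conn = ds.getConnection
  try work(conn)
  finally conn.close() // back to the pool before any business logic runs
}

// Only the query and result extraction happen inside the connection scope.
val results: Future[ResultType] = withConnection { conn =>
  val stmt = conn.prepareStatement("SELECT something FROM table")
  extractResult(stmt.executeQuery())
}

results.map(someBusinessLogic) // business logic runs after the connection is released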

Real-World Impact

The example above is simplified, but I’ve seen real production code exhibit the same issue: long-running computations or network calls performed while still holding a DB connection.

In high-concurrency systems, these small delays compound and saturate the pool, leading to cascading failures under load.

Final Tips to Avoid HikariCP Timeouts

✅ Keep the DB connection lifecycle tight and minimal.
⛔ Don’t mix business logic with DB access.
📦 Configure your pool size, timeout, and leak detection threshold appropriately (see the sketch below).
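On the configuration side, HikariCP’s leak detection threshold is a useful guard for exactly this anti-pattern: it logs a warning whenever a connection stays checked out longer than the threshold. A small sketch; 2 seconds is an illustrative value (HikariCP expects at least 2000 ms, or 0 to disable):

// Log a warning when a connection is held for more than 2 seconds without being
// returned to the pool, a strong hint that non-DB work runs inside the connection scope.
config.setLeakDetectionThreshold(2000)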

📘 What You’ve Learned

  • HikariCP throws timeout exceptions when a connection isn’t returned quickly enough.
  • It’s not always the DB query — slow non-DB logic can block the pool.
  • Holding a connection for long non-DB logic is a common anti-pattern.
  • You should return the connection ASAP, and process data later.

💬 Have You Faced This?

Have you seen this error before?
What helped you fix it? Let me know in the comments 👇

📘 I write about PostgreSQL, DevOps, backend engineering, and real-world performance tuning.

🔗 Find more of my work, connect on LinkedIn, or explore upcoming content: all-in-one

