How provisioned concurrency on Lambda overwhelmed our RDS connection pool — and what we did to fix it fast.
When serverless scales fast and your database can't keep up: a real-world debug story.
During an early-access product rollout, we enabled provisioned concurrency on an AWS Lambda function to reduce cold starts and ensure instant responsiveness. We set it to 100, assuming readiness for a surge of traffic.
Almost immediately, our RDS PostgreSQL database began throwing connection errors. Lambda invocations were hitting the database's connection limit, causing retries, latency spikes, and user-facing failures.
It turned out the RDS instance was capped at 50 max connections (`max_connections`), while Lambda, with provisioned concurrency of 100, was creating database clients at scale. Each Lambda container held its own connection, and RDS was simply overwhelmed.
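To see why each container consumes a connection even when you follow best practice, here is a minimal sketch (the `connect_to_db` stub and handler names are ours, standing in for a real client like psycopg2): the client is created once at module scope, so it is reused across warm invocations, but every provisioned container still opens its own connection.

```python
# Illustrative stub: counts how many times a "connection" is opened.
CONNECT_CALLS = 0

def connect_to_db():
    """Stand-in for a real driver call such as psycopg2.connect()."""
    global CONNECT_CALLS
    CONNECT_CALLS += 1
    return {"conn_id": CONNECT_CALLS}

# Module scope: runs once per container (at cold start), not per invocation.
db = connect_to_db()

def handler(event, context):
    # Warm invocations reuse the container-scoped connection.
    return {"statusCode": 200, "conn_id": db["conn_id"]}

# Simulate 3 warm invocations in one container: still only 1 connection.
for _ in range(3):
    handler({}, None)
print(CONNECT_CALLS)  # 1
```

The catch: this is one connection *per container*. With provisioned concurrency of 100, that is 100 containers and 100 connections, double our 50-connection cap.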
The fix: we introduced RDS Proxy to manage connection pooling effectively.

This issue was a classic case of a mismatch between elastically scaling compute and fixed backend infrastructure. AWS gives you the power to scale Lambda instantly, but your database won't magically scale with it. Always align provisioned concurrency with your database's capabilities, or introduce a managed proxy layer.
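A back-of-the-envelope check makes the alignment concrete. The helper below is our own, not an AWS API: without a proxy, each concurrent container needs one connection, so provisioned concurrency must stay under `max_connections` with some headroom reserved for admin and migration sessions.

```python
def safe_provisioned_concurrency(max_connections: int,
                                 reserved_for_admin: int = 5) -> int:
    """Rough upper bound on Lambda concurrency when functions
    connect directly to RDS (no proxy in between)."""
    return max(0, max_connections - reserved_for_admin)

# Our incident: max_connections = 50, provisioned concurrency = 100.
limit = safe_provisioned_concurrency(50)
print(limit)         # 45
print(100 <= limit)  # False: lower concurrency or put RDS Proxy in front
```

When the check fails, you have two levers: cap provisioned concurrency at the limit, or add RDS Proxy so many Lambda containers multiplex over a small, managed pool of database connections.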
Bonus Tip: If you're using frameworks like Spring Boot or Sequelize, be mindful of default connection settings — they can sneak in per-request pool creation.
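The per-request pool pitfall looks like this in miniature (the `Pool` class below is a hypothetical stub, not a real Sequelize or Spring Boot API): a pool built inside the handler multiplies connections on every request, while a single module-scope pool keeps the per-container footprint fixed.

```python
POOLS_CREATED = 0

class Pool:
    """Stub connection pool; a real one would open `size` connections."""
    def __init__(self, size: int = 5):
        global POOLS_CREATED
        POOLS_CREATED += 1
        self.size = size

# Anti-pattern: a fresh pool (and its connections) on every request.
def handler_bad(event):
    pool = Pool()
    return pool.size

# Fix: one pool per container, created once at module import time.
# On Lambda, a pool size of 1 is usually enough per container.
SHARED_POOL = Pool(size=1)

def handler_good(event):
    return SHARED_POOL.size

for _ in range(10):
    handler_bad({})
print(POOLS_CREATED)  # 11: ten per-request pools plus the one shared pool
```

Whatever framework you use, audit where the pool is constructed and what its default size is; defaults tuned for long-lived servers can be badly wrong inside short-lived Lambda containers.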
Need help tuning your Lambda performance and backend architecture? Let's chat.