
PostgreSQL at Scale: 7 Query Optimization Tricks for Fintech Apps


Fintech databases are brutal. A single user action — "send money" — can touch wallets, transactions, ledger entries, fee records, and audit logs in one request. Here are seven PostgreSQL patterns that have actually moved the needle in production.

1. Partial indexes on status columns

Transaction tables commonly filter by status: pending, completed, failed. A full index on status covers all rows — but 95% of rows will eventually be completed. A partial index only on pending rows is dramatically smaller and faster for the dashboard queries that actually matter.

-- index only covers the hot rows
CREATE INDEX idx_txn_pending
ON transactions (created_at)
WHERE status = 'pending';

2. Use CTEs for readable multi-step logic

Fintech queries often need to filter users, aggregate their balances, then rank them. Nesting subqueries makes this unreadable and hard to debug. CTEs (WITH clauses) let you name each step. Since PostgreSQL 12, the planner inlines non-recursive CTEs and optimizes them just as well as subqueries — and your future self can actually understand the query.
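A minimal sketch of the filter-aggregate-rank shape (the users and wallets tables and their columns here are illustrative, not from any specific schema):

-- step 1: filter, step 2: aggregate, step 3: rank
WITH active_users AS (
    SELECT id FROM users WHERE status = 'active'
),
user_balances AS (
    SELECT w.user_id, SUM(w.balance) AS total_balance
    FROM wallets w
    JOIN active_users a ON a.id = w.user_id
    GROUP BY w.user_id
)
SELECT user_id,
       total_balance,
       RANK() OVER (ORDER BY total_balance DESC) AS balance_rank
FROM user_balances;

Each intermediate result has a name, so you can test a step in isolation by selecting from it directly.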

3. Connection pooling with PgBouncer

PostgreSQL connections are expensive — the server forks a dedicated backend process for each one. A fintech app under load will open hundreds of connections per second. Without a pooler, you hit PostgreSQL's connection limit and everything queues. PgBouncer in transaction mode sits between your app and Postgres, multiplexing hundreds of app connections onto a small pool of real database connections. This alone can cut connection overhead by 60-70%.
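A minimal pgbouncer.ini sketch showing transaction-mode pooling — the database name, host, and pool sizes are placeholder values to tune for your workload:

; route clients connecting to "appdb" to the real Postgres instance
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; release the server connection back to the pool at transaction end
pool_mode = transaction
; up to 1000 client connections share 20 real backend connections
default_pool_size = 20
max_client_conn = 1000

Note that transaction mode breaks session-level features (prepared statements, SET, advisory locks held across transactions), so audit your app's usage before switching.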

4. Covering indexes eliminate heap fetches

A regular B-tree index stores the indexed columns and a pointer to the heap row. When Postgres uses the index, it still has to fetch the heap row to get the columns you're selecting. A covering index (using INCLUDE) stores extra columns alongside the index — the query can be answered entirely from the index, no heap fetch needed. For high-frequency read queries like "get user balance", this is a significant win.

CREATE INDEX idx_wallets_user
ON wallets (user_id)
INCLUDE (balance, currency, updated_at);

5. Partition transaction tables by month

Transaction tables grow forever. A query for "this month's transactions" shouldn't scan two years of data. Range partitioning by created_at (monthly partitions) means Postgres only touches the relevant partition — partition pruning at plan and execution time. Old partitions can be archived or dropped without affecting the live table.
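A sketch of the declarative partitioning DDL — the column set here is a simplified stand-in for a real transactions schema:

-- parent table holds no rows itself; it routes by created_at
CREATE TABLE transactions (
    id          bigint GENERATED ALWAYS AS IDENTITY,
    created_at  timestamptz NOT NULL,
    status      text NOT NULL,
    amount      numeric NOT NULL
) PARTITION BY RANGE (created_at);

-- one partition per month; upper bound is exclusive
CREATE TABLE transactions_2025_01 PARTITION OF transactions
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

Dropping an old month is then a metadata-only DROP TABLE on that partition, not a multi-hour DELETE.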

6. SKIP LOCKED for job queues

If you're using PostgreSQL as a job queue (which is reasonable at moderate scale), the classic pattern of SELECT ... FOR UPDATE blocks workers when a row is already locked. Use FOR UPDATE SKIP LOCKED instead — workers skip locked rows and grab the next available one. No blocking, no queue buildup under load.
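A sketch of the claim query, assuming a hypothetical jobs table with status and created_at columns:

-- atomically claim one available job; locked rows are skipped, not waited on
UPDATE jobs
SET status = 'processing', claimed_at = now()
WHERE id = (
    SELECT id
    FROM jobs
    WHERE status = 'queued'
    ORDER BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;

Each worker runs this in its own transaction; concurrent workers never fight over the same row, they just pick the next unlocked one.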

7. EXPLAIN ANALYZE before you optimize

This one sounds obvious but it's consistently skipped. Before adding any index, run EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT) on your slow query. Look at the actual row count vs. estimated row count. Large discrepancies usually mean stale statistics — run ANALYZE first. Also look for sequential scans on large tables and hash joins that spill to disk. These are the two things that will crush you under load.
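As a concrete sketch (the query itself is illustrative):

-- refresh statistics first if estimates look stale
ANALYZE transactions;

EXPLAIN (ANALYZE, BUFFERS)
SELECT id, amount
FROM transactions
WHERE status = 'pending'
  AND created_at >= now() - interval '1 day';

In the output, compare "rows=" (estimated) against "actual rows=" on each node, and watch for Seq Scan nodes on large tables and sort or hash nodes reporting external disk usage.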

A slow query in development often becomes a timeout in production because the planner makes different choices with real data distributions. Always test with a production-scale dataset before calling an optimization done.

The meta-lesson

Most database performance problems aren't solved by switching to a different database. They're solved by understanding what your queries are actually doing. PostgreSQL is extraordinarily capable — most fintech teams are running it well below its ceiling, but hitting their own query design limits.

These patterns are from real production systems across Syarpa, TamPay, and Fobework. Your mileage will vary — always benchmark against your own workload. Reach out at oojoseph67@gmail.com if you want to dig into a specific query.