Hacker News Viewer

Keeping a Postgres Queue Healthy

by tanelpoder on 4/11/2026, 4:24:08 PM

https://planetscale.com/blog/keeping-a-postgres-queue-healthy

Comments

by: nine_k

In short:

* Postgres still has the same problem with the vacuum horizon: a long-running query can block vacuuming of a quick-churning table. (The author uses a benchmark from 2015, when the problem was already well understood.)

* Stock Postgres still has no tools good enough against it.

* The author's company's special version of Postgres does have such tools; a few polite promotions of it are strewn across the article.

My conclusion: it's still not wise to mix long (OLAP-style) loads and quick-churning (queue-style) loads on the same Postgres instance. Maybe running 0MQ or even RMQ would be an easier solution, depending on the requirements of the queue.
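
The horizon-pinning behavior described above can be observed on stock Postgres; a minimal diagnostic sketch (using the standard `pg_stat_activity` view — this query is an illustration, not something from the article):

```sql
-- Backends holding old snapshots pin the MVCC horizon and keep
-- vacuum from reclaiming dead tuples in quick-churning tables.
SELECT pid,
       state,
       now() - xact_start AS xact_age,
       age(backend_xmin)  AS snapshot_age_in_xids,
       left(query, 60)    AS query
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC;
```

Once the backend holding the oldest snapshot finishes (or is terminated), the horizon can advance and vacuum can catch up.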

4/11/2026, 8:21:42 PM


by: cataflutter

Decent article, but some remarks:

1) It seems these two statements conflict with each other:

> The oldest such transaction sets the cutoff—referred to as the "MVCC horizon." Until that transaction completes, every dead tuple *newer than its snapshot* is retained.

and

> For example, imagine three analytics queries, each running for 40 seconds, staggered 20 seconds apart. No individual query would trigger a timeout for running too long. But because one is always active, the horizon never advances, and the effect on vacuum is the same as one transaction that never ends.

If the three analytics *transactions* (it's transactions that matter, not queries, although there is some subtlety around deferred transactions not acquiring a snapshot until the first query) are started at different times, they will have staggered snapshots, so once the first completes, the vacuum horizon should be able to advance.

2) Although this query:

```sql
SELECT * FROM jobs WHERE status = 'pending' ORDER BY run_at LIMIT 1 FOR UPDATE SKIP LOCKED;
```

having to consider dead tuples is a genuine concern and performance problem, it can also be mitigated by adding a monotonically increasing column and a `WHERE column < ?` clause, provided you have also added an index to make that pagination efficient. This way you don't need to consider dead tuples, and they 'only' waste space while waiting to be vacuumed, rather than also bogging down read performance.

There is a little subtlety around how you guarantee that the column is monotonically increasing, given concurrent writers, but the answer to that depends on what tricks you can fit into your application.

3) I almost want to say that the one-line summary is 'Don't combine (very) long-running transactions with (very) high transaction rates in Postgres.'

(Is this a fair representation?)
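
The mitigation in point 2 can be sketched as follows (the `id` column, the `$1` bound, and the partial index are illustrative assumptions, not from the article or the comment):

```sql
-- Assumes id is monotonically increasing (e.g. a bigserial) and that
-- the application maintains a bound $1 for the WHERE id < ? clause.
CREATE INDEX jobs_pending_run_at_idx
    ON jobs (run_at, id) WHERE status = 'pending';

SELECT * FROM jobs
WHERE status = 'pending' AND id < $1
ORDER BY run_at
LIMIT 1
FOR UPDATE SKIP LOCKED;
```

How the bound is maintained under concurrent writers is exactly the subtlety the comment mentions, and it is application-specific.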

4/11/2026, 7:30:21 PM


by: simeonGriggs

Yo! Author here, I’ll be around if anyone’s got questions!

4/11/2026, 6:47:33 PM


by: richwater

It would be nice if this ad at least explained a little bit of the technical side of the solution.

4/11/2026, 6:30:26 PM


by: sebmellen

Postgres can do so much. I see people choose Kafka and SQS for things that Graphile Worker could do all day long.

4/11/2026, 7:13:53 PM


by: sp1982

The problem is way worse if you update rows; if you stick with inserts and deletes, you can get quite far.
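
One way to stay in the insert/delete regime is to claim and remove a job in a single statement; a common sketch of this pattern (not from the article) uses `DELETE ... RETURNING`:

```sql
-- Each consumed job produces exactly one dead tuple (from the delete),
-- instead of a chain of dead versions from repeated status updates.
DELETE FROM jobs
WHERE id = (
    SELECT id FROM jobs
    ORDER BY run_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING *;
```

The trade-off is that a crashed worker loses its job unless the delete happens in a transaction that commits only after the work is done.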

4/11/2026, 10:20:49 PM

