Hacker News

I'd be curious to hear what the general community thinks of putting a job queue in a database and if there are a lot of other QC/Que users out there.

FWIW, the transactional properties of a Postgres-backed queue were convenient enough that we relied on them for a long time (and still do), despite a few caveats (e.g. the degraded performance outlined in the post). More recently, though, there's been a bit of a shift towards Sidekiq, probably because it's generally very problem-free and has some pretty nice monitoring tools.
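For anyone unfamiliar with why transactional enqueuing matters: because the job rows live in the same database as the application data, an enqueue can share a transaction with the business write, so a rollback leaves neither behind. A minimal sketch of the idea using Python's built-in sqlite3 standing in for Postgres (Que/QC do this against Postgres; the table and column names here are made up):

```python
import sqlite3

# In-memory database standing in for the app's Postgres instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, job_class TEXT, args TEXT)")
conn.commit()

# The business write and the job enqueue share one transaction:
# if anything fails before commit, no job is enqueued.
try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("INSERT INTO accounts (email) VALUES (?)",
                     ("a@example.com",))
        conn.execute("INSERT INTO jobs (job_class, args) VALUES (?, ?)",
                     ("WelcomeEmail", '["a@example.com"]'))
        raise RuntimeError("something went wrong mid-transaction")
except RuntimeError:
    pass

# Neither the account nor the job exists -- no orphaned work,
# and no job referencing a row that was never committed.
print(conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0])  # 0
print(conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0])      # 0
```

With a separate broker (Redis, RabbitMQ) you'd instead need to handle the window where the job is enqueued but the transaction later rolls back, or vice versa.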

(Disclaimer: I authored this article.)



Great article. We've just moved a large portion of our async jobs over from Resque to Que because of the huge benefits you get from transactional enqueuing and processing. Performance seems good so far, and if it really becomes an issue, running two queuing systems side by side (one transactional, one high-throughput) seems viable.

We're super cautious about long-running transactions anyway, as they cause a load of other issues (e.g. http://www.databasesoup.com/2013/11/alter-table-and-downtime... - full blog post coming soon!)


We are using Postgres as our queue backing store. I tried switching to Sidekiq but ran into issues (read here https://github.com/mperham/sidekiq/pull/624). Fortunately our job throughput is small enough to not hit any scaling issues with Postgres, so I stuck with that because of my confidence and experience w/Postgres over the years. The issues I ran into on Sidekiq just made me skeptical of their architecture/code maturity, though that was several years ago and it may be much improved by now.

We use JQJobs (which we authored) to manage queueing and it's architected such that it could be ported to Redis or some other better backing store, or potentially even to QC/Que, which I wasn't aware of until your article (so thanks for that!).


Ah, nice, thank you!

> Fortunately our job throughput is small enough to not hit any scaling issues with Postgres, so I stuck with that because of my confidence and experience w/Postgres over the years.

I think we're in a pretty similar situation. For what it's worth, I think that a queue in PG can scale up about as well as Postgres can as long as you keep an eye on the whole system (watch out for long-lived transactions and the like).
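The core dequeue loop is simple enough to sketch, too. Que itself uses Postgres advisory locks so workers can claim jobs without blocking each other; the version below is a simplified lock-column variant on sqlite3 (invented schema), just to show the shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE jobs (
    id INTEGER PRIMARY KEY,
    job_class TEXT,
    locked INTEGER DEFAULT 0)""")
conn.executemany("INSERT INTO jobs (job_class) VALUES (?)",
                 [("SendEmail",), ("ResizeImage",)])
conn.commit()

def dequeue(conn):
    """Claim the oldest unlocked job; return its row, or None if empty."""
    with conn:
        row = conn.execute(
            "SELECT id, job_class FROM jobs WHERE locked = 0 "
            "ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        # Mark it claimed in the same transaction. In Postgres this
        # two-step needs SELECT ... FOR UPDATE or an advisory lock to be
        # safe under concurrent workers; sqlite serializes writers anyway.
        conn.execute("UPDATE jobs SET locked = 1 WHERE id = ?", (row[0],))
        return row

job = dequeue(conn)
print(job)  # (1, 'SendEmail')

# Once the worker finishes, delete the row and the next job is claimable.
conn.execute("DELETE FROM jobs WHERE id = ?", (job[0],))
conn.commit()
print(dequeue(conn))  # (2, 'ResizeImage')
```

The "locked" flag here also hints at the caveat above: a worker that dies mid-job, or a transaction held open too long, leaves claimed-but-unfinished rows that something has to notice and recover.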


I use RQ these days. It's a really simple Python / redis solution. We had been using celery / rabbitmq but it's really bad with small numbers of long-running jobs (each worker will take a task and reserve another even though it can't start on it yet). For us that was a killer since we had jobs that could take 10 minutes to complete.
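(For what it's worth, that reservation behavior is Celery's prefetching, and it's tunable. Something like this in the config — these are the Celery 3.x setting names — makes each worker reserve only the task it's actually running, which is the usual recommendation for long-running jobs:)

```python
# celeryconfig.py -- tame prefetching so a busy worker doesn't
# reserve extra tasks it can't start yet (Celery 3.x setting names).
CELERYD_PREFETCH_MULTIPLIER = 1  # fetch at most one message per worker process
CELERY_ACKS_LATE = True          # ack after the task finishes, not on receipt
```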

RQ has been good to us so far. There's a simple dashboard for it that works well enough. After messing around trying to find my data in rabbitmq it was a real relief to be able to query a simple set of redis keys.


You may be interested in my comment here: https://news.ycombinator.com/item?id=9578787.



