Hacker News | fud101's comments

Isn't Dokku a worthy mention anymore?

For better or worse, folks _really_ like a free UI. Dokku doesn't offer that (Dokku Pro is paid). With AI increasingly making that sort of thing easier to build - and Dokku being very easy to integrate via MCP but also good for building tools on top of - I'm not actually sure how to proceed with Dokku Pro.

Whether it's a worthy mention or not, I'm not sure. I'd like to think it's worthy :)

Disclaimer: I am the maintainer.


This is some tired fanfiction no one asked for. But please, keep ringing the bell of victimhood, it's surely going to be well received this time.

> 50% of consumer spending is by 10% of the population

This is startling.


These sound wild in terms of promise but I never understood them in a practical way.

The canonical example is rewriting a non-transducing chain of collection transformations like

    (->> posts
         (map with-user)
         (filter authorized?)
         (map with-friends)
         (into []))
That creates five collections; this creates two, using transducers:

    (into []
          (comp
            (map with-user)
            (filter authorized?)
            (map with-friends))
          posts)
comp returns a transducer here, and each item within comp is itself a transducer. You can see how the flow reads exactly like the thread-last (->>) version.

map, for example, is called with one argument here, which means it returns a transducer, unlike in the first example where it has a second argument (the coll posts), so it immediately runs over that coll and returns a new one.
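A quick REPL sketch of that arity difference (illustrative, not from the original comment):

```clojure
;; map with one argument returns a transducer (a function awaiting a
;; reducing function); with two arguments it produces a sequence directly.
(def xf (map inc))

(fn? xf)              ;; => true: a transducer is just a function
(map inc [1 2 3])     ;; => (2 3 4), a (lazy) seq produced right away
(into [] xf [1 2 3])  ;; => [2 3 4], the transducer applied via into
```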

The composed transducer returned by comp is passed to into as the second of three arguments. In the three-argument form, into applies the transducer to each item in coll, the third argument. In the two-argument form, as in the first example, it just pours coll into the first argument (also a coll).
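For concreteness, a sketch of into's two forms:

```clojure
;; Two-argument into: pour the source coll into the target coll.
(into [] '(1 2 3))            ;; => [1 2 3]

;; Three-argument into: apply the transducer while pouring.
(into [] (map inc) '(1 2 3))  ;; => [2 3 4]
```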


That does not sound like a good example. The two-argument form of `map` already returns a lazy sequence, and the same goes for `filter`. I thought lazy sequences were already supposed to get rid of the performance problem of materializing the entire collection.

Lazy sequences reduce the size of intermediate collections, but they “chunk”: you get 32 items at a time. Multiply that by however many transformations you have, and by the size of the items.
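A small sketch of that chunking, counting calls with an atom:

```clojure
;; Chunked seqs (e.g. a vector's seq) are realized 32 items at a time,
;; so taking even one element runs the mapped fn over a whole chunk.
(def calls (atom 0))
(first (map (fn [x] (swap! calls inc) x) (vec (range 100))))
@calls ;; => 32, not 1
```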

There are some additional inefficiencies in terms of context capturing at each lazy transformation point. The problem gets worse outside of a tidy immediate set of transformations like you’ll see in any example.

This article gives a good overview of the inefficiencies; search on “thunk” for the tl;dr: https://clojure-goes-fast.com/blog/clojures-deadly-sin/ (I don’t agree with its near-condemnation of the whole lazy pattern; laziness is quite useful. We can complain about it because we have it; it would suck if we didn’t.)


So what’s your coding style in Clojure? Do you eschew lazy sequences as much as possible and only use either non-lazy manipulation functions like mapv or transducers?

I liked using lazy sequences because they're more amenable to breaking larger functions into smaller ones and they decrease coupling. One part of my program uses map, and a distant part of it uses filter on the result of the map. With transducers it seems like the way to do this is eduction, but I avoided it because each time an eduction is consumed it re-evaluates each item, so it sacrifices time for less space, which is not usually what I want.
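A sketch of that re-evaluation cost: every reduce over the same eduction re-runs the transformation.

```clojure
(def calls (atom 0))
(def e (eduction (map (fn [x] (swap! calls inc) x)) [1 2 3]))

(reduce + e) ;; => 6, and the mapped fn ran 3 times
(reduce + e) ;; => 6 again, and it ran 3 more times
@calls       ;; => 6: the work was done twice, trading time for space
```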

I should add that I almost always write my code with lazy sequences first because it’s intuitive. Then maybe one time out of five I re-read my code after it’s done and realize I could refactor it to use transduce. I don’t think I’ve ever used eduction at all.


It's evolving, and I'm using transducers more over time, but I still regularly am in situations where a simple map or mapv is all I need.

Lazy sequences can be a good fit for a lot of use cases. For example, I have some scenarios where I'm selecting from a web page DOM and most of the time I only want the first match, but sometimes I want them all - laziness is great there. Or walking directories in a certain order, where the number of items they contain varies, so I don't know how many I'll need to walk but I know it's usually a small fraction of the total. Laziness is great there.

This can still work with transducers - you can either pass a lazy thing in as the coll to an eager transducing context (maybe with a "take n" along the way) or use the "sequence" transducing context which is lazy.
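Both options, sketched against an infinite source:

```clojure
;; 1. A lazy source into an eager transducing context, bounded by take:
(into [] (comp (map inc) (take 3)) (range)) ;; => [1 2 3]

;; 2. sequence, the lazy transducing context:
(take 3 (sequence (map inc) (range)))       ;; => (1 2 3)
```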

I tend to reach for transducers in places in my code where I'm combining multiple collection transformations, usually with literal map/filter/take/whatever right there in the code. Easy wins.

Recently I've started building more functions that return either transducers or eductions (depending on whether I want to "set" / couple in the base collection, which is what eduction is good for), so I can compose disparate functions at different points in the code and combine them efficiently. I did this in the context of a web pipeline, where I was chaining a request through different functions to come up with a response. Passing an eduction along, I could just nest it inside other eductions when I wanted to add transducers, then realize the whole thing at the end with an into and render.

It took me some time to wrap my head around transducers and when and how to use them, so I'm still figuring it out, but I could see myself ending up using them for most things. Rich Hickey, who created Clojure, has said that if he had thought of them near the beginning, he'd have built the whole language around them. But I don't worry about it too much; I mostly just want to get sh-t done, and I use them when I can see the opportunity to do so.


This, by the way, is why the lead example in the original linked post on clojure.org is very much like mine.

Thanks. So is this not an optimisation the Clojure runtime can do for you automatically? I find the first one simpler to read and understand.

Performance is one of the niceties of transducers, but the real benefits are from better code abstractions.

For example, transducers decouple the collection type from data-processing functions. So you can write (into #{} ...) (a set), (into [] ...) (a vector) or (into {} ...) (a map) — and you don't have to modify the functions that process your data, or convert a collection at the end. The functions don't care about your target data structure, or the source data structure. They only care about what they process.
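A sketch of that decoupling: one pipeline, several target collections.

```clojure
(def xf (comp (filter odd?) (map inc)))

(into []  xf (range 6)) ;; => [2 4 6]
(into #{} xf (range 6)) ;; => #{2 4 6}
;; Swap the target; the processing functions are untouched.
```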

The fact that no intermediate structures have to be created is an additional nicety, not really an optimization.

It is true that for simple examples the (->> ...) version is easier to read and understand. But you get used to the (into) syntax quickly, and you can do so much more this way (composable pipelines built on demand!).


I'd argue for most people performance is the single best reason to use them. Exception is if you regularly use streams/channels and benefit from transforming inside of them.

To take your example, there isn't much abstraction difference between (into #{} (map inc ids)) and (into #{} (map inc) ids), nor is there a flexibility difference. The non-transducer version has the exact same benefit of allowing specification of an arbitrary destination coll and accepting just as wide a range of sources (any seqable). Whether in a transducer or not, inc doesn't care where its argument is coming from or going. The only difference between those two invocations is performance.

Functions already provide a ton of abstraction, and the programmer will rightly ask, "why should I bother with transducers instead of just using functions?" (i.e., other, arbitrary functions not of the particular transducer shape). The answer is usually going to be performance.

For a literal core async pipeline, of course, there is no replacing transducers because they are built to be used there, and there is a big abstraction benefit to being able to just hand in a transducer to the pipeline or chan vs building a function that reads from one channel, transforms, and puts on another channel. I never had the impression these pipelines were widely used, but I'd love to be wrong!
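A minimal sketch of handing a transducer straight to a channel (assumes org.clojure/core.async is on the classpath):

```clojure
(require '[clojure.core.async :as a])

;; The channel itself applies the transform - no hand-written loop that
;; reads from one channel, transforms, and puts on another.
(def ch (a/chan 10 (comp (filter even?) (map inc))))
(a/onto-chan! ch [1 2 3 4])
(a/<!! (a/into [] ch)) ;; => [3 5]
```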


They're not really that interesting. They're "reduce transformers": take a reduction operation, turn it into an object, define a way to convert one reduction operation into another, and you're basically done. 99% of the time they're basically mapcat.

The real thing to learn is how to express things in terms of reduce. Once you've understood that, just take a look at e.g. the map and filter transducers and it should be pretty obvious. But it doesn't work until you've grasped the fundamentals.
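To make that concrete, here's a hand-rolled map transducer, sketched in terms of the reducing function it wraps:

```clojure
(defn my-map [f]
  (fn [rf]                        ;; take one reducing function...
    (fn                           ;; ...return another
      ([] (rf))                   ;; init arity
      ([acc] (rf acc))            ;; completion arity
      ([acc x] (rf acc (f x)))))) ;; step: transform the item, then delegate

(transduce (my-map inc) conj [] [1 2 3]) ;; => [2 3 4]
```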


bruh, why not just post the prompt. no one wants to read this slop


Ya sorry brother, I guess I could have but thought people might find some other gems in there if they're running into any trouble


Directionally correct is the other kind of emdash.


It's my understanding that they predate llms, and internet snivel.


Yeah, going to have to go ahead and disagree with you there boss. The man Hegseth in all his 'no quarter' bravado is only affirming his own mother's claim that he is a piece of shit. respectfully of course, I would not put it past him to kill some kids for a political or terrorism reason (the parents).


Also, it's been a while but remember Trump literally said he wanted to "take out the families" of terrorists (https://www.cnn.com/2015/12/02/politics/donald-trump-terrori...).


Obama and Bush both regularly bombed weddings where a single target was present.

It's a non-sequitur point anyway, these kids weren't families of terrorists.


The terrorist is Hegseth and co.


He is a full-time nauseating AI shill. If you happen to listen to his recent appearance on the Software Engineering Radio podcast, you may just die of cringe. I had my final-straw moment on AI hype during that podcast, and my first "I wish someone would bully that nerd" moment.


congrats for.. selling out? wut


they have bad taste, obviously, otherwise we'd all be using their project


Yes, when the poetry people purposely added a feature to fail CI with a 1/10 chance because they wanted to deprecate a feature, I deprecated poetry.

