Hacker News: hatthew's comments

FWIW, while I find it appealing, I also strongly associate it with "vibe coded webapp of dubious quality," so personally I'm not gonna try to replicate it myself.

For me and probably many other people, local has nothing to do with cost and everything to do with privacy

Genuine question: what are you working on that needs that level of privacy? Outside of NSFW stuff, most API providers aren't doing anything with your prompts.

I would answer that, but it's private :)

I can think of several reasons: corporate policy, personal principles, NSFW stuff, illegal stuff


When I'm running a lot of model training workflows concurrently, I can spend a small but noticeable amount of my day just clicking through links to see current progress and logs of any errors. If an AI would be capable of understanding the relatively complex UI, at least enough to find the right links to click, it could make a status report that takes me 15 seconds to read, and from that alone would save $2000 of labor annually.

I think their numbers of $1.6M and 3.25 years are still probably a massive overestimate, but the order of magnitude seems plausible.


If it works, it should be much more efficient than the current system. Of course that is a massive "if"

To me this seems primarily like an aim test, not a color perception test

It feels like a bit of both - the faster you're able to perceive the differing square, the faster you're able to navigate to it.

Writing is fundamentally the transfer of information from your brain to my brain. If you have 1000 bits of information you want to transfer, you can't give 300 bits of information to an LLM and have it fill in the remaining 700, because it doesn't know what those 700 bits are. If it's able to guess those 700 bits correctly, then they aren't true information, and you really only have 300 bits you want to transfer. You might as well transfer those bits to me directly, rather than having the LLM add on an extra superfluous 700 bits that I then have to filter out.

Remember when LLMs were entering the mainstream, and everyone shared tips on how to superprompt, and one of the hot tips was to tell the AI to write a prompt itself?

Like:

BAD: "Write the specs for a system that does X Y and Z"

GOOD: "Write me a prompt to write the specs for a system that does X Y and Z."

As if the LLM magically knew all about itself and the best tips and tricks to prompt itself, even if it had just come out and there was no scraped information on how to use it yet.


It's not possible to directly transfer 300 bits of information, as that would require a form of telepathy. If it takes 1000 bits of English to distribute something that represents your idea properly, it is useful to have a tool to create those 1000 bits instead of doing it manually, character by character.

Sorry I wasn't very clear. TFA is talking about using LLMs to write things from scratch, not just to clean up grammar for example. In that context, I was talking about bits of semantic information, not bits of English text information. You might have 300 bits of semantic information in your mind, and then you have to expand that to, say, 600 bits of English text to give to the LLM. If you're using the LLM purely to turn bullet points into prose, it'll add more bits of English, but not more bits of (useful) semantic information.

I prompted Claude with "(information theory) difference between semantic information and english text information in the context of using LLMs for writing": https://claude.ai/share/5925245a-0893-46ba-bca9-30627d4facbc

If you're familiar with LLMs and information theory, the LLM isn't giving you any semantic information that you don't already know. If you aren't familiar with LLMs and information theory, you can learn about them from google and/or your own LLM, using that prompt for keywords. In either case, the LLM's response isn't very helpful, because it's not my ideas that you are reading, it's random information pulled from the internet (directly or indirectly), and it's not actually the semantic information I wanted to convey.

This comment is more useful than the LLM's, because every word is chosen to convey the ideas in my mind as clearly as possible in the context of this article and conversation. It's also half as many words to read.


I do think that a list of bullet points versus an article give a different impression to the reader. The same information packaged in different ways can give a different impression and I think that impression is part of the message people want to give. Reading a prompt of bullet points + desired impression will give the reader a different impression than what the LLM would output.

That is very true! Subjectively, I would much prefer to read either bullet points or the impression you want to convey. I don't care what impression the LLM wants to convey.

While true, I'm not sure what your point is? Centuries ago, everyone got up at sunrise to tend to the farm because the farm needed tending at sunrise. These days, organizations like schools and grocery stores need to coordinate with hundreds to thousands of people daily, and "angle of sun in the sky" is nowhere near precise enough. Let alone phone calls and instant messages that travel across many timezones.

I think the point is that we now have technology that could easily adapt to local longitude, while changing the timezone in all of the world's software is super difficult.

I'm merely saying that the mass adoption of clock time for planning daily routines was an industrialist conspiracy to get people working when by natural right they should be sleeping.

We could easily have software presenting time to us as true solar time. We're not limited to gears and levers anymore; our "clocks" now have GPS and can trivially calculate solar time with that. Doing this one-off is easy. The problem is society at large still trying to make plans, like when to start work shifts or school hours, based on a system of time that flies wildly out of sync with Earth's natural rhythms throughout the year.
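The "trivially calculate" part really is trivial. A minimal sketch, assuming you already have a longitude from GPS: each degree of longitude shifts mean solar time by 4 minutes relative to UTC. (True solar time would additionally need the equation-of-time correction, which varies by up to ~16 minutes over the year; the function name here is my own invention.)

```python
from datetime import datetime, timedelta, timezone

def mean_solar_time(utc_now: datetime, longitude_deg: float) -> datetime:
    """Mean solar time at a given longitude.

    Earth rotates 360 degrees in 24 hours, so each degree of
    longitude corresponds to 4 minutes of clock offset from UTC.
    East longitudes are positive, west are negative.
    """
    return utc_now + timedelta(minutes=4 * longitude_deg)

# Example: solar noon-ish in Philadelphia (~75° W) when it's 17:00 UTC.
now = datetime(2024, 6, 1, 17, 0, tzinfo=timezone.utc)
print(mean_solar_time(now, -75.0))  # 5 hours earlier than UTC
```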

Massive self-own for humanity.


By my calculations there will be an average of 500 collisions, no? Each shoebox has an effective width of 2 feet, and with 50k of them that's about 1% density. With 50k in the other direction, and about a 1% collision rate, that's 500 collisions.
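A quick sanity check of that arithmetic. The route length isn't stated in the comment, so it's assumed here; what matters is that it makes 50,000 boxes at 2 ft effective width come out to ~1% occupancy, and then linearity of expectation gives the expected collision count directly:

```python
n_boxes = 50_000          # boxes in each direction
width_ft = 2.0            # effective width per box
route_ft = 10_000_000.0   # assumed route length yielding ~1% occupancy

# Fraction of the route occupied by same-direction boxes.
density = n_boxes * width_ft / route_ft  # 0.01

# Each of the 50k oncoming boxes hits with probability ~density,
# so by linearity of expectation:
expected_collisions = n_boxes * density
print(expected_collisions)
```

This matches the comment's figure of ~500 expected collisions; the earlier intuition of "~1% chance of any collision at all" conflates per-box probability with the total expectation.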


Seems like you're right, I didn't actually run through the statistics and just went with intuition. Yikes


Yeah my intuition was the same; if I had 1 second to make a guess I probably would have said 1% chance of >0 collisions


I agree in general, but as a one-off thing I'd very much enjoy getting a lime with a message saying "this was cheaper than sending a letter myself"


I've started sending paperbacks instead of greeting cards when someone I know needs a get-well-soon card. In stores around here, greeting cards are often $7ish + postage. I can frequently ship a paperback with a gift receipt for $5 total. I include a gift message on the gift receipt, and choose a book I think someone might like to read while they're out of commission.

I guess it's a bit like postal arbitrage, if I accept the cost of greeting cards themselves as part of the cost of the activity.

To the extent that anyone has commented, reactions have been very positive to what amounts to a book recommendation, plus a copy of the book I'm recommending, along with a little note.


Haven't done it in a long time, but years ago I had a similar realization that picture frames were cheaper than cards. So you can frame a little note, either with a picture or just suggest they can reuse it if they like. Buying greeting cards always felt like kind of a waste. Lately our kids' schools have been doing a thing each year where the kids do some art and then you can buy cards (and other things) with it, so we've been using those as they're at least a bit personal. Once that's done, maybe I'll give picture frames again (or paperbacks or cans of tomato soup...)


This is a good idea, but I also want to point out that a regular piece of paper makes a perfectly good greeting card


It absolutely does! I like the addition of the thing to read (in the case of the book) or the thing to look at (usually a funny or interesting picture in the case of the greeting card) but I agree that the purchased thing is not truly necessary.


Would you say a country of 300 people isn't small?

Big, small, etc. are relative terms. There is no way to decide whether or not 300 is small without implicitly saying what it's small relative to. In context, it was obvious that the point being made was "Valve is too small to have direct employees working on things other than the core business."

