Hacker News | winrid's comments

JimTV on YT is great too

Higher res icons probably add a couple hundred megs alone

Well if you have a 512x512 icon uncompressed it is an even megabyte, so that makes the calculations fairly easy.

But raw imagery is one of the few cases where you can legitimately require large amounts of RAM, because area grows with the square of the resolution. You only need that raw form in the limited situations where you are actually manipulating pixel data, though. If you are dealing with images without descending to the pixel level, there's pretty much no reason to keep them all floating around uncompressed: you generally don't have more than a hundred icons onscreen, and once you start fetching from the slowest RAM in your machine, you get pretty decent speed gains from decompressing on the fly rather than moving the uncompressed form around.
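A quick sketch of the arithmetic in the comments above (assuming the usual 4 bytes per pixel for RGBA; the helper name is mine, not from the thread):

```python
def icon_bytes(side_px: int, bytes_per_pixel: int = 4) -> int:
    """Uncompressed size of a square icon: width * height * bytes per pixel."""
    return side_px * side_px * bytes_per_pixel

# A 512x512 RGBA icon is exactly 1 MiB uncompressed.
print(icon_bytes(512) / (1024 * 1024))  # 1.0

# Area squares with the side, so doubling the resolution quadruples the RAM.
print(icon_bytes(1024) / icon_bytes(512))  # 4.0
```

This is why "a couple hundred megs" of higher-res icons is plausible: a few hundred 512x512 icons held uncompressed is already a few hundred MiB.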


Win11 IoT runs great on 4 GB, if that matters :) I have a few machines in the field running it and my Java app, usually still with over a gig free.

Probably because it reads a little like an LLM and also uses an em dash.

I’ve seen a lot of anti-AI people use ChatGPT to write about why the AI bubble is about to pop.

Ironic, isn’t it?


  > I ship code every day. I use Claude, I use GPT, I run llama locally.
An "Anti AI" person...

Not really. Most people won't self host.

The general public will self-host: it'll be built into your next phone or laptop straight out of the box, or maybe available from the App Store.

I agree that that's what it would take, but compute would need to get very cheap for it to be feasible to keep models running locally. That's an awful lot of memory to have just sitting with the model running in it.

True. I was thinking more of power users. Do you think Opus level capabilities will run on your average laptop in a year? I think that's pretty far away if ever.

You can demonstrate "running" the latest open Kimi or GLM model on a top-of-the-line laptop at very low throughput (Kimi at 2 tok/s, which is slow when you account for thinking time) today, courtesy of Flash-MoE with SSD weights offload. That's not Opus-like, it's not an "average" laptop and it's not really usable for non-niche purposes due to the low throughput. But it's impressive in a way, and it does give a nice idea of what might be feasible down the line.

It already does that, too, with the co-author

I would argue that is a net positive: it is valuable to know whether a language model was involved enough to be the one making the commit.

+1, it definitely changes the way I interact, and the amount of suspicion I would have for the code.

My cat also likes to lick plastic, but only plastic containers. No idea why.

I cat-sat once for a friend whose cat was a plastic goblin. I spent the entire two weeks obsessively checking that I hadn't left any plastic out, only to (occasionally) find the cat happily chewing on something I didn't think they would find.

I learned my lesson about the depths of feline creativity, lol, and I was thankful they didn't get into anything that hurt them.


She doesn't even like to chew plastic. She will just sit there and lick plastic bins.

Copying humans, they feel they need more microplastics

QA should always exist; the question is just whether you want to pay for them. Usually the preferred gaslighting is "without QA, devs will do better testing," but it's always about money.

Will we get jGraveyard :) https://killedbygoogle.com

Yeah I mean, now you know how managers feel? :)

spend all day talking to people (except it's LLMs) and not sure if you accomplished anything, but people seem happy

The plus side is that for personal things like this you don't have to use it, of course!


Last year I got two coworkers, my first in terms of coding. At first I reviewed everyone's code requests, but it soon overwhelmed me. We got a third, and there was no way I could oversee everything; since I now had a team of three, management gave me other responsibilities on top.

I have no idea what they code or how they code it. I only go over the specs with them. Everything got quicker, but the quality went down. I had to step in, and we now have e2e tests for everything. Maybe it's too much, but bugs got squashed and caught before we shipped.

So that's a win. Before, I could test everything by hand; now I work more on things like creating a working release cycle and deciding what tools we should use.

With or without AI the situation would have been similar.

I became a manager. We move the needle. I don't really get to code anymore and I don't see much of the code. It's strange.


Article addresses that.

Author says he does enjoy managing people, challenging them, and seeing them grow and accomplish things they couldn’t before.

None of that accompanies “managing” an LLM.

