Well, if you have a 512x512 icon uncompressed, it's an even megabyte (512 × 512 pixels × 4 bytes per RGBA pixel = 1,048,576 bytes), so that makes the calculations fairly easy.
But raw imagery is one of the few cases where you can legitimately require large amounts of RAM, because area scales quadratically with dimensions. You only need that raw state in the limited number of situations where you're actually manipulating pixel data, though. If you're dealing with images without descending to the pixel level, there's pretty much no reason to keep it all floating around in that form. You generally don't have more than a hundred icons onscreen, and once you start fetching data from the slowest RAM in your machine, you get pretty decent speed gains from decompressing on the fly rather than trying to move the uncompressed form around.
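To make the arithmetic concrete, here's a minimal sketch, assuming 4-byte RGBA pixels (which is what makes a 512x512 icon come out to exactly one mebibyte):

```python
# Uncompressed size of RGBA images (assumption: 4 bytes per pixel).

def raw_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    """Raw in-memory footprint of a width x height image."""
    return width * height * bytes_per_pixel

one_icon = raw_bytes(512, 512)
print(one_icon)                     # 1048576 bytes, exactly 1 MiB
print(one_icon == 1024 * 1024)      # True

# A hundred such icons onscreen is still only 100 MiB raw --
# large, but the kind of RAM use that's legitimately justified.
print(100 * one_icon // (1024 * 1024))  # 100 (MiB)
```

This is also why the quadratic scaling bites: doubling both dimensions to 1024x1024 quadruples the footprint to 4 MiB.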
I agree that that's what it would take, but compute would need to get very cheap for it to be feasible to keep models running locally. That's an awful lot of memory to have just sitting with the model running in it.
True. I was thinking more of power users. Do you think Opus-level capabilities will run on your average laptop in a year? I think that's pretty far away, if it ever happens.
You can demonstrate "running" the latest open Kimi or GLM model on a top-of-the-line laptop at very low throughput (Kimi at 2 tok/s, which is slow when you account for thinking time) today, courtesy of Flash-MoE with SSD weights offload. That's not Opus-like, it's not an "average" laptop and it's not really usable for non-niche purposes due to the low throughput. But it's impressive in a way, and it does give a nice idea of what might be feasible down the line.
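A rough way to see why SSD weight offload lands in the low single-digit tok/s range: each generated token has to stream the active expert weights that don't fit in RAM. The numbers below (active parameter count, quantization width, SSD bandwidth) are illustrative assumptions for a back-of-envelope estimate, not specs of Kimi, GLM, or any particular setup:

```python
# Back-of-envelope throughput for MoE inference with weights on SSD.
# All numbers are illustrative assumptions, not measured figures.

active_params = 32e9     # assumed active parameters per token (MoE)
bits_per_weight = 4      # assumed 4-bit quantization
ssd_bandwidth = 7e9      # assumed ~7 GB/s NVMe sequential read

bytes_per_token = active_params * bits_per_weight / 8  # 16 GB per token
naive_tok_per_s = ssd_bandwidth / bytes_per_token

print(f"{naive_tok_per_s:.2f} tok/s")  # ~0.44 with these assumptions
```

Real setups do better than this naive floor because frequently-used and shared experts get cached in RAM, so only a fraction of the active weights actually hit the SSD per token; that's consistent with the ~2 tok/s figure above.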
I cat sat once for a friend who had a plastic goblin. I spent the entire two weeks obsessively checking that I didn't leave any plastic out only to (occasionally) find the cat happily chewing on something that I didn't think they would find.
I learned my lesson about the depths of feline creativity lol, and I was thankful they didn't get into anything that hurt them.
QA should always exist. The question is just whether you want to pay for them. Usually the preferred gaslighting is "without QA, devs will do better testing," but it's always about money.
Last year I got two coworkers, my first in terms of coding. At first I looked at everyone's code requests, but it soon overwhelmed me. We got a third, and there was no way I could oversee everything; since I now had a team of three, management gave me other responsibilities on top.
I have no idea what they code or how they code it. I only go over the specs with them. Everything got quicker, but the quality went down. I had to step in, and we now have e2e tests for everything. Maybe it's too much, but bugs got squashed and caught before we shipped.
So that's a win. Before, I could test everything by hand. These days I work more on things like creating a working release cycle and choosing what tools we should use.
With or without AI the situation would have been similar.
I became a manager. We move the needle. I don't really get to code anymore, and I don't see much of the code. It's strange.