Why should it be the new norm? What we have now is an abnormal situation: massive amounts of investor money being poured into unprofitable bets, which this time has the side effect of eating up hardware components. There are two possible outcomes:
1. Yes, it's the new normal, in which case production capacity will be increased and prices will fall.
2. No, it's not the new normal, the bubble pops, and component prices come crashing down when buyers default, etc.
Option 2 has been the usual outcome of these situations so far. But sure, the question remains how long all of this will take.
I don't know if it'll be a year or two; it's hard to say exactly when the AI bubble will pop, but I feel quite certain it's coming. The AI stuff is great, but most of the money being thrown around to all these different companies is going to be wasted. Investors don't know who the winners and losers will be, just like when people were investing in pets.com instead of amazon.com.
I have been doing this with Claude Code and OpenAI Codex and/or Cline. One of the three takes the first pass (usually Claude Code, sometimes Codex), then I have Cline / Gemini 2.5 do a "code review" and offer suggestions for fixes before it applies them.
The MoE version with 3B active parameters will run significantly faster (tokens/second) on the same hardware, by about an order of magnitude (e.g. ~4 t/s vs ~40 t/s).
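The order-of-magnitude claim follows from CPU inference being memory-bandwidth-bound: per token you mostly pay for reading the active weights. A rough sketch, where the bandwidth figure, the 30B dense comparison size, and the 4-bit quantization are all illustrative assumptions, not measurements:

```python
# CPU decode is roughly bandwidth-bound: tokens/sec ~ memory bandwidth
# divided by bytes of weights read per token (the *active* parameters).
# All concrete numbers here are illustrative assumptions.

def est_tokens_per_sec(active_params_b, bytes_per_param, bandwidth_gb_s):
    """Crude upper bound on decode speed for a bandwidth-bound model."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

BW = 50.0  # GB/s, ballpark for dual-channel DDR4 desktop RAM (assumption)
Q4 = 0.5   # ~0.5 bytes/param at 4-bit quantization

dense_30b = est_tokens_per_sec(30, Q4, BW)  # hypothetical 30B dense model
moe_3b = est_tokens_per_sec(3, Q4, BW)      # MoE with 3B active params

print(f"dense 30B: ~{dense_30b:.1f} t/s")       # ~3.3 t/s
print(f"MoE 3B active: ~{moe_3b:.1f} t/s")      # ~33.3 t/s
```

Only the active parameters are read per token, so a 10x reduction in active size gives roughly a 10x speedup, which lines up with the ~4 vs ~40 t/s figures.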
It supports AMD CPUs because, if I understand correctly, AMD licenses x86 from Intel, so its CPUs share the same bits needed to run OpenVINO as Intel's.
Go look at CPU benchmarks on Phoronix; AMD Ryzen CPUs regularly trounce Intel CPUs in OpenVINO inference.
Yes, offloading some layers to the GPU and VRAM should still help. And 11 GB isn't bad.
If you're on Linux or WSL2, I would run oobabooga with --verbose. Load a GGUF, start with a small number of GPU layers, and creep up, keeping an eye on VRAM usage.
If you're on Windows, you can try LM Studio and fiddle with the layer count while you monitor VRAM usage, though Windows may be doing some weird stuff with shared RAM.
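You can also estimate a starting layer count before creeping up, since a quantized GGUF's layers are all roughly the same size. A sketch, where the per-layer uniformity and the reserve for KV cache/desktop use are simplifying assumptions:

```python
# Rough starting point for "how many GPU layers": assume a quantized
# model's layers are uniform in size and leave headroom in VRAM for the
# KV cache and whatever else (desktop compositor, etc.) uses the card.
# The reserve figure is an assumption; tune it against real usage.

def max_gpu_layers(model_file_gb, n_layers, vram_gb, reserve_gb=1.5):
    """Estimate how many layers fit in VRAM, capped at the model's total."""
    per_layer_gb = model_file_gb / n_layers  # crude uniform-layer estimate
    budget_gb = vram_gb - reserve_gb
    return max(0, min(n_layers, int(budget_gb / per_layer_gb)))

# e.g. a hypothetical ~7.5 GB quantized model with 40 layers on an 11 GB card
print(max_gpu_layers(7.5, 40, 11.0))   # 40 -- the whole model fits
# ...vs a hypothetical ~30 GB model with 60 layers on the same card
print(max_gpu_layers(30.0, 60, 11.0))  # 19 -- partial offload
```

Start a few layers below the estimate and creep up while watching real VRAM usage, since the uniform-layer assumption ignores the embedding/output tensors and context length.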
Would be curious to see the diffs. Specifically, whether there's a complexity tax in offloading that makes the CPU-alone faster; but in my experience with a 3060 and a mobile 3080, offloading what I can makes a big diff.
> Specifically if there's a complexity tax in offloading that makes the CPU-alone faster
Anecdotal, but I played with a bunch of models recently on a machine with a 16 GB AMD GPU, 64 GB of system memory, and a 12-core CPU. I found offloading significantly sped things up for large models, but there was seemingly an inflection point as the models I tested approached the limits of the system, where offloading significantly slowed things down vs just running on the CPU.