So far the only company that is really outspoken about the scale of its vibe coding has been Anthropic. However, their uptime and bug count are atrocious.
FWIW I have never used NVLink, and I’m not sure why people are bringing up “daisy chaining” because as far as I’m aware that is not a thing with modern GPUs at all.
> The mac will just work for models as large as 100B, can go higher with quantized models. And power draw will be 1/5th as much as the 3090 setup.
This setup will work for 100B models as well. And yes, the Mac will draw less power, but the Nvidia machine will be many times faster. So depending on your specific Mac and your specific Nvidia setup, the performance per watt will be in the same ballpark. And higher absolute performance is certainly a nice perk.
> You can certainly daisy chain several 3090's together but it doesn't work seamlessly.
Citation needed; there's no "daisy chaining" in the setup I describe, and low-level libraries like PyTorch, as well as higher-level tools like Ollama, all support multiple GPUs seamlessly.
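To illustrate what "multiple GPUs, no daisy chaining" means in practice: the tooling just shards the model's layers across cards, each GPU holding a contiguous block. This is a toy sketch of that assignment, not actual PyTorch or Ollama code; `split_layers` is a made-up helper.

```python
# Toy illustration of pipeline-style sharding for multi-GPU inference:
# each GPU is assigned a contiguous block of transformer layers.
def split_layers(n_layers: int, n_gpus: int) -> dict[int, list[int]]:
    """Assign each layer index to a GPU, keeping blocks contiguous."""
    per_gpu, extra = divmod(n_layers, n_gpus)
    assignment, start = {}, 0
    for gpu in range(n_gpus):
        count = per_gpu + (1 if gpu < extra else 0)
        assignment[gpu] = list(range(start, start + count))
        start += count
    return assignment

# An 80-layer model across five 24 GB cards: 16 layers per GPU.
print({gpu: len(layers) for gpu, layers in split_layers(80, 5).items()})
```

Activations flow from one card to the next over PCIe during a forward pass; no GPU needs a direct link to any other.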
> I think it's bad form to say "citation needed" when your original claim didn't include citations.
I apologize, but using multiple GPUs for inference (without any sort of "daisy chaining") is something most LLM tooling has supported for a long time.
> Regardless - there's a difference between training and inference.
No one brought up training vs. inference besides you, as far as I know. I was assuming the machine was for inference, because that's what I built mine for. If you want to train models, I know less about that, but I'm fairly sure the tooling supports multiple GPUs there too.
> And pytorch doesn't magically make 5 gpus behave like 1 gpu.
I never said it was magic, I just said it was supported, which it is.
1800 W is the theoretical max on a 15 A circuit, but yes, it's usually under 1600 W. For LLM inference, limiting the TDP to around 225 W per card saves a lot of power for only about a 5% drop in performance.
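For the curious: on Linux the cap is typically set with `nvidia-smi -pl 225` (requires root). The back-of-envelope math, using the stock ~350 W draw of a 3090 and the ~5% throughput loss figure from above (both rough assumptions):

```python
# Five 3090s at stock ~350 W vs capped at 225 W each,
# assuming ~5% throughput loss at the lower power limit.
stock_w, capped_w, n_gpus = 350, 225, 5

watts_saved = n_gpus * (stock_w - capped_w)            # total savings under load
perf_per_watt_gain = (0.95 / capped_w) / (1.0 / stock_w)

print(watts_saved, round(perf_per_watt_gain, 2))       # 625 W saved, ~1.48x perf/W
```

So the capped setup gets roughly 1.5x the tokens per watt, which is why the perf-per-watt gap to a Mac narrows more than the raw power numbers suggest.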
For a country that prides itself on CapItAlIsM, U.S. healthcare is the farthest thing from it.
- Doctors and hospitals don't compete on price
- Prices aren't just opaque, they are unknowable
- Shopping around is not possible
- Insurers are incentivized to maximize billing (cost): they pass costs along to employers as higher premiums, and employers pass them along to employees as below-inflation wage increases
Capitalism doesn’t work well for goods with inelastic demand. Every other developed country understands this and has some form of universal system. The only reason we don’t have universal healthcare is basically a series of unlucky flukes.
> The only reason we don’t have universal healthcare is basically unlucky flukes.
You think it's a fluke and not intentional corruption of the system? These companies pay both parties a lot so nobody will ever fix this; that isn't a fluke, that's just plain old corruption.
Voters don’t want universal healthcare. There is some lobbying, but an entire party’s voter base is composed of people who only care about taxes and ensuring that those below them do not benefit from wealth redistribution.
This is why even the meager wealth redistribution we got (which was really young-to-old, not wealthy-to-poor) came about thanks to a fluke six-month window in 2009 when one party held 60 Senate votes. Around 58 of those votes supported a taxpayer-funded option, but 42 did not, so it didn't make it into the final bill.
I think this is a cool tech demo. But the commonality I see in all of these "let the agent run free" harnesses is that the output is never something I would want to use/watch/play.
I think minimizing the amount of human effort in the loop is the wrong optimization, and it's the reason we end up with "slop".
It's the dream of a lot of people to have a magic box that makes you things you can sell, or enjoy for personal leisure. But LLMs are not the magic box. And there may not ever be a magic box. The sooner we accept that the magic box isn't in the room with us, the sooner we can start getting real utility out of LLMs.
TLDR: Human taste is more important than building things for the sake of building them.
Maybe OP could try an angle where at various points, the process presents the user with 2-6 options, and they choose their favourite. With a bit of intentional chaos in there, the user and tool could potentially discover interesting game concepts and eventually build them as prototypes.
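Something like this loop, as a pure sketch; `gen` and `pick` are placeholders for whatever the agent actually generates and for the human choosing among options:

```python
import random

# Sketch of the "pick your favourite" loop suggested above.
# `gen` stands in for the agent's generator, `pick` for the human choosing
# among the options; the rng injects the "intentional chaos".
def evolve(seed_concept, gen, pick, rounds=3, fanout=4, rng=None):
    rng = rng or random.Random(0)
    concept = seed_concept
    for _ in range(rounds):
        options = [gen(concept, rng) for _ in range(fanout)]  # 2-6 variants
        concept = pick(options)  # human taste steers the search
    return concept
```

The point is just that human selection happens every round, so taste stays in the loop instead of being optimized away.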
Are you implying that landlords are naturally incentivized to build homes? Because in most circumstances, the exact opposite is true. In the U.S., the government has a number of programs that offer landlords vouchers in order to encourage them to build out more homes.
I was not conflating the two - I literally meant that there are incentives in place to encourage landlords to develop new homes. I am not referring to groups whose primary interest is to develop and sell - but to develop and own.
I know a lot of landlords think they are a persecuted class that is providing a necessary service - but that largely isn't true.
Broadly, you can imagine two scenarios for simplicity's sake.
1) Housing supply is abundant. Landlords have to compete on service (superior maintenance, better units, better locations, etc.). Renting a home behaves like any other service industry that we've come to know and love.
2) Housing supply is constrained. This is the situation plaguing much of the modern world. Land is limited. Landlords earn a higher IRR from jacking up rents than they do from buying additional units. They profit from controlling access to a scarce resource rather than from providing anything of value.
"You're absolutely right" is a dead LLM giveaway. It's just not something that people use in every day English, especially on the Internet where no one ever admits they're wrong lol
By that I mean - the people claiming hyper-productivity from their GasTown setup never have actual products to demo.