rvz's comments | Hacker News

Define specifically who the “terrorist” is in this war.

All this engineering and they still have these basic memory leaks with just a few tabs open.

Makes me wonder whether they do performance tests at all.


> At least they're throwing consumers a bone via the ARK deal. It's crazy how little AI exposure is available to anyone who isn't already wealthy and/or connected.

It is deliberate. Period.

It has always been known that the money is made in private markets and pre-IPO companies, and that retail is the final exit for insiders and early investors.

Retail is not allowed in early (because that would defeat the point of being an insider), so this "exposure" only arrives near the top.


There are ways now for retail to get into these companies: check out Hiive or EquityZen... just beware of massive dilution.

Who are "these" companies? Did retail get into Google, Facebook, Amazon, Tesla, etc before the top?

Also, aren't AI businesses losing a lot of money each year? Pretty sure there is some risk involved that is not good for retail.


Anthropic doesn't have anything else other than the Claude models.

But notice that there is not a single mention of DeepSeek, which tells me they are preparing to scare everyone again. That is why Dario continues to scare-monger about local models.

Sometimes you do not need hundreds of billions of dollars for inference when it can be done locally with efficient software, and Google proved that. But where is the money in that? So the flawed belief persists that scaling means buying GPUs indefinitely, which is exactly what Nvidia needs you to believe.

Only a matter of time for local models to reach Opus level. We are 1 or at most 2 years behind that and Anthropic knows that.


> Only a matter of time for local models to reach Opus level. We are 1 or at most 2 years behind that and Anthropic knows that.

Can confirm. Kimi K2.5 is pretty intelligent and most of the time there's no difference between Opus and Kimi.


Local models just make no economic sense since the GPU will idle 99% of the time.

You already have a GPU (at least an iGPU and an NPU on most newer platforms) as part of your computer, so you might as well get some use out of it with local inference. And running inference on a larger model with an undersized GPU will have it idling a lot less than 99% of the time. That still makes a lot of sense for most casual users, who will only rarely need a genuine "Pro" class answer from AI. Doing that locally is way less hassle than paying for a subscription or messing with API spend.
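For a sense of how little is involved, here is a minimal local-inference sketch, assuming llama-cpp-python is installed and a quantized GGUF model has already been downloaded; the model path and prompt below are placeholders, not real files:

    # Minimal sketch (assumption: llama-cpp-python installed, a quantized
    # GGUF model already on disk; the path below is a placeholder).
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/local-model-q4_k_m.gguf",  # hypothetical path
        n_gpu_layers=-1,  # offload as many layers as the (i)GPU can hold
        n_ctx=8192,
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user",
                   "content": "Summarize: local inference vs API spend"}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])

The GPU does sit idle between calls, but it was already paid for as part of the machine.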

False on a team that’s distributed

No mention of "AGI" this time, since we all knew it was a scam. But this is the most damning part of all:

> The OpenAI flywheel is simple. More compute drives more intelligent models. More intelligent models drive better products. Better products drive faster adoption, more revenue and more cashflow.

FTX had a "flywheel" too. It fell off. Being saddled with hundreds of billions in debt makes this situation ten times worse.


I guess "centralizing everything" on GitHub was never a good idea, and I called it 6 years ago. [0]

Looking at this now, you might as well self-host; you would still get better uptime than GitHub.

[0] https://news.ycombinator.com/item?id=22867803


So this issue was open all this time and came back to bite Anthropic.

There is a $2T use case.

Would have believed you if you had said that a day later.

This is AGI.

