None of that means that the current companies will be profitable or that their valuations are anywhere close to justified though. The future could easily be "Open-weight models are moderately useful for some niches, no-name cloud providers charge slightly higher than the cost of electricity to use them at low profit margins".


Dot-com boom/bubble all over again. A whole bunch of the current leaders will go bust. A new generation of companies will take over, actually focused on specific customer problems and growing out of profitable niches.

The technology is useful, for some people, in some situations. It will get more useful for more people in more situations as it improves.

Current valuations are too high (Gartner hype cycle), after they collapse valuations will be too low (again, hype cycle), then it'll settle down and the real work happens.


The existing tech giants will just hoover up all the niche LLM shops once the valuations deflate somewhat.

There's a near-negligible chance that any one of these shops stays truly independent, unless propped up by a state-level actor (China/EU).

You might have some consulting/service companies that will promise to tailor big models to your specific needs, but they will be valued accordingly (nowhere near billions).


Yeah, that's probably true; the same happened after the dot-com bubble burst. From about 2005-15, if you had a vaguely promising idea and a few engineers, you could easily get acqui-hired by a tech giant. The few profitable ones that refused are now middle-sized businesses doing OK (nowhere near billions).

I don't know if the survivors are going to be in consulting - there is some kind of LLM-based product capability, so you could conceivably see a set of companies emerge building LLM-based products. But it'll probably be a bit different, like the mobile app boom was a bit different from the web boom.


"The technology is useful, for some people, in some situations"

The endgame of these AI companies is to create intelligence equal to a human's. Imagine not having to pay a 23k-person workforce - that's a lot of money to be made.


That's been the 'endgame' of technology improvements since the industrial revolution - there are many industries that mechanized, replaced nearly their entire human workforce, and were never terribly profitable. Consider farming - in developed countries, they really did replace like 98% of the workforce with machines. For every farm that did so, so did all of their competitors, and the increased productivity caused the price of their crops to fall. Cheap food for everyone, but no windfall for farmers.

If machines can easily replace all of your workers, that means other people's machines can also replace your workers.


Yeah, the overblown hype is a feature of the hype cycle. The same was true for the web - it was going to replace retail, change the way we work and live, etc. And yes, all of that has happened, but it took 30 years and COVID to make it happen.

LLMs might lead to AGI. Eventually.

Meanwhile, every company that is spruiking that, and betting the business on it happening before the VC funding runs out, is going to fail.


I think it will go in the opposite direction: very massive closed-weight models that are truly miraculous and magical. But that would be sad, because all the prompt pre-processing will prevent you from doing much of what you'd really want to do with such an intelligent machine.

I expect it to eventually be a duopoly, like Android and iOS. At world scale, it might divide us in a way that politics and nationalities never did. Humans will fall into one of two AI tribes.


Except that we've seen that bigger models don't really scale well in accuracy/intelligence - just look at GPT-4.5. Intelligence scales roughly logarithmically with parameter count; the extra parameters are mostly good for baking in more knowledge, so you don't need to RAG everything.
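
Rough illustration: plugging parameter counts into the power-law fit from the Kaplan et al. (2020) scaling-law paper. The constants are their published approximations, so treat this as a back-of-the-envelope sketch, not a prediction for any specific model:

    # Kaplan et al. (2020): loss follows (N_c / N)^alpha_N in parameter count N.
    # Constants are the paper's approximate fits; real models vary.
    ALPHA_N, N_C = 0.076, 8.8e13

    def loss(n_params: float) -> float:
        return (N_C / n_params) ** ALPHA_N

    for n in [1e9, 1e10, 1e11, 1e12]:  # 1B -> 1T parameters
        print(f"{n:.0e} params: loss ~ {loss(n):.2f}")
    # Each 10x in parameters only cuts loss by ~16% - diminishing returns.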

Additionally, you can pair a reasoning model's thinking with a non-reasoning model to improve its output, so I wouldn't be surprised if the common pattern became routing hard queries to a reasoning model to solve at a high level, then routing the solution plan to a smaller on-device model for faster inference.
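
A minimal sketch of that routing pattern, assuming an OpenAI-compatible endpoint; the model names, the port, and the is_hard heuristic are all made up for illustration:

    # Hypothetical router: reasoning model plans, small model executes.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    def is_hard(query: str) -> bool:
        # Toy heuristic; a real router would use a trained classifier.
        return len(query) > 200 or "prove" in query.lower()

    def answer(query: str) -> str:
        prompt = query
        if is_hard(query):
            # The reasoning model produces a high-level solution plan...
            plan = client.chat.completions.create(
                model="reasoning-large",  # hypothetical name
                messages=[{"role": "user",
                           "content": f"Outline a solution plan: {query}"}],
            ).choices[0].message.content
            # ...which the small, fast model then executes.
            prompt = f"Follow this plan.\nPlan: {plan}\nQuestion: {query}"
        return client.chat.completions.create(
            model="small-on-device",  # hypothetical name
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content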


Exactly. If some company ever does come up with an AI that is truly miraculous and magical the very last thing they'll do is let people like you and me play with it at any price. At best, we'd get some locked down and crippled interface to heavily monitored pre-approved/censored output. My guess is that the miracle isn't going to happen.

If I'm wrong, though, and some digital alchemy finally manages to turn our Facebook comments into a super-intelligence, we'll only have a few years of an increasingly hellish dystopia before the machines do the smart thing and humanity gets what we deserve.


By the time the capital runs out, I suspect we'll be able to get open models at the level of the current frontier, and companies will buy a server ready to run them for internal use at reasonable prices. They'll be useful but a complete commodity.


I know folks now who are selling, basically, RAG on Llamas, "in a box". Seems a bunch of mid-level managers at SMEs are ready to burn budget on hype (to me). Gotta get something deployed during the hype cycle for the quarterly bonus.
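
For context, the core of one of those "RAG in a box" products is genuinely small - something like this toy version, where the model name and endpoint are placeholders rather than any specific product:

    # Toy RAG: embed docs, retrieve by cosine similarity, prompt a local Llama.
    import numpy as np
    from openai import OpenAI
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    llm = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    docs = ["Refund policy: 30 days...", "Shipping: 3-5 business days..."]
    doc_vecs = encoder.encode(docs, normalize_embeddings=True)

    def answer(question: str, k: int = 2) -> str:
        q = encoder.encode([question], normalize_embeddings=True)[0]
        top = np.argsort(doc_vecs @ q)[-k:]  # cosine sim on unit vectors
        context = "\n".join(docs[i] for i in top)
        reply = llm.chat.completions.create(
            model="llama-3",  # whatever model the "box" ships with
            messages=[{"role": "user",
                       "content": f"Answer from this context only:\n{context}\n\nQ: {question}"}],
        )
        return reply.choices[0].message.content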


I think we can already get open-weight frontier-class models today. I've run DeepSeek R1 at home, and it's every bit as good as any of the ChatGPT models I can use at work.
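
(If anyone wants to reproduce that: local runners like Ollama, llama.cpp's server, and vLLM all expose an OpenAI-compatible endpoint, so querying the model is a few lines. The port and model tag below are Ollama's defaults - adjust for your own setup.)

    # Query a locally hosted open-weight model via the OpenAI-compatible API.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1",  # Ollama default port
                    api_key="unused")                      # local servers ignore it

    resp = client.chat.completions.create(
        model="deepseek-r1",  # tag depends on the quantization you pulled
        messages=[{"role": "user", "content": "Explain attention briefly."}],
    )
    print(resp.choices[0].message.content)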


Which companies? Google and Microsoft are only up a little over the past several years, and I doubt much of their valuation is coming from LLM hype. Most of the discussions about x.com say it's worth substantially less than some years ago.

I feel like a lot of people mean that OpenAI is burning through venture capital money. It's debatable, but it's a huge jump to go from that to thinking it's going to crash the stock market (OpenAI isn't even publicly traded).


The "Magnificent Seven" stocks (Apple, Amazon, Alphabet, Meta, Microsoft, Nvidia, and Tesla) were collectively up >60% last year and are now 30% of the entire S&P 500. They are all heavily invested in AI products.


I just checked the first two, Apple and Amazon, and they're trading 28% and 23% higher than they were 3 years ago. Annualized returns from the S&P 500 have been a little over 10%. Some of that comes from dividends, but Apple and Amazon pay out very little in dividends.
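
Annualizing those figures makes the point concrete (numbers from the comment above, not re-verified):

    # Converting 3-year total returns to annualized rates.
    for name, total in [("Apple", 0.28), ("Amazon", 0.23)]:
        annual = (1 + total) ** (1 / 3) - 1
        print(f"{name}: {annual:.1%}/yr")  # ~8.6% and ~7.1%, both under ~10%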

I'm not going to check all of the companies, but at least looking at the first two, I'm not really seeing anything out of the ordinary.


Currently, Nvidia enjoys a ton of the value capture from the LLM hype. But that's a weird state of affairs and once LLM deployments are less dependent on Nvidia hardware, the value capture will likely move to software companies. Or the LLM hype will reduce to the point that there isn't a ton of value to capture here anymore. This tech may just get commoditized.


Nvidia is trading below its historical P/E from pre-AI times at this point. This is just on confirmed revenue, and its profitability keeps increasing. Nvidia is undervalued right now.


Sure, as long as it keeps selling $130B worth of GPUs each year. Which is entirely predicated on the capital investment in machine learning attracting revenue streams that are still imaginary at this point.


> None of that means that the current companies will be profitable ... The future could easily be "Open-weight models are moderately useful for some niches, no-name cloud providers charge slightly higher than the cost of electricity to use them at low profit margins".

They just need to stay a bit ahead of the open-source releases, which is basically the status quo. The leading AI firms have a lot of accumulated know-how in building and training new models that the average "no-name cloud" vendor doesn't.


> They just need to stay a bit ahead of the open source releases, which is basically the status quo

No, OpenAI alone needs approximately $5B of additional cash each and every year.

I think Claude is useful. But if they charged enough money to be cashflow positive, it's not obvious enough people would think so. Let alone enough money to generate returns to their investors.


The big boys can also get away with stealing all the copyrighted material ever created by human beings.


How far back do you think copyright should extend? Is it perpetual, forever?


How short should it be? Two years? Two months?


I wasn’t the one outraged at the “theft” of ancient works.


Congress keeps extending it and I do not approve of that. I think 50 years is a reasonable length of time.



