Hacker News | lossolo's comments

Honestly, this is the most reasonable comment here, especially coming from someone in Taiwan. I hear similar views when I'm in Asia, which are very different from what I hear back in the West.

> Plenty of researchers do think so and claiming consensus for your position is just false

Can you name a few? Demis Hassabis (DeepMind CEO) claims in a recent interview that LLMs will not get us to AGI, Ilya Sutskever also says something fundamental is missing, and the same obviously goes for LeCun, etc.


Peter Norvig and Blaise Agüera y Arcas (https://www.noemamag.com/artificial-general-intelligence-is-...)

Jared Kaplan (https://www.youtube.com/watch?v=p8Jx4qvDoSo)

Geoffrey Hinton

come to mind. Just saying, I don't think there's a "consensus of 'serious researchers'" here.


Europe is one of the world's largest agricultural producers and exporters. France alone is one of the top grain exporters globally. The EU exports massive quantities of wheat, barley, dairy, and processed food to North Africa, the Middle East, and Sub-Saharan Africa. Countries like Egypt, Algeria, and Nigeria are heavily dependent on European grain imports. An AMOC collapse would devastate growing seasons, slash yields, and potentially make large parts of Northern Europe unsuitable for current agriculture.

And it's not just food. Europe is a major producer and exporter of fertilizers. If European industrial and agricultural output collapses, the ripple effects hit global food supply chains hard. Countries that depend on those imports will face famine.

Then there's the knock-on, hundreds of millions of people in food-insecure regions losing a key supply source, simultaneous disruption to Atlantic weather patterns affecting rainfall in West Africa and the Amazon, potential shifts in monsoon systems affecting South and East Asia. It's a cascading global food security crisis.

> lots of time to adjust

This assumes a gradual slowdown, but paleoclimate evidence suggests AMOC transitions can happen within a decade or even less. The idea that we'd just smoothly adapt to one of the most dramatic climate shifts in human civilization is not supported by what we know about how these systems behave.


Because gathering training data and doing post-training takes time. I agree with OP that this is the obvious next step given context length limitations. Humans work the same way in organizations: you have different people specializing in different things because everyone has a limited "context length".
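The "limited context" analogy can be sketched as a toy orchestration. Everything here is hypothetical illustration, not a real agent framework: the "agents" are plain functions and the context budget is just a character count.

```python
# Toy sketch of a multi-agent workflow motivated by context limits.
# All names and numbers here are hypothetical, for illustration only.

CONTEXT_LIMIT = 100  # max characters a single "agent" can see at once

def summarize_chunk(chunk: str) -> str:
    """Stand-in for a specialized agent; here it just keeps the first sentence."""
    return chunk.split(".")[0].strip() + "."

def orchestrate(document: str) -> str:
    # Split the work so no single agent exceeds its context budget,
    # mirroring how organizations split work among specialists.
    chunks = [document[i:i + CONTEXT_LIMIT]
              for i in range(0, len(document), CONTEXT_LIMIT)]
    partial = [summarize_chunk(c) for c in chunks]
    # A final "aggregator" agent combines the partial results.
    return " ".join(partial)

doc = "Agents split work. " * 20
print(orchestrate(doc))
```

The point of the sketch is only the shape of the workflow: no single worker ever holds the whole input, yet the aggregated result covers all of it.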

They couldn't do it because they weren't fine-tuned for multi-agent workflows, which basically means they were constrained by their context window.

How many agents did they use with the previous Opus? 3?

You've chosen an argument that works against you, because they actually could do that if they were trained to.

Give them the same post-training (recipes/steering) and the same datasets, and voila, they'll be capable of the same thing. What do you think is happening there? Did Anthropic inject magic ponies?


They're very good at reiterating, that's true. The issue is that without the people outside of "most humans" there would be no code and no civilization. We'd still be sitting in trees. That is real intelligence.

Why's that the issue?

"This AI can do 99.99%* of all human endeavours, but without that last 0.01% we'd still be in the trees", doesn't stop that 99.99% getting made redundant by the AI.

* vary as desired for your preference of argument, regarding how competent the AI actually is vs. how few people really show "true intelligence". Personally I think there's a big gap between them: paradigm-shifting inventiveness is necessarily rare, and AI can't fill in all the gaps under it yet. But I am very uncomfortable with how much AI can fill in for.


Here's a potentially more uncomfortable thought: if all the people through history with the potential for "true intelligence" had had a tool that did 99% of everything, do you think they would have had the motivation to learn enough of that 99% to gain insight into the not-yet-discovered?

Language doesn't really matter, it's not how things are mapped in the latent space. It only needs to know how to do it in one language.
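The latent-space point can be illustrated with a toy example. The vectors below are hand-made stand-ins for learned embeddings (purely hypothetical numbers), showing how two surface forms from different languages can land near the same point while an unrelated concept lands elsewhere.

```python
import math

# Hypothetical embeddings: the English and Polish words for "dog" sit near
# the same point in latent space; an unrelated concept sits far away.
embeddings = {
    "dog_en": (0.9, 0.1, 0.0),
    "pies_pl": (0.88, 0.12, 0.01),   # Polish for "dog"
    "car_en": (0.0, 0.2, 0.95),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

print(cosine(embeddings["dog_en"], embeddings["pies_pl"]))  # close to 1.0
print(cosine(embeddings["dog_en"], embeddings["car_en"]))   # much lower
```

In a real multilingual model the geometry is learned rather than hand-picked, but the claim above is the same: the concept, not the surface language, is what the representation encodes.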

Ok you can say this about literally any compiler though. The authors of every compiler have intimate knowledge of other compilers, how is this different?

grace hopper spinning in her grave rn

What's funny is that most of this "progress" is new datasets + post-training shaping the model's behavior (instruction + preference tuning). There is no moat besides that.

"post-training shaping the models behavior" it seems from your wording that you find it not that dramatic. I rather find the fact that RL on novel environments providing steady improvements after base-model an incredibly bullish signal on future AI improvements. I also believe that the capability increase are transferring to other domains (or at least covers enough domains) that it represents a real rise in intelligence in the human sense (when measured in capabilities - not necessarily innate learning ability)

What evidence do you base your opinions on capability transfer on?

> is new datasets + post-training shaping the model's behavior (instruction + preference tuning). There is no moat besides that.

sure, but acquiring/generating/creating/curating so much high-quality data is still a significant moat.


>There is no moat besides that.

Compute.

Google didn't announce $185 billion in capex to do cataloguing and flash cards.


Google didn't buy 30% of Anthropic to starve them of compute

Probably why it's selling them TPUs.

At first I tried reading it in English, I read "Ale był.." and then nothing made sense anymore haha

Especially when some of the responses are in English :D

> Sometimes the user forgets to check, and it will result in using Allegro's infrastructure even if the user didn't want it.

Terribly annoying, it happened to me too.

