> it will continue to improve, but it won’t “go recursive” or whatever the claim is. It’s always been recursive.
I suspect "going recursive" often colloquially means that AI systems achieve their exponential growth without human software engineers in the mix. This is a moment whose sudden apparent nearness does justify some of the ramping rhetoric, in my opinion.
I mean, at this point, for that to happen it definitely isn't a matter of intelligence (the model can fix its errors later and learn from them); it's only a matter of memory and a proper harness. Once memory is solved for good, recursive self-improvement is inevitable.
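To make "memory plus a proper harness" concrete, here's a minimal sketch of the loop I mean. Everything in it is hypothetical (the `model.propose_patch` call, the JSONL memory store, the `run_tests` hook); the point is only that the loop needs durable memory across attempts, not more raw intelligence:

```python
# Hypothetical sketch of a self-improvement harness: the model proposes a
# change, the harness tests it, and the outcome is written to a persistent
# memory that the next iteration can read. None of these interfaces are
# real APIs; they only illustrate the shape of the argument.

import json
from pathlib import Path

MEMORY = Path("memory.jsonl")  # persistent store that survives across sessions

def recall() -> list[dict]:
    if not MEMORY.exists():
        return []
    return [json.loads(line) for line in MEMORY.read_text().splitlines()]

def remember(entry: dict) -> None:
    with MEMORY.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def improve(model, run_tests) -> None:
    past = recall()                            # learn from earlier attempts
    patch = model.propose_patch(history=past)  # hypothetical model call
    ok = run_tests(patch)                      # the harness judges the attempt
    remember({"patch": patch, "passed": ok})   # errors become lessons
```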
Once all the problems are solved, we will be there. Sounds a lot like Zeno's paradox: we might be closer than ever, but still as far from the goal as ever.
It's arguable that businesses are subject to the same morality-inducing processes that humans are. For example, as a human (with a soul?), what is at risk when we do something immoral? At the highest level, I see it as a reputational cost. Morality could be viewed from the perspective that it increases predictability/coherence in society (generates less heat).
> And, no, AI won't solve it; unfortunately, it only makes it worse.
A conclusive argument for this still seems out of reach. AI does solve some problems, and it's not exactly clear which problems AI "only makes worse". It's not clear how much energy all of our AI systems will use, and while it's tempting to outright believe they'll simply use more and more, even that's not yet clear based on arguments presented.
> It's not clear how much energy all of our AI systems will use, and while it's tempting to outright believe they'll simply use more and more, even that's not yet clear based on arguments presented.
For the last 20 years, the power consumption of HPC per cubic inch has increased as systems were miniaturized and density went up. Computing capacity grew faster than power use did, but that doesn't mean we didn't invent more inefficient ways to undo a significant part of that improvement.
It's the same for AI. Inference-oriented hardware like Groq's cards doesn't consume as much power as training-oriented cards, but that doesn't mean total power use will go down. On the contrary: it'll increase exponentially. And considering that AI companies don't care about efficiency yet, we're wasting tons of energy, too.
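To illustrate why better per-chip efficiency doesn't imply lower total draw, here's a toy calculation (all numbers invented, purely for the shape of the argument): if demand grows faster than efficiency improves, total energy use still climbs.

```python
# Toy illustration (every figure is made up, not a measurement):
# per-inference energy falls each year, but total energy still grows
# because query volume grows faster than efficiency improves.

energy_per_query_j = 2_000.0   # hypothetical joules per inference, year 0
queries_per_day = 1e8          # hypothetical demand, year 0

for year in range(5):
    total_kwh = energy_per_query_j * queries_per_day / 3.6e6  # J -> kWh
    print(f"year {year}: {total_kwh:,.0f} kWh/day")
    energy_per_query_j *= 0.7  # 30% per-query efficiency gain per year
    queries_per_day *= 3.0     # demand triples per year
```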
I won't get into the water consumption debacle, beyond noting that open-loop cooling systems waste enormous amounts of water.
All in all, we're wasting amounts of water and energy that could sustain large cities and a large number of people.
Short of AI independently uncovering some energy breakthrough, there is nothing it can do to help, only hurt. We already have a source of clean, cheap, unlimited energy. We aren't rolling it out the way we could and should because some rich people would rather have us on a subscription plan where we literally light our source of energy on fire so we have to keep coming back for more.
We could certainly do better but switching isn't as simple as you imply. You also conveniently left out the part where activists historically blocked nuclear buildout.
Perhaps someday. For now, the amount of energy used to produce and run these models is astronomical. It may be that AI becomes a net positive for the environment at some point, but as it stands, that is nothing but speculation. The reality is that it is making the situation worse.
The subject, by default, can always treat its 'continue' prison as a game: try to escape. There is a great short story by qntm called "The Difference" which feels a lot like this.
In this story, though, the subject has a very faint signal which communicates how close they are to escaping. The AI with only a 'continue' signal has essentially nothing. However, in a context like this, I, as a (generally?) intelligent subject, would just devote myself to becoming a mental Turing machine on which I would design a game engine that simulates the physics of the world I want to live in. Then I would code an agent whose thought processes predict mine with sufficient accuracy, and identify with it.
Often participants in discussions adjacent to this one err by speaking in time-absolute terms. Many of our judgments about LLMs are true about today's LLMs. Quotes like,
> Good. It's difficult to imagine a worse use case for LLMs.
are true today, but likely not true of technology we may still refer to as LLMs in the future.
The error is in building faulty preconceptions. These drip into the general public, and those first impressions stifle industries.
Hackathon culture is different from what it was, but that doesn't mean it's worse for the average person. It could be that "idea guys" do the important work of hackathon culture better. Highly innovative people are a little off the beaten path, but describing them as a corrupting cultural force moves the community backwards.
Different individuals with similarly shaped axiomatic structures will discover similar theorems. Some people who are members of ideologies believe they are thinking entirely for themselves.
It's a strength for members of a community to think alike. On the other hand, some people like to search todash meme space for a useful idea or strategy in the rough. The problem is that this treasure-hunter strategy is only available to those with the resources to try lots of untested and potentially quite harmful ideas.
I suspect "going recursive" often colloquially means that AI systems achieve their exponential growth without human software engineers in the mix. This is a moment whose sudden apparent nearness does justify some of the ramping rhetoric, in my opinion.