We don't know what "think like a person" entails, so we don't know how different human thought processes are from predicting what comes next, or whether those differences are meaningful when making a comparison.
Humans are also trained to predict the next appropriate step based on our training data. That claim is equally valid, but it says equally little about the actual process and whether it's comparable.
We do know that in terms of external behavior and internal structure (as far as we can ascertain it), humans and LLMs bear at most a passing resemblance in a few characteristics. Attempting to anthropomorphize LLMs, or even mentioning 'human' and 'intelligence' in the same sentence, predisposes us to those 'hallucinations' we hear so much about!
We really don't. We have some surface-level idea about the differences, but we can't tell how those differences affect actual learning and behaviour.
More importantly, we have nothing to tell us whether it matters, or whether any number of sufficiently advanced architectures will inevitably approximate similar behaviours when exposed to the same training data.
What we are seeing so far very much appears to be that as the language and reasoning capabilities of the models increase, their behaviour increasingly mimics how humans would respond. Which makes sense, as that is what they are being trained to do.
There's no particular reason to believe there's a ceiling to the precision of that ability to mimic human reasoning, intelligence, or behaviour, but there may well be practical ceilings for specific architectures that we don't yet understand. Or it could just be a question of efficiency.
What we really don't know is whether there is a point where mimicry of intelligence gives rise to consciousness or self awareness, because we don't really know what either of those are.
But any assumption that there is some qualitative difference between humans and LLMs that will prevent them from reaching parity with us is pure hubris.
But we really do! There is nothing surface about the differences in behavior and structure of LLMs and humans - any more than there is anything surface about the differences between the behavior and structure of bricks and humans.
You've made something (at great expense!) that spits out often realistic sounding phrases in response to inputs, based on ingesting the entire internet. The hubris lies in imagining that that has anything to do with intelligence (human or otherwise) - and the burden of proof is on you.
> But we really do! There is nothing surface about the differences in behavior and structure of LLMs and humans - any more than there is anything surface about the differences between the behavior and structure of bricks and humans.
These are meaningless platitudes. Large enough LLMs are trivially Turing complete given a feedback loop: give the model the rules for a Turing machine and offer to act as the tape, step by step. Yes, we can tell that they won't do things the same way as a human at a low level, but just as differences in hardware architecture don't change that two computers can compute the same set of computable functions, we have no basis for thinking that LLMs are somehow unable to compute the same set of functions as humans, or any other computer.
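The feedback loop being described can be sketched as follows. This is a minimal illustration, not a real LLM call: `model_step` is a deterministic stub standing in for a model that has been prompted with the same transition rules, and the harness plays the role of the tape. The loop structure is the point, not the stub.

```python
# Transition table for a tiny Turing machine that flips bits until it
# reads a blank, then halts: (state, symbol) -> (write, move, next_state)
RULES = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),
}

def model_step(state, symbol):
    # Stand-in for asking an LLM: "state is X, you read Y - what next?"
    # A real harness would parse the model's reply into this triple.
    return RULES[(state, symbol)]

def run(tape):
    """Harness acting as the tape, feeding each symbol back to the model."""
    tape = list(tape)
    head, state = 0, "flip"
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = model_step(state, symbol)
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return "".join(tape).rstrip("_")

print(run("1011"))  # -> 0100
```

Anything that can fill the `model_step` role reliably, plus an external loop and tape, is Turing complete; the argument is that a sufficiently capable LLM can fill that role.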
What we're seeing is an ability to reason and use language that converges on human abilities, and that in itself is sufficient to question whether the differences matter any more than a different instruction set matters beyond the low-level abstractions.
> You've made something (at great expense!) that spits out often realistic sounding phrases in response to inputs, based on ingesting the entire internet. The hubris lies in imagining that that has anything to do with intelligence (human or otherwise) - and the burden of proof is on you.
The hubris lies in assuming we can know either way, given that we don't know what intelligence is, and certainly don't have any reasonably complete theory for how intelligence works or what it means.
At this point it "spits out often realistic sounding phrases" the way humans spit out often realistic sounding phrases. It's often stupid. It also often beats a fairly substantial proportion of humans. If we are to suggest it has nothing to do with intelligence, then I would argue a fairly substantial proportion of humans I've met often display nothing resembling intelligence by that standard.
> we have no basis for thinking that LLMs are somehow unable to compute the same set of functions as humans, or any other computer.
Humans are not computers! The hubris, and the burden of proof, lie very much with those who think they've made a human-like computer.
Turing completeness refers to symbolic processing - there is rather more to the world than that, as Gödel showed: there are true statements that cannot be proven within any given formal system.
You don't need to understand much of what "move like a person" entails to understand it's not the same method as "move like a car", even though both start with energy and end with transportation. I.e. "we also predict the next appropriate step" isn't the same thing as "we go about predicting the next step in a similar way". Even without a deep understanding of human consciousness, what we do know doesn't line up with how LLMs work.
What we do know is superficial at best, and tells us pretty much nothing relevant. And while there likely are structural differences (it'd be too amazing if the transformer architecture just chanced on the same approach), we're left to guess how those differences manifest and whether they are meaningful when comparing the two.
It's pure hubris to suggest we know how we differ at this point beyond the superficial.