
It’s definitely not the case. LLMs of any sort do not in any sense reason or understand anything.

They literally just make stuff up (technically, they just produce a continuation of whatever you fed in). The result usually sounds good, is often true, and is sometimes even helpful, because those are qualities of the training data that the made-up stuff is based on.
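
To make "continuation" concrete, here's a toy sketch: a bigram model that, like an LLM at vastly larger scale, repeatedly samples a next token conditioned on the text so far. The corpus is made up and this is nothing like GPT-4's architecture; it only illustrates the continuation mechanic:

    import random
    from collections import defaultdict

    # Made-up "training data".
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Record which tokens followed which during training.
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def continue_text(prompt, n_tokens=5):
        tokens = prompt.split()
        for _ in range(n_tokens):
            candidates = following.get(tokens[-1])
            if not candidates:
                break
            # Sample in proportion to how often each token followed
            # the previous one in the training data.
            tokens.append(random.choice(candidates))
        return " ".join(tokens)

    print(continue_text("the cat"))  # e.g. "the cat sat on the rug and"

It never checks anything against the world; it only extends the prompt in ways the training data makes likely.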



> It’s definitely not the case. LLMs of any sort do not in any sense reason or understand anything.

This seems like a claim about the way that the LLM neural net algorithm works. But AFAIK no one has a good understanding of how the LLM NNs work.

Why are you so certain that the LLM NN isn't doing the reasoning-algorithm or the understanding-algorithm?


Neural networks are not new, and they're just mathematical systems.

LLMs don't think. At all. They're basically glorified autocorrect. What they're good for is generating a lot of natural-sounding text that fools people into thinking there's more going on than there really is.


> Neural networks are not new

I agree. The McCulloch-Pitts paper was published in 1943.

> they're just mathematical systems.

What do you mean by "mathematical system"? AFAIK the GPT-4 model is literally a computer program.

> LLMs don't think. At all.

This is the same assertion that OP made, and I'm still confused as to how anyone could be certain of its truth given that no one actually knows what is going on inside the GPT-4 program.

> They're basically glorified autocorrect. What they're good for is generating a lot of natural-sounding text that fools people into thinking there's more going on than there really is.

Is that an argument for the claim "LLMs don't think."? It doesn't seem like it to me, but maybe I'm mistaken.


Not new, but we don't understand how they work at scale.

I don't think reductionistic arguments hold much water. Sure, neural networks are just matrix multiplication. In the same way that a brain is just a bunch of cells. Understanding the basic building blocks doesn't mean understanding the whole.
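
To make the building-block point concrete, here's a toy self-attention layer in numpy (sizes and weights are arbitrary stand-ins, not GPT-4's). Every step is elementary linear algebra, yet that tells you nothing about what billions of trained weights compute together:

    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8                  # 4 tokens, 8-dim embeddings (toy sizes)
    x = rng.normal(size=(seq_len, d_model))  # token embeddings
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

    q, k, v = x @ Wq, x @ Wk, x @ Wv         # three matrix multiplications
    scores = q @ k.T / np.sqrt(d_model)      # one more
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    out = weights @ v                        # and another

    print(out.shape)  # (4, 8) -- nothing but matmuls and a softmax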

We can always say that LLMs don't think if we define "think" as something only a biological brain does. But the fact is that they generate outputs that, from a human perspective, could only plausibly have been produced by reasoning. So they at the very least run processes that functionally achieve the same goal as reasoning.

The "stochastic parrot" metaphor, while apt in its day, has proven obsolete: pretty much all the examples of things LLMs "could not do" in the early papers turn out to be doable by the likes of GPT-4. At this point, arguments against the possibility of LLMs reasoning look like a constant moving of the goalposts.


> and they're just mathematical systems

Obvious question: can Prolog do reasoning?

If your definition of reasoning excludes Prolog, then... I'm not sure what to say!
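
For the curious, here's a minimal sketch of the kind of reasoning Prolog does, backward chaining over Horn clauses with unification, written in Python with made-up family facts. Every answer follows from purely mechanical symbol manipulation:

    import itertools

    FACTS = [("parent", "tom", "bob"), ("parent", "bob", "ann")]
    RULES = [  # (head, body): the head holds whenever every body goal holds
        (("grandparent", "X", "Z"),
         [("parent", "X", "Y"), ("parent", "Y", "Z")]),
    ]

    fresh = itertools.count()

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, subst):
        # Follow variable bindings until a value or an unbound variable.
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def unify(a, b, subst):
        # Try to make a and b equal; return an extended substitution, or None.
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            return subst
        if is_var(a):
            return {**subst, a: b}
        if is_var(b):
            return {**subst, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None

    def rename(term, mapping):
        # Give each rule use its own variable copies (standardizing apart).
        if is_var(term):
            return mapping.setdefault(term, f"{term}{next(fresh)}")
        if isinstance(term, tuple):
            return tuple(rename(t, mapping) for t in term)
        return term

    def solve(goals, subst):
        # SLD resolution: prove the first goal, then the rest, backtracking.
        if not goals:
            yield subst
            return
        first, rest = goals[0], goals[1:]
        for head, body in [(f, []) for f in FACTS] + RULES:
            m = {}
            head, body = rename(head, m), [rename(g, m) for g in body]
            s = unify(first, head, subst)
            if s is not None:
                yield from solve(body + rest, s)

    # Query: grandparent(tom, Who)?
    for s in solve([("grandparent", "tom", "Who")], {}):
        print(walk("Who", s))  # -> ann

If that counts as reasoning, then "it's just a mathematical system" can't be a disqualifier by itself.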



