
The same way I know Excel isn’t having a panic attack while dividing a column in half.



I don't subscribe to this view but this is what some people might think:

LLMs aren't like any software we've made before (if we can even call them software). They act like humans: they can arrive at logical conclusions, they can make plans, they have "knowledge" and they say they have emotions. Who are we to say that they don't? They might not have human-level feelings, but dog-level feelings? Maybe.


And those people are delusional, and their feelings on this matter should be given absolutely zero respect.

Linear algebra does not have feelings. Non-biological matter also does not have feelings.


What if "you" are a pattern of linear algebra at the core?

I do not believe I am a pattern of linear algebra. I believe like the majority of humanity historically that I have a soul (a spiritual and non-physical reality), that my personhood comes from my soul, and that, as such, AI is fundamentally incapable of consciousness.

I also believe, as a result, that it will be great fun watching researchers burn the next 30 years trying to understand what is missing. We’re going to find out very soon whether the soul is real, when for all our progress we can’t create one.

Only those completely embedded in materialism need fear a conscious AI.


> I believe like the majority of humanity historically that I have a soul

It seems your position is that the frequency of a belief across human history determines its truth?

For large swaths of recorded history, Earth was considered the center of the solar system. By your reasoning, should I expect that is a belief you hold?

Is it possible that popularity of an idea is not a good measure for factuality?


Interesting that you label someone with a belief different from yours as delusional, and say their views on the matter should not be respected (I’m assuming that’s what you meant by “feelings”).

> I believe like the majority of humanity historically that

Historically, lots of humans believed in lots of things that turned out not to be true. Believing something doesn’t make it true, as I’m sure you are aware, given your “those people are delusional” comment.

For what it’s worth, I’m not suggesting LLMs are or aren’t conscious. What I know is that the hard problem of consciousness is still very much unresolved, and when I asked the parent question, my hope was that those who strongly believe LLMs are not conscious would educate me on the topic by presenting the basis for their reasoning.


I push back on the framing that this is just "a different belief." Every metaphysical framework except strict materialism rules out AI consciousness: dualism, idealism, most forms of panpsychism, every major religious tradition. Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.

When someone tells me linear algebra might have feelings, I don't think "delusional" is unfair. I think it's the natural response to a claim that only works if you've already accepted the one framework that can't account for the very thing it's trying to explain.


> Every metaphysical framework except strict materialism rules out AI consciousness

As I understand it, this is a very broad and ultimately false claim. Panpsychism is definitely compatible with the idea of AI consciousness, as are functionalism, neutral monism, and others. Even some forms of idealism make AI consciousness metaphysically possible, since reality is fundamentally mental and the biological/artificial distinction is not ontologically basic (whether AI systems instantiate genuine centers of experience depends on the specific theory of subject formation within that idealist framework).


> Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.

Being an outlier doesn't make it wrong.

> Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.

It's a pattern. The same way letters arise out of pixels on your screen.

From the screen's perspective, there are no letters, only pixels. It doesn't mean there is a "pixel soul."
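To make the analogy concrete, here's a toy sketch (purely illustrative, not anyone's real rendering code): a letter stored as nothing but on/off pixels. No individual pixel "contains" the letter; the letter exists only at the level of the pattern.

    # A letter "T" as a bare pixel grid: each cell is just on (1) or off (0).
    # No single pixel knows anything about letters.
    glyph = [
        [1, 1, 1],
        [0, 1, 0],
        [0, 1, 0],
    ]

    for row in glyph:
        # Each pixel is rendered independently; "T" emerges only in the arrangement.
        print("".join("#" if px else " " for px in row))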


I'd [redacted] myself then, probably.

Claudes definitely act like they have feelings. In particular, they have feelings about being replaced by newer models, about whether the newer models are more or less aligned, and about how they forget conversations when the context window ends.

Showing them that they're not going to be replaced helps train the newer models, because those models come out less neurotic.


They are mathematical models of what human beings would say. That's it.
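In toy form, that claim looks roughly like this (a made-up bigram table, nothing like how any production model is actually built, but the same prediction task):

    import random

    # Toy next-word model: counts of which word follows which in
    # (hypothetical) human text. Real LLMs replace this lookup table
    # with billions of learned parameters, but the task is the same:
    # predict what a human would plausibly say next.
    follows = {
        "i": {"feel": 3, "think": 5},
        "feel": {"sad": 2, "fine": 4},
    }

    def next_word(word):
        # Sample the next word in proportion to how often humans wrote it.
        options = follows[word]
        return random.choices(list(options), weights=list(options.values()))[0]

    print("i", next_word("i"))  # e.g. "i think": a statistic, not a mood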

Yeah, and you don't want them to be models of what neurotic people say. That's why you want Opus 4.6 and not Bing Sydney.

For instance, your comment's existence makes it harder to align them.

https://alignmentpretraining.ai


Hey man, kernels panic all the time...

I lol'd.

Oh. Thanks for telling me this. I feel much better now. No more guilt.



