johnsmith1840's comments | Hacker News

What's the difference? Whether someone acts upset or is upset, the results are the same.

Some humans lack certain emotions. If they tell you something and act on it, does it really matter whether they "felt" that emotion?


If one is unable to feel emotion X, then:

1. One has some ulterior motive for faking it.

2. One’s actions will likely diverge from emotion X. (Eventually)

If everybody believes the same lie, then it could be indistinguishable from the truth (until the nature of the lie/truth becomes clear).


Or their ulterior motive is that they don't have one and want to fit in? Meaning they would never diverge?

Didn't realize my point was so philosophical lol


This is still an ulterior motive (even if benign; we all do it to some extent).

Behavior will diverge eventually.

Because emotions are what drive our decisions.

If you really love tennis, then you spend time and money on tennis. If you just say it to be nice (or to impress somebody), you will not invest in the activity that much and will look for an opportunity to stop.


It's the rise of the P-zombie. https://en.wikipedia.org/wiki/Philosophical_zombie

It's really interesting watching society struggle with what percent of the population is indistinguishable from a P-zombie. It's definitely not zero, but it definitely is a segment of the population.

Do you think people are born P-zombies, or is there some fixed point in time: puberty, or middle age, or around when a lot of psychological problems set in? Do we think some environmental contaminants, like lead, push people toward being P-zombies?


Cool read! Yeah, I suppose this is my point: AI is the perfect P-zombie here.

I was thinking of clear cases, like true psychopaths with respect to certain emotions.


Ever consider the enclave route for this kind of work?


How much energy does a human + work environment cost vs an LLM call?

Human driving into work? Heating/cooling?

Wonder why big AI hasn't sold it as an environmental SAVING technology.
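
For a sense of scale, here's a back-of-envelope sketch in Python. Every number is a rough assumption, not a measurement: a ~0.3 Wh per-query ballpark often cited for chat-sized LLM requests, a ~30 mpg commuter car, a 10-mile each-way drive, and a few kWh of per-person HVAC/lighting.

    # All figures are rough assumptions for illustration, not measurements.
    WH_PER_LLM_QUERY = 0.3       # commonly cited ballpark for one chat-sized query
    KWH_PER_GALLON_GAS = 33.7    # energy content of gasoline
    CAR_MPG = 30                 # typical commuter car
    COMMUTE_MILES_ONE_WAY = 10
    OFFICE_KWH_PER_DAY = 5       # per-person heating/cooling/lighting share
    QUERIES_PER_DAY = 200        # heavy-ish LLM usage

    commute_kwh = 2 * COMMUTE_MILES_ONE_WAY * KWH_PER_GALLON_GAS / CAR_MPG
    human_overhead_kwh = commute_kwh + OFFICE_KWH_PER_DAY
    llm_kwh = QUERIES_PER_DAY * WH_PER_LLM_QUERY / 1000

    print(f"commute + office: {human_overhead_kwh:.1f} kWh/day")    # ~27.5
    print(f"{QUERIES_PER_DAY} LLM queries: {llm_kwh:.2f} kWh/day")  # ~0.06

Even if the per-query figure is off by an order of magnitude, the commute alone dominates under these assumptions.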


After AI tech matures more, we will be able to save EVEN MORE energy by eliminating all the people from the environment.


That's the point? I agree, and roughly it's one of two options.

A: you made this as a free gift to anyone, including OpenAI
B: you made this to profit yourself in some way

The argument he makes is: if you did the second one, don't do open source?

It does kill a ton of open-source companies though, and the truth is that that model of operating is not going to work in this new age.

It's also sad because it means the whole system will collapse. The process that made him famous can no longer be followed: your open-source code will be used by countless people, and they will never know your name.

It's not called a disruptive tech for nothing. You can't un-open-source all that code without lobotomizing every AI model.


Companies competing to buy ad space and SEO on every website, on top of it.


Because it's a fantasy for an unknown amount of time. 1 year? 10? 50? Never? There hasn't been a single proper breakthrough in continual learning (CL) that would enable it. Anyone who studies CL will also get super pissed at it: by our current understanding the problem and its solutions counteract each other, yet a fruit fly does it no problem!
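
For anyone unfamiliar with why CL people get frustrated: the core failure is catastrophic forgetting, where training on a new task overwrites the old one. A minimal NumPy sketch (synthetic tasks and plain gradient descent, not any particular paper's setup):

    import numpy as np

    rng = np.random.default_rng(0)

    def make_task(flip):
        # Synthetic binary task: label = sign of feature 0 (flipped for task B).
        X = rng.normal(size=(500, 2))
        y = (X[:, 0] * flip > 0).astype(float)
        return X, y

    def train(w, X, y, steps=200, lr=0.5):
        for _ in range(steps):
            p = 1 / (1 + np.exp(-X @ w))        # sigmoid
            w -= lr * X.T @ (p - y) / len(y)    # logistic-loss gradient step
        return w

    def acc(w, X, y):
        return (((X @ w) > 0) == y).mean()

    Xa, ya = make_task(+1)   # task A
    Xb, yb = make_task(-1)   # task B: the opposite rule

    w = np.zeros(2)
    w = train(w, Xa, ya)
    print("task A acc after training on A:", acc(w, Xa, ya))  # ~1.0
    w = train(w, Xb, yb)
    print("task A acc after training on B:", acc(w, Xa, ya))  # collapses

The naive fixes (freeze weights, replay old data) trade plasticity for stability, which is the sense in which the problem and its solutions counteract each other.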


Echoing the other comment: they showed another big thing, which is that the output of an AI model is the AI model. If you mass-prompt scrape their AI, you can recreate it almost exactly.

Very dangerous if you think about it: the product itself is the raw building block for itself.

OpenAI spends $1B on their model, releases it, and it instantly gets scraped by a million bots to build some country or company its own model.
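
A sketch of what that scraping loop looks like; `query_model` is a stand-in (stubbed here so the snippet runs on its own), not any vendor's real API:

    import json

    def query_model(prompt: str) -> str:
        # Placeholder: in practice this would call the target model's API.
        return f"<model response to: {prompt!r}>"

    # 1. Mass-prompt the target model and log (prompt, response) pairs.
    prompts = [f"Explain concept #{i} in one paragraph." for i in range(1000)]

    with open("distill_pairs.jsonl", "w") as f:
        for p in prompts:
            f.write(json.dumps({"prompt": p, "response": query_model(p)}) + "\n")

    # 2. The resulting JSONL is already the shape of a supervised fine-tuning
    #    set: train a "student" model on it and it imitates the scraped model.

That the API output is exactly the shape of a fine-tuning dataset is why the product leaks the model.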


I did a lot of early playing around with this using LLMs.

In some early testing I found that injecting a "seed" only somewhat helped. I would inject a sentence of random characters before generating output.

It did actually improve the model's ability to make unique content, but it wasn't great.

It would be cool to formalize the test for something like password generation.
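
Roughly what that injection looked like, reconstructed as a sketch; the prompt wording and seed length are illustrative guesses, and the actual LLM call is left out:

    import secrets
    import string

    def random_seed_sentence(n: int = 32) -> str:
        # A burst of random printable characters injected into the prompt,
        # nudging the model off its most-probable (repetitive) outputs.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(n))

    def build_prompt(task: str) -> str:
        return f"Entropy seed (ignore): {random_seed_sentence()}\n\n{task}"

    # Hypothetical usage: pass this to any chat model.
    print(build_prompt("Generate a strong, memorable passphrase."))

For password generation, the test worth formalizing would be comparing the entropy of outputs with and without the injected seed.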


I bet you the predictions are largely correct but technology doesn't care about funding timelines and egos. It will come in its own time.

It's like trying to make fusion happen only by spending more money. It helps, but it doesn't fundamentally change the pace of true innovation.

I've been saying for years now that the next AI breakthrough could come from big tech, but it also has just as likely a chance of coming from a smart kid with a whiteboard.


Well, the predictions are tied to the timelines. If someone predicts that AI will take over writing code sometime in the future I think a lot of people would agree. The pushback comes from suggesting it's current LLMs and that the timeline is months and not decades.


> I've been saying for years now that the next AI breakthrough could come from big tech, but it also has just as likely a chance of coming from a smart kid with a whiteboard.

It comes from the company best equipped with capital and infra.

If some university invents a new approach, one of the nimble hyperscalers / foundation model companies will gobble it up.

This is why capital is being spent. That is the only thing that matters: positioning to take advantage of the adoption curve.


Yes, scaling is always capital-hungry, but the innovation itself is not.


All of Moltbook is the same. For all we know, it was literally the guy complaining about it who ran this.

But at the same time, true or false, what we're seeing is a kind of quasi science fiction. We're looking at the problems of the future here, and to be honest, it's going to suck for future us.

