SubiculumCode's comments | Hacker News

Well, in that case, it's bad. Obviously.

One-offs. A lot of research results in one-off code. You may never go back to this dataset or these ideas again. When you do, sometimes years later, you go: oh shit, this is hard to work with. So then you begin to build better structures and do the extra work it takes to make things easy to apply to new purposes, or to accept new (but slightly different) datasets. That takes time, effort, and money. And that is where it all breaks down. Most scientists have to be jacks of many trades to get by.

Reminder: human cognition is complex, and determining whether something is "good" or "bad" won't come from one or two studies.

Point for discussion: we know that task and context switching imposes substantial cognitive costs, leading to lower and slower performance for a time. I think it may be reasonable to hypothesize that interacting with an LLM to solve tasks tends to focus the brain at a more strategic level: What do I want to solve? What is my goal? Actually solving individual problems is very different; it is more concrete and mechanistic, requiring a different mode of thought. Switching from the former to the latter is a cognitive task switch: the context changes, and resetting into the new context takes time, which imposes costs. Unless they had a control arm that imposed a comparable task-switching cost...


TRUMP: "I did post it and I thought it was me as a doctor and had to do Red Cross. Only the fake news could come up with that one."

Translation: I don't think I'm Jesus; I'm just too stupid to know the difference between Jesus healing people and Red Cross medics healing people.

or

Translation: I am a megalomaniac, but MAGA is so stupid they'll believe anything.


I just stumbled on this site, which monitors the performance stability of various models like ChatGPT Codex 4.3, etc. Some models seem to fluctuate in performance, probably due to dynamic reallocation of compute budgets. Fairly interesting stuff, and it gives credence to the idea that the same model performs differently on different days, and that some models (e.g. ChatGPT Codex 5.2) are more consistent than newer ones (e.g. ChatGPT 5.4).

On every article like this, there is someone who points this out. Not hard to do, but it sure is reliable.

The hard work would be maintaining a database of ideas that were similarly hyped over the past (say) couple of centuries - including details on if/when each idea worked out, fell out of hype-space, or was proven useless (a rough schema is sketched below).

From that, you might be able to draw useful conclusions. Well...you'd also need correction factors for how profitable the hype itself was, over time, in the various scientific & technical fields.

The business model would be selling db access to VC's, R&D managers, and other folks making decisions about real money.
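A rough sketch of what such a schema could look like, using sqlite3 from the shell; every name here (hype.db, the ideas table, its columns) is hypothetical, not from any existing product:

    sqlite3 hype.db <<'SQL'
    CREATE TABLE ideas (
      id           INTEGER PRIMARY KEY,
      name         TEXT NOT NULL,    -- e.g. 'cold fusion'
      field        TEXT,             -- scientific/technical field
      hype_start   INTEGER,          -- year the hype began
      outcome      TEXT CHECK (outcome IN
                     ('worked_out', 'fell_out_of_hype', 'proven_useless', 'pending')),
      outcome_year INTEGER,          -- if/when the outcome was settled
      hype_profit  REAL              -- how profitable the hype itself was
    );
    SQL

The correction factors from the previous paragraph would live in hype_profit (and per-field variants of it), so any conclusions could be weighted by how much money the hype itself moved.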


And the failure of the technologies to deliver is equally reliable!

As an aside: I personally have no use for Unicode in bash commands, and the potential for sneaky maliciousness worries me. Does anyone know of a way to automatically strip (e.g. with tr) all Unicode when pasting into a terminal?
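A minimal sketch, assuming you route the paste through a clipboard command (pbpaste on macOS; xclip on Linux, if installed) instead of pasting directly into the terminal; tr can then drop everything outside printable ASCII:

    # Keep only tab, LF, CR, and printable ASCII (octal 40-176); delete the rest.
    pbpaste | tr -cd '\11\12\15\40-\176'                        # macOS
    xclip -selection clipboard -o | tr -cd '\11\12\15\40-\176'  # Linux

Note this is a blunt instrument: it also strips legitimate non-ASCII, such as accented characters in filenames.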

That's a fairly common human error as well, btw: source attribution failure.


Yes. There is a ton of Russian propaganda against the Catholic Church claiming the current and previous popes are "anti-popes" and spawns of Satan, and all that - and it pushes exactly this progression: Catholic Church --> Russian Orthodox Church, which is under Putin's thumb.


:) Coffee is good

