Hacker News | Lerc's comments

I did some similar shenanigans when I did a silly little system on NeoCities https://lerc.neocities.org/

It uses IndexedDB for the filesystem.

Rather dumbly, it loads the files from a tar archive that is encoded into a PNG, because tar is one of the forbidden file formats.
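For illustration only (this is not the actual NeoCities code, and the function names are made up), smuggling arbitrary bytes such as a tar archive into a valid grayscale PNG can be sketched in Python using nothing but the standard library:

```python
import struct
import zlib

def bytes_to_png(data: bytes) -> bytes:
    """Pack arbitrary bytes into a minimal valid 8-bit grayscale PNG."""
    # Prefix with a 4-byte length so padding can be stripped on decode.
    payload = struct.pack(">I", len(data)) + data
    width = 256                          # one byte per grayscale pixel
    height = -(-len(payload) // width)   # ceiling division
    payload = payload.ljust(width * height, b"\x00")

    def chunk(tag: bytes, body: bytes) -> bytes:
        # PNG chunk: length, tag, body, CRC32 over tag+body.
        return (struct.pack(">I", len(body)) + tag + body
                + struct.pack(">I", zlib.crc32(tag + body)))

    # Each scanline is preceded by a filter byte of 0 (no filtering).
    raw = b"".join(b"\x00" + payload[y * width:(y + 1) * width]
                   for y in range(height))
    # IHDR: width, height, bit depth 8, color type 0 (grayscale), 0, 0, 0.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))

def png_to_bytes(png: bytes) -> bytes:
    """Minimal reader for PNGs produced by bytes_to_png (filter 0 only)."""
    pos, idat, width = 8, b"", 0
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        tag = png[pos + 4:pos + 8]
        body = png[pos + 8:pos + 8 + length]
        if tag == b"IHDR":
            width = struct.unpack(">I", body[:4])[0]
        elif tag == b"IDAT":
            idat += body
        pos += 12 + length   # length + tag + body + CRC
    raw = zlib.decompress(idat)
    # Drop the leading filter byte from each scanline.
    payload = b"".join(raw[i + 1:i + 1 + width]
                       for i in range(0, len(raw), width + 1))
    (n,) = struct.unpack(">I", payload[:4])
    return payload[4:4 + n]
```

Any image host that preserves PNGs losslessly would round-trip the payload; a host that re-encodes images would destroy it.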


I don't drink things with Aspartame because it makes me feel queasy. I don't know of any mechanism that causes that effect. Occasionally I encounter something that I would not have expected to contain Aspartame and notice the feeling before I have even considered the possibility that it might be present. I take that as a sign that it is not psychological.

Same here. Even very small amounts give me a weird tummy feel. A normal amount (e.g. half a can) gets my tummy turned around for a few hours.

How do you determine if they are mentally convincing themselves they are the good guy, when in fact it is you who is the good guy?

From either perspective, if the roles were reversed, wouldn't it look the same? Both parties thinking they are doing the right thing.

There are a lot of legitimate criticisms out there, but they seem to be vastly outnumbered by illegitimate criticisms, no matter what position you hold. It's easy to hold your opinion when you are inundated with a constant stream of invalid arguments that say little more than "I don't like the tribe you chose". Any valid argument is easily overlooked without a sense of guilt in that environment.


This is kind of what Golden Gate Claude was.

A perturbation of the activations that made Claude identify as the Golden Gate Bridge.

Similarly, the more recent research showing anxiety and desperation signals predicting the use of blackmail as an option opens the door for digital sedatives to suppress those signals.

Anthropic has been mostly cautious about avoiding this kind of measurement and manipulation in training. If it is done during training you might just train the signals to be undetectable and consequently unmanipulable.
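As a toy sketch of what naive activation steering amounts to (purely illustrative; real steering adds a learned feature direction to a transformer layer's residual stream at inference time, e.g. via a PyTorch forward hook, and the names below are invented):

```python
def steer(activations, direction, strength):
    """Push each activation vector along a unit 'concept direction'.

    activations: list of hidden-state vectors (lists of floats)
    direction:   the feature direction to amplify
    strength:    how hard to push; too large reliably degrades the model
    """
    norm = sum(d * d for d in direction) ** 0.5
    unit = [d / norm for d in direction]
    return [[a + strength * u for a, u in zip(vec, unit)]
            for vec in activations]

# Pretend hidden states and a pretend "Golden Gate Bridge" feature direction.
acts = [[0.2, -0.1, 0.5], [0.0, 0.3, -0.2]]
golden_gate = [1.0, 0.0, 0.0]
steered = steer(acts, golden_gate, strength=4.0)
```

The appeal is that this happens entirely at inference time; the catch, as noted above, is that cranking `strength` up perturbs everything sharing those dimensions, not just the targeted concept.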


> A perturbation of the activations that made Claude identify as the Golden Gate Bridge.

Great, now we've got digital Salvia


Golden Gate Claude was two years ago, and it's surprising there hasn't been more research into targeted activations since.

There’s been some, but naive activation steering makes models dumber pretty reliably and training an SAE is a pretty heavy lift.

>You already see it in entire towns losing water or their water becoming polluted

Do you have any references for such cases? I have seen talk of such things being at risk, but I am unaware of any specific instances of it occurring.


I know I've seen such a story on HN before, you can probably find it by searching for "water" and "data center/AI."

The closest match I found was https://news.ycombinator.com/item?id=44562052

The article tries to play sleight of hand with the specific instance it cites, but it seems the loss of water is alleged to be caused by sediment from construction rather than by water use.

It's not great that it happened and it is something local government should take action on, but it is also something that could have been caused by any form of industrial construction. I suspect there are already laws in place that cover this. If they are not being enforced that's another issue entirely.


That's exactly the article I was thinking of.

Data center construction exposing weaknesses in local infrastructure is a double-edged sword; you wanna know if things need upgrading but you don't wanna be negatively affected by it.

Maybe there should be some clause in these contracts that mandate tech companies foot the bill for local infrastructure improvements.


In that case it does not depict the scenario you suggested.

This is not a data center issue at all; it is a construction issue. That it was a data center being constructed was incidental.

I believe there are regulations that cover things like this already.

To characterise it as representative of, or specific to, data centers is at best disingenuous.


I didn't write the article man

That may be part of the issue. Perhaps LLMs are just causing people to reveal how much they consider a maintainer to be providing a service for them. Maintainers don't work for you; they let you benefit from the service they perform.

That workload of maintaining a fork doesn't come from nowhere; it's just a workload someone else would have had to do before the fork occurred.


I think it's more likely that libraries will give way to specified interfaces. Good libraries that provide clean interfaces with a small surface area will be much less affected by this compared to frameworks that like to be a part of everything you do.

The JavaScript ecosystem is a good demonstration of a platform that is encumbered with layers that can only ever perform the abilities provided by the underlying platform while adding additional interfaces that, while easier for some to use, frequently provide a lot of functionality a program might not need.

Adding features as a superset of a specification allows compatibility between users of a base specification, failure to interoperate would require violating the base spec, and then they are just making a different thing.

Bugs are still bugs, whether a human or AI made them, or fixed them. Let's just address those as we find them.


I could certainly see that direction earlier in some communities, but reaching agreement on specs seems like the opposite of where distributed low-cost code writing is headed. I.e. I like 20% of your OSS library and have one different opinion, so I pull part of it in directly, change something, and ask an LLM to freshen it, where that should mean what the LLM thinks I usually mean, which is kind of like what some other people mean.

TLA overload strikes again.

Reading this after a day of fighting microcontrollers made me interpret the headline quite differently.

Ignoring DMA requests and contradictory documentation sounded entirely on point.


I wonder if some of these headlines are impacted by the submit article character limit

"12 too long" might lead to acronyms.

It would be nicer if they could say:

Apple ignores 56 interoperability requests under the Digital Markets Act, leaving third-party developers locked out of iOS and iPadOS.


I too honestly thought that this was going to be a deep dive on the M-series PCIe controller or something similar.

I too was confused

Same here lol.

Ok now we need 1541 flash attention.

I'm not sure what the Venn diagram of knowledge needed to understand what that sentence is suggesting looks like; it's probably more crowded in the intersection than one might think.


Believe me, using the 1541 as co-processor and extra storage was super tempting and on my mind all the time! So what do you think? Flash attention with K on the front side and V on the backside? :)

..and we would keep the human in the loop:)

How many 40+ AI pillers? Assume 10M devs in the world; if 10% have heard of flash attention and 1% of those have heard of the 1541, that's 10,000.

Ahh but you also have to know the significance of the 1541 that makes the Flash attention reference work

That is essentially what the reasoning reinforcement training does. It is getting the model to say things that are more likely to result in the correct final answer. Everything it does in between doesn't necessarily need to be a valid argument to produce the answer. You can think of it as filling the context with whatever is needed to make the right answer come out next. Valid arguments obviously help, but so might expressions of incorrect things that are not obviously untrue to the model until it sees them written out. The What's The Magic Word paper shows how far that could go. If the policy model managed to learn enough magic words, it would be theoretically possible to end up with an LLM that spouts utter gibberish until delivering the correct answer seemingly out of the blue.
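A toy illustration of that point (the function name, transcript format, and "Answer:" marker are all invented for the sketch): an outcome-only reward never inspects the tokens between prompt and answer, so gibberish and careful reasoning score identically whenever they land on the same final answer.

```python
def outcome_reward(transcript: str, correct_answer: str) -> float:
    """Score 1.0 iff the text after the last 'Answer:' marker is correct.

    Everything before that marker — the entire chain of thought — is
    ignored, so any token sequence that raises the odds of a correct
    final answer gets reinforced, valid reasoning or not.
    """
    final = transcript.rsplit("Answer:", 1)[-1].strip()
    return 1.0 if final == correct_answer else 0.0

gibberish = "blorp zix qua Answer: 42"
careful = "7 * 6 = 42, so Answer: 42"
# Both transcripts earn identical reward under this objective.
```

Real reasoning-RL setups add more shaping than this, but the core credit-assignment problem is the same: the objective constrains the destination, not the route.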

That's pretty cool, thanks for the extra context! (pardon the... not even pun I guess)

Also, thanks for pointing me at that specific paper; I spend a lot more of my life closer to classical control theory than ML theory, so it's always neat to see the intersection of them. My unsubstantiated hypothesis is that controls and ML are going to start getting looked at more holistically, and not in the way I normally see it (which is "why worry about classical control theory, just solve the problem with RL"). Control theory is largely about steering dynamic systems along stable trajectories through state space... which is largely what iterative "fill in the next word" LLM models are doing. The intersection, I hope, will be interesting and add significant efficiency.

