Hacker News | -_-'s comments

“The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”

So DoW did get the “all lawful purposes” language they were after, with reference to existing (inadequate, in my view) regulations around autonomous weapons and mass surveillance.


Organizer of the march here.

I think while our messaging was more provocative, our beliefs are pretty similar to what PG outlined in https://paulgraham.com/ineq.html or what Garry Tan has been saying about the tax.


> similar to what PG outlined in https://paulgraham.com/ineq.html or what Garry Tan has been saying

Oh, those people who loved what DOGE was doing, even after it became apparent DOGE's cuts would kill millions of people and permanently compromise critical government systems. Cool role models :/

Almost as cool as Bezos, the centibillionaire whose employees piss in bottles for efficiency.

'These billionaires told us that it's actually great to cut taxes on billionaires, and let them do whatever they like!' ... Bruh. Our species really doesn't have time for this.

Billionaires are an existential threat to democracy, the planet, the oceans, space, etc. Working for them on a voluntary basis to fight a 5% wealth tax is extraordinarily negative EV. For everyone.

... Have you tried looking up your billionaire heroes in the Epstein files? I'd highly recommend it before organizing any more marches for them.


What do you mean? OpenAI's main offices have been in Mission Bay since 2024


Author here!

1a. LLMs fundamentally model probability distributions of token sequences—those are the (normalized) logits from the last linear layer of a transformer. The closest thing to ablating temperature is T=0 or T=1 sampling.

1b. Yes, you can do something like this, for instance by picking the temperature where perplexity is minimized. Perplexity is the exponential of entropy, to continue the thermodynamic analogy.

1c. Higher than for most AI-written text, around 1.7. I've experimented with this as a metric for distinguishing whether text is written by AI. Human-written text doesn't follow a constant-temperature softmax distribution, either.

2b. Giving an LLM control over its own sampling parameters sounds like it would be a fun experiment! It could have dynamic control to write more creatively or avoid making simple mistakes.

2c. This would produce nonsense. The tokens you get with negative-temperature sampling are "worse than random".
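To make the temperature/perplexity relationship concrete, here's a minimal sketch (toy logits, not taken from any real model): temperature divides the logits before the softmax, and perplexity is the exponential of the distribution's entropy.

```python
import math

def softmax_with_temperature(logits, T):
    """Divide logits by T before normalizing; small T approaches argmax,
    large T approaches the uniform distribution."""
    scaled = [l / T for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    Z = sum(exps)
    return [e / Z for e in exps]

def perplexity(probs):
    """Perplexity = exp(entropy), using natural log."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return math.exp(entropy)

# Toy logits, purely illustrative
logits = [2.0, 1.0, 0.1]
p_low = softmax_with_temperature(logits, 0.5)   # sharper distribution
p_high = softmax_with_temperature(logits, 2.0)  # flatter distribution

# Flatter distributions have higher entropy, hence higher perplexity
print(perplexity(p_low) < perplexity(p_high))  # True
```

Lower temperature sharpens the distribution, which lowers entropy and therefore perplexity; a uniform distribution over n tokens has perplexity exactly n.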


> I've experimented with this as a metric for distinguishing whether text is written by AI. Human-written text doesn't follow a constant-temperature softmax distribution, either.

Ooh, that sounds like a cool insight. Like, just do a trailing 20-30 token average of estimated temperature and look for the variance, the way one might track VO2 max.
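A rough sketch of that trailing-window idea, assuming you already have a hypothetical per-token temperature-estimate series (the estimation step itself, e.g. fitting T against the model's logits, is out of scope here):

```python
from collections import deque
import statistics

def trailing_stats(estimates, window=25):
    """Trailing mean and variance over a sliding window of per-token
    temperature estimates. `estimates` is a hypothetical series; returns
    one (mean, variance) pair per token."""
    buf = deque(maxlen=window)
    out = []
    for t in estimates:
        buf.append(t)
        mean = statistics.fmean(buf)
        var = statistics.pvariance(buf) if len(buf) > 1 else 0.0
        out.append((mean, var))
    return out
```

Under the parent comment's hypothesis, a near-constant series (near-zero trailing variance) would look sampler-generated, while human text would show more variance in the estimates.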


What model did you use? I ran this with the original Llama 13B. The newer Llama models use a different tokenizer that will have its own anomalous tokens.


Yep! Very large negative temperatures and very large positive temperatures have essentially the same distribution. This is clearer if you consider thermodynamic beta, where T = ±∞ corresponds to β = 0.
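A quick numerical check of this, using a toy softmax parameterized by β = 1/T instead of T (toy logits, purely illustrative):

```python
import math

def softmax_beta(logits, beta):
    """Softmax parameterized by inverse temperature beta = 1/T."""
    scaled = [beta * l for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    Z = sum(exps)
    return [e / Z for e in exps]

logits = [3.0, 1.0, -2.0]
p_pos = softmax_beta(logits, 1 / 1e9)    # T = +1e9
p_neg = softmax_beta(logits, 1 / -1e9)   # T = -1e9

# Both are within floating-point noise of the uniform distribution,
# since beta is ~0 in either case
print(max(abs(a - b) for a, b in zip(p_pos, p_neg)) < 1e-8)  # True
```

In the β parameterization, T = ±∞ both map to β = 0, which gives the uniform distribution exactly; the discontinuity at T = 0 (β = ±∞) is where the sign actually matters.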


That's the premise behind Workshop Labs! https://workshoplabs.ai


I’ve also noticed recently that when I click a Twitter link from Telegram, it hijacks the Telegram webview to open the tweet in Safari.



If true, it's bad news for Elon Musk and xAI, because they'd have to start over. He's already indicated this with regard to Wikipedia: he wants to train on Grokipedia, not Wikipedia. Removing NSFW material gives him another reason.


Yes! At https://RunRL.com we offer hosted RL fine-tuning, so all you need to provide is a dataset and reward function or environment.

