Hacker News | jph00's comments

"already substantially completed" isn't accurate. $450m of the eventual $1.65b cost had been spent at that point - so less than half.

I'd call that substantial

Indeed, considering that much of the final cost consists of carrying costs, litigation, and year-of-expenditure overruns that were caused by the delay.

Yeah, this has always been the glaring blind spot for most of the "AI Safety" community, and most of the proposals for "improving" AI safety actually make these risks far worse and far more likely.


It makes quite a lot of sense to focus on reducing the risks of every human everywhere dying, rather than the risks of already existing oppression getting worse.


No, you are deeply misunderstanding the issue. Creating a rivalrous good that powers fight over, and then use violence to maintain control of, creates a global feudalism - that is not "existing oppression getting worse". It actually makes the risk of every human everywhere dying far higher, and even if that doesn't happen, it decreases global utility by a similar percentage (99%, instead of 100%). It could actually be worse, if average human utility becomes negative.

GPL was created as a workaround for copyright - it wouldn't have been needed if copyright didn't exist. There are complex arguments both for and against copyright, and there's no reason to simply assume it must always remain just as it is now even as circumstances change.


Nearly all my coding for the last decade or so has used literate programming. I built nbdev, which has let me write, document, and test my software using notebooks. Over the last couple of years we integrated LLMs with notebooks and nbdev to create Solveit, which everyone at our company uses for nearly all our work (even our lawyers, HR, etc).

It turns out literate programming is useful for a lot more than just programming!
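As a rough illustration of the style (the cell contents and function name here are hypothetical examples, not taken from nbdev's docs), an nbdev-style notebook cell mixes exported code, prose, and tests in one place:

```python
#| export
def say_hello(name):
    "Return a greeting - the docstring, prose, and tests all live alongside the code."
    return f"Hello, {name}!"

# tests run in the notebook itself, right next to what they test
assert say_hello("world") == "Hello, world!"
```

Cells marked `#| export` get written out to an importable Python module, so the notebook is simultaneously the source, the docs, and the test suite.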


This seems to be the best link? https://solve.it.com/

The name is quite hard to search for, as it's used by a lot of different things.

Jeremy, it's pretty hard to understand what this is from the descriptions, and the two videos are each ~1 hour long. Please consider showing screenshots and one or two short videos.


This is exactly the thinking that has characterized responses to new sources of power through history, and has been consistently used to excuse hoarding of that power. In the end, enlightenment thinking has largely won out in the western world, and society has prospered as a result.

Centralizing power is dangerous and leads to power struggles and instability.


In the alternative, asymmetry is guaranteed.

When you only allow gov and big tech access to powerful AI, you create a much more dangerous and unstable world.


Ideally, sufficiently powerful AI would not be created until the necessary safety mechanisms are in place.

But also, that’s a different kind of asymmetry?


Yes there are. Lots of researchers are more interested in making a contribution to societal flourishing than in making incredible sums of money. That's why there are still lots of top AI researchers in academia.


llms.txt files have nothing to do with crawlers or big LLM companies. They are for individual client agents to use. I have my clients set up to always use them when they’re available, and since I did that they’ve been way faster and more token efficient when using sites that have llms.txt files.

So I can absolutely assure you that LLM clients are reading them, because I use that myself every day.
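As a minimal sketch of how a client agent might locate the file (the helper function here is hypothetical; the root-level `/llms.txt` path follows the llmstxt.org convention):

```python
from urllib.parse import urlparse, urlunparse

def llms_txt_url(page_url: str) -> str:
    """Given any page URL, return the site's conventional /llms.txt location."""
    p = urlparse(page_url)
    return urlunparse((p.scheme, p.netloc, "/llms.txt", "", "", ""))

print(llms_txt_url("https://llmstxt.org/some/page"))  # https://llmstxt.org/llms.txt
```

An agent can fetch that URL first and, if it exists, use the curated markdown index instead of crawling and parsing full HTML pages, which is where the token savings come from.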


Thanks for the clarification.

> for use in LLMs such as Claude (1)

From your website, it seems to me that llms.txt is addressed to all LLMs such as Claude, not just 'individual client agents'. Claude never touched llms.txt on my servers, hence the confusion.

1. https://llmstxt.org


It's not a mistake. It's correct, and is an excellent way to present this information.


> I don’t know how to trust the author if stuff like this is wrong.

She's not wrong.

A good way to do this calculation is with the log-ratio, a centered measure of proportional difference. It's symmetric, and widely used in economics and statistics for exactly this reason. That is:

ln(1.2/0.81) = ln(1.2) - ln(0.81) ≈ 0.393

That's nearly 40%, as the post says.
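In Python, using only the standard library, the same figure falls out directly:

```python
import math

actual, expected = 0.81, 1.20  # 19% slower vs. an expected 20% faster (speed multipliers)
log_ratio = math.log(expected) - math.log(actual)  # = ln(expected/actual)
print(round(log_ratio, 3))  # 0.393
```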


So if the numbers were "99% slower than without AI, but they thought they would be 99% faster", you'd call that "they were 529% slower", even though it doesn't make sense to be more than 100% slower? And you'd not only expect everyone to understand that, but you really think it's more likely that a random person on the internet used a logarithmic scale than that they just did bad math?
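For what it's worth, the 529% in this hypothetical is the same log-ratio arithmetic applied to the extreme case:

```python
import math

# 99% slower = a 0.01x speed multiplier; expecting 99% faster = a 1.99x multiplier
print(round(100 * math.log(1.99 / 0.01)))  # 529
```

The log-ratio is unbounded, which is exactly the commenter's objection: on that scale "more than 100% slower" is meaningful, while on a plain percentage scale it isn't.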


Well, this random person we are referring to happens to have a PhD in math from Duke.

I find that satisfying.

