It's not per se madness; companies pay much more than that for code. Instead it's an empirical question about whether they're getting that value from the code.
The difference is that if those companies were to rely only on the AI part, and hence turn us (computer programmers) into mere copy-pasters or less, then within a year or two the "reasoning" behind the latest AI models would go stale, because there would be no new human input. So good luck with that.
But my comment was not about companies; it was about writing code, about the freedom that used to come from it, about the agency that we used to have. There's no agency and no freedom left when you have to pay that much money in order to write code. I guess that can work for some companies, but it certainly won't work for computer programmers as actual human beings (and imo this blog post itself tries to touch on that aspect).
The thing that most surprises me is that IDEs don't have a standard protocol for this, so you basically need a custom test runner if you want one-click "this snapshot failed; update it" self-modifying tests.
I wrote WoofWare.Expect for F#, which has an "update my snapshots on disk" mode, but you can't go straight from test failure to snapshot update without a fresh test run, even though I'm literally outputting a patience diff that an IDE could apply if it knew how.
Worse, Rider (for example) is really bad at noticing when files have changed underneath it, so you have to tell it manually to reload them after running the update, or else you clobber them from the editor.
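For anyone who hasn't written one of these runners: the core mechanism is tiny, which makes the lack of a standard IDE protocol all the more annoying. Here is a minimal Python sketch of the "update my snapshots on disk" idea (hypothetical names like UPDATE_SNAPSHOTS and check_snapshot are mine; this is not WoofWare.Expect's actual API):

    import os
    from pathlib import Path

    SNAPSHOT_DIR = Path("snapshots")
    UPDATE = os.environ.get("UPDATE_SNAPSHOTS") == "1"

    def check_snapshot(name: str, actual: str) -> None:
        path = SNAPSHOT_DIR / (name + ".txt")
        if UPDATE or not path.exists():
            # Update mode: accept the new output as the gold standard on disk.
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text(actual)
            return
        expected = path.read_text()
        if actual != expected:
            # Normal mode: fail. An IDE that understood the emitted diff could
            # offer a one-click "accept" here without a fresh test run.
            raise AssertionError(
                "snapshot %r differs; rerun with UPDATE_SNAPSHOTS=1 to accept" % name
            )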
I am envisioning the PR arguments now when the first instinct of the junior developer is to clobber the prior gold standard outputs. Especially lovely when testing floating point functionality using tests with tolerances.
Some things should be hatefully slow so one's brain has sufficient chance to subconsciously mull over "what if I am wrong?"
Could you share anything about your creation, without having been to school where we taught you what the answers were? Can you deduce the existence of your hippocampus just by thinking really hard?
Could you clarify the question? When everything's going up, it's definitionally not a crash; do you mean something like "where are people going to flee to now/soon, in anticipation of a crash, given how buoyant everything is"?
The banks will start to pull back on AI financing because their calculated risk is going up. That will make the news, and people will sell their AI stocks and just put the cash in a money market fund or something. The stock price decline confirms the banks' calculations, and now they definitely aren't lending to AI companies. That makes the news, and now people are really selling their AI stocks, which drives the price down further. The banks react again…
In 2008/9 people became paranoid there was nowhere safe to go and that really screwed things up on top of everything happening in the stock market.
> just put cash in a money market fund or something
So you are predicting everybody will escape into dollars, which are themselves extremely risky because the world is on the verge of ditching the dollar as the global currency.
There was already double-digit inflation just because, during the pandemic, the US overprinted dollars relative to the size of the global economy. Imagine what the inflation will be if the dollar's economic domain shrinks by half or more.
This is a great question, and one that drives right to the key issue! (oh god, that sounds like an LLM response, sorry)
As with the implosion of the Japanese economy, people will just not invest, instead parking their money in low-yield bank accounts. It was, and in some cases continues to be, an issue for that country.
They're not definitionally the same. Normally a (stock market) crash is just "everyone's assessment of expected future cash flows goes down, meaning that what everyone owns is less valuable". One thing that can cause people's assessments to drop is "everyone else is withdrawing from it, which I assume means they're assessing it as being much less valuable, so they have information I don't, so I should revise downwards", which can make a self-sustaining feedback loop, but that's certainly not the only possible cause of a crash; I wouldn't even say it was the most likely cause of an AI-bubble crash.
My guesses would be "everyone's assessments go down together because OpenAI et al's predictions of their future revenue are observed to be consistently vastly overinflated vs actual performance, but everyone was previously assuming they were roughly correct" or "some political thing happens which makes OpenAI et al's services obviously much less valuable or makes them much less able to provide services".
SBF was really unusual in that he claimed to be a pure expected-utility maximiser. On Conversations with Tyler in March 2022, long before everything blew up, he admitted that he would take 51% coin-flips forever:
> COWEN: Then you keep on playing the game. So, what's the chance we're left with anything? Don't I just St. Petersburg paradox you into nonexistence?
> BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That's the other option.
I'm not saying the pressures are absent, but they are hopefully vastly less compelling for any normal person with a more standard view of risk and utility. ("Sure, I'll just cover up this little bit of fraud, because that's got a better than 50% chance of success" is a course of action SBF all but said he would take, months in advance!)
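To make concrete why that attitude is so alarming: on a repeated double-or-nothing 51% flip, expected value grows without bound even as the chance of not having gone bust shrinks toward zero. A quick back-of-the-envelope illustration (mine, not from the interview):

    # Double-or-nothing at 51% per round, starting from 1 unit and betting it
    # all each time. Expected wealth grows like (2 * 0.51)^n, but you keep
    # anything only if you win every flip, with probability 0.51^n.
    for n in (10, 50, 100):
        p_survive = 0.51 ** n
        expected = (2 * 0.51) ** n
        print(f"n={n:3d}  P(not bust)={p_survive:.2e}  E[wealth]={expected:.2e}")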
TPOT is "this part of Twitter", a loose community of (roughly) post-rat affiliated people; the "squishy half" is presumably referring to the fact that a substantial number of such people end up quite big into woo of various sorts.
Is that statement about C based on anything in particular? C was 18th of all the languages in the article's chart (the worst!), which I'd guess was due to the absence of a standard library.
Fair point. There is a distinction between syntactic efficiency (C is terse) and task-completion efficiency (what the benchmark likely measured). If the tasks involved string manipulation, hash maps, JSON, etc., then C pays a massive token tax because you are implementing what other languages provide in their stdlib. Python has dict and json.loads(); C has malloc and strcmp.
So: C tokenizes efficiently for equivalent logic, but stdlib poverty makes it expensive for typical benchmark tasks. Same applies to Factor/Forth, arguably worse.
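To illustrate the stdlib gap with a toy example of my own (not from the benchmark): the Python side of a typical "parse some JSON and tally things" task is a few lines, while in C a JSON parser, a hash map, and manual memory management all have to be emitted before any task logic appears.

    import json

    # Typical benchmark-style micro-task: parse JSON and tally word counts.
    # Each line below corresponds to a sizeable chunk of hand-written C
    # (a JSON parser, a hash map, string handling, manual memory management).
    payload = '["apple", "pear", "apple", "plum", "pear", "apple"]'
    words = json.loads(payload)
    counts: dict[str, int] = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    print(counts)  # {'apple': 3, 'pear': 2, 'plum': 1}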