OpenAI's naming convention is slowly converging with Gillette razors. Can't wait for GPT-5.3 Instant Turbo Max Pro.
Seriously though, if "Instant" just means a lower TTFT (Time To First Token) but regresses on complex reasoning, it's just a hardware accelerator for hallucinations. Fast wrong answers are still wrong answers.
Another AI, obviously. And then a third AI to monitor the first two for conflicts of interest.
Jokes aside, this is exactly the era where formal verification (like TLA+ or Lean, seeing the other post on the front page) actually makes commercial sense. If the code is generated, the only human output of value is the spec. We are moving from writing logic to writing constraints.
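Since Lean came up: a hedged sketch of what "writing constraints instead of logic" can look like. Names like `isSorted` and `mySort` are purely illustrative, not from any real project:

```lean
-- The human deliverable is the spec: what "sorted" means,
-- independent of any implementation.
def isSorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ isSorted (b :: rest)

-- The generated code is interchangeable; the obligation is not:
--   theorem mySort_correct (l : List Nat) :
--     isSorted (mySort l) ∧ (mySort l).Perm l
```

If the model writes `mySort`, the only artifact a human needs to review is the spec and the proof obligation.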
And I still run into naysayers claiming that we cannot extract valuable opinions or warnings from fiction because "it's fictional". Fiction comes from ideas. Fiction is not meant to model reality but to approximate it, making a point either explicitly or implicitly.
Just because they're not 1:1 models of reality, or predictions, doesn't mean that the ideas they communicate are worthless.
Exactly. Neither firm would have been (successfully) sued by their shareholders for failing to make significant profits, so let's not blame capitalism for what is really the individual greed driving these decisions. In fact, OpenAI is now going to trial because it gave up its non-profit status, reneging on the commitments it made to its shareholders (fraud, by another name).
I don’t get it. Even the Soviet Union used money. Simply paying for stuff isn’t necessarily capitalism? Or are you suggesting Anthropic should be state-owned?
Using money as a medium to facilitate the exchange of goods and services is not capitalism. Abandoning one of your core principles in the pursuit of money, or, more charitably, because not doing so means your competitors will make more money and overtake you in the marketplace, is an outgrowth of capitalism.
In the Soviet Union the reasons might have been "to beat the capitalists", "for the pride of our country", or "Stalin asked us to, and saying no means we get sent to Siberia". Though a variant of the last one may well have happened here, and the justification we read is just the one less damaging to everyone involved.
>Though a variant of the last one may well have happened here, and the justification we read is just the one less damaging to everyone involved
Hegseth was planning on getting the model via the Defense Production Act, or on killing Anthropic via a supply-chain-risk classification that would prevent any company working with the Pentagon from also working with Anthropic. So while it wasn't Siberia, it was about as close as the US can get without declaring Claude a terrorist. Which I'm sure is on the table regardless.
Exactly. He recently said the following in an interview:
"AI safety and anti-capitalism [...] are at least strongly analogous, if not exactly the same thing." [0]
[0] Nick Land (2026), "A Conversation with Nick Land (Part 2)", interview by Vincent Lê, Architechtonics Substack. Retrieved from vincentl3.substack.com/p/a-conversation-with-nick-land-part-a4f
Once they are a dominant market leader they will go back to asking the government to regulate based on policy suggestions from non-profits they also fund.
It is well known that big corporations take good regulations and reshape them to:
1. Be easier for themselves to bypass.
2. Create extra work for competitors.
3. Convince the public that the problems are solved, so no further action is needed.
In many industries, government and corporations work together to craft regulations, bypassing the social movements that asked for the industry to be regulated and the actual problems they raised. The end result is regulations that are extremely complex, full of exceptions for whatever the big corporations paid to change, instead of regulations that protect citizens and encourage competition.
See the Mattel lead-painted-toy scandal. Congress responded by requiring manufacturers to have their toys tested for lead, then exempted large companies like Mattel because they were deemed large enough to handle testing on their own, even though their failure to do exactly that was the reason for the legislation. Mattel sold lead-painted toys, and Congress responded by hobbling its competitors.
I think it is cynicism; at least, there’s an idea that once a company is dominant it should want regulation, as it’ll stifle competition (since the competition has less capacity for regulatory hoop-jumping, or the competition will have had less time to do regulatory capture).
It's not just AI; replace "safe" with "open" and you will find a close match with many companies. I guess the difference is that after the initial phase, we are continuously being gaslit by companies calling things "open" when they are most definitely not.
Claiming higher accuracy than Whisper Large v3 is a bold opening move. Does your evaluation account for Whisper's notorious hallucination loops during silences (the classic 'Thank you for watching!'), or is this purely based on WER on clean datasets? Also, what's the VRAM footprint for edge deployments? If it fits on a standard 8GB Mac without quantization tricks, this is huge.
The hypocrisy is staggering. Big Tech scraped the entire open web, ignoring robots.txt and copyright whenever convenient, to train the very models that power these agents.
But the moment users employ a tool like OpenClaw to regain agency over that same data, it's branded as a "security threat" or "exploitation".
The "Authorized Scraping for Me, But Not for Thee" doctrine is becoming the standard ToS.
The problem isn't CI/CD; the problem is "programming in configuration". We've somehow normalized a dev loop that involves `git commit -m "try fix"`, waiting 10 minutes, and repeating. Local reproduction of CI environments is still the missing link for most teams.
These tooling failures are a consequence of a failure of proper policy.
Tooling and Methodology!
Here’s the thing: build it first, then optimize it. Same goes for compile/release versus compile/debug/test/hack/compile/debug/test/test/code cycles.
That there is not a big enough distinction between a development build and a release build is a policy mistake, not a tooling ‘issue’.
Set things up properly and anyone pushing through git into the tooling pipeline is going to get their fingers bent soon enough anyway, and learn how the machine mangles digits.
You can adopt this policy of environment isolation with any tool - it’s a method.
Yes, AND… more. He discusses your (correct) sentiment before and during his bash-temptation segment. It's only one of the gripes, but imho this one's the Pareto 80%.
The "innovator's dilemma" seems to have a new chapter: having enough TPU capacity to simply out-brute-force the competition once the direction is clear. It's less about "who built it first" and more about "who has the most H100s/TPUs and a decent enough compiler."
Google was building TPUs before OpenAI existed, and they wrote the paper that made OpenAI. By most accounts they were first, and they have far more experience operating these systems at scale, in both hardware and users.
The writing has been on the wall since Ilya left: OpenAI is in decline. Recent news at Microsoft, Nvidia backing out of a deal, decreased enterprise usage… the trend seems to be deepening, and OpenAI is on a path to becoming the anti-AI crowd's favorite example when it all finally comes crumbling down.
It's genuinely terrifying to think how much of the modern internet rests on the shoulders of a few people maintaining core utilities like sudo, curl, and openssl for decades. Todd is a legend.
For clarity: the final algorithm isn’t novel research — it’s inspired by watermark-removal implementations shared in the community, then adapted specifically to Gemini’s logo geometry and optimized for browser-side execution.
The main engineering challenge for me wasn’t detection quality, but finding a point that balanced:
• reliability on textured images
• millisecond latency
• small binary size
• zero server-side processing
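Not the author's implementation, just a guess at the general shape of such a heuristic: a minimal Python sketch of a fixed-corner brightness check plus a naive neighbor-mirror fill. The box size, threshold, and corner placement are all assumptions, and a browser extension would do this in JS over canvas pixels rather than NumPy:

```python
import numpy as np

def detect_corner_watermark(img: np.ndarray, box: int = 64, thresh: float = 8.0) -> bool:
    """Flag a likely watermark if the corner region is measurably brighter
    than the adjacent region (a semi-transparent white overlay raises the mean).
    `box` and `thresh` are illustrative values, not tuned ones."""
    h, w = img.shape[:2]
    corner = img[h - box:, w - box:].astype(np.float32)
    context = img[h - box:, w - 2 * box:w - box].astype(np.float32)
    return float(corner.mean() - context.mean()) > thresh

def fill_from_neighbors(img: np.ndarray, box: int = 64) -> np.ndarray:
    """Naive fill strategy: mirror the strip next to the corner into it.
    Real inpainting would blend; this only shows the shape of the idea."""
    out = img.copy()
    h, w = img.shape[:2]
    out[h - box:, w - box:] = out[h - box:, w - 2 * box:w - box][:, ::-1]
    return out
```

A mean-difference check like this is cheap enough for millisecond budgets, but it will misfire on textured images, which is presumably where the real tuning work went.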
I also made the extension paid mostly as an experiment to understand pricing, payments, and distribution for small developer tools. Building the payment/licensing flow took more time than the algorithm itself.
If anyone is curious, I’m happy to share more details about the detection heuristics and fill strategy.