But you can't divorce that from computing technology in general. A TI-83 used a Z80 in 2000 and was priced at 1990s Z80 rates; it was already gouging even back then! Now, 26 years later, the TI-84 uses an eZ80 (or something similar), which was introduced in 2001.
TI has always gouged their captive market. It is just increasingly ridiculous when those students also have smartphones.
FWIW I think these graphing calculators are quite good for 2026 students! It is nice to have a computer which is actually comprehensible. They just need to be more like $50. $160 is just evil.
Shrug. The SAT and ACT don't let you use an iPhone on their exams. $160 is what the market will bear. I'm not saying it's right or wrong, it just is, and perhaps there's a market for a much cheaper competitor to beat TI here.
You can use any calculator that meets the restrictions for things like the SAT.
However.
The entire year, your textbooks, your teacher, and your in-class practice were walking you through the specific commands you need to select to actually do things like graphing and solving.
If little Timmy is unable to read the manual about how to do math he doesn't yet know with whatever his specific calculator is, he is at a severe disadvantage, and the teacher basically cannot help him.
A friend in high school bucked the trend and used a Casio in our TI-based education, and did just fine for himself, but he was apparently a smart kid.
You previously acknowledged it's a "very captive market" that you "would've expected Texas Instruments to try gouging" :) "$160 is what the very captive market will bear until the state-sanctioned gouging backfires" is a less compelling argument.
"Shrug" is kind of gross. Seems like you're being reflexively cynical.
Edit: to be clear the problem here is really local school boards being antidemocratic and unaccountable, not TI being greedy.
It is also things like "I can feel that my left knee is bearing a little too much weight, I should shift weight to my right hand and use that to push myself up" - things that come automatically to animals after learning the hard way in infancy (some of it is innate; baby animals are clumsy, but usually more mobile than human infants). Regardless of learned-vs-instinct, these abilities rely on sophisticated "sensors" and cognition. I suspect engineering the sensors is actually a bit harder, but I'm also not optimistic about a deep learning approach to the cognition.
A significant underappreciated advantage of animals over AI: lifeforms can "learn the hard way" more easily than 2020s robots because of cheap self-repair. AI labs are reluctant to damage their robots, but an essential part of humans learning to move safely is severely bonking your head and reckoning with the consequences - "hey, dummy, why did you trip and fall and bonk your head? Because you were running like an idiot."
I am learning the hard way to this day :) I have been practicing with work knives. A few months ago I got stupid and impatient, and sliced my thumb nastily. If I hadn't blocked the cut with my thumbnail (still ruined), I might have chopped bone. It is hard to say precisely what I learned from this experience - "don't be stupid and impatient" is facile - but I know I learned a lot. I am actually optimistic about targeted surgical robotics. But for a general-use humanoid robot, I would not want to give it a knife if it's not capable of feeling pain. I never use big knives anywhere near my cats because I understand intuitively that they are nimble and unpredictable and easily stabbed by knives. I didn't need to be trained on this. A robot kind of does. Yikes.
As a longtime F# developer and longtime recipient of STEM academic bullying[1] I refuse to use LLMs in large part because ChatGPT-3.5 was so ridiculously bad and obvious about copy-pasting from F# GitHub repos. I never felt the AGI, I just saw a plagiarism machine whose decorations had fallen off.
Eventually I am sure someone at Microsoft noticed and rang the RLHF alarm, so GPT improved substantially. It seems pretty usable for F#. I am sure some unprincipled F#er is crushing it with agents these days. But I didn't think "oh boy they solved the plagiarism problem, let's go generate some slop!" I thought "oh great, now it's no longer going to be blatantly obvious when ChatGPT plagiarizes." I really don't want to roll a d100, or even a d1000, to completely compromise a core value of mine in exchange for a productivity benefit. I'll just be slow and jobless, thanks. This is serious: I am getting into solar installations and junk hauling.
[1] The "students don't want to think" problem is much older than LLMs. In 2007 I took a senior-level PDEs class, and almost everyone copied my homework because I was actually motivated to study PDEs, and too psychologically weak to resist those mean lazy math majors. Then it happened again in math grad school! Actually unbelievable. Why are you even in the program?
Intelligence is certainly not compression. People need to think more carefully about how it is that cockroaches and house spiders are able to live comfortably and adaptably in human houses, which are totally novel environments that have only existed for at most 10,000 years. Does it really make sense to say that they decompressed some latent knowledge about attics and pantries, perhaps from a civilized species of dinosaur? I think they have some tiny spark of true general intelligence that lets them adapt to situations vastly outside the scope of their "training data."
I would be much more convinced about AGI 2027 if someone in 2026 demonstrates one (1) robot which is plausibly as intelligent as a cockroach. I genuinely don't think any of us will live to see that happen.
I doubt you would ever blurt out a copyrightable portion of a book without realizing that's what you're doing. That's the biggest difference.
In particular, you are a legal person who can be sued in civil court if you infringe on copyright. If I ask you "can you help me write a blog about Manhattan?" and you plagiarize the New York Times, then the NYT sues me for copyright infringement, then I would correctly assume you conned me, and you are responsible for the infringement, and I would vindictively drag you into the lawsuit with me. With LLMs it involves dragging in a corporation, much much uglier. Claude is not actually a person and cannot testify in any legally legitimate trial. (I am sure it will happen soon in some kangaroo court.)
Yes, we've known the line is blurry for hundreds of years; that's why we have courts. That has nothing to do with the specific problem of LLMs infringing copyright. LLMs need to be held to much higher scrutiny because they are not capable of taking legal responsibility for copyright infringement, regardless of whether it's verbatim or a more ambiguous case, and their users can't be expected to know offhand whether the output is copyrighted or not.
Based on this comment: https://news.ycombinator.com/item?id=47960014 it seems like you are just ignorant about the basics of copyright law, and pretending this ignorance is some sort of flaw in the idea of copyright itself.
>> Whereas earlier you had to use something that was mass produced to be satisfactory for everyone
As someone who recently started using OpenSCAD for a project I find this attitude quite irritating. You certainly did not "have to" use popular tools.
The OpenSCAD example is particularly illuminating because it's fussy and frustrating and clearly tuned towards a few specific maintainers; there's a ton of things I'd like changed. But I would never trust an LLM to do it! "Oh the output looks fine, cool" is not enough for a CAD program. "Oh, there are a lot of tests, cool" great, I have no idea what a thorough CAD test suite looks like. I would be a reckless idiot if I asked Claude to make me a custom SCAD program... unless I put in a counterproductive amount of work. So I'm fine with OpenSCAD.
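For anyone unfamiliar with why "the output looks fine" is a dangerous standard here: OpenSCAD models are plain-text CSG scripts, where tiny details matter for real-world printability. A toy example of my own (not from any project in this thread):

```openscad
// A 20x20x10 mm plate with a 5 mm radius hole through the middle.
difference() {
    cube([20, 20, 10]);
    // The cylinder is extended 1 mm past both faces on purpose:
    // coincident surfaces produce ambiguous/broken geometry.
    translate([10, 10, -1]) cylinder(h = 12, r = 5, $fn = 64);
}
```

A render of this can "look fine" even when details like the overshoot or the `$fn` facet count are wrong, and those mistakes only show up as a non-manifold mesh or a hole the bolt won't fit through. That's the class of error I can't trust an LLM-generated CAD kernel to avoid.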
I am also sincerely baffled as to how this stimulates the "labor economy." The most obvious objection is that Anthropic seems to be the only party here getting any form of economic benefit: the open-source maintainers are just plain screwed unless they compromise quality for productivity, and the LLM users are trading high-quality tooling built by people who understand the problem for shitty tooling built by a robot, in exchange for uncompensated labor. It only stimulates the "labor economy" in a Bizarro Keynesian sense, digging up glass bottles that someone forgot to put the money in.
I have seen at least 4 completely busted vibe-coded Rust SQLite clones in the last three months, happily used by people who think they don't need to worry their pretty little heads with routine matters like database design. It's a solved problem and Claude is on the case! In fact unlike those stooopid human SQLite developers, Claude made it multithreaded! So fucking depressing.
This is funny because I was in the same situation, and actually used Claude to make a custom CAD program inspired by OpenSCAD :) https://fncad.github.io
You definitely need to have a strong sense of code design though. The AIs are not up to writing clean code at project scale on their own, yet.
This is a good example of what I mean! fnCAD appears to be a significantly buggier and highly incomplete version of OpenSCAD, where AI essentially grabbed the low-hanging fruit - albeit an impressively large amount of fruit - and left you with the hard parts. I fail to see how this solved any problems. Maybe it was an experiment, which is fine. But it's not even close to a viable CAD product, even by OpenSCAD's scruffy FOSS standards, and there's no feasible way to get it there without a ton of human work.
Not trying to denigrate the work here, as such. But this certainly didn't convince me that using AI to replace OpenSCAD (or any other major open-source project) is a good idea. The LLMs still aren't even close to being able to pull it off.
It solves all my problems! It's buggy and incomplete because it's "1.0 feature complete" for my own use. I've been doing lots of 3D printing with it, so it's definitely being dogfooded. File bug reports? I'm confident that features can be added as required, it's reasonably clean code.
I mean, to be fair, a one-user project is never going to be as bug-free as a tens-of-thousands-of-users project. That's just inherent and not an AI issue. If you judge AI projects by that standard, they'll always come up short. It's a sampling issue. An AI project that's gotten to the level where it competes with a traditional project will always be buggier and less feature-complete, because AIs speed up development: it will simply have seen far less, well, polish to get there.
Anthropic will probably do what Google did in the 2000s, which is give jobs to all the open source developers whose work helped them get there.
Civilization isn't monotonic. People keep solving the same problems over and over again, telling the same stories with a different twist. For example in 1964 having a GUI work environment with a light pen as your mouse was a solved problem on IBM System/360. They had tools similar to CAD. So why don't we all just use that rather than make the same mistakes again. Each time a new way of doing things comes out, people get an opportunity to rewrite everything.
Well, good luck compiling CAD software from 1964 on a 2026 aarch64 machine, and good luck treating it as an applicable solution for today's problems.
The most effective response is rarely to argue the general case. Instead, acknowledge the concern, offer a brief reframe, and propose one concrete demonstration on the person’s own code. Most concerns are resolved by a single successful experience.
First of all, Google didn't have to write this stuff about Kubernetes, suggesting psychological tricks and magic demonstrations to cajole people into agreeing with you. Kubernetes was happy to discuss the general case - I don't like k8s and don't think they had a bulletproof argument, but they offered a pretty good one. What Anthropic is doing here is very very weird. I said "Scientology" earlier and I was not kidding.
Part of the reason LLMs have led me to tear out so much of my own hair is how many people seem to have made it through four years of STEM college without developing any scientific thinking ability whatsoever. A truly stunning number of people have been wowed by "a single successful experience." Actually that section is full of horrible logic:
>> Concern: "I am faster without it."
>> Suggested response: That is likely true for code the person writes routinely. Suggest trying it on the work they tend to avoid: legacy files, unfamiliar services, or test scaffolding, where the leverage is highest.
>> Evidence to offer: Time one tedious task both ways and compare.
This isn't just unscientific and manipulative: it's really goddamn annoying! If someone times me at 1.5 hours reading about and learning an unfamiliar service, and smugly says Claude learned it in 12 seconds of "thinking," either my laptop or a certain Claude Champion is getting thrown out the window.
This Scientology-ass blog aligns startlingly well with my hypothesis that certain tech workers (including CEOs like Dario Amodei and Satya Nadella) are excessively enamored with LLMs because of a fundamental spiritual emptiness and ignorance.
Imagine calling yourself a "Champion" and dispensing nuggets of wisdom like this:
>> When a colleague asks how you accomplished something, the most useful response is the prompt you actually used. They will learn more from running that prompt against their own problem than from any description you could write, and it gives them something they can act on immediately.
Colleague: How did you get it to find that race condition?
Champion: I asked, "The test in @tests/scheduler.test.ts is flaky, figure out why," and it traced two unjoined promises in the scheduler. Try the same phrasing on your test.
People quickly became too embarrassed to call themselves "prompt engineers." I don't think anyone is champing at the bit to be the office Claude Champion.
I am a musician and deeply morally opposed to any form of generative AI that has unauthorized training data. That means I am morally opposed to any useful system. This seems to be a majority opinion among creatives: they are plagiarism machines built on stolen data. Using them for anything is always unacceptable.
>> Notepad++ for macOS is maintained by Andrey Letov, who wrote the Objective-C++ Cocoa UI that replaces Notepad++'s Win32 front-end. The app is available to download from the Notepad++ website.
That is not the Notepad++ website! It's some other website. I understand that this is a fairly legitimate and professional port. But this framing is unacceptable. It's especially grating considering "Notepad++" is trademarked in France: https://data.inpi.fr/marques/FR5133202 [1]. The software is GPL but that doesn't mean you can slap the trademark on any derived codebase - legally problematic in France, but it's disrespectful worldwide. The Mac port really should have been released under a similar but clearly distinct name, and MacRumors should have been way more responsible about framing the story.