No. AI is a must for software development. It's non-negotiable. The productivity gains are too great. The era of 100% human-written code is over. People will still do it as an idle curiosity, for personal projects only they intend to use. But even those open source projects with significant user bases that forbid the use of AI (like, afaik, NetBSD) will be eclipsed by those that support it in terms of features, capability, and security. And the commercial world? Forget it. You cannot keep pace with your employer's expectations unless you learn to use these tools well. This is not up for debate. It's reality.
Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI, so if you're still not, that's a PEBKAC. You are not using the tools correctly. Figure it out before you complain.
> "No. AI is a must for software development. It's non-negotiable."
Absolutist rubbish.
> "But even those open source projects with significant user bases that forbid the use of AI [...] will be eclipsed by those that support it in terms of features, capability, and security."
As is this. Whether a language model is relevant to a project, open source or otherwise, depends heavily on its nature (ethics, use case, deployment, working environment/culture, et cetera).
LLMs may be a must for programming, but not for engineering. Writing code is the easy part once you figure out what actually needs to be built in the first place.
Indeed. But figuring out what actually needs to be built is the systems analyst's job, not the programmer's. It takes people skills and holistic thought, something programmers are generally poor at (and AI certainly is no good at, at least not today).
I know how to do things by hand, man. But the writing is on the wall: that skill is going the way of writing programs on punchcards. And there's little we can do about it because the economics in favor of LLMs are like laws of physics.
Yes, model collapse is gonna suck. But LLMs are not just left to self-train, they are guided by human researchers who are going to find ways to groom and direct the models to avoid collapse. They can make billions by shipping better models, so why wouldn't they invest a lot of effort in that?
Our lives are much more than our computing environments. By surrendering a bit of control of our computing environments we free up our brains to devote to other things in life: loved ones, pets, gardening, home maintenance, other hobbies and sports...
Millions of happy Apple users can't be wrong on this.
Maybe, but for some of us, the peace of mind comes from stability and minimal friction with our tools.
Whenever I touch my config, it's because I got frustrated with one operation and tried to see if it could be done faster. If you use your computer like a toaster, you wouldn't care that much about power usage. But for me it's a creative lab, and I don't want a generic cubicle.
As a big fan of WordPerfect on my first DOS machine (286 clone), I agree. I respect authors like GRRM for sticking with WordStar, but whenever I get nostalgic and start wondering about WordPerfect in DosBox, I remember I use emacs and typst. All the good things about WordPerfect, but vastly superior.
I keep seeing ads for expensive "writerdecks" that run between $500 and $1200 and have a bare-minimum OS that is intended for distraction-free writing. I keep wondering how these are any better than an old laptop, FreeDOS, and WordPerfect 5.2, except as Veblen goods.
Favorite ZSNES moment: I took a math class in a lecture hall equipped with laptops in a year when my university was experimenting with laptops as a pedagogical tool, but hadn't yet pulled the trigger on requiring them or offering them for sale (as compared to the standard dorm room desktop). While the lecture was being given, we were supposed to have our laptops open with the lecture material up. But of course this one kid had installed ZSNES on his and was playing Killer Instinct...
This is one of those happy little lies we tell ourselves, you know, like "Agile is to help us collaborate better." No, your company adopted Agile so that upper management has near-real-time feedback on how teams are doing, who the top performers are, and who the stragglers are.
When it comes to AI, the entire value proposition is that it is a substitute for human thought. Your thought is now no longer as valuable, and it is also no longer the bottleneck. That's the reality.
"Oh it's just here to remove the drudgery and let us focus on what's really important." When it comes to many forms of knowledge work, including programming, the "drudgery" is the important bit. Doing the long, slow work of integrating systems is how you gain intimate knowledge of the shapes of those systems. "Oh, you just have to think at a higher level of abstraction now." Since "higher levels of abstraction" are all about removing details, in the end this means less information to be manipulated, hence less thinking that you are responsible for.
Make no mistake, we programmers are coal miners, and the world is going solar punk. Have a nice day.
My father, who was a mechanical engineer, has noted an instance of "brainrot" occurring with younger engineers: they are instructed in how to design parts, but not how to machine them, so they lack physical intuition about what kind of finish and tolerance is appropriate for a given part. This isn't really the fault of the young engineers, nor is it the fault of CAD which is still mainly a more efficient, more expensive draftsman's pencil, just a consequence of the fact that engineering curricula have largely optimized away the craftsmanship aspects of actually building things, leaving mechanical design work to be a mainly theoretical exercise.
With AI-assisted development we are at risk of something similar happening; the promise of LLM-based programming assistance is the ability to very rapidly knock together something according to a high-level specification without developing the craftsman's "feel" for how it actually runs. The scope of what's passed on in the discipline is narrowing, and people are forgetting essential skills they used to rely on in order to craft quality software.
I consider the scene with Dr. Chandra and SAL 9000 to be a fairly realistic predictive description of how experts interact with LLMs. SAL even has a somewhat obsequious personality.
Iosevka is the king of scalable terminal/programming fonts. I'm not sure why, maybe it's because the glyphs have lines and angles that look "terminal-y" in the same pleasing way Terminus and the 3270 font do whilst avoiding the problems that accompany trying to scale a pixel font.
Then you are probably not interested in this work at all. It is meant to develop Lisp—a language whose primary advantage in 2026 is ergonomics to humans, particularly a certain kind of human. If you're doing 100% agentic development, that advantage disappears and you might as well use something popular and statically typed, like Rust or TypeScript.
> If you're doing 100% agentic development, that advantage disappears
I beg to differ. Turns out, a Lisp REPL - an actual, "true" REPL, not something like Python's (which is not the same thing) - is an enormous multiplier for agentic workflows.
a) Lisp code can be very terse yet retain its readability - it never becomes cryptic like APL. Therefore, it's more token efficient. It has actually been shown that Clojure is one of the most token-efficient "mainstreamish" PLs. https://martinalderson.com/posts/which-programming-languages...
b) When you give an LLM a closed loop system where it can evaluate code in a live REPL and observe the results, it stops guessing and starts reasoning empirically. Instead of predicting what code will do, it can run it, read the output, adjust, and iterate - the same way a skilled human developer works. Incremental evaluation of forms maps naturally to how an LLM generates tokens.
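To make the closed loop concrete, here's a minimal sketch of what such a session might look like (the helper names, window titles, and results are all hypothetical, illustrating the eval-observe-adjust cycle rather than any real API):

```clojure
;; The agent evaluates one form at a time in the live REPL and
;; reads the actual result before generating the next form -
;; no guessing about what the code will do.

;; Step 1: probe the running system (visible-windows is a
;; hypothetical helper exposed by the host environment)
(count (visible-windows))
;; => 3

;; Step 2: the observed result informs the next form
(map :title (visible-windows))
;; => ("Emacs" "Terminal" "Safari")

;; Step 3: act, then immediately verify the effect
(focus-window (first (visible-windows)))
(:title (focused-window))
;; => "Emacs"
```

Each form is small and self-contained, so a wrong assumption gets caught one step later instead of after a full edit-compile-run cycle.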
This isn't some theoretical hand-waving - I experience it every day. My WM on Mac is yabai, controlled via Hammerspoon, which uses Lua, which means I can use Fennel, which means I get a Lisp REPL. I give the LLM a task, something to do with my app windows - it connects to the live REPL and starts analyzing, prototyping, and poking into things interactively.
All my custom MCPs are written in babashka (Clojure) https://github.com/agzam/death-contraptions - whenever there's a problem or I need to improve my AI harness, LLM just does it from "inside out" and it takes less time and fewer tokens.
My main editor is Emacs - an LLM can fully control it. I can make it change virtually any aspect of it. To load-test the MCP that does that, I made it play Tetris in Emacs. And not just run it, but play it for real - without losing. It was insane.
And of course, day-to-day I have to deal more with non-Lispy, non-homoiconic languages. And to be honest (even though of course I'm biased here), static type systems are exactly the thing whose advantages, in practice, seem to stop making any big difference, while the Lisp REPL feels far more useful.
Technically, I think this is meant to develop Coalton, which is also statically typed and incredibly effective as a language for agents. All those ergonomic benefits that humans enjoy also allow AIs to develop lisp systems quite rapidly and robustly.
Not true. Are people not interested in archeology or history or museums? Dismissing such things as invalid is offensive. There are projects to reproduce artifacts from ancient history, like the Lycurgus cup.