"Agentic typewriters are the future of typewriting. The idea is that something intelligent understands what you want to type and types it for you. Unless you really think we've reached the pinnacle of typewriter interfaces with repetitive key taps and carriage returns."
See how that sounds a bit silly? It's because it presents a false dichotomy: that our choice is between either the current state of interfaces or an agentic system which strips away your autonomy and does it for you.
> I just think games aren’t an area where this is nearly as much of an issue.
That's news to me as a game dev. I only get a few milliseconds every frame in which all my calculations need to run. If my program is built on spaghetti code, performance suffers and it becomes very noticeable very quickly.
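To put rough numbers on that frame budget (the per-subsystem costs below are my own illustrative figures, not from the comment): at 60 FPS everything has to fit in about 16.7 ms, so even one subsystem doubling in cost is immediately visible as dropped frames.

```python
# Back-of-the-envelope frame-budget arithmetic (illustrative numbers only).
TARGET_FPS = 60
frame_budget_ms = 1000 / TARGET_FPS  # ~16.67 ms for input, AI, physics, and rendering combined

# Hypothetical per-subsystem costs for one frame:
physics_ms, ai_ms, render_ms = 4.0, 3.0, 8.0
total_ms = physics_ms + ai_ms + render_ms            # 15.0 ms: fits the budget
slow_total_ms = 2 * physics_ms + ai_ms + render_ms   # 19.0 ms: physics got 2x slower, frames drop

print(f"budget={frame_budget_ms:.2f}ms ok={total_ms}ms slow={slow_total_ms}ms")
```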
If there was a website called InfiniteAppStore, which contained every app imaginable, and where you could type in your search and it would return the code for that app, would you find that as satisfying to use as Claude Code?
On the surface this does not sound as satisfying, because it more resembles shopping than coding. But once Claude Code is finally tuned to do its job perfectly, you will essentially be using that infinite app store. You're actually using it right now, every time you use Claude Code — just an imperfect version of it.
If you enjoy using AI because it allows you to "will anything into existence", it's because the process is currently imperfect. Using Claude Code is closer to shopping than coding, but because the process is obfuscated, it feels like you're the one making the products in the shopping catalogue every time you place an order.
For folks who are not familiar, this is "The Library of Babel" by Borges. There is no creating, just selecting among character sequences we already knew were possible.
The Library of Babel contains all possible books, but people are unable to find the good ones among the sea of random rubbish.
The LLM equivalent would be to prompt "give me an app" without specifying what that app does, then repeating that until you get the app you are looking for, each time checking by hand whether the app does what you want.
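As a toy model of that "prompt blindly and check by hand" loop (the uniform-sampling assumption and the numbers are mine, purely for illustration): if there are N possible apps and only one is the app you want, blind sampling takes about N tries on average, which is why leaving out the specification makes the search hopeless as the space grows.

```python
import random

def expected_tries(num_possible_apps: int, trials: int = 10_000, seed: int = 0) -> float:
    """Average number of blind 'give me an app' prompts until the right app appears,
    assuming each prompt draws uniformly at random from the space of possible apps."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        tries = 1
        while rng.randrange(num_possible_apps) != 0:  # index 0 stands for "the app you wanted"
            tries += 1
        total += tries
    return total / trials

# This is a geometric distribution: the mean is ~N, so the cost of the search
# scales with the size of the app space, not with the quality of the generator.
print(expected_tries(100))
```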
I agree. Thinking about it a little more, I've realized that people create things today even if unnecessary (e.g. grow their own food), a lot of it for the satisfaction of it.
So we would still build stuff, but it would not be out of necessity.
Trust me, the two are not the same, and are orders of magnitude different in terms of human satisfaction.
When I walk down a street, I get 10 people stopping me to ask "Where did you get that?". When I tell them I made it, their heads explode. I know which side of that interaction is more satisfying.
We also go all-out for Halloween, and at the big Halloween festival there is literally a line down the street of people waiting to take photos with us. We created something amazing.
In media there was a rule called 1-9-90: 1% create, 9% comment, and 90% consume or are silent/don't care.
Richard Branson realized that a company starts to behave differently once it grows past about 135 people, which coincides with the average number of people you can consider personally known to you.
Context switching is a bitch. You cannot do it for a long time. The abundance brought by AI will consolidate somehow, because people cannot digest everything it creates.
There are more than 45,000 models available on Hugging Face (if I remember right). Choose wisely :)
One potential solution to this is AI summarization. Imagine coming home, and while preparing dinner your AI assistant recounts what happened in all your favourite TV shows that day. Then while you're doing the laundry, it tells you about all the new games it found and tested for you.
These are just thought starters, but something like this could significantly raise the ceiling on what one person is able to consume in a 24 hour period.
Adults tend to forget that they gained their powers of reasoning by exercising them.
Getting a summary, the way you described it, comes minus the effort required to think it through. That's fine for information you are already informed about.
This is related to the illusion of explanatory depth. Most of us "know" how something works, until we have to actually explain it. Like drawing a bicycle, or explaining how a flush works.
People in general are not aware of how their brain works, or how much mental exercise they used to get from the way the world is set up.
I suppose we can set up brain gyms, where people can practice using mental skills so that they don’t atrophy?
If there was an infinite App Store, we wouldn't have scarcity and I'd be doing literally anything else other than selling my time for money. I'd also be killed because there's no point to my owners/the world keeping me around anymore in that scenario, except, maybe for my winning personality/companionship.
No, I don't. But I hoped that asking the question "what would an AI tool look like once it was functioning perfectly?" might reveal something important about its underlying nature.
For example, if a carpenter was given a perfect hammer, or a painter a perfect paintbrush, would they find their craft any less enjoyable? AI, on the other hand, falls into a different category of tools (if we can call them tools at all) since they would no longer be enjoyable as tools of creation once they reached their "perfect" state.
I dunno, browsing McMaster-Carr feels like both shopping and creating at the same time.
Typing is just choosing from the latent space something special, too. Could just be random words, or, even fewer, random grammatically correct sentences.
There are two opposite answers here, and I feel like I could argue either one:
1) Humans were never held accountable, really
Outside of a few regulated industries, the worst that happens to an engineer who pushes negligent code is that they get fired. But after that happens, what actually changes? The organizational structure of the company that allowed the employee to push bad code still exists.
2) Humans will still be held accountable
If a human (managing a fleet of AI agents, let's say) ends up deploying bad code to production, they won't be able to point to the AI agent and say "it was them that did it!" -- it will still be the human at the end of the line that is held responsible.
I think the difference was that before all this, there would be additional information embedded in the way a person types, or the way they'd written their code, that you could use to build a larger picture of the situation.
Right now it's as if everyone started wearing digital face masks that replaced their facial expressions with "better" ones. Sure, maybe everyone's faces weren't perfect before, but their expressions contained useful information.
I have more family members who’ve been to prison than college. The mainstream narrative around how dangerous prison is is extremely overblown, and limited to a few prisons and generally to those who engage in organized crime.
Most people come out of prison in WAY better shape than they went in
It's not prison, but I know people who spent time in various county jails for weeks to months, and all of them definitely came out worse, and did their best to stay as far away as possible from going back (at least as far as I could tell).
If this were true and not just anecdotal, the number of repeat offenders would be a lot lower than it is now (ask your family members how many of their cellmates were there on their first stint…)
How many of those people had 400 million dollars to their name?
You're discounting the risk of inheriting a large sum of money while surrounded by criminals. Getting sudden access to that sort of money is dangerous at the best of times. I'd be scared enough outside of prison, let alone in the presence of organized crime.
I think it depends on your process. Problems that require creative solutions are often solved through the act of doing. For me, it's the act of writing code itself that ignites the pathways and neural connections that I've built up in my brain over the years. Using AI circumvents that process, and those brain circuits go unused.
> Toby Pohlen, a former DeepMind researcher, was put in charge of the “Macrohard” project to build digital agents that Musk said could replicate entire software companies. Musk said it was the “most important” drive at the company. The name is a “funny” reference to Microsoft, the billionaire added. Pohlen left 16 days later.
When I was 9 years old, my uncle asked me what I was going to do for work when I got older. I told him I was going to start a company called "MacroHard", and become the richest man alive. He told me that's not how the world works. Turns out it is.
Turns out it works a bit like that, yes - especially if you are the loudest chest-beating hominid with the largest pile of fruits, - but mostly not really.
I suppose I see the split a little bit differently. To me it's more that one camp of developers can still get a hit of satisfaction as if they built something themselves even if it was entirely generated by AI.
Would they get the same satisfaction from cloning a public repo? Probably not. It's too clear to their brain that they didn't have anything to do with it. What about building the project with CMake? That requires more effort, yes, but the underlying process is still very obviously something that someone else architected, and so the feeling of satisfaction remains elusive.
AI, however, adds a layer of obfuscation. For some, that's enough to mask the underlying process and make it feel as if they're still wielding a tool. For others, not so much.
I don't follow your analogy at all. Suppose I want to build an application with xyz features. My research yields that there are no such applications that include xyz features. However, there are plenty of applications that might have x feature, y feature or z feature, or a combination of two, but not all three.
If there are no such applications, I don't have a choice but to write it myself. This could take some time, especially if an MVP is all I'm interested in. LLMs are a novel tool in building an MVP. If time is a constraint, I can use an LLM, which should excel since xyz features are in its training set.
I suppose your analogy follows for developers who write applications that support abc features even though there are already applications out there that support abc features. Yes, I don't think that is very interesting. Your umpteenth clone of Snake is not interesting.
Further, I don't argue that 100% prompting an application together isn't building something themselves. Built on the shoulders of leviathans, as libraries were built on the shoulders of giants of yore.
But an application that combines xyz features is novel in this scenario. There is inherent value in that.
I'm not arguing whether AI has value as a code-generating application. I'm more interested in whether you, as a developer, still get satisfaction from building with AI, the same way you would get satisfaction if you built it yourself.