titzer's comments | Hacker News

We're going to have to put all the bad code into a Wasm sandbox.

It's partly the industry and it's partly the failure of regulation. As Mario Wolczko, my old manager at Sun, says: nothing will change until there are real legal consequences for software vulnerabilities.

That said, I have been arguing for 20+ years that we should have sunsetted unsafe languages and moved away from C/C++. The problem is that every systemsy language that comes along gets seduced by having a big market share and eventually ends up an application language.

I do hope we make progress with Rust. I might disagree as a language designer and systems person about a number of things, but it's well past time that we stop listening to C++ diehards about how memory safety is coming any day now.


In the long term, you're right, but in the short term, it's going to be a bloodbath.

That's assuming the model is actually as good as they say it is. Given the number of AI researchers over the past 3 years claiming supernatural capability for the LLM they have built, my Bayesian skepticism is through the roof.

Don't confuse Bayesian skepticism with plain old contrarian bias. A true Bayesian updates their priors, and I'd say this is an appropriate time to do so. Also don't confuse what they sell with what they have internally.

There haven't been any priors to update so far.

All LLMs got better for sure, but they are still definitively LLMs and have not shown any sign of having purpose. Which also makes sense, given their very nature as statistical machines.

Sometimes quantity by itself leads to transformative change... but once, not twice, and that has already happened.


Anthropic has behaved the least like this of the AI companies.

They made a claim that 100% of code would be AI generated in a year, over a year ago.

That was a prediction. It was not a claim of their current capabilities. If that is the one you reach for then I feel my point has been made.

They were right, it's hit 100% at a number of large tech companies. (They missed their initial prediction of 90% 6 months ago, because the models then available publicly weren't capable enough.)

Please tell me those companies so I can find alternatives. I'm using AI every day and there's no way I would trust it to do that.

The transition is pretty complete at e.g. Google and Meta, IIUC. Definitely whoever builds the AI tools you're using every day isn't writing code by hand.

I really just don't believe it. I have not met anyone in tech who writes zero code now. The idea that no one at Google writes any code is such a huge claim it requires extraordinary evidence. Which none ever gets presented.

Anecdotally I and my colleagues haven’t written a substantial line of code since January and this isn’t a mag7; I would be very surprised if mag7 were writing anything by hand unless it’s a custom DSL.

Can confirm that basically no one at Google or Meta hand writes code outside extremely extremely niche projects

I'm surprised to hear that. One of us is in a bubble, and I'm genuinely not sure who. I have not met anyone in tech (including multiple people at Google) who does still write code. I've been recreationally interested in AI for a long time, which is a potential source of skew I suppose, but I do not and most people in my circles do not work on anything directly related to AI.

So why aren't they laying people off and pumping the extra money towards research efforts associated with LLMs? Lmao.

They should all cut down their labour input right now if what you claim is true.


Have you considered that some companies want to grow instead of laying people off? No one at Anthropic writes code, they manage 20 Claude Code SWEs.

At many of the best tech companies, the conventional wisdom has always been that there's a huge backlog of stuff to be done. They don't want to deliver 100% of their roadmap with 50% of their employees, they want to deliver 200% of their roadmap with 100% of their employees. (And the speedup is not as high as these numbers imply for many kinds of performance, security, or correctness-critical software.)

Some companies like Block, Oracle, and Atlassian have indeed been laying people off.


Lmao man this is absolute nonsense.

Google has done nothing but destroy value with many of its ‘bets’. Your roadmap stuff is irrelevant - if you don’t have value creating projects in the pipeline and/or labour is augmented you should be laying off - period. Sundar’s job is to maximise the stock price.

So once again - nonsense. Now stop spreading crap that clearly fills people with fear. I can tell you have no understanding of corporate finance and how the management of tech firms actually think these things through.


I'm spreading what people involved in management of tech firms have told me. Perhaps they were lying, but to me it seems consistent with what I observe in the news and in my personal capacity.

I'm also not quite sure your alternate theory is self-consistent. If Google has been frequently destroying value, and companies invariably lay people off when their projects aren't producing value, doesn't that mean they should have already been laying people off?


Back in the 1990s, when telecoms needed to build out networks and infrastructure to cover a vast geography with cell towers for last-mile delivery, these pricing and delivery models made sense. Today, with hundreds of megabits of bandwidth, a planet-spanning internet, and space-based internet, they do not. SMS is a hilariously backward and outdated system desperately clinging to a pricing model that doesn't reflect carrier costs in the slightest. I shudder to think what the internet would look like if thinking like this had twisted it the way it twisted mobile networks.


The reaction was swift. Let's hope there are real consequences, because consequences are the only thing that change behavior.

They wrote at the bottom:

> I'm going to be THE WHIZ KID BILLIONAIRE OF THIS GENERATION. WITNESS HISTORY.

Hey, wonderful. But the rules apply to us all, your whiz-ness.


> At this point I was genuinely feeling really fucking cool, cos I was just thinking this would be a really cool story to tell investors when I have a startup. The site was just booming.

This is Mark Zuckerberg posting this, check.


It's in orbit. I for one love the fact they recycled some well-designed engines and made this mission a success (so far).

The irony is that the vast deskilling that's happening because of this means that most "software engineers" will become incapable of understanding, let alone fixing or even building new versions of the systems that they are utterly dependent on.

There should be thousands or tens of thousands of people worldwide who can build the operating systems, virtual machines, libraries, containers, and applications that AI is built on. But the number will dwindle, and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.

God I hope it doesn't all crash at once.


There is a deadly game of chicken going on. Junior recruiting already stopped for the most part. Only way this doesn’t end in a catastrophe is if AI becomes genuinely as good as the most skilled developers before we run out of them. Which I doubt very much but don’t find completely impossible.

And the irony is that AI usage should make onboarding juniors easier.

Before it was "hey $senior_programmer where's the $thing defined in this project?", which either required a dedicated person onboarding or someone's flow was interrupted - an expected cost of bringing up juniors.

Now a properly configured AI Agent can answer that question in 60 seconds, unblocking the Junior to work on something.

And no, it doesn't mean juniors or anyone else get to make 10k-line PRs of code they haven't read and don't understand. That's a very different issue that can be solved by slapping people over the head.


The problem is that juniors given access to AI don't seem to learn as much. AI just gives them fish over and over instead of learning how to fish.

> The problem is that juniors given access to AI don't seem to learn as much.
I see this first-hand; they don't even know what they don't know, so they circle over and over, with AI leading them down rabbit holes and into code that breaks in weird ways they can't even guess how to fix... stuff that, if you were a real programmer, you would have written in a few minutes, let alone hours or days...

Yea, giving people a blank Claude with no setup will get you that.

What you could do is encourage (or force, with IT's assistance) them to use a prompt (or hook or whatever) that refuses to do the work for them, instead telling them where to make changes and what to change, without actually doing the work.


Or if code quality stops mattering, in a kind of "OK, the old codebase is irretrievably spaghettified. Let's just have the chatbot extract all the requirements from it and build a clean-room version" kind of way. It's also not impossible we go that route.

How many kernel devs does the world need? A dozen or two?

It will be the same with software. AI will be writing and consuming most software. We will be utilizing experiences built on top of that, probably generated in real time for hyper personalization. Every app on your phone will be replaced by one app. (Except maybe games, at least for a short while longer).

Everyone's treating writing code as this reverent thing. No one wrote code 100 years ago. Very few today write assembly. It will become lost because the economic necessity is gone.

It's the end of an era, but also the beginning of a new one. Building agentic systems is really hard, a hard enough problem that we need a ton of people building those systems. AI hardware devices have barely been registered, we need engineers who can build and integrate all sorts of systems.

Engineering as a discipline will be the last job to be automated, since who do you think is going to build all the world's automation?


> How many kernel devs does the world need? A dozen or two?

You're low by several orders of magnitude. "The 2025 development cycle saw 2,134 developers contribute to [Linux] kernel 6.18" [1]

[1] https://commandlinux.com/statistics/linux-kernel-contributor...


How wildly dismissive of the foundation of the $X billion software industry. You think humans just stumbled into writing code by accident or something?

How does building agentic systems, a "really hard" problem, not just end up a "regular code" problem? Because that is what it is. A distributed systems problem with non-deterministic run lengths. How do you switch agent contexts? Similar to how you solve regular program context switching. How do you search tool capabilities and verify them? How do you effectively manage scheduled tasks?

Oh, look, you've just invented the operating system kernel. Suddenly, those 'dozen or two' experts don't seem so archaic after all!
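The kernel analogy can be made concrete with a toy sketch (purely illustrative, not any real agent framework; all names here are made up): cooperative round-robin scheduling over generator-based "agents", where each yield plays the role of a context switch and the deque is the run queue.

```python
from collections import deque

def agent(name, steps):
    # Each agent yields control back to the scheduler after every step,
    # much like a process being preempted at the end of its time slice.
    for i in range(steps):
        yield f"{name}: step {i}"

def round_robin(agents):
    """Toy cooperative scheduler: run each agent one step, then switch."""
    queue = deque(agents)
    log = []
    while queue:
        current = queue.popleft()
        try:
            log.append(next(current))   # run one "time slice"
            queue.append(current)       # context switch: state saved, requeue
        except StopIteration:
            pass                        # agent finished; drop from run queue
    return log

# Interleaved execution: A, B, A
trace = round_robin([agent("A", 2), agent("B", 1)])
# → ["A: step 0", "B: step 0", "A: step 1"]
```

Real agent orchestration adds non-deterministic step lengths, tool discovery, and failure handling on top, but the scheduling skeleton is exactly this old.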


Does it even make sense to build everything on top of machines that are 70% reliable? The sheer orchestration and validation overhead at scale risks being more expensive than just keeping most software engineers and having them manage a few AI agents.

Also, 200 years ago we didn't have bike mechanics. Car mechanics. Boat mechanics. Plumbers. Electricians. Not all new professions fade away.


I feel I've upskilled in so many directions (not just "ability to prompt LLMs") since going all in on LLM coding. So many tools, techniques, systems, and new areas of research I'd never have had the time to fully learn in the past.

I have a hard time believing any tenured developer is not actually learning things when using LLMs to build. They make interesting choices that are repeatable (new CLIs I didn't even know existed, writing scripts to churn through tricky data, using specific languages for specific tasks, like Go for concurrently working through numerous large tasks, etc.)

Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems, or they had no foundational knowledge or interest in programming to begin with (which is also a valid way to use these tools, but they don't work very well without guidance for too long [yet]).


Learning calculus by watching the professor solve integrals on the board for an hour doesn't result in the same level and depth of understanding as working through homeworks every week for a semester. If you ran off to your TA to solve every problem in your homework, you just won't learn calculus.

I've vibe coded plenty. I mostly don't look at the crap coming out. Don't want to. When I do, I absorb a tiny bit, but not enough to recreate the thing from scratch. I might have a modicum more surface-level knowledge, but I don't have deep understanding and I don't have skills. To the extent that I've fixed or tweaked AI-generated code, it's not been to restructure, rearchitect, or refactor. If this is all I did day in and day out, my entire skillset would atrophy.


"I mostly don't look at the crap coming out."

This is pretty much my point. I use LLMs to code _and_ to learn. I read everything that comes out. Half of it is wrong or incomplete. The other half saved me a bunch of time and taught me things.


This. I never had the patience to figure out how to build a from-scratch iOS app because it required too much boilerplate work. Now I do, and I got to enjoy Swift as a language, and learned a lot of iOS (and Mac) APIs.

But it isn't "from scratch", is it? It's "from Claude".

If you build a house from scratch but you didn't mill the lumber, did you build it from scratch?

If you make a pizza from scratch but you used canned sauce was it from scratch? What if you used store bought dough? What if you made the sauce and the dough but you didn't grow the tomato?


I think there's a considerable difference in its ability to help with breadth vs. depth of expertise.

For me both are true at the same time.

I vividly remember understanding how calculus works after watching some 3blue1brown videos on youtube, but once I looked at some exercises I quickly realized I was not able to solve them.

Similar thing happens with LLMs and programming. Sure I understand the code but I'm not intimately familiar with it like if I programmed it "old school".

So yes, I do learn more, but I can't shake the feeling that there is some Dunning-Kruger effect going on. In essence, I think that "banging my head against the wall" while learning is a key part of the learning process. Or maybe it's just me :D


It's not just you. I feel the same thing, and I saw it in practice helping my son study for a chemistry test just last night. He had worked through a bunch of problems by following the steps in his notes and got the right answers, but couldn't solve them without the notes because his comprehension of why he was taking all the steps wasn't solid.

Once we addressed that, he did great solo. Working the mechanics of the problems with the notes helped, but it was getting independent understanding of the reason for each step that put everything together for him.


> Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems

How many bytes is a pointer in C? How many bytes is a shared pointer in C++? What does sysctl do? What about fsync?

What is a mutex lock? How is it different from a spin lock?

You want to find the n nearest points to a given point on a 2-D Cartesian plane. Could you write the code to solve that on your own?

Can you answer any of these questions without searching for the answer?
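For what it's worth, the n-nearest-points question has a short textbook answer; here is a minimal sketch in Python (the function name and interface are my own, not from the thread), using a bounded heap so it runs in O(m log n) for m candidate points rather than fully sorting:

```python
import heapq
import math

def n_nearest(points, target, n):
    """Return the n points closest to target by Euclidean distance."""
    # heapq.nsmallest keeps a heap of size n while scanning all points,
    # and returns the winners sorted by the key (distance to target).
    return heapq.nsmallest(n, points, key=lambda p: math.dist(p, target))

nearest = n_nearest([(0, 0), (3, 4), (1, 1), (10, 10)], (0, 0), 2)
# → [(0, 0), (1, 1)]
```

For repeated queries over a fixed point set, a k-d tree would be the usual next step; for a one-off query, the heap scan is hard to beat for simplicity.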

I don't use LLMs and I learn things fine. Always have. For several decades. I care deeply about the underlying code and systems. It annoys me when people say they do and they cannot even understand how the computer works. I'm fine with people having domain-specific knowledge of programming: maybe you've only been interested in web development and scripting DOM elements. But don't pretend that your expertise in that area means you understand how to write an operating system.

Or worse: that it prevents you from learning how to write an operating system.

You can do that without an LLM. There's no royal road. You have to understand the theory, read the books, read the code, write the code, make mistakes, fix mistakes, read papers, talk to other people with more experience than you... and just write code. And rewrite it. And do it all again.

I find the opposite is true: those who use LLM coding exclusively never enjoyed programming to begin with, only learned as much as they needed to, and want the end results.


Agree with pretty much everything you wrote here, I guess with the addendum that LLMs can be a part of the learning experience you're describing. It's as easy as telling the LLM "don't write a single line of code nor command, I want to do everything, your goal is to help me understand what we're doing here."

There are always going to be people who just want the end result. The only difference now is that LLM tools allow them to get much closer to the end result than they previously were able to. And on the other side, there are always going to be people who want to _understand_ what's happening, and LLMs can help accelerate that. I use LLMs as a personalized guide to learning new things.


I know it sounds extreme to dismiss that workflow, but I don't think people are talking enough about the subtle psychological consequences of LLM writing for this kind of thing.

In the same way that googling for an SEO article's superficial answer ends up meaning you never really bother to memorize it, "ask chat" seems to lead to never really bothering to think hard about it.

Of course I google things, but maybe I should be trying to learn in a way that minimizes the need. Maybe it's important to learn how to learn in a way that minimizes exposure to sycophantic average-blog-speak.


Best of luck in your journey!

To those reading this thread, though, be wary of the answers LLMs generate: they're plausible-sounding, and LLMs are designed to be sycophants. Double-check their answers to your queries against credible sources.

And read the source!


What do you mean by "LLM coding"? That's not a very meaningful term, it covers everything from 100% vibe coded projects, to using the LLM to gradually flesh out a careful initial design and then verifying that the implementation is done correctly at every step with meticulous human review and checking.

The latter.

Trust me. All those people do it for the love of doing it, so I don't think they will outsource the jobs to some automation....

I have been coding since long before the internet and before there was huge demand for software devs... and I would keep coding even after there is no demand for the same.


If a catastrophic failure occurs we will have to return to first principles and re-derive the solutions. Not so bad, probably enlivening even to get to spin up the mind again after a break.

We found 500 zero-days in ten-year-old, widely used open-source projects. Was that not a demonstration of the catastrophic failure of human debugging capability?

And yet the world keeps turning; we'll figure it out.

"What if the AI goes away" seems to be a common argument but it's just not ever going to happen without, say, a solar flare wiping out all electronics, which will be an issue which skilled programmers can't help anyway. The same thing happened when high-level languages came around, not many people can hand-write assembly anymore or work with punch cards but society hasn't collapsed.

>But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.

That's only a brief moment in time. We learned it once, we can learn it again if we have to. People will tinker with those things as hobbies and they'll broadcast that out too. Worst case we hobble along until we get better at it. And if we have to hobble along and it's important, someone's going to be paying well for learning all of that stuff from zero, so the motivation will be there.

Why do people worry about a potential, temporary loss of skill?


Because they may have studied history... There are countless examples of eras of lost technology due to a stumble in society, where those societies were never able to recover the lost "secrets" of the past. Ultimately, yes, humans can rediscover/reinvent how to do things we know are possible. But it is a very real and understandable concern that we could build a society that slowly crumbles, unable to relearn how to maintain the systems it relies upon fast enough to stop the continued degradation.

Like, yeah, you have the resources right now to bootstrap your knowledge of most coding languages. But that is predicated on so many previous skills learned throughout your life, adulthood and childhood, many of which we take for granted. And ultimately AI/LLMs aren't just affecting developers; they are infecting all strata of education. So it is quite possible that we build a society that is entirely dependent on these LLMs to function, because we have offloaded the knowledge from society's collective mind... And getting it back is not as simple as sitting down with a book.


And we're still here right? We have more books and knowledge and capabilities than ever. Despite theoretically losing knowledge along the way, we're okay (mostly).

Society can replace the systems it relies on. The replacement might not be the best, but it'll probably handle things until we can reinvent a newer, better system. It probably won't be easy, but you can't convince me that humanity suddenly cannot adapt and fix problems right in front of them. How long does history have us doing that?

These are extraordinary claims that all of society will just become dumb and not be able to do any of this. History is also littered with people fretting about the next generation not being smart enough or whatever, and those fears rhyme pretty closely with what we're talking about here.


You could have lived 200 years. But instead, people decided they'd rather invest in crypto or LLMs instead.

Maybe humans will still be here in a century. But you won't be. It didn't have to be this way.


I don't see how they are actually exclusive in the long-term. Crypto investment isn't that big, and LLMs, or AI in general, may provide support for better treatments, thus possibly allowing people to reliably live onto 200 years.

>"That's only a brief moment in time. We learned it once, we can learn it again if we have to. "

Yes we can, but there is a big problem here. We will "learn it again" after something breaks, and the way the world currently functions, there might not be time to react. It is like growing food on industrial scale. We slowly learned that over time. If it breaks now, with the knowledge gone and we have to learn it again, it will end civilization as we know it.


>It is like growing food on industrial scale.

How many people do you think know how to do that today? It's in the millions (probably 10s to 100s of millions), scattered all across the globe, because we all need to eat. Not to mention all of the publications on the topic in many different languages. The only credible case for everyone forgetting how to farm is nuclear doomsday, and at that point we'll all be dead anyway.

>If it breaks now with the knowledge gone and we have to learn it again it will end the civilization as we know it.

I don't think there is a single piece of technology so critical to civilization that everyone alive could forget how it works while zero documentation on it remained.

These vague doomsday scenarios around losing knowledge and crashing civilization just have zero plausibility to me.


I imagine it being a "does anybody know COBOL?!" moment, but much sooner than sixty years from now.

COBOL also came to mind.

The COBOL thing seems to be working out just fine last I heard. Today a small number of people get paid well to know COBOL's depths and legacy platforms/software. The world moved on, where possible, to lower cost labor and tools.

Arguably, that outcome was the right creative destruction. Market economics doesn't long-term incentivize any other outcomes. We'll see the arc of COBOL play out again with LLM coding.


I know it's just anecdotal, but I looked for COBOL salaries a couple of years ago, curious about this "paid well".

The salaries were OK, but not good, for COBOL.

Here's an anecdotal Reddit thread about it. https://www.reddit.com/r/developpeurs/comments/1ixfpsx/le_sa...


I've been waiting for the article talking about how AI is affecting COBOL. Preferably with quotes from actual COBOL programmers since I can already theorize as well as the next guy but I'm interested in the reports from the field.

While LLMs have become pretty good at generating code, I think some of their other capabilities are still undersold and poorly understood, and one of them is that they are very good at porting. AI may offer the way out for porting COBOL finally.

You definitely can't just blindly point it at one code base and tell it to convert to another. The LLMs do "blur" the code, I find, just sort of deciding that maybe this little clause wasn't important and dropping it. (Though in some of the cases where I've encountered this, I understand where it's coming from: when the old code was twisty and full of indirection, I often have a hard time as a human being sure what is and isn't used just by reading the code, too.) But the process is still way, way faster than the old days of typing the new code in one line at a time while staring at the old code. It's definitely way cheaper to port a code base to a new language in 2026 than it was in 2020. In 2020 it was so expensive it was almost always not even an option. I think a lot of people have not caught up with the cost reductions in such porting efforts, and are not correctly factoring them into their cost calculations.

It is easier than ever to get out of a language that has some fundamental issue that is hard to overcome (performance, general lack of capability like COBOL) and into something more modern that doesn't have that flaw.


I mean, there should be. But there's not. Despite the millions of CS grads produced, many people could not reasonably be expected to produce many 'standard' parts of a software stack.

I laugh jollily in the face of AI. I know the coming shit pile; its nature isn't going to be surprising, only the speed, and the utter surrender of the vast majority of humanity to mediocrity.

What AI represents to me is a teacher! I have so long lacked a music teacher and musical tools. I spent my entire career doing invisible software at the lowest levels, and now I can finally build cool tools that help me learn, practice, and enjoy playing music! Screw all the haters; if you're curious about a wide range of topics and already have some knowledge, you can gallivant across a vast space and learn a lot along the way.

AI is a bit of a bullshitter, but don't take its bullshit as truth, just as you should never take anything your teacher says as gospel. How do we know what's true? The truth of the universe and the world is that, underneath it all, it is self-consistent, and we keep making measurement errors. The AI is an enormous pot of magic that it's up to you to organize with... your own skills.

You have to actively resist deskilling by doing things. AI should challenge you and reward you, not make you passive.

Use AI to teach yourself by asking lots of questions and constantly testing the results against reality.

For me right now, that's the fretboard.


> Markets will not reward slop in coding, in the long-term.

Forgive my cynical take, but we're currently experiencing a market that doesn't appear to be rewarding anything specific in the long term, as huge sums of money are traded on a minute-to-minute, day-to-day, and week-to-week basis. There's an explosion of uncertainty in today's markets and a complete collapse of long-range planning echoing at many levels of society--particularly at the highest levels of governments. So I kind of don't want to hear about what markets are going to reward.

But what exactly is "good code" (presumably the opposite of slop)?

I'd say that good code is terse, robust, suits its function, yet admits just the right amount of testability, performance, and customizability for the future. Good code anticipates change well. That means that if it has one job, it does that one job well, and not twenty others. If the job is going to stay the same, the code is going to stay the same. Good systems are made from well-factored pieces of code that have proper jobs and do their proper jobs without taking on jobs they shouldn't.

I for one think that AI code is going to reflect its training. If it's trained on just a random selection of everything out there, it's probably going to be pretty mediocre and full of bugs.


Markets have always rewarded popularity in the short term. In the long term, though, they have always rewarded quality.
