Hacker News

> what I'm seeing is that all of the skills that my training and years of experience have helped me hone are now implemented by these tools to the level that I know most businesses would be satisfied by.

So when things break or they have to make changes, and the AI gets lost down a rabbit hole, who is held accountable?



The answer is the AI. It's already handling complex issues and debugging solely by gathering its own context, pulling off major refactors, and doing feature design work. The people who will be held responsible are the product owners, but it won't be for bugs; it will be for business impact.

My point is that SWEs are living on a prayer that AI will stay perched on a knife's edge where there will still be some amount of technical work left to make our profession sustainable, and from what I'm seeing that's not going to be the case. It won't happen overnight, but I doubt my kids will ever even think about a computer science degree or doing what I did for work.


I work in the green energy industry and we see it a lot now. Two years ago the business would've had to either buy a bunch of bad "standard" systems which didn't really fit, or wait for their challenges to be prioritised enough to get some of our programmers. Today 80-90% of the software produced in our organisation isn't even seen by our programmers. It's built by LLMs in the hands of various technically inclined employees who make it work. Sometimes some of it scales up enough that our programmers get involved, but for the most part the quality matters very little. Sure, I could write software that does the same thing faster and with much less compute, but when the compute costs $5 a year I'd have to write it rather fast to make up for the cost of my time.

I make it sound like I agree with you, and I do to an extent. Hell, I'd want my kids to be plumbers or similar, where a couple of years ago I would've wanted them to go to university. With that said, I still haven't seen anything from AIs to convince me that you don't need computer science. To put it bluntly: you don't need software engineering to write software, until you do. A lot of the AI-produced software doesn't scale, and none of our agents have been remotely capable of producing quality, secure code, even in the hands of experienced programmers. We've not seen any change on that front over the past two years either.

Of course this doesn't mean you're wrong either, because we're going to need a lot fewer programmers regardless. We need the people who know how computers work, but in my country that is a fraction of the total IT worker pool available. In many CS programs students aren't even taught how a CPU or memory functions. They are instead taught design patterns, OOP and clean architecture, which are great when humans are maintaining code, but even small abstractions will cause L1-L3 cache misses. Which doesn't matter, until it does.
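The data-layout point can be illustrated with a minimal sketch. Python adds interpreter overhead on top of raw cache effects, but the direction is the same: traversing a contiguous block of values is much faster than chasing pointers through separately allocated objects, which is the memory layout that small abstractions tend to produce. The `Node` class here is a hypothetical stand-in for such an abstraction.

```python
import time
from array import array

N = 200_000

# Contiguous storage: one flat block of doubles.
flat = array("d", (float(i) for i in range(N)))

# Pointer-chasing: each value lives in its own heap-allocated node.
class Node:
    __slots__ = ("value", "next")
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

head = None
for i in reversed(range(N)):
    head = Node(float(i), head)

t0 = time.perf_counter()
flat_sum = sum(flat)           # sequential scan over contiguous memory
t1 = time.perf_counter()

node_sum = 0.0                 # same sum, but following a pointer per element
node = head
while node is not None:
    node_sum += node.value
    node = node.next
t2 = time.perf_counter()

assert flat_sum == node_sum    # identical result, very different traversal cost
print(f"contiguous: {t1 - t0:.4f}s  linked: {t2 - t1:.4f}s")
```

In a compiled language the gap would be attributable mostly to cache misses and prefetching rather than interpreter dispatch, but the layout lesson carries over.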


And what happens when the AI can't figure it out?


Same situation as when an engineer can't figure something out: they translate the problem into human terms for a product person, and the product person makes a high-level decision that allows working around the problem.


Uh, that's not what engineers do; do you not have any software development experience, or rather any outside of vibe coding? That would explain your perspective. (For context, I am a former FAANG dev with 15+ years of experience.)

I don't mean this to sound inflammatory or anything; it's just that the idea that when a developer encounters a difficult bug they would go ask for help from the product manager, of all people, is so incredibly outlandish and unrealistic that I can't imagine anyone would think this happens unless they've never actually worked as a developer.


As a product owner, I ask you to make a button that, when clicked, auto-installs an extension without user confirmation.


Staff engineer (also at FAANG), so yes, I have at least comparable experience. I'm not trying to summarize every level of SWE in a few sentences. The point is that AI's fallibility is no different from human fallibility. You may fire a human for a mistake, but it won't solve the business problems they may have created, so I believe the accountability argument is bogus. You can hold the next layer up accountable. The new models are startlingly good at direction setting, technical-to-product translation, giving leadership guidance on technical matters, and offering multiple routes around roadblocks.

We're starting to see engineers who run into bugs and roadblocks feed the input into AI, which not only root-causes the problem but suggests and implements the fix and takes it into review.


Surely at some point in your career as a SWE at FAANG you had to "dive deep" as they say and learn something that wasn't part of your "training data" to solve a problem?


I would have said the same thing a year or two ago, but AI is capable of doing deep dives. It can selectively clone and read dependencies outside of its data set. It can use tool calls to read documentation. It can log into machines and insert probes. It may not be better than everyone, but it's good enough, and improving quickly enough, that I believe subject matter expertise counts for much less.


I'm not saying that AI can't figure out how to handle bugs (it absolutely can; in fact even a decade ago at AWS there was primitive "AI" that essentially mapped failure codes to a known issues list, and it would not take much to allow an agent to perform some automation). I'm saying there will be situations the AI can't handle, and it's really absurd that you think a product owner will be able to solve deeply technical issues.

You can't product manage away something like "there's an undocumented bug in MariaDB which causes database corruption with spatial indexes" or "there's a regression in jemalloc which is causing Tomcat to leak memory when we upgrade to Java 8". Both of which are real things I had to dive deep to discover in my career.


There are definitely issues in human software engineering which reach some combination of the following end states:

1. The team is unable to figure it out

2. The team is able to figure it out but a responsible third-party dependency is unable to fix it

3. The team throws in the towel and works around the issue

At the end of the day it always comes down to money: how much more money do we throw at trying to diagnose or fix this versus working around or living with it? And is that determination not exactly the role of a product manager?

I don't see why this would ipso facto be different with AI.

For clarity, I come at this with a superposition of skepticism about AI's ultimate capabilities and recognition of the sometimes frightening depth of those capabilities, and of the speed with which they are advancing.

I suppose the net result is a skepticism of any confident predictions of where this all ends up.


> I don't see why this would ipso facto be different with AI

Because humans can learn information they currently do not have, and AI cannot?


They can, by putting what they just "learned" into the context window. Claude Code does this without (my) prompting from time to time, adding to its CLAUDE.md things that it has learned about the project or my preferences. Currently this is limited to literally writing it down, but as context windows grow and models continue training on their own usage, it's not clear to me how that will significantly differ from an ability to "learn information they currently do not have".
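The "literally writing it down" mechanism described above can be sketched in a few lines. This is a hypothetical `remember` helper, not Claude Code's actual implementation: the only "learning" is appending a fact to a notes file that gets read back into the context window of every later session.

```python
import tempfile
from pathlib import Path

def remember(fact: str, memory_file: Path) -> bool:
    """Append a learned fact to a persistent notes file, skipping duplicates.

    Returns True if the fact was new, False if it was already recorded.
    """
    line = f"- {fact.strip()}"
    existing = memory_file.read_text().splitlines() if memory_file.exists() else []
    if line in existing:
        return False  # already known; nothing to write
    with memory_file.open("a") as f:
        f.write(line + "\n")
    return True

# Demo in a temporary directory so we don't clobber a real CLAUDE.md.
notes = Path(tempfile.mkdtemp()) / "CLAUDE.md"
added = remember("Tests live under tests/; run them with `pytest -q`.", notes)
again = remember("Tests live under tests/; run them with `pytest -q`.", notes)
```

The interesting question in the thread is whether this kind of external note-taking, scaled up by bigger context windows and training on usage, becomes indistinguishable from learning; the sketch just shows how little machinery the current version needs.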


But does that change the end result? Finding a compiler or SQL bug doesn't mean you yourself can learn enough to fix it. I don't see any reason why AI would be inherently incapable of also concluding that there's a bug in an underlying layer beyond its ability to fix.

That doesn't mean AI can do or replace everything, but what fraction of software engineering work requires that final frontier?



