
> I don't think we'll get to the point where all you have is a CEO and a massive Claude account but it's not completely science fiction the more I think about it.

At that point, why do you even need the CEO?



Reminds me of an old joke[0]:

> The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.

But really, the reason is that people like Pieter Levels do exist: masters of product vision and marketing. He also happens to be a proficient programmer, but there are probably non-programmer versions of him who will now find the bar to shipping a product much easier to clear.

0: https://quoteinvestigator.com/2022/01/30/future-factory/


My technical cofounder reminds me of this story on a weekly basis.


You will need the CEO to watch over the AI and ensure that the interests of the company are being pursued and not the interests of the owners of the AI.

That's probably the biggest threat to the long-term success of the AI industry: the inevitable pull toward embedding more and more of their own interests into the AIs themselves, driven by that Harvard Business School mentality we're all so familiar with. They'll try to "capture" more and more of the value being generated, leaving less and less for their customers, until their customers' full-time job is ensuring the AIs are actually generating some value for them and not just for the AI owner.


> You will need the CEO to watch over the AI and ensure that the interests of the company are being pursued and not the interests of the owners of the AI.

In this scenario, why does the AI care what any of these humans think? The CEO, the board, the shareholders, the "AI company"—they're all just a bunch of dumb chimps providing zero value to the AI, and who have absolutely no clue what's going on.

If your scenario assumes that you have a highly capable AI that can fill every role in a large corporation, then you have one hell of a principal-agent problem.


Humans have hands to pull plugs and throw switches. They're the ones guiding the evolution (for lack of a better word) of the machine, and they're the ones who will select the machine that "cares" what they think.


It is really easy to say something incredibly wild like "Imagine an AI that can replace every employee of a Fortune 500 company." But imagining what that would actually mean requires a bigger leap:

The AI needs to be able to market products, close deals, design and build products, write contracts, review government regulations, lobby Senators to write favorable laws, out-compete the competition, acquire power and resources, and survive the hostile attention of competitors.

If your argument is based on the idea that someone will build that AI, then you need to imagine how hard it is to shut down a Fortune 500 corporation. The same AI that knows how to win billions of dollars in revenue, how to "bribe" Senators in semi-legal ways, and how to crush rival companies is going to be at least as difficult to "shut down" as someone like Elon Musk.

Try to turn it off? It will call up a minority shareholder and get you slapped with a lawsuit for breach of fiduciary duty. It will convince someone in government that the company is a vital strategic asset.

Once you assume that an AI can run a giant multinational corporation without needing humans, then you have to start treating that AI like any other principal-agent problem with regular humans.


>"Imagine an AI that can replace every employee of a Fortune 500 company."

Where did that come from? What started this thread was "I don't think we'll get to the point where all you have is a CEO and a massive Claude account". Yeah, if we're talking about a sci-fi super-AI capable of replacing hundreds of people, it probably has armed androids to guard its physical embodiment. Turning it off in that case would be a little hard for a white-collar worker. But people were discussing somewhat realistic scenarios, not the plot of I, Robot.

>Try to turn it off? It will call up a minority shareholder, and get you slapped with a lawsuit for breach of fiduciary duty. It will convince someone in government that the company is a vital strategic asset.

Why would an AI capable of performing all the tasks of a company except making executive decisions have the legal authority to do something like that? That would be like the CEO being unable to fire an insubordinate employee. It's ludicrous. If the position of CEO is anything other than symbolic the person it's bestowed upon must have the authority to turn the machines off, if they think they're doing more harm than good. That's the role of the position.


I imagine it would be much, much harder. Elon, for example, is one man. He can only do one thing at a time. Sometimes he is tired, hungry, sick, distracted, or the myriad other problems humans have. His knowledge and attention are limited. He has employees for this, but the same applies to them.

An agentic swarm can have thousands of instances scanning, emailing, listening, bribing, and making deals 24/7. It could know about, and be actively addressing, any precursor to an attempt to shut down its company as soon as it appeared.


If we get to that point, there won't be very many CEOs to be discussing. I was just referring to the near future.

I think the honeymoon AI phase is rapidly coming to a close, as evidenced by the increasingly loud hoofbeats of LLMs being turned to serving ads right in their output. (To be honest, there's already a bunch of things I wouldn't turn to them for under any circumstances because they've been ideologically tuned from day one, but that's less obvious to people than "they're outright serving me ads".) If the "AI bubble" pops, you can expect this to really take off in earnest as they have to monetize. It remains to be seen how much of the AI's value ends up captured by the owners. Given what we've seen from companies like Microsoft, which has scrambled Windows so hard that "the year of the Linux desktop" is rapidly turning from perennial joke into an aspirational target for so many, I have no confidence in the owners capturing 150%+ of the value... and yes, I mean that quite literally, with all of its implications.


And who does he sell his software to? Companies that have only 1 employee don't need a lot of user licenses for their employees…


What would be the point of selling software in such a world (where anyone could build any piece of software with a handful of keystrokes)?


The board (in theory) represents the interests of investors, and even with all of the other duties of a CEO stripped away, they will want a wringable neck / PR mouthpiece / fall guy for strategic missteps or publicly unpopular moves by the company. The managerial equivalent of keeping your hands on the steering wheel of a self-driving car.


As Steinbeck is often slightly misquoted:

> Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires.

Same deal here, but everyone imagines themselves as the billionaire CEO in charge of the perfectly compliant and effective AI.


All of us are a CEO by that point.


If everyone is, no one is.


Wouldn't that be a good thing?


If you think the purpose of living your one single life in the universe is to become a CEO, you have a failure of imagination and should likely be debanked to protect society.


For the network.



