Hacker News | hyperpape's comments

His book appears to have been published in 1942 or earlier: https://time.com/archive/6786636/books-biography-in-pictures....

I uploaded the book here, I can't find that quote or the photo in it, though:

https://transfer.it/t/wCLoeh9XEZrZ


> Founders who started pre-2025 typically have built a technical stack optimized for a world where software development was bespoke and expensive.

Of all the things that AI has changed, tech stacks aren't one of them. The bots will gladly write TypeScript, Java, Python, Rust, what have you. They could not give less of a shit.


I caught that too.

What is he getting at? How does the code and infra stack differ at all between a company that is using AI, vs one that is not?


Here's my take on what he was getting at:

Build vs. buy is an eternal question in enterprises. I remember many in-house data teams trying to build tools for "digital transformation" and cloud migration about 10 years ago. The challenge was, building those tools was more expensive than those enterprises could budget for (IT as cost center), so a startup like Snowflake would easily outcompete in-house solutions with their custom, cloud-based tech stack that was necessarily complex because it needed to serve the needs of thousands of customers.

If he's right, the build-vs-buy equation has shifted toward build, at least as far as enterprise software is concerned. IT is still a cost center, but in theory an internal team can now handle more requests for custom tools without looking to outside vendors. Essentially, the cost of building in-house may be collapsing, and therefore enterprise software startups will be serving fewer customers (each of whom would pay you more, because if solving the problem were cheap they'd do it themselves).

If you had to build a stack for dozens of customers paying huge amounts of money, how would that stack differ from the stack you'd build to serve thousands of customers? Certainly it wouldn't need to be as scalable! And that's probably what he's getting at. I think what you'd do instead, to capture those higher price point customers, is solve their problems more specifically, in a higher value manner.

Many companies already do this, investing far more in field engineers than they do in their tech stack, since customization is essential.


Thanks, this is a good explanation, though I would not have phrased it the way he did.

As blind as my belief that Asia exists, because I haven't personally navigated there. Hell, I've used electricity (using it right now), but I couldn't do the experiments you need to do to get myself to an 1850s level of understanding of how it works, much less our current level.

I trust that Linux has a process. I do not believe it is perfect. But it gives me better assurance than downloading random packages from PyPI (though I believe that the most recent release of any random package on PyPI is still more likely safe than not--it's just a numbers game).


I get what you are saying, but as you said, if you are already under attack you can't trust your own computer; you just hope that you aren't downloading another exploit or bogus update. Real software, I imagine, is not so easy to pwn so completely, but I don't know.



1. They maintain and sell one of the largest relational databases.

2. They're the primary maintainer of one of the largest programming languages.

3. They do tons of HR/ERP type software.

4. They have a supply chain division (my company is a direct competitor, and we have 2000 employees--it's a drop in the bucket, but a few thousand here, a few thousand there and it starts to add up. Afaik, their supply chain org is bigger than ours).

5. Other things I probably don't know about.

Many of these things come with swarms of consultants who implement the software for companies that don't have any internal technical competency, which swells the number of workers by a lot.

Don't get me wrong, I'm not remotely a fan, I like to quote Bryan Cantrill's rant. However, they do a lot of things.


>> Many of these things come with swarms of consultants who implement the software for companies that don't have any internal technical competency,

I have some anecdotal evidence for this. I worked at a medium-sized family-owned business that was going through a massive ERP upgrade/replacement. One of the bids was from Oracle. The company was able to essentially test-drive each vendor under review to see if the software was going to be a good fit.

Oracle's sales team was like having a football team on site. They sent over no fewer than 20 people to swarm our pretty small office, barge into the dev spaces, and generally annoy the fuck out of everybody for several months. The other vendors? They sent one, maybe two people to work alongside us as we test-drove their software.

It was funny being in those meetings listening to people talk about the Oracle people. Nobody even remembered how good or bad their software was. Every single comment was about how overbearing and pushy their sales people were.

Needless to say, we went with a different company.


That sales process is directly tied to the type of customer they're aiming for, which is larger than a "medium-sized family-owned business".

They misaligned here, but a customer like Boeing or United would go gaga over the football-team treatment.


They also own multiple other huge companies that have tens of thousands of their own employees working in completely different areas (NetSuite, Cerner, Acme, etc.).


6. Lawyers


"The first thing we do, let's AI all the lawyers" ?


Also their cloud

And all the supporting legal team of course.


No better proof that they're a huge company than that I could forget about an entire public cloud offering. Good point.


If I did my Python right, from 2010 to 2020 they grew headcount by 2.5% annually; from 2020 to 2025, by 3.7% annually.

After the layoffs, they'll apparently now have grown by 1.0% annually since 2020.

So yes, from 2021 to 2023, they had a huge spike, but overall, it's a net slowdown in growth relative to the 2010-2020 period.

If this were about reversion to the old pattern, they'd have done a smaller set of layoffs or simply waited through a few years of zero growth.
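For what it's worth, the rates above are just compound annual growth rate (CAGR) calculations. A minimal sketch; the headcount figures below are hypothetical placeholders chosen only to make the arithmetic work out to the quoted percentages, not Oracle's actual numbers:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two headcounts."""
    return (end / start) ** (1 / years) - 1

# Hypothetical headcounts (illustration only, not actual figures):
h2010, h2020, h2025 = 100_000, 128_000, 153_500

print(f"2010-2020: {cagr(h2010, h2020, 10):.1%}")  # ~2.5%
print(f"2020-2025: {cagr(h2020, h2025, 5):.1%}")   # ~3.7%
```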


Or a pickup from 2015-2021, which saw 0% growth.

It's also tricky to pick an end-of-decade year as an endpoint: recessions tend to happen within +/- 2 years of the end of each decade in the USA, or at least have done since records began in the 19th century. For example, 2010 was the recovery from 2008/2009's bust. It's not like comparing March to March for a crude seasonal adjustment.


You did the Python right but the analysis wrong. Looking at it on a graph, you can see that fitting a single growth rate to the entire period (even if you stop pre-COVID) doesn't make sense.

You can see linear growth from 2010-2017. Then slow decline or at best a flatline from 2018-2021. Then they went crazy in 2022-2025.

Now if we just do 162k - 30k, we are back to 132k, basically the same ballpark as pre-COVID.


That's not how stocks are measured on Wall Street. They picked the dumb metric.


Why didn't you use log-scale? It seems like the obvious call.


> Magnificent 7 companies are increasing capex to their biggest ever to differentiate their tech from each other and the big AI labs, but the key realization is that they don’t have to spend it to win. It’s a defensive move for them, if they commit $50B, OpenAI and Anthropic need to go raise $100B each to stay competitive, which makes them reliant on investors’ money.

Stay competitive how? If the Magnificent 7 aren't spending the money, then how could it possibly hurt OpenAI/Anthropic to not raise equal amounts of money? Maybe you can pull together an explanation, but this author didn't even try to do so.

This piece seems poorly thought-out, but well designed to get shared.

Promote writers who will actually explain their claims carefully.


They have to fight to stay competitive because the Mag 7 can outspend them, but my hypothesis is that they ultimately won't need to.


Returning no results is going to be linear in any DFA or NFA based implementation, though. You go character by character, and confirm that there are no matches.

It's only when you return multiple matches that the engines have a problem and become superlinear.
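To see why a no-match scan stays linear: a DFA consumes exactly one transition per input character, whatever the outcome. A toy hand-rolled DFA for "does the input contain the substring 'ab'?" illustrates the idea; this is a sketch, not any real engine's implementation:

```python
# States: 0 = start, 1 = just saw 'a', 2 = matched (accepting).
# One state transition per character, so confirming "no match" on an
# input of length n takes exactly n steps: linear, same as a match.
def contains_ab(text: str) -> bool:
    state = 0
    for ch in text:
        if state == 2:
            return True  # already in the accepting state
        if ch == 'a':
            state = 1
        elif ch == 'b' and state == 1:
            state = 2
        else:
            state = 0
    return state == 2

print(contains_ab("xxaxbxab"))  # True
print(contains_ab("b" * 1000))  # False, after exactly 1000 steps
```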


It might be possible to use a randomised algorithm to estimate the number of matches in only linear time.


Given the subject, it is funny to me that this post is meandering and repetitive.


Right, but if you say something essential in a meeting with 10 people and it has to percolate through five levels of management to reach the front lines, getting watered down along the way, much more could be lost, even millions.

Scale cuts both ways.

What matters isn't how big the meeting is, it's how important the material is, and how well presented it is.


I don't think I've ever heard a top leader say anything essential in such a meeting. The stuff they work on is not related to my job at all. It's all Gartner-level strategy stuff. In our company they do take time talking about it in large calls, but it's always boring and never relevant. And there's a lot of political spin you have to poke through to see the real message.

If I ever attend, I just put it on mute and look at the slides while I do some real work. That way my attendance gets registered and it doesn't stress me out later with too much stuff left hanging.

That percolation is also a translation of what they say into things that are relevant at my level, like what we'll be working on next year, or whether there will be bonuses or job losses.

I couldn't give a crap about the company's strategy as a whole, and that's not my job anyway. Why should I? I'm not here because I believe in some holy mission. I just wanna do something I like and get paid.


Most of those meetings are pretty damn fluffy. No one goes back to their desk and does anything different because they've introduced new company values and the acronym is S.M.I.L.E.

But this meeting is a course correction for how they're using AI, which is a huge initiative. He'll be trying to sell the right balance of "keep using the technology, but don't fuck anything up."

Too cautious, everyone freezes and there's a slowdown[0]. Too soft, everyone thinks it's "another empty warning not to fuck up" and they go right back to fucking everything up because the real message was "don't you dare slow down." After the talk, people will have conversations about "what did they really mean?"

[0] If you hate AI, feel free to flip the direction of the effect.


Well, this is the main problem with AI right now, isn't it? How to use it successfully without having it fuck up.

How are they expecting some juniors to do this when the industry as a whole doesn't know where to begin yet?

Like that Meta AI expert who wiped her whole mailbox with OpenClaw. These are the people who should come up with the answers.

PS: I mostly hate AI, but I do see some potential. Right now it feels like we're entering a fireworks bunker looking for a pot of gold, with only a box of matches for illumination.

What we need to know from management is exactly what you mention. Do we go all out and accept that shit will hit the fan once in a while (the old "move fast and break things"), or do we micromanage and basically work manually like before? And either way, they need to accept the risk. That kind of strategy is really business-leader work. Blaming it on your techs when it inevitably goes wrong is not.

Because the tech as it is right now is very non-deterministic. One day it works magic and the next day it blows up.

And yes that SMILE thing was a good example. Been in too many of those time wasters.


Lol this reads like some transcript from the court of an ancient Roman Emperor.

