This is actually a false premise pushed later to justify layoffs. They started overhiring in 2018-2019. They just continued a preexisting trend through 2021.
It’s weird because I could see raising money on the premise that GitHub is garbage, not git. But then you can’t say I co-founded GitHub as your bona fides.
> architecture is what happens when all those local pieces interact, and you can’t get good global behaviour by stitching together locally correct components
This is a great article. I’ve been trying to see how layered AI use can bridge this gap, but the current models do seem to be lacking in the ambiguous design phase. They are amazing at the local execution phase.
Part of me thinks this is a reflection of software engineering as a whole. Most people are bad at design; everyone usually gets better with repetition and experience. However, since there is never a right answer, just a spectrum of tradeoffs, it seems difficult for the current models to replicate that part of the human process.
I’ve had a couple wins with AI in the design phase, where it helped me reach a conclusion that would’ve taken days of exploration, if I ever got there. Both were very long conversations explicitly about design with lots of back and forth, like whiteboarding. Both involved SQL in ClickHouse, which I’m ok but not amazing at — for example I often write queries with window functions, but my mental model of GROUP BY is still incomplete.
In one of the cases, I was searching for a way to extract a bunch of code that 5-6 queries had in common. Whatever this thing was, its parameters would have to include an array/tuple of IDs, and a parameter that would alter the table being selected from, neither of which is allowed in a ClickHouse parameterized view. I could write a normal view for this, but performance would’ve been atrocious given ClickHouse’s ok-but-not-great query optimizer.
I asked AI for alternatives, and to discuss the pros and cons of each. I brought up specific scenarios and asked it how it thought the code would work. I asked it to bring what it knew about SQL’s relational algebra to find an elegant solution.
It finally suggested a template (we’re using Go) to include another SQL file, where the parameter is a _named relation_. It can be a CTE or a table; it doesn’t matter as long as it has the right columns. Aside from poor tooling that doesn’t catch things like typos, it’s been a huge win, much better than the duplication. And we have lots of tests that run against the real database to catch those typos.
Maybe this kind of thing exists out there already (if it does, tell me!) but I probably wouldn’t have found it.
I’ve found them to be pretty good if you tell them to be more critical and to operate as a sophisticated rubber duck. They are actually pretty decent at asking questions that I can answer to help move things forwards. But yeah by default they really like to tell me I’m a fucking genius. Such insight. Wow.
I agree with you very much, if what you are building actually benefits from that much client side interactivity. I think the counterpoint is that most products could be server rendered html templates with a tiny amount of plain js rather than complex frontend applications.
Facebook moved to Mercurial because of specific problems related to the size of their monorepo. Moreover, the git maintainers were unwilling to work with Facebook to improve git to solve some of these problems. Mercurial was a better fit and was open to the help. But all that said, if you don’t have a truly enormous monorepo like Facebook or Google, then git is arguably the better tool given the network effects. I don’t think Facebook wanted to promote Mercurial as some vastly superior solution to outsiders, because for most people it isn’t.
From the Facebook blog post, it seems like the key issue was Facebook's internal filesystem monitoring tool (Watchman) was easier to integrate with Mercurial than with Git:
So, neither Mercurial-out-of-the-box nor Git-out-of-the-box could handle huge monorepos. But Mercurial's willingness to accept some modifications made it easier for Facebook to integrate their custom tooling and avoid the slow O(n) scans for changed files.
> From the Facebook blog post, it seems like the key issue was Facebook's internal filesystem monitoring tool (Watchman) was easier to integrate with Mercurial than with Git:
This is not true. I’ve seen X-rays of a child’s mouth with clearly no adult teeth visible below the gums. Later I’ve seen X-rays of the same mouth with one or two adult teeth below the gums where baby teeth are about to fall out. The adult teeth are there underneath once the baby teeth fall out, but they are not there “from the start”. That isn’t even to mention the size problem.