Hacker News | lethain's comments

I've been blogging for about 16 years. Writing is an underrated way to cement what you learn any given day or year, and over time has made it possible to reach into any part of the industry and get an actual response. Writing is particularly powerful in combination with actually doing things that (are perceived to) matter; the credibility from doing both is much higher than doing either.

Concretely answering the questions asked:

1. At various points I spent a lot of time maintaining it, but now it's just a static blog deployed via GitHub Actions onto GitHub Pages. I haven't made any meaningful changes in a few years, and the changes I do make are for fun, not necessity

2. I got my first job in tech thanks to blogging: https://lethain.com/datahub/

3. My blogging has made it possible to write two pretty successful books: https://staffeng.com/book/ and https://press.stripe.com/an-elegant-puzzle (working on a third now)

4. Hard to assess, but I believe I've been able to subtly but meaningfully advance the technology industry through my writing :-)

5. A significant majority of folks are unaware that I write, and that's great! I don't think impact depends on folks connecting their colleague to the writer or whatnot


> Writing is nature's way of letting you know how sloppy your thinking is.

- Dick Guindon


I like how you have no filler or cruft in your blog posts, and jump straight to the topic. I went on a "binge" of your blog a year and a half ago and left with a lot of actionable advice. Thank you!


I try to read your new posts when I get the chance.

Half the time, I find them really compelling, offering an interesting perspective on corporate psychology.

The other half of the time, I read them and have no idea what you’re talking about, which leads me to worry that my trajectory in my engineering career is doomed to insignificance, because I never have any of these meetings with execs or high level people like the ones you describe.

Do you have any guidance for people like me, in the second scenario?


Your writing is awesome! I have followed your blog for a while now and have recommended it to my team.


Love your writing, Will. Have both of your books!


In my role as unofficial, self-appointed late-stage Digg historian [0], it's my belief that Digg ultimately had to change because Google's SEO changes had fatally wounded its near-profitability. Further, as a VC-funded company it made the inevitable (and, I think, best-for-everyone-involved) decision to modernize in an attempt to join the Facebook/Twitter cohort rather than shrink slowly into mediocrity.

[0]: https://lethain.com/digg-v4/


But did Digg serve its purpose for that period of the internet? That is, is it simply the case that Digg had a limited shelf life and should have been left alone to die out naturally? If it was losing numbers, then maybe the redesign (or whatever it was) accelerated its demise. Sometimes it's better to use the iPod effect as momentum to launch the next new thing (the iPhone). That new thing is totally separate from the old thing (Digg).


If you take investor money and start to fail, you aren't gonna die out naturally, you'll be killed and drained of blood, or die attempting a triple pirouette off a 200 meter high dive into a kiddy pool.


That doesn't seem to have happened here. Digg isn't popular, and I'd definitely stipulate that it's lower quality, but it still exists and functions as a general interest content aggregator thing.


New Digg is a resurrection of old Digg by a completely different company. [1]

The Digg v4 thing was a very big deal back when it happened. In a matter of a month or two, Digg lost the majority of its traffic to Reddit. I personally remember switching pretty much overnight after v4 when beforehand I always thought Reddit was inferior to Digg.

[1] https://en.wikipedia.org/wiki/Digg#Sale_and_relaunch

Edit:

Found an old graph of the traffic dropoff when v4 launched [2]

[2] http://i.imgur.com/FuqV9.png


As I understand, Digg the original company was killed and drained of blood. The company that acquired the rights to digg.com as a result then re-launched it. The original Digg, the company, is long dead and buried.


Just because a website is up doesn't mean it's relevant.


I think it's tempting to attribute to structure what can be better explained by accident.

You obviously have more context, but from my experience developing product, there are almost always ways to change direction incrementally and in a manner that allows you to get your toe wet to test the temperature, and not dive in unshielded.

Although I realize now you may be saying that change was inevitable, but catastrophe wasn't.


Google Panda launched in 2011. Digg V4 launched in 2010. This seems to contradict the timeline in your post?


Sorry, I think that sentence was a bit unclear. What it meant to convey is that we had 200 daily active Facebook uniques, essentially that very few folks used FB to connect to Digg.


(Stripe infra lead here)

This was a focus in our after-action review. The nodes responded as healthy to active checks while silently dropping updates on their replication lag; together, this created the impression of a healthy node. The missing bit was verifying the absence of lag updates (which we have now).
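To make that gap concrete: a health check in this shape has to treat *missing* lag reports as unhealthy, not just unresponsiveness or high lag. The sketch below is purely illustrative; every name and threshold here is made up for the example, not taken from Stripe's actual system.

```python
import time

# Assumed thresholds, chosen only for illustration.
LAG_STALENESS_SECONDS = 60   # how long we tolerate no lag reports at all
MAX_LAG_SECONDS = 30         # maximum acceptable reported replication lag

def check_node(responds_to_ping, last_lag_report_ts, lag_seconds, now=None):
    """Return 'healthy' only if the node answers active checks, has
    reported its replication lag recently, and that lag is in bounds."""
    now = now if now is not None else time.time()
    if not responds_to_ping:
        return "unhealthy: not responding"
    if now - last_lag_report_ts > LAG_STALENESS_SECONDS:
        # The failure mode described above: the node answers pings but
        # has silently stopped reporting lag, so "no bad lag" is really
        # "no lag data at all".
        return "unhealthy: lag reports are stale"
    if lag_seconds > MAX_LAG_SECONDS:
        return "unhealthy: replication lag too high"
    return "healthy"
```

The key design point is the middle branch: checking only `lag_seconds > MAX_LAG_SECONDS` can never fire if updates stop arriving, which is exactly the impression-of-health problem described in the comment.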


You might want to clarify this in the post. To me it reads like you knowingly had degraded infra for days leading up to an incident which might have been preventable had you recovered these instances.


Thanks for the suggestion, we’re adding a clarifying note to the report’s timeline.


I am a curious and very amateur person, but do you think that if "100%" uptime were your goal, this:

"[Three months prior to the incident] We upgraded our databases to a new minor version that introduced a subtle, undetected fault in the database’s failover system."

could have been prevented if you had stopped upgrading minor versions, i.e. froze on one specific version and not even applied security fixes, instead relying on containing it as a "known" vulnerable database?

The reason I ask is that I've heard of ATMs still running Windows XP or stuff like that. But if it's not networked, could it be that it actually has better uptime than anything you can do on Windows 7 or 10?

What I mean is: even though it's hilariously out of date to be using Windows XP, still, by any measure it's had a billion device-days to expose its failure modes.

When you upgrade to the latest minor version of a database, don't you sacrifice the known bad for an unknown good?

Excuse my ignorance on this subject.


> could have been prevented if you had stopped upgrading minor versions, i.e. froze on one specific version and not even applied security fixes, instead relying on containing it as a "known" vulnerable database?

This is a valid question.

As a database and security expert, I carefully weigh database changes. However, developers and security zealots typically charge ahead "because compliance."

Email me if you need help with that.


You could use that same logic to argue that they should never write any new code, just live forever on the existing code.

But customers want new features, so Stripe makes changes.


How do you have an ATM that's not networked?


Same user (sorry I guess I didn't enter my password carefully as I can't log in.)

Well I mean they're not exactly on the Internet with an IP address and no firewall, are they? (Or they would have been compromised already.)

Whatever it is, it must be separated off as an "insecure enclave".

So that's why I'm wondering about this technique. You don't just miss out on security updates, you miss performance and architecture improvements, too, if you stop upgrading.

But can that be the path toward 100% uptime? Known bad and out of date configurations, carefully maintained in a brittle known state?


Secure... enclave? I'm sorry, but I think you're throwing buzzwords around hoping to hit a home run here.


No, it's a fair question. The word "enclave" has a general meaning in English as a state surrounded entirely by another, or metaphorically a zone with some degree of isolation from its surroundings.

So the legit question is, can insecure systems (e.g. ancient mainframes) be wrapped by a security layer (WAF, etc.) to get better uptime than patching an exposed system?


Yes, thank you.


It's fun to see that deck, since my first (software) job out of college was working with that team (about a year after this slide), when only two of those four mentioned individuals remained on the team; but consequently I got the pretty rare/fun opportunity to write Erlang.

We had a major advantage for new technology rollout in that we did "devops" (i.e. the ops team decided they did not have bandwidth to support us and we needed to launch anyway), and we were building greenfield technology (Y! BOSS) with relatively few integration points with existing Y! technology (except for Vespa, which is similar to SOLR/ElasticSearch, and some weird C++ libraries that were somehow mislabeled from "junky prototype" to "high technology" and were ported forward).

Years later, my sense is that the biggest initial stumbling block was getting the existing devs to have any interest in learning a new technology / way of doing things. I think we lacked some perspective there, and should have made a much larger effort to get the team excited and trained with the technology (in jobs since, I've never had a team who turns down technology training), and could have probably won our local team over if we'd been more intentional.

Ultimately, though, the final stumbling block was Y! itself, which was very focused on keeping the number/diversity of technologies low. I think at the organizational level this is probably the right decision, so I can't really fault them for that. Ironically, Node.js popped up just a year or so later as the great language hope to rescue Y!, and did manage to get significant traction, so if you wanted to study adoption, finding someone who could explain how they got Y!'s Node.js adoption going in the right direction would be pretty fascinating.


As a hiring manager in SF, my anecdotal experience is that most large companies are still hiring at the same pace they were a year ago, but that capital and subsequently hiring has dried up for smaller companies.

For experienced developers/managers, things seem to be business as usual, but the "top tier" companies in particular are generally more focused on avoiding false positives than on reducing false negatives, so I see us as entering a slightly unpleasant period for non-traditional and entry-level candidates.


As someone with a lot of great experience and no CS degree, I'm definitely seeing a lot more gunshyness in my interviews.


As a "non-traditional" (self-taught) and "entry-level" candidate who just moved to SF a week ago (crashing on my bro's couch), I've been nervous as hell about my prospects.

If anyone has any advice other than "go back home", it'd be much appreciated.


Befriend some Googlers, do lots of mock interviews with them, then go for Google. (Same for Facebook, etc.) They've still got oodles of cash.


Doesn't Google still put a lot of weight on your degree? Sounds like a waste of time to me.


I work for them without one.


They changed their heavy reliance on top universities and degrees for signalling around 2012, I think.


As someone in Europe, with 10+ years experience, I'm seeing a LOT less in the way of serious recruiter outreach (not spam) and opportunities than 5 or 10 years ago (when I had correspondingly less experience). Just one data point of course and could be personal circumstances, but it looks pretty dire here outside one or two tech hubs.


Interesting. Now that you mention it, my LinkedIn recruiter volume has been dropping down a bit. Was just noticing that the other day. I'm based in Seattle.


Thanks! I can't imagine that being good for a lot of the bootcamp grads.


I think many and perhaps most poor estimates are caused by initial estimates being viewed as too high for the project: instead of deciding the project isn't worth doing at its estimated cost, people decide the estimates must be wrong, in order to align expected project cost with expected project value.

Perhaps in a twisted future where we estimate project cost before deciding which projects to take on, we might discover our estimates are much better.

A related pathology is trading technical debt for speed, every time, on every project. The debt will be paid.


Well, exactly. If I understand the paper, the only condition you need in order to arrive at accurate estimates is the absence of pressure to underestimate.


You've worked on one of those projects too, eh? I call them pony projects, as in I want a fucking pony...

People have a recurring delusion that the world is shaped by their wants.


Generally the advice is to break out GPA by major courses and overall (e.g. 4.0 GPA in major, 3.43 overall).


I'm occasionally a hiring manager for engineers, and yes, it's very possible to get a job without a tech degree. Tech interviewing usually has four major steps: 1) sourcing candidates, 2) filtering resume candidates, 3) technical phone screen, 4) in-person interview.

For most companies, having a degree only matters in the first two phases, and ability/interviewability matters in the last two.

Experienced engineers avoid getting filtered out in the first two phases by working through their network, which allows them to skip those phases entirely. If you're trying to break in without any experience, it's much harder because you probably don't have a network and degrees are often used as a filter during candidate discovery and resume filtering (especially when the engineering manager is working with a recruiter).

My thought would be to proactively send your resume directly to a bunch of job postings, especially ones which go to a "jobs@$company.com". Anecdotally, I know I don't get many direct resumes these days, so I'd end up reading them, skipping any explicit filtering.


For a bit more context behind what we've done here, we also have a blog post at http://about.digg.com/blog/sifting-for-diamonds-with-the-dig... .

I know a few of us read HN as well, and would be glad to answer any questions on the Newswire product or implementation (in brief: Redis).
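To unpack "in brief: Redis" a little: the natural fit for this kind of feature is a score-ordered set of stories, which maps directly onto a Redis sorted set (ZADD to submit, ZREVRANGE to read the top of the wire). The sketch below is simplified and illustrative, not the actual Newswire schema or scoring; it just mirrors the sorted-set pattern in memory.

```python
# Illustrative in-memory stand-in for one Redis sorted set. The class
# name, method names, and scoring are all invented for this example.

class NewswireSketch:
    def __init__(self):
        self._scores = {}  # story_id -> score, like a Redis sorted set

    def submit(self, story_id, score):
        # Redis equivalent: ZADD newswire <score> <story_id>
        # (re-submitting just updates the story's score)
        self._scores[story_id] = score

    def top(self, n):
        # Redis equivalent: ZREVRANGE newswire 0 n-1
        # Highest-scored stories first.
        return sorted(self._scores, key=self._scores.get, reverse=True)[:n]
```

The appeal of the real Redis version is that both operations are cheap (O(log N) insert, O(log N + M) range read) and the ordering is maintained by the store itself, so the front-end just reads the top of the set.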


All due respect, I couldn't gather from http://digg.com/newswire what exactly Newswire is / does / is different from. You can't rely on a blog post to explain a site feature.

