Hacker News | dap's comments

I’m curious what you think an abstraction is. Even running “ls” involves several layers of abstraction: a shell, a process (abstracts memory), a thread (abstracts CPU)… you think it would be simpler if you had to deal with all that to list a directory (another abstraction)? Even bits are an abstraction over analog voltage levels.

You're taking it out of context. I'm specifically referring to abstractions introduced in the codebase to maximize code reuse, as per the OP's comment.

I don't think these things are as different as you think. I started at "ls" and worked down. If you work up, you get things like a "socket", an "object" within a programming language, a "linked list" in a standard library, an "HTTP client" within an application-level package. You can keep going up and rattle off lots of useful abstractions in application-level code.

There are certainly _bad_ abstractions that ought not to exist, which I think is what you're getting at. There are poorly built abstractions, and leaky abstractions. But abstraction itself isn't the problem -- abstraction is what allows us to build anything at all without being crushed by the sheer complexity.


You're conflating system level abstractions with code-based abstractions. As a counter-example, introducing a factory constructor to handle object creation makes the codebase harder to understand.

Is the MET right? They launched about 29 hours ago but it says 1d18h


Having used Rust professionally for six years now, I share your fear. Like many of the commenters below, coherence just hasn't been a big problem for me. Maybe there are problem spaces where it's particularly painful?

How does the Rust language team weigh the benefits of solving user problems with new language features against the resulting increased complexity? When I learned Rust, I found it to be quite complex, but I also got real value from most of the complexity. But it keeps growing and I'm not always sure people working on the language consider the real cost to new and existing users when the set of "things you have to know to be competent in the language" grows.


You can look at the discussions in any of the language RFCs to see that increased complexity is one of the recurring themes that get brought up. RFCs themselves have a "how do we teach this?" section that, IMO, makes or breaks a proposal.

Keep in mind that as time goes on, the features being introduced will be more and more niche. If you could do things in a reasonable way without a new feature, the feature wouldn't be needed. That doesn't mean that everyone needs to learn every feature; only the people who need that niche have to know about it, as long as: 1) it interacts reasonably with the rest of the language, 2) its syntax is either obvious or easy to search for and memorable, so you don't have to look it up again, and 3) it is uncommon enough that you won't see it pop up when looking at a random library.


Thanks for the context. That makes a lot of sense! Those three constraints seem pretty important and a useful way to think about the problem.


I've never once pulled in a new dependency and had the program fail to compile just by virtue of that dependency being present [because my code and the new code both impl'd the same trait on the same type in some other code]. That can't happen, because of coherence. (Right?)

It's so easy to forget about the problems we don't have because of the (good) choices people have made in the past.
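To make the guarantee concrete, here's a minimal sketch of the coherence ("orphan") rule: you may implement a trait for a type only if the trait or the type is local to your crate, so two independent crates can never supply conflicting impls for the same pair. The `Describe` trait below is a made-up example, not from any real library.

```rust
// Local trait: we're free to implement it even for foreign types.
trait Describe {
    fn describe(&self) -> String;
}

// OK: local trait `Describe` for the foreign (std) type `Vec<i32>`.
impl Describe for Vec<i32> {
    fn describe(&self) -> String {
        format!("a Vec with {} elements", self.len())
    }
}

// Rejected (error E0117) if uncommented, because both the trait and the
// type are foreign -- this is the impl a dependency also can't sneak in:
//
// impl std::fmt::Display for Vec<i32> { /* ... */ }

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(v.describe(), "a Vec with 3 elements");
    println!("{}", v.describe());
}
```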


> Because that can't happen because of coherence. (Right?)

yes

Though you can still run into it when unsafe is involved, e.g. C FFI/no_mangle or ASM with non-mangled labels, since those are globally unique. Though IMHO it's not a common problem, and there are ways to make it very unlikely in the projects where it matters.

In the end, if you pull in C-FFI code (or provide it), you do open yourself up to C-ABI-specific problems.


I don’t think that sums up the post well. I would say:

The phrase “it turns out” has a surprising way of disarming skepticism in readers. While surprising, it’s easy to understand: the phrase suggests a story of the author’s journey of believing one thing, investigating, and finding something different. That’s more credible than simply saying the thing they now believe. The touch of vulnerability (admitting being wrong) avoids the reader’s ego getting defensive when asked to admit they were wrong. The net result is that it’s surprisingly easy to be convinced without any real argument.

I’ve noticed this as well (often in podcasts and programs like Radiolab) and I think it’s quite valuable to just be aware of it as a reader/listener, if you care about thinking critically about your own beliefs.


Yes, the post is evaluating a personal essay of subjective experiences as if it is an obscured mechanism for convincing people about factual claims.


> How long did humanity survive without vaccines for _everything_? Oh that's right.

Is this a trick question? Humanity survived by having enough people with enough other useful traits (like thinking, including the ability to reason about disease and how to prevent it) to overcome the numbers lost to disease. Humans died to disease in enormous numbers.

> nor that they're all good for _me_ as an individual.

Herd immunity presents a real challenge to the idea that people should generally be allowed to make their own choices. One's choice here affects everyone else in a minuscule way that nonetheless adds up to many thousands of lives saved. I'm not sure what the answer is, but I generally come down on the side of: if a democratic process creates rules requiring us all to be immunized for the common good, that's okay with me.


> One's choice here affects everyone else

You still owe me a court trial if you want to act on that in a way that reduces my rights. Prove that my individual choices are affecting anyone.

> if a democratic process creates rules requiring us all to be immunized for the common good, that's okay with me.

Drinking is universally a harm. We should ban alcohol. It's for the common good, obviously, and there are zero arguments against this. Why do we allow drinking? At the very least we should ban _public_ drinking. There's no sense in socially allowing this to occur.


> Drinking is universally a harm. We should ban alcohol.

The actions that cause possible bad societal harms from drinking alcohol are indeed banned or heavily penalized. Drinking and Driving. Public Intoxication. Domestic Abuse. Child Endangerment and more.


It destroys your liver. Which one of those actions prevents that? Where is drinking to excess prevented? Why do people still die in vehicle accidents caused by alcohol?

Really. We should just stop selling it. It's insane that you think you can write a set of rules that somehow prevents harm. It merely manages the consequences of the harms. Your court cases cannot bring back the dead.

Why do we tolerate this yet take a hard line stance on far less important issues?


I can’t tell if you’re serious or not… there was that whole Prohibition thing, back in the day.


I can't tell if you're interested or not. I simply disagree with you. If you'd like to probe those differences I'm happy to oblige. If your only effort is to be dismissive then I find that rather rude.

How did Prohibition work out? Is it still going? /Why not?/


"Prohibition failed because it created a massive illegal market, fueling organized crime, widespread corruption, and disrespect for the law, while failing to stop drinking, leading to dangerous bootleg alcohol and lost tax revenue, ultimately causing public support to collapse and leading to its repeal in 1933."


Herd immunity isn't on its own enough to justify coercion of medical interventions.

You might want to read up on the principle of informed consent: https://en.wikipedia.org/wiki/Medical_ethics#Informed_consen...

> After receiving and understanding this information, the patient can then make a fully informed decision to either consent or refuse treatment.

You are overly simplifying vaccines as if they do not affect individuals individually. They absolutely do, for so many reasons, like allergies. But even if that wasn't the case, _all_ vaccines carry some risk/benefit tradeoff, and each individual is entirely in their right to weigh this for themselves.

Also did we learn nothing from covid?


There is important truth in your post, yet you seem to miss the really important pieces that make this hard.

> It's the parents obligation to educate their child.

> It's the child's obligation to use that education wisely.

Two obvious things complicate this:

- You weren't taught how to use a real gun at 6 months old, right?

- Would it not follow from what you said above that if you had accidentally shot and killed yourself at age 7, then it would be your own fault and nobody else's? That seems (to me, at least) like an absurd conclusion.

I think about it like this: as a parent, my jobs include identifying when my child is capable of learning something new, providing the guidance they need to learn it (which is probably not all up front, but involves some supervision, since it's usually an iterative process), allowing them to make mistakes, accepting some acceptable risk of injury, and preventing catastrophe.

I'll use cooking as an example. My kids got a "toddler knife" very young (basically a wooden wedge that's not very sharp). We showed them how to cut up avocados (already split) and other soft things. As they get older, we give them sharper knives and trickier tasks. We watch to see if they're understanding what we've told them. We give more guidance as needed. It's okay if they nick themselves along the way. But we haven't given them a sharpened chef's knife yet!

And if they'd taken that toddler knife and repeatedly tried to jam it into their sibling's eye despite "educating" them several times, while I wouldn't regret having made the choice to see if they were ready, I would certainly conclude that they weren't yet ready. That's on me, not them.

You allude to this when you say:

> I am very much for showing kids how to use the internet responsibly, but I'm not of the opinion that parental controls are particularly desirable beyond an initial learning period.

Yes, the goal should be to teach kids how to operate safely, not keep them from all the dangerous things. But I'd say that devices and the internet are more like "the kitchen". There are lots of different risks there and it's going to take many years to become competent (or even safe). Giving them an ordinary device would be like teaching my 2-year-old their first knife skills next to a hot stove in a restaurant kitchen with chefs flying around with sharp knives and hot pots. By contrast, without doing any particular child-proofing, our home kitchen is a much more controlled environment where I can decide which risks they're exposed to when. This allows me to supervise without watching every moment to see if they're about to stab themselves -- which also gives them the autonomy they need to really learn. The OP, like other parents, wants something similar from their device and the internet: to gradually expose elements of these things as the parents are able to usefully guide the children, all while avoiding catastrophe.


I inferred that they’re referring to the fact that in typical C, the compiler must have seen a function earlier in the file before you can call it. One solution (which the author doesn’t like) is to put the leaf functions first so that they’re already defined when the compiler sees their callers. The author seems to be ignoring the alternative approach: declaring functions at the top and then writing them in the top-down order that they like.


Doesn't this depend a lot on how long your actions run? Like, you may have already invested in your own hardware (maybe because your actions use a lot of resources and it's cheaper) and now you have to pay per-minute of action runtime for the API that does the bookkeeping?


It is, although you can have sharded PostgreSQL, in which case I agree with your assessment that you want random PKs to distribute them.

It's workload-specific, too. If you want to list ranges of them by PK, then of course random isn't going to work. But then you've got competing tensions: listing a range wants the things you list to be on the same shard, but focusing a workload on one shard undermines horizontal scale. So you've got to decide what you care about (or do something more elaborate).
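A tiny sketch of that tension, assuming hash-based shard routing (the shard count and keys below are purely illustrative): random PKs hash to arbitrary shards, which spreads writes evenly but forces a range listing to fan out across every shard.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Assumed shard count, purely for illustration.
const NUM_SHARDS: u64 = 4;

// Route a row to a shard by hashing its primary key. With random PKs,
// writes spread evenly; but a scan over a PK range now touches many
// shards, which is the competing tension described above.
fn shard_for(pk: &str) -> u64 {
    let mut h = DefaultHasher::new();
    pk.hash(&mut h);
    h.finish() % NUM_SHARDS
}

fn main() {
    // Hypothetical random-looking keys: each lands on whatever shard
    // its hash picks, so a "list all rows in order" query can't stay local.
    for k in ["9f3c2a", "71bd08", "e402dc", "0a55f1"] {
        let s = shard_for(k);
        assert!(s < NUM_SHARDS);
        println!("pk {} -> shard {}", k, s);
    }
}
```

Range-partitioning by a sequential PK inverts the trade: listings stay on one shard, but that shard becomes the write hotspot.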

