Hacker News | ceteia's comments

Nitpicking:

https://github.com/thelowsunoverthemoon/mahler.c/blob/4ebfe8...

Should that type have been mah_acci instead of int? mah_acci doesn't seem to be used anywhere.

Also, have you considered using (edit) designated initializer syntax for some of the function calls that take in structs as arguments?

https://cppreference.com/w/c/language/struct_initialization....

    struct mah_scale scale = mah_get_scale(
        (struct mah_note) {
            .tone=MAH_C,
            .acci=MAH_NATURAL,
            .pitch=4
        },
        &MAH_BLUES_SCALE, notes,
        MAH_ASCEND,
        NULL
    );


Ah yes, that is by design. An int is used because the library supports more than a double sharp / double flat (e.g. triple sharp, quadruple sharp, etc.). The enums are there just for the most common values, so it's easier to read.

As for the second point, that's a good one! It would definitely improve readability, thanks.


Would educating people, and giving them more options for information, not be better than banning access to information?


If educating people worked there wouldn’t be any obese people, or drunkards, or smokers, druggies, gamblers, people addicted to doomscrolling or video games or ragebait "news" or…

Education is as useful as preaching abstinence at horny teenagers instead of providing access to contraceptives


> If educating people worked there wouldn’t be any [bad stuff]

I think you're confusing "works" and "works perfectly."

Education works. It doesn't work perfectly.


Cause versus correlation: education gives you options, but it always comes down to a choice. I know where the donuts lead, but I choose to eat two anyway.

Education doesn't cause good choices, but it is sometimes correlated with better situations. The difference between the criminals in prison and the ones in the C-suite is only education.


> If educating people worked there wouldn’t be any obese people, or drunkards

This assumes that a) everyone is the same, and b) education always works. Matthew Perry explained that this is not the case: some people respond differently to drugs. Whether these people are educated or not changes very little. Education helps, but not in a way that can bypass physiological factors completely.

> Education is as useful as preaching abstinence at horny teenagers instead of providing access to contraceptives

Education can still help. For instance, I decided very early on that the best way to avoid e.g. addiction is to not "give in and try once". So I never tried drugs (OK, OK, I did drink a beer occasionally). This was the much simpler and easier strategy to pursue, simply via avoidance behaviour.

Thus I disagree with the premise "if educating worked" - people will always respond differently to drugs. And they will have different strategies to cope with things too - some strategies work, others don't. One cannot generalize this.


Many people believe their mind is a passive reflection of reality, and thus that any change that happens to the mind is infallible by definition. I wonder how they can possibly resist addiction with such a mindset.


Clearly education doesn't work, so Europe must ban any speech concerning fattening foods, drinking alcohol, smoking, drugs, gambling, upsetting news and video games.

If you oppose these speech bans... Why, you're as silly as a preacher telling teens not to fuck!


Oh my, that is a depressing view on the human condition.


But can't you then set up a system such that if a person only picks one source or a few sources, and that turns out to be bad, it primarily impacts only them negatively? Letting it be their own responsibility?


That depends on what "education" entails. If it's only one source, the chances of it being propaganda are high.


Intuitively yes, but it's possible that this is one of our biases speaking.

From my memory (I might be mistaken), there have been attempts to somewhat study this via polls etc., which determined that coverage via propaganda (specifically Fox News) is less helpful than randomly guessing what actually happened...

But YMMV; social studies are always hard to trust, because it's borderline impossible to prove cause and effect.


> From my memory (I might be mistaken), there have been attempts to somewhat study this via polls etc., which determined that coverage via propaganda (specifically Fox News) is less helpful than randomly guessing what actually happened...

Ironically the studies of that nature are often themselves a form of propaganda, because it's entirely straightforward to structure the study to produce your preferred outcome.

There is a well-known human bias where people use information they know to try to guess information they don't. If you're given three random people and the only thing anyone has told you about them is that one is a drug addict and then you're asked to guess which one is a thief, more people are going to guess the drug addict. So now all you have to do is find a situation where the thief isn't actually the drug addict, let the media outlet tell people which one is the drug addict, and you'll have people guessing the wrong answer a higher proportion of the time than they would by choosing at random.


People need to decide on their own, so I am against censorship.


In this thread, which comment gave you the impression they were in favor of censorship?

I hope it's not me, whom you responded to, because I cannot fathom how you could've gotten that impression considering my phrasing... What's up with this topic getting so many people arguing via complete strawmen?


[flagged]


Your phrasing implies someone spoke out against that, but nobody did?


What if educating people takes decades and lies can be prompted in a few minutes?


Then you failed at education if a prompt can undo decades of education.

And the failure of education was an intentional feature, not a bug: the government wants obedient tax cattle that will easily accept its propaganda at elections, not freethinkers who question everything and might notice its lies and corruption.

It's like building a backdoor into your system thinking you're the only one who gets to use it for the upper hand, but then throwing fits when everyone else uses your backdoor to defeat you.


What if it's easier to call opposing viewpoints "lies" than it is to defend yours?


For real... the species is not going to last long if a subset of it gets to control the information flow of the other part... literally unsustainable


Enum matching is one of the better aspects of Rust.
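As a minimal illustration (the enum and names here are purely hypothetical, not from any particular codebase):

```rust
// A tagged union ("enum with payloads"): each variant carries its own data,
// and `match` must handle every variant or the program will not compile.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let r = Shape::Rect { w: 3.0, h: 4.0 };
    assert_eq!(area(&r), 12.0);
}
```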


> When I update the rust compiler, I do so with very little fear. My code will still work. The rust stdlib backwards compatible story has been very solid.

This is not always true, as seen with rustc 1.80 and the time crate. While it only changed type inference, that still caused some projects like Nix a lot of trouble.


That caused compilation errors though, which are alright in my book, and don't increase my fear to update.

Silent runtime changes are what spook me and what I've gotten more often with Go.


> No, we don't. All of the ones we have are heavily leveraged in Chromium or were outright developed at Google for similar projects. 10s of billions are spent to try to get Chromium to not have these vulnerabilities, using those tools. And here we are.

Chromium is filled with sloppy and old code. Some of the source code (at least if dependencies are included) is more than 20 years old, and a lot of focus has been on performance, not security.

Using Rust does not necessarily solve this. First, performance-sensitive code can require 'unsafe', and unsafe allows for memory unsafety, thus going back to square one, or further back. And second, memory safety isn't the only source of vulnerabilities. Rust's tagged unions and pattern matching help a lot with general program correctness, however, and C++ is lagging behind there.


> Chromium is filled with sloppy and old code. Some of the source code (at least if dependencies are included) is more than 20 years old, and a lot of focus has been on performance, not security.

Chromium is also some of the most highly invested in software with regards to security. Literally entire technologies that we now take for granted (seccomp-ebpf comes to mind) exist to make Chrome safe. Sanitizers were a Google project that Chromium was an aggressive adopter and contributor towards. I could go on.

> Using Rust does not necessarily solve this. First, performance-sensitive code can require 'unsafe', and unsafe allows for memory unsafety, thus going back to square one, or further back.

This isn't really true? I have no idea what "further back" means here. The answer seems to just be "no". Unsafe does allow for memory unsafety, but it's hilarious to me when people bring this up, tbh. You can literally `grep unsafe` and ensure that your code in that area is safe using all sorts of otherwise insanely expensive means. Fuzz that code, ensure coverage of that code, run `miri`, which is like a sanitizer on steroids, or literally formally verify it. It's ridiculous to compare this to C++, where you have no "grep for the place to start" capability. You go from having to think about 10s of millions of lines of code that hold a state space vastly greater than the number of particles in this universe 100000000x over, to a tiny block.
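For illustration, a minimal hedged sketch (a hypothetical function, not from Chromium or any real codebase) of how a single `unsafe` block stays small, greppable, and auditable behind a safe API:

```rust
// The bounds invariant is established by safe, caller-visible code, so the
// one unchecked access can be audited (or run under Miri) in isolation.
fn first_or_zero(v: &[u32]) -> u32 {
    if v.is_empty() {
        return 0;
    }
    // SAFETY: `v` is non-empty (checked above), so index 0 is in bounds.
    unsafe { *v.get_unchecked(0) }
}

fn main() {
    assert_eq!(first_or_zero(&[7, 8, 9]), 7);
    assert_eq!(first_or_zero(&[]), 0);
}
```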

With the level of investment that Google puts into things like fuzzing, Rust would have absolutely made this bug harder to ship.

> And second, memory safety isn't the only source of vulnerabilities.

It's the source of this one and every ITW Chromium exploit that I can recall off of the top of my head.


[flagged]


Please be more specific about where you think the parent commenter is getting the basics wrong. A single quoted sentence will do.


I'll need you to be much more specific. I'm actually quite familiar with Rust, having worked with it since 2015, having spoken at the first RustConf, having written in it professionally, having worked on a team that did vulnerability research with a highly hardened Rust codebase[0] in which `unsafe` usage introduced a vulnerability, etc.

If you'd like me to construct a formal syllogism to communicate my points then I might be able to oblige. You first, though, because I find it utterly ridiculous to complain about "logical errors" in what I've written when your post made numerous unsupported or seemingly irrelevant claims.

[0] https://web.archive.org/web/20221001182026/https://www.grapl...


https://materialize.com/blog/rust-concurrency-bug-unbounded-...

Edit: Replying to ghusbands:

'unsafe' is a core part of Rust itself, not a separate language. And it occurs often in some types of Rust projects or their dependencies. For instance, to avoid bounds checking without relying on compiler optimizations, some Rust projects use Vec::get_unchecked, which is unsafe. One occurrence in code is here:

https://grep.app/pola-rs/polars/main/crates/polars-io/src/cs...

And there are other reasons than performance to use unsafe, like FFI.

Edit2: ghusbands had a different reply when I wrote the above reply, but edited it since.

Edit3: Y Combinator prevents posting relatively many new comments in a short time span. And ghusbands is also wrong about his answer not being edited: it was, without him making that clear.


Those kinds of arguments are like posting news about people still dying while wearing seat belts and helmets, ignoring the lives that were saved by having them on.

By the way, I have been having these kinds of arguments since Object Pascal, back when using languages safer than C was called straitjacket programming.

Ironically, most C wannabe replacements are Object Pascal/Modula-2-like in the safety they offer, except we know better 40 years later for the use cases those languages still had no answer for.


People made similar arguments regarding C++ versus Ada. The US military and defense industry even got something like a mandate in the 1990s to only write in Ada.

And then there was https://en.wikipedia.org/wiki/Ariane_flight_V88 , where US$370 million was lost. The code was written in Ada.

And seat belts and helmets do not help in those cases where 'unsafe' is used to take them off. And that is needed in Rust in a number of cases, such as some performance-sensitive code.


Yes, people like to point out the Ariane explosion without going into the details, while missing the much worse F-35 budget explosion, with ridiculous failures like having to reboot its avionics in flight.

It is like bringing up the news of that lucky soul who only survived a car crash because they were thrown out of the car and managed to land in such a way that they survived, survival statistics be damned.


Wasn't the F-35 budget "explosion", or overruns, caused in general by mismanagement? But I will not argue that C++ is perfect. Instead, https://en.wikipedia.org/wiki/Ariane_flight_V88 , where US$370 million was lost with code written in Ada, is an example where Ada was presented as a safer language and was even mandated in the military industry, but turned out less well in practice.

Even proclaimed "safer" languages can have catastrophic failures, and one can suspect that they might even be less safe in practice, especially if they need mandates to be picked. Instead of Ada companies or other organizations lobbying to force industry to use their language, maybe it is better if there is free competition, and then the onus is on the software development companies to deliver high quality. Ada has improved since the 1990s, perhaps because it has been forced to compete fairly with C, C++ and other languages. Following that thinking, increased, not decreased, competition should be encouraged.

Your lucky soul analogy argument doesn't make any sense.


Yes, once you use 'unsafe' to bypass the safety model, you don't get safety.

Edit: If you reply with a reply, rather than edits, you don't get such confusion.


People also write Rust code that is not memory-safe.

https://materialize.com/blog/rust-concurrency-bug-unbounded-...


But not "routinely".


How can you be sure? When I looked at for instance sudo-rs, it proclaimed loudly that it is memory safe, but its code has lots of unsafe.

https://github.com/trifectatechfoundation/sudo-rs

https://grep.app/search?f.repo=trifectatechfoundation%2Fsudo...

And Miri is very popular in Rust. Even if a Rust project doesn't have unsafe, sometimes people still run Miri on it, since dependencies might have messed up their unsafe usage.


> but its code has lots of unsafe.

And every instance of unsafe that I could find (except one, in test-only code) was a call to libc with a clarifying comment on why this particular use was safe. That is, all (or at least, all of it that I could find) was wrapping an unsafe API with documented (and usually straightforward and local) invariants that maintain safety, such that the calling code is safe.
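As a sketch of that pattern (a hypothetical wrapper, not actual sudo-rs code): a libc call behind a safe Rust API, with a SAFETY comment documenting why this particular use is sound:

```rust
// FFI is the canonical non-performance reason for `unsafe`: calling into
// libc requires it, even when the call itself is trivially safe.
extern "C" {
    fn abs(input: i32) -> i32; // from libc
}

fn safe_abs(x: i32) -> i32 {
    // SAFETY: libc `abs` is undefined only for i32::MIN (the result is not
    // representable); that one case is handled before the call.
    if x == i32::MIN {
        i32::MAX
    } else {
        unsafe { abs(x) }
    }
}

fn main() {
    assert_eq!(safe_abs(-7), 7);
    assert_eq!(safe_abs(i32::MIN), i32::MAX);
}
```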

I'd say that the fact that Miri's trophy-shelf[0] has 39 entries, is 7 years old, and is still regularly updated is a pretty good indicator that memory bugs are sufficiently rare in Rust as to be notable. That is the opposite of "regular".

[0]: https://github.com/rust-lang/miri/blame/master/README.md


[flagged]


> A comment does not automatically make code safe. What even is that argument? There have directly been examples of Rust code with SAFETY comments that later were found to be memory unsafe.

I did not make this argument. I encourage you to reread my comment and do so with the HN guidelines in mind!


The vast majority of Rust code out there doesn't use the `unsafe` keyword at all, and the vastly smaller amount of unsafe code that exists allows for focused and precise testing and verification. You really have no idea what you're talking about if you're trying to say that Rust is anywhere in the ballpark of C or C++ here.


Indeed, unsafe Rust is overall more difficult than C++, as one speaker at a Rust conference claimed: https://lucumr.pocoo.org/2022/1/30/unsafe-rust/

And you are being neither honest nor accurate here.


Does the Rust implementation not use any unsafe and does not use libraries using unsafe?


No. What would be the point of that?


Not Firefox, but Servo has quite a lot of unsafe, even though some of the results are false positives.

https://grep.app/search?f.repo=servo%2Fservo&f.repo.pattern=...

So Servo at the very least cannot be said to be 'safe'. And I believe the Rust code in Firefox is similar.


Rust projects generally use licenses like MIT instead of GPL, and thus some major corporations support Rust a lot, so Rust will continue getting more popular.


Growing in absolute numbers doesn't mean growing the market share, and even a growing market share is not necessarily sufficiently fast growth to become a safe bet. All languages that ended up becoming very popular grew their market share much faster than Rust does. Being an old language with some real market share is obviously better than being an old language with negligible market share, but being an old language with real, but small, market share is not exactly a sign of confidence.

It's true that the total market share for low level languages (C + C++ + Rust + Zig + others) continues declining, as it has for a couple of decades now (that may change if coding agents start writing everything in C [1] but it's not happening yet), but that's all the more reason to find some "safe bet" within that diminishing total market. Rust's modest success is enough for some, but clearly not enough for many others to be seen as a safe bet.

[1]: https://stephenramsay.net/posts/vibe-coding.html


Do you know of a good way to measure market share? I know of GitHub's and StackOverflow's surveys, but I'm not sure how well they reflect reality. There is also Redmonk.

GitHub's survey did not say much about Rust, I think, despite Rust projects often having lots of stars. Rust projects might have a greater stars-to-popularity ratio than projects in other languages, though.

StackOverflow's survey was much more optimistic or indicated popularity for Rust.

Redmonk places Rust in 19th place.


The best sources are industry studies by market research companies that collect information from companies. The best public sources, IMO, are those based on job openings (as jobs correlate more with total number of lines of code than sources based on number of repos, PRs, or questions). Some of these are about a year out-of-date:

https://www.devjobsscanner.com/blog/top-8-most-demanded-prog...

https://uk.indeed.com/career-advice/career-development/codin...

https://www.itransition.com/developers/in-demand-programming...

https://www.hackerrank.com/blog/top-developer-skills-in-2025...

Viewing these numbers through an optimistic or pessimistic lens is a matter of perspective and, of course, no one knows the future. But when you compare Rust, which is a middle-aged language now, to how languages that ended up "making it" were at the same age, the comparison is not favourable.


The first genuinely usable version of Rust was only released in late 2018. Rust is a very new language still.


Except you could say something similar about the first few years of every language that became very popular, and the comparison would still not be in Rust's favour.


The GitHub project has some activity at least, and they might be coming with some announcement later this year.

https://github.com/carbon-language/carbon-lang/


There is an announcement already planned at NDC Toronto 2026.

> Carbon: graduating from the experiment

https://ndctoronto.com/agenda/carbon-graduating-from-the-exp...

As for it being widely adopted, people keep missing the point that Carbon is mostly for Google themselves, as a means to integrate into existing C++ projects.

They are the very first ones to assert that for green-field projects there are already plenty of safe languages to choose from.


What concerns me is that the design of Carbon already seems to have serious issues in some respects.

In case you are well familiar with, for instance, pattern matching: might you have any opinions on the pattern matching that is currently proposed for Carbon?

https://docs.carbon-lang.dev/docs/design/pattern_matching.ht...


I am not a Google employee, so I don't care where they take Carbon, other than being a technology nerd who had compiler design as one of the areas I majored in.

Regarding the linked pattern matching proposal, it seems alright to me; not everything has to be ML-like.


Are you really OK with runtime "expression patterns"?

    match (0, 1, 2) {
      case (F(), 0, G()) => ...
    }
> Here (F(), 0, G()) is not an expression, but three separate expressions in a tuple pattern. As a result, this code will call F() but not G(), because the mismatch between the middle tuple elements will cause pattern matching to fail before reaching G(). Other than this short-circuiting behavior, a tuple pattern of expression patterns behaves the same as if it were a single expression pattern.

How would that work with exhaustiveness checking? As far as I can tell, they themselves believe that Carbon's exhaustiveness checking will be very poor.

And OK with implicit conversions? Especially when combined with their way of handling templates for pattern matching?
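To make the exhaustiveness point concrete, a hedged Rust sketch for contrast (Carbon's actual semantics are its own; the enum here is purely illustrative):

```rust
// Exhaustiveness checking: delete any arm below and the compiler rejects
// the `match` with "non-exhaustive patterns". Runtime expression patterns
// make this guarantee hard, since the compiler cannot know which values
// an arbitrary expression covers.
enum Signal {
    Red,
    Yellow,
    Green,
}

fn advice(s: Signal) -> &'static str {
    match s {
        Signal::Red => "stop",
        Signal::Yellow => "slow down",
        Signal::Green => "go",
    }
}

fn main() {
    assert_eq!(advice(Signal::Yellow), "slow down");
}
```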


As mentioned, I have no interest in ever using Carbon; the language still isn't 1.0, and a full end-to-end compiler is yet to be made available.

