
I've been using rolldown-vite for the past 3-4 months with absolutely no issues on a very large monorepo with SvelteKit, multiple TS services and custom packages.

Just upgraded to 8 with some version bumping. Dev server startup dropped to 1.5s from 8s, and build time to 35s from 2m30s. Really, really impressed.


I can access it fine? The site loads after a quick bot check.

Same, worked with an automatic bot check, not even a captcha.

Aha, OK. I see what's going on. For reasons to do with a yak-shaving process that was required for our HN moderation software to work, I use Chromium, and it seems that requests from Chromium user-agents are blocked on that server, perhaps due to a presumption that they're bots.

But yes, as tptacek points out, proposed legislation is off-topic here, unless it's effectively certain it will pass.


I mirror what you're saying, but non-gentle parenting along with corporal punishment can take a strong psychological toll on a child (speaking from experience). It took me quite a while to find my voice because I had a not-so-great childhood, all in the name of obedience and discipline.

But I also see the new generation of parents taking gentle parenting to an extreme, raising feral iPad kids who call their moms a bitch.

I really don't think there's an optimal framework for proper parenting but a firm hand delivered gently and logically seems to be a good starting point.


I didn't say anything about corporal punishment. I believe it works, but everyone should use discipline specific to the child, IMO.

Kid isn't supposed to make a hotspot but does anyway? Lost that phone. Kid is playing games on the PC? Remove the games. Etc.

If the children aren't listening to their father, the problem isn't android parental controls.


The buzzword you’re looking for is authoritative parenting. It’s about setting boundaries from love instead of fear and demonstrating mutual respect. It’s a lot easier to get a kid to listen to screen limits when they see you as a loving and responsible person, and especially when you have fun things to do with them off of a screen, and a real relationship with them.

Thanks for putting it into (much better) words haha. It sounds exactly like what I was trying to describe.

You can basically condense this entire "language" into a set of markdown rules and use it as a skill in your planning pipeline.

And whatever codespeak offers is like a weird VCS wrapper around this. I can already version and diff my skills and plans properly, and following that, my LLM-generated features should be scoped properly and worked on in their own branches. This IMO will just give people a reason to make huge 8k-10k line changes in a commit.


> And whatever codespeak offers is like a weird VCS wrapper around this.

I'm still getting used to the idea that modern programs are 30 lines of Markdown that get the magic LLM incantation loop just right. Seems like you're in the same boat.


Are we that far gone that "hand coding" is a term now? I hope there's an /s missing

I hope "hand coding" is an antonym for "convention coding" or something.

I’m guessing hand coding means, not vibe coding.

Did you use AI? .. Nah I hand coded it.


Real programmers use butterflies. https://xkcd.com/378/

I would really like to hear from people using Zig in production or semi-serious applications where software stability is important.

How's your experience with the constantly changing language? What do your update/rewrite cycles look like? Are there cases where packages you use fall behind the language?

I know Bun's using Zig with a degree of success; I was wondering how the rest were doing.


I maintain a ~250K LoC Zig compiler code base [0]. We've been through several breaking Zig releases (although the code base was much smaller for most of that time; Writergate is the main one we've had to deal with since the code base crossed the 100K LoC mark).

The language and stdlib changing hasn't been a major pain point in at least a year or two. There was some upgrade a couple of years ago that took us a while to land (I think it might have been 0.12 -> 0.13, but I could be misremembering the exact version), but it's been smooth sailing for a long time now.

These days I'd put breaking releases in the "minor nuisance" category, and when people ask what I've liked and disliked about using Zig I rarely even remember to bring it up.

[0]: https://github.com/roc-lang/roc


At this point do you believe porting the codebase to Zig was the right decision? Do you have any regrets?

Also, I'm excited about trying out your language, even more so than Zig. :)


What's the main value proposition of Roc? I found interesting tags (like symbols in Mathematica) and tags with payloads (like Python namedtuple or dataclasses). I haven't seen this elsewhere. Otherwise it seems quite typical (pattern matching is quite common, for example).

Example programs that you couldn't easily express in other languages?


Are you aware that your GitHub README doesn't actually tell us anything about what Roc is or why we might be interested?

This might be on purpose given the first words are "Work in progress" and "not ready for release", but linking as above does lose some value.


Instead of being rude, consider checking Roc's website: https://roc-lang.org/

He wasn't pitching the language directly, but linking to the codebase as that was what was relevant to the comment he was replying to.


If I missed on tone, that's on me. I was going for "helpful constructive feedback".

I've worked on two "production" zig codebases: tigerbeetle [0] and sig [1].

These larger zig projects will stick to a tagged release (which doesn't change), and upgrade to newly tagged releases, usually a few days or months after they come out. The upgrade itself takes like a week, depending on the amount of changes to be done. These projects also tend to not use other zig dependencies.

[0]: https://github.com/tigerbeetle/tigerbeetle/pulls?q=is%3Apr+a...

[1]: https://github.com/Syndica/sig/pulls?q=is%3Apr+author%3Akpro...


I really wanted to deep dive into Zig, but I'm into Rust now. I'm kinda late, as I really just started around 2024.

Have you tried Rust? How does it compare to Zig?

* just asking


Zig and Rust take two different philosophical approaches.

- Zig: let's have a simple language with as few footguns as possible and make good code easy to write. However, we value explicitness and allow the developer to do anything they need to do. C interoperability is a primary feature that is always available. We have runtime checks for as many areas of undefined behaviour as we can.

- Rust: let's make the compiler the guardian of what is safe to do. Unless the developer hits the escape hatch, we will disallow behaviour to keep the developer safe. To allow the compiler to reason about safety, we will have an intricate type system containing concepts like lifetimes and data mobility. This will get complex sometimes, so we will have a macro system to hide that complexity.

Zig is a lot simpler than Rust, but I think it asks more of its developers.
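To make the "compiler as guardian" side concrete, here is a minimal, hypothetical Rust sketch (function and variable names are my own, not from the thread): borrows are checked at compile time, and a move ends a value's usable life.

```rust
// Borrowing leaves ownership with the caller; a move transfers it.
fn shout(msg: &str) -> String {
    // `msg` is only borrowed here, so the caller can keep using its String.
    msg.to_uppercase()
}

fn main() {
    let s = String::from("hello");
    let loud = shout(&s); // shared borrow: `s` remains usable afterwards
    println!("{} -> {}", s, loud);

    let t = s; // move: ownership transfers from `s` to `t`
    // println!("{}", s); // would NOT compile: `s` was moved out
    println!("{}", t);
}
```

The commented-out line is the point: the "escape hatch" aside, the compiler simply refuses code that uses a moved-out value, instead of letting it become a runtime bug.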


> However we value explicitness and allow the developer to do anything they need to do*

* except for having unused variables. Those are so dangerous the compiler will refuse the code every time.


They are indeed dangerous, and I think this is a pretty good example of why.

https://andrewkelley.me/post/openzfs-bug-ported-zig.html


Don't know if it's still on the table, but Andrew has hinted that the unused-variables error may in the future still produce an executable artefact while returning a nonzero exit code from the compiler. And truly fatal errors would STILL produce an executable artefact too, just one that prints "sorry, this compilation had a fatal error" to stdout.

It’s hard to say that one needs unused variables.

If I comment out sections of code while debugging or iterating, I don't want a compile error for some unused variable or argument. A warning, fine, but this happens to me so frequently that the idea of unused variables being an error is insane to me.

It is insane and you are completely right. This has been a part of programming for over 50 years. Unfortunately you aren't going to get anywhere with zig zealots, they just get mad when confronted with things like this that have no justification, but they don't want to admit it's a mistake.

But even the solutions would be so trivial - have a separate 'prod' compiler flag. With that, make these errors, without make these warnings.

Problem solved, everyone happy.


I think the plan is to make no distinction between error and warning, but have trivial errors still build. That said, I wouldn't be surprised if they push that to the end, because it seems like a great ultrafilter for keeping annoying people out so they don't try to influence the language.

You are right of course, the solution is trivial.

They also made a carriage return crash the compiler so it wouldn't work with any default text files on windows, then they blamed the users for using windows (and their windows version of the compiler!).

It's not exactly logic land, there is a lot of dogma and ideology instead of pragmatism.

Some people would even reply how they were glad it made life difficult for windows users. I don't think they had an answer for why there was a windows version in the first place.


I'm not sure why you wouldn't make your compiler accept CRs (weird design decision), but fixing it on the user side isn't exactly hard either. I don't know an editor that doesn't have an option for using LF vs CRLF.

The unused variable warning is legitimately really annoying though and has me inserting `_ = x;` all over the place and then forgetting to delete it, which is imo way worse than just... having it be a warning.
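For comparison, a small illustrative Rust sketch (my own example, not from the thread) of the warning-based middle ground being argued for: unused bindings warn rather than error, and a leading underscore opts out per variable, which is less invasive than sprinkling `_ = x;` statements.

```rust
fn compute() -> i32 {
    let used = 1;
    let _scratch = 2; // underscore prefix: "intentionally unused", no warning
    // let temp = 3;  // would still compile, with a warning rather than a hard error
    used
}

fn main() {
    println!("{}", compute());
}
```

The design difference is small but matters while iterating: commented-out code paths leave warnings behind instead of stopping the build.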


> I don't know an editor that doesn't have an option for using LF vs CRLF.

And I don't know any other languages that don't parse a carriage return.

The point is that it was intentionally done to antagonize windows even though they put out a windows version. Some people defend this by saying that it's easy to turn off, some people defend it by saying windows users should be antagonized.

No zig people ever said this was a mistake, it was all intentional.

I'm never going to put up with behavior like that with the people making tools actively working against me.


> And I don't know any other languages that don't parse a carriage return.

fair enough.


Same for kernel drivers

Rust is a Bugatti Veyron, Zig is a McLaren F1.

> as few footguns as possible

There are no destructors so all the memory ownership footguns are still there.


sure, but when I've written zig this has never been an issue for me. `defer` makes memory management really easy.

If you want to auto-generate destructors, zig has really good comptime features that can let you do that.


defer is still something you have to consciously put in every time, so it destroys the value semantics that C++ has, which is the important part. You don't have to "just write defer after a string"; you can just use a string.

The 'not a problem for me' is what people would say about manual memory in C too. Defer is better but it isn't as good as what is already in use.
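A minimal sketch of the RAII alternative being described, in Rust (the `Guard` type and counter are hypothetical, for illustration): cleanup is attached to the value itself via `Drop`, so it runs automatically at end of scope with no per-call-site `defer`.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical counter so the effect of Drop is observable.
static DROPPED: AtomicUsize = AtomicUsize::new(0);

struct Guard {
    name: &'static str,
}

impl Drop for Guard {
    fn drop(&mut self) {
        // Runs automatically when the Guard leaves scope, like a C++
        // destructor; the caller never writes explicit cleanup.
        DROPPED.fetch_add(1, Ordering::SeqCst);
        println!("released {}", self.name);
    }
}

fn use_resource() {
    let _g = Guard { name: "file handle" };
    println!("working with {}", _g.name);
    // cleanup happens here, automatically, even on early return or panic
}

fn main() {
    use_resource();
    println!("drops so far: {}", DROPPED.load(Ordering::SeqCst));
}
```

That is the "value semantics" point: the cleanup travels with the type, so forgetting a `defer` at one of fifty call sites stops being a possible bug.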


That's disingenuous. Rust tries to minimize errors, first at compile time and then at runtime, even at some discomfort to the programmer.

Zig goes for simplicity while removing a few footguns. It's more oriented towards programmer enjoyment. Keep in mind that programmers don't distinguish ease of writing code from ease of writing unforeseen errors.


Yes, I've written a few unsafe-focused crates [0], some of which have been modified & merged into the stdlib [1] [2] exposing them to the fringe edge-cases of Rust like strict provenance.

IMO, Rust is good for modeling static constraints - ideal when there's multiple teams of varying skill trying to work on the same codebase, as the contracts for components are a lot clearer. Zig is good for expressing system-level constructs efficiently: doing stuff like self-referential/intrusive data structures, cross-platform simd, and memory transformations is a lot easier in Zig than Rust.

Personally, I like Zig more.

[0] https://crates.io/users/kprotty

[1] https://github.com/rust-lang/rust/pull/95801

[2] https://github.com/rust-lang/rust/blob/a63150b9cb14896fc22f9...


Zig is a modern C,

Rust is a modern C++/OCaml

So if you enjoy C++, Rust is for you. If you enjoy C and wish it was more verbose and more modern, try Zig.


As someone who never liked writing anything C++ since 2000+ (did like it before) I cannot agree with this. C++ and Rust are not comparable in this sense at all.

One can argue Rust is what C++ wanted to be maybe. But C++ as it is now is anything but clean and clear.


See my other comment[1]

It replaces C++ for me, so I would say it's "a C++"

[1]: https://news.ycombinator.com/item?id=47334275


And in a world of only you, your claim is true.

I think the comparison is fair, strictly in the sense that both Rust and C++ are designed around extensible programming via a sort of subtyping (C++ classes, Rust traits), and similar resource management patterns (ownership, RAII), where Zig and C do not have anything comparable.

My take, unfortunately, is that Zig might be a more modern C but that gives us little we don’t already have.

Rust gives us memory safety by default and some awesome ML-ish type system features among other things, which are things we didn’t already have. Memory safety and almost totally automatic memory management with no runtime are big things too.

Go, meanwhile, is like a cleaner more modern Java with less baggage. You might also compare it to Python, but compiled.


Zig gives us things we really don't have yet: C + generics + good const eval + good build system + easy cross compilation + modern niceties (optionals, errors, sum types, slices, vectors, arbitrary bit packing, expression freedom).

Are there any other languages that provide this? Would genuinely consider the switch for some stuff if so.


It is kind of interesting that the Linux kernel is slowly adopting Rust, whereas Zig seems like it would be a more natural fit?

I know, timelines not matching up, etc.


Definitely not. Rust gives you a tangible benefit in terms of correctness. It's such a valuable benefit that it outweighs the burden of incorporating a new language in the kernel, with all that comes with it.

Zig offers no such thing. It would be a like-for-like replacement of an unsafe old language with an unsafe new one. May even be a better language, but that's not enough reason to overcome the burden.


Actually, that's not true at all. Zig offers you some more safety than C. And it also affords you a compiler architecture and stdlib that are so well designed you could probably bolt on memory safety relatively easily as a third-party static checker.

https://github.com/ityonemo/clr


"More safety than C" is an incredibly low bar. These are hygiene features, which is great, but Rust offers a paradigm shift. It's an entirely different ballpark.

Negative. For example, bounds checking is turned on by default in Zig, which prevents classes of overflow safety errors.
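Since Zig can't be tested here, a minimal Rust sketch of the same default (function name is illustrative): out-of-range indexing panics at runtime instead of reading out of bounds, and `.get()` surfaces the failure case as an `Option`.

```rust
// Bounds-checked lookup: None instead of undefined behaviour.
fn safe_lookup(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}

fn main() {
    let v = vec![10, 20, 30];
    println!("{:?}", safe_lookup(&v, 1)); // Some(20)
    println!("{:?}", safe_lookup(&v, 9)); // None
    // v[9] would panic at runtime rather than corrupt memory
}
```

This is the "sane default" category from the surrounding discussion: it catches a class of overflow bugs but is a runtime check, not the compile-time analysis the borrow checker performs.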

I don't think you've necessarily understood the scope and impact of the borrow checker. Bounds checking is just a sane default (hygiene), not a game changer.

I mean, I'm the author of this?

https://github.com/ityonemo/clr

so yes, I understand that it's important. It doesn't need to be in the compiler though? I think it's likely the case that you also don't need to have annotations littering the language.


I wish you good luck! Successive attempts to achieve similar levels of analysis without annotations have failed in the C++ space, but I look forward to reading your results.

Yeah, AFAIK you can't easily intercept C++ at a meaningful IR in the same way you can Zig. Zig's AIR is almost perfect for this kind of thing.

Memory safety by default in the kernel sounds like a good idea :). However, I don't think that C is being _replaced_ by Rust code; it's rather that more independent parts that don't need to deeply integrate with the existing C constructs can be written in a memory-safe language, and IMO that's a fine tradeoff.

It is not about timelines. Linus Torvalds doesn't spend nights reading a bunch of books with crabs on their covers, rewriting random bits and pieces of the kernel in Rust. It is basically a dedicated group of people sponsored by megacorps doing the heavy lifting. If megacorps wanted Zig, we could have had it in the kernel instead (Linus might have rejected it, though; not sure what he thinks of it).

I believe Rust is mainly being used for driver development, which seems a great fit (there's so many people of different skill levels who write Linux drivers, so this should help avoid bad driver code being exploited). It may also end up in the core systems, but it also might not fit there as well.

And “if you enjoy C++/if you enjoy C” are gross oversimplifications.

And Zig isn't stable yet

Comparing Rust to C++ feels strange to me.

It’s like people do it just because Zig is very comparable to C. So the more complex Rust must be like something else that is also complex, right? And C++ is complex, so…

But that is a bit nonsensical. Rust isn’t very close to C++ at all.


I wrote lots of C++ before learning Rust, and I enjoyed it. Since learning Rust, I write no more C++. I found no place in which C++ is a better fit than Rust, and so it's my "new C++".

For example, high performance servers (voltlane.net), programming languages (https://github.com/HF-Foundation, https://github.com/lionkor/mcl-rs, and one private one), webservers (beampaint.com) and lots of other domains.

Rust is close to C++ in that it is a systems language that allows a reasonable level of zero-cost abstractions.


> I found no place in which C++ is a better fit than Rust, and so it's my "new C++".

Writing the compiler toolchains that Rust depends on, or industry standards like CUDA, SYCL, Metal, Unreal or the VFX Reference Platform.


There are places where a language could be a better fit but hasn't been adopted. E.g. most languages would be a better fit than TypeScript on the backend, and most systems programming languages a better fit than Java for games.

The fallacy is to discuss programming languages in isolation without taking the whole ecosystem into consideration.

Rust uses LLVM because it's pretty great, not because you couldn't implement LLVM in Rust.

Maybe cranelift will eventually surpass LLVM, but there isn't currently much reason to push for that.


If anything, making cranelift an LLVM replacement would likely go counter to its stated goals of being a simple and fast code generator.

Thus Rust cannot really replace C++ when its reference toolchain depends on it.

If you define success for Rust as "everything is written in Rust", then Rust will never be successful. The project also doesn't pursue success in those terms, so it is like complaining about how bad a salmon is at climbing trees.

That is, however, how the Rust Evangelism Strike Force does it all the time, hence these kinds of remarks I tend to make.

C++ is good for some things regardless of its warts due to ecosystem, and Rust is better in some other ones, like being much safer by default.

Both will have to coexist in decades to come, but we have this culture that doesn't accept matches that end in a draw, it is all about being in the right tribe.


So... Like, what? Do you agree that there is no technical reason for LLVM to be written in C++ over Rust?

Have you considered that you perhaps do more damage to the conversation by having it with this hypothetical strike force instead of the people that are actually involved in the conversation? Whose feelings are you trying to protect? What hypocrisy are you trying to expose? Is the strike force with us in the room right now?


I assert there is no reason to rewrite LLVM in Rust.

And I also assert that the claim that Rust is going to take over C++ misses the mark as long as Rust depends on LLVM for its existence.

Or that it ignores that, for the time being, NVIDIA, Intel, AMD, Xbox, PlayStation, Nintendo, CERN, Argonne National Laboratory and similar hardly bother with Rust-based software for what they do day to day.

They have employees on WG14, WG21, contribute to GCC/clang upstream, and so far have shown no interest in having Rust around on their SDKs or research papers.


> I assert there is no reason to rewrite LLVM in Rust.

Everybody agrees with that, though? Including the people writing rustc.

There's a space for a different thing that does codegen differently (e.g. Cranelift), but that's neither here nor there.

> And I also assert that the speech that Rust is going to take over the C++, misses on that as long as Rust depends on LLVM for its existence.

There's a huge difference between "Rust depends on LLVM because you couldn't write LLVM in Rust [so we still need C++]" and then "Rust depends on LLVM because LLVM is pretty good". The former is false, the latter is true. Rust is perfectly suited for writing LLVM's eventual replacement, but that's a massive undertaking with very little real value right now.

Rust is young and arguably incomplete for certain use cases, and it'll take a while to mature enough to meet all use cases of C++, but that will happen long before very large institutions are also able to migrate their very large C++ code bases and expertise. This is a multi-decade process.


> Rust is close to C++ in that it is a systems language that allows a reasonable level of zero-cost abstractions.

That's like saying PHP is close to Haskell because they both have garbage collection.


I found swift way more enjoyable than rust as a C++ alternative. It even has first class-ish interop now.

Seriously asking: where does Go sit in this categorization?

Nowhere, or wherever C# would sit. Go is a high level managed language.

Go is modern Java, at least based on the main area of usage: server infrastructure and backend services.

Tbh Go is also really nice for various local tools where you don’t want something as complex as C++ but also don’t want to depend on the full C# runtime (or large bundles when self-contained), or the same with Java.

With Wails it’s also a low friction way to build desktop software (using the heretical web tech that people often reach for, even for this use case), though there are a few GUI frameworks as well.

Either way, self-contained executables that are easy to make, plus a rich standard library and a language that's not too hard to use during development, go a long way!


Go is modern/faster Python.

- It was explicitly intended to "feel dynamically-typed"

- Tries to live by the zen of Python (more than Python itself!)

- Was built during the time it was fashionable to use Python for the kinds of systems it was designed for, with Google thinking at the time that they would benefit from moving their C++ systems to that model if they could avoid incurring the performance problems associated with Python. Guido van Rossum was also employed at Google during this time. They were invested in that sort of direction.

- Often reads just like Python (when one hasn't gone deep down the rabbit hole of all the crazy Python features)


I wonder what makes Go more modern than Java, in terms of features.

The tooling and dependency management probably

I still don't understand how they managed to make a build system as bad as Gradle. It's like they tried to make it as horrible as possible to use.

Yes, every time I fire up an old Android project it needs to download 500MB just for gradle upgrades. It's nuts.

It's also a modern C.

If you enjoy C and wish it was less verbose and more modern, try Go.


Go has a garbage collector though. This makes it unsuitable for many use cases where you could have used C or C++ in the past. Rust and Zig don't have a GC, so they are able to fill this role.

GC is a showstopper for my day job (hard realtime industrial machine control/robotics), but would also be unwanted for other use cases where worst case latency is important, such as realtime audio/video processing, games (where you don't want stutter, remember Minecraft in Java?), servers where tail latency matters a lot, etc.


> GC is a showstopper for my day job (hard realtime industrial machine control/robotics)

Which is a very niche use case to begin with, isn't it? It doesn't really contradict what the parent comment stated about Go feeling like modern C (with a Boehm GC included, if you will). We're using it this way and it feels just fine. I'd be happy to see parts of our C codebase rewritten in Go, but since that code is security sensitive and has already been through a number of security reviews, there's little motivation to do so.


> Which is a very niche use case to begin with, isn't it?

My specific use case is yes, but there are a ton of microcontrollers running realtime tasks all around us: brakes in cars, washing machine controllers, PID loops to regulate fans in your computer, ...

Embedded systems in general are far more common than "normal" computers, and many of them have varying levels of realtime requirements. Don't believe me? Every classical computer or phone will contain multiple microcontrollers, such as an SSD controller, a fan controller, wifi module, cellular baseband processor, ethernet NIC, etc. Depending on the exact specs of your device of course. Each SOC, CPU or GPU will contain multiple hidden helper cores that effectively run as embedded systems (Intel ME, AMD PSP, thermal management, and more). Add to that all the appliances, cars, toys, IOT things, smartcards, etc all around us.

No, I don't think it is niche. Fewer people may work on these, but they run in far more places.


See TamaGo, used to write firmware in Go, being shipped in production.

Not familiar with it, but reading the GitHub page it isn't clear how it deals with GC. Do you happen to know?

Some embedded use cases would be fine with a GC (MicroPython is also a thing, after all). Some want deterministic deallocation. Some want no dynamic allocator at all. From what I have seen, far more products are in the latter two categories, while many hobby projects fall into the first two. That is of course a broad generalization, but there is some truth to it.

Many products want to avoid allocation entirely either because of the realtime properties, or because they are cost sensitive and it is worth spending a little bit extra dev effort to be able to save an Euro or two and use a cheaper microcontroller where the allocator overhead won't fit (either the code in flash, or just the bookkeeping in RAM).


Yes. Just like with real-time Java for embedded targets from PTC and Aicas, it is its own implementation with another GC algorithm; additionally, there are runtime APIs for regions/arenas.

Here is the commercial product for which it was designed,

https://reversec.com/usb-armory

A presentation from 2024,

https://www.osfc.io/2024/talks/tamago-bare-metal-go-for-arm-...


Not everybody is writing web apps.

You can also see it differently: if the language dictates a 4x increase in memory or CPU usage, you have also moved up, by that same factor of 4, the deadline before you need to upgrade the machine or rearchitect your code into a distributed system.

Previously, delivering a system (likely in C++) that consumed factor 4 fewer resources was an effort that cost developer time at a much higher factor, especially if you had uptime requirements. With Rust and similar low-overhead languages, the ratio changes drastically. It is much cheaper to deliver high-performance solutions that scale to the full capabilities of the hardware.


Thanks. I write some Go, and feel the same about it. I really enjoy it actually.

Maybe I'll jump to Zig as a side-gig (ha, it rhymes), but I still can't motivate myself to play with Rust. I'm happy with C++ on that regard.

Maybe gccrs will change that, IDK, yet.


Go is a language which sits perfectly in the space where using garbage collection is no problem for you.

C++ added OOP to C.

Rust is not object-oriented.

That makes your statement wrong.


It certainly is according to the various CS definitions of type systems.

Plenty of OOP architectures can be implemented 1:1 in Rust's type system.
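As an illustration (with hypothetical types of my own), the classic C++/Java pattern of subtype polymorphism maps directly onto a Rust trait object with dynamic dispatch:

```rust
// A trait plays the role of an abstract base class.
trait Shape {
    fn area(&self) -> f64;
}

struct Square { side: f64 }
struct Circle { radius: f64 }

impl Shape for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}

// Dynamic dispatch through a vtable, as in C++ virtual calls.
fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Square { side: 2.0 }),
        Box::new(Circle { radius: 1.0 }),
    ];
    println!("{}", total_area(&shapes)); // 4.0 plus pi
}
```

What it deliberately lacks is implementation inheritance, which is exactly the distinction the rest of this thread argues over.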


> Plenty of OOP architectures can be implemented 1:1

Plenty of OOP architecture can be implemented in C. That's an extremely flawed and fuzzy definition. But we've been through this before.


Yet people have to keep being reminded of it.

I think the issue is that OOP patterns are one part missing features, one part trying to find common ground between Java, Modula, C++ and Smalltalk, so the term ends up too broad.

A much saner definition is looking at how languages evolved and how the term is used. The way it's used is to describe an inheritance-based language: basically C++ and its descendants.


> one part trying to find common ground for Java, Modula, C++

The primary common ground is that their functions have encapsulation, which is what separates them from functions without encapsulation (i.e. imperative programming). This already has a name: functional programming.

The issue is that functional, immutable programming language proponents don't like to admit that immutability is not on the same plane as imperative/functional/object-oriented programming. Of course, imperative, functional, and object-oriented languages can all be either mutable or immutable, but that seems to evade some.

> SmallTalk

Smalltalk is different. It doesn't use function calling. It uses message passing. This is what object-oriented was originally intended to reference — it not being functional or imperative. In other words, "object-oriented" was coined for Smalltalk, and Smalltalk alone, because of its unique approach — something that really only Objective-C and Ruby have since adopted in a similar way. If you go back and read the original "object-oriented" definition, you'll soon notice it is basically just a Smalltalk laundry list.

> how term is used.

Language evolves, certainly. It is fine for "object-oriented" to mean something else today. The only trouble is that it's not clear to many what to call what was originally known as "object-oriented", etc. That's how we end up in this "no, it's this", "no, it's that" nonsense. So, the only question is: what can we agree to call these things that seemingly have no name?


> The primary common ground is that their functions have encapsulation

You omitted Smalltalk. Most people would agree that Smalltalk is object-oriented.

But that kinda ruins the common ground thesis.

> Language evolves, certainly. It is fine for "object-oriented" to mean something else today.

pjmlp's definition is very fuzzy. It judges object-orientedness based on a few criteria, like inheritance, encapsulation, polymorphism, etc. More checks, stronger OOP.

By that, even Haskell is somewhat OOP, and so is C, assembly, Rust, and any language.

---

What I prefer is looking at how it's used. And how it's used appears to be akin to the everyday terms fish or fruit.

No one would agree that a cucumber is a fruit. Or that humans are fish. Even though botanically and genetically they are.


> You omitted Smalltalk.

Exactly. It isn't functional. It doesn't use functions. It uses message passing instead. That is exactly why the term "object-oriented" was originally coined for Smalltalk. It didn't fit within the use of "imperative" and "functional" that preceded it.

> But that kinda ruins the common ground thesis.

That is the thesis: that Smalltalk is neither imperative nor functional. That is why it was given its own category. Maybe you've already forgotten, but I will remind you that it was Smalltalk's creator who invented the term "object-oriented" for Smalltalk. Smalltalk being considered something different is the only reason why "object-oriented" exists in the lexicon.

Erlang is the language that challenges the common ground thesis: It has both functions with encapsulation and message passing with encapsulation. However, I think that is easily resolved by accepting that it is both functional and object-oriented. That is what Joe Armstrong himself settled on and I think we can too.

> What I prefer is looking at it as it's used.

And when you look you'll soon find out that there is no commonality here. Everyone has their own vastly different definition. Just look at how many different definitions we got in this thread alone.

> No one would agree that a cucumber is a fruit.

Actually, absent context defining whether you are referring to culinary or botanical use, many actually do think of a cucumber as a fruit. The whole "did you know a tomato is actually a fruit?" thing made the big leagues in popular culture. However, your general point is sound: the definitions used are consistent across most people. That is not the case for object-oriented, though. Again, everyone, their brother, and pjmlp have their own thoughts and ideas about what it means. Looking at use isn't going to settle on a useful definition.

Realistically, if you want to effectively use "object-oriented" in your communication, you are going to have to explicitly define it each time.


> That is exactly why the term "object-oriented" was originally coined for Smalltalk.

Sure but your definition doesn't cover it. If it doesn't cover the language for which the term was coined, it's a bit meaningless, ain't it?

Problem with making encapsulation and polymorphism essential to OOP definition, is that it then starts garbling up functional languages like Haskell and imperative like C.

I can see them being necessary but not enough to classify something as OOP.

> And when you look you'll soon find out that there is no commonality here.

Perhaps, but broadly speaking people agree that C++ and Java are OOP, but for example C isn't.

Same way, when people say "give me a fruit" (as in fruits and vegetables), you'd get odd looks if you gave them a cucumber rather than an apple.

OOP can be thought of the same way. The common definition basically covers message-passing languages and inheritance/prototype-based languages.


Fruit = { Apple, Cucumber, … }

Veg = { Cucumber, … }

Fruit As In = Fruit − Veg
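The informal set notation above runs as-is in Python — the specific members here are my own placeholders, not anything from the thread:

```python
# The sets sketched above, as runnable Python (members are placeholders).
fruit = {"apple", "cucumber", "tomato"}  # botanical sense: all of these qualify
veg = {"cucumber"}                       # culinary sense
fruit_as_in = fruit - veg                # "fruit" as the word is used day to day
print(sorted(fruit_as_in))               # set difference drops the cucumber
```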


> Sure but your definition doesn't cover it.

How does it not cover it?

> Problem with making encapsulation and polymorphism essential to OOP definition, is that it then starts garbling up functional languages like Haskell and imperative like C.

Polymorphism? That was never mentioned. Let me reiterate the definitions:

- Imperative: Plain functions (C, Fortran, Pascal).

- Functional: Functions with encapsulation (C++, Java, Haskell, Erlang).

- Object-oriented: Message passing (Smalltalk, Objective-C, Ruby, Erlang).

Let me also reiterate that there are other axes of concern. Imperative, functional, and object-oriented are not trying to categorize every last feature a programming language might have. Mutable/immutable, polymorphic/monomorphic, etc. are other concerns and can be independently labeled as such.
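To make the message-passing row concrete, here is a loose Python analogy (my sketch, not Smalltalk and not anything from the thread): with message passing, the receiver decides at runtime how to respond to a message, even one it has no method for — roughly the role of Smalltalk's doesNotUnderstand: hook.

```python
# Loose Python analogy for message passing (not actual Smalltalk semantics).
class Receiver:
    def greet(self):
        return "hello"

    def __getattr__(self, message):
        # Invoked only when normal attribute lookup fails, i.e. the
        # "message not understood" case.
        def handler(*args):
            return f"no handler for {message!r}"
        return handler

r = Receiver()
print(r.greet())      # an understood message
print(r.shout(1, 2))  # an unknown message, still handled at runtime
```

A plain function call, by contrast, is resolved against a fixed definition and simply fails if none exists.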

> Perhaps, but broadly speaking people agree that C++ and Java are OOP

Many do, but just as many hold on to the original definition. Try as you might, you're not going to find a common definition here, I'm afraid. If you want to use the term effectively, you're going to have to explicitly define it each time.


Yes, of course you can call objc_msgSend or equivalent in Rust just as you can in C. But you are pushing the object-oriented model into a library. It is not native to the language.

I am talking about Rust OOP language features for polymorphism, dynamic and static dispatch, encapsulation, interfaces.

Which allowed me to port the Ray Tracing in One Weekend tutorial 1:1 from the original OOP design in C++ to Rust.

Also the OOP model used by COM and WinRT ABIs, that Microsoft makes heavy use of in their Rust integration across various Windows and Office components.


Objective-C added OOP to C. C++ did not. C++ is neither an OO language nor a C superset.

If you make up your own definitions things can be anything you want and have or not have any label.

Absolutely. That's why it is best to stick to the already established definitions. Kay was quite explicit about what "object-oriented" meant when the term was uttered for the first time, including specifically calling out C++ as not being object-oriented.

And yes, we all know the rest of the story about how the C++ guys were butthurt by that callout and have been on a mission to make up their own pet definition that allows C++ to become "object-oriented" ever since. I mean, who wouldn't want to latch onto a term that was about the unique features of a failed programming language that never went anywhere?


Don't you think it's a bit silly to keep rehashing tribalistic arguments that people moved on from 40 years ago?

Once someone offers up the replacement name so that we can continue to talk about what "object-oriented" referred to 40 years ago — and still refers to today, sure. Nobody cares about the exact letters and sounds.

But, until then, no. It is still something we regularly talk about. It needs a name. And lucky for us it already has one — and has had one for 40 years.


Zig is Modula-2/Object Pascal re-packaged with a C-like syntax.

Time to start zig++

Zig is what you want to write, because it gets out of the way.

Rust is what you want your colleagues to write, to enforce good practices and minimise bugs. It's also what I want my past self to have written, because that guy is always doing things that make my present life harder.


I'd rather my colleagues (and past self) write Rocq.

Rust is what you use when you'd rather spend time doing sales and marketing for Rust than building software.


zig really makes it unappealing to architecture astronauts, and rust pushes you towards it. I'd rather my colleagues write zig

Zig 0.15 is pretty stable. The biggest issue I face daily are silent compiler errors (SIGBUS) for trivial things, e.g. a typo in an import path. I've yet to find exactly why this [only sometimes] causes such a crash, but they're a real pain to figure out over a large changeset. `zig ast-check` sometimes catches the error, else Claude's pretty good at spotting where I accidentally re-used a variable name (again, 90% of the time I do that, it's an easy error, but the other 10%, I get a message-less compiler crash). It sounds like the changes in the OP might be specifically addressing these types of issues.

Also, my .zig-cache is currently at 173GB, which causes some issues on the small Linux ARM VPS I test with.

As for upgrades. I upgraded lightpanda to 0.14 then 0.15 and it was fine. I think for lightpanda, the 0.16 changes might not be too bad, with the only potential issue coming from our use of libcurl and our small websocket server (for CDP connections). Those layers are relatively isolated / abstracted, so I'm hopeful.

As a library developer, I've given up following / tracking 0.16. For one, the changes don't resonate with me, and for another, it's changing far too fast. I don't think anyone expects 0.16 support in a library right now. I've gotten PRs for my "dev" branches from a few brave souls and everyone seems happy with that arrangement.


> The biggest issue I face daily are silent compiler errors (SIGBUS) for trivial things, e.g. a typo in an import path.

I don't use zig. My experience has been that caches themselves are sources of bugs (not talking about zig only, but in general). Clearing all relevant caches occasionally is useful when you're experiencing weird bugs.


I don't know why I was downvoted here. One day, I was experiencing weird compilation errors. Clearing the `ccache` C/C++ compiler cache helped get past the problem. Yes, I could have investigated in detail what the issue was and whether ccache had a bug, but sometimes you don't have the luxury of investigating everything your toolchain throws at you.

You don't use it, but you're offering unsolicited advice about it, and that advice is very generic.

It's not even an argument that you're wrong, just that it's not contributing much and people think that other replies should come first.


Never mind that the previous poster’s insight about caches is correct.

Zig has had caching bugs/issues/limitations that could be worked around by clearing the cache. (Has had, and more than likely still has, and will have.)


That .zig-cache seems massive to me. I keep mine on a tmpfs and remove it every time the tmpfs is full.

Do you see any major problems when you remove your .zig-cache and start over?


Just a slower build. From ~20 seconds to ~65 seconds the first time after I nuke it.

But why is it so big in the first place?

I was searching around for causes and came across the following issues: https://github.com/ziglang/zig/issues/15358 which was moved to https://codeberg.org/ziglang/zig/issues/30193

The following quotes stand out

> zig's caching system is designed explicitly so that garbage collection could happen in one process simultaneously while the cache is being used by another process.

> I just ran WizTree to find out why my disk was full, and the zig cache for one project alone was like 140 GB.

> not only the .zig-cache directory in my projects, but the global zig cache directory which is caching various dependencies: I'm finding each week I have to clear both caches to prevent run-away disk space

Like what's going on? This doesn't seem normal at all. I also read somewhere that zig stores every version of your binary as well? Can you shed some light on why it works like this in zigland?


AFAIK garbage collection is basically not implemented yet. I myself do `ZIG_LOCAL_CACHE_DIR=~/.cache/zig` so I only have to nuke single directory whenever I feel like it.
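A sketch of that setup (the directory choice is mine, not official guidance): routing both Zig caches to one well-known directory means reclaiming disk space is a single `rm -rf`, since the build cache has no garbage collection yet.

```shell
# Point both the per-project and global Zig caches at one directory
# (directory choice is arbitrary; Zig honors these env vars).
export ZIG_LOCAL_CACHE_DIR="$HOME/.cache/zig"
export ZIG_GLOBAL_CACHE_DIR="$HOME/.cache/zig"
mkdir -p "$ZIG_LOCAL_CACHE_DIR"

# When the cache balloons, nuke it; the only cost is a slower
# first build while it repopulates.
rm -rf "$ZIG_LOCAL_CACHE_DIR"
```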

What exactly do people call 'garbage collection' in Zig? Build cache cleanup?

Indeed what was referred to here is the zig build system cache.

Does Zig have incremental builds yet? Or is it 20 secs each time for your build.

20 seconds each time. Last time I tried to enable incremental build, it wasn't working for us. It was a while ago, but I think it had to do with something in our v8 bridge.

I have heard that from other Zig devs too. Must get a bit annoying as the project grows. But I guess it will be supported sooner or later.

It's discussed in the post.

The forever backwards compatible promise of C++ was a tremendous design mistake that has resulted in mindshare death as of 2026. It might suck to have to modify your code to continue to get it to work, but it’s the right long term approach.

> that has resulted in mindshare death as of 2026

I could make a bet that, as of 2026, more C++ projects are still being started than in Rust and Zig combined.

The world is much more vast than Show HN and GitHub would indicate.


Being started? I would take that bet.

It suffices to use the games industry, HFT and HPC as domains.

Mindshare death is a very large overstatement given the massive amount of legacy C++ out there that will be maintained by poor souls for years to come. But you are right, there used to be a great language hiding within C++ if the committee had ever dared to break backwards compat. But even if they did it now it would be too late and they'd just end up with a worse Rust or Zig.

The biggest problem with C++ is that while everyone agrees there is a great language hiding in it, everyone also has a remarkably different idea of what that great language actually is.

I don't agree there's a great language hiding in C++. My high level objections would be that the type system is garbage and the syntax is terrible, so you'd need a different type system and syntax and that's nothing close to C++ after the changes.

After many years of insisting that "dialects" of C++ are a terrible idea, despite the reality that most C++ users have a specific dialect they use - Bjarne Stroustrup has endorsed essentially the same thing but as "profiles" to address safety issues. So for people who think there is a "great language" in there perhaps in C++ 29 or C++ 32 you will be able to find out for yourselves that you're wrong.


There are multiple great languages hiding within it

As proven a few times, it doesn't matter if the committee decides to break something if compiler vendors aren't on board with what is being broken.

There is still this disconnect about how languages under the ISO process work in the industry.


The C++ standards committee’s antiquated reliance on compiler “vendors” holds it back. They should adopt maintenance of clang and bless it as the reference compiler.

And you will be the one telling the losers that their compiler and operating system don't count?

By the way this applies to the C language so beloved on this corner as well.

As it does to COBOL, Fortran, Ada and JS (ECMA is not much different from ISO).


Rust has managed just fine to remain mostly backwards compatible since 1.0, while still allowing for evolution of the language through editions.

This puts much more work on the compiler development side, but it's a great boon for the ecosystem.

To be fair, zig is pre 1.0, but Zig is also already 8 years old. Rust turned 1.0 at ~ 5 years, I think.


Rust started in 2006 and reached v1.0 in 2015; that's 9 years.

Rust existed nearly entirely on paper until 2009, when Mozilla started funding researchers to work on it full-time. It wasn't announced in any sort of official capacity until 2010, and had no official numbered release until 2012. It was less than three and a half years between 0.1 and 1.0, and in that time, hard as it is to believe, it underwent more overall change than Zig has.

There is a reason GCC, LLVM, CUDA, Metal, HPC,.. rely on C++ and will never rewrite to something else, including Zig.

Yes, inertia. If those projects started today, they would likely choose rust.

Why isn't rustc using Cranelift then?

I can think a few reasons:

- Cranelift applies fewer optimizations in exchange for faster compilation times, because it was developed to compile WASM (wasmtime), but it turns out that is good enough for Rust debug builds.

- Cranelift does not support a wide range of platforms (AFAIK just x86_64 and some ARM targets).


So it isn't just a matter of "they would use Rust instead".

There is a whole ecosystem of contributions across the globe and the lingua franca used by those contributors.


> There is a whole ecosystem of contributions across the globe and the lingua franca used by those contributors.

which is slowly changing with wider Rust adoption.


Where are the LLVM pull requests adding Rust code?

I am no expert in compilers, but in the database space there are multiple prominent projects gaining traction, creating an ecosystem, and making Rust a strong contender against C++ dominance.

Same reason Android and Chrome and git and Linux weren't written in Rust when they started. Rust didn't exist. All of these projects integrate Rust now, after being single language projects for the longest time.

It's notable that the projects you mentioned mostly don't need to deal with adversarial user input, while the projects I mentioned do. That's one area that Rust shines in.


Rust presence in Android is minimal, and not officially supported for userspace.

Android team is quite clear that Java, Kotlin, C and C++ are the official languages for app developers.

Chrome even has less Rust than Firefox.

Linux has some early adoption, and it isn't without drama, even with Microsoft and Google pushing for it.


Rust will never be in Android user space, because it's not competing with Kotlin. Kotlin is already excellent there. Rust will replace the parts of Android written in C++ gradually. That was always the plan. It feels weird and cope-y to move the goalposts to say it's not a big deal unless Rust also replaces Kotlin.

Chrome only needs to replace the parts of their codebase that handle untrusted input with Rust to get substantial benefits. Like codec parsers. They don't need to rewrite everything, just the parts that need rewriting. The parts that are impossible to get right in C++, to the point where Chrome spins up separate processes to run that code.

Rust is the future for Android, and it will become an important part of Chrome and Linux and git (starting with 3.0). That's just the way it is.


Looking at LLVM build times I seriously believe that C would have been the better choice :/ (it wouldn't be a problem if LLVM weren't the base for so many other projects)

Same for the Metal shading language. C++ adds exactly nothing useful to a shading language over a C dialect that's extended with vector and matrix math types (at least they didn't pick ObjC or Swift though).


CUDA, SYCL, HLSL evolution roadmap, and Khronos future for Vulkan beg to differ with your opinion.

> HLSL evolution roadmap

...that's just because of the traditional-game-dev Stockholm syndrome towards C++ (but not too much C++ please!).

> Khronos future for Vulkan

As far as I'm aware Khronos is not planning to move the Vulkan API to C++ - and the 'modern C++' sample code which adds a C++ RAII wrapper on top of the Vulkan C API does more harm than good (especially since lifetime management for Vulkan objects is a bit more involved than just adding a class wrapper with a destructor).


See the talks around the Vulkan ecosystem and GPU shading languages from February.

There is more than one sample using C++; now they make use of C++20, including modules if desired.


> now they make use of C++20, including modules if desired.

It's in line with many other shitty design decisions coming out of Khronos, so I'm not even surprised ;)

IMHO it's a pretty big problem when the spec is on an entirely different abstraction level than the sample code (those new samples also move significant code into 'helper classes', which means all the interesting stuff is hidden away).


The forever backwards compatible promise of C++ is pretty much the main reason anyone is using C++.

Hilariously, they broke this compatibility. std::auto_ptr was an abomination, but removing it from the language was needless and undermined the long term stability that differentiates C++ from upstarts.

those that used it were rightly punished by the removal

Mitchell Hashimoto (developer of Ghostty) talks about Zig a lot. Ghostty is written in it, and he seems to love it. The churn doesn't seem to bother him at all.

I asked him about in a thread a while back: https://news.ycombinator.com/item?id=47206009#47209313

The makers of TigerBeetle also rave about how good Zig is.


It's been a non-issue for us at tvScientific. Once or twice a year you settle in for a mass refactor, and when that's done you move on with your day.

Packages do fall behind. We only use a couple, so it's pretty easy to point to an internal fork while we wait for upstream to update or to accept our updates. That'd probably be a pain point if you were using a lot of them.


Running ~20 kloc of Zig 0.16 in prod, compiled and deployed as DebugSafe. No issues, super stable. This was a rewrite of a Node.js/TypeScript computation module, and we chose Zig over Rust due to better support for f128. Zig/DebugSafe is approximately twice as fast as TypeScript/Node.js 25 for our purposes, with approximately 70% less memory consumption. We were not impacted much by WriterGate and other recent scandals because we primarily rely on libc, and we don't use much of Zig's I/O standard lib.

Zig has better support for sqlite/JSON serialization (everything is strongly typed and validated) than Node.js, so that was a plus as well.

Zig's minuses are well known: lack of syntax sugar for closures/lambdas/vtables, which makes it hard to isolate layers of code for independent development.

We use Arcs (atomic reference counting) with resource scopes (bump allocators) extensively, so memory safety is not a concern despite aggressively multithreaded logic. The default allocator automatically detects memory leaks, use-after-free, etc., so we are planning to continue running in DebugSafe indefinitely. We tried switching to ReleaseFast and gained about 25%, which is not enough of a speedup to justify losing the memory safety guarantees.


The language itself does not change much, but the std does. It depends on individuals, but some people rely less on the std, some copy the old code that they still need.

> Are there cases where packages you may use fall behind the language?

Using third party packages is quite problematic yes. I don't recommend using them too much personally, unless you want to make more work for yourself.


Using third party packages has gotten a lot easier with the changes described in this devlog https://ziglang.org/devlog/2026/#2026-02-06

Pinning your build chain and tooling to specific commits helps with stability, but it also traps you with old bugs or missing features. Chasing upstream fixes is a chore if you miss a release window and suddenly some package you depend on won't even compile anymore.

I recently tried to learn it and found it frustrating. A lot of docs are for 0.15 but the latest is (or was) 0.16 which changed a lot of std so none of the existing write ups were valid anymore. I plan to revisit once it gets more stable because I do like it when I get it to work.

0.16 is the development version. 0.15.2 is latest release.

I stopped updating the compiler at 0.14 for three projects. Getting the correct toolchain is part of my (incremental) build process. I don't use any external Zig packages.

I think one of the more PITA changes necessary to get these projects to 0.15 is removing `usingnamespace`, which I've used to implement a kind of mixin. The projects are all a few thousand LOC and it shouldn't be that much trouble, but enough trouble that nothing I gain from upgrading currently justifies doing it. I think that's fine.


> I know Bun's using zig to a degree of success, was wondering how the rest were doing.

Just a degree of success?


hi, i'm the founder of https://github.com/zml/zml, very happy with Zig

You can fix your code 10 times, you will fix it.

For those like me who have never heard of this software: Bun, some sort of package management service for javascript. https://en.wikipedia.org/wiki/Bun_(software)

Bun is a full fledged JavaScript runtime! Think node.js but fast

> Think node.js but fast

Color me extremely sceptical. Surely if you could make javascript fast google would have tried a decade ago....


Bun uses JSC (JavaScriptCore) instead of V8. From what I understand, whereas Node/V8 has a higher tier 4 "top speed", JSC is more optimized for memory and is faster to tier up early/less overhead. Good for serverless. Great for agents -> Anthropic purchase.

> Good for serverless. Great for agents -> Anthropic purchase.

Surely nobody would use javascript for either yea? The weaknesses of the language are amplified in constrained environments: low certainty, high memory pressure, high startup costs.


I think Bun helps with the memory pressure, granted this is relative to V8. I'd push back on the certainty with the reality that TS provides a significant drop in entropy while benefiting from what is a sweet spot between massive corpus size and a low barrier for typical problem/use-case complexity. You'll never have the fastest product with JS, but you will always have good speed to market and be able to move quickly.

> Surely nobody would use javascript for either yea?

It's probably the most popular language for serverless.


I can't vouch for this behavior but obviously you can have a better serverless experience than writing lisp with the shittiest syntax invented by man

Only because the likes of Vercel and Netlify barely offer anything else on their free tiers.

When people go AWS, Azure, GCP,... other languages take the reins.


Do you have any stats on the latter?

My own anecdote.

Claude CLI is based on bun. The dependency is so complete that Bun has now joined Anthropic.

Claude CLI is not exactly a reference of usable software

It's plenty usable. Most of the problems with the claude TUI stem from it being a TUI (no way to query the terminal emulator's displayed character grid), so you have to maintain your own state and try to keep them in sync, which more than a few TUIs will fail at at least sometimes, hence the popularity of conventions like Ctrl+L to redraw.

I don't know what a TUI is (i'm guessing "terminal ui" as if the term CLI doesn't exist lmao) but yea, they could have put effort into their product and not forced people to use their atrocious ncurses interface which is like the worst of all worlds: text interface without the benefit of the shell, zero accessibility.

CLI refers more to non-interactive programs used through a shell, programs like grep or indeed the `claude` program in non-interactive modes. TUIs (text user interfaces) are interactive programs implemented in a terminal interface, what you call ncurses interfaces (but these usually aren't implemented using ncurses these days). They're GUIs in text, so TUIs.

Anyway, their decision to implement a TUI was definitely not done out of laziness or even pragmatism. It was a fashion choice. A deliberate choice to put their product in the same vibes-space as console jockey hotshot unix pros who spit out arcane one-liners to get shit done. They very easily could have asked claude to write itself a proper GUI which completely avoids all the pitfalls of TUIs and simplifies a lot of things they went out of their way to make work in a TUI. Support for drag-and-drop, for instance, isn't something you'll find in many TUIs, but they have it. They put care into making this TUI; the problem is that TUIs are kind of shit, and they certainly know that. They did it this way anyway, effectively for marketing reasons.


Hmm. You've given me a lot to think about. I appreciate the effort, and thank you for replying with it.

The actual JS code is in the same ballpark as nodejs. They get fast by specializing to each platform's fastest APIs instead of using generic ones, reimplementing JS libraries in Zig (for example npm or jest) and using faster libraries (for example they use the oniguruma regex engine). Also you don't need an extra transpiling step when using TypeScript.

they have; v8 is a pretty fast engine and an engineering marvel. Bun is faster at the cost of having a worse JIT and less correctness.

Catastrophe 1914 - Max Hastings: Great book about what triggered WW1 and the state of the world at that time. Just a great read in general.

The Seven Military Classics of Ancient China: Sun Tzu isn't the only military strategist on the block.

Kaplan and Sadock's Synopsis of Psychiatry: I make sure to study at least one university/high school textbook on any subject that I find interesting.

Also, not reading but watching Professor Jiang Xueqin's lectures [0] has been pivotal in helping me find new avenues to research and understand geopolitics.

[0] https://www.youtube.com/@PredictiveHistory


You might enjoy Dr. Sarah Paine's lectures on YouTube.

There's also https://github.com/openai/symphony which is being developed around a similar Kanban-based agent-management pattern (though yours is more sophisticated at the moment imo)

Interesting to see the Kanban workflow being adapted to managing agents, makes sense; each item having the same UX as a Github Issue.


Thanks. I also saw Vibe Kanban which looked quite mature with lots of features. But I was really liking working with markdown files in VS Code - with everything that comes with that (version control capability etc). I went down the route of implementing a full harness for a while like Vibe Kanban, but the issue was that it was unlikely (without significant effort) to be as good as Github Copilot chat, and it meant forfeiting all of the IDE integrations etc (like diff visualisation for the agents actions etc).

Having the Kanban board in VS Code where I'm working, backed by markdown, GitOps friendly files works really well. Moving from the markdown file to the chat editor to type 'plan', 'implement' isn't what I really wanted, but it's really not a problem once you get used to the flow.


What I don't get is why not just use GitHub/Jira kanban directly with the CLI. We don't put them in git for multiple reasons. What are the people who only work in the browser going to do with this?

It reminds me of how Zed wants to have built in Slack when it will be impossible to get everyone to use Zed. A feature that can never really materialize because developers get a say in their tools


You can look at this as how small sets of a primitive lexicon give rise to a larger, more complex language. At least that's how I interpret it.

Hey, cool library! Just a small nitpick/request wrt https://mujs.org/playground

Could you please add all sources as tabs? For example in Form (GET) I would really like to see /demos/search-results.html and the same goes for other examples.

Thanks!


Good news, I've just added the server-side source as a tab in the Playground. Thanks again for the suggestion!

Glad to hear, good luck with your project!

Good point, thanks! That would definitely make the examples clearer. I'll keep it in mind.
