
When IPv6 was first created, DHCP and NAT were new and not widely deployed. IPv6 wasn't trying to "fix" them; the two efforts solved the same problems independently.

And if you need NAT or DHCP, there isn't any reason you can't use them with IPv6. DHCPv6 has been around for a long time.


That's not at all true. DHCP was very much part of the operational canon of the internet at the time, which is why it persisted as a model. V6 really wanted to back that out so that networks "just worked" without depending on an administrator to manage that local service.

NAT was already in use, and a substantial motivation for the IPv6 work was to provide an alternative before it got too entrenched, which sadly failed.


The RFC for DHCP was published in 1997, two years after the first RFC for IPv6, and three years after work on IPv6 started.

There isn't any reason you can't set up a NAT like that with IPv6.

And it added those 16 bits in a way that causes a lot of problems

> But in an ipv42 type setup, you would have deterministic embedding so that every ipv4 address is represented inside the larger address space

IPv6 supports that, but it ended up not getting used very much.

See https://en.wikipedia.org/wiki/List_of_IPv6_transition_mechan...
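
For concreteness, a minimal sketch of the best-known deterministic embedding, the IPv4-mapped form ::ffff:a.b.c.d, using standard POSIX sockets APIs (NAT64 and related mechanisms use other prefixes, such as 64:ff9b::/96):

    /* Sketch: embed an IPv4 address in IPv6 as ::ffff:a.b.c.d. */
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>

    int main(void) {
        struct in_addr v4;
        inet_pton(AF_INET, "192.0.2.1", &v4);

        /* 80 zero bits, 16 one bits, then the 32-bit IPv4 address. */
        struct in6_addr v6 = {0};
        v6.s6_addr[10] = 0xff;
        v6.s6_addr[11] = 0xff;
        memcpy(&v6.s6_addr[12], &v4, 4);

        char buf[INET6_ADDRSTRLEN];
        inet_ntop(AF_INET6, &v6, buf, sizeof buf);
        printf("%s\n", buf);  /* prints ::ffff:192.0.2.1 */
        printf("v4-mapped? %d\n", IN6_IS_ADDR_V4MAPPED(&v6));
        return 0;
    }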


I remember reading about that a long time ago. I wonder why it never really caught on?

I think part of the problem is not so much technical as a coordination issue: who are you most likely to get on board (ISPs and backbone providers?), and what is the recommended path forward for them, that kind of thing.


> None of that ever worked properly, consistently, at google.

My experience is it worked pretty well on Google for a while, but then it got progressively worse.


Right, for the first 5 years or so, it worked. But then they started to optimize for “the masses”, who don’t use boolean logic in queries.

They optimized for ad impressions. There was no technical reason not to keep around a Boolean mode - some competitors effectively exist because of that single feature.

Is there anything in here for something like a "slice" or dynamically sized array that carries its length along with it?
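
Something along these lines, I mean (a hypothetical sketch, not any particular library's API):

    /* A "slice": a pointer plus a length carried together. */
    #include <stddef.h>
    #include <stdlib.h>

    typedef struct {
        int    *data;
        size_t  len;
    } int_slice;

    /* Bounds-checked access: abort instead of corrupting memory. */
    int slice_get(int_slice s, size_t i) {
        if (i >= s.len)
            abort();
        return s.data[i];
    }

    /* View of elements [start, end) of an existing array. */
    int_slice slice_of(int *p, size_t start, size_t end) {
        return (int_slice){ .data = p + start, .len = end - start };
    }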

Just use compiler option -std=c++20 and use std::span. Don't try reinventing it in C.

If someone needs more than C provides, why on earth would they choose C++?

No rational person is going to want to have to deal with 10x the number of footguns.

When moving from C, literally anything is better than C++.


Switching to C++ is relatively easy in an existing codebase. It's in many cases as simple as renaming a file from .c to .cpp. But for writing something from scratch it's better to use Rust.

Renaming .c to .cpp may work with ancient C89 code, but not with anything remotely modern. And while the code is then technically C++, it is not better. I still prefer C for new projects over any other language, because I value short compilation times and reduced complexity. For me, this translates into higher productivity and more fun. With modern tooling, most C issues are also detected early.
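
For example, all of the following is legal modern C that a C++ compiler rejects, so a rename alone won't build:

    #include <stdlib.h>

    struct point { int x, y; };

    void demo(size_t n) {
        int *p = malloc(n * sizeof *p);        /* C++: no implicit cast from void* */
        int vla[n];                            /* C++: VLAs are not standard */
        struct point pt = { .y = 2, .x = 1 };  /* C++20: out-of-order designators rejected */
        int class = 3;                         /* C++: 'class' is a keyword */
        (void)vla; (void)pt; (void)class;
        free(p);
    }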

Slightly tweaking C code so that it compiles as C++ is still much easier than a full rewrite in some other language.

Slightly tweaking might not always be sufficient. Reengineering my numerical code would certainly take a bit of effort. But anyhow, I do not think C++ is better. Recently I removed one (!) file with templates (which someone else added) from one of my projects because it doubled compilation times (in a project with 750 other files or so). I do not need slower builds, more complexity, and more footguns.

> It's in many cases as simple as renaming a file from .c to .cpp.

That is rather optimistic, but, for example, scpptool has a feature [1] that auto-converts from C to a subset of C that can (hopefully) be compiled with clang++. If the original C source uses C11 extensions, clang++ seems to generally produce warnings rather than compile errors.

> But for writing something from scratch it's better to use Rust.

scpptool attempts to make C++ a more viable option by enforcing a memory and data race safe subset using a similar safety strategy.

[1] https://github.com/duneroadrunner/SaferCPlusPlus-AutoTransla...


std::span is not bounds checked.

It is when you enable the respective compiler switches for a hardened standard library (e.g. -D_GLIBCXX_ASSERTIONS with libstdc++).

Even better in C++26.


That produces a bit of a chicken-and-egg problem for a stdlib overhaul. Compilers and libc implementations don't have a strong reason to implement safer APIs, because if they are non-standard then projects that want to be portable won't use them, but they won't get standardized unless implementations do add them.

So the best hope is probably for a third-party library with safer APIs to get popular enough that it becomes a de facto standard.
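
As a sketch of what such a "safer API" could look like (safe_strcpy is a hypothetical name; BSD's strlcpy, which this imitates, is a real-world example of exactly that de facto path):

    #include <stddef.h>

    /* Bounded copy: writes at most dstsize-1 characters, always
     * NUL-terminates (if dstsize > 0), and returns strlen(src)
     * so the caller can detect truncation. */
    size_t safe_strcpy(char *dst, const char *src, size_t dstsize) {
        size_t n = 0;
        while (src[n] != '\0')
            n++;
        if (dstsize != 0) {
            size_t copy = (n < dstsize) ? n : dstsize - 1;
            for (size_t i = 0; i < copy; i++)
                dst[i] = src[i];
            dst[copy] = '\0';
        }
        return n;
    }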


I think the real failing is that new language features can then only be prototyped by people who have a background in compilers. That's a very small subset of the overall C community.

I don't have any clue how to patch clang's front end. I'm not a language or compiler person. I just want to make stuff better. There needs to be a playground for people like me, and hopefully lib0xc can be that playground.


By adding to the language itself, you mostly make stuff worse. The major reason C is useful is its quite stable syntax and semantics. The language is typically not the area where you want to add things. It's much better (and much easier) to invent function APIs: see how they shake out, and if they're good, you might get some adoption.

Interesting that a project from Microsoft doesn't support MSVC or Windows.

I suspect in 20 years Windows will be a Linux distribution with a compatibility layer.

People say that kind of thing on HN every now and then. I have no idea why this idea is around, it's a complete fantasy in my opinion. I say this as someone who mostly uses Linux.

> I have no idea why this idea is around,

To the best of my fallible knowledge, the notion was first popularized via <http://esr.ibiblio.org/?p=8764>.


> My approach is to utilize https://pre-commit.com/ to have all checks available to run locally during commit

That works fine for some things, but it doesn't work for building and testing on other platforms. For example, if I am running on Linux, pre-commit won't be able to check that my changes also work on Mac and Windows.


This could also solve the problem GitHub has where anyone with an account can "approve" a PR, but if you aren't a maintainer of the project, your approval doesn't mean anything as far as actually getting the PR merged; at most it is a signal to the original author that the change is probably good, and to the actual maintainer that the PR is worth considering.

But with this, a non-maintainer could be allowed to give a +1 or -1, but not a +2 or -2, and it would be clearer that a "+1" isn't sufficient for actually merging the PR.

