When IPv6 was first created, DHCP and NAT were new and not widely deployed. The designers weren't trying to "fix" them; IPv6 solved the same problems independently.
And if you need NAT or DHCP, there isn't any reason you can't use them with IPv6. DHCPv6 has been around for a long time.
That's not at all true. DHCP was very much part of the operational canon of the internet at the time, which is why it persisted as a model. IPv6 really wanted to back that out so that networks "just worked" without depending on an administrator to manage that local service.
NAT was already in use, and a substantial motivation for the IPv6 work was to provide an alternative before it got too entrenched, which sadly failed.
I remember reading about that a long time ago. I wonder why it never really caught on?
I think the problem is not so much technical as one of coordination. Who are you more likely to get on board first? ISPs and backbone providers. And what is the path forward? "Here is the recommended migration path," that kind of thing.
They optimized for ad impressions. There was no technical reason not to keep around a Boolean mode - some competitors effectively exist because of that single feature.
Switching to C++ is relatively easy in an existing codebase. It's in many cases as simple as renaming a file from .c to .cpp. But for writing something from scratch it's better to use Rust.
Renaming .c to .cpp may work with ancient C89 code, but not with anything remotely modern. And even when the code then is technically C++, it is not better. I still prefer C for new projects over any other language, because I value short compilation times and reduced complexity. For me, this translates into higher productivity and more fun. With modern tooling, most C issues are also detected early.
Slightly tweaking might not always be sufficient. Reengineering my numerical code would certainly take a bit of effort. But anyhow, I do not think C++ is better. Recently I removed one (!) file with templates (which someone else added) from one of my projects because it doubled compilation times (in a project with 750 other files or so). I do not need slow build times, more complexity, or more footguns.
> It's in many cases as simple as renaming a file from .c to .cpp.
That is rather optimistic, but, for example, scpptool has a feature [1] that auto-converts from C to a subset of C that can (hopefully) be compiled with clang++. If the original C source uses C11 extensions, clang++ seems to generally produce warnings rather than compile errors.
> But for writing something from scratch it's better to use Rust.
scpptool attempts to make C++ a more viable option by enforcing a memory and data race safe subset using a similar safety strategy.
That produces a bit of a chicken-and-egg problem for a stdlib overhaul. Compilers and libc implementations don't have a strong reason to implement safer APIs, because if they are non-standard then projects that want to be portable won't use them, but they won't get standardized unless implementations do add safer APIs.
So the best hope is probably for a third-party library with safer APIs to get popular enough that it becomes a de facto standard.
I think the real failing is that new language features then must be prototyped by people who have a background in compilers. That's a very small subset of the overall C community.
I don't have any clue how to patch clang's front end. I'm not a language or compiler person. I just want to make stuff better. There needs to be a playground for people like me, and hopefully lib0xc can be that playground.
By adding to the language itself, you mostly make stuff worse. A major reason why C is useful is its quite stable syntax and semantics. The language is typically not the area where you want to add things. It's much better (and much easier) to invent function APIs, see how they shake out, and if they're good you might get some adoption.
People say that kind of thing on HN every now and then. I have no idea why this idea is around, it's a complete fantasy in my opinion. I say this as someone who mostly uses Linux.
> My approach is to utilize https://pre-commit.com/ to have all checks available to run locally during commit
That works fine for some things, but it doesn't work for building and testing on other platforms. For example, if I am running on linux, pre-commit won't be able to check that my changes also work on Mac and Windows.
This could also solve the problem GitHub has where anyone with an account can "approve" a PR. If you aren't a maintainer of the project, your approval doesn't mean anything as far as actually getting the PR merged, but it can be a signal to the original author that the change is probably good, and to the actual maintainer that the PR is worth considering.
But with this, a non-maintainer could be allowed to give a +1 or -1, but not a -2 or +2, and it would be clearer that a "+1" isn't sufficient for actually merging the PR.