Hacker News | mmbleh's comments

It is because the IPv6 rollout has not been consistent. Some providers assign a /64 per machine, some assign a /64 per data center, and some go the other way and assign a /56 per machine. We've had to build up a list of overrides to limit some ranges by /64 and others by /128 because of how they allocate addresses. This creates an extra burden on server operators, and it's not surprising that some just choose not to deal with it.
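A minimal sketch of what such an override list can look like, using Python's `ipaddress` module (the ranges and prefix choices here are hypothetical, not real provider allocations):

```python
import ipaddress

# Hypothetical overrides: ranges where the default per-/64 grouping is wrong.
# A provider known to give one /64 to a whole data center gets limited per /128.
OVERRIDES = [
    (ipaddress.ip_network("2001:db8:1::/48"), 128),  # example range only
]
DEFAULT_PREFIX = 64

def rate_limit_key(addr: str) -> str:
    """Collapse an address into the network it should be rate-limited as."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        return str(ip)  # IPv4: limit per individual address
    prefix = DEFAULT_PREFIX
    for net, plen in OVERRIDES:
        if ip in net:
            prefix = plen
            break
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))
```

Every client address maps to a key, and counters are kept per key; the override table is the part that has to be maintained by hand as odd allocations are discovered.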

This problem exists for IPv4 too: some machines have static addresses, others have dynamic ones, so you end up implementing overrides there as well.

IPv6 is cheap, though. If I want to get past your per-IP or per-network limit, options abound.

What can you do to get a new IPv6 network that is easier than getting a new IPv4?

Stuff like bouncing a modem, getting a new VPS, making a VPN connection I would expect to be pretty similar. And getting a block officially allocated to you is a lot of work.


If you allocate a dedicated network for spam, that makes the spam easy to detect and block.

Why are we pretending that you are checking logs and adding firewall rules manually? Anything worth DDoSing is going to have automated systems that take care of this. If not, put an AI agent on it.

With IPv6 it is very difficult to implement and enforce reliable rate limits on anonymous traffic. This is something we've struggled with a lot - there is no consistent implementation or standard for the assignment of IPv6 addresses. Sometimes a machine gets a full /64, other times a whole data center shares a single /64. So we need to try to build up knowledge of what level to block at for which IP range, and for some it's just not worth the hassle.

Well, even if there were a standard, that's still no guarantee that the other side of the /64 would be following it. It's correct for you to rate-limit the whole /64.

... But that's no different from IPv4. Sometimes you have one per user, sometimes there are ~1000 users per IP.

Most of the IPv4 world is now behind CGNAT; one user per IP is simply a wrong assumption.


Anonymous rate limits for us are skewed towards preventing abusive behavior. Most users do not have a problem, even if there is CGNAT on IPv4.

For IPv6, if we block on /128 and a single machine gets a /64, a malicious user has a near-infinite supply of IPs. In the case of Linode and others that use a /64 for a whole data center, it's easy to end up rate-limiting the whole thing.

Wrong assumption or not, it is an issue that is made worse by IPv6.


I don't doubt your experience, but I wouldn't expect it to continue. I don't think Tuna-Fish is correct that "most" of the IPv4 world is behind CGNAT, but that does appear to be the trend. You can't even assume hosting providers give their subscribers their own IPv4 addresses anymore. On the other hand, there's a chance providers like Linode will eventually wise up and start giving subscribers their own /64 - there are certainly enough IPv6 addresses available for that, unlike with IPv4.

> I don't think Tuna-Fish is correct that "most" of the IPv4 world is behind CGNAT

~60%+ of internet traffic is mobile, which is ~100% behind CGNAT.

On desktop, only ~20% of US and European web traffic uses CGNAT, but in China that number is ~80%, in India ~70% and varies among African countries but is typically well over 70%, with it being essentially universal in some countries.

Overall, somewhere a bit over 80% of all IPv4 traffic worldwide currently uses CGNAT. It's just distributed very unevenly, with US and European consumers enjoying generous IP allocations for historical reasons, and the rest of the world making do with what they have.


Oh wow, thanks for those numbers!

Since mmbleh mentioned Linode I'm guessing they're more concerned with traffic from servers, where CGNAT is uncommon. But even that may be changing - https://blog.exe.dev/ssh-host-header


Yeah, our traffic is more from automated systems/servers, nothing from mobile

Yeah, absolutely no expectations for the future. My point was more that while there may be clear benefits for users, IPv6 presents real problems for service operators with no clear solutions in sight.

Given that GitHub also offers free services to anonymous users, I can imagine they face similar problems. The easiest move is simply not to bother, and I can't blame them for it.


If a single machine gets /64 and you rate limit by /64, what doesn't work?

>Linode and others that do /64 for a whole data center

That's how it's supposed to work.


> That's how it's supposed to work.

According to who?

It could fit best practices if your data center has one tenant and they want to put the entire thing on a single subnet. In general, I would expect a data center to get something like a /48 at minimum. Even home connections are supposed to get more than a /64 allocated.

And Linode's default setup only gives each server a single /128. That's not how it's supposed to work. But you can request /64 or /56.


If the OS uses SLAAC by default, then it will just work, but SLAAC is for humans and makes less sense for web servers (though it can make sense for VPN servers). For web servers, /128 is more meaningful.

Maybe a different take, but as someone who manages a large public API that allows anonymous access, IPv6 has been a nightmare to enforce rate limits on. We've found different ISPs assign IPv6 addresses differently - some give a /64 to every server, some give a /64 to an entire data center. It seems there is no standard and everyone just makes up what they think will work. This puts us in an awkward place where we need abuse protections but have to invest in more complicated solutions than were needed for IPv4. Or we give up and just say: if you want to use IPv6, you have to authenticate.

Does anyone have any success stories from the server side handling a situation like this? It looks like Cloudflare switched to some kind of custom dynamic rate limiting based on grouping similar addresses, but it's unrealistic to expect everyone to be able to build such a thing.


The ISPs assigning only /64s to whole data centers are not following the standards and best practices. For rate limiting, I would block at the /64 level. It's just like how someone behind CG-NAT might run into IP reputation issues: they need to complain to their carrier about the poor service/configuration, or switch providers.


Common practice is to block no finer than /64s. If you treat an IPv6 /64 like an IPv4 /32, you should be off to the races.
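The "treat a /64 like a /32" idea can be sketched as a small sliding-window limiter whose counters are keyed on the collapsed prefix; the window, limit, and addresses below are made-up illustration values, not a recommendation:

```python
import ipaddress
import time
from collections import defaultdict, deque

WINDOW = 60.0  # seconds
LIMIT = 5      # requests per window, per bucket (small for illustration)

_hits = defaultdict(deque)

def bucket(addr: str) -> str:
    """IPv6 clients share one bucket per /64; IPv4 clients get one per address."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        return str(ipaddress.ip_network(f"{ip}/64", strict=False))
    return str(ip)

def allow(addr: str, now=None) -> bool:
    """Sliding-window check: True if this request is still within the limit."""
    now = time.monotonic() if now is None else now
    q = _hits[bucket(addr)]
    while q and now - q[0] > WINDOW:
        q.popleft()  # drop hits that fell out of the window
    if len(q) >= LIMIT:
        return False
    q.append(now)
    return True
```

The point is that an attacker rotating through the 2^64 addresses of a single /64 still lands in one bucket, while a host in a neighboring /64 is unaffected.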


CVE response time is a toss-up; they all patch fast. Chainguard can only guarantee zero active exploits because they control their own exploit feed and don't publish anything on it until they've patched. So while this makes it look better, it may not actually be better.


Hey!

I work at Chainguard. We don't guarantee zero active exploits, but we do have a contractual SLA we offer around CVE scan results (those aren't quite the same thing unfortunately).

We do issue an advisory feed in a few versions that scanners integrate with. The traditional format we used (which is what most scanners supported at the time) didn't have a way to include pending information so we couldn't include it there.

The basic flow was: scanner finds CVE and alerts, we issue statement showing when and where we fixed it, the scanner understands that and doesn't show it in versions after that.

So there wasn't really a spot to put "this is present"; that was the scanner's job. Not all scanners work that way, though, and some just rely on our feed and don't do their own homework, so it's hit or miss.

We do have another feed now that uses the newer OSV format; in that feed we include all the info around when we detect an issue, when we patch it, etc.

All this info is publicly available and shown in our console; you can see many of the advisories here: https://github.com/wolfi-dev/advisories

You can take this example: https://github.com/wolfi-dev/advisories/blob/main/amass.advi... and see the timestamps for when we detected CVEs, in what version, and how long it took us to patch.


40/hour is higher than the current limits for authenticated free users.


These platforms do cache quite a bit. It's just that there is a very high volume of traffic and a lot of it does update pretty frequently (or has to check for updates)


Are you saying they cache transparently? Because I haven't seen that mentioned in the docs.


The storage enforcement costs have been delayed until 2026 to give time for new (automated) tooling to be created and for users to have time to adjust.

The pull limits have also been delayed at least a month.


Do you have a source for that? My company was dropping Docker Hub this week, as we have no way of clearing up storage usage (untagging doesn't work) until this new tooling exists, and we can't afford the costs of all the untagged images we have made over the last few years.


(I work there) If you have a support contact or AE they can tell you if you need an official source. Marketing communications should be sent out at some point.


Thanks. It just seems like quite poor handling of the comms around the storage changes, as there is only a week to go and the current public docs make it seem like the only way to avoid paying is to delete the repos, or I guess your whole org.


Yep, agree that comms have a lot of room for improvement. We do have initial delete capabilities for manifests available now, but the functionality is fairly basic. It will improve over time, along with automated policies.


These dates have been delayed. They will not take effect March 1. Pull limit changes are delayed at least a month, storage limit enforcement is delayed until next year.


Step one for me is just educating people on how cloud providers charge for resources. So many people don't understand everything that goes into a cloud bill.

Take AWS for example - everyone seems to account for Lambda runtime (duration) cost, but a lot of people forget/ignore per-request cost, API Gateway cost, bandwidth costs, etc. Or they'll account for S3 storage but not S3 API request costs.
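As a rough back-of-envelope (the unit prices and workload numbers below are made up for illustration, not actual AWS rates), the "forgotten" line items can easily dwarf the Lambda duration charge people budget for:

```python
# Illustrative unit prices and workload -- NOT current AWS list prices.
REQUESTS = 10_000_000                    # monthly invocations
GB_SECONDS = REQUESTS * 0.128 * 0.2      # 128 MB function running 200 ms each

lambda_duration = GB_SECONDS * 0.0000166667  # per GB-second of compute
lambda_requests = REQUESTS / 1e6 * 0.20      # per million invocations
api_gateway     = REQUESTS / 1e6 * 1.00      # per million API calls
egress          = REQUESTS * 0.00005 * 0.09  # ~50 KB/response, per-GB egress

total = lambda_duration + lambda_requests + api_gateway + egress
for name, cost in [("lambda duration", lambda_duration),
                   ("lambda requests", lambda_requests),
                   ("api gateway", api_gateway),
                   ("egress", egress)]:
    print(f"{name:16s} ${cost:8.2f}  ({cost / total:5.1%})")
```

Under these made-up numbers, the duration charge is a single-digit percentage of the bill, with API Gateway and bandwidth dominating - which is exactly the kind of surprise people hit.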

While good tagging certainly helps figure out where money is spent, sometimes it's too late, since things have already been built on bad architectures based on misunderstandings of the charges.


Interesting reaction. This could also be interpreted as making it _more_ reputable, by removing abuse and cruft, allowing engineering time to be focused on things that provide value to end users.


I'm not sure if I've misunderstood the limits[1], but I want my customers to be able to pull the image as many times as they need. While this may help with concerns about the quality of images, it still leaves the rate limiting unresolved.

[1] https://www.docker.com/increase-rate-limits

