Hacker News | phirephly's comments

If every torrent includes a webseed, then you're still left with the problem of needing to build a full HTTP CDN, and now you also have to maintain the largest tracker infrastructure ever deployed for BitTorrent.

Under normal conditions with well-behaved clients, raw bandwidth for the large packages is essentially never the issue. Misbehaving clients, cache thrashing, and IOPS limits are the sorts of issues that cause pain for mirrors.


I've made a point of calling out Digital Ocean in Linux mirroring talks as the gold standard for being a good citizen; they run their own internal mirrors, which are FAST, making it a value-add feature for them as well.


EPEL is a separate module from fedora-enchilada, but it uses the same backend CDN infrastructure, and most mirrors tend to carry both /fedora/ and /epel/. So they're not technically the same mirrors, but in practice the overlap is large.


256 next hops isn't enough. Typical ASICs support 20,000 to 160,000 next-hop FECs.

Cisco tried caching routing decisions from non-line-rate routing engines in the 90s, and the industry learned the lesson that it's a bad idea. Caching works until you overflow the cache for some reason, and then the box completely falls over as it thrashes.
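To make that failure mode concrete, here's a toy sketch (my own illustration, not Cisco's actual flow-cache design): an LRU route cache where a working set that fits in the cache hits almost every time, but a working set even one entry larger than the cache, accessed in order, misses on every single lookup.

```python
from collections import OrderedDict

class RouteCache:
    """Toy LRU route cache. Hits are the fast path; misses punt to a
    (slow) full-table walk on the control CPU, then install the result."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = 0
        self.misses = 0

    def lookup(self, prefix):
        if prefix in self.entries:
            self.entries.move_to_end(prefix)  # LRU bookkeeping
            self.hits += 1
            return self.entries[prefix]
        self.misses += 1
        next_hop = f"nh-for-{prefix}"         # stand-in for a slow FIB walk
        self.entries[prefix] = next_hop
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        return next_hop

def hit_rate(cache, working_set, rounds=10):
    for _ in range(rounds):
        for prefix in range(working_set):
            cache.lookup(prefix)
    return cache.hits / (cache.hits + cache.misses)

# Working set fits in the cache: near-perfect hit rate after warm-up.
small = hit_rate(RouteCache(capacity=1000), working_set=500)   # -> 0.9

# Working set exceeds the cache by ONE entry, accessed sequentially:
# LRU evicts exactly the prefix you'll need next, so every lookup misses
# and the box spends all its time on the slow path.
big = hit_rate(RouteCache(capacity=1000), working_set=1001)    # -> 0.0
print(small, big)
```

The cliff is the point: performance doesn't degrade gracefully as the cache fills, it collapses the moment the working set no longer fits.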


The base power included with any colo space is going to be minimal. You typically come back and spec out whatever additional power you want for the rack à la carte.


When you want line-rate forwarding across several Tbps of front-panel ports, you need the packet pipeline to be able to make all the routing decisions without involvement from the OS. 8 Bpps just doesn't give you time to walk any kind of data structure in memory.

Running full internet tables on an x86 server, where you can only get a few Gbps up to maybe a few dozen Gbps, is much easier.
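A quick back-of-the-envelope check of that time budget (the ~80 ns figure for a random DRAM read is my own rough assumption, and real ASICs pipeline many packets in flight rather than handling one at a time):

```python
# At 8 billion packets per second, how much time does the pipeline get
# per packet, and how does that compare to a single DRAM access?
PACKETS_PER_SECOND = 8e9
DRAM_ACCESS_NS = 80  # rough assumed figure for one random DRAM read

budget_ns = 1e9 / PACKETS_PER_SECOND  # nanoseconds available per packet
print(f"per-packet budget: {budget_ns} ns")                    # 0.125 ns
print(f"packet slots burned by one DRAM read: {DRAM_ACCESS_NS / budget_ns:.0f}")
```

With a 0.125 ns budget per packet, a single random DRAM access costs hundreds of packet slots, which is why the lookup has to live in on-chip TCAM/SRAM pipelines rather than a pointer-chasing structure in main memory.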


Isn't this what BPF and XDP try to target with their SmartNIC offloading, like the ones supported by Netronome?


Sure. And switch ASICs hang off a CPU as a PCIe device, so the line between SmartNICs and low-end ASICs starts to blur. The key is hardware acceleration of some sort.


I only got a few years out of it. I'm running an Arista 7280SR-48C8 now.

The problem is that the million TCAM entries are split between IPv4 and IPv6, so I really did run out of space.


Bummer! Didn’t realize the graph wasn’t for both IPv4 and IPv6. Have you done anything fun with the AS, or had an opportunity to say “Luckily, I do have an AS!” in a time of need?


I started an Internet Exchange Point adjacent to it, use it to host mirror.fcix.net, and got a second ASN to build the anycast ns-global.zone service.


Is the map on https://ns-global.zone/ actually the MicroMirror map? There seem to be a lot more dots on the map than in the traffic graph. If that map is accurate, I’m curious where the Minneapolis-ish node is (network wise). I’m not seeing 23.128.97.0/24 via MICE.


The map is a latency heat map. I should probably replace it with a POP map.


This article is also essentially available as a podcast. https://oxide.computer/podcasts/on-the-metal/kenneth-finnega...


A good old-fashioned slashdotting on a self-hosted server. Chill, dude.


As far as I can tell, SU traffic is almost useless. I may get 1,000x as many hits from SU, but I derive more value from some hole-in-the-wall site writing about me and sending 20 people.

