Hacker News

UDP is perfectly reliable when you control both sides of the link. With adjustments to ring buffers, IRQ/core affinity, and receive buffer size, you can receive a saturated 40GbE stream with no packet loss.

It's only when you pass through a router that's specifically instructed "drop these first" that you run into drops.
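For reference, the receive-side tuning I mean looks roughly like this. The interface name, IRQ number, and sizes are placeholders, not recommendations; the right values depend on your NIC, core layout, and workload:

```shell
# Enlarge the NIC's RX ring so short bursts don't overflow it
# (supported maximum varies by NIC; check `ethtool -g eth0`).
ethtool -G eth0 rx 4096

# Raise the kernel's socket receive buffer ceiling so the
# application can request a large SO_RCVBUF (here ~64 MiB).
sysctl -w net.core.rmem_max=67108864

# Pin the NIC's RX interrupt to one core (mask 0x2 = CPU 1; the IRQ
# number 123 is hypothetical, find yours in /proc/interrupts), and
# run the receiver on a nearby core on the same NUMA node.
echo 2 > /proc/irq/123/smp_affinity
taskset -c 2 ./receiver
```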



UDP has a checksum for a reason: bit flips happen, and packets get mangled or lost even if you control both sides of the link.

Your chances are better, but you can't assume packet loss or data errors won't happen just because the fiber is right in front of you.


The UDP checksum is optional and weak. It's mostly a waste of time.

It's far more likely that, if you get a bit flip, the link-layer hardware will just drop the packet, because the Ethernet FCS is a much stronger check and catches the error first. Even if you know every part of the hardware and software stack, you have to plan for an occasional packet drop.
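To make "weak" concrete: the UDP checksum is just a 16-bit ones'-complement sum (RFC 1071 style). A sketch of that computation shows how easily errors cancel out, since moving value between 16-bit words leaves the sum unchanged:

```python
def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words, RFC 1071 style (sketch)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

# Two different payloads with the same checksum: the corruption
# 0x0001,0x0002 -> 0x0000,0x0003 goes completely undetected.
p1 = b"\x00\x01\x00\x02"
p2 = b"\x00\x00\x00\x03"
assert p1 != p2 and internet_checksum(p1) == internet_checksum(p2)
```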


My point wasn't that you should rely on the weak checksum, but rather that the possibility for failure is already baked in even for such a small set of fields, because errors of any kind can happen at any point even in the simplest setups.

The statement

    With adjustments to ring buffers, IRQ/core affinity, and receive buffer size, you can receive a saturated 40GbE stream with no packet loss.
_Probably_ works out _most_ of the time in physical setups (especially in server environments with strict ECC memory), but it's not something you can write a protocol around if you need all of that data to _actually_ arrive.


I'd take issue with "only". Full queues are only one reason you get packet drops. Packets can also be damaged in transit, for example. That isn't common, but it's not the same as saying it doesn't happen.


Okay, but the important part of a protocol is what happens when things don't go perfectly to plan.




