
Great paper! One thing that isn't mentioned is how you deal with audio.


Audio encodes far faster than video, so there isn't really a need to parallelize it. You'd probably combine this with Opus in WebRTC.


Audio is fast, but if you're attempting to encode hours of content in ~a minute, it's going to be a bottleneck.


If you end up being limited, there is a very similar trick you can do with audio, but even simpler. Unlike video, most audio formats don't have keyframes, but rather will converge to a correct decode a few packets after you start fresh or seek. So the solution to encode in parallel is to split the file into a bunch of chunks that overlap by a few packets (in the case of Opus, 80ms worth, or 4 packets, should be enough). You then encode all of these chunks, and then merge them together, throwing away the extra packets in the overlap. Unlike video, no final encode pass is needed.
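The chunk/overlap/merge logic above can be sketched as follows. This is a minimal illustration, not a real encoder: `encode()` is a stand-in for an actual codec call (e.g. libopus), the packet representation is invented, and in practice the parallel output converges to the serial decode rather than matching it bit-for-bit.

```python
# Sketch of overlap-and-trim parallel audio encoding. `encode()` is a
# hypothetical placeholder for a real encoder (e.g. Opus); all names here
# are illustrative.
from concurrent.futures import ThreadPoolExecutor

OVERLAP = 4  # packets of pre-roll (~80 ms for Opus at 20 ms per packet)

def split_with_overlap(frames, chunk_size, overlap=OVERLAP):
    """Split `frames` into chunks, each prefixed with up to `overlap`
    frames from the previous chunk so the decoder state can converge.
    Returns (n_overlap_frames, frames) pairs."""
    chunks = []
    for start in range(0, len(frames), chunk_size):
        lead = max(0, start - overlap)
        chunks.append((start - lead, frames[lead:start + chunk_size]))
    return chunks

def encode(frames):
    # Placeholder: a real implementation would feed PCM frames to an
    # Opus encoder and collect the resulting packets.
    return [f"pkt({f})" for f in frames]

def parallel_encode(frames, chunk_size):
    chunks = split_with_overlap(frames, chunk_size)
    with ThreadPoolExecutor() as pool:
        encoded = list(pool.map(lambda c: (c[0], encode(c[1])), chunks))
    # Merge, throwing away the extra overlap packets at the head of
    # each chunk. No final encode pass is needed.
    out = []
    for n_overlap, packets in encoded:
        out.extend(packets[n_overlap:])
    return out
```

With this deterministic placeholder encoder, the parallel result is identical to a single serial `encode()` over the whole input, which is the property the trick relies on (modulo the brief convergence window at each chunk boundary).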

