
Things have improved a little on this front. It turns out GnuPG was being a little gluttonous when it came to entropy:

"Daniel Kahn Gillmor observed that GnuPG reads 300 bytes from /dev/random when it generates a long-term key, which, he observed, is a lot given /dev/random's limited entropy . Werner explained that GnuPG has always done this. In particular, GnuPG maintains a 600-byte persistent seed file and every time a key is generated it stirs in an additional 300 bytes. Daniel pointed out an interesting blog post by DJB explaining that a proper CSPRNG should never need more than about 32 bytes of entropy. Peter Gutmann chimed in and noted that a 2048-bit RSA key needs about about 103 bits of entropy and a 4096-bit RSA key needs about 142 bits, but, in practice, 128-bits is enough. Based on this, Werner proposed a patch for Libgcrypt that reduces the amount of seeding to just 128-bits."[^1]

On a related note, why are you generating keys on a remote VM? It's probably not fair to say that gpg "failed to work (literally would not do anything)". It was doing something: it was waiting for more entropy. Needing immediate access to cryptographic keys that you just generated on a recently spun-up remote VM is kind of a strange use case?

[^1]: https://www.gnupg.org/blog/20150607-gnupg-in-may.html
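
If you want to see why a fresh VM stalls, you can check the kernel's entropy estimate directly. A quick sketch (Linux only; note that on kernels 5.18+ this counter is effectively pinned once the pool initializes, so the low numbers show up mainly on older kernels):

    # Read the kernel's current entropy estimate. On a freshly booted,
    # headless VM this is often a very small number, which is why gpg
    # blocks when it reads from /dev/random.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print("entropy_avail:", f.read().strip())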



Thanks very much for the updates on entropy requirements.

Re: "Why are you generating keys on a remote VM" - prior to this, it hadn't occured to me I couldn't generate a gpg key on linode/digital ocean, VMs. I realize now that keys should be generated on local laptops (or such), and copied up.

Re: "Fair to say failed to work" - It just sat their for 90+ minutes - I spent a couple hours researching, and found a bug (highly voted on) that other people had run into the same issue. But, honestly - don't you think that gpg just hanging for 90+ minutes for something like generating a 2048 bit RSA key should be considered, "failing to work?" - I realize under the covers (now) what was happening - but 99% of the naive gpg using population would just give up in the same scenario instead of trying to debug it.


Yeah, the bug was really how it handled the case of waiting forever without telling you why. In GPG's defense, before it actually starts reading from /dev/random, it does give you all kinds of warnings that it needs sources of entropy before it can make any progress.

Hard to get that kind of thing right, but fundamentally it did stop you from making exactly the kind of terrible mistake that I was talking about. ;-)
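
For anyone curious, the hang itself is just the blocking semantics of /dev/random on older Linux kernels (pre-5.6); a sketch of what gpg was effectively stuck on:

    # Illustrative only: on Linux kernels before 5.6, reading from
    # /dev/random blocks whenever the kernel's entropy estimate runs
    # low -- exactly the state gpg sat in. /dev/urandom never blocks.
    with open("/dev/random", "rb") as f:
        data = f.read(32)  # can stall for a long time on an idle VM
    print(len(data), "bytes read")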


In environments with limited initial entropy (such as VMs), just use haveged [1].

[1] - https://wiki.archlinux.org/index.php/Haveged


Whoa. I didn't realize it was reading 300 bytes. That is excessive.



