
If you have a 5-year old database using a given bcrypt work factor, how difficult is it to transition to a new, higher work factor?


This is a good practical question. My read of the algorithm is that you must force each user to enter a new password, and hash that at the new, higher cost.

If you wanted to "upgrade" the passwords to the higher cost key schedule, you'd just continue the key schedule where it left off--but this would require knowing the original password! So that's not really an option.


You can upgrade passwords when the user logs in. The sanest thing to do seems to be to store all the public variables (work factor, salt) alongside the digest so each password can be handled separately.


What about performing the hashing again, on the existing hash? (Of course it would be simplest to rehash from the original password when the user logs in again, but just for discussion's sake.) Say we have a legacy DB with hashes created at a small work factor. We could simply hash those existing hashes (increasing the work factor as appropriate), and annotate the fact that verification now has to perform the hashing twice.

Of course we're 'overthinking' it again, but is the above solution viable?
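For discussion's sake, here's a sketch of that double-hashing idea, using an iterated SHA-1 loop as a stand-in for bcrypt (the function name, salt, and iteration counts are all made up for illustration):

```python
import hashlib

def iter_hash(data, salt, n):
    # stand-in for a slow, salted hash such as bcrypt
    h = data
    for _ in range(n):
        h = hashlib.sha1(salt + h).digest()
    return h

salt = b"fixed-salt"
legacy = iter_hash(b"hunter2", salt, 1000)   # old hash, weak work factor
upgraded = iter_hash(legacy, salt, 50000)    # hash the existing hash; no password needed

def verify(password):
    # the annotation "hash twice" means verification applies both passes in order
    return iter_hash(iter_hash(password, salt, 1000), salt, 50000) == upgraded

assert verify(b"hunter2") and not verify(b"wrong")
```

The upgrade itself never needs the plaintext password; only verification does, and the user supplies that at login anyway.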


I'm no expert, but I think you could just convert a database via function composition:

    new_hash = bcrypt new_work_factor . old_hash -- new hashing function

    new_hashed_passwords = map (bcrypt new_work_factor) old_hashed_passwords -- convert the old hashed passwords to new
Of course, this will fail horribly if (bcrypt new_work_factor) is somehow an inverse (or partial inverse) of old_hash. It could also fail horribly if (bcrypt new_work_factor) maps its input into a low "rank" (sorry, I'm a mathematician, not a crypto expert) region of old_hash's domain.


But if one of those two properties were true, that would probably give you some hints into how to attack bcrypt.


Imagine this Python code (I'm using SHA-1 iterated multiple times, basically PBKDF2):

    import hashlib

    hashed = password  # both password and salt are bytes
    for i in range(50000):
        hashed = hashlib.sha1(salt + hashed).digest()

Provided we also store the number of iterations (along with the salt), and provided I didn't do anything stupid above, we could simply add more iterations after these 5 years and update the hash and the iteration-count field. Would it be a viable solution?


Yes.
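With an iterated construction like that, the upgrade works because iteration composes: continuing from the stored hash gives the same result as starting over with the larger count. A sketch (iteration counts are arbitrary):

```python
import hashlib

def iterate(start, salt, n):
    # apply the salted SHA-1 step n times, starting from any value
    h = start
    for _ in range(n):
        h = hashlib.sha1(salt + h).digest()
    return h

salt = b"s"
stored = iterate(b"password", salt, 50000)   # stored with iterations=50000
upgraded = iterate(stored, salt, 50000)      # add 50000 more; no password needed

# equivalent to hashing the password 100000 times from scratch
assert upgraded == iterate(b"password", salt, 100000)
```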


So the original question remains, is it possible with bcrypt?


I don't know, but Oliver Hunt suggested just validating the password on the next login and upgrading it on the fly, which, to be honest, is what I'd probably do.


    import bcrypt
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(log_rounds=13))

Increasing log_rounds by one doubles the work, since the cost is proportional to 2**log_rounds.
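bcrypt itself isn't in the standard library, but the same shape is easy to see with hashlib.pbkdf2_hmac, where the iteration count plays the role of 2**log_rounds (a sketch; the password, salt, and counts are illustrative):

```python
import hashlib
import time

def timed_hash(log_rounds):
    # cost is proportional to 2**log_rounds, as with bcrypt's parameter
    start = time.perf_counter()
    digest = hashlib.pbkdf2_hmac("sha256", b"password", b"salt", 2 ** log_rounds)
    return digest, time.perf_counter() - start

# each increment of log_rounds roughly doubles the elapsed time
for lr in (14, 15, 16):
    _, elapsed = timed_hash(lr)
    print(lr, round(elapsed, 4))
```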


I rigged up a small test on my Macbook. Do you think 50,000 iterations would be enough for a general website (such as HN)?

    import timeit
    
    t = timeit.Timer(stmt="""\
    def test(pwd, n_iter):
        for i in range(n_iter):
            pwd = hashlib.sha1(pwd).hexdigest()
    test('hello', 50000)
    """, setup='import hashlib')

    print(t.timeit(100) / 100)
    
    >>> 0.126629960537


If a user logs in, and their work factor is the low one, authenticate them, and then calculate the new hash with the higher work factor, and save the new hash and work factor.
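A sketch of that flow, again with an iterated loop standing in for bcrypt and a per-user record holding the public parameters (all names and counts here are invented):

```python
import hashlib
import os

CURRENT_ITER = 50000  # today's target work factor

def make_record(password, n_iter=CURRENT_ITER):
    # store everything needed to verify: iteration count, salt, digest
    salt = os.urandom(16)
    h = password
    for _ in range(n_iter):
        h = hashlib.sha1(salt + h).digest()
    return {"n_iter": n_iter, "salt": salt, "digest": h}

def check(record, password):
    h = password
    for _ in range(record["n_iter"]):
        h = hashlib.sha1(record["salt"] + h).digest()
    return h == record["digest"]

def login(record, password):
    # authenticate with the stored parameters, then rehash if they are stale
    if not check(record, password):
        return None
    if record["n_iter"] < CURRENT_ITER:
        record = make_record(password)  # we have the plaintext right now
    return record
```

Because each record carries its own work factor and salt, old and new hashes coexist in the same table and users migrate one at a time.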


Follow-up question: I assume this also means that it would result in new hashes for the same passwords?


Yes. But from what I understand, that's even the case if you bcrypt() the same password with the same work factor multiple times, as it uses a random salt.
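A quick stdlib illustration of that point, using hashlib.pbkdf2_hmac in place of bcrypt (the iteration count is arbitrary):

```python
import hashlib
import os

def hash_pw(password):
    salt = os.urandom(16)  # fresh random salt every time
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100000)
    return salt, digest

s1, d1 = hash_pw(b"hunter2")
s2, d2 = hash_pw(b"hunter2")
assert d1 != d2  # same password, different digests
# verification still works, because the salt is stored alongside the digest:
assert hashlib.pbkdf2_hmac("sha256", b"hunter2", s1, 100000) == d1
```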



