Are the keys encrypted with a key derived from a master password?
Does the decryption only occur on the user's device?
Is this master password not reused for the account or has account authentication been changed to use a cryptographic proof produced on-device?
If the key is ever decrypted on the vendor's servers, everything else is theater.
And this is all of course also excluding auto-updating vendor-supplied authentication code from the threat model because the industry is not ready for that conversation yet.
These are all implementation decisions, since behind the scenes this is all just the Web Authentication API.
That also means that if you dislike the idea of some big company holding all your keys in a cloud-backed-up vault, you can just use a key from one of the dozens of hardware FIDO key manufacturers.
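For the curious, credential registration boils down to a single browser call; here's a minimal sketch (the rp/user values and options are illustrative, not any particular site's):

    // Minimal sketch: the site only ever talks to the Web Authentication API; whether
    // the resulting key pair lives in iCloud Keychain, a cloud password manager, or a
    // hardware FIDO key is the authenticator's business. All values are illustrative.
    async function createPasskey(): Promise<Credential | null> {
      const challenge = crypto.getRandomValues(new Uint8Array(32)); // normally sent by the server

      return navigator.credentials.create({
        publicKey: {
          challenge,
          rp: { id: "example.com", name: "Example" },
          user: {
            id: new TextEncoder().encode("user-1234"),   // site-specified user handle
            name: "alice@example.com",
            displayName: "Alice",                        // friendly display name
          },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256 (secp256r1 / P-256)
          authenticatorSelection: { residentKey: "required", userVerification: "required" },
        },
      });
    }

The authenticator that answers that call decides where the private key lives; the site sees the same API either way.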
On iOS, the keys are stored in iCloud Keychain, which is also the password auto-fill vault.
These keys are protected at two levels: iCloud encryption, and what is effectively an HSM cluster built from Apple Secure Enclaves.
There is no master passphrase/secret exposed to the user; it is synchronized between the devices on the account. You must join the 'ring' of personal devices, in addition to using the iCloud login, to decrypt iCloud information.
This means that, unlike basic iCloud encryption (which has a recovery HSM used to help people regain access to their accounts, and to which legal processes may grant read access), you need to perform additional actions to get access to this vault.
Each 'passkey' (Web Authentication credential) is a locally generated secp256r1 key pair in that keychain, with associated site information and storage for additional data such as the site-specified user identifier and a friendly display name.
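Conceptually, each keychain entry is just the key pair plus that metadata; something along the lines of this sketch (field names are mine, not Apple's actual schema):

    // Rough sketch of what one passkey record conceptually holds.
    // Field names are illustrative, not Apple's real storage format.
    interface PasskeyRecord {
      relyingPartyId: string;   // e.g. "example.com" - the site the credential is scoped to
      credentialId: Uint8Array; // identifier the site uses to look up the public key
      privateKey: CryptoKey;    // secp256r1 (P-256) private key, generated on-device
      userHandle: Uint8Array;   // site-specified user identifier
      userDisplayName: string;  // friendly display name shown in account pickers
      createdAt: Date;
    }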
There are basically three levels of protection for the data:
1. whatever the cloud hosting provider has for data at rest
2. the per-account iCloud encryption key, which is never shared with the hosting provider but exists on an Apple recovery HSM
3. the per-account device ring key, which is not visible to Apple.
so no, the credential's private key itself is never visible to Apple.
Apple does have a mechanism (if you go into Passwords) to share a passkey with another person's Apple device. You need to be mutually known (e.g. you need to have one another as contacts, with the contact record containing a value associated with their Apple ID), and it needs to be done over AirDrop for a proximity check. Presumably, this uses the public key on their account to do an ECDH key agreement and send a snapshot of the private information over the encrypted channel.
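A rough sketch of the kind of exchange that guess implies (this is plain WebCrypto standing in for whatever Apple actually does, so treat every detail as assumption):

    // Sketch of the ECDH-based sharing guessed at above (not Apple's actual protocol):
    // wrap the passkey snapshot to the recipient's public key using an ephemeral key
    // pair, ECDH, and AES-GCM, then send the result over the proximity-checked channel.
    async function wrapForRecipient(
      recipientPublicKey: CryptoKey,  // P-256 public key associated with their account
      passkeySnapshot: Uint8Array,    // serialized private credential data
    ) {
      // Ephemeral key pair so the sender's long-term keys aren't involved.
      const ephemeral = await crypto.subtle.generateKey(
        { name: "ECDH", namedCurve: "P-256" }, false, ["deriveKey"],
      );

      // Shared secret -> symmetric key (a real design would run this through a KDF
      // with context info; deriving AES-GCM directly keeps the sketch short).
      const aesKey = await crypto.subtle.deriveKey(
        { name: "ECDH", public: recipientPublicKey },
        ephemeral.privateKey,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt"],
      );

      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, aesKey, passkeySnapshot);

      // The recipient derives the same AES key from the ephemeral public key
      // and their own private key, then decrypts.
      return {
        ephemeralPublicKey: await crypto.subtle.exportKey("raw", ephemeral.publicKey),
        iv,
        ciphertext,
      };
    }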
Auto-updating vendor-supplied authentication code for iPhones is complex because of the split between the operating system code and the Secure Enclave firmware, the potential (mis)use of the Secure Enclave's API by a compromised operating system, and the potential to get malicious changes into the Secure Enclave firmware itself.
> Are the keys encrypted with a key derived from a master password?
No, because PBKDFs are not a good mechanism for creating encryption keys. Instead you have an actual random key, and your devices gate access to that key with your device password.
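As a sketch of that pattern (not any vendor's actual code): the data key is pure CSPRNG output, and what your password unlocks is the hardware holding the wrapping key, not a key derivation.

    // Sketch: the vault is encrypted with a truly random key; the device merely gates
    // access to that key. Everything here is a conceptual model, not a real implementation.
    async function setUpVault() {
      // 1. The real encryption key: 256 bits from the CSPRNG, never derived from a password.
      const vaultKey = await crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"],
      );

      // 2. A device-held wrapping key standing in for the secure hardware. Non-extractable:
      //    it can be used, but never read out. In real hardware the device passcode/biometric
      //    check is what authorizes its use; the password never becomes an encryption key.
      const deviceKey = await crypto.subtle.generateKey(
        { name: "AES-KW", length: 256 }, false, ["wrapKey", "unwrapKey"],
      );

      // 3. Only the wrapped form of the vault key is ever stored or synced.
      const wrappedVaultKey = await crypto.subtle.wrapKey("raw", vaultKey, deviceKey, "AES-KW");
      return { deviceKey, wrappedVaultKey };
    }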
> Does the decryption only occur on the user's device?
Yes, because only the user's devices have access to the key material needed to decrypt. Apple cannot decrypt them.
> Is this master password not reused for the account or has account authentication been changed to use a cryptographic proof produced on-device?
Not sure what you're asking here?
> If the key is ever decrypted on the vendor's servers, everything else is theater.
As above, the vendor/Apple cannot decrypt anything[1] because they do not have the key material.
> And this is all of course also excluding auto-updating vendor-supplied authentication code from the threat model because the industry is not ready for that conversation yet.
Don't really agree. The malicious vendor update is something that is discussed reasonably actively; it's just that there isn't a solution to the problem. Even the extreme "publish all source code" idea doesn't work, as auditing these codebases for something malicious is simply not feasible, and even if it were, ensuring that the source code you get exactly matches the code in the update isn't feasible (because if you assume a malicious vendor, you have to assume that they're willing to make the OS lie).
Anyway, here's a basic description of how to make a secure system for synchronizing anything, including key material (where "secure" means no one other than the end user can ever access the key material, in any circumstance, without having broken the core cryptographic algorithms that are used to secure everything).
Apple has some documentation on this scattered around, but essentially it works like this:
* There is a primary key - presumably AES, but I can't recall off the top of my head. This key is used to encrypt a bunch of additional keys for various services (this is fairly standard; the basic idea is that a compromise of one service doesn't compromise others. To me this is "technically good", but I would assume that the most likely path to compromise is getting an individual device's keys, in which case you get everything anyway?)
* The first device you use to create an iCloud account or to enable syncing generates these keys
* That device also generates a bunch of asymmetric keys and pushes public keys to anywhere they need to go (e.g. iMessage keys)
* When you add a new device to your account, it messages your other devices asking to get access to your synced key material. When you approve the addition of that new device on one of your existing ones, the existing device encrypts the master key with the public key provided by the new device and sends it back. At that point the new device can decrypt that response and use that key to decrypt the other key material for your account.
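A minimal sketch of that approval step, with algorithm choices that are mine (the real system presumably uses EC keys rather than RSA):

    // On the NEW device: generate a key pair and send the public half with the join request.
    async function newDeviceKeyPair() {
      return crypto.subtle.generateKey(
        { name: "RSA-OAEP", modulusLength: 2048, publicExponent: new Uint8Array([1, 0, 1]), hash: "SHA-256" },
        false,
        ["encrypt", "decrypt"],
      );
    }

    // On an EXISTING, already-trusted device: once the user approves the prompt,
    // encrypt the account's master key to the new device's public key.
    async function approveNewDevice(masterKeyRaw: Uint8Array, newDevicePublicKey: CryptoKey) {
      return crypto.subtle.encrypt({ name: "RSA-OAEP" }, newDevicePublicKey, masterKeyRaw);
    }

    // Back on the NEW device: decrypt the response; the master key then unlocks
    // the rest of the synced key material for the account.
    async function receiveMasterKey(wrapped: ArrayBuffer, newDevicePrivateKey: CryptoKey) {
      const raw = await crypto.subtle.decrypt({ name: "RSA-OAEP" }, newDevicePrivateKey, wrapped);
      return crypto.subtle.importKey("raw", raw, { name: "AES-GCM" }, false, ["encrypt", "decrypt"]);
    }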
All this is why, in the Apple ecosystem, losing all your devices historically meant losing pretty much everything in your account.
A few years ago Apple introduced "iCloud Key Vault" or some such marketing name for what are essentially very large sets of HSMs. When you set up a new device, that device pushes its key material to the HSMs, in what is functionally plaintext from the point of view of the HSMs, alongside some combination of your account password and device passcode. You might now say "that means Apple has my key material", but Apple has designed these so that it cannot. Ivan Krstic did a talk about this at Black Hat a few years back, but essentially it works as follows:
* Apple buys a giant HSM
* Apple installs software on this HSM that is essentially a password+passcode protected account->key material database
* Installing software on an HSM requires what are called "admin cards"; they're essentially just sharded hardware tokens. Once Apple has installed the software and configured the HSM, the admin cards are put through what Krstic called a "secure one-way physical hashing function" (aka a blender)
* Once this has happened the HSM rolls its internal encryption keys. At this point it is no longer possible for Apple (or anyone else) to update the software, or in any way decrypt any data on the HSM.
* The recovery path then requires you to provide your account, account password, and passcode, and the HSM will only provide the key material if all of those match. Once your new device gets that material it can start to recover all the other material needed. As with your phone, the HSM itself has increasing delays between attempts. Unlike your phone, once a certain attempt count is reached the key material is destroyed and the only "recovery path" is an account reset, so at least you get to keep your existing purchases, email address, etc.
You might think it would be better to protect the data with some password-derived key, but that is strictly worse - especially as the majority of passwords and passcodes are not great, nor large. In general, having a secure piece of hardware gate access to a strong key is better than having the data encrypted with a weak key. The reason is that if the material is protected by that weak key rather than by enforced policy, an attacker can copy the key material and brute force it offline, whereas a policy-based guard can directly enforce time and attempt restrictions.
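A toy model of that escrow gate, just to make the policy point concrete (this is obviously not Apple's HSM code):

    // Toy model of the recovery gate described in the bullets above: the strong key is
    // released only through a policy check that can rate-limit and self-destruct, which
    // is exactly what an offline brute force against a password-derived key never faces.
    class EscrowRecordGate {
      private attempts = 0;
      private nextAllowedAt = 0; // ms since epoch
      private destroyed = false;

      constructor(
        private readonly passcodeHash: string,   // stored verifier for the passcode
        private keyMaterial: Uint8Array | null,  // the strong random key being protected
        private readonly maxAttempts = 10,
      ) {}

      recover(passcodeGuessHash: string, now = Date.now()): Uint8Array {
        if (this.destroyed || this.keyMaterial === null) {
          throw new Error("key material destroyed; only an account reset remains");
        }
        if (now < this.nextAllowedAt) {
          throw new Error("try again later"); // escalating delay between attempts
        }
        if (passcodeGuessHash !== this.passcodeHash) {
          this.attempts += 1;
          this.nextAllowedAt = now + Math.min(2 ** this.attempts * 1000, 8 * 3600 * 1000);
          if (this.attempts >= this.maxAttempts) {
            this.keyMaterial = null; // destroy: no quantity of further guesses helps
            this.destroyed = true;
          }
          throw new Error("wrong passcode");
        }
        this.attempts = 0;
        return this.keyMaterial;
      }
    }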
[1] Excepting things that aren't end-to-end encrypted; most providers still have a few services that aren't E2E, though that mostly seems to be for historical reasons.
>> No, because PBKDFs are not a good mechanism for creating encryption keys
> I'm curious about what you mean by this. Isn't it in part what PBKDFs are designed for?
Password-based key derivation functions start with the assumption that some entropy is provided by the user. Which means that the entropy is typically of awful quality. A PBKDF does the best it can with that low entropy, which is to make it into a time- and maybe space-expensive brute-forcing problem. But a PBKDF is starting with one hand tied behind its back if the user-supplied entropy is "password" or "hunter2." If we aren't burdened by that assumption, then we can generate high-quality entropy -- like 128 or 256 bits of CSRNG-generated noise -- and merely associate it with the user, rather than basing it on the user's human-scale memory.
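In WebCrypto terms, the contrast looks roughly like this (the iteration count and sizes are illustrative):

    // Best case for a PBKDF: stretch whatever entropy the user actually typed.
    async function keyFromPassword(password: string, salt: Uint8Array): Promise<CryptoKey> {
      const baseKey = await crypto.subtle.importKey(
        "raw", new TextEncoder().encode(password), "PBKDF2", false, ["deriveKey"],
      );
      // If the password is "hunter2", the iteration count only slows an attacker
      // down linearly; the search space itself is still tiny.
      return crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 600_000, hash: "SHA-256" },
        baseKey,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt", "decrypt"],
      );
    }

    // Without the "must come from human memory" constraint: just generate the entropy.
    function keyFromCsprng(): Uint8Array {
      return crypto.getRandomValues(new Uint8Array(32)); // 256 bits; offline brute force infeasible
    }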
PBKDFs also generally assume that users are transmitting their plaintext passphrases to the server, e.g., when you HTTP POST your credentials to server.com. Of course, browsers and apps use transport security so that MITMs can't grab the passphrase over the wire, but the server actually does receive the phrase "hunter2" at some point if that's your passphrase. So again, it's a rotten assumption -- basically the foundation of most password-database compromises on the internet -- and PBKDF does the best it can.
If you remove that assumption and design a true asymmetric-encryption-based authentication system, then you don't need the obfuscation rounds of a PBKDF because the asymmetric-encryption algorithm is already resistant to brute-forcing. The script kiddie who steals /etc/passwd from a server would effectively obtain a list of public keys rather than salted hashes, and if they can generate private keys from public keys, then they are already very wealthy because they broke TLS and most Bitcoin wallets.
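Stripped of WebAuthn's framing (real assertions sign authenticatorData plus a hash of the client data, among other things), the login ceremony reduces to roughly this:

    // Rough shape of asymmetric challenge-response login; not the full WebAuthn protocol.

    // Server: stores only the public key and issues a fresh random challenge per login.
    function makeChallenge(): Uint8Array {
      return crypto.getRandomValues(new Uint8Array(32));
    }

    // Client: proves possession of the private key by signing the challenge.
    async function signChallenge(privateKey: CryptoKey, challenge: Uint8Array): Promise<ArrayBuffer> {
      return crypto.subtle.sign({ name: "ECDSA", hash: "SHA-256" }, privateKey, challenge);
    }

    // Server: verification needs nothing secret, so a stolen credential database
    // is just a list of public keys.
    async function verify(publicKey: CryptoKey, challenge: Uint8Array, signature: ArrayBuffer): Promise<boolean> {
      return crypto.subtle.verify({ name: "ECDSA", hash: "SHA-256" }, publicKey, signature, challenge);
    }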
Think of passkeys as a very user-friendly client-side certificate infrastructure. You wouldn't let your sysadmin base your enterprise website's TLS certificates on a private key derived from their dog's birthday. You wouldn't let users do that for their certs, either.
sowbug has a more detailed answer, but the TL;DR is that PBKDFs were considered OK a long time ago, before the security implications were really understood. Essentially they're low entropy in practice (e.g. a person _could_ make a 10+ word password, but they're not going to for a password they have to enter frequently).
You're much better off using the password to gate access to a truly random key, though that of course immediately raises the question of how you store the true key :D Modern systems use some kind of HSM to protect the true keys, and if they're deeply integrated like the SEP (or possibly the SE, I can never recall which is which) on Apple hardware, they can simply never expose the actual keys, instead providing only handles and having the HSM encrypt and decrypt data directly before it's seen by the AP.
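WebCrypto has a software analogue of that "handles, not keys" idea: mark a key non-extractable and you can ask it to do work but never read it out. A small sketch:

    // A non-extractable key can be used but never exported; hardware like the SEP
    // enforces the same property physically rather than in software.
    async function demoHandleOnlyKey() {
      const key = await crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 },
        /* extractable */ false,
        ["encrypt", "decrypt"],
      );

      const iv = crypto.getRandomValues(new Uint8Array(12));
      const secret = new TextEncoder().encode("vault contents");

      // Allowed: ask the key (by handle) to do work for you.
      const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, secret);

      // Not allowed: reading the key material itself rejects, because extractable is false.
      await crypto.subtle.exportKey("raw", key).catch(() => console.log("export refused"));

      return ciphertext;
    }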
tl;dr: Yes, and further, they're only decrypted using the secure chip on the device, so the vendor-supplied authentication firmware can't be updated without user interaction/approval.
From a post linked in the article:
> Passkeys in the Google Password Manager are always end-to-end encrypted: When a passkey is backed up, its private key is uploaded only in its encrypted form using an encryption key that is only accessible on the user's own devices. This protects passkeys against Google itself, or e.g. a malicious attacker inside Google. Without access to the private key, such an attacker cannot use the passkey to sign in to its corresponding online account.
> Additionally, passkey private keys are encrypted at rest on the user's devices, with a hardware-protected encryption key.
> Creating or using passkeys stored in the Google Password Manager requires a screen lock to be set up. This prevents others from using a passkey even if they have access to the user's device, but is also necessary to facilitate the end-to-end encryption and safe recovery in the case of device loss.
The question I always ask to figure out how things work: What happens if I lose my phone?
Vendors trying to peddle a solution will always try to answer this question in a way that doesn't say "well, in that case you're screwed." Any answer except "you're screwed" means there is some kind of potentially vulnerable recovery process, and the description of how that process works usually gives you an idea of how secure it is (or at least a starting point to ask more questions).
If you lose your phone (and all other devices you might have), Apple does have a secure (as in Apple cannot access it) last-ditch recovery path (see my other wall-of-text answer).
But in the absence of that, the data is gone - it's one of the big concerns that come up in response to "E2E everything": people are not used to the idea that forgetting your password (or losing your devices in a 2FA world) means the data is actually irrecoverable, and it's not just a matter of resetting the account password (e.g. you can't go into a store with your ID to "prove it's you", because that isn't the problem).
"Only on the user's device", right.