I've always felt that having your keys tied to specific hardware will just lock you out when the hardware breaks. Maybe I'm overstating the risk. It's a good idea to have multiple methods and test the backups.
The general consensus has been that you create a key pair per client computer that you use. If one is stolen (say your laptop), you login from your desktop and revoke the stolen key. If the hard drive fails, you login from another client.
I don’t see much difference between that and storing the key on a TPM. If you have one key and you lose access to that key, then you lose access to the server.
Just paste all of your devices' public keys into your authorized_keys file (in Userify, it literally goes right into your nodes' authorized_keys file almost verbatim) and leave a comment at the end for what device it's for.
And then, if you leave your token or laptop at the airport or whatever, just remove that key right from your phone and it'll take effect in seconds across all the nodes (if you're using Userify) or you can just write a quick for-inline-sed loop to remove it from your authorized keys everywhere.
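For the non-Userify case, a minimal sketch of that loop (hostnames and the key's trailing comment are placeholders; adjust and test before trusting it with your own access):

    # remove the key whose trailing comment is "old-laptop" from every host
    for h in web1 web2 db1; do
      ssh "$h" 'sed -i "/ old-laptop$/d" ~/.ssh/authorized_keys'
    done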
Cloud-based passkeys are okayish (1pass, bitwarden), as they are available on multiple devices.
However not all devices play well with it, e.g. iOS and Android don't ask 1pass for the passkey. I also couldn't make it ask NFC for my hardware Yubikey with passkeys, but maybe I just did something wrong.
Passkeys are supposed to cover two authentication factors at once (having your device + biometrics). Because your yubikey doesn't implement biometrics, it's only a single factor, and thus cannot be used as a passkey.
A passkey is just a thing that authenticates with FIDO2 (or is it WebAuthn?), I believe.
With a password, you open your password manager, copy the password in memory, paste it into the input field and trust that nobody could read it from your clipboard and that the program handling the password does it correctly. If your password leaks on the way, it's leaked.
With FIDO2, the server sends a challenge and asks your HSM (or TPM, not sure what the right word is) to sign it with your private key. So the server can verify that you own the private key, but if the challenge or the response leaks, it's just this one time. Next time it will be a new challenge.
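If it helps to see the challenge-response idea concretely, here is a rough analogy with plain openssl (this is not the real FIDO2/CTAP protocol, and the private key here sits in a file rather than inside a security key):

    # enrollment: client makes a key pair, registers only the public half with the server
    openssl genpkey -algorithm ED25519 -out client_key.pem
    openssl pkey -in client_key.pem -pubout -out client_pub.pem
    # login: server sends a fresh random challenge
    openssl rand -out challenge.bin 32
    # client signs the challenge; the private key never leaves the client
    openssl pkeyutl -sign -rawin -inkey client_key.pem -in challenge.bin -out sig.bin
    # server checks the signature against the registered public key
    openssl pkeyutl -verify -rawin -pubin -inkey client_pub.pem -in challenge.bin -sigfile sig.bin

A leaked challenge/signature pair is useless next time, because the next challenge is different.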
Also for the average Joe, the result is that the "passkey" is the fingerprint or the face recognition and there is no password. It feels like they have only one password: the biometry/face recognition (or a master password, I guess?). So passkeys are superior to passwords in that sense.
Fun fact 1: some people hate passkeys because they don't want to be forced to rely on TooBigTech for them. Currently I use my Yubikeys as passkeys everywhere and it works well, so I do NOT depend on TooBigTech.
Fun fact 2: FIDO2 on current Yubikeys (and HSM in general, I think) tend to use classic cryptography which would be broken by quantum computers. A password used with symmetric encryption is not broken by quantum computers. So there may be a period of time where this becomes a tradeoff (you may have to decide whether the most likely attack is a quantum computer breaking your authentication or a malware stealing your password)?
I'd consider storing (generating) them in AWS KMS. It's $1/key/month and you don't have to worry about hardware failures, etc. Each key must have a separate policy attached which controls who it can be used by and how. It is possible to create keys the root account cannot touch. If you have anything running on EC2, it's an extremely compelling option because you can authenticate directly via IMDSv2 tokens and IAM roles, avoiding the need for any kind of secret strings.
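An untested sketch of what that looks like with the AWS CLI (key spec, IAM wiring, and whatever agent glue would make ssh actually use the key are all left out or assumed):

    # create an asymmetric signing key (~$1/month) and grab its public half
    aws kms create-key --key-spec ECC_NIST_P256 --key-usage SIGN_VERIFY --description "ssh signing key"
    aws kms get-public-key --key-id <key-id> --query PublicKey --output text | base64 -d > pub.der
    # every signature goes through the KMS API and is gated by IAM / the key policy
    aws kms sign --key-id <key-id> --message fileb://challenge.bin --message-type RAW \
        --signing-algorithm ECDSA_SHA_256 --query Signature --output text | base64 -d > sig.bin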
I remember reading a very similar chain last year, trying it on my Proxmox host, and then being surprised it didn't work. I'm sure it's not the only modern distro this way, but I can't claim to have tried very many after that.
This is a neat trick that people have been doing with Yubikeys for a long time, but from an operational security perspective, if you have a fleet rather than just a couple of hosts, the win is only marginal vs. short-lived keys, certificates, and a phishing-proof IdP.
The integration of the ed25519-sk keys is just so easy and similar to normal ssh keys, so the upgrade is way easier.
You just need to tighten your sshd config; you can even add a "touch required" for the Yubikey to the sshd config. It has been in Debian stable since 11, I think?
So it's super friendly to integrate and very secure, as you need to physically be on your pc, have your yubikey and have your exact pc. So that's a lot of factors.
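For reference, the moving parts are small; a sketch, assuming OpenSSH 8.2+ on both ends:

    # client side: generate a hardware-backed key (Yubikey plugged in, touch to confirm)
    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
    # server side, in /etc/ssh/sshd_config: refuse -sk signatures that don't attest a physical touch
    PubkeyAuthOptions touch-required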
Seems a little pointless, your keys can't be stolen but they can be instantly used by malware to persist across anything you have access to. The keys don't have any value in their own right, the access they provide does.
The idea with HSM-backed keys is that even in case of compromise, you can clean up without having to rotate the keys. It also makes auditing easier as you can ensure that if your machine was powered down or offline then you are guaranteed the keys weren't used during that timeframe.
That's still an improvement. In sophisticated attacks, attackers might well store stolen credentials and use them at a later, more opportune time.
Of course a real secure attention sequence would be preferable, such as e.g. requiring a Touch ID press on macOS for keys stored in the Secure Enclave. Not sure if TPM supports something similar when a fingerprint sensor is present?
Well... it would also be confusing to call them "passwords" because they are not that.
In addition to biometric authentication, Windows Hello supports authentication with a PIN. By default, Windows requires a PIN to consist of four digits, but can be configured to permit more complex PINs. However, a PIN is not a simpler password. While passwords are transmitted to domain controllers, PINs are not. They are tied to one device, and if compromised, only one device is affected. Backed by a Trusted Platform Module (TPM) chip, Windows uses PINs to create strong asymmetric key pairs. As such, the authentication token transmitted to the server is harder to crack. In addition, whereas weak passwords may be broken via rainbow tables, TPM causes the much-simpler Windows PINs to be resilient to brute-force attacks.[139]
So you see, Microsoft needs a way to describe an access code that isn't a password, because it's more secure than that, but yet it isn't exactly a number, so what do you call it? "PIN" is perhaps an unfair recycling of an in-use term, but should they coin a neologism instead? Would that be less confusing?
Without a presence test (e.g. a Yubikey touch) it's certainly not perfect. But it does close some real-world attacks, like the key only being usable while your laptop is on (assuming a laptop, here).
And keys cannot be stolen from backups.
Or stolen without your knowledge when you left your laptop unguarded for 5min.
Not every attacker has persistent undetected access. If the key can be copied then there's no opportunity for the original machine's tripwires to be triggered by its use. Every second malware runs is a risk of it being detected. Not so, or not in the same way, with a copied key.
Android actually supports secure transaction confirmation on Pixel devices using a secure second OS that can temporarily take control of the screen and volume button as secure input and output! https://android-developers.googleblog.com/2018/10/android-pr...
This is really cool and goes beyond the usual approach of securing just the key while still handling "what you see is what you sign" and key-usage confirmation at the OS level, which can be compromised much more easily (both input and output).
Quote: "Android Protected Confirmation is deprecated due to the high
support/maintenance cost for Android device makers and low adoption rate
among app developers. APC requires Android device makers to have a
substantial amount of device-specific UI code running in the trusted
execution environment. That has proven to be expensive to maintain and
non-scalable, as there cannot be a single implementations device makers
can share or use as a reference. Additionally, app developers have not
adopted this feature, as the Android platform offers other mechanisms
for authentication a user's intent. These mechanisms, such as
authentication-bound Keystore keys, are less secure than Trusted UI, but
are more wide-spread. While we explore alternatives to APC that are
viable to the device makers ecosystem, we sunset the APC API."
Oh damn, I missed that, thank you. I could see how it was a very expensive thing to maintain for an effectively Pixel-only feature.
Still, I think this was one of the most ambitious and user-beneficial implementations of trusted computing I've seen so far, in that it theoretically safely allows a completely rooted/user-owned device to still participate in things like online banking or e-government transaction authorization. I hope it'll return in some form.
Anything PKCS#11 you can proxy. I'm using that on some systems - I have an old notebook with a Nitrokey HSM at home. It binds pkcs11-proxy to a local wireguard interface, so I register the systems that should be able to use those keys with that notebook's wireguard. They still need a PIN to unlock a session as well.
Or put them in a $2 FLOSS Gnuk token/smart card that you can carry with you and still have strong password protection and AES encrypted data at rest with KDF/DO:
https://github.com/ran-sama/stm32-gnuk-usb-smartcard
I would love a world where I could put all my API keys in the TPM so malware couldn't gain persistent access to services after wiping my computer. This would be so easy if more providers used asymmetric keys, like through SSH or mTLS. Unfortunately, many don't, which means that stealing a single bearer token gives full access to services.
There's also the TPM speed issue. My computer takes ~500ms to sign with an ECC256 key with the TPM, which starts to become an issue when running scripts that use git operations in serial. This is a recurring problem that people tend to blame on export controls: https://stiankri.substack.com/p/tpm-performance
In some cases there is a work-around for bearer tokens. If they allow key/cert login to generate the token (either directly, or via OAuth), and the token can be generated with a short lifetime, you can build something pretty safe (certainly safer than having a non-expiring or long-TTL token in a wallet).
Apologies for asking this question here instead of actually doing the research, but it always seemed to me that while putting keys in a secure environment would help against leakage of the private bits, there really isn't a great story around making sure that only authorized requests can be signed. Is this a stupid concern?
Yubikey can require touch, and Secretive for Apple Secure enclave can require touch with fingerprint id. Some people disable these, it depends exactly on your use case.
Yes, but what's to stop a malicious actor from intercepting a signature request and substituting its own contents for the legitimate one? Yes, you would find out when your push was rejected, but that would be a bit late.
We created Keeta Agent [0] to do this on macOS more easily (also works with GPG, which is important for things that don't yet support SSH Signatures, like XCode).
Since it just uses PKCS#11, it also works with tpm_pkcs11. Source for the various bits that are bundled is here [1].
Here's an overview of how it works:
1. Application asks to sign with GPG Key "1ABD0F4F95D89E15C2F5364D2B523B4FDC488AC7"
2. GPG looks at its key database and sees GPG Key "1ABD...8AC7" is a smartcard, reaches out to Smartcard Daemon (SCD), launching if needed -- this launches gnupg-pkcs11-scd per configuration
3. gnupg-pkcs11-scd loads the SSH Agent PKCS#11 module into its shared memory and initializes it and asks it to List Objects
4. The SSH Agent PKCS#11 module connects to the SSH Agent socket provided by Keeta Agent and asks it to List Keys
5. Key list is converted from SSH Agent protocol to PKCS#11 response by SSH Agent PKCS#11 module
6. Key list is converted from PKCS#11 response to gnupg-scd response by gnupg-pkcs11-scd
7. GPG reads the response and if the key is found, asks the SCD (gnupg-pkcs11-scd) to Sign a hash of the Material
8. gnupg-pkcs11-scd asks the PKCS#11 module to sign using the specified object by its Object ID
9. PKCS#11 module sends a message to Secretive over the SSH Agent socket to sign the material using a specific key (identified by its Key ID) using the requested signing algorithm and raw signing (i.e., no hashing)
10. Response makes it back through all those same layers unmodified except for wrapping
(illustrated at [2])
[0] https://github.com/KeetaNetwork/agent
[1] https://github.com/KeetaNetwork/agent/tree/main/Agent/gnupg/...
[2] https://rkeene.org/tmp/pkcs-sign.png
For Yubikey, this guide is worth looking at: https://github.com/drduh/yubikey-guide ("Community guide to using YubiKey for GnuPG and SSH - protect secrets with hardware crypto.")
It's also a bit outdated. OpenSSH supports FIDO2 natively, so all this gnupg stuff is unnecessary for ssh. One can even use yubikey-backed ssh keys for commit signing.
https://www.stavros.io/posts/u2f-fido2-with-ssh/
And the best thing is that you can create several different ssh keys this way, each with a different password, if that's something you prefer. Then you need to type the password _and_ touch the yubikey.
These work flawlessly with the KeepassXC ssh-agent integration. My private keys are password protected, saved securely inside my password vault, and with my ssh config setup, I just type in the hostname and tap my Yubikey.
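The ssh config side of that is just a host alias; a made-up example (host, user, and key path are placeholders):

    # ~/.ssh/config
    Host prod
        HostName prod.example.com
        User deploy
        IdentityFile ~/.ssh/id_ed25519_sk
        IdentitiesOnly yes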
This assumes that the server is running a recent enough OpenSSH, configured with this enabled. For Linux servers, sure. For routers, less obviously so.
We've got private Git repos only accessible through ssh (and the users' shell is set to git-shell) and it's SSH only through Yubikey. The challenge to auth happens inside the Yubikey and the secret never leaves the Yubikey.
This doesn't solve all the world's problems (like hunger and war) but at least people are definitely NOT committing to the repo without physically having access to the Yubikey and pushing on it (now ofc a dev's computer may be compromised and he may confirm auth on his Yubikey and push things he didn't mean to, but that's a far cry from "we stole your private SSH key after you entered your passphrase a Friday evening and are now pushing stuff in your name to 100 repos of yours during the weekend").
Keep a CA (constrained to your one identity) with a longish (90 day?) TTL on the TPM. Use it to sign a short-lived (16h?) key, and use that as your working key.
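Roughly, with ssh-keygen (paths, principal names, and the PKCS#11 module path for a TPM-resident CA are all assumptions; servers also need TrustedUserCAKeys pointing at the CA public key):

    # mint today's working key and sign it for 16 hours
    ssh-keygen -t ed25519 -f ~/.ssh/id_today -N ""
    ssh-keygen -s ~/.ssh/ca -I "$USER" -n myuser -V +16h ~/.ssh/id_today.pub
    # if the CA private key lives in a PKCS#11 token (e.g. via tpm2-pkcs11), sign with -D instead:
    # ssh-keygen -s ca.pub -D /usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so.1 -I "$USER" -n myuser -V +16h ~/.ssh/id_today.pub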
Didn't Tailscale try to do something similar but found out quickly that TPMs 1) aren't as reliable as common wisdom makes them out to be, and 2) have gotchas when it comes to BIOS updates?
I can't find it now, but I believe someone from Tailscale commented on HN (or was it github?) on what they ran into and why the default was reverted so that things were not stored in the TPM.
EDIT: just saw the mention in the article about the BIOS updates.
If you run into the link to this, I'd love to read it. Proper, modern pcrphase binding with a signing key should remove these firmware update issues w.r.t. the raw PCR value changing.
https://github.com/tailscale/tailscale/issues/17622
https://news.ycombinator.com/item?id=46532666 (direct comment link, more discussion on the issue in the parent)
Well no thanks, that risk is much higher than what this is worth.
This could make real sense for ssh host keys, since they need to be used without presence and they're generally tied to the lifetime of the machine anyway.
I saw a write up where someone successfully got sshd to use a host key from a fido2 yubikey without touch, but I can't find it...
As far as "TPM vs HSM", it is soooo much simpler to make a key pair with a fido2 hardware key:
I had a friend tell me once that his yubikey is more secure than my authenticator app on my phone because my phone has this giant attack surface that his yubikey doesn't. Yet the yubikey has an entire attack surface of the computer it is plugged into. Which is largely the same or worse than my phone's.
I'm wondering why that doesn't apply here. The TPM holds the key to the cipher that is protecting your private keys. Someone uses some kind of RCE or LPE to get privileged access to your system. Now it sits and waits for you to do something that requires access to your SSH keys. When you do that, you are expecting whatever user prompts come up; the malware rides along this expectation, gets ahold of your private SSH keys, and stores them or sends them off somewhere. I'm not even positive that they need a high degree of privileges on your box: if they can manipulate your invocation of the ssh client, by modifying your PATH or adding an ssh wrapper to something already in your path, then this pattern will also work.
What am I gaining from using this method that I don't get from using a password on my ssh private key?
The promise of HSMs, TPMs and smart cards is that you have a tiny computer (microcontroller) where the code is easier to audit. Ideally a sealed key never leaves your MCU. The cryptographic primitives, secret keys and operations are performed in this mini-computer.
Further promises are RTC that can prevent bruteforce (forced wait after wrong password entry) or locking itself after too many wrong attempts.
A good MCU receives the challenge and only replies with the signature if the password was correct. You can argue that a phone with a Titan security chip is a type of TPM too. In the end it doesn't matter. I chose the solution that works best for me, where I can either have all keys only in my smart card, or an offline paper wallet too in a fireproof safe. The choice is the user's.
For SSH to use your keys a calculation has to be done using your private key and then send the results back to the remote site so it can validate that you got the results that prove you have your private key. The TPM and your yubikey do not do this calculation. They allow software on your computer to access the private key in plaintext form, perform this calculation, and then send the result (and then presumably overwrite the plaintext key in RAM). If your system has been compromised, then when this private key is provided to the host based software, it can be taken.
Yubikey (and nitrokey and other HSMs) are technically smart cards, which perform crypto operations on the card. This can be an issue when doing lots of operations, as the interface is quite slow.
Downvoted - this is false, sorry. The whole point of security keys (whether exposed via PKCS#11, or FIDO) is that the private key material never leaves the security key and instead the cryptographic operations are delegated to the key, just like a commercial HSM.
Technically, a private key that was imported (and is marked as exportable) to a PKCS#11 device can subsequently be re-exported (but even then, during normal operation the device itself handles the crypto), but a key generated on-device and marked as non-exportable guarantees the private key never leaves the physical device.
They can use the key as long as they can access your computer, but they shouldn't be able to get the secret key out of the TPM or Yubikey and use it elsewhere while your computer is off. That's the main point of HSMs.
Yeah but they already mentioned that they expect the attacker to hijack your ssh command so you'll touch it yourself, thinking you're authorizing something else than you actually are.
It does mean that they can't use the key a thousand times. But once? Yeah sure.
> hijack your ssh command so you'll touch it yourself, thinking you're authorizing something else than you actually are.
That doesn’t do anything at all.
1. If the attacker is redirecting you to a different host then ssh will simply refuse to connect due to known_hosts (I guess they could have added to that file too, redirect you to a honeypot and then hopefully you’ll run “sudo” before realizing but then at that point just hijack “sudo” itself in the local machine)
2. If the attacker is trying to let you connect and eavesdrop on your connection to steal credentials, then that also still doesn't work, as the handshake for ssh is not vulnerable to replay attacks
The attacker could trick you into signing something I guess but then that still doesn’t do anything because secrets are not divulged at any point
I guess if the yubikey is also used for `sudo` then your attack makes more sense, as the attacker could prompt you to authenticate a sudo request when you call the evil `ssh`
Okay let me elaborate how I envision that attack to work:
1. attacker wants to use your yubikey-backed ssh key, let's say for running ssh-copy-id once with their own key so they can gain access to your server
2. thus they need to trick you into touching the key when they run that command
3. the best way to trick you is to wait until you do something where you'd normally need to touch that key yourself
4. so they alias ssh to a script that detects when you're trying to connect to this server yourself, and invoke ssh-copy-id instead, which prompts you to touch the yubikey and you do
5. spit out a reasonable looking error (something that makes you think "bloody DNS, it's always DNS, innit" or something silly like that); then they undo the alias so you succeed on the next try and suspect nothing
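One thing that raises the bar here (my addition, not from the thread): generate the -sk key with user verification required, so every signature needs the PIN as well as the touch; a reflexive tap on a spoofed prompt is no longer enough, though a convincing fake can still phish the PIN out of you:

    ssh-keygen -t ed25519-sk -O verify-required -f ~/.ssh/id_sk_verified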
I know the title says "in your TPM chip" but the method described does not store your private key in the TPM, it stores it in a PKCS keystore which is encrypted by a key in your TPM. In actual use the plaintext of your private ssh key still shows up in your ssh client for validation to the remote host.
The recommended usage of a yubikey for ssh does something similar as otherwise your key consumes one of the limited number of slots on the key.
What am I missing?
This article's method is bad, basically the same as systemd-creds (not itself bad, just extremely compatible). Take a look at tpm-ssh-agent or gnupg for how to do that part the right way (the part they don't do right is binding/signing to PCRs, which is just low-hanging fruit in today's day and age...).
I really don't think this is true for FIDO2 like Yubikey. My understanding is that your ssh client gets a challenge from the server, reads the key "handle" from the private key file, and sends both to Yubikey. The device then combines its master key with the handle to get the actual private key, signs the challenge, and gives the result back to your ssh client. At no point does the private key leave the Yubikey.
I don't know if you are missing anything. That's why I'm asking and making statements about how I understand the various processes to work. I want to understand how it is that the only device that interacts with the yubikey/tpm, when compromised, can't be subverted to the attacker's ends.
Perhaps one extra bit to add: you've mentioned consuming slots on the device - that's what happens if you generate a resident key. Those keys live on the device and can be used from any computer you plug them into, without having to retain/copy any files. A non-resident key, on the other hand, is derived from the master key on the device, and a "handle" that's stored as a file on your computer. You can have as many as you want, but if you lose either the file or the hardware device, they're gone.
(Others in the thread have confirmed that both resident and non-resident keys never leave the hardware. If you generate one that requires touch, they're fairly secure - you need physical presence and confirmation for every operation.)
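For completeness, the two flavours in ssh-keygen terms (a sketch; resident keys typically also want a PIN set on the token):

    # non-resident (default): a key-handle file lands in ~/.ssh, useless without the token
    ssh-keygen -t ed25519-sk
    # resident: stored on the token itself, retrievable on a fresh machine with:
    ssh-keygen -t ed25519-sk -O resident
    ssh-keygen -K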
Yes, with a TPM and yubikey you have the option to store the per-key material on disk, encrypted by the TPM. But the way this is then used is that the PKCS software sends that encrypted blob AND the requested operation, and gets only the output back. The CPU doesn't get the SSH private key back. Just the output of the RSA operation using the key.
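In OpenSSH terms that provider model looks roughly like this (the tpm2-pkcs11 module path is a guess and varies by distro; the point is that only signatures come back, never the key):

    # list the public halves the token is willing to expose
    ssh-keygen -D /usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so.1
    # use the token directly, or hand it to the agent
    ssh -I /usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so.1 user@host
    ssh-add -s /usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so.1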
Depending on which authenticator app (or maybe applies to all?), that data either is, or can be, backed up.
A yubikey cannot be cloned.[1]
> the malware rides along this expectation and gets ahold of your private SSH keys and stores them or sends them off somewhere.
Ah, this is where your misunderstanding lies. No, the crypto operation runs ON the TPM or yubikey. The actual secret key NEVER lives in RAM. (ehem, after it was imported, if importing is the method by which it was generated)
[1] You know what I mean. Of course in principle it can be. But not like a phone where it can literally be sent via scp.
Or, alternatively, don't. Stuff in a TPM isn't for "security" in the abstract, it's fundamentally for authentication. Organizations want to know that the device used for connection is the one they expect to be connecting. It's an extra layer on top of "Organizations want to know the employee account associated with the connection".
"Your SSH keys" aren't really part of that threat model. "You" know the device you're connecting from (or to, though generally it's the client that's the mobile/untrusted thing). It's... yours. Or under your control.
All the stuff in the article about how the TPM contents can't be extracted is true, but missing the point. Yes, you need your own (outer) credentials to extract access to the (inner) credentials, which is no more or less true than just using your own credentials in the first place via something boring like a passphrase. It's an extra layer of indirection without value if all the hardware is yours.
TPMs and secure enclaves only matter when there's a third party watching[1] who needs to know the transaction is legitimate.
[1] An employer, a bank, a cloud service provider, a mobile platform vendor, etc... This stuff has value! But not to you.
> TPM isn't for "security" in the abstract, it's fundamentally for authentication
What on earth do you think I make my users present keys for???
You know all those guides saying "you should never copy an ssh private key over the network. Make a new one for each device" that every idiot dev ignored? Now I can enforce that.
Not a chance. It is my key.
Which is what SSH keys are for?
The advantage of this approach is that malware can't just send off your private key file to its servers.
> The advantage of this approach is that malware can't just send off your private key file to its servers.
The use case is ssh keys! If malware can run an ssh command on the remote host, it doesn't need to steal your key, it can just install itself there. Or add its own keys to the access, etc... At best, you'd have to detect and fix that sort of thing with auditing and control, something that's isomorphic to the "third party" requirements I was mentioning.
To repeat for the third time: this is all terrible threat model analysis. TPMs do not have value for individuals managing access between trusted devices. TPMs are for third-party validation.
So does a pass phrase though, with significantly less complexity and fragility.
TPMs can be useful to you as an individual if you're trying to protect against an evil maid attack. Although I think Linux isn't quite there yet with its support for it. The systemd folks are making progress though.
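For the disk-encryption side of that, the current systemd flow is roughly this (device path is a placeholder; needs a fairly recent systemd):

    # bind a LUKS slot to the TPM, measured against Secure Boot state (PCR 7),
    # and require a PIN on top so possession of the machine alone isn't enough
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 --tpm2-with-pin=yes /dev/nvme0n1p3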
That only helps if you set a strong password as your TPM PIN. Otherwise it's hardware-bound with no access control, and just as susceptible to evil maid attacks as storing the keys directly in a file.
I don't see how entering a passphrase into a compromised boot loader/kernel/initramfs is as safe as a measured boot with TPM providing the decryption key only if nothing seems to have been tampered with. Can you elaborate please?
Again, the linked article and responses here are making IMHO a pretty bad mistake with threat model analysis. I said this elsewhere in the thread, but to repeat here:
Can you explain why securing the ssh keys on a host that was fully compromised like that is anything but theater? Fine, you can't get the key out. You can just run the command directly.
Again, there are use cases where TPMs provide value to authenticate specific devices. But they are not and never have been about "keeping secrets". Your secrets are trash once the device is compromised.
Well I wasn't talking about ssh keys at all - that's where the misunderstanding comes from. I was simply trying to counter your claim that TPMs are never ever useful for individuals. They can be useful to individuals worried about having their boot tampered with.
I absolutely agree that they do zilch to protect your SSH keys. Hardware security keys that need physical confirmation of presence are much better for that use-case.
> If you have one key and you lose access to that key, then you lose access to the server.
Point: you need a backup key anyway.
> However not all devices play well with it, e.g. iOS and Android don't ask 1pass for the passkey.
I don't think average Joe is going to understand these passkeys either.
This may be bash-only, but a space before the command excludes something from history too.
Personally I like this, which reduces noise in history from duplicate lines too:

    export HISTCONTROL=ignoreboth:erasedups
In theory the Linux kernel keyring would help here, either on its own or in conjunction with a TPM.
Unfortunately, as the industry abandoned the core Unix permission system (uid/gid), all of these methods just get a devfs[null] bind mount.
Only processes that also support the traditional co-hosting model, like nginx and Postgres, do.
Without nonce keys, we gain no value from kernel memory or hardware storage.
> By default, Windows requires a PIN to consist of four digits, but can be configured to permit more complex PINs.
The PIN can be an arbitrary string (password).
> "PIN" is perhaps an unfair recycling of an in-use term, but should they coin a neologism instead? Would that be less confusing?
Communicates it is meant to be secret, and can be a short memorable thing.
I put my ssh keys into the Mac’s TPM and now it asks for a password/touch ID when I use it.
Unfortunately I forget what commands I used
> now ofc a dev's computer may be compromised and he may confirm auth on his Yubikey and push things he didn't mean to
that's not good
> My private keys are password protected, saved securely inside my password vault
And even the password can be forced to be re-entered by the agent for every use, if that level of security is wanted.
"Your SSH keys" aren't really part of that threat model. "You" know the device you're connecting from (or to, though generally it's the client that's the mobile/untrusted thing). It's... yours. Or under your control.
All the stuff in the article about how the TPM contents can't be extracted is true, but missing the point. Yes, you need your own (outer) credentials to extract access to the (inner) credentials, which is no more or less true than just using your own credentials in the first place via something boring like a passphrase. It's an extra layer of indirection without value if all the hardware is yours.
TPMs and secure enclaves only matter when there's a third party watching[1] who needs to know the transaction is legitimate.
[1] An employer, a bank, a cloud service provider, a mobile platform vendor, etc... This stuff has value! But not to you.