In messengers with end-to-end (E2E) encryption, users are responsible for their own keys. When they lose them, they are forced to reset their account.
Resetting an account is dangerous. You erase your public keys and become a cryptographic stranger in all of your conversations. You need to re-establish your identity, and in almost all cases that means meeting in person and comparing "security numbers" with each of your contacts. How often do you actually perform this check, which is the only protection against a man-in-the-middle (MITM) attack?
Even if you take security numbers seriously, you may see many of your chat partners only once a year at a conference, so you're stuck.
But that doesn't happen often, right?
How often does a reset occur? In most E2E chat apps: all the time.
In these messengers, you drop the cryptography and just start trusting the server: (1) whenever you switch to a new phone; (2) whenever a contact switches to a new phone; (3) whenever you factory-reset your phone; (4) whenever any contact factory-resets theirs; (5) whenever you uninstall and reinstall the app; or (6) whenever anyone else does. If you have dozens of contacts, a reset will happen every few days.
Resets happen so regularly that these applications pretend they are not a problem:
Looks like we have a security upgrade! (But not really.)
Is it really TOFU?
In cryptography, the term TOFU ("trust on first use") describes the leap of faith two parties take the first time they talk. Instead of meeting in person, each side simply accepts the key it is handed... and from then on, each side carefully tracks the other's key to make sure nothing has changed. If the key changes, each side raises an alarm.
When the key of a remote host changes, SSH does not "just work"; it becomes downright militant:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff
Please contact your system administrator.
Add correct host key in /Users/rmueller/.ssh/known_hosts to get rid of this message.
Offending RSA key in /Users/rmueller/.ssh/known_hosts:12
RSA host key for 8.8.8.8 has changed and you have requested strict checking.
Host key verification failed.
This is the correct behavior. And remember: it is not TOFU if it lets you keep working after a small warning. You should see a giant skull and crossbones.
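SSH's hard failure can be sketched in a few lines. This is a toy illustration of TOFU key pinning in the spirit of known_hosts, not real SSH code; the class and method names are made up for the example.

```python
import hashlib

class KnownHosts:
    """Toy TOFU store mapping host -> pinned key fingerprint."""

    def __init__(self):
        self.pins = {}

    def fingerprint(self, public_key: bytes) -> str:
        return hashlib.sha256(public_key).hexdigest()[:16]

    def check(self, host: str, public_key: bytes) -> str:
        fp = self.fingerprint(public_key)
        pinned = self.pins.get(host)
        if pinned is None:
            self.pins[host] = fp  # first contact: trust and pin the key
            return "trusted-on-first-use"
        if pinned == fp:
            return "ok"
        # Key changed: a real TOFU client must fail hard, not warn softly.
        raise RuntimeError("REMOTE HOST IDENTIFICATION HAS CHANGED!")
```

The point is the last branch: after the first use, a changed key is a hard error, never a dismissible dialog.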
Of course, these messengers will argue that everything is fine because the user is warned and can check the security numbers if they want. Here is why we disagree:
- The check fails in practice because key changes happen too often.
- The check process itself is poor.
- Even a cursory survey of our security-conscious friends showed that none of them actually performs this check.
- So it is just trust in the server and trust in SMS (yikes!), again and again.
- Finally, these applications should not work this way, especially when changing devices. The typical case can be handled smoothly and safely, and the rarer the situation, the scarier it should look. In a minute we will show the Keybase solution.
Stop calling this TOFU
There is a very effective attack. Suppose Eve wants to break into an existing conversation between Alice and Bob and insert herself between them. Alice and Bob have been in contact for years, having passed TOFU long ago.
Eve just makes Alice think that Bob bought a new phone:
Bob (Eve): Hey, Hey!
Alice: Yo, Bob! Looks like you have new security numbers.
Bob (Eve): Yes, I bought an iPhone XS, great phone, very happy with it. Let's compare security numbers at RWC 2020. Hey, do you have Caroline's current address? I want to surprise her while I'm in San Francisco.
Alice: Can't make it, Android 4 life! Sure, 555 Cozy Street.
Therefore, most encrypted messengers hardly deserve the TOFU label. It is more like TADA: trust after device addition. This is a real problem, not a contrived one, because it creates an opening for malicious intrusion into a pre-existing conversation. With real TOFU, by the time someone becomes interested in your conversation, they can no longer infiltrate it. With TADA, they can.
In group chats, the situation is even worse. The more people in the chat, the more often someone reinstalls an account. In a company of just 20 people, we estimate this will happen about every two weeks. And everyone in the company would have to meet that person. In person. Otherwise the entire chat is compromised by a single mole or hacker.
The solution
Is there a good solution that does not involve trusting servers with private keys? We think there is: true multi-device support. This means you control a chain of devices that represent your identity. When you get a new device (phone, laptop, desktop, iPad, etc.), it generates its own key pair, and one of your previous devices signs it. If you lose a device, you "remove" it from one of the others. Technically, such a removal is a revocation, and in that case some key rotation also happens automatically.
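The device-chain idea can be sketched structurally. This is a minimal toy model assuming hash-linked "add" and "revoke" statements; real Keybase links also carry signatures from an existing device key, which this sketch omits, letting the chain hash stand in for that proof.

```python
import hashlib
import json

def link_hash(link: dict) -> str:
    """Canonical hash of one chain link."""
    return hashlib.sha256(json.dumps(link, sort_keys=True).encode()).hexdigest()

def append_link(chain: list, action: str, device: str) -> None:
    """Append an 'add' or 'revoke' statement, hash-linked to the previous one."""
    prev = link_hash(chain[-1]) if chain else None
    chain.append({"action": action, "device": device, "prev": prev})

def active_devices(chain: list) -> set:
    """Replay the chain to compute the current device set."""
    devices = set()
    for link in chain:
        if link["action"] == "add":
            devices.add(link["device"])
        elif link["action"] == "revoke":
            devices.discard(link["device"])
    return devices
```

Because each link commits to the hash of its predecessor, a server cannot silently drop or reorder statements without breaking the chain.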
As a result, you do not need to trust the server or meet in person when a contact or colleague gets a new device. Similarly, you do not need to trust the server or meet in person when they remove a device, as long as it was not their last one. The only time you should see a warning is when someone has truly lost access to all of their devices. And in that case you will see a serious warning, as you should:
Deliberately as ugly as possible
As a result, far fewer accounts get reset. Historically on Keybase, device additions and revocations outnumber account resets ten to one (you don't have to take our word for it; this is publicly visible in our Merkle tree). Unlike other messengers, we can show a truly terrifying warning when you are talking to someone who recently reset their keys.
Device provisioning is a complex engineering operation that we have iterated on several times. The existing device signs the new device's public keys and encrypts all the important secret data for the new device's public key. This operation must complete quickly (within about a second), since that is the user's attention span. Keybase therefore uses a key hierarchy: by transferring just 32 bytes of secret data from an old device, the new device can read all long-lived cryptographic data (see the FAQ below for details). This may sound a bit surprising, but that is the point of cryptography: it does not make your secret-management problems disappear, it makes them smaller and more scalable.
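The key-hierarchy trick can be illustrated with HMAC-based derivation: a single 32-byte secret is expanded into independent long-lived subkeys. The labels and function names below are illustrative assumptions, not Keybase's actual derivation scheme.

```python
import hashlib
import hmac
import os

def derive(seed: bytes, label: str) -> bytes:
    """HKDF-style expansion: one subkey per purpose label (illustrative)."""
    return hmac.new(seed, label.encode(), hashlib.sha256).digest()

# The 32 bytes handed to a newly provisioned device...
puk_secret = os.urandom(32)

# ...unlock every long-lived subkey, so only one small secret is transferred.
chat_key = derive(puk_secret, "chat-encryption")
file_key = derive(puk_secret, "file-encryption")
```

Since derivation is deterministic, any device holding the 32-byte secret can recompute every subkey locally; nothing else needs to cross the wire.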
The full security picture
Now we can formulate four basic security properties for the Keybase application:
- long-lived secret keys never leave the devices on which they were created
- full multi-device support minimizes account resets
- device revocation cannot be maliciously delayed or rolled back
- forward secrecy over time, via ephemeral messages
The first two seem clear. The third becomes important in a design where device revocation is expected and considered normal. The system must ensure that malicious servers cannot delay device revocations, as we wrote earlier.
For more information on the fourth security property, see our article on ephemeral messages.
Lots of new cryptography. Is it all implemented correctly?
No one has implemented Keybase's core security features before, or even described them in academic papers. We had to invent some cryptographic protocols. Fortunately, ready-made, standardized, and widely used cryptographic algorithms exist in abundance for every situation. All our client code is open source. In theory, anyone can find design or implementation errors. But we wanted to showcase the internals, so we hired a top security audit firm for a full review.
Today we present the NCC Group's report, and we are extremely encouraged by the results. Keybase spent more than $100,000 on the audit, and NCC Group assigned top-level security and cryptography experts. They found two important bugs in our implementation, and we quickly fixed them. These bugs could only surface if our servers acted maliciously. We can assure you that they won't, but you have no reason to take our word for it.
We think the NCC team did an excellent job. Respect for the time they spent to fully understand our architecture and implementation. They found subtle bugs that had escaped our developers' attention, even though we had recently reviewed that part of the codebase many times. We recommend reading the report here, or jumping ahead to our FAQ.
FAQ
How DARE you attack product XYZ?
We have removed references to specific products from this article.
What else?
We are proud that Keybase does not require phone numbers, and it can cryptographically verify Twitter, HackerNews, Reddit, and Github identities, if that is how you know someone.
And... very soon... Mastodon support.
What about phone porting attacks?
Many applications are vulnerable to phone number porting attacks. Eve walks up to a kiosk in a mall and convinces Bob's mobile carrier to port Bob's phone number to her device. Or she convinces a representative over the phone. Now Eve can authenticate to the messenger's servers claiming to be Bob. The result looks like our Alice, Bob, and Eve example above, but Eve does not need to compromise any servers. Some applications offer a "registration lock" to protect against this attack, but it is off by default, presumably because it is annoying.
I heard Keybase sends some private keys to the server?
In its early days (2014 and early 2015), Keybase was a PGP web application, and users could opt to store their private PGP keys on our servers, encrypted with passphrases (which Keybase did not know).
In September 2015, we introduced the new Keybase model. PGP keys are never used (and never were) in Keybase chat or the Keybase filesystem.
How do old chats instantly appear on new phones?
In some other applications, new devices cannot see old messages, since syncing old messages through the server contradicts forward secrecy. The Keybase app lets you mark some messages, or entire conversations, as "ephemeral." They self-destruct after a set time and are encrypted twice: once with long-lived chat encryption keys and once with frequently rotated ephemeral keys. Ephemeral messages therefore provide forward secrecy and are not synced across phones.
Non-ephemeral messages are kept until the user explicitly deletes them, and they sync E2E to new devices, Slack-style, only with encryption. So when you add someone to a team, or add a new device for yourself, messages unlock.
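The two-layer scheme for ephemeral messages can be sketched as follows. The XOR keystream below is a toy stand-in for a real AEAD cipher; only the layering is the point: deleting the ephemeral key destroys the outer layer even if the long-lived key survives.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256 counter keystream (NOT real crypto)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_ephemeral(msg: bytes, long_lived_key: bytes, ephemeral_key: bytes) -> bytes:
    """Wrap the plaintext with the long-lived key, then the ephemeral key."""
    inner = keystream_xor(long_lived_key, msg)
    return keystream_xor(ephemeral_key, inner)
```

Because XOR is its own inverse, decryption peels the layers in reverse order: first the ephemeral key, then the long-lived key.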
Read more about syncing in the next answer.
Tell us about PUKs!
Two years ago, we introduced per-user keys (PUKs). The public half of a PUK is advertised in the user's public sigchain. The secret half is encrypted for each device's public key. When Alice provisions a new device, her old device knows the secret half of her PUK and the new device's public key. It encrypts the secret half of the PUK for the new device's public key, and the new device downloads this ciphertext via the server. The new device decrypts the PUK and can immediately access all long-lived chat messages.
Whenever Alice revokes a device, her PUK rotates, so that all of her devices except the newly revoked one receive the new PUK.
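The rotation bookkeeping might look roughly like this sketch, where encrypt_for() is a placeholder for a real public-key box (e.g. a Curve25519 box) and every name is illustrative.

```python
import os

def encrypt_for(device_pubkey, secret):
    """Placeholder for a real public-key encryption ("box") operation."""
    return ("boxed-for", device_pubkey, secret)

def rotate_puk(devices: dict, revoked: str) -> dict:
    """Generate a fresh PUK and encrypt it to every device except the revoked one."""
    remaining = {name: pk for name, pk in devices.items() if name != revoked}
    new_puk = os.urandom(32)  # 32 bytes of fresh entropy
    boxes = {name: encrypt_for(pk, new_puk) for name, pk in remaining.items()}
    return {"generation_secret": new_puk, "boxes": boxes}
```

The revoked device never receives a ciphertext for the new generation, so it is cryptographically locked out going forward.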
This sync scheme is fundamentally different from Keybase's early PGP system. Here, all the keys involved have 32 bytes of true entropy; they cannot be brute-forced even if the server is compromised. True, if Curve25519 or Go's PRNG is broken, everything breaks. But PUK sync makes no other significant cryptographic assumptions.
What about big group chats?
tl;dr Teams have their own audited sigchains for role changes and member additions and removals.
Security researchers have written about ghost user attacks on group chats. If users' clients cannot cryptographically verify group membership, malicious servers can inject spies and moles into group chats. Keybase has a very robust system here in the form of a dedicated teams feature, which we will write more about in the future.
Can you talk about NCC-KB2018-001?
We consider this bug the most significant finding of the NCC audit. Keybase makes heavy use of immutable, append-only data structures to protect against server equivocation. In the presence of a bug, an honest server might want to backtrack: "I told you A before, but there was a bug; I meant B." But our clients have a general policy of not allowing the server that flexibility: the exceptions for known bugs are hard-coded into the clients.
Recently, we also introduced Sigchain V2, which solves scalability problems we did not fully anticipate in the first version. Clients are now more economical with the cryptographic data they pull from the server, fetching only one signature at the tail of the sigchain rather than a signature for every intermediate link. As a result, clients could no longer look up a specific signature hash, but we had previously used those hashes to identify the bad chains in the hard-coded exception list. We were preparing to ship Sigchain V2, having forgotten about this detail buried under several layers of abstraction, so the system simply trusted a field from the server's response.
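The tail-signature idea can be sketched like this: given one trusted tail hash (in practice, a signed one), the client verifies every earlier link by recomputing the hash chain, with no per-link signatures. This is a simplified model, not Keybase's actual link format.

```python
import hashlib
import json

def link_hash(link: dict) -> str:
    """Canonical hash of one chain link (illustrative encoding)."""
    return hashlib.sha256(json.dumps(link, sort_keys=True).encode()).hexdigest()

def verify_chain(links: list, trusted_tail_hash: str) -> bool:
    """Walk the chain, checking each link commits to its predecessor's hash,
    and that the final hash matches the one trusted (signed) tail value."""
    prev = None
    for link in links:
        if link["prev"] != prev:
            return False
        prev = link_hash(link)
    return prev == trusted_tail_hash
```

One signature check at the tail then transitively authenticates the whole history, since any tampering with an earlier link changes every subsequent hash.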
Once NCC discovered the error, the fix was simple enough: look up the hard-coded exceptions by the hash of the link itself rather than by the hash of the link's signature. The client can always compute these hashes directly.
We can also attribute this bug to the extra complexity of supporting Sigchain V1 and Sigchain V2 simultaneously. Current clients write Sigchain V2 links, but all clients must support legacy V1 links indefinitely. Recall that clients sign links with their per-device private keys. We cannot coordinate all clients rewriting historical data within any reasonable time frame, since those clients may simply be offline.
Can you talk about NCC-KB2018-004?
As with 001 (see above), we were let down by the combination of supporting legacy formats and an optimization that seemed important as we gained more real-world experience with the system.
In Sigchain V2, we shrink the chain links in bytes to reduce the bandwidth needed to look up users. This saving matters especially on mobile phones. So we encode chain links with MessagePack rather than JSON, which yields a nice saving. Clients, in turn, sign and verify signatures over these chain links. NCC's researchers found clever ways to craft "signatures" that parse as both JSON and MessagePack at once, leading to a conflict. We inadvertently introduced this decoding ambiguity during an optimization, when we switched JSON parsers from Go's standard parser to a more efficient one. The faster parser silently skipped a bunch of garbage before finding the actual JSON, which enabled this polyglot attack. The bug was fixed with additional input validation.
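The spirit of that fix can be sketched with a strict parser that accepts exactly one top-level JSON object and nothing else. This illustrates the principle (reject leading or trailing bytes that a lenient parser would skip); it is not Keybase's actual validation code.

```python
import json

def parse_signed_payload(raw: bytes) -> dict:
    """Accept exactly one JSON object with no leading or trailing bytes.

    A lenient parser that skips garbage before the JSON is exactly what
    makes a MessagePack/JSON polyglot possible; this one refuses it."""
    text = raw.decode("utf-8")
    # raw_decode raises ValueError unless the text *starts* with a JSON value.
    obj, end = json.JSONDecoder().raw_decode(text)
    if text[:1] != "{" or text[end:].strip():
        raise ValueError("payload is not a single canonical JSON object")
    return obj
```

Anything that is not byte-for-byte one JSON object, such as MessagePack framing bytes followed by JSON, is rejected before a signature is ever checked.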
In Sigchain V2, we also adopted Adam Langley's proposal that signers prefix their signature payloads with a context string and a \0 byte, so that verifiers cannot be confused about the signer's intent. On the verification side of this context-prefix idea there were bugs that could lead to other polyglot attacks. We quickly fixed this flaw with a whitelist.
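The convention and the whitelist fix can be sketched as follows, with HMAC standing in for a real public-key signature and made-up context names.

```python
import hashlib
import hmac

# Illustrative context names; the real Keybase context strings differ.
ALLOWED_CONTEXTS = {"Keybase-Chat-1", "Keybase-Sigchain-2"}

def sign(key: bytes, context: str, message: bytes) -> bytes:
    """Sign "<context>\\0<message>" so a signature cannot be replayed
    under a different interpretation of the payload."""
    assert "\0" not in context
    return hmac.new(key, context.encode() + b"\0" + message, hashlib.sha256).digest()

def verify(key: bytes, context: str, message: bytes, sig: bytes) -> bool:
    """The verifier whitelists contexts before accepting any signature."""
    if context not in ALLOWED_CONTEXTS:
        return False
    return hmac.compare_digest(sign(key, context, message), sig)
```

A signature made for one context fails verification under any other, which is exactly what closes off cross-protocol polyglot tricks.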
With both bugs fixed, the server now rejects these malicious polyglot payloads, so exploiting these vulnerabilities would also require a compromised server.
Where is the documentation?
https://keybase.io/docs
In the coming months, we will devote more time to documentation.
Can you say more about this NCC statement: "However, an attacker is able to refuse to update the sigchain or roll back the user's sigchain to the previous state by truncating subsequent chain links"?
Keybase makes heavy use of immutable, append-only public data structures that force the server infrastructure to commit to one true view of user identities. We can guarantee that device revocations and team member removals cannot be rolled back by a compromised server. If the server decides to show an inconsistent view, that deviation becomes part of the immutable public record. Keybase clients or a third-party auditor can detect the discrepancy at any time after the attack. We believe these guarantees far exceed those of competing products and are close to optimal given the practical constraints of mobile phones and computationally limited clients.
Simply put, the Keybase server cannot forge anyone's signatures. Like any server, it can withhold data. But our transparent Merkle tree is designed so that it can only do so for a very short period of time, and always detectably.
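The client-side consistency rule can be sketched in a few lines. This is a simplified model in which each list entry stands for a signed Merkle root the client has seen.

```python
def is_consistent(seen_roots: list, new_roots: list) -> bool:
    """The history a client has already seen must be a prefix of the
    history the server presents next; any divergence is equivocation."""
    if len(new_roots) < len(seen_roots):
        return False  # server rolled history back
    return new_roots[: len(seen_roots)] == seen_roots
```

A server that forks or rewrites history fails this prefix check for every client that remembers an earlier root, which is what makes withheld or swapped data detectable.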
How does Keybase handle account resets?
When Keybase users truly lose all of their devices (as opposed to adding new ones or losing only some), they need to reset. After a reset the user is essentially brand new, but with the same username. They cannot sign a "reset statement," because they have lost all their keys. Instead, the Keybase server injects an indelible statement into the Merkle tree signifying the reset. Clients enforce that these statements cannot be reversed. A future post will describe the specific mechanisms in detail.
Such a user will have to re-establish their identity proofs (Twitter, Github, whatever) with new keys.
Can the server just swap someone's leaf in the Merkle tree to advertise a completely different set of keys?
The NCC authors consider a hostile Keybase server that completely replaces a leaf of the Merkle tree, swapping Bob's true key set for an entirely new fake set. The attacking server has two options. First, it can fork the state of the world, putting Bob in one fork and those it wants to fool in the other. Second, it can "equivocate," publishing one version of the Merkle tree with Bob's correct keys and other versions with the fake set. Users who interact with Bob regularly will discover this attack, since they verify that previously downloaded versions of Bob's history are valid prefixes of the new versions they download from the server. Third-party validators that scan all Keybase updates would also notice the attack. If you write a third-party Keybase validator we like, we may offer a significant bounty. Contact max on Keybase.
Otherwise, we hope to build a standalone validator in the near future.
Can you believe I read to the end?
Did you actually read it, or just scroll down?