Dark Crystal Threat Model
This threat-model report is part of a developer toolkit designed for projects where effective key management is critical. Although secret sharing schemes have existed for a long time, their use in provisioning backup and threshold features is not yet an established practice. It is therefore important that the concepts, uses and limitations are understood, so that developers can readily establish whether sharding technology is suitable for integrating into their project.
This report provides a detailed threat model analysis, elaborating on vulnerabilities that can result from poor implementation decisions and describing the options that mitigate them for specific user needs.
Terms used in the report
- Peer - an individual within a given social network
- Key - a peer or secret-owner’s encryption key
- Secret - the data to be backed up and potentially recovered.
- Secret-owner - the peer to whom the data belongs.
- Shard - a single encrypted share of the secret.
- Custodian - a peer who holds a shard, generally a friend or trusted contact of the secret owner.
- Threshold - the number of shards required to recover the secret.
- Incapacitation - (of an individual) their temporary or permanent unavailability, for example as a consequence of arrest, disappearance or death.
Introduction
Dark Crystal is a protocol for distributed data persistence and threshold-based consensus. It is based on a secure implementation of Shamir’s Secret Sharing and has multiple possible applications in security-oriented tools.
Modern encryption techniques are strong, but rarely used by those who need them. A recurring reason for this is users’ fears of losing access to critical data: the ‘Global Encryption Trends Study 2018’, conducted by the Ponemon Institute, indicates that key management issues pose a major barrier to the adoption of encryption tools.
Mechanisms such as privately owned offline storage in a secure location, virtual private servers, a trustworthy and reliable cloud service, or the data storage infrastructure provided by certain NGOs, have their own intrinsic limitations. These methods require that users be conceptually comfortable with digital security issues and familiar with key management practices, such that they can back up their keys (or data) independently of the related application.
Moreover, while traditional forms of ‘secure backup’ make sense for sensitive media such as incriminating photos, they are less suitable for personal cryptographic keys. In the case of signing keys for creating verifiable evidence, it would undermine the strength of this evidence if another party took complete custody of the key. In the case of encryption keys for personal messages, insecure or unencrypted backups can create a weak point in security. There are also security risks involved with transmitting cryptographic keys over the internet.
Given the vulnerability of communication systems, we believe that offering users at least the option of distributed backups - and therefore the ability to wipe their device when needed, or the reassurance to adopt encryption without fear of losing their keys - provides a valuable complementary option for developers and encourages users to make better use of encrypted tools.
Finally, as peer-to-peer protocols advance in response to widespread security concerns with centralised client-server architectures, key and/or data backup becomes an even more serious issue. Developing distributed backup and remote wipe features for peer-to-peer applications, to match features already available in client-server architectures, gives both developers and users a greater and more robust set of tools to address their particular digital security needs. However, no security feature will ever be watertight. Every new technology solves some problems, but arrives with others. Dark Crystal offers a solution to key management and data persistence issues that developers and end-users currently face, but is itself vulnerable to entirely new vectors of attack. This report therefore aims to explore how it shifts - rather than eliminates - the attack surface.
Is this protocol appropriate for my application?
As an interesting anecdote, the root key for ICANN DNS security, effectively the key which secures the naming system of the internet, is held by seven parties, based in Britain, the U.S., Burkina Faso, Trinidad and Tobago, Canada, China, and the Czech Republic. Cryptographer Bruce Schneier has alleged that they are holders of Shamir’s secret shares, which indicates that the scheme is taken quite seriously.
The scheme is suitable for a wide range of applications and has been implemented to assist with data access and management issues including:
- ThresholdJS, a minimalist implementation designed specifically for Bitcoin private keys (though its application in this context is arguably less appropriate than multisig).
- Electronic voting, as in Homomorphic secret sharing where each vote is sharded and sent to a number of different vote-counters, rendering a single vote counter unable to manipulate votes in a directed way.
- Password databases, as in PolyPasswordHasher, which uses Shamir’s Secret Sharing to make passwords considerably more difficult to crack if the database is compromised, compared to traditional methods.
- Sharding GPG revocation certificates, allowing the friends of a human-rights activist who is captured along with their computer to revoke the activist’s GPG key, so that people know to stop sending them sensitive information.
- Group threshold signatures, where a message can be signed on behalf of a group by m of n (optionally anonymous) group members, providing a powerful consensus mechanism for collaboration.
- Threshold-based file (en/de)cryption, such as with threshcrypt, which implements the extra step of adding passwords to each shard.
- Remote deletion of an app, account or key, whereby a threshold of signatures from among a trusted group executes an action that no member alone can trigger.
- Access or signing using threshold signatures, as with the ICANN DNSSEC root key described above, whose seven holders are widely assumed to hold 5-of-7 Shamir’s shares.
Dark Crystal is the first, and currently only, implementation of Shamir’s Secret Sharing that enables developers of secure applications to integrate threshold functionality directly into their app.
However - as developers know - each application carries its own set of risks and has its own unique app- or user-specific threat model. So when reading this report there are a few key points to keep in mind:
- Key recovery alone cannot solve the problem of compromised keys. It is appropriate only to recover encrypted data following loss of a key, or for continued use of a key when it is known to be lost but not compromised, for example following deletion or hardware failure. Key revocation and remote data deletion using threshold signatures must be implemented alongside key recovery to offer protection for situations of compromise.
- Since secret sharing is a ‘transport agnostic protocol’, it’s only as strong as the transport/platform/architecture an app is already using. For example, peer-to-peer protocols are naturally better suited to secret sharing, due to there being no central server that can be surveilled to detect the possible combination of share-holders that a secret-owner might choose. Conversely, the nature of some p2p protocols makes it difficult to delete data. There are specific methods of implementation that mitigate these different architecture-related risks, which are further discussed below.
- Dark Crystal functions more like a framework than a rigidly defined protocol. There are a range of implementation options - for example, including a label with the secret or padding the secret to a fixed length - that are necessary in some contexts but redundant in others. In order to effectively model threats and make appropriate choices, developers will need to take into account the specific context and needs of their application’s core userbase. This will be explored further in the accompanying ‘Social Contexts’ report.
Scope of this report
In general, Shamir’s scheme is considered information-theoretically secure. That is, individual shares contain absolutely no semantic information about the secret, and the scheme can be said to be ‘post-quantum’ cryptography. As many articles and papers already establish the security of the encryption primitives Dark Crystal is based on, and of the particular Shamir’s Secret Sharing implementation that we have chosen to use, this report will not revisit those discussions. For those who are interested, we have included a list of references at the end of the report.
Secondly, Dark Crystal does not aim to protect against attacks that target a user’s hardware or operating system. For example, Dark Crystal does not offer any extra protection against an in-memory attacker: Securely removing sensitive information from memory is very difficult due to the ‘garbage collection’ systems in many high level languages, and there is also often a danger that data stored in memory is written to disk by the operating system’s ‘swap’ system. Possible mitigations of these and other such attacks are therefore not addressed.
However, when integrating Dark Crystal into an existing application there are a range of specific implementation decisions that developers will need to make related to the associated metadata to be stored on the device. The appropriate choices will be determined by the particular application, transport protocol, desired feature and threat model of the app’s userbase. These decisions are discussed below, and examples expanded on in the accompanying social contexts report.
For this report, we have divided threats broadly into three categories: passive surveillance, which consists largely of threats to metadata-in-transit; active, targeted attacks; and considerations for (meta)data stored on the device. The first group of potential threats will apply to most applications and should be considered by all developers. The second and third group are largely specific to the types of features that developers might use Dark Crystal to implement, and therefore may or may not be relevant depending on the application and feature(s) concerned.
Threats from a Passive Adversary : Network Surveillance Attacks
A passive adversary is an individual or an entity that is able to simultaneously monitor the traffic between all computers in a given network. Through capturing and processing metadata such as the timing and size of packets moving across the network, they can potentially identify, for example, the participants in, and possible nature of, a given exchange. The amount of data they are able to capture depends on the communications protocols being used, extending to the full content of communications that are unencrypted.
The size of the given network that a particular passive adversary can surveil varies depending on the infrastructure and data processing capacities that they can access or control. The scope of this overview will likely differ, for example, between state and non-state actors. For developers of applications whose core userbase is predominantly located within the jurisdiction of an authoritarian or otherwise repressive regime, it is prudent to assume a high level of network observation, at the very least within the borders the regime controls. The risks this entails for users pertain mainly to data or metadata leakage in transit and the sensitive details that can be deduced from it. Broad types of threat that can be identified in relation to this include:
Considerations for message transport
Ideally, the transport protocol of the application into which Dark Crystal features are integrated should be robust and encrypted, such that an eavesdropper cannot access the application’s message contents at all. It could be argued that when this is not the case, Dark Crystal should not be used, as transmitting details relating to the identity and location of custodians over an unencrypted connection may in some cases be worse than not having a backup or access to threshold features at all.
Furthermore, it is important for developers to ensure that Dark Crystal messages are indistinguishable from the normal messages the application sends. For example, since group messaging is now commonplace, it might be assumed that Dark Crystal messages (specifically, those initially sent to distribute the shards among multiple custodians) would be indistinguishable from group messages sent at other times. However, the accuracy of this assumption depends entirely on how group messaging is implemented in the host application itself:
If group messages are sent simultaneously in bulk as individual messages to every other member of the group, then this assumption may hold. But if regular group messages are sent to a single ‘group’ location that all members of the group can read, then Dark Crystal shard distribution messages - sent simultaneously in bulk as an individual message to each selected custodian - will likely stand out among traffic patterns and could lead to identification of the sender and receivers of shards.
If this is the case, then it is important that developers mitigate for this when implementing the Dark Crystal feature, for example by staggering the distribution of shards. The repercussion for users is that the shard distribution process takes longer, though on balance the improvement in security justifies the delay.
Another mitigation, for additional obfuscation of shard-related metadata, is to randomly pad the application messages within which Dark Crystal shard messages are wrapped, such that they are no longer uniform in size. This helps to frustrate analysis of message size as a means of deducing knowledge about the shard.
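To make both mitigations concrete, here is a minimal sketch of a staggered, padded distribution loop. The `sendMessage` stub and the bounds on padding and delay are assumptions made for this example, not part of the Dark Crystal protocol; a host application would substitute its own (encrypted) transport and tune the constants to its usual traffic patterns.

```typescript
import { randomBytes, randomInt } from 'crypto';

// Stand-in for the host application's own encrypted transport.
async function sendMessage(recipient: string, payload: Buffer): Promise<void> {
  /* host application transport goes here */
}

const MAX_PAD_BYTES = 256;   // assumed upper bound on random padding
const MAX_DELAY_MS = 60_000; // assumed upper bound on the random send delay

// Prefix the real length, then append random padding, so that wrapped
// shard messages are no longer uniform in size but the recipient can
// still strip the padding.
function padMessage(shard: Buffer): Buffer {
  const lengthPrefix = Buffer.alloc(4);
  lengthPrefix.writeUInt32BE(shard.length);
  const padding = randomBytes(randomInt(0, MAX_PAD_BYTES));
  return Buffer.concat([lengthPrefix, shard, padding]);
}

// Send shards one at a time with a random delay between sends, so the
// distribution does not appear as a simultaneous burst of messages.
async function distributeStaggered(
  shards: { custodian: string; shard: Buffer }[]
): Promise<void> {
  for (const { custodian, shard } of shards) {
    await sendMessage(custodian, padMessage(shard));
    await new Promise((resolve) => setTimeout(resolve, randomInt(1, MAX_DELAY_MS)));
  }
}
```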
In the end, the most suitable transport protocols are those that reveal less message data and metadata overall.
Considerations for storage
The Dark Crystal backup technique is robust because of its distributed nature. Ideally, shards are stored in multiple locations, controlled by multiple unique custodians. If the host application uses a traditional client-server architecture for storage, this largely nullifies the distributed nature of the scheme and renders the secret, whose shards are stored on a single server in a single location, once again vulnerable to a single failure or attack.
This is mitigated to some extent by the fact that each shard is encrypted to a particular custodian - but for the scheme to make sense, at the very least custodians’ private keys should be stored only on their client devices and not on the same server storing the shards. Even in this case, however, a single compromised server could mean a serious metadata leak that reveals all custodians of a given set of shards, as well as the possibility that the entire set of shards could be lost - nullifying the redundancy built into a threshold scheme.
In these ways, reliance on centralised storage negates the core principle of this protocol, which is to provide an alternative, distributed option for backup that affords properties unachievable by other methods and unavailable elsewhere. Ideally, shards should be stored locally with the custodians, increasing the number of locations and greatly improving security against attack.
Threats from an Active Adversary : Targeted Attacks
Assuming that an individual has deleted data from their device, and assuming they will not give up the fact that they have used sharding or identities of their custodians willingly, an adversary who does not know of the existence of the data cannot force the individual to expose the secret, reveal the key, or decrypt an account.
If we presume the adversary knows that the secret exists and that sharding has been used, then in order to reveal the secret they must ascertain the identities of the individual’s custodians to generate a new vector of attack. How easily an adversary can determine these depends on the technical and contextual implementation of the threshold-based secret sharing mechanism, the transport layer involved in transmitting shards, and the operational security practised by the individual when initially sharing their secret.
Ascertaining likely custodians
If the attacker is able to obtain datasets describing the friendship circles and social relations of a given target individual, they can form hypotheses as to who the likely custodians might be. By forcefully compromising the suspected custodian(s), it may be possible to expose a target individual’s shard.
In an exaggerated version of this attack, for example if an adversary wanted to attack an entire community or network at once, the adversary could utilise a series of leaked datasets describing friendship circles and social relations in an attempt to pinpoint specific individuals who are commonly trusted parties in the social graph. By forcefully compromising these particular key players, it may be possible to expose shards associated with a broad set of secrets.
This model assumes that, when social behaviour is aggregated, our choices turn out to be much more deterministic than we as individuals believe they are. For example, in attempting to crack as many accounts as possible out of a set of one million users, an adversary might ascertain that some 100 people are widely regarded as trustworthy, and suitably interconnected, such that compromising them may expose one or more shards.
It should be noted that forcefully compromising a custodian - either through the technical targeting of their device or the physical or psychological targeting of their person - would still only reveal a single shard of a given secret, and would likely be far less resource-efficient than compromising the secret owner(s) themselves. It is therefore less attractive to an attacker as a vector of attack.
Nevertheless, to mitigate for this potential attack:
- the metadata associated with each shard should be the minimum necessary to meet the requirements of the feature
- the transport protocol used should ideally protect the app-related social graph
- the storage of shards by a custodian’s client application should be secure
The first of these methods is discussed in the final section of this report, ‘Threats to a peer’s device’. The remaining two depend on the host application itself.
Anonymity of custodians
This can be divided into two subcategories:
- Anonymity from outsiders: If an adversary can determine the custodians of a particular secret they can subsequently mount an attack, either through social engineering or compromising devices or accounts belonging to these peers. This case and its mitigations have been addressed above.
- Anonymity between custodians: For added security, it may be desirable that the peers themselves do not know who each other are, making it difficult for them to maliciously collude against the secret owner as well as providing an extra layer of protection should one of them be compromised or come under attack.
For implementations of the scheme that do not result in access to sensitive data and/or that rely on efficient action, anonymity between custodians may on balance introduce more complexity than it is worth. Such applications might include threshold-based publication of GPG revocation certificates, or remote deletion of an app or data upon the secret-owner’s arrest. In these situations, efficient action can be critical and anonymity between custodians can create an inappropriate impediment.
For implementations of the scheme for recovery of lost or accidentally deleted identifiers, the secret-owner can independently contact their custodians, who need not know who each other are.
However, for implementations of the scheme for features such as threshold-based decryption (for example, after the secret-owner’s incapacitation), due to the sensitive nature of the data involved, it may on balance prove worthwhile to protect custodians from carrying the knowledge of who one another are.
In such cases, one possible solution would be a ‘proof-of-custody’ scheme, wherein the custodians would all hold a common piece of data, such as a unique identifier for the key they are protecting. In the case that one of them deems recovery of the secret to be necessary, they can ‘broadcast’ the hash of this piece of data by publishing it somewhere that the other custodians know to look. The remaining custodians each respond privately and a handshake takes place to prove genuine custody of a shard. The group can then proceed with the recovery process.
Where this solution is applied, it may in some cases be considered necessary to ensure that the custodian initiating the process does not have first, sole or perhaps even any access to the recovered secret, rendering a malicious initiation attack futile.
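A minimal sketch of the broadcast step of such a proof-of-custody scheme follows. The data layout is an assumption for illustration: each custodian is presumed to store, alongside their shard, a common identifier for the protected key. Only the hash of that identifier is ever published, so observers who do not hold a shard learn nothing from the broadcast.

```typescript
import { createHash } from 'crypto';

// Assumed layout: each custodian stores a common identifier for the key
// being protected (e.g. its fingerprint) alongside their own shard.
interface StoredShard {
  keyIdentifier: Buffer; // common to all custodians of the same secret
  shard: Buffer;
}

// The initiating custodian publishes only the hash of the identifier,
// somewhere the other custodians know to look.
function broadcastValue(stored: StoredShard): string {
  return createHash('sha256').update(stored.keyIdentifier).digest('hex');
}

// A custodian who sees a broadcast checks it against their own stored
// identifier; a match tells them a recovery has been initiated for a
// secret they hold a shard of, without revealing who else is involved.
// They then respond privately to begin the handshake.
function matchesBroadcast(stored: StoredShard, broadcast: string): boolean {
  return broadcastValue(stored) === broadcast;
}
```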
Consent to custody of a shard
As described in the two sections above, possible custodians of a given secret are placed at an often low, but nonetheless real, risk from an attacker seeking to reveal that secret. Conversely, a desired custodian might know or consider themselves to be an unwise choice of trusted peer, due to their own risk profile independent of their possession of a given shard.
This is deeply problematic - for either party, or both - if the custodian is unable to refuse participating in the arrangement. A clear consent mechanism may therefore prove an important or necessary feature to ensure the application does not place either custodian or secret-owner at a risk of which they are unaware.
A consent mechanism can take the following forms:
- No consent required: only a notification is shown that a shard has been received. The secret-owner can optionally be prompted to first ask the custodian whether this is okay, via a regular message or some out-of-band channel.
- ‘Weak consent’: The shard is sent to the desired custodian right away, but on receiving it they are asked for their consent. If they refuse then the shard is deleted locally and a message sent to the secret-owner to inform them. This has the advantage that in the ‘happy path’ (when everything goes well), the backup is made very quickly, and implementation on the side of the secret-owner would be fairly simple. The disadvantage is that until the custodian reads and responds to this notification, the shard exists in the application’s ‘inbox’ on their device such that they have in effect unwillingly accepted (at least temporary) custody.
- ‘Strong consent’: Initially only a message with a request for consent is transmitted to the desired custodian. They only receive the shard itself after confirming. This is perhaps better from the custodian’s point of view. It is also easier to handle refusal from the secret-owner’s perspective, because the shard has not left their device and therefore revocation and re-sharding are not required to maintain the same level of security. However it does mean that the shard distribution process takes longer to complete.
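To make the trade-off concrete, here is a minimal sketch of a ‘strong consent’ exchange. The message shapes and transport stubs are illustrative assumptions; a host application would use its own messaging layer. Note that the shard only leaves the secret-owner’s device after the custodian confirms, so a refusal requires no revocation or re-sharding.

```typescript
// Hypothetical reply shape for the consent exchange.
type ConsentReply = { secretId: string; accepted: boolean };

// Stand-ins for the host application's messaging layer.
async function send(to: string, message: object): Promise<void> {
  /* host application transport goes here */
}
async function awaitConsentReply(secretId: string): Promise<ConsentReply> {
  /* host application inbox goes here */
  return { secretId, accepted: true }; // placeholder
}

// The shard is transmitted only once the custodian has explicitly agreed.
async function distributeWithStrongConsent(
  custodian: string,
  secretId: string,
  shard: Buffer
): Promise<boolean> {
  await send(custodian, { kind: 'consent-request', secretId });
  const reply = await awaitConsentReply(secretId);
  if (!reply.accepted) {
    return false; // nothing to clean up: the shard never left this device
  }
  await send(custodian, { kind: 'shard', secretId, shard: shard.toString('base64') });
  return true;
}
```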
Resource exhaustion under no/weak consent model
A malicious secret-owner in a social network could send a large number of shards to prospective custodians. The size of shard messages is only limited in our high-level API, which is optional. This means an attacker could exhaust the file system storage of a custodian — preventing them from receiving further shards — or cause the calling application to crash.
To resolve this, we recommend limiting the size of shard messages to a maximum which is appropriate for the application. For example, if the secrets are cryptographic keys, they have a fixed size, so the appropriate limit would be the key size plus the size of the additional metadata contained in the shard message (which is documented in the protocol specification).
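A minimal sketch of such a check follows, assuming a 32-byte key and an illustrative metadata allowance; the real bound should be derived from the fields documented in the protocol specification.

```typescript
const KEY_SIZE = 32;             // assumed size of the secrets (e.g. a key)
const METADATA_ALLOWANCE = 512;  // assumed generous bound for metadata fields
const MAX_SHARD_MESSAGE_SIZE = KEY_SIZE + METADATA_ALLOWANCE;

// Reject oversized shard messages before anything is written to disk, so
// a malicious secret-owner cannot exhaust the custodian's storage.
function acceptShardMessage(message: Buffer): boolean {
  return message.length <= MAX_SHARD_MESSAGE_SIZE;
}
```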
Since resource exhaustion attacks are a general problem for network transports, it may be that the underlying transport system has some existing mechanism to limit the number or size of messages a peer can send.
For examples and other possible mitigations, see ‘Uncontrolled Resource Consumption’ in the Common Weakness Enumeration.
Deceptive return requests
It is possible for an attacker, having identified likely custodians, to send messages impersonating the secret-owner, requesting the return of their shards. Features that enable recovery of the account or private key with which the shards were originally signed are more vulnerable to this form of attack, as the original signing key cannot then be produced.
Implementations for other features, where the original signing key has not been lost, ought to require that the original key be produced. However, this does not itself unequivocally prove identity - for example, that the key has not been captured and used to initiate a malicious recovery - nor does it protect access in the case that the signing key is lost. Verification of the secret-owner may therefore still be recommended.
Mitigation here involves encouraging custodians, through steps implemented at the UI/UX layer during the shard request/response procedure, to verify that the request is genuine. This might involve a phone call, video call, or enquiry about some shared information, as relations between individuals are notoriously difficult to convincingly simulate.
Of course, in an absolute worst case scenario it can be imagined that the secret-owner is making the request under duress - though such circumstances are not unique to sharding, and hopefully if there is a likelihood of this happening then the secret-owner and custodians have discussed it in advance.
Mechanism for return of shards
Once the request has been verified as genuine, the custodian will need to return their shard.
Enforcing that shards be returned at an in-person meeting between custodian and secret-owner undoubtedly ensures with the greatest possible certainty that they are being returned to the right person, but at the cost of drastically limiting - in most cases - a secret-owner’s pool of potential custodians to a restricted geographical area and likely a single legal jurisdiction. This makes it far easier for an attacker to guess who they might be, as well as increasing the attacker’s powers of apprehension. These possible consequences might not apply to some users of the sharding feature (one can imagine an international activist deleting data from their phone when crossing the borders of a repressive state, and wanting to recover it on arrival at their destination), but at the very least such a process should be carefully considered before being technically enforced.
If an in-person return of shards is not required, and the request is being made from a new account or with a new key, the UI should encourage the custodian to require confirmation of the new key out of band, as a measure against any man-in-the-middle attack that might be used to insert a malicious recovery key. Rather than trying to confirm characters from the key itself, it can be much easier - and therefore more feasible - for the parties to confirm a set of dictionary words derived from it.
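As an illustration of the dictionary-word approach, the sketch below hashes the new key and maps the leading bytes of the digest to words. The wordlist here is a placeholder; a real implementation would use a standardised 256-entry list (such as the PGP word list) so that each byte maps to exactly one word.

```typescript
import { createHash } from 'crypto';

// Placeholder wordlist; a real implementation would use a standardised
// 256-entry list so that every byte value maps to a distinct word.
const WORDLIST: string[] = ['aardvark', 'absurd' /* ... 256 entries ... */];

// Derive a few dictionary words from a public key so that two parties can
// compare them verbally over an out-of-band channel.
function keyToWords(publicKey: Buffer, wordCount = 4): string[] {
  const digest = createHash('sha256').update(publicKey).digest();
  const words: string[] = [];
  for (let i = 0; i < wordCount; i++) {
    words.push(WORDLIST[digest[i] % WORDLIST.length]);
  }
  return words;
}
```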
Malicious or compromised custodian
Rather than impersonate the owner of a secret, another vector of attack exists where a trusted custodian becomes or turns out to be malicious, or is later compromised while in possession of a shard.
A worst case scenario might be that malicious or compromised actors end up constituting a threshold of custodians or having enough sway within a group of custodians to expose a distributed secret. Such actors could be in service of state or non-state entities, or else could choose to collude for other reasons against the secret-owner.
Due to the cost in human resources necessary to mount such a strategy, it is unlikely that any entity would invest in such a tactic, given the wide range of technical and non-technical attacks that would bear greater fruit than a single secret likely ever could. If such a number of malicious actors were among an individual’s most trusted contacts, that individual would likely already be exposed and deeply vulnerable to far less complex attacks than this. This attack, if it occurs at all, is therefore more likely to result from peer collusion than from the acts of a state or non-state entity.
Regardless, to mitigate for it, it would be wise for the UI/UX design of the custodian selection process to suggest best practice security advice. Such advice could include, for example, that in many circumstances, a threshold should not be achievable through collaboration within a single circle of friends, perhaps also residing in the same legal jurisdiction. However, such choices should not be enforced, and it is ultimately the secret-owner’s choice. Security tips to inform selection may nonetheless be appreciated.
While the worst case of full custodian collusion is unlikely, and perhaps also essentially unmitigable, Dark Crystal does allow the threat of one or more malicious or compromised custodians to be technically mitigated, by offering the following optional methods for developers to implement:
- Obfuscating the x-coordinate of the shares: shares are a collection of points on an array of polynomial curves, comprising a ‘share index’ (the x coordinate, which remains constant throughout the array) and a ‘share value’ (an array of y coordinates). In the implementation of Shamir’s Secret Sharing algorithm that Dark Crystal has chosen to use, share indexes are numbered consecutively, so 4 shares would have share indexes 1, 2, 3 and 4. This means that holding a share gives some indication of the total number of shares: given share number 3, we can infer that at least two other shares exist. To obfuscate this additional information, Dark Crystal generates a set of 255 shares (one byte - see the SSS module documentation for details) and randomly selects the secret-owner’s desired number of shares from this set. Nothing can then be inferred from a malicious custodian knowing their own share index, other than that the number of shares is less than 255. This is achieved using a ‘Durstenfeld shuffle’ algorithm to shuffle the array of shares - but instead of shuffling the whole array, we randomly select only the desired number of elements (see the first sketch after this list).
- Padding the secret: the optional technique of encrypting the secret and then applying the secret sharing algorithm to the encryption key allows us to ‘shard’ secrets of widely variable lengths - any kind of data, regardless of its size. However, the length of the secret can be determined from the length of a given share, so without any mitigating action the length of the sharded secret is revealed to custodians. In many situations this may be undesirable, for example when the secret is a password, or a particular kind of key with a characteristic length. The solution is to add padding, giving Dark Crystal secrets a constant length: a secret of length 32 could, for example, be padded with an additional 32 bytes, all zero, to form a standard secret length of 64 bytes (see the second sketch after this list).
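The first sketch shows the share-selection step described above: a partial Durstenfeld (Fisher–Yates) shuffle that draws the desired number of shares uniformly at random from the full set of 255. The `sss.share` call in the usage comment is hypothetical shorthand for whatever the underlying secret sharing module provides.

```typescript
import { randomInt } from 'crypto';

// Partial Durstenfeld (Fisher-Yates) shuffle: rather than shuffling all
// 255 generated shares, draw just the number the secret-owner asked for.
// Issued share indexes then reveal nothing beyond the fact that fewer
// than 255 shares exist.
function selectRandomShares<T>(allShares: T[], count: number): T[] {
  if (count > allShares.length) {
    throw new Error('cannot select more shares than were generated');
  }
  const pool = allShares.slice(); // avoid mutating the caller's array
  const selected: T[] = [];
  for (let i = 0; i < count; i++) {
    // Swap a uniformly random remaining element into position i, exactly
    // as a full Durstenfeld shuffle would, then keep it.
    const j = randomInt(i, pool.length);
    [pool[i], pool[j]] = [pool[j], pool[i]];
    selected.push(pool[i]);
  }
  return selected;
}

// Hypothetical usage: generate the full set of 255 shares with the
// underlying SSS module, then issue a random subset of e.g. 5.
// const shares = sss.share(secret, 255, threshold);
// const issued = selectRandomShares(shares, 5);
```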
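The second sketch illustrates fixed-length padding. The scheme described above pads with zero bytes; this variant inserts a 0x80 marker byte before the zeros (ISO/IEC 7816-4 style) - an assumption made here so that secrets which themselves end in zero bytes can be unpadded unambiguously.

```typescript
const PADDED_LENGTH = 64; // assumed standard secret length after padding

// Pad a secret to a fixed length so that share length reveals nothing
// about the true length of the secret.
function padSecret(secret: Buffer): Buffer {
  if (secret.length >= PADDED_LENGTH) {
    throw new Error('secret too long for fixed-length padding');
  }
  const padded = Buffer.alloc(PADDED_LENGTH); // zero-filled by default
  secret.copy(padded);
  padded[secret.length] = 0x80; // marker: the secret ends here
  return padded;
}

// Recover the original secret by stripping trailing zeros and the marker.
function unpadSecret(padded: Buffer): Buffer {
  let end = padded.length - 1;
  while (end >= 0 && padded[end] === 0x00) end--;
  if (end < 0 || padded[end] !== 0x80) {
    throw new Error('invalid padding');
  }
  return padded.subarray(0, end);
}
```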
In addition to these technical mitigation methods, maintaining anonymity between custodians - when appropriate, as described above - will further limit a malicious or compromised custodian’s ability to execute a successful attack.
Revocation of shards
In the case that a secret-owner becomes aware that a particular custodian is compromised, or for some reason their particular shard is no longer secure (for example, they lost their phone), it may be desirable to revoke the questionable shard.
The ability to do this is afforded by the random coefficients generated in the underlying secret sharing algorithm, such that sharding the same secret twice gives two distinct sets of incompatible shards. This makes it possible to ‘revoke’ an untrusted shard without cooperation from the untrusted shard-holder.
To ‘revoke’ a shard, the secret-owner would request that the remaining (still trusted) custodians delete their shards, rendering the untrusted shard useless. A fresh set of shards can then be generated to re-secure the secret.
However, this method only works in applications where deletion is possible. It cannot work, for example, in systems that rely on an append-only log. In such systems, previously appended data can only be marked obsolete, not forcibly removed. When implementing a Dark Crystal-based feature in such systems, the most effective mitigation strategy is to use an ephemeral key.
In such a case, the custodian-to-be would need to create an ephemeral keypair. They send the public key to the secret-owner, who encrypts a shard with it and sends it back to the custodian. Should the shard need to be deleted, the custodian can delete the private ephemeral key, such that while the encrypted shard will remain on the append-only log, it will no longer be possible to decrypt it.
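A sketch of this lifecycle using libsodium sealed boxes follows. The choice of sealed boxes, and of the libsodium-wrappers library, are assumptions for illustration; the essential point is the final step, where destroying the ephemeral private key renders the logged ciphertext permanently undecryptable.

```typescript
import sodium from 'libsodium-wrappers';

async function ephemeralShardLifecycle(): Promise<void> {
  await sodium.ready;

  // 1. The custodian-to-be generates an ephemeral keypair and sends the
  //    public key to the secret-owner.
  const ephemeral = sodium.crypto_box_keypair();

  // 2. The secret-owner encrypts the shard to the ephemeral public key,
  //    and the ciphertext is appended to the log.
  const shard = sodium.from_string('example shard bytes');
  const sealedShard = sodium.crypto_box_seal(shard, ephemeral.publicKey);

  // 3. During normal operation the custodian can still decrypt the shard.
  const recovered = sodium.crypto_box_seal_open(
    sealedShard, ephemeral.publicKey, ephemeral.privateKey);

  // 4. 'Revocation': wiping the ephemeral private key leaves the sealed
  //    shard on the append-only log but makes it permanently undecryptable.
  sodium.memzero(ephemeral.privateKey);
}
```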
Permitting revocation of trust and reissuance of keys is a necessary security feature for distributed shards - but when an application relies on an append-only log the cost is more keys to manage, increasing the complexity.
Return of malicious or corrupted shards
A custodian holding a shard of some split secret has no way to verify that the data has not been corrupted or tampered with before returning the shard, potentially making the secret irretrievable or the result incorrect. Similarly, malicious ‘fakers’ claiming to be shard-holders might manage to introduce fake shards into the recovery process, with a similarly irretrievable or incorrect result.
Such an attack could occur, for example, if a custodian was malicious, or if they, their device or their application was compromised such that a recovery request could be intercepted and responded to with a fake shard.
It has been postulated that, without access to the other shards, such actors would have no ability to dictate a specific output - but an adversarial custodian with enough information could produce a fake shard such that the secret is reconstructed to a value of their choice.
In Dark Crystal, it is easy to determine that a wrong secret has been recovered by comparing its hash with the attached message authentication code (hash-generated MAC).
However, it is also important to be able to determine and eliminate a problematic shard from the process, and so Dark Crystal optionally implements signing of shards during the back-up process. This enables a secret-owner to detect corrupt shards on their return, using cryptographic signing and asymmetric encryption to validate the sender and receiver of each shard.
Since the secret owner has an established public key, they can sign each individual shard before distributing and then, provided the public key of the secret owner is still known, can use the signature to verify that it has not been tampered with or corrupted once it is returned.
This same process allows custodians to verify that a shard has not been modified by a man-in-the-middle attack when they initially receive it.
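A minimal sketch of this sign-and-verify round trip follows, assuming an Ed25519 identity key for the secret-owner; the report does not fix a signature algorithm, so Ed25519 is chosen here purely for illustration.

```typescript
import { generateKeyPairSync, sign, verify } from 'crypto';

// The secret-owner's established keypair (assumed Ed25519 for this sketch).
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

// Sign each individual shard before distribution...
function signShard(shard: Buffer): Buffer {
  return sign(null, shard, privateKey); // null: Ed25519 hashes internally
}

// ...and on return, anyone holding the owner's public key can check that
// the shard was neither corrupted nor substituted in transit. Custodians
// can run the same check when a shard first arrives.
function verifyShard(shard: Buffer, signature: Buffer): boolean {
  return verify(null, shard, publicKey, signature);
}
```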
Other possible methods of verifiable secret sharing that are compatible with Dark Crystal include:
- Feldman’s scheme: Paul Feldman proposed a scheme in 1987 which allows custodians to verify their own shares, using homomorphic encryption (an encryption scheme where computation can be done on encrypted data which when decrypted gives the same result as doing that computation on the original data) on top of Shamir’s original scheme.
- Pedersen’s scheme: Pedersen’s scheme, published in 1992, has the advantage that it can be used with secrets which are not uniformly random, for example a password or a message in a human language.
- Schoenmakers’ scheme: more recently, Berry Schoenmakers proposed a scheme which is designed to be publicly verifiable (a notion originally introduced by Stadler, 1996). That is, not only custodians but anybody is able to verify that the correct shares were given. The scheme is described in the context of an electronic voting application and focuses on validating the behaviour of the ‘dealer’ (the author of the secret), but it can just as well be used to verify that returned shares have not been modified.
The following two options also exist, but are not recommended for the reasons explained below:
- Publicly publishing the encrypted shares: this only works if the encryption scheme used is deterministic, such that encrypting the same message with the same key twice reliably gives the same output. But such encryption schemes are vulnerable to replay attacks. Most modern symmetric schemes introduce a random nonce to avoid this problem: the scheme implemented by Dark Crystal (libsodium’s secret box) takes a 24-byte random nonce, so this method would not work in this case.
- Publicly publishing the hash of each share: we consider this method to give custodians unnecessary additional information, which has undesirable security implications. Namely, a custodian in possession of one share and the hashes of, say, three other shares has a slightly higher likelihood of correctly guessing one of the other shares, as they are able to confirm a guess against three of the shares, plus the encrypted secret. That is, the key-space for a brute-force attack is smaller.
Weighting the distribution of shards
As a final possible threat, it should be acknowledged that the construction of Shamir’s Secret Sharing necessitates that all shares carry equal weight for recovery. It has been argued that, because social trust is not uniformly distributed among a given individual’s network of peers (which perhaps includes family as well as work colleagues and friends), it should be possible to assign shard-holders differently weighted shards.
The way to implement this technically would be to allow one or more shard-holders to receive multiple shards. However, Dark Crystal has not implemented this as an optional feature, and does not recommend it as a practice, for the following reasons:
- Endowing custodians with unequal numbers of shares disrupts the threshold tolerance (custodian redundancy) built into the scheme: if one high-value shard-holder is lost (e.g. by misplacing their phone), the chances of reaching the threshold are disproportionately diminished.
- Conversely, if one high-value custodian is somehow compromised, the risk is multiplied compared to an equal distribution among a circle of less trusted, but unlikely to be malicious, peers.
Instead, we would recommend that developer time is better invested in good UX/UI design for the host application: design that supports users in selecting a sensible number of custodians, perhaps appropriately distributed, and a sensible threshold for recovering the secret.
Threats to a peer’s device
When a secret is encrypted, sharded and distributed using Dark Crystal, a series of five possible message types is created that contain metadata relating to the process: root, shard, request, reply and forward. Each message contains references to some combination of: message type, version, timestamp, recipient, custodians, root id, branch id, number of shards, quorum, ephemeral public key and the encrypted shard data itself.
Depending on the application and the particular feature being implemented, it is likely that only selected messages, and selected properties of each, will be required. We recommend that developers consider carefully what their particular feature requires, rather than integrating the message schemas in full, as minimising the metadata stored is always the best approach. The Dark Crystal protocol framework makes this easy to do.
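As an illustration only - the field names below are assumptions for this sketch, and the authoritative schemas live in the Dark Crystal protocol specification - a developer might type the subset of metadata their feature needs along these lines, leaving everything else optional or absent:

```typescript
// Illustrative typing of the metadata fields listed above. Most fields
// are optional, reflecting the advice to persist only what a given
// feature actually requires.
type MessageType = 'root' | 'shard' | 'request' | 'reply' | 'forward';

interface DarkCrystalMessage {
  type: MessageType;
  version: string;
  timestamp?: number;
  recipient?: string;
  custodians?: string[];        // store only if the feature needs it
  rootId?: string;
  branchId?: string;
  shardCount?: number;          // no. of shards
  quorum?: number;              // threshold required for recovery
  ephemeralPublicKey?: string;
  shardData?: string;           // the encrypted shard itself, if applicable
}
```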
Of course, there are hard limits to how much protection such decisions can offer if a user suffers a targeted hardware or operating system attack. If the attack occurs before they use Dark Crystal-based features, it is likely that the ‘secret’ they want to shard and distribute has already been captured, rendering these decisions moot. Likewise, if the secret they want to distribute is a private key that is regularly used and stored on their device, and the device is later compromised such that the private key (and associated password) are captured - the latter, for example, using a key-logger - then the fact of prior or subsequent distribution, and of the residual metadata stored, is rendered irrelevant.
With this in mind, storing some metadata about the distributed shards can make sense in some applications and features. Doing so can aid secret-owners to:
- Remember what their secret was about
- Remember when they last backed up their data
- Remind themselves who their custodians are
- Remind themselves how many shards exist and what threshold they chose
Under other circumstances, it is better to store no metadata at all.
Questions for developers to ask:
- What data/metadata is stored with the owner and what is held by custodians?
- How is the data stored on the owner/custodian’s device?
- How easy is it for an attacker to search for and identify shards on a system?
- How hardened or sandboxed is the application against various hardware attacks? This should be relative to the protection offered to a user’s other private data and keys.
- For each metadata message stored, what properties does it need to have to fulfil the requirements of the feature?
These questions should be asked for messages transmitted and stored at each stage of the sharding, distribution and recovery process.
Conclusion
Dark Crystal essentially makes possible a complementary set of features for developers or users to choose, should doing so improve their overall security. For many users who currently do not use encryption at all, for fear of losing access to data should they lose access to their key, it is likely beneficial for an application to offer the option of sharding. Likewise for the long list of other threshold-based features that sharding makes possible.
When considering the above list of possible vectors of attack, it is ultimately useful for application developers to ask the following three questions to determine the value of sharding for their tool:
- Would the attack be more efficient or effective than traditional attacks that target the secret-owner or their device?
- Would the attack be more efficient or effective than attacks that would be exposed were the threshold-feature not available at all?
- Does the functionality offered by the threshold-based feature therefore ultimately increase or decrease the users’ overall surface of risk?
Distributing the custody of a decryption key shifts the landscape for an adversary significantly, forcing a different set of methodologies for capturing the data they seek. The following set of questions can help developers consider implementation choices that ensure that custodians - who may now be exposed in relation to a secret with which they previously had no connection - remain low-value targets when the cost of targeting them is weighed against the benefit of capturing a single, encrypted shard:
- Can the adversary know that a sharded backup exists?
- Does the adversary have access to the secret-owner’s (app-specific) social graph?
- What transport protocol was used to send the shards and can it be surveilled to identify shard messages, and so the possible sender and recipients of a shard?
- Can the adversary operate across/above nation state boundaries?
- Can the adversary utilise traditional surveillance techniques to determine the most likely custodians, their number and the quorum? Can they do that without revealing they are engaged in active surveillance?
At the technical level, with a correct implementation, it is possible to reduce the risk of these trusted parties being identified to the point where they are practically indeterminable, and thus make exposure of the secret highly unlikely.
Three factors are crucial to consider when implementing sharding as a tool:
- Suitable application architecture: the new feature will only be as secure as the application hosting it
- A suitable transport protocol: sharding cannot mitigate for metadata leaked in transit
- UX/UI design: encouraging the users’ best practice operational security
Client-side encryption and peer-to-peer protocols give an enormous security advantage, but they are still not widely adopted, even by high-risk users. In recent years, Signal and similar apps have made encryption easier to use, but their identity recovery mechanism essentially relies on the SIM card distribution system, which is inherently vulnerable. A system which relies solely on a cryptographic identity is unlikely to be widely adopted unless an easy-to-use recovery mechanism is provided. So while sharded backups introduce vulnerabilities, these are small in comparison to those introduced by centralised servers, or by failing to adopt client-side encryption at all.