Don't trust, verify (an untrusted host)

One important thing to realize is that secure enclaves are not a silver bullet. Building secure protocols around them, even if you assume the enclave itself is impenetrable, is challenging.

First, a quick description of the threat model we use with secure enclaves in Secret Network:

  1. We assume each node is untrusted - i.e., could be run by a malicious host
  2. We assume each node is equipped with a secure enclave (SGX in our instance), that can execute code/data in a trusted manner - i.e., its data can’t be observed or manipulated by the host.
  3. We further assume there’s consensus in the network, which is another layer of security (explained below).
  4. We also assume the usual PKI/cryptographic primitives are secure (signatures, encryption, etc…)

So what does this threat model give us (and what doesn't it)? At first glance, it would seem assumption #2 gives us all we need - secret contracts run inside of enclaves, no one can see the code/data (privacy!), and no one can tamper with the computations (correctness/integrity!). But it turns out this is only part of the picture.

Because of assumption #1 (the host can be malicious), it can do whatever it wants to the inputs going into the enclave and the outputs coming from it. Because of assumption #3 (consensus), attacks that try to tamper with the data (e.g., replacing a transaction or even censoring it) won't work, which is great!

However, this does leave out attacks where an untrusted host (from assumption #1) tries to fool its own, local, secure enclave (assumption #2) in order to leak data. Let's give a simple example:

Say there’s a secret contract that acts as a time vault for secrets: users can store secrets, only to be released at a given time (say, a given block height).

From a consensus perspective, the block height is provided and agreed upon by the network, so you can't fool the network as a whole into believing we're at a different point in time than we really are. You can, however, trick your OWN secure enclave by changing the current block height and passing that to the enclave instead. Block height, like many other environment variables, is not authenticated in any way, and the enclave has no way to know whether it can trust that input. It would therefore be trivial for a malicious host to trick its enclave into thinking we are far enough in the future that it makes sense to output the decrypted secret.
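To make the attack concrete, here is a minimal sketch of such a time vault. All names and numbers are illustrative, not actual Secret Network code; the point is only that the enclave sees whatever height the host chooses to pass in:

```python
# Hypothetical time-vault logic running *inside* the enclave.
# The unlock height is illustrative.
UNLOCK_HEIGHT = 1_000_000

def maybe_release_secret(claimed_height, sealed_secret):
    """The enclave has no way to verify claimed_height - it only sees
    what the (possibly malicious) host passes in."""
    if claimed_height >= UNLOCK_HEIGHT:
        return sealed_secret  # decrypted secret leaves the enclave
    return None

# An honest host at height 500,000 gets nothing back:
assert maybe_release_secret(500_000, "s3cret") is None
# A malicious host simply lies about the height:
assert maybe_release_secret(2_000_000, "s3cret") == "s3cret"
```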

This, of course, is just one example of a class of problems - any data that comes into the enclave and is used by it cannot be trusted and needs to be authenticated. In many cases this is easy, because the user herself shares the inputs and signs them, so a malicious host cannot manipulate that data. However, for other types of information, like metadata that lives in the network (e.g., block height), there's no easy solution. This is also documented in this issue: Document limitations of information provided to secret contracts · Issue #201 · scrtlabs/SecretNetwork · GitHub.

We welcome feedback and ideas on how to resolve this concern. We have some ideas, but they aren't trivial and are too high-level and unsubstantiated at this point. In the meantime, contract developers need to be aware of this and design their contracts accordingly, avoiding trust in unauthenticated data from outside of the enclave.

  • This problem can be seen as tangential to the Oracle problem that exists in all blockchains.

Could we implement a standardized bundle of network metrics which nodes can transmit upon request? If a node is about to do something potentially dangerous based on a trigger, it must first validate the authenticity of the trigger with a second node. The node it chooses is simply the next block proposer. If there isn't consensus between the two, then the first node can broadcast a warning to the network/slashing/etc.

I can’t imagine how much it would cost in storage or secondary processing, but if it’s a mandatory check then it’s baked into the cost of running a node and is effectively free. This would add at least 5 seconds of processing to identify the next block proposer.

The problem is - how do we prevent a malicious node from fooling its own enclave? What prevents such a host from lying about who the block proposer is? The enclave isn’t aware of the blockchain’s state. If it were, we would not have a problem to begin with.

Ah yes, I see. It is difficult. Bear with me on this next one, I have a feeling I’m missing something :stuck_out_tongue:

Can a malicious host mint additional SCRT if they break off onto their own chain? If they cannot, could we have a secret contract acting as a listing service for other validators? The list would only be updateable if the total SCRT held by participating nodes is higher than the previous update.

At the time of the event, the secret contract can request approval from X randomly chosen validators in the list. The malicious host could break this by creating a chain with only bad actors, but we could fight that by only allowing one attempt. If it fails, then it would have to go through an unlock process based on another similar secret contract which relies on a base threshold of SCRT.

If rogue chains can mint new SCRT then you could alternatively create a secret contract using threshold signatures across all the validators. Something like 30-of-50 validators are needed to update the secret contract validator list.


No, a malicious host obviously cannot mint SCRT because no other node would accept it.

I don’t fully follow your solution, but I don’t see how something like that could work. Sure, presenting the enclave with a list of signatures (or one threshold sig) proving there’s sufficient agreement is a solid idea which we’re also considering. But, how do you prove to your enclave that those signatures are coming from the real main chain validators?

What prevents a malicious host from sybiling 50 key pairs and making up bonded SCRT numbers? The enclave can’t trust the list of nodes/stake it’s getting from the host.

I think you answered the problem with my first thought. I was questioning if “making up bonded SCRT numbers” was possible and it sounds like it is so that won’t work. So let’s forget that and stay with signatures.

But, how do you prove to your enclave that those signatures are coming from the real main chain validators?

You can sign with each validator’s enclave key. You save a list, in a secret contract, of all active validators at a point in time. The only way the list changes is with a quorum of previous validators. A rogue chain can’t override that list, since it doesn’t have the keys from all validators. Any triggered code knows to query the validator-list secret contract to (somehow deterministically, don’t ask me how just yet) return X number of validators that must confirm the metadata. This starts to look like a case of “turtles all the way down”, because malicious nodes also process the validator-list secret contract, but that contract is protected from outside inputs since a rogue chain doesn’t have the keys to change the list of validators.
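The quorum-gated list update described above can be sketched as follows. This is an illustrative toy, not actual contract code: validator names stand in for enclave-key signatures, and the 2/3 quorum is an assumed parameter.

```python
# Toy sketch of a quorum-gated validator list. Real code would verify
# actual enclave-key signatures; names stand in for them here.
QUORUM = 2 / 3

def update_validator_list(current, proposed, approvals):
    """Accept the proposed list only if enough *current* validators approved.

    `approvals` is the set of validators claiming to sign off; only
    signatures from the currently trusted set count."""
    valid = approvals & current
    if len(valid) / len(current) >= QUORUM:
        return proposed
    return current  # otherwise keep the old list unchanged

current = {"alice", "bob", "charlie"}
# A rogue chain forging approvals from keys it doesn't control gets nowhere:
assert update_validator_list(current, {"mallory"}, {"mallory", "eve"}) == current
# Two of three real validators can rotate the set:
assert update_validator_list(
    current, {"alice", "bob", "dave"}, {"alice", "bob"}
) == {"alice", "bob", "dave"}
```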

I see it going like this: let’s say an event is triggered and we reach out to the validator-list secret contract. The contract is told to return 1 validator. It returns the validator’s name, a random “proof” string encrypted with the public key of the selected validator, and the same string encrypted with the public key of the original enclave. If the validator is real, it will be able to decrypt the proof string and return the proof approval and the decrypted string back to the original process.

Even if we ignore the complexity of this protocol, how do you avoid someone sybiling by creating 50 nodes with enclaves? If the enclave can’t trust how much SCRT each validator has, any information it gets about outside validators - even ones running proper enclaves - cannot be trusted.

It feels like there should be a way to create a root of trust - i.e., mimic the generation of the chain on the ‘outside’ on the inside of the enclave as well, but we have yet to be able to formalize it.

There are many kinds of external events that are valuable. You describe the “time vault” feature, and I suppose that time events are among the more worthwhile events to provide, since a lot of business logic does depend on time. For that reason, one could perhaps create a token for that specific purpose, which would incentivize its shareholders to provide a time service. Another option would be to utilize SCRT. Anyhow, we can refer to these as secret keepers, and to the one who initially supplies the secret as the secret creator.

If you want to create a time-based secret, you begin by selecting a number of secret keepers. Their rate depends on how far into the future they need to unlock the secret (10 days needs to be cheaper than 10 years) as well as on the size of the stake. The secret keepers lock their stake for as long as they need to keep the secret. If they reveal the secret ahead of time, they will lose their stake, and all of it will go to the secret creator. In this way, you could calculate what the secret is worth in relation to how much you would receive if the secret is revealed prematurely.

A problem is, of course, if the validators somehow disappear and the secret is lost forever. If that happens, the secret creator receives all of the stake. But it would be unlikely to happen since you only need a subset of signatures to unlock the secret, and whoever forgets to unlock it will lose their stake because they haven’t fulfilled what they promised to do.

This falls more along the lines of an Oracle-type solution/solution managed by the contract developer. We agree that currently this is the most likely approach, even though it’s not as neat.

Generally speaking, for any external data, the contract developer can allow one or multiple signers to register and provide authenticated data (and yes - they can get incentivized for this). The contract developer can specify the threshold wallet’s address or the individual addresses of these signers.

What about the approach where you validate the whole chain from the block when the secret was created until the block when it should be revealed? That includes verifying all the signatures as well as whether the validators have been truthfully elected. If the algorithm detects that less than 50% of SCRT supply is used to elect validators, then we could perhaps assume that it is not the real chain that is being referenced?

@transitory_system wouldn’t chain re-validation be prohibitively costly?

@guy for a simple solution couldn’t we simply ask another enclave to sign that we are past block 500? The malicious host could stop the communication but the information would still stay locked. I was making things complicated trying to find a random validator but I guess that doesn’t matter if we assume that all enclaves are secure.

@Derek how can we make sure the response we get from the other enclave is legitimate? An attacker can, theoretically, forge a response saying we are past a certain block number.


@Derek Since the enclave space is limited, you would need to input the minimum amount of data necessary to indicate whether the chain is authentic or not. Examples of data could be block hashes, the set of active validators, account balances, how the voting transpired, etc. And then, once you’ve confirmed that the chain is authentic, you could use the information to trigger certain functions within the secret contract. To use time as a parameter, you could use the number of blocks produced as a metric.

Maybe there is a way to feed the data sequentially into the enclave. Perhaps you could feed it one block at a time, such that the next block must match the block hash of the current block; otherwise, the enclave rejects the data.
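The sequential feeding idea above can be sketched with a toy hash chain. This assumes a simplified block structure (height, previous hash, data) and uses SHA-256 as a stand-in for whatever hashing the real chain uses:

```python
import hashlib

def block_hash(block):
    """Toy block hash over a simplified header."""
    payload = f"{block['height']}:{block['prev_hash']}:{block['data']}"
    return hashlib.sha256(payload.encode()).hexdigest()

def feed_blocks(trusted_hash, blocks):
    """Feed blocks one at a time into the 'enclave'; reject any block
    that is not hash-linked to the last trusted one."""
    for block in blocks:
        if block["prev_hash"] != trusted_hash:
            raise ValueError(f"block {block['height']} does not extend the trusted chain")
        trusted_hash = block_hash(block)  # advance the sealed trust anchor
    return trusted_hash

genesis = "0" * 64
b1 = {"height": 1, "prev_hash": genesis, "data": "tx1"}
b2 = {"height": 2, "prev_hash": block_hash(b1), "data": "tx2"}
assert feed_blocks(genesis, [b1, b2]) == block_hash(b2)
```

A forged block with a mismatched `prev_hash` is rejected at exactly the step where it breaks the chain, which is the behavior described above.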

To let an algorithm detect whether a chain is authentic or not seems to me like a rather hard research problem, but it might be solvable.

@toml if the response message is signed with the private key of the other enclave then we know it’s legitimate. A host could theoretically pass in old signed messages (i.e. block height 495) but that’s less of a concern. Signatures for block height of 500, 505, etc. are unknown until the real enclave releases them.
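The distinction above (old signed heights can be replayed, but future heights can't be forged) can be sketched like this. HMAC is used as a stand-in for the enclave's signature scheme, and the key name is purely illustrative:

```python
import hashlib
import hmac

# Stand-in for a signing key held inside the enclave; a real enclave
# would use an attested asymmetric key, not a shared HMAC secret.
ENCLAVE_KEY = b"sealed-enclave-key"

def sign_height(height):
    """The enclave signs a statement about the current block height."""
    return hmac.new(ENCLAVE_KEY, str(height).encode(), hashlib.sha256).digest()

def verify_height(height, sig):
    return hmac.compare_digest(sign_height(height), sig)

old_sig = sign_height(495)        # host may have cached this from earlier
assert verify_height(495, old_sig)      # replaying an old height still verifies...
assert not verify_height(500, old_sig)  # ...but it cannot be passed off as height 500
```

This matches the point above: replaying stale-but-genuine statements is possible, while signatures for heights 500, 505, etc. remain unknown until the real enclave releases them.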

@transitory_system - there are tricks like the ones you mentioned but my gut tells me that the solution is just too costly. Also, there is a growing cost for longer storage which is complexity that we are offloading to the developer. That’s never ideal.

Mostly the concern is that it would be easy to fake a chain/replay a chain at a chosen time. For example, you could quickly spin up a separate chain of your own, or at least some of it. Also, as @Derek is saying, this would be prohibitively expensive, but it is a solution we’re considering - we just haven’t found a way to make it work yet.

To be fair, this would’ve been easier with a PoW system, especially if you include some challenge-response mechanism like the enclave throwing a random value to be included in a block, then receiving that block and the chain ahead of it as proof. But, even that is an extremely complicated solution (although it seems more secure).

The goal of this post is to give an overview of the potential oracle solutions to deal with the limitation @guy raised. We are going to use time as an example for our discussion.

We can’t reliably tell time (or balances) inside the enclave during contract execution, as detailed in Issue #201. This creates problems for applications where time is required as a trigger for contract execution, such as voting, mixing (Salad), and dead-man’s switches. The short-term solution to this problem is oracles. There are two potential design patterns for leveraging time oracles in secret contracts:

  • Trusted off-chain oracles
  • Multi-sig on-chain oracles
    Note: the naming is slightly confusing because all oracles deal with off-chain data - bear with me pls :slight_smile:

Trusted off-chain oracles

This is a simple concept - there’s one registered address that’s allowed to send block-height input to the secret contract. While auditing oracle behavior is easy, this places a significant level of trust in the oracle and opens the possibility of a misbehaving oracle at the expense of the system. There are no checks and balances for the trusted oracle, and no on-chain aggregation or consensus of off-chain input. In addition, a trusted oracle is not censorship resistant, which is an important design principle in Web3.

Multi-sig on-chain oracles

The idea of a multi-sig oracle is to create an oracle model that doesn’t have a central point of failure, is censorship resistant, and can aggregate off-chain data on-chain (multi-sig consensus). This replaces a trusted operator with a network of N participants. Here are some properties of the multi-sig operator model:

  • Open participation (anyone can choose to participate in the multi-sig oracle)
  • Predetermined set of participants
  • Crypto-economic incentives for honest behavior

Let’s take secret voting as an example: a voting contract that requires votes to be tallied after X blocks. The on-chain multi-sig oracle can provide time input to the voting contract, as well as to any other contract, like Salad, which has a time- and quorum-based trigger for the mix.

The multi-sig time oracle contract accepts deposits from network participants during time t. For the sake of example, let’s assume the minimum deposit amount is 100 SCRT and each vote costs 1 SCRT per multi-sig oracle participant. This process is open to any holder of SCRT who wants to serve the network. However, at some point t+1, the multi-sig participant list must be finalized. The participant list can and should be updated periodically.

The secret voting contract can query the multi-sig time oracle contract during execution, which means the oracle can provide the block height used to determine whether the votes can be tallied or not.

From a crypto-economic perspective:

  • participants whose time input matches the consensus formed by the multi-sig contract get rewarded
  • participants with conflicting inputs get slashed.

One area for improvement is to allow secret contracts to pay each other for running queries. This is feedback we have given to the CosmWasm team through the latest CosmWasm core contributor and our one and only @reuven !!!

For the sake of argument, let’s assume we have a 2-of-3 multisig:

  • Alice, Bob and Charlie each deposit 100 SCRT to the multi-sig time oracle. The participants run a bot to provide block height. In exchange for providing time information, Alice, Bob and Charlie expect to receive a payment either generated by TX fees or set aside as a bounty by the contract owner (secret voting, Salad, etc.)

  • When the time input is required (the secret voting contract’s tally function is called), participants provide the block height to the multisig oracle contract. At this point there are a number of possible outcomes:

    • 3/3 agree on the same result; the time is passed to the voting contract. The reward denominated in the contract is shared among Alice, Bob and Charlie. Note that this may not be the right time, but there would at least be consensus on-chain on the block height
    • Bob agrees with Alice and Charlie doesn’t vote. ⅔ passes; the reward is shared between Alice and Bob
    • Bob agrees with Alice and Charlie disagrees. ⅔ passes; the reward + Charlie’s slashed vote stake (1 SCRT) is shared among Alice and Bob
    • Bob and Charlie don’t vote. We can’t provide a time input to the voting contract and the contract halts. I don’t think this is very likely, because either the contract owner or the participants in the vote have an incentive to see the contract run properly. However, this is still a risk, and threshold participation vs. incentives must be researched further.
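The four outcomes above can be captured in a small settlement sketch. This is an illustrative model, not contract code; the reward/slash amounts and the simple-majority tie-breaking are assumptions:

```python
from collections import Counter

def settle_round(votes, threshold):
    """Tally block-height votes from multi-sig participants.

    `votes` maps participant -> reported block height. Returns
    (agreed_height_or_None, rewarded, slashed). If fewer than
    `threshold` participants agree, the round fails and the
    dependent contract halts."""
    if not votes:
        return None, [], []
    (height, count), = Counter(votes.values()).most_common(1)
    if count < threshold:
        return None, [], []
    rewarded = [p for p, h in votes.items() if h == height]
    slashed = [p for p, h in votes.items() if h != height]
    return height, rewarded, slashed

# 3/3 agree:
assert settle_round({"alice": 500, "bob": 500, "charlie": 500}, 2) == \
    (500, ["alice", "bob", "charlie"], [])
# Charlie disagrees, 2/3 passes and Charlie is slashed:
assert settle_round({"alice": 500, "bob": 500, "charlie": 777}, 2) == \
    (500, ["alice", "bob"], ["charlie"])
# Only Alice votes, the contract halts:
assert settle_round({"alice": 500}, 2) == (None, [], [])
```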

The multi-sig operator model is significantly more complex than a trusted operator model - it’s a network within a network. However, it’s also censorship resistant and less vulnerable to manipulation. The multi-sig oracle approach is similar to how Maker and Compound use off-chain price information in determining their rates and prices. In Maker there are whitelisted oracles and an aggregation smart contract (instead of the multisig contract) that takes the average of the prices fed by the whitelisted oracles.

Compound also uses a similar approach. These two DeFi applications do not offer open participation. Maker and Compound alone account for approximately $600mn, which is a testament that oracles with some level of trust are already playing a big role in the blockchain ecosystem.

I personally would like to see a multi-sig oracle contract and the tooling that makes participation easy as a building block in the Secret Network ecosystem!


Thanks everyone who participated in this discussion so far!

Beyond the Oracle solution Can presented, I’d like to present another solution that would likely work, and is most likely the optimal one.

First, touching on Can’s analysis - I’ve started referring to this problem as the Extended Oracle Problem. The reasoning is that, as with oracles, we’re trying to move information from an untrusted source to a trusted one in a verifiable way. Unlike oracles in ‘classical’ blockchains, here the problem extends to on-chain data, which is usually trusted due to consensus.

This brings me to our solution. Intuitively, if a node’s enclave were aware of the chain’s consensus, it would be able to trust all contextual data coming from that chain (like block height, balances, etc…). This is the same argument for why a given full node, or even a light client, trusts blocks committed to the chain. Generally, a light client puts a bit more trust in the network in that it only validates the easy parts, but that is likely more than enough for our purposes. More formally, a node trusts the chain it’s seeing because of the following inductive argument over blocks:

Block height N=0 (genesis block):

  1. 2/3+ of genesis validators signed the genesis block data (genesis.json), which also included the initial distribution.

Assume we know/trust/verified the chain up to height N. Now, for height N + 1:

  1. Previous block is hash-linked to the current one (thus creating a sequential chain and preventing someone from proposing an arbitrary block with a different state)
  2. Current block is signed by 2/3+ of the validators (weighted by stake). We can trust the stake because we assume we know/trust the chain up to block height N
  3. All transactions in block N + 1 are valid (light clients don’t actually check this. This is fine).

A key point here is the chain of trust - anyone can start a new network with a new genesis.json and create many fake validators, but once a chain is up and has a current state/validator set, you will not accept just any block signed by anyone. It has to be signed by 2/3+ of the stake as you know it. Based on this key insight, I believe that if we have a light client inside the enclave that starts from the beginning of the chain and continues to maintain a sealed state of the chain, then any data coming from the outside can be verified inside the enclave before proceeding. This should therefore allow us to trust block height (time), balances, and other aspects inside the enclave.
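One induction step of this argument can be sketched as follows. It's a toy model, not a real Tendermint light client: validator names stand in for signature checks, SHA-256 over a simplified header stands in for real block hashing, and transaction validity (which light clients skip anyway) is omitted:

```python
import hashlib

def header_hash(header):
    """Toy hash over a simplified block header."""
    payload = f"{header['height']}:{header['prev_hash']}:{header['app_state']}"
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_next_block(trusted, header, signers):
    """One induction step: trusted state at height N -> trusted state at N+1.

    `trusted` holds the hash of block N and the validator->stake map
    agreed at N. `signers` is the set of validators that signed block
    N+1 (names stand in for actual signature verification)."""
    # 1. Hash link: the new header must extend the block we already trust.
    if header["prev_hash"] != trusted["hash"]:
        raise ValueError("not linked to the trusted chain")
    # 2. 2/3+ of the stake we trusted at height N must have signed.
    validators = trusted["validators"]
    signed_stake = sum(s for v, s in validators.items() if v in signers)
    if 3 * signed_stake <= 2 * sum(validators.values()):
        raise ValueError("insufficient voting power")
    # Advance the sealed state (validator set may rotate in the new block).
    return {"hash": header_hash(header),
            "validators": header.get("validators", validators)}

trusted = {"hash": "genesis", "validators": {"val-a": 50, "val-b": 30, "val-c": 20}}
header = {"height": 1, "prev_hash": "genesis", "app_state": "s1"}
# val-a + val-b control 80/100 stake, above 2/3: the block is accepted.
new_trusted = verify_next_block(trusted, header, {"val-a", "val-b"})
assert new_trusted["hash"] == header_hash(header)
```

Sybil signers carry zero stake in the trusted map, so a fake chain signed by 50 made-up validators fails step 2, which is exactly the key insight above.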

The one downside is that this solution is likely extremely complicated to develop, and needs to be carefully analyzed for correctness (both theoretically and from an implementation perspective). In practice, Oracles might be an easier way to go.


What was the end decision on this out of curiosity? This was a fascinating thread to read, and I can’t help but agree with your inductive proof as the basis for solving the Extended Oracle Problem. How would the sealed state of the chain be updated? What input exactly is being checked inside of the enclave so as to confirm the legitimacy of the incoming transaction/trusted chain of transactions?


There was none, actually. More research just indicated that the solution last presented is the optimal one, but is quite challenging to build.

For the time being, we decided to park it and see how developers use the network. Luckily, most of the use cases we’ve seen emerge so far did not require such functionality. Those that do use (or will use) some off-chain oracle (whether centralized or federated, as @can suggested).

Though sybil attacks seem to be prevented by oracles at least (and I guess this is what the Band integration is for), what about the enclave’s encryption itself being compromised?

For example, for the purpose of storing keys on Secret Network, it doesn’t really seem feasible, given that if an enclave were compromised, a full loss of funds (or whatever else is stored) would ensue.

I understand that what I’m saying is clearly NOT simple in the slightest, but knowing what we know about SGX vulnerabilities having occurred, and that this would become a clear and growing target if such functionality were implemented and SCRT continued to increase in value, do we have any extra layers of security to prevent this?

Intel provides attestation that the enclave isn’t corrupted, but is this attestation report something that is continuously enforced for every computation too? Doesn’t this place a single point of failure on the attestation service itself not being compromised?

I would assume that when it comes to keys, breaking up the key into chunks across all validators (is this even possible lol) would significantly mitigate this attack vector, similarly to how consensus lets us reject tampered computations, but I feel like the design of Secret Network for certain use cases like this one is still relatively exposed.