Secure against n-1 corruptions, or n/2?

I have answered my own question by reading the blog. See last two paragraphs below.

I was re-reading the white paper and have a question about the security assumptions. When sMPC is implemented around next year, will it assume t = n-1 (secure as long as at least one node doesn’t collude) or t >= n/2 (no dishonest majority)? If it is t >= n/2 (no dishonest majority), is the risk of a Sybil attack significant enough that financial or medical data could be leaked?

Well, I read a blog post from Enigma which said: “The number of systems needed to reconstruct the data is a tunable parameter that can range from some portion of the system up to all of them.”

So this post is no longer a question but a statement of: hey, that’s really cool. And if you could discuss the costs of securing against n-1 corruptions, that would be great.


This is a great question!

We are indeed mostly curious about the dishonest majority case - at least when it comes to privacy. So technically, we’re building our solution to support up to t = n-1 corruptions, where t can be tuned. With SPDZ and other recent protocols based on BMR, the performance overhead is not too great compared to the honest majority case.

The biggest downside of a dishonest majority is that it doesn’t support robustness - i.e., the protocol can fail to complete. For example, imagine your protocol is secure against all but one node colluding. While this does mean that there will be no data leakage unless everyone colludes, it also means that a single party aborting is enough to prevent a computation from happening.
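To make that trade-off concrete, here is a minimal Python sketch (not Enigma’s actual code) of the n-out-of-n additive secret sharing that underlies SPDZ-style full-threshold protocols. Any n-1 shares are uniformly random, so even a near-total collusion learns nothing, but a single withheld share makes reconstruction impossible — exactly the privacy/robustness trade-off described above. The modulus choice is illustrative.

```python
# Minimal n-out-of-n additive secret sharing sketch (illustrative only).
import secrets

P = 2**61 - 1  # an illustrative prime modulus

def share(secret, n):
    """Split `secret` into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)  # last share fixes the sum
    return shares

def reconstruct(shares):
    """Recover the secret: only the sum of ALL shares reveals it."""
    return sum(shares) % P

shares = share(42, 5)
assert reconstruct(shares) == 42
# Privacy: any 4 of the 5 shares are uniformly random -> no leakage.
# No robustness: drop one share and the partial sum is uniformly
# random, so the secret is unrecoverable (except with negligible
# probability ~1/P that the partial sum happens to equal it).
```

This is why a single aborting party can block the whole computation: the missing share is information-theoretically required.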

This is a big concern in the non-blockchain case, as incentives aren’t taken into account. In our case, however, we can disincentivize this behavior. To do so, we need to be able to uniquely identify a cheating party, which is where most of the protocol’s complexity lies.


That makes a lot of sense, even though I don’t really grasp how MPC works under the hood yet. Great explanation.

I have a follow up question, in two parts:
(1) Will there be a way to develop contracts such that it is possible for a data owner to guarantee that their masternode will participate in any round in which their data is input?
(2) Will there be any mechanisms to guarantee that some (subset of some) set of nodes always participates in a contract?

For example, let’s say some intelligence agencies want to analyze all their data in aggregate, and each wants to have absolute veto power on any collusion attempt (2). For any round of computation they would probably want the threshold for collusion to be all-but-one, and also each participant, especially one that inputs data (1), would probably want to have a node in the round.

Also, if there is a way to participate in a round without a masternode and have veto power on collusion attempts, that would be relevant. But my understanding is that that won’t be feasible.

So to clarify: using SPDZ, if one node chosen to perform a computation goes down, the computation fails and an entirely new round must be started, correct? REN’s protocol claims to go about this another way and claims this is a major problem. Wouldn’t it then be fairly easy to DoS a node with a high stake (and thus a high chance of being chosen at random), completely shutting down computations AND causing that node to be penalized?

Correct. Easy ways to mitigate:

  1. Select several groups at random - at least one is likely to succeed and that creates a higher incentive to complete the computation quickly (if only the first group wins).

  2. If a group doesn’t return the result quickly enough, select another group.
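The two mitigations above can be sketched roughly as follows. This is a toy orchestration sketch, not any real protocol code; `select_groups`, `run_with_fallback`, and `try_compute` are hypothetical names for illustration.

```python
# Toy sketch of the two mitigations: redundant groups + fallback.
import random

def select_groups(nodes, group_size, num_groups, rng=None):
    """Mitigation 1: sample several disjoint groups at random."""
    rng = rng or random.Random()
    pool = list(nodes)
    rng.shuffle(pool)
    return [pool[i * group_size:(i + 1) * group_size]
            for i in range(num_groups)]

def run_with_fallback(nodes, group_size, num_groups, try_compute):
    """Mitigation 2: if a group aborts or times out, try the next one."""
    for group in select_groups(nodes, group_size, num_groups):
        result = try_compute(group)  # assumed to return None on failure
        if result is not None:
            return result  # first group to finish wins
    raise RuntimeError("all sampled groups failed; resample and retry")
```

With several independent groups, a DoS on one high-stake node delays at most one group rather than halting the computation outright.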

A slightly more challenging approach is to modify SPDZ to support Shamir’s scheme, which allows you to select the threshold. This shouldn’t be too hard really.
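For reference, here is a minimal Shamir sketch showing the tunable threshold t: the secret is the constant term of a random degree-t polynomial, so any t+1 shares reconstruct it while t or fewer reveal nothing. (The prime modulus is an illustrative choice, and this is a didactic sketch, not a drop-in SPDZ modification.)

```python
# Minimal Shamir secret sharing sketch with tunable threshold t.
import secrets

P = 2**61 - 1  # an illustrative Mersenne prime modulus

def share(secret, t, n):
    """Shares are points on a random degree-t polynomial f with f(0)=secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 (needs at least t+1 points)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

pts = share(42, t=2, n=5)
assert reconstruct(pts[:3]) == 42  # any t+1 = 3 shares suffice
```

Because only t+1 of the n shares are needed, up to n-(t+1) parties can drop out without blocking reconstruction, which is exactly the robustness that full-threshold additive sharing lacks.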

Re: REN - I haven’t been able to find anything on the matter in their whitepaper. If there’s something concrete I’m happy to comment on it.
