Notes on protocols suitable for emerging Proof of Stake networks

Dominic Williams
4 min read · Dec 2, 2016


This was actually meant as a comment on Fred Wilson’s recent Proof of Stake post here http://avc.com/2016/11/proof-of-stake/.

For some reason it got classified as spam, as did a previous comment. It was just an attempt to add some informed content; anyway, judge for yourself whether it was useful…

Without seeking to bore people to death, here are some useful technical waypoints:

1. Regarding the application of traditional Byzantine Fault Tolerant consensus protocols from distributed computing, such as PBFT (the protocol currently being used by Hyperledger, etc):

- These minimally involve O(n²) message transmission overall. This can make them unsuitable for large-scale public broadcast/gossip networks. Even with point-to-point networks, the number of messages each individual process (think client software) must send and receive grows as O(n). In practice the communications-overhead equations carry big fat coefficients and the curve is steep; dream on if you want to run these protocols over the Internet between more than 150 processes (see the back-of-the-envelope sketch after this list).

- If you do want to use these protocols, only consider fully asynchronous ones. These have no timeouts and do not depend on leaders, which makes them far more robust when network connectivity becomes unpredictable and far more resistant to attack. For example, the popular PBFT protocol uses a leader. If the leader is attacked, or if the network becomes asynchronous and its messages miss their timeout windows, an expensive leader election process is triggered. Also, a cunning Byzantine leader can send out its messages on the edge of the timeout windows without triggering re-election and slow the network to a crawl (see the second sketch after this list). Prefer best-of-breed asynchronous protocols such as https://hal.inria.fr/hal-00944.... During 2014 I was using parallel derivations of this protocol that were later formulated into HoneyBadgerBFT, https://eprint.iacr.org/2016/1... (Andrew Miller and Elaine Shi were involved in the 2014 Pebble project that pursued a stable cryptocurrency; it never launched and morphed into something else).

- Remember, though, that even if you can run BFT consensus instances between limited numbers of clients, you will most probably have to allow point-to-point communication to achieve reasonable performance. Exposing processes' (clients') addresses makes them vulnerable to DoS, which can easily prevent the network (e.g. a blockchain) from progressing. One solution is to place them behind Tor addresses, as tested in HoneyBadgerBFT, but this is not without problems.

- Many protocols in use today, such as Tendermint/Cosmos, are not fully asynchronous and are vulnerable to attack by people such as myself who understand how traditional consensus protocols work. If you want to go down this route, choose a fully asynchronous protocol such as HoneyBadgerBFT, as Gavin Wood has done with PolkaDot http://www.the-blockchain.com/...
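
To make the quadratic blow-up in the first point above concrete, here is a rough back-of-the-envelope sketch (my own illustration, not code from any particular PBFT implementation) counting messages for a single all-to-all voting phase; real protocols run several such phases per decision:

```python
# Back-of-the-envelope message counts for one all-to-all voting phase,
# in which every one of n processes sends its vote to the other n - 1.
# Illustrative only: real protocols (e.g. PBFT) run several such phases
# per decision and add signatures, retransmissions, view changes, etc.

def messages_per_phase(n: int) -> int:
    """Total messages when each of n processes sends to every other process."""
    return n * (n - 1)

for n in (4, 10, 50, 150):
    total = messages_per_phase(n)
    per_process = 2 * (n - 1)  # each process sends n-1 and receives n-1
    print(f"n={n:>3}: ~{total:>6} messages per phase, ~{per_process} per process")
```

At n = 150 that is already north of 22,000 messages per phase, before signatures or retransmissions.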
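And here is a toy model of the "leader on the edge of the timeout" behaviour described above. All the figures are hypothetical; the point is only that a Byzantine leader who replies just inside the view-change timeout is never replaced, yet can cut throughput by more than an order of magnitude:

```python
# Toy model of throughput when a Byzantine leader delays each proposal to
# just inside the view-change timeout. All figures are hypothetical and
# only illustrate the shape of the problem.

HONEST_PROPOSAL_DELAY = 0.2   # seconds an honest leader takes to propose a block
VIEW_CHANGE_TIMEOUT = 5.0     # seconds before the other replicas replace the leader
EPSILON = 0.1                 # the Byzantine leader stays just inside the timeout

def blocks_per_minute(delay_per_block: float) -> float:
    return 60.0 / delay_per_block

honest = blocks_per_minute(HONEST_PROPOSAL_DELAY)
byzantine = blocks_per_minute(VIEW_CHANGE_TIMEOUT - EPSILON)

print(f"honest leader:    ~{honest:.0f} blocks/minute")
print(f"byzantine leader: ~{byzantine:.1f} blocks/minute (no view change is ever triggered)")
```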

2. Modern blockchain protocols:

- Systems such as Delegated Proof of Stake, where block forging rotates around a circle of delegates, are trivially bad for simple reasons. Firstly, once the delegates have been deanonymized, one might imagine a DoS hose rotating like the second hand of a clock, preventing them from ever producing a block. Secondly, an adversary who controls a sequence of forgers knows in advance that he will be creating a sequence of blocks in the chain, which lays the groundwork for all kinds of terrible attacks that I won't document here (a sketch of this predictability follows this list). The only reason systems that use DPoS (BitShares, Tendermint) haven't experienced issues is that they have not reached a scale where anyone cares.

- Good protocols will depend upon the production and application of randomness using cryptography. You will notice that Proof-of-Work actually works this way: miners race to solve the current puzzle (finding a hash below a target), where solutions can only be found in random time by brute force, and the winners become temporary leaders who may append a block to the chain (a small sketch of the puzzle race follows this list). If the majority of puzzle-solving power is held by correct participants, the chain progresses and is resistant to attack.

- For examples of applications of randomness, see Threshold Relay chains and Probabilistic Slot Protocols (skip to halfway through this deck) http://string.technology/#dfin...
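
To illustrate the predictability problem with delegate rotation, here is a minimal sketch assuming a plain round-robin schedule over a fixed delegate list (a simplification; the delegate names and schedule are invented for the example):

```python
# Minimal sketch of round-robin DPoS scheduling (a simplification).
# Because the schedule is deterministic, anyone can compute which delegate
# forges which future block height, enabling targeted DoS and attacks
# around an adversary's own slots. Names and schedule are invented.

delegates = ["alice", "bob", "carol", "dave", "eve"]  # hypothetical delegate set
adversary = {"carol", "eve"}                          # delegates the adversary controls

def forger_at(height: int) -> str:
    """Delegate scheduled to forge the block at the given height."""
    return delegates[height % len(delegates)]

upcoming = [(h, forger_at(h)) for h in range(100, 115)]
adversary_heights = [h for h, who in upcoming if who in adversary]

for h, who in upcoming:
    marker = "  <-- adversary slot" if who in adversary else ""
    print(f"height {h}: {who}{marker}")
print("adversary forges at heights:", adversary_heights)
```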
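And here is a minimal sketch of the Proof-of-Work puzzle race described above, brute-forcing a nonce until the SHA-256 hash falls below a target. The target is set deliberately easy so the example finishes quickly; it is an illustration, not a real miner:

```python
import hashlib

# Minimal Proof-of-Work sketch: brute-force a nonce until
# SHA-256(header || nonce) falls below the target. The target is set
# deliberately easy (roughly 1 in 65,536 hashes succeeds) so this
# finishes in well under a second; real networks tune it so solutions
# arrive in random time measured in minutes of global hashing.

TARGET = 2 ** 240                                # easy demonstration target
header = b"previous-hash|merkle-root|timestamp"  # placeholder header bytes

nonce = 0
while True:
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    if int.from_bytes(digest, "big") < TARGET:
        break
    nonce += 1

print(f"solved with nonce={nonce}, hash=0x{digest.hex()[:16]}...")
```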

ADDENDUM: 2 years ago, Ripple released details of the proprietary consensus system they were using. It was immediately apparent that it was deeply flawed for simple theoretical reasons, or at least for reasons that seem readily apparent if you have studied traditional BFT consensus. I wrote a post breaking down the flaws and explaining the dangers. Ripple, of course, was not a decentralized network and ran their protocol off a handful of internal servers. Later, Stellar took their system and tried to use it to launch a genuinely distributed and presumably decentralized network. Within a short time it forked, and they had to revert to running a few internal servers until Prof. David Mazieres gave them SCP, which does work. Anyway, I think the comment I made at the time gives some insight into the kinds of difficulties you can face, which are not always apparent. Check out the comment here http://www.coindesk.com/ripple-protocol-consensus-algorithm-digital-transactions/#comment-1575948171.

Consensus and the design of robust decentralized systems is a tough sport. If I were to pinpoint particular areas of concern regarding the current fashion for applying traditional BFT consensus in decentralized networks with the aim of creating fast and consistent chains, they would be (i) the use of algorithms with leaders that can be attacked, (ii) the use of algorithms with timeouts that get triggered by network asynchrony, (iii) the need to allow point-to-point communication, which makes processes vulnerable, and (iv) the impossibility of scaling the protocols so that a high degree of security can be obtained by involving a large number of processes.
