Cryptocurrency Research Review, Issue No. 1

*All the cryptospeak you could ever want!*

https://mitcryptocurrencyresearch.substack.com/p/mit-dcis-cryptocurrency-research

Hello (world),

Welcome to Cryptocurrency Research Review, Issue No. 1! This biweekly publication (curated by the MIT Media Lab’s Digital Currency Initiative in partnership with MIT Press) crowdsources high-quality reviews of interdisciplinary research from the fields of cryptocurrency and blockchain technology. We intend this publication to help surface impactful work in the space, to provide more neutral analysis of ongoing research, and to encourage interdisciplinary discussion in the community.

This is an experimental effort to figure out how best to meet the needs of cryptocurrency and blockchain researchers, so please share your feedback with us. You can use the links included at the end to submit your feedback or feel free to respond to this email directly. We hope you enjoy this first issue. Let’s get right to it!

Review of Introducing the “Minimal CBC Casper” Family of Consensus Protocols
Paper by Zamfir et al., Nov 5, 2018

Review by Muneeb Ali, Jude Nelson, and Aaron Blankstein

Ethereum Research recently released a paper titled “Introducing the ‘Minimal CBC Casper’ Family of Consensus Protocols.” Ethereum scalability research is an important topic: the network experienced severe scalability problems in production in 2017, and these were due to fundamental design issues in Ethereum itself (as predicted, for example, in 2016).

A close examination of the CBC Casper paper reveals a major shortcoming that calls into question whether the paper contributes to the material advancement of working consensus protocols and scalability efforts.

Liveness and safety are inseparable:

Specifically, the paper attempts to define and prove Byzantine safety without regard to liveness. However, liveness and safety are inseparable properties of Byzantine fault-tolerant consensus protocols. Because of this, we believe that the approach presented in this paper is fundamentally the wrong starting point for the design of consensus protocols.

Safety, or correctness, is a guarantee that once a decision has been reached, it will remain decided. Liveness is a guarantee that a protocol will do something useful (i.e., process transactions) even in the face of failures. Without a liveness guarantee, users cannot actually do anything useful with the protocol, because they do not know when they can consider their transactions to have been processed successfully. For example, in Bitcoin, users need to know when they can consider a transaction confirmed.
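
To make the Bitcoin example concrete, here is a minimal sketch (ours, not from the paper) of the kind of “wait for k confirmations” rule users rely on in practice; the `chain` object and its helper methods are hypothetical:

```python
# Illustrative sketch only: a simplified "k confirmations" rule.
# The `chain` object and its tip_height()/tx_block_height() helpers are hypothetical.
CONFIRMATION_DEPTH = 6  # common heuristic, not a protocol-level guarantee

def is_confirmed(txid: str, chain) -> bool:
    """Treat a transaction as settled once it is buried under enough blocks."""
    tx_height = chain.tx_block_height(txid)  # None if the tx is not yet mined
    if tx_height is None:
        return False
    confirmations = chain.tip_height() - tx_height + 1
    return confirmations >= CONFIRMATION_DEPTH
```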

This paper attempts to prove correctness without considering liveness. This is a problem because proving correctness without liveness is not practically useful, and most of the hard problems show up precisely when you consider liveness: treating one of the two properties does not get you 50% of the way to a full protocol; it leaves you with almost nothing.

The proofs in the paper are useful only to the extent that an empty set of properties is useful: such a set might be “safe,” but it makes no progress toward a practical protocol.

The paper’s definition of Byzantine fault tolerance as “BFT safety but without liveness and only for equivocation faults” is unconventional and, in our view, fundamentally the wrong starting point for the design of consensus protocols.

[Check out the full review here.]

UTXO Accumulators
Review by Pieter Wuille

Much has been written about Bitcoin's scalability, frequently focusing on the growth of the blockchain as a function of the network's transaction rate. But perhaps more concerning in the long term is the growth of the dataset needed for validation [1], called the UTXO set in Bitcoin. In the current protocol, every fully validating node needs to know, at a minimum, which outputs of previous payments haven't been spent yet, in order to detect attempts to double-spend. While blockchain data is only accessed sequentially and can be provided by untrusted hosting, the UTXO set needs fast random access, and it cannot be outsourced because its integrity is critical for security. Furthermore, there are very few economic incentives to curb its growth.
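
For readers less familiar with how the UTXO set is used, the following toy sketch (our illustration, not Bitcoin Core code; the transaction fields are assumed) shows why validation needs random access to the full set of unspent outputs:

```python
# Simplified model of UTXO-set validation; `tx.txid`, `tx.inputs`, and `tx.outputs`
# are assumed attributes of a toy transaction type, not Bitcoin Core's data structures.

utxo_set = {}  # (txid, output_index) -> amount; needs fast random access

def apply_transaction(tx) -> bool:
    """Reject a transaction whose inputs are unknown or already spent,
    otherwise update the set of unspent outputs."""
    # Every input must refer to a currently unspent output.
    if any((i.txid, i.index) not in utxo_set for i in tx.inputs):
        return False  # unknown output or attempted double spend
    for i in tx.inputs:
        del utxo_set[(i.txid, i.index)]      # spent outputs are removed...
    for n, out in enumerate(tx.outputs):
        utxo_set[(tx.txid, n)] = out.amount  # ...and new outputs are created
    return True
```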

A radical solution to this problem was described as early as 2012 [2]. Replacing the UTXO set with a continuously updated commitment to it, and requiring every transaction to carry proof data showing that its inputs are in fact included in this commitment, reduces the trusted storage needed by validating nodes to a constant, categorically removing the UTXO-growth concern. In practice, the design space for this class of solutions is large, and while many approaches have been discussed [3] [4] [5] [6], little work has been done to actually analyze the various trade-offs and optimize a concrete design.
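
As a rough illustration of the proof side of this idea, here is a generic Merkle inclusion-proof check (our sketch; the proof format is assumed and this is not the exact scheme of any of the cited proposals):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Recompute the path from `leaf` up to the committed Merkle root, given the
    sibling hash at each level and whether that sibling sits on the left."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

A validating node that stores only the root can check such a proof for every transaction input, which is what lets it drop the full UTXO set from trusted storage.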

The Utreexo [7] work focuses on building a practical protocol to accomplish this using Merkle trees, accounting for the non-negligible constant-factor increase in bandwidth needed to transmit the proofs, as well as the compatibility layer that will inevitably be needed for deployment, in the form of bridge nodes. Longer term, and so far only hypothetically, a solution with an even better trade-off may be possible through the use of cryptographic accumulators....
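
As a simplified picture of the data structure Utreexo builds on, the sketch below (our own illustration) maintains only the roots of a forest of perfect Merkle trees and shows the add path; real Utreexo additionally supports deletions and inclusion proofs:

```python
import hashlib

def parent(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

class ForestAccumulator:
    """Roots of perfect Merkle trees, keyed by tree height (add path only)."""

    def __init__(self):
        self.roots = {}  # height -> root hash

    def add(self, leaf: bytes) -> None:
        node, height = hashlib.sha256(leaf).digest(), 0
        # Adding a leaf works like incrementing a binary counter: whenever a
        # tree of the same height already exists, merge the two and carry up.
        while height in self.roots:
            node = parent(self.roots.pop(height), node)
            height += 1
        self.roots[height] = node
```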