Since at least 2015, some of the smartest people of this generation have been working on one of the most challenging problems: scaling public blockchains.
Whoever gets it right first will probably dominate the field for years, if not decades, to come.
Nor are network effects likely to hinder newcomers much. The current field is still far too small, with little used in production by ordinary consumers. For young millennials, in fact, the best description of blockchain in 2019 is the search-engine space of the 90s, prior to Google’s march towards world domination.
Many have bet on Ethereum, but for a long time now there have been two ethereum teams: one led by Vitalik Buterin and another led by Gavin Wood, the author of ethereum’s yellow paper.
Wood is now set to launch a new blockchain, potentially by autumn. It sounds much like ethereum 2.0, so to understand more we held an extensive interview.
What’s this about a second ICO?
Jack Platts, Web3 Communications: Web3 Foundation will distribute up to 20% of DOTs prior to network launch later this year (see Light Paper or FAQ). As Gavin said in his year-end recap, there will be a generally available public sale for up to 10% of that at some point this year that the W3F will announce when it gets everything in order.
So it’s public sale, not VCs?
Jack Platts: There will be a public sale. In addition to that W3F is working with strategically aligned projects to ensure they have access to DOTs for leasing parachain space. It’s critically important they have DOTs when the network goes live to ensure we have a vibrant community of parachains connecting with Polkadot.
Just trying to understand Polkadot at a conceptual level, in somewhat plain language: how does the relay chain manage the coordination?
Robert Habermeier, Polkadot co-founder: which details of the coordination are you interested in?
Generally, the whole thing, how is it meant to work and how is it different from the Beacon chain?
Habermeier: At a very high level, the Polkadot relay chain manages advancement of state transitions for a set of heterogeneous chains, known as parachains, as well as messages being passed between them. It’s done by garnering attestations on the results of new parachain state transitions by randomly selected sub-committees of the validator set. Security is underpinned by economic games that ensure availability of data necessary to re-evaluate those state transitions as well as evaluation of proofs of invalidity.
Yeah, 99% of people reading that will have no clue what you said. Do we have a random number generator here that allocates stakers to parachains, or how does the coordination work from an end-user’s perspective? And as far as a node is concerned, what are you validating? Does your node have to validate all parachains, or does each parachain have its own nodes, and does the relay chain act as a light node for parachains as well?
Habermeier: The point is that full nodes don’t need to validate all parachains’ state transitions as long as they trust the economic games. The relay-chain serves as a fork-choice and finality beacon for parachains. There is a randomness beacon which allocates stakers to parachain validation committees with periodic shuffling.
In the current PoC implementations, shuffling is done every block, but it’s parameterizable. Parachain full nodes are at the very least also relay-chain light clients, so they can interpret the consensus, finality and fork-choice of the relay-chain. These nodes interpret and execute the parachain state transitions as well.
As such, they can detect when a state transition registered on the relay-chain is actually bad and optionally trigger a dispute-resolution process to slash the validators who attested to the bad transition, in exchange for a bounty. Nodes which do this kind of bounty-hunting are known as Fishermen.
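The committee allocation described above can be sketched very roughly: a randomness beacon seeds a shuffle of the validator set, which is then partitioned into per-parachain sub-committees and re-shuffled periodically. This is a toy model for intuition only; the function and parameter names are invented and do not reflect Polkadot’s actual implementation.

```python
import random

def assign_committees(validators, parachains, beacon_seed):
    """Shuffle the validator set with beacon randomness and split it
    into one sub-committee per parachain (illustrative sketch only)."""
    rng = random.Random(beacon_seed)  # stand-in for the randomness beacon
    shuffled = validators[:]
    rng.shuffle(shuffled)
    size = len(shuffled) // len(parachains)
    return {
        chain: shuffled[i * size:(i + 1) * size]
        for i, chain in enumerate(parachains)
    }

# Re-running with a fresh beacon output each block (or session)
# gives the periodic shuffling mentioned above.
committees = assign_committees(
    validators=[f"v{i}" for i in range(12)],
    parachains=["kitties", "racing", "oracle"],
    beacon_seed=42,
)
```

Because the assignment is unpredictable ahead of time, a would-be attacker cannot know which parachain their stake will end up validating.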
So basically eth 2.0, with presumably some small differences. Is there any major difference do you think?
Habermeier: Sure, there are a lot of differences if you get down into the details. Heterogeneity of “shards”, consensus algorithm, availability games, governance. At a high level the goals are similar to eth 2.0 as well as other hypothetical sharding and interoperability solutions.
Ok, so, is there a blocksize?
Habermeier: More similar to a computation/gas size. The computations that are done on the relay chain are not Turing-complete, but they don’t always correspond directly to the byte-size of transactions.
Why are they not Turing-complete?
Habermeier: If we talk about what is actually being computed on the Relay chain, it’s not meant to be a smart-contract platform where turing-complete work is done on-chain.
The only point where you would run into Turing-complete evaluation is when checking a parachain block’s execution on-chain after it has been reported as bad.
So where do smart contracts come in?
Habermeier: In parachain-space only. The relay chain would provide security for a smart-contracts parachain, e.g. Edgeware, which the Commonwealth Labs team is building or you could deploy an EVM, EWASM, whatever parachain. The relay chain is agnostic over the work that the parachains are doing.
So these parachains are shards, are they? But the difference here, I think, is that anyone can launch a shard?
Habermeier: It could be classified as a sharded system but typically when people refer to shards they are implying that the shards are different sides of the same coin, fragmented for scalability. Parachains may not share any state at all with each other.
You mean you can’t send a transaction from one chain to the other?
Habermeier: Sure, you can send messages between chains, which might be transactions, but there’s no requirement that parachains have to actually do anything with the message.
If you wanted to, you could deploy a parachain which ignores all incoming messages and never sends any out, which means it doesn’t share any state outside of its “black box.”
Typically that won’t be the case — the main difference between parachains and typical sharding is that chains are heterogeneous and specialized to perform certain tasks particularly efficiently.
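The design choice being described can be modelled in a few lines: the relay delivers messages, but each parachain decides what, if anything, to do with them. The “black box” chain below simply drops everything it receives. Class and handler names here are purely illustrative.

```python
class Parachain:
    def __init__(self, name, handler=None):
        self.name = name
        self.state = []
        self.handler = handler  # None means: ignore all incoming messages

    def receive(self, msg):
        # Delivery is mediated by the relay; handling is optional.
        if self.handler is not None:
            self.handler(self.state, msg)

def append_handler(state, msg):
    state.append(msg)

open_chain = Parachain("racing", handler=append_handler)
black_box = Parachain("blackbox")  # shares no state outside itself

for chain in (open_chain, black_box):
    chain.receive({"new_kitty": "0xabc"})
```

After the loop, only the racing chain’s state reflects the message; the black-box chain remains untouched, which is exactly the “doesn’t share any state” case described above.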
Let me make it a bit more concrete. I launch cryptokitties on a parachain. Now I want to launch another parachain which copies the kitties’ DNA from the kitties parachain and makes them race in the racing parachain. Can I do that?
In eth they call it cross-shard communication I think; basically, smart contracts in different shards being able to “read” and maybe even write to each other.
Habermeier: Sure, it’s possible.
So how are these chains talking to each other? Like, how does the racing chain know there’s a new kitty on the kitties chain?
Gavin Wood: The relay chain can relay arbitrary messages, trust-free, between chains. You could, for example, have a pub/sub model whereby the racing chain asks the kitties chain to message it whenever there is a new kitty. Or (cheaper, but with an off-chain component) you could have an actor update the racing chain (by sending a normal tx with a proof using the chain root) when a new kitty exists. Or, most likely since it’s cheapest, whenever you want to race two kitties, you would call into the racing chain with the proof.
The messaging is most useful when you want to have one chain autonomously effect a change of state on another chain.
In principle the messaging can also be used to just ferry data around too (e.g. if you want to query something on an oracle chain), but it’s potentially quite expensive (since you engage the relay chain’s validators) and it’s asynchronous so you’d need to be able to handle the reply coming in some blocks later.
For the “querying the oracle chain” situation, you’d likely want to build an off-chain service (this will likely be built into the protocol eventually) that gathered the relevant data and proof for you from the oracle chain and supplied it to the querying chain just in time.
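The “normal tx with a proof using the chain root” approach can be illustrated with a minimal Merkle membership proof: the racing chain holds only the kitties chain’s state root (learned via the relay) and verifies a compact proof that a given kitty exists, rather than syncing the kitties chain. The hash layout and helper names below are invented for illustration and are not Polkadot’s actual state format.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Build a simple binary Merkle tree, duplicating the last node
    on odd levels, and return the root."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect sibling hashes from leaf to root for the leaf at `index`."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-left)
        index //= 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return proof

def verify(root, leaf, proof):
    """What the racing chain runs: check the leaf against the known root."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

kitties = [b"kitty:1:dna=aa", b"kitty:2:dna=bb", b"kitty:3:dna=cc"]
root = merkle_root(kitties)       # what the racing chain learns via the relay
proof = merkle_proof(kitties, 1)  # supplied by an off-chain actor, just in time
assert verify(root, b"kitty:2:dna=bb", proof)
```

The proof is logarithmic in the number of kitties, which is why this route is cheaper than engaging the relay chain’s validators for a full message pass.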
Doesn’t that mean the relay needs to know everything that’s going on, so as a relay node you need to carry all chains’ data?
Wood: No. That would not be scalable. The sophistication of Polkadot, and the reason we call it (fairly) scalable and sharded, is precisely because relay chain validators do not need to know everything about all parachains.
Collators are split from validators; collators are per-parachain and compile a cryptographic proof of validity. Validators are able to verify and approve this proof without being synchronised on the parachain.
By collators you mean block producers, presumably? So does each chain have a collator sending these proofs out? Meaning then, presumably, that as a collator node you need to know everything that’s going on?
Wood: Collators produce parachain blocks. Validators produce relay chain blocks.
I see, so in more familiar language each chain has its own stakers, and then there’s the relay coordinator with its own stakers, and the chain staker can just give a proof to the relayers and so “talk”?
Wood: Pretty much, yeah. The only minor point to make is that parachains “stakers” don’t need to stake in the typical sense of the word because there isn’t really anything bad that they can do.
Unlike relay chain stakers who contribute to the validation and finality process. Parachain stakers “just” produce block candidates which are always tested for validity before possibly becoming finalised.
So that’s why they’re called “collators”; they really just collate transactions into blocks and present a proof. They don’t bond their tokens for a long period to prove that they won’t misbehave in the future or anything because they’re not really trusted with anything truly important.
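The collator/validator split can be sketched as a stateless re-execution check: the collator packages the witness data alongside the block candidate, and the validator recomputes the transition from that witness alone, without syncing the parachain. The “runtime” here is a toy counter and all names are invented; real proofs-of-validity are far richer.

```python
def state_transition(state: int, txs) -> int:
    """Toy parachain runtime: the state is a counter, txs add to it."""
    return state + sum(txs)

def collate(pre_state, txs):
    """Collator role: collate transactions into a block candidate plus a
    proof-of-validity. In this toy, the 'proof' is just the witness data
    a validator needs to re-execute the transition statelessly."""
    post_state = state_transition(pre_state, txs)
    return {"txs": txs, "post_state": post_state,
            "witness": {"pre_state": pre_state}}

def validate(candidate) -> bool:
    """Validator role: re-execute against the witness only; no parachain
    sync required. A mismatch means the candidate is invalid."""
    recomputed = state_transition(candidate["witness"]["pre_state"],
                                  candidate["txs"])
    return recomputed == candidate["post_state"]

good = collate(10, [1, 2, 3])
bad = dict(good, post_state=999)  # a dishonest collator's claim
```

Because validity is always re-checked before finalisation, a lying collator achieves nothing, which is why collators need no long-term bond.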
Why do you say fairly scalable, what’s the limitation?
Wood: The limitation is that it still falls upon a single set of nodes (the relay chain validators) to handle all the interchain message passing.
This is an O(N^2) problem and can only be mitigated in one of two ways: peer-level messaging or hierarchical chains. We plan Polkadot v2 to support one or both of these.
Each chain can in principle be similar to the relay chain. With the right mix of economic games, it’s foreseeable to have indefinitely many levels of relay chain, sacrificing only (reasonable levels of) latency as messages bubble up and down the tree.
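A back-of-envelope illustration of why hierarchy helps: with a single relay chain, one validator set sits in the middle of every ordered pair of parachain channels, O(N²) of them; with a tree of relay chains of branching factor b, each relay only ever routes among its own b children, so per-relay load stays O(b²) no matter how many chains exist in total. The functions below are just that arithmetic, not any part of the protocol.

```python
def flat_channels(n_chains: int) -> int:
    # One relay mediating every ordered pair of parachains: O(N^2).
    return n_chains * (n_chains - 1)

def per_relay_channels(branching: int) -> int:
    # In a relay-chain tree, each relay routes only among its own
    # children (the uplink to its parent is ignored for simplicity).
    return branching * (branching - 1)

assert flat_channels(100) == 9900    # single relay: load grows quadratically
assert per_relay_channels(10) == 90  # tree node: load bounded by branching
```

The cost of the tree is the extra latency Wood mentions: a message between distant chains must bubble up to a common ancestor relay and back down.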
Alright, well, I think I have a fairly good understanding. The final aspect is that I don’t know if you’ve finalized the details like block times: how many dots per block, how much do stakers get, how much do collators get?
Wood: Relay-chain block times are likely to be around 10-15 seconds, similar to Ethereum. We don’t plan on having a fixed block reward; rewards will be provided per staking session (perhaps every hour) and be based upon a market mechanism designed to optimise to a certain degree of security.
The incentivisation of collators will really depend on the parachain in question. Community chains like Edgeware will likely either dilute their token supply or use transaction fees to give a mild incentivisation. Bridge chains will likely give a cut from the bridge fees. Enterprise/consortium chains will just be run by the enterprise in question. There may yet be innovative, new ways of incentivising collators though – Polkadot’s all about flexibility and we avoid tying parachains into any given collator-incentivisation strategy.
Not sure if you saw Justin Drake’s comment. Are we expecting an effectively “dummy” Polkadot blockchain later this year, or is it the full thing?
Habermeier: Everything we mentioned above, except what we categorized as 2.0 goals, is planned to launch this year.
So then, if I can put it this way: I read about GRANDPA, which sounds kind of basically like CBC. How come you’re CBC-ing before eth?
Habermeier: We are just implementing what our research shows as practical. Seems like Vitalik has been putting some work into CBC-ifying the latest ETH2.0 spec. My opinion is that GRANDPA is similar to CBC but is more practical to implement for now.
Wood: Polkadot v1, scheduled for the end of the year, will be fairly full-on, yes. The aspects that it’ll lack will be the peripheral and auxiliary stuff: bridges, off-chain infrastructure and so forth. They’ll be released as they become ready.
However it will have a full finalising PoS-based consensus giving fast, secure finality able to fluidly adapt to network characteristics without sacrificing block production speed.
It will have what is often termed “sharding” (parallelised transaction processing) together with message passing between chains (shards).
It will have a sophisticated and fully-autonomous governance system capable of enacting its own “hard-forks” (protocol updates). We also plan for it to have a number of parachains at launch, not least a WebAssembly-based smart-contract parachain.
So yeah, I wouldn’t call it “dummy” or “Phase 0” or whatever.
The belief of our research team is that GRANDPA is strictly better than CBC. They share a degree of DNA, but GRANDPA is designed to be more adaptive and more tractable leading to a greater degree of certainty that it will not break under adverse conditions.
Final thing, I haven’t quite kept up with Tezos’ auto governance, how is this meant to work on Polkadot?
Wood: There’s a writeup here. Happy to answer any questions.
Well, the first question is does this mean auto updates?
Wood: I can’t really comment on Tezos as I’m not super-familiar with it. As far as I know it’s simple coin-voting, but please don’t quote me on that. Polkadot is (compared to coin-voting) pretty sophisticated.
Auto updates, yes. Though only within a very strict sandbox: the auto-updated software is just the core “runtime” of the blockchain. It can’t do any dangerous stuff like access the filesystem or network. It can only process blocks and record stuff into the blockchain’s database.
We use WebAssembly for the auto-updatable core to ensure that our core runtime is consensus-safe, secure, fast, cross-platform and well-supported. As far as I know we’re the only ones doing that right now.
How can you auto-update my node? I suppose I have to opt in to the auto-update, otherwise I can just change the code of my node as I like?
Wood: The core of the blockchain is not hardcoded. Rather, it’s defined as part of its state, which, like all things in the state, can change as the chain progresses.
If you alter your software in some way to prevent the blockchain progressing as it should (e.g. by ignoring a transaction), then you’ve just hard-forked.
Nothing stops you from hard-forking, but you would need to take action and hope that the majority takes exactly the same action for it not to be futile.
Those who do not explicitly hard-fork will automatically “go along with” the updates as directed by the (transparent, on-chain) governance process.
This is at odds with chains like Ethereum and Bitcoin, where to make an upgrade node administrators must take explicit action, and if they do not then the upgrade will not happen – even if the general sentiment of token holders is to see the upgrade. I believe this incoherency leads to a dearth of innovation.
Doesn’t that have some attack vectors? 51% of coins obviously, but here it might be, I dunno, 20%?
Wood: Autonomous governance is not without its risks. We’ve erred on the conservative side for version 1.0, requiring a high degree of coherency between stakeholders for any change to get through.
There are a number of means to avoid malicious parties buying stake or votes in order to “brick the chain”.
The council, adaptive quorum biasing, lock-voting and delayed enactment are all there to ensure that “malicious upgrades” don’t make it onto the chain.
We are also keen to perform real-world governance experiments on other “value bearing testnets” for getting empirical data to aid our understanding.
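Adaptive quorum biasing, mentioned above, can be illustrated with its “super-majority approve” flavour: the lower the turnout, the heavier the supermajority a proposal needs to pass. The square-root formulation below is one published description of the mechanism; treat the exact formula as an assumption rather than a specification.

```python
from math import sqrt

def passes_positive_bias(ayes: float, nays: float,
                         turnout: float, electorate: float) -> bool:
    """'Super-majority approve' adaptive quorum biasing: low turnout
    raises the bar for approval. (Illustrative formulation only.)"""
    return nays / sqrt(turnout) < ayes / sqrt(electorate)

# The same 60/40 split passes at high turnout but fails at low turnout.
assert passes_positive_bias(540, 360, turnout=900, electorate=1000)
assert not passes_positive_bias(60, 40, turnout=100, electorate=1000)
```

The conservative bias is the point: a quiet, low-turnout referendum cannot slip a contentious upgrade through.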
So I suppose Polkadot won’t have set-timeline hard-fork upgrades as such, more dev proposals being voted on all the time?
Wood: Upgrades are likely to happen more often than hard-forks, yeah. There will still be a bunch of stuff to do before upgrading, not least audits. But it’ll be much more dynamic.
This council, is this DPOS as far as upgrades are concerned? So do we get a few entities “actually” deciding what goes through as an upgrade?
Wood: The council doesn’t get any power to push an upgrade (or any other change) through, but they can table motions for a straight majority-carries referendum. All upgrades must ultimately pass by referendum.
Referendum voting is coin-lock-voting. The winning side must always offer their coins for lock-up until the date that the upgrade would happen.
But, voters can also offer to lock up their tokens beyond this date. The longer a lock-up they offer, the more voting power their tokens get.
So a voter willing to lock their 6 tokens up for the minimum 2 week period would be on par with a voter with just 1 token but willing to lock up for the maximum 12 weeks.
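The worked example above reduces to a simple weight function. The linear-in-weeks scaling below is just what the quoted example implies (6 tokens × 2 weeks equalling 1 token × 12 weeks); it is an illustrative model, not necessarily the shipped formula.

```python
def vote_weight(tokens: float, lock_weeks: int,
                min_weeks: int = 2, max_weeks: int = 12) -> float:
    """Voting power scales linearly with lock duration, matching the
    worked example in the interview (illustrative model only)."""
    if not (min_weeks <= lock_weeks <= max_weeks):
        raise ValueError("lock period out of range")
    return tokens * lock_weeks

# 6 tokens locked for the minimum 2 weeks is on par with
# 1 token locked for the maximum 12 weeks.
assert vote_weight(6, 2) == vote_weight(1, 12) == 12
```

The longer lock is what puts the voter’s capital at risk of “sinking with the ship”, which is the incentive the next paragraph describes.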
This is designed to empower those willing to stand by their opinions (and potentially “sink with the ship”) over those voting with no strongly held beliefs or against their better judgement.
For a referendum to be cancelled, there must be a unanimous vote to do so by the council presumably? So the council can cancel an upgrade however many polkadoters vote for it?
Wood: If they act in unanimity, yes. If the token holders believe this was an abuse of their collective power, they are free to act en masse and unseat them in an upcoming council election.
Interesting. Alright, I think I’m finished unless you have any other comments? So might as well go a bit more controversial: why did you leave the EF, and why are you launching what some would say is a directly competing chain?
Wood: Well, I think smart contracts are a different use case to parachains, so I don’t really see them as direct competitors. Polkadot aims to be a very thin and highly flexible “blockchain network” able to innovate and assimilate new technology as it comes along. Ethereum is really more of a fixed-functionality application base for end-users.
Besides, friendly competition is always good right?
Of course. You wrote the yellow paper, so do you think eth was your creation or Vitalik’s?
Wood: Vitalik invented Ethereum – my contribution was in engineering and productifying it. Who “created” it is really a matter of semantics.