The winding road to functional light clients - Part 3

Piper Merriam · client diversity, multiclient, Ethereum

Most wallet software depends on centralized providers like Infura. If we want things to be different, we need a new type of light client capable of running on low-resource devices.

The Ethereum state

When we refer to the "state" we are talking about all of the account information, like ether balances, as well as all of the data stored in smart contracts. At present this amounts to more than 100 million accounts and over 40 GB of data.

Let's take a look at how clients currently work with the state.

Syncing

Ethereum nodes need access to the full state to be able to process incoming blocks. The state can be computed from scratch by executing every block from genesis up to the current head of the chain. This approach isn't typically used as it is very computationally expensive.
Instead, clients tend to pull a copy of the state directly from other fully synced clients. The exact mechanics differ between clients, but generally, when a client first comes online, or returns after being offline for some time, it must spend time syncing to catch up with the tip of the chain. Syncing can take many hours and is a significant downside to using your own node to interact with the chain. The cost is nontrivial: your time, spent waiting for the client to sync, and your computer's computational and storage resources, spent keeping it synced.

We can address both of these issues by designing for resource-constrained devices. Running this light client should consume minimal CPU/RAM/HDD/bandwidth and make it viable to run as an "always on" daemon process. Of course, not all devices are made equal, and there are valid reasons why even a low base load might not be acceptable. For that case, we're designing to eliminate the need for syncing entirely: all a client needs under this model is an accurate view of the tip of the header chain.

Ultimately, the goal is to build a client that is immediately usable from a fresh install or after the client has been offline for some time. We just need access to the right data.

On-Demand State Availability

In today's DevP2P ETH protocol there is a message called GetNodeData. It allows for retrieval of arbitrary pieces of the Ethereum state. We've used this network message in Trinity to build and prove "Beam" sync, one of the foundational pieces of research we've done to prove that this new type of light client is possible. Unfortunately, the current DevP2P ETH network is not well suited for this light client use case. The implicit expectation is that every node stores all 40+ GB of state data and can serve any arbitrary part of the state. A node that fails to respond to these state data requests is unlikely to maintain healthy peer connections.
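
To make the request/response shape concrete, here is a minimal sketch of the GetNodeData round trip: ask a peer for trie nodes by hash, and keep only responses whose keccak-256 hash matches what was requested. The InMemoryPeer class and its method names are stand-ins for a real DevP2P peer, and the example assumes the eth-hash package is installed.

```python
from eth_hash.auto import keccak  # keccak-256; requires the eth-hash package


class InMemoryPeer:
    """Stand-in for a DevP2P peer: serves trie nodes it already has, keyed by hash."""

    def __init__(self, trie_nodes):
        # trie_nodes: iterable of RLP-encoded trie node bodies
        self.store = {keccak(raw): raw for raw in trie_nodes}

    def get_node_data(self, node_hashes):
        # GetNodeData semantics: return whichever requested nodes we have.
        return [self.store[h] for h in node_hashes if h in self.store]


def fetch_and_verify(peer, node_hashes):
    """Request trie nodes by hash; keep only responses that hash back correctly."""
    wanted = set(node_hashes)
    responses = peer.get_node_data(node_hashes)
    return {keccak(raw): raw for raw in responses if keccak(raw) in wanted}


peer = InMemoryPeer([b"\xc0", b"\xc1\x80"])  # dummy RLP blobs
print(fetch_and_verify(peer, [keccak(b"\xc0")]))
```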

On-demand state access patterns

The current network is designed to facilitate syncing the full state. It was only a coincidence that the GetNodeData message happened to be well suited for our experiments in on-demand state retrieval. For clients syncing the full state, the efficient access pattern is to iterate over the data in order, receiving large contiguous chunks of the state. When we look at the wallet use case, however, and how this new type of light client will need to access the state, we instead see a largely random access pattern. The primary ways that wallets access state are through the following JSON-RPC methods:

- eth_getBalance
- eth_getTransactionCount
- eth_call
- eth_estimateGas

Both eth_getBalance and eth_getTransactionCount are simply reading values from the main account trie. They could be called with the address of any of the 100+ million accounts currently in the trie. Both eth_call and eth_estimateGas involve actual EVM execution, which could read from any of the 100+ million accounts, as well as any of their underlying contract storage tries. What we see is that wallets read small amounts of data, and do so in a largely random fashion. This is fundamentally different from syncing the full state, and thus, the two use cases are unlikely to be addressed by the same solution.
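
For illustration, here is roughly what those wallet-style reads look like when issued as raw JSON-RPC calls. The endpoint URL and the zero address are placeholders; the method names and parameter shapes are the standard Ethereum JSON-RPC ones.

```python
import requests

RPC_URL = "http://localhost:8545"  # assumption: any reachable JSON-RPC provider

def rpc(method, params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    return requests.post(RPC_URL, json=payload).json()["result"]

address = "0x" + "00" * 20  # placeholder account

# Simple reads from the main account trie:
balance = rpc("eth_getBalance", [address, "latest"])
nonce = rpc("eth_getTransactionCount", [address, "latest"])

# Speculative EVM execution; may touch many accounts and storage slots:
gas = rpc("eth_estimateGas", [{"from": address, "to": address, "value": "0x1"}])

print(int(balance, 16), int(nonce, 16), int(gas, 16))
```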

Problems we need to solve

This new network needs to tackle a few problems that the current network falls short on.

Sharing and lowering the storage burden

Nodes on this network need to be able to contribute small amounts of storage towards storing the overall state. Instead of fully replicating all of the state onto every node, we instead want each node in the network to store a small subset of that state. With enough nodes, the network can easily house the full state with a high replication factor.
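
As a rough back-of-the-envelope sketch (the per-node contribution and node count below are illustrative assumptions, not targets), even small per-node contributions add up to a comfortable replication factor:

```python
state_size_gb = 40        # approximate size of the full state (see above)
per_node_gb = 0.1         # e.g. each node volunteers ~100 MB of storage
node_count = 10_000       # hypothetical network size

replication_factor = node_count * per_node_gb / state_size_gb
print(f"Each piece of state would be stored ~{replication_factor:.0f} times")
```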

Finding the state you need

With each node only storing a small part of the state, we cannot blindly request data from just any node on the network. Instead, the network will need a mechanism to facilitate finding the nodes that have the desired data.
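
Here is a minimal sketch of the lookup idea, assuming a Kademlia-style network where node IDs and content keys live in the same key space and "closest" means smallest XOR distance. The SHA-256-based key function is only a placeholder for whatever content-key scheme the real network would define.

```python
import hashlib

def distance(a: int, b: int) -> int:
    """Kademlia XOR distance metric."""
    return a ^ b

def key(data: bytes) -> int:
    # Placeholder content-key / node-ID derivation.
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

node_ids = [key(f"node-{i}".encode()) for i in range(100)]
content_key = key(b"some piece of state")

# Ask the handful of nodes whose IDs are closest to the content key.
closest = sorted(node_ids, key=lambda nid: distance(nid, content_key))[:3]
print([hex(nid)[:10] for nid in closest])
```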

Keeping the data up to date

Unlike the chain history, which can be modeled as an append-only file, the Ethereum state is constantly changing. Every transaction modifies account balances and contract storage, and these updates need to be propagated to this network.
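
A minimal sketch of the update problem, assuming per-block state changes arrive as a simple mapping from state key to new value. The `my_keys` membership check stands in for whatever rule decides which slice of the state a given node stores.

```python
def apply_block_diff(local_store: dict, my_keys: set, block_diff: dict) -> int:
    """Apply only the changes that fall within this node's slice of the state."""
    applied = 0
    for state_key, new_value in block_diff.items():
        if state_key in my_keys:
            local_store[state_key] = new_value
            applied += 1
    return applied

# Example: a node responsible for keys "a" and "b" ignores changes to "c".
store, mine = {"a": 1, "b": 2}, {"a", "b"}
print(apply_block_diff(store, mine, {"a": 10, "c": 30}), store)  # 1 {'a': 10, 'b': 2}
```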

Reading data from the network

It will be important for clients to be able to read data from this network efficiently. A call to eth_estimateGas performs speculative execution of the desired transaction against a recent state root to determine how much gas the transaction needs. For a simple value transfer that only touches two accounts, the amount of data needed is relatively small. But for more complex transactions that interact with smart contracts and touch contract storage, the number of items that need to be read from the database grows quickly. If we assume that a single network round trip takes 100ms and that each read must wait on the one before it, then a transaction which touches 100 pieces of state will take ~10 seconds to estimate. If the latency numbers grow too large, some operations will take an unreasonably long time to complete, which would greatly reduce the usefulness of this network.
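
The arithmetic behind that estimate, with the same illustrative assumptions spelled out:

```python
round_trip_ms = 100   # assumed network round-trip latency
state_reads = 100     # state items touched by a contract-heavy call

total_seconds = state_reads * round_trip_ms / 1000
print(total_seconds)  # 10.0 seconds if every read waits on the previous one
```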

Imbalanced contract storage

The account trie is balanced by design. The contract storage is not. This makes contract storage inherently difficult to work with.

Potential Solutions

The topic of on-demand state availability is actively being researched. Exactly which direction this research is going to take us is unclear at this stage, but we are currently focused on two different approaches.

Naive GetNodeData-style Kademlia DHT

One of the simplest approaches would be to keep the GetNodeData model, but instead of expecting every node to store all of the data, have each node store only the data that is close to it in the network. Each node in the trie has a hash, and we can use these hashes to map the trie data onto regions of the DHT key space. You may recall that Kademlia DHT networks have an emergent property of O(log(N)) traversal of the key space.

The problem with this approach is efficiency and speed. Storing the trie data keyed by individual node hashes means storing a large number of intermediate trie nodes, which more than doubles the total amount of data the network needs to store. It also makes retrieval less efficient: to look data up in this structure, you must traverse the trie nodes starting at the state root. For the account trie, this requires an average of seven lookups before reaching the actual account data.

This approach does have some significant upsides. It completely side-steps the problem of imbalanced contract storage, because individual trie node hashes are random, so the data is automatically distributed randomly. It also extends naturally to the network becoming a full archive node, should the network grow large enough to store the full 6TB of archive history. Another major win is the elimination of the need for proofs: because we are directly modeling the trie and all of its intermediate nodes, there is no need for accompanying merkle proofs. Work is underway right now to determine whether this approach can be made performant enough to be viable.
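
The "average of seven lookups" figure falls out of the trie's shape. A rough sketch, assuming a hexary Merkle-Patricia trie holding on the order of 100 million accounts:

```python
import math

accounts = 100_000_000   # roughly the number of accounts in the trie
branching = 16           # hexary Merkle-Patricia trie

levels = math.log(accounts, branching)
print(f"~{levels:.1f} trie levels, i.e. about that many sequential lookups")
```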

Leaves and Proofs

An alternate approach is to group the leaves of the trie into contiguous chunks that all share a common base path. Individual nodes would store all of the leaves whose trie paths are closest to their own location in the Kademlia DHT network. For the account trie, which is well balanced, this approach is very appealing. By addressing the data by its path in the trie, we eliminate the need to walk down the trie and instead get O(1) access to leaf data. Recall that the naive GetNodeData-style approach requires, on average, seven network round trips to access the data stored in the trie leaves. The performance win here is significant, and may be required to make this network usable at all.

The benefits of this approach come at a cost: there is a nontrivial amount of added complexity in keeping the data up to date. There are various approaches on the table for doing so, but each has its own trade-offs. The data can be updated in place, at the cost of individual nodes performing potentially expensive computation, or new, updated proofs can be disseminated around the network each time a new block is mined. These options effectively trade computation against bandwidth, both of which are resources that we want to treat as scarce.
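
A minimal sketch of the addressing idea, assuming leaves are grouped by a fixed-length prefix of their trie path (the keccak-256 hash of the account address). The prefix length and the way the prefix is hashed into the DHT key space are illustrative choices, not a specification; the example again assumes the eth-hash package.

```python
from eth_hash.auto import keccak  # requires the eth-hash package

PREFIX_NIBBLES = 4  # group leaves sharing the first 4 nibbles of their trie path

def trie_path(address: bytes) -> str:
    """An account's 64-nibble path in the account trie: keccak-256 of its address."""
    return keccak(address).hex()

def content_key(address: bytes) -> bytes:
    """One DHT location per group of leaves sharing a path prefix."""
    prefix = trie_path(address)[:PREFIX_NIBBLES]
    return keccak(prefix.encode())

addr = bytes(20)  # placeholder 20-byte address
print(trie_path(addr)[:PREFIX_NIBBLES], content_key(addr).hex()[:16])
```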

The results of our research will dictate which direction this new network will take.

The next post in this series should be the last of the introductory material. We'll take a look at what the actual client would look like and how we see it being used. We'll cover the high-level roadmap of how we intend to get there, and we'll touch on how this effort relates to "Stateless Ethereum."

Portal Website
Portal Specs