Solana is all the rage these days, and rightfully so. It’s gone from the dark days of the Alameda crisis to strong price action, and from frequent halts to successfully handling one of the busiest airdrop claims in history – all while maintaining incredibly low fees. From the perspective of onboarding new users, Solana is a good choice: Ethereum L2s still charge up to USD 1 per transaction (and we really don’t think starting from BSC or Tron is a good idea).
Another of Solana’s strengths is its single global state that instantly reflects all market signals, without the arbitrage and bridging hops between rollups or shards. It’s as if trading on all global exchanges were seamless 24 hours a day, with events instantly reflected in price changes on all exchanges, no matter the geography or time zone.
These are the benefits of a monolithic chain at its best, but there remain downsides to this design choice. Most notably, the Solana validator set trends towards centralization due to very high hardware requirements. This happens because Solana handles all three layers of a blockchain monolithically: execution, consensus, and data availability.
On the other end of the design spectrum, modular architecture – and specifically the outsourced data availability layer – is rising in popularity. This approach lowers transaction costs while maintaining low hardware requirements (although MEV threatens this). A modular design also allows for more specialized chains and hardware for specific applications, with dYdX being the best example.
At the forefront of the modular movement is Celestia, a chain optimized for rollup data efficiency. Ethereum, on the other hand, has arrived at a modular approach in a more piecemeal manner, building the plane whilst already flying. We believe rollups are the key to scaling and cheaper transactions, with the battle for data availability layers (and the rest of the modular stack) now on.
The data availability problem was first identified in the early race to scale blockchains. The focus was on minimizing the amount of data to store in order to maximize the number of nodes in a network. The same dynamics underpinned Bitcoin’s block size wars. Data availability refers to a blockchain’s ability to make its data accessible to all network participants. The key breakthrough in solving this problem was the introduction of data availability sampling (DAS), as Bridget Harris explains:
“With DAS, light nodes can confirm that the data is available by participating in rounds of random sampling of block data rather than having to download each entire block. Once multiple rounds of sampling are completed – and a certain confidence threshold is reached that the data is available – the rest of the transaction process is safe to occur. This way, a chain can scale its block size yet maintain easy data availability verification. And considerable cost savings are also achieved: these emerging layers can reduce DA costs by up to 99%.”
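To make the sampling math concrete, here is a minimal sketch (our own illustration, not code from any DA client), assuming the standard 2x erasure-coding setup in which an attacker must withhold at least half of the extended block data to make any of it unrecoverable:

```python
import math

# Toy model of DAS confidence. Assumption (illustrative): with 2x erasure
# coding, hiding any data requires withholding >= 50% of the extended block,
# so each uniformly random sample detects a withholding attack with p >= 0.5.
WITHHELD_FRACTION = 0.5

def confidence_after(samples: int, withheld: float = WITHHELD_FRACTION) -> float:
    """Probability that at least one random sample hits withheld data."""
    return 1.0 - (1.0 - withheld) ** samples

def samples_needed(target: float, withheld: float = WITHHELD_FRACTION) -> int:
    """Smallest number of random samples reaching the target confidence."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - withheld))

for k in (5, 10, 20):
    print(f"{k} samples -> {confidence_after(k):.6f} confidence")
print("samples for 99.9999% confidence:", samples_needed(0.999999))  # -> 20
```

Under that assumption, roughly 20 random samples already push confidence past 99.9999%, regardless of how large the block is – which is why light nodes can verify availability without downloading whole blocks.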
Celestia, Avail, NearDA, and EigenDA are the most important DA projects. They don’t need to verify transactions; they simply check that each block was added by consensus and that new blocks are available to the network, relying on third-party sequencers to execute and verify transactions. Celestia launched in October 2023, Avail and EigenDA are expected to bring their mainnets live in the coming months, and Near has most recently announced its DA solution. Let’s review the unique features of each:
And then we have the rollups themselves. For rollups that build on these DA providers, there are a number of tools that make launching one easier:
In a truly modular way, modules of each layer are chosen based on specific needs. The variety of combination options can be seen here:
Rollup-as-a-service projects like Eclipse make it even easier to launch a rollup, where the developer chooses which technology to use for each of the three modules.
Similarly, Conduit allows you to deploy a rollup in 15 minutes, with Optimism, Arbitrum Orbit, and Celestia as the supported stacks. A monthly hosting infrastructure fee is paid to Conduit, and there’s a separate data availability fee paid to the provider.
The wealth of possible combinations that modularity creates is certainly a major step forward. Is it akin to the difficulty of building an early website compared to the ease and customization of Squarespace today?
Despite the growth in DA projects, many have reservations about outsourcing DA. Vitalik made his position clear: “Your data layer must be your security layer.” Dankrad Feist, another member of the Ethereum Foundation, concurs: “If it doesn’t use Ethereum for data availability, it’s not an (Ethereum rollup) and therefore not an Ethereum L2.”
We agree. Rollups with outsourced data availability will be less secure than those using the same chain for data and consensus (and really should be referred to as “validiums”), although secure enough for certain applications. Short-term projects using such rollups will emerge and fade quickly, making them a good experimentation and testing ground. However, for long-term holding of financial assets, L1s such as Ethereum, or rollups that use them for both data and consensus, will remain the networks with the lowest risk profile.
While skeptical about outsourced data availability, the Ethereum community is big on modular architecture. The early vision of scaling via sharding was abandoned in favor of a modular, rollup-centric design.
The three main updates needed to implement the vision are rollups (we talked about these before), proposer-builder separation (“instead of a block proposer generating a ‘revenue-maximizing’ block by itself, it delegates the task to a market of outside actors (builders)”), and data availability sampling. The latter is a way for light nodes to verify that a block was published by downloading only a few randomly selected pieces of data. This is technically more challenging than the other two and will require two to three years to ship.
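As a toy illustration of proposer-builder separation (a deliberately simplified model of ours, not the actual MEV-Boost relay protocol), the proposer’s job shrinks to picking the highest-paying bid from a market of builders:

```python
from dataclasses import dataclass

# Simplified PBS model: builders compete to fill the block and bid a payment
# to the proposer; the proposer commits to the best bid without building the
# block itself. Names and values here are hypothetical.
@dataclass
class BuilderBid:
    builder: str
    payment_to_proposer_wei: int
    sealed_header: str  # in practice the block body is revealed only after commitment

def select_winning_bid(bids: list[BuilderBid]) -> BuilderBid:
    return max(bids, key=lambda b: b.payment_to_proposer_wei)

bids = [
    BuilderBid("builder_a", 120_000_000_000_000_000, "0xabc..."),
    BuilderBid("builder_b", 95_000_000_000_000_000, "0xdef..."),
]
print(select_winning_bid(bids).builder)  # -> builder_a
```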
Important note: EIP-4844 was the first step in improving Ethereum’s data availability layer before data availability sampling goes live. As discussed earlier, enhancing Ethereum is similar to building the plane whilst flying; once the Ethereum Foundation recognized the need for rollups (i.e. when Vitalik published the famous rollup-centric roadmap), the team opted to extend blocks with blobs (a dedicated space tailored specifically for rollup data). Blobs are expected to reduce the cost of rollup transactions up to 10-fold. EIP-4844 is scheduled to go live with the Dencun upgrade in March/April. While this is a temporary solution to keep Ethereum competitive for two to three years, the long-term solution will be supporting validity proofs on mainnet itself, which will make rollups orders of magnitude cheaper.
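For a rough sense of the savings, here is a back-of-envelope sketch (our own estimate; the 16-gas-per-byte calldata rule and the 128 KB blob size are protocol constants, while the gas prices are purely assumed for illustration):

```python
# Back-of-envelope: posting ~100 KB of rollup data as calldata vs. as a blob.
CALLDATA_GAS_PER_BYTE = 16      # cost of a non-zero calldata byte (EIP-2028)
GAS_PER_BLOB = 131_072          # blob gas per 128 KiB blob (EIP-4844)

DATA_BYTES = 100_000            # hypothetical rollup batch
ASSUMED_BASE_FEE_GWEI = 30      # assumed execution gas price
ASSUMED_BLOB_FEE_GWEI = 1       # assumed blob gas price (separate fee market)

calldata_eth = DATA_BYTES * CALLDATA_GAS_PER_BYTE * ASSUMED_BASE_FEE_GWEI * 1e-9
blob_eth = GAS_PER_BLOB * ASSUMED_BLOB_FEE_GWEI * 1e-9  # one blob fits the batch

print(f"calldata: ~{calldata_eth:.4f} ETH vs blob: ~{blob_eth:.6f} ETH")
# -> calldata: ~0.0480 ETH vs blob: ~0.000131 ETH (under these assumed prices)
```

The key design point is that blob data is priced in its own fee market and pruned after a few weeks, so rollups stop competing with ordinary transactions for scarce calldata space.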
While Solana strongly defends its philosophy of monolithic architecture (and it might prove right for many use cases), the industry seems to be converging on modularity. In the case of Ethereum, only modular architecture will enable a future where:
Transactions are cheap for millions of users thanks to rollups (scalability);
The network is protected from censorship and threats like 51% attacks (security); and
An average PC or even a mobile can run a node to verify transactions (decentralization).
One might ask whether Ethereum’s modular architecture solves the blockchain trilemma that was supposed to be unsolvable. Technically it doesn’t, because Ethereum is no longer a monolithic network; but as a modular network, it does.
Of these three, we think decentralization is the most important part of the trilemma to solve. Innovation will eventually drive down transaction costs; prioritizing decentralization (especially geographic) is the only way to ensure long-term security for the network. Ethereum is leading in decentralization by having the most distributed validator set, with more than 800,000 validators. At the same time, with the modular approach, it can adapt to new design innovations through customized rollups that launch on top. Celestia and others certainly share this vision. The remaining question is whether Ethereum can move in this modular direction fast enough to keep up with the competition, which is building from scratch rather than fixing the plane whilst flying.