Infrastructural Frontiers for a Multi-Rollup World

Intermediate | 1/11/2024, 8:52:37 AM
The article delves into the four fundamental pillars shaping the future of the multi-rollup ecosystem, emphasizing the importance of ZK technology and economic models.

Recently, there has been a noticeable trend where an increasing number of dApps are announcing the launch of their own rollups. Additionally, there’s a rise in the number of generic rollups that are set to go live.

Generic rollups address Ethereum’s scalability issues as it faces rising transaction volumes and dApp growth. These layer 2 solutions process more transactions off-chain, later securing them on the main chain, balancing scalability with security. Their versatility supports various dApps, removing the need for unique scaling solutions for each application.

App-specific rollups are tailored solutions that address the unique needs of individual applications. They offer enhanced speed by optimizing transaction processing for specific use cases. Cost-wise, they might provide a more efficient alternative to generic solutions, especially during network congestion. Their standout feature is flexibility. Unlike general-purpose Layer 2 solutions that are rigid and are more constrained by the enshrined EVM design, app-specific rollups can be customized, making them ideal for applications like games that require specific precompiles. Additionally, they allow dApps to better capture value, offering more control over token economics and revenue streams.

With the consensus forming around the proliferation of rollups, looking a year into the future where multiple rollups dominate the market, the need for a robust infrastructure becomes paramount. This infrastructure will serve as the “reinforced concrete” of a multi-rollup world.

This article will delve into four fundamental pillars that will shape the future of the multi-rollup ecosystem:

  1. Security As a Foundation: The Security Layer is the bedrock of trust in the decentralized world. In this section, we explore the vital role it plays in ensuring the integrity of Layer 2 transactions, identifying trust assumptions, and addressing potential security pitfalls.
  2. Balancing Customizability and Interoperability: Achieving seamless interoperability among diverse rollups is pivotal for a modular blockchain world. In this section, we dive into the interop problems brought by a modular structure and discuss current solutions that address fragmentation and foster a cohesive ecosystem.
  3. Cost Analysis: Reducing costs is crucial for the broader adoption and viability of rollups, as it lowers the economic barriers compared to utilizing smart contracts. Cost efficiency in rollups is primarily achieved through harnessing economies of scale by aggregating with other rollups to share fees, and embracing the division of labor by delegating certain tasks to external service providers.
  4. Shared Security: A shared security layer is essential as it alleviates the time- and resource-intensive process of bootstrapping security for new protocols or modular layers, ensuring robust security comparable to established platforms like Ethereum. Numerous solutions, like EigenLayer, Babylon, Cosmos's ICS, and Mesh Security, have emerged, showcasing a variety of applications.

Together, these four layers will provide a comprehensive blueprint for the infrastructure needed to support a thriving and cohesive modular blockchain world.

Security As a Foundation

At the heart of any decentralized system lies trust and security. Their absence undermines the very promise of a trustless ecosystem. This is why the security layer is paramount; without it, users and TVL are at risk. The decline of Plasma and sidechains offers a cautionary tale. Once seen as Ethereum's scaling saviors, issues like the "data availability problem" eroded trust and led to their waning popularity. That's why the security layer is Part I of this article.

To understand the intricacies of rollups and their potential vulnerabilities, it’s essential to dissect the lifecycle of a Layer 2 transaction. Using smart contract rollups as a reference, let’s delve into each phase and identify the trust assumptions and potential security pitfalls:

  1. Tx Submission through RPC:
    1. Trust Assumption: The RPC endpoint is reliable and secure. Users and dApps currently place their trust in RPC providers, e.g., Alchemy and Infura.
    2. Security Concern: Users might be censored by RPC providers, e.g., Infura and Alchemy blocking RPC requests to Tornado Cash. RPC providers might also face DDoS attacks or compromise, e.g., Ankr being compromised via a DNS hijack.
    3. Solutions: RPC providers, such as Infura, are actively pursuing a decentralized roadmap. Additionally, users have the option to choose decentralized solutions like the Pocket Network.
  2. Sequencer Orders the Tx, Provides Soft Commitments: unsafe state
    1. Trust Assumption: Users expect sequencers to fairly order transactions and provide genuine soft commitments.
    2. Security Concern: The system must resist censorship, ensuring all transactions are processed without bias. It's crucial for the system to remain continuously operational, and it would be better to guard against sequencers extracting harmful MEV at the expense of end users.
    3. Solutions:
      1. CR and liveness:
        1. current solutions ranked by CR and liveness level (low to high): single sequencer < PoA < permissionless PoS sequencers < shared sequencers < based rollups (sequenced by the L1)
          1. Note that PoA with a limited set of authorities and no support for forced txns can be less censorship-resistant than a centralized sequencer with forced txns enabled.
          2. Regarding liveness, another crucial metric to consider is proposer failure, which occurs when a proposer goes offline. In such cases, it’s essential to ensure that users can still withdraw their funds.
            1. Even if the sequencers are censoring or refuse to work, some rollups enable users to submit their txs directly to the L1 by themselves, i.e., the escape hatch (liveness for forced txs depends on the specific implementation). The problem is that it might be too expensive for users with limited funds to do so, and users might expect real-time CR and liveness.
            2. Certain rollup solutions, such as Arbitrum and Fuel, offer the capability for anyone to become a proposer after a certain time delay, i.e self propose.
            3. Check out this indicator for each rollup: https://l2beat.com/scaling/risk
        2. More details on the other solutions can be found in my previous thread: https://twitter.com/yuxiao_deng/status/1666086091336880128
      2. MEV-protection:
        1. Different privacy solutions can help protect users from being front-run or sandwiched, as the tx info is hidden (this also helps with CR). Related methods for hiding tx info include FCFS with a private mempool (what Arbitrum and Optimism are implementing right now), SUAVE's TEE solution, threshold encryption (Shutter Network is working on this), etc. The more sophisticated the solution, the less complex the computation that can be performed on the txs.
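To make the private-mempool idea concrete, here is a minimal Python sketch of commit-then-reveal ordering. All names are illustrative; production systems use threshold encryption or TEEs rather than plain hash commitments:

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class SealedTx:
    # Commitment to the payload; the sequencer orders this without
    # seeing the contents, so it cannot front-run or sandwich.
    commitment: str
    payload: Optional[str] = None  # revealed only after ordering is fixed

class EncryptedMempool:
    """Toy commit-then-reveal mempool (illustrative, not a real protocol)."""

    def __init__(self):
        self.ordered = []  # FCFS order, fixed at submission time

    def submit(self, payload: str) -> SealedTx:
        tx = SealedTx(commitment=hashlib.sha256(payload.encode()).hexdigest())
        self.ordered.append(tx)
        return tx

    def reveal(self, tx: SealedTx, payload: str) -> bool:
        # A reveal is accepted only if it matches the earlier commitment,
        # so ordering could not have depended on transaction contents.
        if hashlib.sha256(payload.encode()).hexdigest() == tx.commitment:
            tx.payload = payload
            return True
        return False

pool = EncryptedMempool()
t1 = pool.submit("swap 100 ETH -> USDC")
t2 = pool.submit("swap 5 ETH -> USDC")
assert pool.reveal(t1, "swap 100 ETH -> USDC")      # honest reveal accepted
assert not pool.reveal(t2, "swap 500 ETH -> USDC")  # mismatched reveal rejected
assert pool.ordered.index(t1) < pool.ordered.index(t2)  # FCFS preserved
```

The trade-off noted above shows up here too: since the sequencer only ever sees commitments, it cannot perform any content-dependent computation on pending transactions.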

MEV Roast | Encrypted Mempools - Justin Drake (Ethereum Foundation) - YouTube

  1. Note that what we want is MEV protection, not MEV elimination. Research by @tarunchitra summarizes two main directions to reduce MEV: reduce the flexibility of the miner to reorder transactions by enforcing ordering rules, or introduce a competitive market for the right to reorder, add, and/or censor transactions. However, the paper concludes that neither fair ordering nor economic mechanisms alone can effectively mitigate MEV for all payoff functions; there are lower bounds showing MEV cannot be removed beyond a certain point.
  2. Sequencer executes and posts tx batch and state roots to the DA layer when it is economically reasonable; safe state
    1. Trust assumption: Block producers publish the whole block on the DA layer so that others can download and validate them.
    2. Security Concern: If part of the data is not available, the block might contain malicious transactions that are being hidden by the block producer. Even if the block contains non-malicious transactions, hiding them might compromise the security of the system. It is very important that the tx data be available, as the rollup needs to know the state of the network and account balances.
    3. Solutions:
    4. Posting on Ethereum is currently the safest but most expensive solution (it would be ~90% cheaper after proto-danksharding, but even a 10x throughput increase might still not be enough for the rollups): the rollups' txs are downloaded and gossiped by all Ethereum nodes. As Ethereum has a large number of nodes replicating and verifying transaction data, it is highly unlikely that data will ever disappear or be entirely unavailable.
      1. After danksharding, Ethereum nodes will not download all the tx data, but only parts of the data using DAS and KZG commitments (similar to Avail's solution mentioned below)
      2. Under the modular concept, it might be more efficient for rollups to post tx data to a DA layer that is only responsible for DA (the theoretical performance of Ethereum might be slightly inferior because, in addition to DA, it still retains L1 execution; see the performance comparison between EigenDA and Ethereum below).
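The "economically reasonable" posting decision above is plain cost amortization: the fixed overhead of an L1 posting is spread over the batch. A toy calculation (gas numbers are illustrative, not real fee schedules):

```python
def cost_per_tx(batch_size: int, fixed_overhead_gas: int = 21_000,
                gas_per_tx_data: int = 2_000) -> float:
    """Amortized L1 gas cost per transaction in a batch (illustrative)."""
    return fixed_overhead_gas / batch_size + gas_per_tx_data

def should_post(batch_size: int, target_gas_per_tx: float) -> bool:
    # The sequencer posts once amortization brings per-tx cost under target.
    return cost_per_tx(batch_size) <= target_gas_per_tx

assert not should_post(10, 2_500)  # 21000/10 + 2000 = 4100, too expensive
assert should_post(50, 2_500)      # 21000/50 + 2000 = 2420, post the batch
```

This is also why batches lag: waiting for more txs keeps amortized cost down, at the price of confirmation latency.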

  1. Current modular DA solutions present trade-offs between safety and performance. It’s challenging to compare the security of DA using just one dimension:
    1. Avail and Celestia utilize DAS to ensure the availability of the data; as long as there is enough sampling, the data is secured. LCs can sample and get a high guarantee of DA, as data unavailability would be easily detected and recovered by a very small portion of LCs. This is not possible without DAS. The decentralization of the DA layer, i.e., the number of nodes in the network along with the stake distribution, decides the security level. EigenDA doesn't use DAS but uses a proof-of-custody mechanism to prevent restakers from being lazy: the DA operators have to routinely compute a function that can only be completed if they have downloaded all the required data, and they get slashed if they fail to attest to the blobs correctly (there is no need to store the data after the proof has been performed, though).
    2. Ensuring the data redundancy process, i.e., erasure encoding, is accurate. EigenDA, Ethereum after 4844, and Avail use KZG commitments to guarantee accuracy, but these are computationally intensive. Celestia employs fraud proofs: light nodes must wait for a brief interval before they can confirm a block has been correctly encoded, finalizing it from their perspective. (*Celestia could potentially switch to validity proofs if that proves a better trade-off)
    3. Economic security of the DA layer (reorg and collusion risks): this depends on the value staked in the DA layer, i.e., two-thirds of the value staked in Avail and Celestia.
    4. Relaying the DA attestation of the DA layer to Ethereum. If the data is posted to another DA layer while the settlement contract is still in Ethereum, then we need a bridge contract to validate the DA is available in the DA layer for final settlement.
      1. Celestia’s blobstream verifies the signatures on the DA attestation from Celestia. The attestation is a Merkle root of the L2 data signed by the Celestia validators attesting to the fact that the data is available on Celestia. This feature is available on testnet right now.
      2. Avail uses an optimistic approach to verify the DA attestation. Once the attestation is posted to the bridge contract on Ethereum, a waiting period begins during which the attestation is assumed to be valid unless challenged.
      3. Succinct is working with Avail and Celestia on a zk-SNARK-based data attestation bridge, which makes the attestation process more secure and cheaper by just verifying the zk proof.
      4. For EigenDA, the disperser splits and posts tasks to EigenDA nodes, then aggregates signatures from them and relays the data to Ethereum.
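The sampling guarantee behind DAS can be quantified. With erasure coding, an adversary must withhold a large fraction f of the extended block to prevent reconstruction, so a light client taking k independent random samples fails to notice the withholding with probability roughly (1-f)^k. A sketch (sampling with replacement is a simplification; f = 0.25 is illustrative):

```python
def miss_probability(f: float, k: int) -> float:
    """Probability that k independent random samples all land on available
    shares when a fraction f of the extended block is withheld."""
    return (1.0 - f) ** k

# Even a single light client gains high confidence quickly:
assert miss_probability(0.25, 1) == 0.75
assert miss_probability(0.25, 20) < 0.01   # under 1% chance of missing it
# Confidence also compounds across many independent light clients.
```

This is why a large, decentralized population of sampling light clients makes withheld data practically impossible to hide.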
  2. Final Settlement: finalized state
    1. Trust Assumption 1:
      1. Rollup full nodes (nodes that can fully calculate the state without relying on other proofs) can finalize the first valid rollup block at its height as soon as it is published on the parent chain, as they have the necessary data and computational resources to verify the validity of the block quickly. However, this is not the case for other third parties like light clients, which rely on validity proofs, fraud proofs, or dispute resolution protocols to verify the state trustlessly without needing to run a full replica of the chain themselves.
    2. Security Concern 1:
      1. For ZK rollups, the L1 verifies the ZKP and only accepts correct state roots. The difficulty mainly lies in the cost and the generation process of the ZKP.
      2. On the other hand, Optimistic Rollups depend on the premise that at least one honest party will promptly submit fraud proof to contest any malicious transactions. However, most current fraud-proving systems are not yet permissionless, and the submission of fraud proofs is reliant on only a few validators.
    3. Solutions 1:
      1. Permissionless fraud proving enabled by Arbitrum’s BOLD protocol. The main reason why fraud proving is permissioned right now is the concern of delay attacks:
        1. During the challenge period, any stakers other than the proposer can launch a challenge. The proposer is then required to defend their assertion against each challenger individually, one at a time. At the conclusion of each challenge, the party on the losing end forfeits their stake.
        2. In a delay attack, a malicious party (or group of parties) can prevent or delay the confirmation of results back to the L1 chain by making challenges and deliberately losing the disputes and their stakes.
        3. The BOLD challenge protocol addresses this by guaranteeing fixed upper bounds on confirmation times for Optimistic Rollups' settlement, by ensuring a single honest party in the world can win against any number of malicious claims.
      2. Witness Chain can act as a watchtower for new optimistic rollups to guarantee that at least one honest party will challenge an invalid state:
        1. For established rollups such as Arbitrum and Optimism, there are enough intrinsic incentives for several third-party providers such as explorers, Infura-like services, and their foundation to monitor the chain state and submit fraud proof when necessary. However, new rollups or appchains might lack this level of security.
        2. Witness Chain employs a unique incentive mechanism, "Proof of Diligence," which ensures that watchtowers (validators) are consistently motivated to monitor and verify transactions, ensuring the state submitted to the parent chain is correct. This mechanism guarantees that each watchtower performs its due diligence, since the rewards they receive are specific and independent for each node. In other words, if one watchtower discovers a bounty, it cannot share the exact incentive payout with other watchtowers, ensuring that every node conducts its own independent verification. Additionally, Witness Chain offers flexibility by allowing rollups to specify custom requirements, such as the number of watchtowers and their geographical distribution, powered by "Proof of Location," their independent service. This flexibility ensures a balance between security and efficiency.
          *The watchtower network is also emerging as a new layer in the rollup stack itself, providing pooled security to other related applications, such as rollup security itself, interop protocols, notification services, and keeper networks. More details will be announced in the future.
    4. Trust Assumption 2:
      1. The whole settlement process for smart-contract rollups is written in a smart contract on L1. This settlement contract is assumed to be logically accurate, bug-free, and not maliciously upgraded.
    5. Security Concern 2: Smart contract rollups’ bridges and upgrades are controlled by multi-sig wallets. The bridge has the ability to arbitrarily steal funds from users via a malicious upgrade.
    6. Solutions 2:
      1. The most popular idea today is to add time delays that allow users to exit if they disagree with a planned upgrade. However, this solution requires users to continually monitor all chains they have tokens on in case they ever need to exit.
      2. Altlayer’s Beacon Layer can act as a social layer for upgrades for all the rollups enshrined to it. Sequencers that register to operate a rollup together with the Beacon Layer rollup validators can socially fork the rollup regardless of whether or not the enshrined bridge contract on Ethereum gets upgraded.
      3. Enshrined rollups in the long term: enshrined rollups have been the endgame of the Ethereum roadmap for several years now. Besides enshrining the bridge/fraud-proof verifier on L1, the settlement contract is also enshrined.
        1. Ethereum PSE is working in this direction
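The interactive dispute at the heart of the BOLD-style fraud proving described earlier reduces to a bisection search over the execution trace: find the first step where the two parties disagree, then re-execute only that single step on L1. A simplified single-challenger sketch with toy state roots:

```python
def bisect_dispute(honest_trace, claimed_trace):
    """Binary-search for the first execution step whose post-state the two
    parties dispute. Traces are lists of state roots; in a real protocol
    each comparison is a challenge-response round, and only the returned
    step is re-executed on L1."""
    lo, hi = 0, len(honest_trace) - 1
    assert honest_trace[lo] == claimed_trace[lo]   # agreed start state
    assert honest_trace[hi] != claimed_trace[hi]   # disputed end state
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest_trace[mid] == claimed_trace[mid]:
            lo = mid   # divergence happens after mid
        else:
            hi = mid   # divergence is at or before mid
    return hi

honest  = ["s0", "s1", "s2", "s3", "s4"]
claimed = ["s0", "s1", "x2", "x3", "x4"]
assert bisect_dispute(honest, claimed) == 2  # first disputed step
```

Because the search is logarithmic in the trace length, an honest party can pin down and refute an invalid claim in a bounded number of rounds, which is what makes fixed upper bounds on confirmation time possible.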

As for sovereign rollups, the main difference is that the chain state is settled by rollup full nodes instead of an enshrined smart contract on the L1. A more detailed comparison can be found at https://www.cryptofrens.info/p/settlement-layers-ethereum-rollups

It’s important to note that more security doesn’t equate to better performance. Typically, as security measures increase, there’s a trade-off with scalability. Therefore, it’s essential to strike a balance between the two. In conclusion, rollups offer the flexibility to select varying levels of security assumptions based on individual preferences. This adaptability is one of the remarkable features of the modular world, allowing for a tailored approach to meet specific needs while maintaining the integrity of the system.

Balancing Customizability and Interoperability

It’s a well-known adage in the modular world: “Modularism, not maximalism.” If rollups can’t interoperate securely and efficiently, then modularism ≠ maximalism but = fragmentation. It’s crucial to figure out how to handle interoperability between different rollups.

Let’s first revisit how monolithic chains achieve interoperability. In simplest terms, they achieve cross-chain operations by verifying the consensus or state of the other chain. There are various approaches available in the market, and the differences lie in who is responsible for the verification (official entities, multi-signature mechanisms, decentralized networks, etc.) and how the correctness of the verification is ensured (through external parties, economic guarantees, optimistic mechanisms, zk-proofs, etc.). For a deeper dive into this topic, check out my favorite bridging pieces: Thoughts on Interoperability.
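As a toy illustration of that verification, consider a light client that accepts a foreign chain's header once signatures from at least two-thirds of its validator set check out. The hash-based "signature" below is purely illustrative; real verifiers hold public keys and a real signature scheme:

```python
import hashlib

def toy_sign(header: str, secret: str) -> str:
    # Stand-in for a real signature scheme (illustration only).
    return hashlib.sha256((header + secret).encode()).hexdigest()

def verify_header(header, sigs, validators, threshold=2 / 3):
    """Light-client-style check: accept a foreign chain's header once a
    threshold of its known validator set has signed it."""
    valid = sum(1 for v, secret in validators.items()
                if sigs.get(v) == toy_sign(header, secret))
    return valid / len(validators) >= threshold

validators = {"v1": "k1", "v2": "k2", "v3": "k3"}
header = "block#100:state_root=0xabc"
# Two of three validators sign: enough to cross the 2/3 threshold.
sigs = {v: toy_sign(header, s) for v, s in validators.items() if v != "v3"}
assert verify_header(header, sigs, validators)
assert not verify_header("block#100:state_root=0xbad", sigs, validators)
```

The differences between bridge designs mentioned above map onto who runs this check (official entities, multisigs, decentralized networks) and what backs it (economics, optimistic challenges, or zk proofs of the verification itself).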

With the rise of modularization, the issue of interoperability has become more intricate:

  1. Fragmentation problem:
    1. The proliferation of rollups is expected to significantly surpass the number of L1s, as it's far easier to launch an L2 than an L1. Could this lead to a highly fragmented network?
    2. While monolithic blockchains offer a consistent consensus and state for straightforward verification, what will be the verification process for modular blockchains, which have three (or possibly four) distinct components (DA, execution, settlement, and sequencing)?
      1. DA and settlement layer become the main source of truth. Execution verification is already available as rollups inherently provide execution proofs. Sequencing happens before posting to DA.
  2. Extensibility Problem:
    1. As new rollups are introduced, the question arises: can we promptly offer bridging services to accommodate them? Even if building up a rollup is permissionless, you might need to spend 10 weeks convincing other folks to add one. Current bridging services predominantly cater to mainstream rollups and tokens. With the potential influx of numerous rollups, there’s a concern about whether these services can efficiently evaluate and launch corresponding solutions to support these emerging rollups without compromising on security and functionality.
  3. User-experience Problem:
    1. The final settlement of optimistic rollups takes seven days, much longer than finality on L1s. The challenge is addressing this seven-day wait for the official bridges of optimistic rollups. The submission of ZKPs also has a time lag, as rollups usually wait to accumulate a large batch of transactions before submitting a proof to save verification costs. Popular rollups like StarkEx typically post proofs to L1 only once every few hours.
    2. Rollups’ tx data submitted to the DA/settlement layer will have a time lag to save costs (1-3 mins for optimistic rollups and a few hours for zk rollups, as mentioned above). This needs to be abstracted away from users who need quicker and safer finality.

The good news is that there are emerging solutions to these challenges:

  1. Fragmentation problem:
    1. While there is a proliferation of rollups in the ecosystem, it’s noteworthy that the majority of smart contract rollups currently share a common settlement layer, namely Ethereum. The primary distinctions among these rollups lie in their execution and sequencing layers. To achieve interoperability, they simply need to mutually verify the final state of the shared settlement layer. However, the scenario becomes slightly more intricate for sovereign rollups. Their interoperability is somewhat challenging due to differing settlement layers. One approach to address this is by establishing a Peer-to-Peer (P2P) settlement mechanism, where each chain directly embeds a light client of the other, facilitating mutual verification. Alternatively, these sovereign rollups can first bridge to a centralized settlement hub, which then serves as a conduit for connecting with other chains. This hub-centric approach streamlines the process and ensures a more cohesive interconnection among diverse rollups. (similar to the status of cosmos interop)

  1. Besides Ethereum serving as one of the settlement hubs, other potential settlement hubs include Arbitrum, zkSync, and StarkNet, which act as settlement hubs for L3s built on them. The interop layer of Polygon 2.0 also functions as a central hub for zk rollups built on top of it.
  2. In conclusion, while the number of rollups and their variations are expanding, the quantity of settlement hubs remains limited. This effectively simplifies the topology, narrowing down the fragmentation problem to just a few key hubs. Although there will be more rollups than alt-L1s, cross-rollup interactions are less complicated than cross-L1 interactions, as rollups usually fall within the same trust/security zone.
  3. How different settlement hubs interoperate with each other can mirror how current monolithic chains interoperate, as mentioned at the beginning.
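The simplification can be quantified: with n rollups, direct pairwise light-client links grow quadratically, while a hub-centric topology grows roughly linearly in n. A quick illustrative calculation:

```python
def pairwise_links(n: int) -> int:
    # Every chain embeds a light client of every other chain.
    return n * (n - 1) // 2

def hub_links(n: int, hubs: int = 1) -> int:
    # Every chain connects to each hub; hubs interconnect pairwise.
    return n * hubs + hubs * (hubs - 1) // 2

assert pairwise_links(50) == 1225        # quadratic blow-up
assert hub_links(50, hubs=3) == 153      # 150 spoke links + 3 hub links
```

A small, fixed number of settlement hubs thus keeps the interop surface manageable even as the rollup count grows.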

*Moreover, in an effort to eliminate fragmentation on the user side, certain Layer 2s, such as ZKSync, have integrated native Account Abstraction to facilitate a seamless cross-rollup experience.

  1. Extensibility problem
    1. Hyperlane (modular security for modular chains) and Catalyst (permissionless cross-chain liquidity) were built to solve the permissioned interop problem.
      1. The essence of Hyperlane is to create a standardized security layer that can be applied across various chains, making them inherently interoperable.
      2. Catalyst is designed to offer permissionless liquidity for modular chains. It acts as a bridge, allowing any new chain to connect liquidity and swap with major hubs like Ethereum and Cosmos seamlessly.
    2. Rollup SDK / RaaS providers offer native bridging services within their ecosystems
      1. Now, new rollups are mostly launched through existing rollup SDKs or RaaS services, so they are inherently interoperable with other rollups that use the same services. For example, for infrastructure built with the OP Stack, the foundational level is a shared bridging standard, which allows for the seamless movement of assets across everything that shares the OP Stack codebase. Rollups launched through AltLayer are all enshrined to the beacon layer, which acts as the settlement hub and ensures safe interoperability. Rollups launched through Sovereign Labs or zkSync are interoperable with each other out of the box based on proof aggregation (explained in more detail later).

User-experience (UE) problem:

  1. Before diving into this part, let's first recognize the different levels of commitments and their time lags:

1. Some parties are comfortable with stage-1 soft commitments by the L2; e.g., exchanges like Binance only wait for a certain number of Layer 2 blocks to consider transactions confirmed, without waiting for the batch to be submitted to Layer 1
2. Some bridge providers, like Hop Protocol, wait for a number of blocks on the sending chain and determine finality based on L1 consensus (stage 2)
3. For trust-minimized bridges, and for users withdrawing funds from L2 to L1 using the official bridge, it might take too long (several hours to 7 days)
  1. Reducing either Stage 2 or Stage 3 would offer significant advantages, delivering a stronger guarantee in a shorter time frame for a more secure and faster user experience. Additionally, achieving a trust-minimized bridge has always been a coveted goal, especially in light of the frequent security incidents with bridges.
  2. Reducing the final settlement time (7 days for optimistic rollups and several hours for zk rollups), i.e., shortening stage 3
    1. Hybrid Rollups (Fraud Proof + ZK): This approach combines the strengths of ZK proofs and optimistic rollups. While generating and verifying proofs can be resource-intensive, it’s only executed when a state transition is challenged. Instead of posting a ZK proof for every batch of transactions, the proof is computed and posted only when a proposed state is contested, similar to an optimistic rollup. This allows for a shorter challenge period since the fraud proof can be generated in a single step, and the cost of ZK proofs is avoided in most scenarios.
      1. Notably, Eclipse's SVM rollups and LayerN utilize Risc0 to generate the zk fraud proof. The OP Stack has granted support to Risc0 and Mina for zk fraud-proof development. Additionally, Fuel has recently introduced a similar hybrid method that supports multiple provers.
    2. After posting data to the DA layer, do some extra verification of the correctness of execution to increase the confidence level (high requirements, same as a full node)
      1. When the sequencer batches txs to the optimistic rollups’ DA layers, it ensures canonical ordering and DA for x-rollup txs. Therefore, the only thing that needs to be confirmed is the execution: S1 == STF(S0, B1). Of course, you can simply run a full node (high requirements) to verify the tx, but what we really want is to reduce latency for light clients. Prover networks like @SuccinctLabs and @RiscZero can confirm the post-execution state by providing succinct state proofs. This provides robust confirmation for dapps and users.
      2. Altlayer has a beacon layer between rollups and L1. Sequencers from the beacon layer are responsible for sequencing, executing, and generating proof of validity (POV). POV allows verifiers to verify a state transition for a rollup later without having access to the entire state. With decentralized verifiers performing periodic checks, we have achieved a highly robust transaction finality. No need to wait for 7 days as the verifiers have already completed the necessary checks. As a result, cross-chain messaging has become faster and more secure.
      3. EigenSettle guarantees verification through economic mechanisms. Opt-in EigenLayer nodes with stakes do the computation to ensure the validity of the state and use their collateral to back their commitments. Any amount that is lower than the amount of stake these operators have posted can be treated as safely settled and enables economically-backed interoperability.
    3. Instant Verification with ZK Rollups:
      1. Sovereign Labs and Polygon 2.0 employ an innovative approach to achieve rapid finality by circumventing the settlement layer. Instead of waiting to submit the proof to Ethereum, we can instantly disseminate the generated zk proofs through a peer-to-peer network and conduct cross-chain operations based on the propagated zkps. Later on, we can utilize recursion to consolidate them into a batch proof and submit it to Layer 1 when it becomes economically viable.
        1. It's not fully settled, though; we still need to trust the correct aggregation of the ZKPs. Polygon 2.0's Aggregator may be operated in a decentralized manner, involving Polygon validators from the shared validator pool, thereby improving network liveness and resistance to censorship. Nevertheless, this method still leads to a shorter finality time, since aggregating ZKPs from multiple chains is certainly faster than waiting for sufficient ZKPs on a single chain.
      2. zkSync's hyperchains utilize a layering method to aggregate ZKPs and achieve shorter finality. In contrast to settling on L1, hyperchains can settle their proofs on an L2 (becoming an L3). This approach facilitates rapid messaging, as the cost-effective environment on L2 enables swift and economically viable verification.
        1. To enhance scalability further, we can replace L2 settlement with a minimal program required to operate L3 with messaging. This concept has been substantiated through specialized proofs that allow for aggregation.
  3. Addressing the Time Lag in Posting to the DA Layer (some methods can also be applied to reduce the settlement period), i.e., shorten stage 2
    1. Shared Sequencing Layer: If rollups share a sequencing layer (e.g., through a shared sequencer service or using the same set of sequencing layers), they can obtain a soft confirmation from the sequencer. This, combined with an economic mechanism, ensures the final state’s integrity. Possible combinations include:
      1. Stateless shared sequencer + a builder that makes execution promises by staking, proposed by Espresso. This approach is more suitable for rollups with a PBS structure, assuming the block builder already has the necessary rights to parts of the blocks. Since the builder is stateful and serves as the underlying execution role for the shared sequencers, it's natural for it to make additional promises.
      2. Shared validity sequencing proposed by Umbra research: stateful shared sequencer + fraud proof to ensure good behavior. Sequencers accept cross-chain requests. To prevent dishonest behavior by sequencers, a shared fraud-proof mechanism is used, involving slight changes to the original rollup fraud-proof mechanism. During the challenge period, challengers would also verify the correct execution of atomic actions. This may involve checking the roots of bridging contracts on different rollups or examining the Merkle proof provided by the sequencers. Dishonest sequencers get slashed.
    2. Third-Party Intervention: External entities like Hop, Connext, and Across can step in to mitigate risks. They validate messages and front the capital for users’ cross-chain financial activities, effectively reducing the waiting period. For example, Boost (GMP Express) is a special feature of Axelar and Squid that reduces transaction time across chains to 5-30 seconds for swaps below a value of $20,000 USD.
    3. Intent infrastructure for bridging as a specific form of third-party intervention: This revamped infrastructure can embrace more third parties to step in and solve cross-domain intents for users.
      1. Through an intent-focused architecture (abstracting away friction and complexity from users by involving sophisticated actors like MMs and builders), users convey their intended objective or result without detailing the precise transactions needed to realize it. Individuals with a high tolerance for risk can step in, front the necessary capital, and levy increased fees.
      2. It’s safer because users’ funds would only be released when the outcome is valid. It might be faster and more flexible because more parties (solvers) are permissionlessly involved in the solving process and competing to give a better outcome to users.
      3. UniswapX, Flashbots’ SUAVE, and Essential are all working in this direction.
      4. The challenging aspect of this solution centers around the settlement oracle. Let’s take UniswapX as an example. To facilitate cross-chain swaps, we rely on a settlement oracle to determine when to release funds to solvers. If the settlement oracle opts for the native bridge (which is slow), or if a third-party bridge is used (raising trust concerns), or even if it’s a Light Client bridge (not yet ready for use), we essentially find ourselves in the same loop as before. Hence, UniswapX also offers “Fast cross-chain swaps” similar to an optimistic bridge.
      5. Simultaneously, the effectiveness of intent resolution relies on competition among solvers. Since solvers need to rebalance their inventory across different chains, this may lead to solver centralization, limiting the full potential of intents.
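
The cross-domain intent flow described above can be sketched as a toy model. None of this is any specific protocol’s API: `Intent`, `OptimisticSettlement`, and every parameter name are hypothetical, and the optimistic release mirrors the “fast cross-chain swaps” idea, where escrowed funds go to the solver only if the claimed fill survives a challenge window.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    user: str
    sell_amount: int      # escrowed on the origin chain
    min_buy_amount: int   # required outcome on the destination chain

@dataclass
class OptimisticSettlement:
    """Toy settlement oracle: the solver fronts capital on the
    destination chain, then is repaid from escrow only if the claimed
    fill survives a challenge window."""
    challenge_window: int
    claims: dict = field(default_factory=dict)  # solver -> (intent, claim_block)

    def claim_fill(self, solver, intent, filled_amount, block):
        # funds are only ever released for a valid outcome
        assert filled_amount >= intent.min_buy_amount, "outcome not valid"
        self.claims[solver] = (intent, block)

    def challenge(self, solver):
        # a successful fraud claim voids the solver's pending settlement
        self.claims.pop(solver, None)

    def settle(self, solver, block):
        intent, claim_block = self.claims[solver]
        assert block >= claim_block + self.challenge_window, "window still open"
        return intent.sell_amount  # escrow released to the solver
```

A solver who filled the user’s order at block 5 can collect the 1,000-unit escrow at block 15; a successful challenge in between voids the claim. The oracle’s latency and trust model remain the crux, as noted above.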

To summarize, there are three ways to address the UE (user experience) problems:

  1. Use the magic of zk:
    1. The primary challenge lies in the performance of zk technology, encompassing both the time required for generation and associated costs. Additionally, when dealing with highly customizable modular blockchains, the question arises: do we possess a zk proving system capable of accommodating the myriad of differences?
  2. Use an economic slashing scheme as a guarantee:
    1. The major drawback of this approach is the time delay inherent in the decentralized method (for instance, in the case of EigenSettle, we must wait for the cap to be reached). Furthermore, the centralized approach offers limited commitments (as exemplified by shared sequencing), relying on builders/sequencers to make commitments, which can be restricted and lack extensibility.
  3. Trust a third party:
    1. While trusting a third party can introduce additional risks, as users must have faith in the bridge, intent-enabled cross-domain swaps represent a somewhat more “decentralized” form of third-party bridging. However, this approach still contends with oracle latency, trust issues, and potential time delays, as you must wait for someone to accept your intent.

It’s interesting that modularization also introduces new possibilities for interoperability experiences:

  1. Enhanced Speed with Modular Components: By breaking the stack down into finer modules, users can get quicker confirmations at the Layer 2 level (which might already be safe enough for ordinary users)
  2. Shared Sequencer for Atomic Transactions: The concept of a shared sequencer could potentially enable a new form of atomic transactions, such as flash loans. More details: https://twitter.com/sanjaypshah/status/1686759738996912128
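
The shared-sequencer atomicity idea can be sketched as follows, with all names hypothetical: both legs of a cross-chain bundle are included on both rollups or on neither. Note this is atomic inclusion; guaranteeing atomic execution additionally requires a stateful sequencer or builder that can simulate both legs, per the designs discussed earlier.

```python
class SharedSequencer:
    """Toy shared sequencer ordering two rollups at once. A cross-chain
    bundle (e.g. the two legs of a cross-rollup flash loan) is included
    in both blocks or in neither: atomic inclusion."""
    def __init__(self):
        self.blocks = {"rollup_a": [], "rollup_b": []}

    def include_bundle(self, leg_a, leg_b, simulate):
        # a stateful sequencer/builder can simulate both legs before
        # committing; a stateless one can only promise inclusion
        if simulate(leg_a) and simulate(leg_b):
            self.blocks["rollup_a"].append(leg_a)
            self.blocks["rollup_b"].append(leg_b)
            return True
        return False
```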

Modular interoperability solutions are experiencing rapid growth, and currently, there are various approaches, each with its own strengths and weaknesses. Perhaps the ultimate solution is still some distance away, but it’s heartening to see so many individuals striving to create a safer and more connected modular world before the rollup explosion arrives.

Cost Analysis

One factor contributing to the limited number of rollups in existence is the economic consideration associated with their launch, in comparison to utilizing smart contracts. Operating via smart contracts adopts a more variable cost model, where the primary expense is the gas fee, whereas launching and maintaining a rollup incurs both fixed and variable costs. This cost dynamic suggests that applications with a substantial transaction volume or relatively high transaction fees are better positioned to leverage rollups, as they have a greater capacity to amortize the fixed costs involved. Consequently, initiatives aimed at reducing the cost associated with rollups—both fixed and variable—are paramount. Delving into the cost components of rollups, as elucidated by Neel and Yaoqi during their talk at ETHCC, provides a clearer picture:

Employing a financial model, such as the Discounted Cash Flow (DCF) analysis, can be instrumental in evaluating the viability of launching a rollup for an application. The formula:

DCF(Revenue - Expenses) > Initial Investment

serves as a baseline to ascertain whether the operational income surpasses the initial investment, thereby making the launch of a rollup a financially sound decision. Protocols that succeed in lowering operational costs while augmenting revenue are instrumental in encouraging the increased adoption of rollups. Let’s explore one by one:
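
The viability check above can be expressed directly. This is a minimal sketch, and all the numbers below are purely illustrative:

```python
def dcf(net_cash_flows, discount_rate):
    """Present value of a series of yearly net cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(net_cash_flows, start=1))

def rollup_launch_is_viable(yearly_revenue, yearly_expenses,
                            discount_rate, initial_investment):
    """DCF(Revenue - Expenses) > Initial Investment."""
    net = [r - e for r, e in zip(yearly_revenue, yearly_expenses)]
    return dcf(net, discount_rate) > initial_investment

# Illustrative only: $2M revenue vs $1.2M expenses per year over 3 years,
# a 10% discount rate, and a $1.5M initial development/deployment cost.
print(rollup_launch_is_viable([2e6] * 3, [1.2e6] * 3, 0.10, 1.5e6))  # True
```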

  1. Initial Development and Deployment Fee
    1. The initial setup, despite the availability of open-source SDKs like OP Stack and Rollkit, still demands a significant amount of time and human capital for installation and debugging. Customization needs, for instance, integrating a VM into an SDK, further escalate the resources required to align the VM with the various interfaces each SDK provides.
    2. RaaS (Rollup-as-a-Service) providers like AltLayer and Caldera can significantly alleviate these complexities and efforts, encapsulating the economic benefits of the division of labor.
  2. Recurring Fee/Revenue
    1. Revenue (++++)
      1. User fees
        1. = L1 Data Posting Fee + L2 Operator Fee + L2 Congestion Fee
        2. Although some user fees might be offset by expenses, scrutinizing and striving to lower these costs is vital as rollups may become untenable if user fees are prohibitively high for users. (Explored in the expense section)
      2. Maximal Extractable Value (MEV) captured
        1. Primarily related to the transaction value originating from the chain, this can be boosted either by enhancing MEV-extraction efficiency or increasing cross-domain MEV.
        2. Partnering with established searchers, employing a PBS auction to foster competition, or leveraging SUAVE’s block building as a service are viable strategies to optimize MEV capture efficiency.
        3. For capturing more cross-chain MEV, utilizing a shared sequencer layer or SUAVE (shared mempool and shared block building) is beneficial as they connect to several domains.
          1. According to recent research by Akaki, shared sequencers are valuable for arbitrage searchers aiming to seize arbitrage opportunities across different chains, as they ensure victory in simultaneous races on all chains.
          2. SUAVE serves as a multi-domain order flow aggregation layer, aiding the builder/searcher in exploring cross-domain MEV.
    2. Expenses (- - - -)
      1. Layer 2 (L2) operation fee
        1. Ordering: It might be tricky to compare centralized and decentralized ordering solutions. Competition in more decentralized solutions like Proof of Efficiency can help decrease cost by keeping the operator margin minimal and also incentivize posting batches as often as possible. On the other hand, centralized solutions typically involve fewer parties, which can simplify the process but may not benefit from the same cost-reducing dynamics.
        2. Execution: This is where full nodes use VMs/EVMs to execute the changes to a rollup’s state given new user transactions.
          1. Efficiency can be bolstered through optimized alt-VMs like Fuel and Eclipse’s Solana VM, which enable parallel execution. However, deviating from EVM compatibility could introduce friction for developers and end-users, along with potential security issues. The compatibility of Arbitrum’s Stylus with both EVM and WASM (which is more efficient than EVM) is commendable.
        3. Proving
          1. Prover Market
            1. Theoretically, utilizing a specialized prover market like RISC Zero, =nil; Foundation, and Marlin, instead of creating a proprietary centralized or decentralized prover network, can result in cost savings for several reasons:
              1. There may be a higher level of participation in a dedicated prover market, which in turn fosters increased competition, ultimately leading to lower prices.
              2. Provers can optimize hardware usage and be repurposed when a specific application does not require immediate proof generation, reducing operation costs and providing cheaper service.
              3. Naturally, there are downsides, including potentially capturing less token utility and relying on the performance of an external party. Furthermore, distinct zk rollups may impose varying hardware prerequisites for the proof generation process. This variability could pose a challenge for provers seeking to expand their proving operations.
              4. More on prover markets and prover networks: https://figmentcapital.medium.com/decentralized-proving-proof-markets-and-zk-infrastructure-f4cce2c58596
      2. Layer 1 (L1) data posting
        1. Opting for a more cost-effective Data Availability (DA) layer apart from Ethereum, or even a DAC solution, can significantly cut down expenses, although at the potential cost of reduced security (explored further in the security layer). For gaming and social applications, which usually involve low-value but high-bandwidth transactions, scalability might be a more important factor than security.
        2. Employing Ethereum as the DA layer allows for leveraging proto-danksharding and danksharding to attain cost efficiency. Moreover, given that the blob posting fee is charged per blob irrespective of how much of the blob the rollup actually utilizes, there exists a need to balance cost and delay: while a rollup would ideally post a complete blob, a low transaction arrival rate means waiting a long time to fill the blob space, which incurs excessive delay costs.
          1. Potential solutions: joint blob posting for small rollups to share the cost;
      3. L1 settlement fee
        1. For optimistic rollups, the settlement cost is relatively low. Post-Bedrock, Optimism pays only ~$5 a day to Ethereum;
        2. For zk rollups, settlement is relatively expensive because of zk-proof verification
          1. zk-proof aggregation
            1. Depending on the underlying proof system, a rollup on Ethereum might spend anywhere from 300k to 5m gas to validate a single proof. But since proof sizes grow very slowly (or not at all) with the number of transactions, rollups can reduce their per-transaction cost by waiting to accumulate a large batch of transactions before submitting a proof.
            2. Sovereign Labs and Polygon 2.0’s interoperability layer, as mentioned before, aggregate proofs from multiple rollups; each rollup can then verify the state of multiple rollups at the same time, saving on verification costs. zkSync’s layering structure combined with proof aggregation further reduces verification costs.
            3. Nonetheless, this method is most effective when two domains utilize the same zkVM or a shared prover scheme (zkSync’s hyperchains use the same zkEVM with fully identical zkp circuits); otherwise, it may result in compromised performance.
              1. NEBRA Labs brings economies of scale and composability to proof verification on Ethereum. The NEBRA UPA (Universal Proof Aggregator) aggregates heterogeneous proofs so that verification costs can be amortized. UPA can also be used to compose proofs from different sources to enable new use cases.
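
The expense items above lend themselves to a back-of-the-envelope model, with all figures illustrative: the per-transaction L1 cost is the DA posting cost plus the fixed proof-verification gas amortized over the batch, which is why larger batches and aggregated proofs reduce per-transaction cost.

```python
def per_tx_cost(batch_size, bytes_per_tx, da_gas_per_byte,
                verify_gas, gas_price_gwei):
    """Per-transaction L1 cost (in gwei): DA posting plus the fixed
    proof-verification gas amortized over the whole batch."""
    da_gas = bytes_per_tx * da_gas_per_byte
    return (da_gas + verify_gas / batch_size) * gas_price_gwei

# Illustrative figures: 120 bytes/tx, 16 gas per calldata byte, and
# 500k gas to verify one proof (within the 300k-5m range cited above).
solo = per_tx_cost(batch_size=100,    bytes_per_tx=120,
                   da_gas_per_byte=16, verify_gas=500_000, gas_price_gwei=1)
big  = per_tx_cost(batch_size=10_000, bytes_per_tx=120,
                   da_gas_per_byte=16, verify_gas=500_000, gas_price_gwei=1)
print(solo, big)  # 6920.0 1970.0
```

With 100x the batch size, amortized verification falls from 5,000 to 50 gas per transaction, while the DA component stays flat — exactly the dynamic that makes both batching and cross-rollup proof aggregation attractive.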

To summarize, the primary methods to economize on rollup costs include:

  1. Co-aggregating with other rollups to share fees or harness economies of scale:
    1. It’s noteworthy that such aggregation is also potentially crucial for achieving interoperability. As previously highlighted, employing a congruent layer or framework across diverse rollups simplifies interaction amongst them, ensuring hassle-free information exchange. This consolidated strategy fosters a more integrated and unified Layer 2 infrastructure.
  2. Delegating certain tasks to external service providers, capitalizing on the principle of division of labor.

As more rollups emerge (meaning you can collaborate with additional parties to divide fees), and more rollup service providers offer more refined services (providing a wider selection of mature upstream providers), we anticipate that the expenses associated with establishing a rollup will decrease.

Shared Security

If you aim to achieve an equivalent level of security (both economically and in terms of decentralization) as the source chain, simply deploy a smart contract or a smart contract rollup. If harnessing a portion of the security provided by the source chain is sufficient to improve performance, there are currently several shared security solutions at your disposal.

Shared security solutions greatly ease the security bootstrapping process for most protocols or modular layers that need initial security. This is very meaningful for a future modular world, as we envision more infra/protocols emerging to facilitate the functionality of a modular world, with more parts of a rollup becoming modular beyond DA, execution, settlement, and sequencing. If a rollup uses a certain modular layer (such as DA) or a service whose security isn’t on par with Ethereum’s, then the overall security of the entire modular chain could be compromised. We need shared security to enable a decentralized and reliable SaaS service economy.

Eigenlayer, Babylon, Cosmos’s ICS, and Osmosis’s Mesh Security serve a pivotal role in offering decentralized trust as a service to other infrastructural entities.

  1. Eigenlayer allows Ethereum stakers to repurpose their staked $ETH to secure other applications built on the network.
  2. Cosmos’ ICS allows the Cosmos Hub (“provider chain”) to lend its security to other blockchains (“consumer chains”) in return for fees.
  3. Mesh Security, proposed by Osmosis, enables token delegators (not validators) to restake their staked tokens on a partner chain within the ecosystem. This allows for bidirectional or multilateral security flow, as different appchains can combine their market caps to enhance overall security.
  4. Babylon allows BTC holders to stake their BTC within the Bitcoin network and provide security to other PoS chains by optimizing the use of the Bitcoin scripting language and advanced cryptographic mechanisms.

ICS and Mesh Security, both integral to the Cosmos ecosystem, primarily aim to facilitate inter-chain borrowing of security. These solutions predominantly address the security needs of Cosmos appchains, allowing them to leverage the security of other chains within the ecosystem. Specifically, Cosmos Hub ICS serves Cosmos chains that don’t want to bootstrap their own validator sets (replicated security), while Mesh Security requires each chain to have its own validator set but allows far greater optionality for chain governance.

On the other hand, Babylon presents a unique approach by unlocking the latent potential of BTC holders’ idle assets without moving BTC out of its native chain. By optimizing the use of Bitcoin’s scripting language and integrating advanced cryptographic mechanisms, Babylon provides additional security to the consensus mechanisms of other chains, with great features like faster unbonding periods. Validators on other PoS chains who hold BTC can lock their BTC on the Bitcoin network and sign PoS blocks with their BTC private keys. Misbehavior such as double-signing would leak the validator’s BTC private key and burn its BTC on the Bitcoin network. BTC staking will launch in Babylon’s second testnet.
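
The “double-signing leaks the private key” mechanism can be illustrated with a toy Schnorr-style scheme. This is not Bitcoin’s actual curve arithmetic or Babylon’s exact construction; it only shows why signing two different messages with the same nonce lets anyone solve for the private key.

```python
# Toy Schnorr-style signatures over the integers mod a prime q.
q = 2**127 - 1  # a prime, standing in for the group order

def sign(private_key, nonce, challenge):
    """s = nonce + challenge * private_key (mod q)."""
    return (nonce + challenge * private_key) % q

def extract_key(s1, e1, s2, e2):
    """Two signatures reusing one nonce: s1 - s2 = (e1 - e2) * x,
    so the private key x falls out with one modular inversion."""
    return ((s1 - s2) * pow(e1 - e2, -1, q)) % q

x, k = 123456789, 987654321          # private key and (reused) nonce
s1, s2 = sign(x, k, 111), sign(x, k, 222)
print(extract_key(s1, 111, s2, 222) == x)  # True
```

Because the second signature mechanically reveals the key, a double-signing validator’s locked BTC becomes spendable (and burnable) on the Bitcoin network — slashing without any smart contract.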

While Babylon navigates the constraints of Bitcoin’s lack of smart contract support, Eigenlayer operates on the Turing-complete Ethereum platform. Not only does Eigenlayer offer economic security to new rollups and chains, but its environment on Ethereum also allows for a more diverse range of AVSs. According to Eigenlayer’s article on programmable trust, the security Eigenlayer can provide can be further broken down into three types:

  1. Economic trust: trust from validators making commitments and backing their promises with financial stakes. This trust model ensures consistency regardless of the number of parties involved. There must be objective slashing conditions that can be submitted and verified onchain, and it’s usually heavyweight for restakers.
  2. Decentralized trust: trust from having a decentralized network operated by independent and geographically isolated operators. This aspect emphasizes the intrinsic value of decentralization and enables use cases that are not objectively provable, as decentralization increases the difficulty of collusion. Utilizing decentralized trust is usually lightweight.
  3. Ethereum inclusion trust: trust that Ethereum validators will construct and include your blocks as promised, alongside the consensus software they are running. This can specifically be committed to by Ethereum validators (not LST restakers), who run software sidecars to perform extra computation and receive extra rewards.
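
The economic-trust model above can be sketched minimally, with hypothetical names and no relation to Eigenlayer’s actual contract interfaces: operators back commitments with stake, the total stake measures the cost of corruption, and only an objectively verifiable fault proof triggers slashing.

```python
from dataclasses import dataclass, field

@dataclass
class RestakingRegistry:
    """Toy model of economic trust via restaking (hypothetical
    interface, not any real protocol's contracts)."""
    stakes: dict = field(default_factory=dict)  # operator -> stake at risk

    def opt_in(self, operator, stake):
        self.stakes[operator] = stake

    def economic_security(self):
        # the value an adversary must be willing to burn to misbehave
        return sum(self.stakes.values())

    def slash(self, operator, verify_fault_proof):
        """Slash only on an objective, onchain-verifiable fault proof."""
        if verify_fault_proof():
            return self.stakes.pop(operator, 0)
        return 0
```

The requirement that `verify_fault_proof` be objective and onchain-checkable is exactly why economic trust is described as heavyweight: any commitment that cannot be reduced to such a condition falls back on decentralized trust instead.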

Now that we are clear on the security building blocks, what can we expect?

  1. ICS and Mesh Security lower the security barriers for Cosmos appchains like Neutron, Stride, and Axelar.
  2. Eigenlayer can fit into many solutions that have been mentioned before:
    1. rollup security: relay networks, watchtowers, sequencing, MEV protection, EigenDA
    2. rollup interop: EigenSettle, bridges
    3. cost analysis: prover networks
    4. more to explore, check out https://www.blog.eigenlayer.xyz/eigenlayer-universe-15-unicorn-ideas/
  3. Babylon is running a testnet to increase the security level of other PoS chains. Its first testnet provides a timestamping service that adds additional security to high-value DeFi activities from several Cosmos chains like Akash, Osmosis, Juno, etc.

The core idea behind these shared security solutions is to enhance the capital efficiency of staked or illiquid assets by introducing additional responsibilities. However, it’s essential to be vigilant about the added risks when seeking higher returns:

  1. Increased complexity introduces more uncertainties. Validators become exposed to additional slashing conditions that may lack enough training wheels, which can be precarious.
    1. Eigenlayer aims to address this issue by proposing the implementation of a veto committee. This committee serves as a mutually trusted entity among stakers, operators, and AVS developers. In the event of a software bug within the AVS, stakers and operators won’t face penalties as the veto committee can cast a veto vote. While this approach may not be inherently scalable and could be subjective if AVSs are not strictly aligned with use cases based on trustlessly attributable actions, it can still serve as a valuable means to initiate a risk mitigation strategy during the early stages.
  2. Greater complexity also brings additional burdens. It can be overwhelming for less experienced validators to determine which service to share security with. Also the initial setup period may involve a higher risk of errors. Additionally, there should be mechanisms in place to allow “less tech-savvy” validators and stakers to benefit from higher yields, provided they are willing to accept relatively elevated risks, without being constrained by their operational capabilities.
    1. Rio Network and Renzo are both working on effectively addressing this challenge for Eigenlayer by offering a structured approach to cautiously select sophisticated node operators and AVS services for potential restakes, elevating security levels and reducing entry barriers for participants.

Furthermore, as Eigenlayer gains wider adoption, it could potentially open up new horizons in the realm of the Financialization of Security. This could facilitate the valuation of shared security and the various applications built upon it.

  1. One limitation presented to EigenLayer is its ability to scale capital allocation by outcompeting yield opportunities in DeFi for the same assets it supports (LSTs). EigenLayer commoditizes the value of security, and this opens the door for many primitives to underwrite this value and give restakers the ability to both restake and participate in the greater DeFi ecosystem.
    1. Ion Protocol is a product attempting to do this in order to scale the reach that restaking can have. Ion is building a price-agnostic lending platform that is built to specifically support staked and restaked assets via using ZK-infrastructure to underwrite the lower-level slashing risk present in such assets (ZK state proof systems + ZKML). This could initiate the beginning of the birth of many novel DeFi primitives built upon the underlying value of security that EigenLayer commoditizes, further enabling the ability of restaking to scale across the entire ecosystem.

As we stand at the cusp of significant transformations, it is crucial to embrace the principles of security, interoperability, and cost-effectiveness. These pillars will not only guide the development of more scalable and efficient blockchain solutions but will also pave the way for a more interconnected and accessible digital world. Embracing these changes with foresight and adaptability will undoubtedly lead to groundbreaking advancements in the blockchain ecosystem.

Disclaimer:

  1. This article is reprinted from [mirror]. All copyrights belong to the original author [SevenX Ventures]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

Infrastructural Frontiers for Multi-Rollup World

Intermediate1/11/2024, 8:52:37 AM
The article delves into the four fundamental pillars shaping the future of the multi-Rollup ecosystem, emphasizing the importance of zk and economic models.

Recently, there has been a noticeable trend where an increasing number of dApps are announcing the launch of their own rollups. Additionally, there’s a rise in the number of generic rollups that are set to go live.

Generic rollups address Ethereum’s scalability issues as it faces rising transaction volumes and dApp growth. These layer 2 solutions process more transactions off-chain, later securing them on the main chain, balancing scalability with security. Their versatility supports various dApps, removing the need for unique scaling solutions for each application.

App-specific rollups are tailored solutions that address the unique needs of individual applications. They offer enhanced speed by optimizing transaction processing for specific use cases. Cost-wise, they might provide a more efficient alternative to generic solutions, especially during network congestion. Their standout feature is flexibility. Unlike general-purpose Layer 2 solutions that are rigid and are more constrained by the enshrined EVM design, app-specific rollups can be customized, making them ideal for applications like games that require specific precompiles. Additionally, they allow dApps to better capture value, offering more control over token economics and revenue streams.

With the consensus forming around the proliferation of rollups, looking a year into the future where multiple rollups dominate the market, the need for a robust infrastructure becomes paramount. This infrastructure will serve as the “reinforced concrete” of a multi-rollup world.

This article will delve into four fundamental pillars that will shape the future of the multi-rollup ecosystem:

  1. Security As a Foundation: The Security Layer is the bedrock of trust in the decentralized world. In this section, we explore the vital role it plays in ensuring the integrity of Layer 2 transactions, identifying trust assumptions, and addressing potential security pitfalls.
  2. Balancing Customizability and Interoperability : Achieving seamless interoperability among diverse rollups is pivotal for a modular blockchain world. In this section, we dive into the interop problems brought by a modular structure and discuss current solutions to address fragmentation, and foster a cohesive ecosystem.
  3. Cost Analysis: Reducing costs is crucial for the broader adoption and viability of rollups, as it lowers the economic barriers compared to utilizing smart contracts. Cost efficiency in rollups is primarily achieved through harnessing economies of scale by aggregating with other rollups to share fees, and embracing the division of labor by delegating certain tasks to external service providers.
  4. Shared Security: A shared security layer is essential as it alleviates the time and resource-intensive process of bootstrapping security for new protocols or modular layers, ensuring a robust security comparable to established platforms like Ethereum. Numerous solutions like Eigenlayer, Babylon, Cosmos’s ICS, and Mesh Security have emerged, showcasing a variety of applications

Together, these four layers will provide a comprehensive blueprint for the infrastructure needed to support a thriving and cohesive modular blockchain world.

Security As a Foundation

At the heart of any decentralized system lies trust and security. Their absence undermines the very promise of a trustless ecosystem. This is why the security layer is paramount; without it, users and TVL are at risk. Plasma and Sidechains’ decline offers a cautionary tale. Once seen as Ethereum’s scaling savior, its issues, like the “data availability problem,” eroded trust and led to its waning popularity. That’s why the security layer becomes part I of this article.

To understand the intricacies of rollups and their potential vulnerabilities, it’s essential to dissect the lifecycle of a Layer 2 transaction. Using smart contract rollups as a reference, let’s delve into each phase and identify the trust assumptions and potential security pitfalls:

  1. Tx Submission through RPC:
    1. Trust Assumption: The RPC endpoint is reliable and secure. Users and dapps are now trusting rpc providers eg alchemy, infura, etc.
    2. Security Concern: Users might be censored by rpc providers, e.g. infura and alchemy blocking rpc requests to tornardo cash. RPC providers might face DDOS attacks, eg ankr being comprised via DNS hijack.
    3. Solutions: RPC providers, such as Infura, are actively pursuing a decentralized roadmap. Additionally, users have the option to choose decentralized solutions like the Pocket Network.
  2. Sequencer Orders the Tx, Provides Soft Commitments: unsafe state
    1. Trust Assumption: Users expect sequencers to fairly order transactions and provide genuine soft commitments.
    2. Security Concern: The system must resist censorship, ensuring all transactions are processed without bias. It’s crucial for the system to remain continuously operational, and it would be better to guard against sequencers gainingbad MEV at the expense of the end user.
    3. Solutions:
      1. CR and liveness:
        1. current solutions ranking based on CR and liveness level (low to high): single sequencer——POA——permissionless POS sequencers——shared sequencers——based rollups(sequenced by l1)
          1. Note that POA with limited authorities without support for force txns can be less CR than a centralized sequencer with force txn enabled.
          2. Regarding liveness, another crucial metric to consider is proposer failure, which occurs when a proposer goes offline. In such cases, it’s essential to ensure that users can still withdraw their funds.
            1. Even if the sequencers are censoring or refuse to work, some rollups enable users to submit their txs directly to L1s by themselves, i.e the escape hatch ( liveness for forced txs depend on the specific implementation ). The problem is that it might be too expensive for users with limited funds to do that and users might expect real-time CR and liveness.
            2. Certain rollup solutions, such as Arbitrum and Fuel, offer the capability for anyone to become a proposer after a certain time delay, i.e self propose.
            3. Check out this indicater for each rollup: https://l2beat.com/scaling/risk
        2. More details on other different solutions can be referred to my previous thread: https://twitter.com/yuxiao_deng/status/1666086091336880128
      2. MEV-protection:
        1. Different privacy solutions can help protect users from being front-run or sandwiched as the tx info is hidden (also help with CR). Related methods for hiding tx info include FCFS with a private mempool (what arbitrum and optimism are implementing right now), SUAVE’s TEE solution, threshold encryption (shutter network working on this), etc. The more sophisticated the solution is, the less complicated computation on the txs can be done.

MEV Roast | Encrypted Mempools - Justin Drake (Ethereum Foundation) - YouTube

  1. Note that what we want is mev-protection not mev-elimination. Research by @tarunchitra summarizes two main directions to reduce MEV: reduce the flexibility of the miner to reorder transactions by enforcing ordering rules and introduce a competitive market for the right to reorder, add, and/or censor transactions. However, the paper concludes that neither fair ordering nor economic mechanisms alone can effectively mitigate MEV for all payoff functions. There are lower bounds on how you can’t remove MEV beyond some point.
  2. Sequencer executes and posts tx batch and state roots to the DA layer when it is economically reasonable; safe state
    1. Trust assumption: Block producers publish the whole block on the DA layer so that others can download and validate them.
    2. Security Concern: If part of the data is not available, the block might contain malicious transactions that are being hidden by the block producer. Even if the block contains non-malicious transactions, hiding them might compromise the security of the system. It is very important that sequencers have tx data available, as the rollup needs to know about the state of the network and account balances
    3. Solutions:
    4. Posting on Ethereum now is the safest but most expensive solution(would be 90% cheaper after protodankshadring, but even a 10x throughput increase might still not be enough for the rollups): the rollups’ txs are downloaded and gossiped by all Ethereum nodes. As Ethereum has a large number of nodes replicating and verifying transaction data, it is highly unlikely that data will ever disappear or be entirely unavailable.
      1. After danksharding, ethereum nodes will not download all the tx data, but only parts of the data using the DAS and KZG (similar to avail’s solution mentioned below)
      2. Under the modular concept, it might be more efficient for rollups to post tx data to a DA layer which is only responsible for DA (The theoretical performance of Ethereum might be slightly inferior because, in addition to DA, it still retains the execution of L1, see the performance comparison between eigenDA and Ethereum below).

  1. Current modular DA solutions present trade-offs between safety and performance. It’s challenging to compare the security of DA using just one dimension:
    1. Avail and Celestia utilize DAS to ensure data availability: as long as enough samples are taken, the data is secure. Light clients (LCs) can sample and obtain a high DA guarantee, because data unavailability would be easily detected, and recovered from, by even a small portion of light clients; this is not possible without DAS. The decentralization of the DA layer, i.e., the number of nodes in the network and the stake distribution, determines the security level. EigenDA doesn’t use DAS but instead uses a proof-of-custody mechanism to prevent restakers from being lazy: DA operators have to routinely compute a function that can only be completed if they have downloaded all the required data, and they are slashed if they fail to attest to the blobs correctly (they need not store the data after the proof has been performed, though).
    2. Ensure the data-duplication process, i.e., erasure encoding, is accurate. EigenDA, Ethereum after EIP-4844, and Avail use KZG commitments to guarantee correct encoding, but these are computationally intensive. Celestia employs fraud proofs: light nodes must wait a brief interval before they can consider a block correctly encoded, finalizing it from their perspective. (*Celestia could potentially switch to validity proofs if that proves a better trade-off)
    3. Economic security of the DA layer (reorg and collusion risks): this depends on the value staked in the DA layer; in Avail and Celestia, collusion requires control of 2/3 of the staked value.
    4. Relaying the DA attestation of the DA layer to Ethereum. If the data is posted to another DA layer while the settlement contract remains on Ethereum, then we need a bridge contract to validate that the data is available on the DA layer for final settlement.
      1. Celestia’s Blobstream verifies the signatures on the DA attestation from Celestia. The attestation is a Merkle root of the L2 data signed by the Celestia validators, attesting to the fact that the data is available on Celestia. This feature is available on testnet right now.
      2. Avail uses an optimistic approach to verify the DA attestation. Once the attestation is posted to the bridge contract on Ethereum, a waiting period begins during which the attestation is assumed to be valid unless challenged.
      3. Succinct is working with Avail and Celestia on a zk-SNARK-based data attestation bridge, which makes the attestation process more secure and cheaper by verifying just the zk proof.
      4. For EigenDA, the disperser splits the data and posts storage tasks to EigenDA nodes, then aggregates their signatures and relays the attestation to Ethereum
  2. Final Settlement: finalized state
    1. Trust Assumption 1:
      1. Rollup full nodes (nodes that fully calculate the state without relying on proofs from others) can finalize the first valid rollup block at a given height as soon as it is published on the parent chain, as they have the necessary data and computational resources to verify the block’s validity quickly. However, this is not the case for other third parties like light clients, which rely on validity proofs, fraud proofs, or dispute-resolution protocols to verify the state trustlessly without running a full replica of the chain themselves.
    2. Security Concern 1:
      1. For ZK rollups, L1 verifies the zkp and only accepts correct state roots. The difficulty mainly lies in the cost and the generation process of the zkp.
      2. Optimistic rollups, on the other hand, depend on the premise that at least one honest party will promptly submit a fraud proof to contest any malicious transaction. However, most current fraud-proving systems are not yet permissionless, and the submission of fraud proofs relies on only a few validators.
    3. Solutions 1:
      1. Permissionless fraud proving enabled by Arbitrum’s BOLD protocol. The main reason why fraud proving is permissioned right now is the concern of delay attacks:
        1. During the challenge period, any stakers other than the proposer can launch a challenge. The proposer is then required to defend their assertion against each challenger individually, one at a time. At the conclusion of each challenge, the party on the losing end forfeits their stake.
        2. In a delay attack, a malicious party (or group of parties) can prevent or delay the confirmation of results back to the L1 chain by making challenges and deliberately losing the disputes and their stakes.
        3. The BOLD challenge protocol addresses this by guaranteeing fixed upper bounds on confirmation times for optimistic rollups’ settlement, ensuring that a single honest party can win against any number of malicious claims.
      2. Witness chain can act as a watch tower for new optimistic rollups to guarantee at least one honest party would challenge an invalid state:
        1. For established rollups such as Arbitrum and Optimism, there are enough intrinsic incentives for several third-party providers such as explorers, Infura-like services, and their foundation to monitor the chain state and submit fraud proof when necessary. However, new rollups or appchains might lack this level of security.
        2. Witness Chain employs a unique incentive mechanism, “Proof of Diligence,” which ensures that watchtowers (validators) are consistently motivated to monitor and verify transactions, ensuring the state submitted to the parent chain is correct. This mechanism guarantees that each watchtower performs its due diligence, since the rewards they receive are specific and independent for each node. In other words, if one watchtower discovers a bounty, it cannot share the exact incentive payout with other watchtowers, ensuring that every node conducts its own independent verification. Additionally, Witness Chain offers flexibility by allowing rollups to specify custom requirements, such as the number of watchtowers and their geographical distribution, powered by “Proof of Location,” an independent service this flexibility ensures a balance between security and efficiency.
          *The watchtower network is also emerging as a new layer in the rollup stack itself, providing pooled security to execution used by other related applications, such as rollup security itself, interop protocols, notification services, and keeper networks. More details will be released in the future.
    4. Trust Assumption 2:
      1. The whole settlement process for smart-contract rollups is written in a smart contract on L1. This contract is assumed to be logically accurate, bug-free, and not maliciously upgraded.
    5. Security Concern 2: Smart contract rollups’ bridges and upgrades are controlled by multi-sig wallets. The bridge has the ability to arbitrarily steal funds from users via a malicious upgrade.
    6. Solutions 2:
      1. The most popular idea today is to add time delays that allow users to exit if they disagree with a planned upgrade. However, this solution requires users to continually monitor all chains they have tokens on in case they ever need to exit.
      2. Altlayer’s Beacon Layer can act as a social layer for upgrades for all the rollups enshrined to it. Sequencers that register to operate a rollup together with the Beacon Layer rollup validators can socially fork the rollup regardless of whether or not the enshrined bridge contract on Ethereum gets upgraded.
      3. Enshrined rollups in the long term: enshrined rollups have been the endgame of the Ethereum roadmap for several years now. Besides enshrining the bridge/fraud-proof verifier on L1, the settlement contract is also enshrined.
        1. Ethereum PSE is working in this direction

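The sampling guarantee behind DAS can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: it assumes 2x erasure coding (so a block producer must withhold more than half of the extended shares to prevent reconstruction) and independent, uniformly random sampling by each light client.

```python
def detection_probability(hidden_fraction: float, samples: int) -> float:
    """Probability that one light client, sampling `samples` random shares,
    hits at least one withheld share and thereby detects unavailability."""
    return 1 - (1 - hidden_fraction) ** samples

# With 2x erasure coding, an attacker must hide >= 50% of the extended shares.
# A single client taking just 20 samples detects this with near-certainty:
p = detection_probability(0.5, samples=20)  # 1 - 0.5**20, about 0.999999
```

With many independent light clients, the odds that all of them miss the withheld shares shrink exponentially, which is why even a small portion of light clients collectively provides a strong DA guarantee.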
As for sovereign rollups, the main difference is that the chain state is settled by rollup full nodes instead of an enshrined smart contract on the L1. A more detailed comparison can be found at https://www.cryptofrens.info/p/settlement-layers-ethereum-rollups

It’s important to note that more security doesn’t equate to better performance. Typically, as security measures increase, there’s a trade-off with scalability. Therefore, it’s essential to strike a balance between the two. In conclusion, rollups offer the flexibility to select varying levels of security assumptions based on individual preferences. This adaptability is one of the remarkable features of the modular world, allowing for a tailored approach to meet specific needs while maintaining the integrity of the system.

Balancing Customizability and Interoperability

It’s a well-known adage in the modular world: “Modularism, not maximalism.” If rollups can’t interoperate securely and efficiently, then modularism ≠ maximalism but = fragmentation. It’s crucial to figure out how to handle interoperability between different rollups.

Let’s first revisit how monolithic chains achieve interoperability. In simplest terms, they achieve cross-chain operations by verifying the consensus or state of the other chain. There are various approaches available in the market, and the differences lie in who is responsible for the verification (official entities, multi-signature mechanisms, decentralized networks, etc.) and how the correctness of the verification is ensured (through external parties, economic guarantees, optimistic mechanisms, zk-proofs, etc.). For a deeper dive into this topic, check out my favorite bridging pieces: Thoughts on Interoperability.

With the rise of modularization, the issue of interoperability has become more intricate:

  1. Fragmentation problem:
    1. The proliferation of rollups is expected to significantly surpass the number of L1s, as it’s far easier to launch an L2 than an L1. Could this lead to a highly fragmented network?
    2. While monolithic blockchains offer a consistent consensus and state for straightforward verification, what will the verification process be for modular blockchains, which have three (or possibly four) distinct components (DA, execution, settlement, and sequencing)?
      1. DA and settlement layer become the main source of truth. Execution verification is already available as rollups inherently provide execution proofs. Sequencing happens before posting to DA.
  2. Extensible Problem:
    1. As new rollups are introduced, the question arises: can we promptly offer bridging services to accommodate them? Even if building up a rollup is permissionless, you might need to spend 10 weeks convincing other folks to add one. Current bridging services predominantly cater to mainstream rollups and tokens. With the potential influx of numerous rollups, there’s a concern about whether these services can efficiently evaluate and launch corresponding solutions to support these emerging rollups without compromising on security and functionality.
  3. User-experience Problem:
    1. The final settlement of optimistic rollups takes seven days, much longer than on other L1s. The challenge is addressing this seven-day wait for the official bridges of optimistic rollups. The submission of zk proofs also has a time lag, as rollups usually wait to accumulate a large batch of transactions before submitting a proof to save verification costs. Popular rollups like StarkEx typically post proofs to L1 only once every few hours.
    2. Rollups’ tx data submitted to the DA/settlement layer will have a time lag to save costs (1-3 mins for optimistic rollups and a few hours for zk rollups, as mentioned above). This needs to be abstracted away from users who need quicker and safer finality.

The good news is that there are emerging solutions to these challenges:

  1. Fragmentation problem:
    1. While there is a proliferation of rollups in the ecosystem, it’s noteworthy that the majority of smart contract rollups currently share a common settlement layer, namely Ethereum. The primary distinctions among these rollups lie in their execution and sequencing layers. To achieve interoperability, they simply need to mutually verify the final state of the shared settlement layer. The scenario becomes slightly more intricate for sovereign rollups, however: their interoperability is challenging due to differing settlement layers. One approach is to establish a peer-to-peer (P2P) settlement mechanism, where each chain directly embeds a light client of the other, facilitating mutual verification. Alternatively, these sovereign rollups can first bridge to a common settlement hub, which then serves as a conduit for connecting with other chains. This hub-centric approach streamlines the process and ensures a more cohesive interconnection among diverse rollups (similar to the current state of Cosmos interop).

  1. Besides Ethereum serving as one of the settlement hubs, other potential settlement hubs include Arbitrum, zkSync, and StarkNet, which act as settlement hubs for L3s built on them. The interop layer of Polygon 2.0 also functions as a central hub for zk rollups built on top of it.
  2. In conclusion, while the number of rollups and their variations is expanding, the quantity of settlement hubs remains limited. This effectively simplifies the topology, narrowing the fragmentation problem down to just a few key hubs. Although there will be more rollups than alt-L1s, cross-rollup interactions are less complicated than cross-L1 interactions, as rollups usually fall within the same trust/security zone.
  3. How different settlement hubs interop with each other can refer to how current monolithic chains interop with each other, as mentioned at the beginning.

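The topology simplification described above can be quantified with a toy link count (the numbers here are illustrative, not drawn from the article): with direct pairwise bridging, the number of connections grows quadratically in the number of rollups, while hub-based settlement keeps it roughly linear.

```python
def direct_links(n: int) -> int:
    """Bridges needed if every rollup verifies every other rollup directly:
    n choose 2."""
    return n * (n - 1) // 2

def hub_links(n: int, hubs: int) -> int:
    """Links needed if each rollup only connects to the shared settlement
    hubs, plus the pairwise links between the hubs themselves."""
    return n * hubs + direct_links(hubs)

n_direct = direct_links(50)       # 1225 pairwise bridges for 50 rollups
n_hub = hub_links(50, hubs=2)     # 101 links when routing through two hubs
```

This is the same reason hub-and-spoke designs (Ethereum, L3 settlement layers, Polygon 2.0's interop layer) scale better than ad-hoc pairwise bridging.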
*Moreover, in an effort to eliminate fragmentation on the user side, certain Layer 2s, such as ZKSync, have integrated native Account Abstraction to facilitate a seamless cross-rollup experience.

  1. Extensible problem
    1. Hyperlane (modular security for modular chains) and Catalyst (permissionless cross-chain liquidity) were created to solve the permissioned-interop problem.
      1. The essence of Hyperlane is to create a standardized security layer that can be applied across various chains, making them inherently interoperable.
      2. Catalyst is designed to offer permissionless liquidity for modular chains. It acts as a bridge, allowing any new chain to connect liquidity and swap with major hubs like Ethereum and Cosmos seamlessly.
    2. Rollup SDK/RAAS Providers offer native bridging services within their ecosystem
      1. Now, new rollups are mostly launched through existing rollup SDKs or RAAS services, so they are inherently interoperable with other rollups that use the same services. For example, for infrastructure built with the OP Stack, the foundational level is a shared bridging standard, which allows for the seamless movement of assets across everything that shares the OP Stack codebase. Rollups launching through AltLayer are all enshrined to the Beacon Layer, which acts as the settlement hub and ensures safe interoperability. Rollups launching through Sovereign Labs or zkSync are interoperable with each other out of the box based on proof aggregation (explained in more detail later).

User-experience problem:

  1. Before diving into this part, let’s first recognize the different levels of commitments and their time lags:

1. Some parties are comfortable with Stage 1 soft commitments by the L2; e.g., exchanges like Binance only wait for a certain number of Layer 2 blocks to consider transactions confirmed, without waiting for the batch to be submitted to Layer 1
2. Some bridge providers like Hop Protocol wait for enough blocks on the sending chain and determine finality based on L1 consensus (Stage 2)
3. For trust-minimized bridges, and for users withdrawing funds from L2 to L1 via the official bridge, it might take too long (several hours for zk rollups and 7 days for optimistic rollups) (Stage 3)
  1. Reducing either Stage 2 or Stage 3 would offer significant advantages, delivering a stronger guarantee in a shorter time frame for a more secure and faster user experience. Additionally, achieving a trust-minimized bridge has always been a coveted goal, especially in light of the frequent security incidents with bridges.
  2. Reducing the final settlement time (7 days for optimistic rollups and several hours for zk rollups), i.e., shortening Stage 3
    1. Hybrid Rollups (Fraud Proof + ZK): This approach combines the strengths of ZK proofs and optimistic rollups. While generating and verifying proofs can be resource-intensive, it’s only executed when a state transition is challenged. Instead of posting a ZK proof for every batch of transactions, the proof is computed and posted only when a proposed state is contested, similar to an optimistic rollup. This allows for a shorter challenge period since the fraud proof can be generated in a single step, and the cost of ZK proofs is avoided in most scenarios.
      1. Notably, Eclipse’s SVM rollups and Layer N utilize RISC Zero to generate the zk fraud proof. The OP Stack has granted support to RISC Zero and Mina for zk fraud-proof development. Additionally, Fuel has recently introduced a similar hybrid method that supports multiple provers.
    2. After posting data to the DA layer, perform extra verification of execution correctness to increase the confidence level (this has high requirements, the same as running a full node)
      1. When the sequencer batches txs to the optimistic rollups’ DA layers, it ensures canonical ordering and DA for x-rollup txs. Therefore, the only thing that needs to be confirmed is the execution: S1 == STF(S0, B1). Of course, you can simply run a full node (high requirements) to verify the tx, but what we really want is to reduce latency for light clients. Prover networks like @SuccinctLabs and @RiscZero can confirm the post-execution state by providing succinct state proofs. This provides robust confirmation for dapps and users.
      2. Altlayer has a beacon layer between rollups and L1. Sequencers from the beacon layer are responsible for sequencing, executing, and generating proof of validity (POV). POV allows verifiers to verify a state transition for a rollup later without having access to the entire state. With decentralized verifiers performing periodic checks, we have achieved a highly robust transaction finality. No need to wait for 7 days as the verifiers have already completed the necessary checks. As a result, cross-chain messaging has become faster and more secure.
      3. EigenSettle guarantees verification through economic mechanisms. Opt-in EigenLayer nodes with stakes do the computation to ensure the validity of the state and use their collateral to back their commitments. Any amount that is lower than the amount of stake these operators have posted can be treated as safely settled and enables economically-backed interoperability.
    3. Instant Verification with ZK Rollups:
      1. Sovereign Labs and Polygon 2.0 employ an innovative approach to achieve rapid finality by circumventing the settlement layer. Instead of waiting to submit the proof to Ethereum, we can instantly disseminate the generated zk proofs through a peer-to-peer network and conduct cross-chain operations based on the propagated zkps. Later on, we can utilize recursion to consolidate them into a batch proof and submit it to Layer 1 when it becomes economically viable.
        1. It’s not fully settled, though: we still need to trust the correct aggregation of the zkp. Polygon 2.0’s Aggregator may be operated in a decentralized manner, involving Polygon validators from the shared validator pool, thereby improving network liveness and censorship resistance. Nevertheless, this method still leads to a shorter finality time, since aggregating zkps from multiple chains is certainly faster than waiting for sufficient zkps on a single chain.
      2. zkSync’s hyperchains utilize a layering method to aggregate zkps and achieve shorter finality. In contrast to settling on L1, hyperchains can settle their proofs on L2 (becoming an L3). This approach facilitates rapid messaging, as the cost-effective environment on L2 enables swift and economically viable verification.
        1. To enhance scalability further, we can replace L2 settlement with a minimal program required to operate L3 with messaging. This concept has been substantiated through specialized proofs that allow for aggregation.
  3. Addressing the Time Lag in Posting to the DA Layer (some methods can also be applied to reduce the settlement period), i.e., shortening Stage 2
    1. Shared Sequencing Layer: If rollups share a sequencing layer (e.g., through a shared sequencer service or using the same set of sequencing layers), they can obtain a soft confirmation from the sequencer. This, combined with an economic mechanism, ensures the final state’s integrity. Possible combinations include:
      1. A stateless shared sequencer + builders making execution promises backed by stake, as proposed by Espresso. This approach is more suitable for rollups with a PBS structure, assuming the block builder already has the necessary rights to parts of the blocks. Since the builder is stateful and serves as the underlying execution role for the shared sequencers, it’s natural for it to make additional promises.
      2. Shared validity sequencing proposed by Umbra research: stateful shared sequencer + fraud proof to ensure good behavior. Sequencers accept cross-chain requests. To prevent dishonest behavior by sequencers, a shared fraud-proof mechanism is used, involving slight changes to the original rollup fraud-proof mechanism. During the challenge period, challengers would also verify the correct execution of atomic actions. This may involve checking the roots of bridging contracts on different rollups or examining the Merkle proof provided by the sequencers. Dishonest sequencers get slashed.
    2. Third-Party Intervention: External entities like Hop, Connext, and Across can step in to mitigate risks. They validate messages and front the capital for users’ cross-chain financial activities, effectively reducing the waiting period. For example, Boost (GMP Express) is a special feature of Axelar and Squid that reduces transaction time across chains to 5-30 seconds for swaps below a value of $20,000 USD.
    3. Intent infrastructure for bridging, as a specific form of third-party intervention: this revamped infrastructure can embrace more third parties to step in and solve cross-domain intents for users.
      1. Through an intent-focused architecture (abstracting away friction and complexity from users by involving sophisticated actors like MMs and builders), users convey their intended objective or result without detailing the precise transactions needed to realize it. Individuals with a high tolerance for risk can step in, front the necessary capital, and levy increased fees.
      2. It’s safer because users’ funds are only released when the outcome is valid. It might be faster and more flexible because more parties (solvers) are permissionlessly involved in the solving process, competing to give users a better outcome.
      4. UniswapX, Flashbots’ SUAVE, and Essential are all working in this direction. More on intents:
        nft://10/0x9351de088B597BA0dd2c1188f6054f1388e83578/?showBuying=true&showMeta=true
      4. The challenging aspect of this solution centers around the settlement oracle. Let’s take UniswapX as an example. To facilitate cross-chain swaps, we rely on a settlement oracle to determine when to release funds to solvers. If the settlement oracle opts for the native bridge (which is slow), or if a third-party bridge is used (raising trust concerns), or even if it’s a Light Client bridge (not yet ready for use), we essentially find ourselves in the same loop as before. Hence, UniswapX also offers “Fast cross-chain swaps” similar to an optimistic bridge.
      5. Simultaneously, the effectiveness of intent resolution relies on the competition among solvers. Since solvers need to rebalance their inventory across different chains, this may potentially lead to centralized solver issues, limiting the full potential of intents.

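The execution check discussed above, S1 == STF(S0, B1), can be illustrated with a minimal sketch. The state-transition function and the hash-based "state root" below are toy stand-ins (a real rollup uses a full VM and a Merkle tree, and would check signatures and balances), but the verification logic is the same: recompute the post-state from the posted data and compare roots.

```python
import hashlib
import json

def stf(state: dict, block: list) -> dict:
    """Toy state-transition function: apply simple balance transfers.
    (No signature or balance checks; illustration only.)"""
    new_state = dict(state)
    for tx in block:
        new_state[tx["from"]] -= tx["amount"]
        new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

def state_root(state: dict) -> str:
    """Stand-in for a Merkle root: hash of the canonically serialized state."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

# A verifier with the DA-layer data can check the sequencer's claimed root:
s0 = {"alice": 100, "bob": 50}
b1 = [{"from": "alice", "to": "bob", "amount": 30}]
claimed_root = state_root({"alice": 70, "bob": 80})
assert state_root(stf(s0, b1)) == claimed_root  # S1 == STF(S0, B1)
```

Prover networks replace this "recompute everything" step with a succinct proof of the same equality, which is what lets light clients skip running a full node.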
To summarize, it can be observed that there are three ways to address the UE problems:

  1. Use the magic of zk:
    1. The primary challenge lies in the performance of zk technology, encompassing both the time required for generation and associated costs. Additionally, when dealing with highly customizable modular blockchains, the question arises: do we possess a zk proving system capable of accommodating the myriad of differences?
  2. Use an economic slashing scheme as a guarantee:
    1. The major drawback of this approach is the time delay inherent in the decentralized method (for instance, in the case of EigenSettle, we must wait for the cap to be reached). Furthermore, the centralized approach offers limited commitments (as exemplified by shared sequencing), relying on builders/sequencers to make commitments, which can be restricted and lack extensibility.
  3. Trust a third party:
    1. While trusting a third party can introduce additional risks, as users must have faith in the bridge, intent-enabled cross-domain swaps represent a somewhat more “decentralized” form of third-party bridging. However, this approach still contends with oracle latency, trust issues, and potential time delays, as you must wait for someone to accept your intent.

It’s interesting that modularization also introduces new possibilities for interoperability experiences:

  1. Enhanced Speed with Modular Components: By breaking the stack down into finer modules, users can get quicker confirmations at the Layer 2 level (which might already be safe enough for ordinary users)
  2. Shared Sequencer for Atomic Transactions: The concept of a shared sequencer could potentially enable a new form of atomic transactions, such as flash loans. More details: https://twitter.com/sanjaypshah/status/1686759738996912128

Modular interoperability solutions are experiencing rapid growth, and currently, there are various approaches, each with its own strengths and weaknesses. Perhaps the ultimate solution is still some distance away, but it’s heartening to see so many individuals striving to create a safer and more connected modular world before the rollup explosion arrives.

Cost Analysis

One factor contributing to the limited number of rollups in existence is the economic consideration associated with their launch, in comparison to utilizing smart contracts. Operating via smart contracts adopts a more variable cost model, where the primary expense is the gas fee, whereas launching and maintaining a rollup incurs both fixed and variable costs. This cost dynamic suggests that applications with a substantial transaction volume or relatively high transaction fees are better positioned to leverage rollups, as they have a greater capacity to amortize the fixed costs involved. Consequently, initiatives aimed at reducing the cost associated with rollups—both fixed and variable—are paramount. Delving into the cost components of rollups, as elucidated by Neel and Yaoqi during their talk at ETHCC, provides a clearer picture:

Employing a financial model, such as the Discounted Cash Flow (DCF) analysis, can be instrumental in evaluating the viability of launching a rollup for an application. The formula:

DCF(Revenue - Expenses) > Initial Investment

serves as a baseline to ascertain whether the operational income surpasses the initial investment, thereby making the launch of a rollup a financially sound decision. Protocols that succeed in lowering operational costs while augmenting revenue are instrumental in encouraging the increased adoption of rollups. Let’s explore one by one:

  1. Initial Development and Deployment Fee
    1. The initial setup, despite the availability of open-source SDKs like the OP Stack and Rollkit, still demands a significant amount of time and human capital for installation and debugging. Customization needs, for instance integrating a new VM into an SDK, further escalate the resources required to align the VM with the various interfaces each SDK provides.
    2. RAAS services like AltLayer and Caldera can significantly alleviate these complexities and efforts, encapsulating the economic benefits of division of labor.
  2. Recurring Fee/Revenue
    1. Revenue (++++)
      1. User fees
        1. = L1 Data Posting Fee + L2 Operator Fee + L2 Congestion Fee
        2. Although some user fees might be offset by expenses, scrutinizing and striving to lower these costs is vital as rollups may become untenable if user fees are prohibitively high for users. (Explored in the expense section)
      2. Maximal Extractable Value (MEV) captured
        1. Primarily related to the transaction value originating from the chain, this can be boosted either by enhancing MEV-extraction efficiency or increasing cross-domain MEV.
        2. Partnering with established searchers, employing a PBS auction to foster competition, or leveraging SUAVE’s block building as a service are viable strategies to optimize MEV capture efficiency.
        3. For capturing more cross-chain MEV, utilizing a shared sequencer layer or SUAVE (shared mempool and shared block building) is beneficial as they connect to several domains.
          1. According to recent research by Akaki, shared sequencers are valuable for arbitrage searchers aiming to seize arbitrage opportunities across different chains, as they ensure victory in simultaneous races on all chains.
          2. SUAVE serves as a multi-domain order flow aggregation layer, aiding the builder/searcher in exploring cross-domain MEV.
    2. Expenses (- - - -)
      1. Layer 2 (L2) operation fee
        1. Ordering: It might be tricky to compare centralized and decentralized ordering solutions. Competition in more decentralized solutions like Proof of Efficiency can help decrease cost by keeping the operator margin minimal and also incentivize posting batches as often as possible. On the other hand, centralized solutions typically involve fewer parties, which can simplify the process but may not benefit from the same cost-reducing dynamics.
        2. Execution: This is where full nodes use VMs/EVMs to execute the changes to a rollup’s state given new user transactions.
          1. Efficiency can be bolstered through optimized alt-VMs like Fuel and Eclipse’s Solana VM, which enable parallel execution. However, deviating from EVM compatibility could introduce friction for developers and end-users, along with potential security issues. The compatibility of Arbitrum’s Stylus with both EVM and WASM (which is more efficient than EVM) is commendable.
        3. Proving
          1. Prover Market
            1. Theoretically, utilizing a specialized prover market like RISC Zero, =nil; Foundation, or Marlin, instead of creating a proprietary centralized or decentralized prover network, can result in cost savings for several reasons:
              1. There may be a higher level of participation in a dedicated prover market, which in turn fosters increased competition, ultimately leading to lower prices.
              2. Provers can optimize hardware usage and be repurposed when a specific application does not require immediate proof generation, reducing operation costs and providing cheaper service.
              3. Naturally, there are downsides, including potentially capturing less token utility and relying on the performance of an external party. Furthermore, distinct zk rollups may impose varying hardware prerequisites for the proof generation process. This variability could pose a challenge for provers seeking to expand their proving operations.
              4. More on the prover market and prover networks: https://figmentcapital.medium.com/decentralized-proving-proof-markets-and-zk-infrastructure-f4cce2c58596
      2. Layer 1 (L1) data posting
        1. Opting for a more cost-effective Data Availability (DA) layer other than Ethereum, or even using a DAC (data availability committee) solution, can significantly cut down expenses, although at the potential cost of reduced security (explored further in the security section). For gaming and social applications, which usually have low-value but high-bandwidth transactions, scalability might matter more than security.
        2. Employing Ethereum as the DA layer allows for leveraging proto-danksharding and danksharding to attain cost efficiency. Moreover, given that the blob posting fee is charged per blob irrespective of how much of the blob the rollup uses, there is a need to balance cost against delay: while a rollup would ideally post a complete blob, a low transaction arrival rate means that waiting to fill a blob incurs excessive delay costs.
          1. Potential solution: joint blob posting so that small rollups can share the cost.
      3. L1 settlement fee
        1. For optimistic rollups, the settlement cost is relatively low. Post-Bedrock, Optimism only pays ~$5 a day to Ethereum;
        2. For zk rollups, settlement is relatively expensive due to zkp verification
          1. zk-proof aggregation
            1. Depending on the underlying proof system, a rollup on Ethereum might spend anywhere from 300k to 5m gas to validate a single proof. But since proof sizes grow very slowly (or not at all) with the number of transactions, rollups can reduce their per-transaction cost by waiting to accumulate a large batch of transactions before submitting a proof.
            2. Sovereign Labs and Polygon 2.0's interoperability layer, as mentioned before, aggregate proofs from multiple rollups; each rollup can then verify the state of multiple rollups at the same time, saving on verification costs. zkSync's layered structure combined with proof aggregation further reduces verification costs.
            3. Nonetheless, this method is most effective when two domains utilize the same zkVM or a shared prover scheme (zkSync's hyperchains use the same zkEVM with fully identical zkp circuits); otherwise, it may result in compromised performance.
              1. NEBRA Labs brings economies of scale and composability to proof verification on Ethereum. The NEBRA UPA (Universal Proof Aggregator) aggregates heterogeneous proofs so that verification costs can be amortized. UPA can also compose proofs from different sources to enable new use cases.
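The cost-versus-delay tradeoff described above can be sketched with a back-of-the-envelope model: a fixed fee per posting (a blob fee, or a per-proof verification cost) amortizes over the batch, while waiting to fill a batch imposes a delay cost on early transactions. All constants and the linear delay-cost assumption below are illustrative, not any rollup's actual fee logic.

```python
# Back-of-the-envelope model of batching costs: a fixed fee F per posting
# (a blob fee, or a proof-verification fee) is amortized over the batch,
# while waiting to fill a batch makes early transactions bear a delay cost.
# All constants are illustrative assumptions, not measured values.
import math

def per_tx_cost(n: float, F: float, lam: float, c: float) -> float:
    """Average cost per tx when posting once every n transactions.
    F: fixed posting fee ($); lam: arrival rate (tx/s); c: delay cost ($/tx-s)."""
    amortized_fee = F / n
    avg_delay = c * n / (2 * lam)   # a tx waits n/(2*lam) seconds on average
    return amortized_fee + avg_delay

def optimal_batch(F: float, lam: float, c: float) -> float:
    """Batch size minimizing per_tx_cost: n* = sqrt(2*lam*F / c)."""
    return math.sqrt(2 * lam * F / c)

F, lam, c = 30.0, 2.0, 0.01         # e.g. $30 per blob, 2 tx/s, 1 cent per tx-second
n_star = optimal_batch(F, lam, c)
print(round(n_star))                 # 110: post roughly every 110 txs
```

The closed form makes the intuition concrete: a higher fixed fee or arrival rate pushes toward larger batches, while a higher delay cost pushes toward posting sooner. It also shows why sharing the fixed fee across several small rollups (joint blob posting) helps.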

To summarize, the primary methods to economize on rollup costs include:

  1. Co-aggregating with other rollups to share fees or harness economies of scale:
    1. It’s noteworthy that such aggregation is also potentially crucial for achieving interoperability. As previously highlighted, employing a congruent layer or framework across diverse rollups simplifies interaction amongst them, ensuring hassle-free information exchange. This consolidated strategy fosters a more integrated and unified Layer 2 infrastructure.
  2. Delegating certain tasks to external service providers, capitalizing on the principle of division of labor.

As more rollups emerge (meaning you can collaborate with additional parties to divide fees), and more rollup service providers offer more refined services (providing a wider selection of mature upstream providers), we anticipate that the expenses associated with establishing a rollup will decrease.

Shared Security

If you aim to achieve an equivalent level of security (both economically and in terms of decentralization) as the source chain, simply deploy a smart contract or a smart contract rollup. If harnessing a portion of the security provided by the source chain is sufficient to improve performance, there are currently several shared security solutions at your disposal.

Shared security solutions greatly ease the security bootstrapping process for protocols or modular layers that need initial security. This matters for a future modular world: we envision more infrastructure and protocols emerging to support it, and more parts of a rollup becoming modular beyond DA, execution, settlement, and sequencing. If a rollup uses a modular layer (such as DA) or a service whose security isn't on par with Ethereum's, the overall security of the entire modular chain could be compromised. We need shared security to enable a decentralized and reliable SaaS economy.

Eigenlayer, Babylon, Cosmos's ICS, and Osmosis's Mesh Security all play a pivotal role in offering decentralized trust as a service to other infrastructural entities.

  1. Eigenlayer allows Ethereum stakers to repurpose their staked $ETH to secure other applications built on the network.
  2. Cosmos’ ICS allows the Cosmos Hub (“provider chain”) to lend its security to other blockchains (“consumer chains”) in return for fees.
  3. Mesh Security, proposed by Osmosis, enables token delegators (not validators) to restake their staked tokens on a partner chain within the ecosystem. This allows for bidirectional or multilateral security flow, as different appchains can combine their market caps to enhance overall security.
  4. Babylon allows BTC holders to stake their BTC on the Bitcoin network itself and provide security to other PoS chains, by optimizing the use of the Bitcoin scripting language and employing advanced cryptographic mechanisms.

ICS and Mesh Security, both integral to the Cosmos ecosystem, primarily aim to facilitate inter-chain borrowing of security. These solutions predominantly address the security needs of Cosmos appchains, allowing them to leverage the security of other chains within the ecosystem. Specifically, Cosmos Hub ICS serves Cosmos chains that don't want to bootstrap their own validator sets (replicated security), while Mesh Security requires each chain to have its own validator set but allows far greater optionality for chain governance.

On the other hand, Babylon presents a unique approach by unlocking the latent potential of BTC holders' idle assets without moving BTC out of its native chain. By optimizing the use of Bitcoin's scripting language and integrating advanced cryptographic mechanisms, Babylon provides additional security to the consensus mechanisms of other chains, with features like faster unbonding periods. Validators on other PoS chains who hold BTC can lock it on the Bitcoin network and sign PoS blocks with their BTC private keys. Misbehavior such as double-signing leaks the validator's BTC private key, causing its BTC to be burned on the Bitcoin network. BTC staking will launch in Babylon's second testnet.
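The key-leakage trick behind this slashing design can be illustrated with a toy Schnorr-style scheme: signing two different messages with the same nonce lets anyone solve for the private key from the two signatures. The small group and simplified challenge function below are illustrative stand-ins, not Bitcoin's actual cryptography.

```python
# Toy demonstration of the slashing idea above: with Schnorr-style
# signatures, signing two different messages with the SAME nonce lets
# anyone extract the signer's private key. Babylon applies this principle
# so that double-signing leaks (and thus forfeits) the staked BTC.
# Small multiplicative group and a toy challenge -- NOT real BTC crypto.

p, q, g = 1019, 509, 4           # p = 2q + 1; g generates the order-q subgroup

def sign(x: int, k: int, m: int):
    """Schnorr-style signature with an explicit (reused!) nonce k."""
    R = pow(g, k, p)
    e = (R + m) % q              # toy challenge; real schemes hash R and m
    s = (k + e * x) % q
    return R, e, s

x = 123                          # validator's private key
k = 77                           # nonce reused across two blocks (the fault)

R1, e1, s1 = sign(x, k, m=111)   # "vote" for block 1
R2, e2, s2 = sign(x, k, m=222)   # conflicting "vote" at the same height

# Anyone holding both signatures can solve s1 - s2 = (e1 - e2) * x mod q:
recovered = (s1 - s2) * pow(e1 - e2, -1, q) % q
print(recovered == x)            # True: the double-signer's key is exposed
```

Because the nonce cancels out when the two signature equations are subtracted, the private key falls out of simple modular arithmetic, so the punishment is enforced by cryptography rather than by a social process.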

While Babylon navigates the constraints of Bitcoin's lack of smart contract support, Eigenlayer operates on the Turing-complete Ethereum platform. Not only does Eigenlayer offer economic security to new rollups and chains, but its Ethereum environment also allows for a more diverse range of AVSs. According to Eigenlayer's article on programmable trust, the security Eigenlayer provides can be broken down into three types:

  1. Economic trust: trust from validators making commitments and backing their promises with financial stakes. This trust model ensures consistency regardless of the number of parties involved. There must be objective slashing conditions that can be submitted and verified onchain, and it is usually heavyweight for restakers.
  2. Decentralized trust: trust from having a decentralized network operated by independent, geographically isolated operators. This aspect emphasizes the intrinsic value of decentralization, which raises the difficulty of collusion and thereby enables use cases that are not objectively provable. Utilizing decentralized trust is usually lightweight.
  3. Ethereum inclusion trust: trust that Ethereum validators will construct and include your blocks as promised, alongside the consensus software they are running. This can specifically be committed by Ethereum validators (not LST restakers), who run software sidecars to perform extra computation and receive extra rewards.
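The economic-trust model above can be sketched in a few lines of code, assuming a hypothetical AVS interface (none of these names correspond to EigenLayer's actual contracts): operators back commitments with restaked capital, an objectively provable fault slashes them, and the service's economic security is the unslashed stake an attacker would have to burn.

```python
# Minimal sketch of "economic trust": operators back commitments with
# restaked capital, and an objectively verifiable fault slashes them.
# Hypothetical interface and numbers, not EigenLayer's actual contracts.
from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    restaked_eth: float
    slashed: bool = False

@dataclass
class AVS:
    operators: list = field(default_factory=list)

    def economic_security(self) -> float:
        """Capital an attacker must be willing to lose: total unslashed stake."""
        return sum(op.restaked_eth for op in self.operators if not op.slashed)

    def slash(self, op: Operator, fault_proven_onchain: bool) -> None:
        # Only objective, onchain-verifiable faults may trigger slashing.
        if fault_proven_onchain:
            op.slashed = True
            op.restaked_eth = 0.0

avs = AVS([Operator("alice", 320.0), Operator("bob", 96.0)])
print(avs.economic_security())    # 416.0 before any fault
avs.slash(avs.operators[1], fault_proven_onchain=True)
print(avs.economic_security())    # 320.0 after bob is slashed
```

The point of the sketch is the invariant: economic security is only as large as the stake that can actually be slashed under objective, onchain-verifiable conditions.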

Now that the security building blocks are clear, what can we expect?

  1. ICS and Mesh Security lower the security barriers for Cosmos appchains like Neutron, Stride, and Axelar.
  2. Eigenlayer can fit into many solutions that have been mentioned before:
    1. Rollup security: relay networks, watchtowers, sequencing, MEV protection, EigenDA
    2. Rollup interop: EigenSettle, bridges
    3. Cost analysis: prover networks
    4. More to explore: https://www.blog.eigenlayer.xyz/eigenlayer-universe-15-unicorn-ideas/
  3. Babylon is running a testnet to increase the security level of other PoS chains. Its first testnet provides a timestamping service that adds additional security to high-value DeFi activities on several Cosmos chains such as Akash, Osmosis, and Juno.

The core idea behind these shared security solutions is to enhance the capital efficiency of staked or illiquid assets by introducing additional responsibilities. However, it’s essential to be vigilant about the added risks when seeking higher returns:

  1. Increased complexity introduces more uncertainties. Validators become exposed to additional slashing conditions that may lack sufficient safeguards ("training wheels"), which can be precarious.
    1. Eigenlayer aims to address this issue by proposing the implementation of a veto committee. This committee serves as a mutually trusted entity among stakers, operators, and AVS developers. In the event of a software bug within the AVS, stakers and operators won’t face penalties as the veto committee can cast a veto vote. While this approach may not be inherently scalable and could be subjective if AVSs are not strictly aligned with use cases based on trustlessly attributable actions, it can still serve as a valuable means to initiate a risk mitigation strategy during the early stages.
  2. Greater complexity also brings additional burdens. It can be overwhelming for less experienced validators to determine which service to share security with. Also the initial setup period may involve a higher risk of errors. Additionally, there should be mechanisms in place to allow “less tech-savvy” validators and stakers to benefit from higher yields, provided they are willing to accept relatively elevated risks, without being constrained by their operational capabilities.
    1. Rio Network and Renzo are both working on addressing this challenge for Eigenlayer by offering a structured approach to cautiously selecting sophisticated node operators and AVS services for restaking, elevating security levels and reducing entry barriers for participants.
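The veto-committee mitigation described above can be sketched as a simple vote: a slashing triggered by a suspected AVS bug proceeds only if fewer than a majority of a mutually trusted committee vetoes it. The function below is a hypothetical illustration, not EigenLayer's actual design.

```python
# Sketch of the veto-committee idea: a slashing event caused by a suspected
# AVS software bug can be cancelled by a mutually trusted committee.
# Hypothetical interface, not EigenLayer's actual mechanism.

def resolve_slashing(votes: dict, veto_threshold: float = 0.5) -> bool:
    """Return True if the slashing proceeds, False if vetoed.
    votes maps committee member -> True (veto) / False (allow)."""
    vetoes = sum(1 for v in votes.values() if v)
    return vetoes / len(votes) < veto_threshold  # majority veto cancels it

# A proof-system bug hits; 2 of 3 members vote to veto, so no one is slashed.
print(resolve_slashing({"a": True, "b": True, "c": False}))  # False
```

As the text notes, this is subjective and not inherently scalable, but it is a workable training wheel while slashing conditions mature.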

Furthermore, as Eigenlayer gains wider adoption, it could potentially open up new horizons in the realm of the Financialization of Security. This could facilitate the valuation of shared security and the various applications built upon it.

  1. One limitation facing EigenLayer is its ability to scale capital allocation by outcompeting DeFi yield opportunities for the same assets it supports (LSTs). EigenLayer commoditizes the value of security, and this opens the door for many primitives to underwrite that value, letting restakers both restake and participate in the greater DeFi ecosystem.
    1. Ion Protocol is attempting to do this in order to scale the reach of restaking. Ion is building a price-agnostic lending platform designed specifically to support staked and restaked assets, using ZK infrastructure (ZK state proof systems + ZKML) to underwrite the lower-level slashing risk present in such assets. This could spark the birth of many novel DeFi primitives built upon the underlying value of security that EigenLayer commoditizes, further enabling restaking to scale across the entire ecosystem.

As we stand at the cusp of significant transformations, it is crucial to embrace the principles of security, interoperability, and cost-effectiveness. These pillars will not only guide the development of more scalable and efficient blockchain solutions but will also pave the way for a more interconnected and accessible digital world. Embracing these changes with foresight and adaptability will undoubtedly lead to groundbreaking advancements in the blockchain ecosystem.

Disclaimer:

  1. This article is reprinted from [mirror]. All copyrights belong to the original author [SevenX Ventures]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.