How EigenDA Works

I often visit the Starbucks in the Fort area of Mumbai. On my way, I pass by the famous Asiatic Society Library, which has been featured in movies and countless reels, and I’m reminded of its enduring presence. I considered using a different analogy to explain data availability, but when something works so well, why change it?

Imagine it’s the 1800s, and the Asiatic Society Library is among the very few—or perhaps the only—libraries in town. This library isn’t just a repository of books. It is the central hub where every piece of information needed to keep the town running smoothly is stored. The library holds essential records like birth certificates and property deeds. It also contains valuable resources such as educational materials and cultural artefacts. The town cannot afford to lose access to these materials at any point. What would happen if the library were locked or vanished? It would wreak havoc across every municipal department that relies on its information.

A Data Availability (DA) solution serves a similar purpose in crypto. It ensures that the information required to validate and process transactions on a blockchain is accessible to all participants. Without robust data availability, the integrity and functionality of blockchain networks—especially scaling solutions like rollups—could be severely compromised.

From Early Web Businesses to Modular Blockchains

In the early days of the web, every online business had to manage everything themselves. As Shlok explored in our AVS article, every online business needed physical servers, networking equipment, data storage, software licences for databases and operating systems, a secure facility to house hardware, a team of system administrators and network engineers, and robust disaster recovery and backup solutions. All of this cost at least $250,000 and took several months to a year to set up.

However, we soon realised that delegating these tasks was beneficial for everyone. This insight aligns with the economic principle of comparative advantage. It states that entities do not need to produce everything themselves. Instead, they can specialise in areas where they have a lower opportunity cost and engage in trade with others.

In essence, attempting to produce everything incurs an opportunity cost—the resources and time dedicated to producing one good could instead be allocated to producing another. Some entities can produce certain goods more efficiently than others. A classic example of comparative advantage is the trade between the US and China. The US has a comparative advantage in producing high-tech goods, such as software and advanced machinery, because of its skilled workforce and innovation capabilities. Meanwhile, China has a comparative advantage in manufacturing consumer goods, like electronics and clothing, due to its lower labour costs. By focusing on producing what each country is relatively more efficient at, both countries benefit from trade by obtaining goods at a lower cost than if they tried producing them domestically. By focusing on their strengths and trading, all parties can achieve greater efficiency and mutual benefits without the burden of excelling in every area independently.

This principle extends beyond nations and businesses to blockchain architectures as well. Just as countries specialise in particular industries or products, different components of a blockchain system can focus on specific functions. This specialisation leads to overall improved performance and efficiency within the ecosystem.

Why data availability?

Similar to early internet businesses, blockchains initially handled everything: executing transactions, reaching consensus, storing data, and settling transactions. This approach posed problems for chains like Ethereum, which is highly decentralised at the base layer. Gradually, the idea of modularity gained traction. Modularity in blockchains refers to breaking down the blockchain’s functions (like consensus, data availability, and execution) into separate, specialised layers or modules. This allows for greater flexibility, scalability, and efficiency by letting each layer focus on a specific task.

Ethereum decided that separating execution from consensus and settlement was the best way to scale, putting the rollup-centric roadmap in the spotlight.

Several Layer 2 (L2) solutions flooded the Ethereum Virtual Machine (EVM) landscape, overloading Ethereum by posting transaction data on it. This competition for Ethereum’s blockspace made using the L1 expensive. Storing and accessing data on Ethereum was costly—by March 2024, L2s incurred over 11,000 ETH in fees. At $3,400 per ETH, that amounted to $37.4 million!

Ethereum addressed the problem with EIP-4844, introducing a separate space called blobs for L2s to store their data. Consequently, the cost dropped to 1.7k ETH the following month and to just over 100 ETH by August—a 99% reduction. So, is the cost issue for rollups solved? I wish it were that simple.

Challenges beyond cost

Despite the reduction in fees for storing data in blobs, two critical challenges remain:

  1. Fee Predictability: Fees remain unpredictable due to Ethereum congestion.
  2. Blob Capacity: Each blob can contain 128 kB of data, and each block can include up to 6 blobs, totalling 768 kB per block. The remaining transactions can occupy around 1.77 MB, taking the maximum size of an Ethereum block to approximately 2.5 MB. With a 12-second block time, Ethereum’s bandwidth is roughly 0.2 MB/s—insufficient for the anticipated increase in decentralised application users. The quick calculation after this list walks through these numbers.
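
These capacity figures are easy to verify. Here is a quick back-of-the-envelope check in Python, using only the blob and block sizes quoted above:

```python
# Rough Ethereum DA bandwidth from the figures above.
BLOB_SIZE_KB = 128        # one blob (EIP-4844)
MAX_BLOBS = 6             # maximum blobs per block
OTHER_TX_MB = 1.77        # approximate space available to regular transactions
BLOCK_TIME_S = 12         # Ethereum slot time

blob_space_mb = BLOB_SIZE_KB * MAX_BLOBS / 1024      # ~0.75 MB of blob space
block_size_mb = blob_space_mb + OTHER_TX_MB          # ~2.5 MB total per block
bandwidth = block_size_mb / BLOCK_TIME_S             # ~0.21 MB/s

print(f"{block_size_mb:.2f} MB per block -> {bandwidth:.2f} MB/s")
```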

These limitations underscore the need for dedicated DA services, much like how rollups offload execution from Ethereum.

With this backdrop, several DA solutions like Celestia, Avail, and Near have emerged. These dedicated services focus exclusively on ensuring that data is both accessible and secure, providing the necessary infrastructure to support scalable and reliable blockchain networks. By concentrating on data availability, these solutions can optimise performance and address the specific challenges that general-purpose blockchains struggle to manage effectively.

EigenDA - Ethereum’s data storage extension

EigenDA is an Actively Validated Service (AVS) built by EigenLayer on top of Ethereum. This means EigenDA doesn’t work independently of Ethereum; if a developer wants a DA service without Ethereum in the mix, EigenDA is not the answer. Several key features set it apart from other DA services.

1. High throughput

At 15 MB/s, EigenDA has the highest bandwidth among the ‘out-of-protocol’ DA services, that is, services that operate separately from the core blockchain. It achieves this throughput through three design choices: separating consensus from DA, erasure coding, and direct communication instead of peer-to-peer gossip.

Separating consensus from DA. Most current DA systems combine verifying that data is accessible with ordering that data in a single, complex system. While attesting to data can happen in parallel, reaching consensus on its order slows everything down. This combined approach can enhance security for systems that manage data ordering themselves, but it’s unnecessary for DA systems like EigenDA that work alongside Ethereum, which already handles ordering and consensus. By removing the extra ordering step, EigenDA becomes much faster and more efficient.

Here’s how EigenDA works with Ethereum, with an example of a rollup:

  1. The rollup sequencer (which organises transactions) sends a batch of transactions to the EigenDA system.
  2. The EigenDA system breaks the batch into smaller parts, creates proof that the data is complete, and sends these parts to different storage operators, getting confirmation that they’ve stored the data.
  3. After getting these confirmations, EigenDA sends a message to the blockchain (Ethereum) saying the data is safely stored and includes details and proof.
  4. EigenDA’s contract on Ethereum verifies the proof and stores the result on-chain.
  5. Once the data is stored off-chain and recorded (proof that data is stored off-chain) on the blockchain, the rollup sequencer sends a reference ID for the data to its own system.
  6. Before accepting the data ID, the rollup system checks with EigenDA to make sure the data is fully available. If the check confirms it’s stored, the ID is accepted. If not, the ID is rejected.

In essence, EigenDA helps store and verify transaction data outside the main blockchain, ensuring its security and availability.

You can understand the mechanism in depth in the EigenDA docs.
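
To make the six steps above concrete, here is a minimal Python sketch of the dispersal-and-verification loop. Every name in it (Operator, disperse, rollup_accepts, the hash standing in for a KZG commitment) is a hypothetical illustration, not EigenDA’s actual API:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    storage: dict = field(default_factory=dict)

    def store(self, idx: int, chunk: bytes) -> str:
        """Store a chunk and return a receipt (a plain hash here, not a real signature)."""
        self.storage[idx] = chunk
        return hashlib.sha256(self.name.encode() + chunk).hexdigest()

    def serve(self, idx: int) -> bytes:
        return self.storage.get(idx, b"")

def disperse(batch: bytes, operators: list) -> dict:
    """Steps 1-3: split the batch, send chunks directly to operators, collect receipts."""
    n = len(operators)
    size = -(-len(batch) // n)  # ceiling division
    chunks = [batch[i * size:(i + 1) * size] for i in range(n)]
    receipts = {op.name: op.store(i, c) for i, (op, c) in enumerate(zip(operators, chunks))}
    # Stand-in for the commitment and proof EigenDA posts to its Ethereum contract (step 4).
    return {"commitment": hashlib.sha256(batch).hexdigest(), "receipts": receipts}

def rollup_accepts(data_id: dict, operators: list) -> bool:
    """Steps 5-6: before accepting the data ID, check the data is actually retrievable."""
    recovered = b"".join(op.serve(i) for i, op in enumerate(operators))
    return hashlib.sha256(recovered).hexdigest() == data_id["commitment"]

ops = [Operator(f"op{i}") for i in range(4)]
data_id = disperse(b"rollup batch: tx1|tx2|tx3", ops)
print(rollup_accepts(data_id, ops))  # True while every operator serves its chunk
```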

Erasure coding is like creating a clever puzzle from your data, where you only need some of the pieces to solve it. This method keeps your data safe, accessible, and efficient to store, even if some parts are lost or some storage locations fail. EigenDA applies this technique when rollups send data, encoding it into fragments so that each node only needs to download a small part of the data instead of the whole thing, which makes the process far more efficient. And the best part is that as the size of the data increases, the portion each node must download grows far more slowly than the data itself. A toy version of the idea is sketched below.
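
The puzzle intuition maps directly onto Reed-Solomon coding: treat k data symbols as the coefficients of a polynomial, publish n > k evaluations of it, and any k of those evaluations recover the polynomial, and hence the data. Below is a toy version over a tiny prime field; EigenDA’s real encoder operates on KZG-committed polynomials over an elliptic curve, so this is only a sketch of the principle:

```python
# Toy Reed-Solomon-style erasure coding: any k of n shares rebuild the data.
P = 257  # small prime field; each symbol holds one byte (0..255)

def encode(data, n):
    """Treat the k symbols as polynomial coefficients; emit n evaluations."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def _mul_linear(poly, xm):
    """Multiply a coefficient-form polynomial by (x - xm), mod P."""
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i] = (out[i] - xm * c) % P
        out[i + 1] = (out[i + 1] + c) % P
    return out

def decode(shares, k):
    """Lagrange-interpolate the degree-(k-1) polynomial from any k shares."""
    shares = shares[:k]
    result = [0] * k
    for j, (xj, yj) in enumerate(shares):
        basis, denom = [1], 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                basis = _mul_linear(basis, xm)
                denom = denom * (xj - xm) % P
        scale = yj * pow(denom, -1, P) % P
        for i, c in enumerate(basis):
            result[i] = (result[i] + scale * c) % P
    return result

data = [ord(c) for c in "Hi"]                     # k = 2 data symbols
shares = encode(data, n=4)                        # 4 shares; any 2 suffice
print(decode([shares[1], shares[3]], 2) == data)  # True: two shares lost, data intact
```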

Instead of using fraud proofs to catch mistakes, EigenDA uses special cryptographic proofs called KZG commitments. These proofs help nodes ensure that the data is correctly processed and stored, enhancing both speed and security.

Direct communication instead of P2P. Most current data availability (DA) systems use peer-to-peer (P2P) networks, where each operator shares data with their neighbours, which slows down the entire process. In contrast, EigenDA employs a central disperser that sends data directly to all operators using unicast communication. Unicast means that data is sent directly to an operator instead of being gossiped around the network. Although this may seem to centralise the system, it does not: the disperser is not directly responsible for DA; it just moves data. The actual storage is spread across many nodes in the network. Moreover, while the centralised disperser is part of the current architecture, the EigenDA team suggests it will move towards decentralised dispersal in the future.

This direct approach avoids the delays and inefficiencies of P2P sharing, allowing EigenDA to verify data availability much faster and more efficiently. EigenDA ensures quicker data confirmation and enhances overall performance by eliminating time-consuming gossip protocols.

These three factors allow EigenDA to scale horizontally: as more nodes join the network, total throughput grows. Currently, the limit is 200 operators.

2. Strong Trust Model

Most data availability (DA) solutions, such as Celestia and Avail, require node operators to stake their native tokens to enhance the token’s utility. In contrast, EigenDA adopts a unique approach by implementing dual staking with both ETH and EIGEN tokens. To join the respective ETH and EIGEN quorums, an operator must restake at least 32 ETH and 1 EIGEN.

But why mandate operators to stake EIGEN in addition to ETH? This dual staking mechanism enables EigenDA to penalise malicious operators through token forking rather than relying solely on Ethereum for enforcement. This process, known as intersubjective forking, allows for more efficient and effective punishment of bad actors. Let’s unpack how this works.

One of the most critical aspects of maintaining the network integrity of a DA service is combating data withholding attacks. This type of attack occurs when a block producer proposes a new block but withholds the transaction data necessary to validate it. Typically, blockchains ensure block availability by requiring validators to download and validate the entire block. However, if a majority of validators act maliciously and approve a block with missing data, the block might still be added to the chain, though full nodes will eventually reject it.

While full nodes can detect invalid blocks by fully downloading them, light clients lack this capability. Techniques like Data Availability Sampling (DAS) help light clients verify data availability without downloading the entire block, thereby keeping their resource requirements low.

In DAS, nodes do not need to download entire blobs of data to verify their availability. Instead, they randomly sample small portions of the data chunks stored across various nodes. This sampling approach significantly reduces the amount of data each node must handle, enabling quicker verification and lower resource consumption.
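
The power of sampling comes from how quickly the odds stack up against a withholder. If a fraction w of the chunks is missing and a client samples s chunks independently at random, every sample must dodge the gap, which happens with probability (1 − w)^s. A small illustration (the numbers are hypothetical, not any protocol’s actual parameters):

```python
# Chance a light client misses withheld data after s random samples,
# when a fraction w of the chunks is withheld: (1 - w) ** s.
def miss_probability(w: float, s: int) -> float:
    return (1 - w) ** s

for s in (5, 10, 20, 30):
    print(f"{s:2d} samples -> miss chance {miss_probability(0.25, s):.4%}")
# With a quarter of the chunks withheld, 30 samples miss it <0.02% of the time.
```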

But what happens if some nodes don’t comply and refuse to store or provide the required data? Traditionally, the response would be to report these misbehaving nodes to Ethereum, which would then slash their stakes. However, forcing a potentially malicious node to post all its data on Ethereum to prove its innocence is not feasible, for the following reasons:

  1. High Costs: Posting large amounts of data on Ethereum is prohibitively expensive. Ethereum’s blockspace is already highly sought after, and adding significant data burdens would lead to exorbitant fees and further network congestion. Let’s drive the point home with an example: storing the first 32 bytes on Ethereum costs 20k gas, and each subsequent 32-byte chunk costs 5k gas. Storing 1 GB (1,073,741,824 bytes) of data would cost 20k + (1,073,741,824/32 – 1) × 5k = 167,772,175k gas. If gas trades at 30 Gwei, the total cost is 5,033,165,250,000 gwei, or ~5,033 ETH, roughly $13 million with ETH at $2,600. (The snippet after this list reproduces the arithmetic.)
  2. Scalability Issues: Ethereum’s current throughput and block size limits mean that processing large data posts from multiple DA services would strain the network, causing delays and inefficiencies.
  3. Transaction Latency: The time it takes to process and confirm large data uploads on Ethereum would slow down the punitive process, allowing malicious actors to potentially continue their harmful activities longer than desired.
  4. Inefficient Enforcement: Relying on Ethereum’s own mechanisms for slashing would involve complex coordination among validators, resulting in higher latency and making it impractical for the frequent enforcement actions DA services require.
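
For the cost point in particular, the arithmetic from item 1 is easy to reproduce (assuming, as the text does, a flat 5k gas per additional 32-byte word; actual SSTORE pricing varies with slot state):

```python
GB = 1_073_741_824                    # 1 GB in bytes, as quoted above
words = GB // 32                      # 33,554,432 32-byte chunks
gas = 20_000 + (words - 1) * 5_000    # 167,772,175,000 gas
eth = gas * 30 / 1e9                  # at 30 gwei per gas -> ~5,033 ETH
print(f"{gas:,} gas -> {eth:,.0f} ETH -> ${eth * 2600:,.0f} at $2,600/ETH")
```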

Given these challenges, EigenDA employs intersubjective forking as a more efficient and cost-effective method to enforce penalties against malicious operators. Here’s how it works:

All reasonable and honest observers within the EigenDA network can independently verify that an operator is not serving data when requested. Upon verification, EigenDA can initiate a fork of the EIGEN token, effectively slashing the malicious operator’s stake. This process bypasses the need to involve Ethereum directly, thereby reducing costs and speeding up the punitive process.

Intersubjective forking leverages the collective agreement of multiple observers to enforce network rules, ensuring that malicious operators are swiftly and efficiently penalised without the overhead of traditional methods. This robust trust model enhances EigenDA’s security and reliability, making it a compelling choice among DA solutions.

3. Customisability

Attestation is required to ensure the validity and availability of data within a blockchain system. It acts as a verification process where participants, like validators or stakers, confirm that the data in a block is correct and accessible to everyone. Without attestation, there would be no guarantee that the proposed data is legitimate or that it hasn’t been withheld or tampered with, which could lead to a breakdown in trust and potential security vulnerabilities. Attestation ensures transparency and prevents malicious actions, such as withholding data or proposing invalid blocks.

Custom Quorum

EigenDA has a feature called Custom Quorum, where two separate groups must verify data availability. One group consists of ETH restakers (the ETH quorum), and the other could be stakers of the rollup’s native token. Both groups work independently, and EigenDA only fails if both are compromised. So, projects that don’t want to rely solely on EigenDA’s attestation can employ a custom quorum, giving developers the option to layer their own checks on top of EigenDA’s. A minimal sketch of the dual-quorum check follows.
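
Here is what a dual-quorum confirmation rule could look like in code. The class names, stake figures, and thresholds are all illustrative assumptions, not EigenDA’s actual parameters:

```python
from dataclasses import dataclass

@dataclass
class Quorum:
    name: str
    total_stake: float
    threshold: float  # fraction of stake that must attest, e.g. 0.67

    def is_satisfied(self, attested_stake: float) -> bool:
        return attested_stake >= self.threshold * self.total_stake

def blob_confirmed(attested: dict, quorums: list) -> bool:
    """Confirm only if EVERY quorum meets its threshold, so an attacker
    must compromise both groups to sneak an unavailable blob through."""
    return all(q.is_satisfied(attested.get(q.name, 0.0)) for q in quorums)

eth_q = Quorum("eth_restakers", total_stake=1_000_000, threshold=0.67)
native_q = Quorum("rollup_token_stakers", total_stake=50_000, threshold=0.67)

print(blob_confirmed({"eth_restakers": 700_000, "rollup_token_stakers": 40_000},
                     [eth_q, native_q]))  # True: both quorums attest
print(blob_confirmed({"eth_restakers": 700_000, "rollup_token_stakers": 10_000},
                     [eth_q, native_q]))  # False: native quorum falls short
```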

Pricing flexibility and reserved bandwidth

Rollups currently take on gas price uncertainty and exchange rate risk: they charge fees in their native token while paying Ethereum in ETH for settlement. EigenDA lets rollups and other apps pay for DA in their native tokens and reserve dedicated bandwidth that doesn’t compete with other traffic.

EigenDA has carved out a distinctive position in the data availability landscape with its high throughput and innovative dual quorum mechanism. Its intersubjective forking system and DAS offer robust solutions to critical challenges like data withholding attacks, enhancing network security without over-relying on Ethereum.

However, EigenDA faces two significant hurdles. Firstly, the current cap of 200 operators poses a potential bottleneck for scalability and decentralisation as demand grows. This limitation could become increasingly problematic as more rollups and applications seek reliable data availability solutions.

Secondly, and perhaps more pressingly, EigenDA must navigate the challenge of sustainable revenue generation: DA service revenue has declined significantly for both Celestia and Ethereum.

With data availability fees trending downwards across the industry, EigenDA’s economic model will need to evolve. The project must find new ways to monetise its services without compromising affordability or performance.

EigenDA’s success will largely depend on how it addresses these challenges. Can it expand its operator network without sacrificing security or efficiency? Will it discover new revenue streams or optimise its cost structure to remain competitive in a market of decreasing fees? As the blockchain ecosystem continues to mature, EigenDA’s responses to these questions will play a crucial role in shaping not only its own trajectory but also the broader landscape of blockchain scalability solutions.

Disclaimer:

  1. This article is reprinted from [decentralised]. All copyrights belong to the original author [Saurabh Deshpande]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.
