EigenDA: Revolutionizing Rollup Economics
Today, EigenDA is the largest AVS in terms of both restaked capital and unique operators, with over 3.64m ETH and 70m EIGEN restaked, totaling about $9.1bn of restaked capital, from 245 operators and 127k unique staking wallets. With an influx of alternative Data Availability platforms launching, it becomes difficult to decipher the differences between them, their unique value propositions, and how protocol value accrual may vary. In this article, we’ll dive deep into EigenDA, exploring the unique mechanisms that constitute its design, while also taking a look at the competitive landscape to analyze how this market sector could potentially play out.
Before we dive deep into EigenDA, let’s first understand the concept of data availability (DA) and why it matters. Data availability refers to the assurance that all of the data necessary for verifying transactions and maintaining the blockchain is accessible to all participants (nodes) in the network. DA is one layer of the classical monolithic architecture that others have written about ad nauseam - but to be brief, execution, consensus, and settlement all rely on DA. Without DA, the integrity of a blockchain is essentially compromised.
The reliance of every other part of the stack on DA created a bottleneck for scaling, which is why the Layer 2 roadmap took shape. Following the introduction of Optimistic Rollups in 2019, the L2 future was born. L2s moved execution off-chain but still relied on Ethereum for DA in order to retain Ethereum’s security guarantees. With this paradigm shift, many realized that the advantages offered by L2s could be amplified even further by building dedicated blockchains or services focused solely on addressing the limitations of a monolithic architecture’s DA layer.
While purpose-built DA layers have popped up, offering a realistic opportunity for fees to be driven down via competition and for further experimentation to take place, the DA problem is also being addressed on Ethereum mainnet itself through a process known as Danksharding. The first part of Danksharding was implemented via EIP-4844, which introduced transactions that carry additional data blobs of roughly 128 KB each. These blobs are committed to using KZG commitments (a type of cryptographic commitment), ensuring data integrity and future compatibility with data availability sampling (DAS). Prior to EIP-4844, rollups used calldata to post their transaction data to Ethereum.
Since proto-danksharding went live in the Dencun upgrade back in mid-March, roughly 2.4m blobs have been posted, totaling about 294 GB of data, with over 1,700 ETH paid in fees to the L1. It’s important to note that blob data is not accessible to the EVM and is automatically pruned after roughly 18 days. There is currently a maximum of 6 blobs per block (with a target of 3), or 768 KB in total. For some context for non-technical readers: if blobspace were maxed out for three blocks straight, that’s roughly a GameCube memory card’s worth of data - throwback.
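As a quick sanity check on those figures, here is a minimal sketch of the blob arithmetic, using the nominal 128 KB blob size and the counts cited above rather than fresh data:

```python
# Rough blob arithmetic for the figures cited above (all approximate).
BLOB_BYTES = 128 * 1024          # nominal size of one EIP-4844 blob
MAX_BLOBS_PER_BLOCK = 6          # post-Dencun maximum (target is 3)

# Blobspace in a completely full block:
print(MAX_BLOBS_PER_BLOCK * BLOB_BYTES / 1024)        # 768.0 KB

# Total data behind ~2.4m blobs posted since Dencun:
total_blobs = 2_400_000
print(total_blobs * BLOB_BYTES / 1024**3)             # ~292.97 GiB, in line with the ~294 GB above
```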
This limit is indeed being hit multiple times a day, signaling ample demand for blobspace on Ethereum. While the blob base fee translates to around $5 at the time of writing, it’s worth remembering that this fee is reflexive to the price of ETH, as is most DeFi activity. In times of euphoric price appreciation for ETH, more on-chain activity takes place, which in turn drives more demand for blobspace. To be prepared for increased speculation across DeFi, or to open the network up to as-yet-unseen use cases, the cost of data availability must be driven down further - and there remains plenty of incentive to do so in order to encourage continued user activity.
EigenDA is built on the simple principle that data availability does not require independent consensus to solve, so it is structured to scale linearly: the primary role of operators is simply to handle data storage. To get more granular, there are three main cogs in the EigenDA architecture:
Operators for EigenDA are the parties or entities responsible for running the EigenDA node software, registered in EigenLayer with stake delegated to them. You can think of them much like node operators in a traditional proof-of-stake network. However, instead of being burdened with consensus, their role is to store blobs associated with valid storage requests. In this context, a valid storage request is one where fees are paid and the provided blob chunk verifies against the provided KZG commitment and proof.
Put simply, a KZG commitment allows you to bind a piece of data to a short unique code (the commitment) and later prove, with a compact proof, that a given piece of data is the original. This ensures that the data hasn’t been changed or tampered with, thus maintaining the integrity of the blob.
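To make the commit/prove/verify flow concrete, here is a minimal, illustrative KZG sketch in Python using the py_ecc library (an assumed dependency for this example, not something EigenDA itself ships); the “secret” tau is generated locally, which is only acceptable for a toy example - production systems derive the setup from a trusted ceremony:

```python
# Toy KZG commitment scheme over BLS12-381 using py_ecc (pip install py_ecc).
# For illustration only: tau is generated locally here, which is insecure.
from py_ecc.optimized_bls12_381 import (
    G1, G2, Z1, add, multiply, neg, pairing, curve_order,
)

def setup(max_degree: int, tau: int):
    """Powers of tau in G1 plus [tau]_2 (in production these come from a ceremony)."""
    srs_g1 = [multiply(G1, pow(tau, i, curve_order)) for i in range(max_degree + 1)]
    return srs_g1, multiply(G2, tau)

def commit(coeffs, srs_g1):
    """C = sum_i coeffs[i] * [tau^i]_1 -- one group element for the whole polynomial."""
    acc = Z1
    for c, s in zip(coeffs, srs_g1):
        acc = add(acc, multiply(s, c % curve_order))
    return acc

def evaluate(coeffs, z):
    return sum(c * pow(z, i, curve_order) for i, c in enumerate(coeffs)) % curve_order

def prove(coeffs, z, srs_g1):
    """Proof = commitment to q(X) = (p(X) - p(z)) / (X - z), via synthetic division."""
    y = evaluate(coeffs, z)
    rem = [c % curve_order for c in coeffs]
    rem[0] = (rem[0] - y) % curve_order
    quotient = [0] * (len(coeffs) - 1)
    for i in range(len(coeffs) - 1, 0, -1):
        quotient[i - 1] = rem[i]
        rem[i - 1] = (rem[i - 1] + z * rem[i]) % curve_order
    return commit(quotient, srs_g1), y

def verify(commitment, proof, z, y, srs_g2_tau):
    """Check e(proof, [tau - z]_2) == e(C - [y]_1, [1]_2)."""
    lhs = pairing(add(srs_g2_tau, neg(multiply(G2, z % curve_order))), proof)
    rhs = pairing(G2, add(commitment, neg(multiply(G1, y % curve_order))))
    return lhs == rhs

# Example: commit to p(X) = 3 + X + 4X^2 + X^3 + 5X^4 and open it at z = 7.
coeffs = [3, 1, 4, 1, 5]
srs_g1, srs_g2_tau = setup(len(coeffs) - 1, tau=123456789)
C = commit(coeffs, srs_g1)
proof, y = prove(coeffs, z=7, srs_g1=srs_g1)
print(verify(C, proof, z=7, y=y, srs_g2_tau=srs_g2_tau))   # True
```

The appeal of KZG for DA is visible in the last line: no matter how large the underlying blob is, the commitment and the proof are each a single group element, and verification is one pairing check.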
The Disperser is what the EigenDA documentation refers to as an “untrusted” service, hosted by EigenLabs. Its primary responsibility is interfacing between EigenDA clients, operators, and contracts. Clients of EigenDA make dispersal requests to the disperser, which Reed-Solomon encodes the data (an erasure-coding step that aids data recovery), calculates the encoded blob’s KZG commitment, and generates a KZG proof for each chunk. The disperser then sends the chunks, KZG commitments, and KZG proofs to EigenDA operators, who return signatures. Its last action is to aggregate those signatures and upload them to Ethereum as calldata to the EigenDA contract. It’s important to note that this step is a necessary precondition for slashing operators for potential misbehavior.
The last core component of EigenDA, the Retriever, is a service that queries the EigenDA operators for blob chunks, verifies that blob chunks are accurate, and then reconstructs the original blob for the user. While EigenDA hosts a retriever service, client rollups can also potentially host their own retriever as a sidecar to their sequencer.
Below is the flow of events for how EigenDA operates in reality:
In plain English: the sequencer sends data to EigenDA, which chops it up, stores it, and checks that it’s valid. If everything looks good, the data gets a green light and moves forward; if not, it gets tossed out.
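To make that flow a bit more concrete, below is a highly simplified, hypothetical model of the dispersal path. Real EigenDA uses Reed-Solomon erasure coding, KZG commitments and proofs, and aggregated BLS signatures posted to Ethereum; here those are stood in with naive chunking, SHA-256 hashes, and a simple operator count, purely to illustrate the control flow:

```python
# Hypothetical, heavily simplified sketch of the EigenDA dispersal flow.
import hashlib

CHUNK_SIZE = 32 * 1024        # illustrative only, not EigenDA's actual chunk size
QUORUM = 2 / 3                # assumed fraction of operators that must attest

def disperse(blob: bytes, operators: list[str]) -> bool:
    # 1. "Encode" the blob into chunks and commit to each one
    #    (stand-in for Reed-Solomon encoding + KZG commitments).
    chunks = [blob[i:i + CHUNK_SIZE] for i in range(0, len(blob), CHUNK_SIZE)]
    commitments = [hashlib.sha256(c).hexdigest() for c in chunks]

    # 2. Each operator stores its assigned chunk and attests only if the
    #    chunk verifies against its commitment.
    attestations = []
    for idx, op in enumerate(operators):
        chunk = chunks[idx % len(chunks)]
        commitment = commitments[idx % len(chunks)]
        if hashlib.sha256(chunk).hexdigest() == commitment:
            attestations.append(op)

    # 3. The disperser aggregates attestations; in production the aggregate
    #    signature is posted to Ethereum as calldata, which is what makes
    #    operators accountable (and slashable) for the data they attested to.
    return len(attestations) / len(operators) >= QUORUM

# Usage: a 1 MB blob dispersed to ten hypothetical operators.
print(disperse(b"\x00" * 1_000_000, [f"operator-{i}" for i in range(10)]))   # True
```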
When looking at the competitive landscape for DA services more broadly, EigenDA has a clear advantage over others with regards to throughput: as more operators join the network, its potential throughput scales as well. Additionally, when considering which alternative DA service is most “Ethereum aligned,” it’s not hard to see that the clear choice is EigenDA.
While Celestia delivered breakthrough innovations with regards to data availability sampling (DAS), it is difficult to view it as fully ETH aligned, which, while not mandatory, can certainly make a difference for clients (rollups) deciding which service to use. Celestia has also implemented an interesting strategy with its light node architecture, which could potentially allow for larger blocks, and thus more blobs included in each block, subject to certain constraints.
As it stands today, Celestia appears to have been operationally very successful at reducing costs for rollups, savings that have also been passed on to end users. However, despite this meaningful and impactful innovation, it has seen very little actual traction with regards to fees accrued, despite its multibillion-dollar fully diluted valuation of roughly $5.5 billion at the time of writing. Celestia launched on Halloween of last year; since then, 20 unique rollups have integrated its DA service. Across those 20 rollups, an aggregate 54.94 GB of blobspace data has been posted, allowing the protocol to collect 4,091 TIA, worth around $21k at current prices. To be fair, fees accrued are paid out to stakers and validators, and the price of TIA has varied over time, hitting $19.87 at its all-time high, so the actual dollar amount could vary. Using secondary data, we estimate total fees in dollar terms are more likely around $35k.
EigenDA’s pricing was recently revealed: there is an “on-demand” option along with three separate tiers. The on-demand option offers variable throughput at 0.015 ETH / GB, while “tier 1” allows for 256 KiB/s of throughput for 70 ETH per year. Looking at the current DA landscape on Ethereum mainnet today, here are some assumptions we can make about what potential demand may look like for EigenDA, and what revenue it may generate for restakers.
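Under some simple assumptions (tier 1 billed annually, usable continuously, and decimal gigabytes), we can sketch roughly where tier 1 starts to beat the on-demand rate:

```python
# Hypothetical comparison of EigenDA's on-demand rate vs. tier 1 (assumed to
# be billed annually and usable continuously; all figures approximate).
ON_DEMAND_ETH_PER_GB = 0.015
TIER1_ETH_PER_YEAR = 70
TIER1_BYTES_PER_S = 256 * 1024

tier1_eth_per_month = TIER1_ETH_PER_YEAR / 12                      # ~5.83 ETH
breakeven_gb_per_month = tier1_eth_per_month / ON_DEMAND_ETH_PER_GB
print(round(breakeven_gb_per_month))                               # ~389 GB/month

seconds_per_month = 30 * 24 * 3600
tier1_max_gb_per_month = TIER1_BYTES_PER_S * seconds_per_month / 1e9
print(round(tier1_max_gb_per_month))                               # ~679 GB/month of capacity
```

In other words, under these assumptions a rollup would need to post more than roughly 389 GB of data per month - a bit over half of tier 1’s capacity - before the flat tier beats paying on demand, which frames the throughput discussion below.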
As it stands today, there are roughly 27 rollups posting blob data to the Ethereum L1, based on the data from our query. Each blob posted to Ethereum post-EIP-4844 holds 128 KB of data. Across these 27 rollups, roughly 2.4m blobs have been posted, for an aggregate total of 295 GB of data. Therefore, if all of these rollups were to use the 0.015 ETH / GB on-demand pricing, it would come out to just 4.425 ETH in total fees.
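For reference, the arithmetic behind that estimate, using the figures cited in this piece rather than fresh data, looks like this:

```python
# On-demand cost estimate for the blob data posted since EIP-4844
# (figures are the approximations used in this article).
total_gb = 295                        # aggregate blob data across ~27 rollups
on_demand_eth_per_gb = 0.015
eigenda_cost_eth = total_gb * on_demand_eth_per_gb
print(eigenda_cost_eth)               # 4.425 ETH

# Compare with the ~1,700 ETH those rollups paid in L1 blob fees:
l1_fees_eth = 1_700
print(1 - eigenda_cost_eth / l1_fees_eth)   # ~0.997, i.e. a ~99% aggregate reduction
```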
Now at first glance, this is seemingly problematic. However, it’s important to note that rollups vary greatly in their offerings and architecture. Because of these design differences and their differing user bases, individual rollups post vastly different amounts of blobs and pay vastly different fees to the L1.
For example, for the rollups analyzed in this research piece, here are the blobs posted (count and GB) and the fees paid by each rollup.
From this analysis alone, six rollups have already passed the fee threshold at which opting into EigenDA’s tier 1 pricing would make sense, though on a pure data throughput basis the tier doesn’t yet seem necessary for them. In fact, utilizing EigenDA’s on-demand pricing would still result in a direct cost reduction of around 98.91% on average.
This leaves restakers and other ecosystem stakeholders in a bit of a dilemma. The cost reduction enabled by EigenDA benefits L2s and their users alike, as it will lead to better margins and revenue for L2s, but it doesn’t inspire confidence amongst restakers hoping that EigenDA would be a leader amongst AVSs with regards to restaking rewards.
However, another way to interpret this is that EigenDA’s cost reduction spurs innovation. Over the course of history, we’ve seen myriad examples of cost reduction acting as a key catalyst for growth. For example, the Bessemer process for steel was an innovative technique that significantly reduced the cost and time needed to produce steel, enabling mass production of stronger, higher-quality steel with an 82% cost reduction. One could argue that a similar principle applies to DA services, where the introduction of multiple DA providers not only drastically lowers costs, reinforced by competition, but also spurs the launch of high-throughput rollups, expanding the design space beyond what was previously explored.
For example, Eclipse, an SVM rollup that began posting blobs only 28 days ago, has already become responsible for 86% of the total blob share on Celestia. Its mainnet isn’t even open to the public yet, and while much of this usage could simply be testing for technical robustness, it gives an idea of what is possible for high-throughput rollups and signals that they will consume significantly more DA than the majority of rollups we see today.
So where does this leave us? Well, to hit the $160k monthly revenue target for EigenDA set forth in the team’s blog post, using the tier 1 pricing of 70 ETH annually and assuming an average ETH price of ~$2,500, you would need 11 rollups as paying customers. From our analysis, since EIP-4844 went live in early March, about six rollups have exceeded 70 ETH of spend on L1 fees alone. As discussed, the on-demand pricing would still reduce costs for all of these rollups by ~99%, but ultimately their desired throughput will be the determining factor in whether they opt in to using EigenDA.
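The back-of-the-envelope math behind that customer count, using the assumed ~$2,500 ETH price, is simply:

```python
# Revenue from 11 tier 1 customers at the assumed ETH price.
rollups = 11
tier1_eth_per_year = 70
eth_price_usd = 2_500                       # assumed average price
monthly_revenue_usd = rollups * tier1_eth_per_year * eth_price_usd / 12
print(round(monthly_revenue_usd))           # ~160,417, i.e. the ~$160k/month target
```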
In addition, it is likely that this cost reduction induces demand through the creation of multiple high-throughput rollups such as MegaETH. It is also likely that, in the future, these types of high-performance rollups will be deployable via Rollup-as-a-Service (RaaS) providers like AltLayer and Conduit. In the short term, however, there is still some proving to be done to reach the $160k monthly revenue mark, which would be the breakeven cost assuming only 400 operators supporting EigenDA. Overall, EigenDA opens up new design possibilities that could be very value additive, but it’s not entirely clear how much of that value will be captured by EigenDA and directed back to restakers. Nonetheless, we believe EigenDA is well positioned to capture the majority of DA market share as a provider, and we look forward to continued coverage of one of the most notable AVSs.