Predicting the future is difficult, if not impossible. Yet we are all in the prediction business in some shape or form and need to make decisions based on where we think the world is headed.
For the first time, we are publishing our annual predictions for what will happen by the end of next year and where the industry is headed. This is joint work between the two arms of Equilibrium - Labs and Ventures.
Before we get into it, here’s how we approached this exercise:
Today, L2Beat lists 120 L2s and L3s (jointly called “Ethereum scaling solutions”). We believe the modularisation of Ethereum will continue to accelerate in 2025, and by the end of the year, there will be more than 2,000 Ethereum scaling solutions—representing a ~17x growth from today.
The launches of new L2s/L3s are driven by both application-specific rollups (gaming, DeFi, payments, social…) and “corporate” L2s (traditional companies expanding on-chain, such as Coinbase or Kraken).
The scaling factor is measured as the total daily average UOPS or TPS of Ethereum scaling solutions compared to Ethereum L1 (metric reported by L2Beat and rollup.wtf). It’s currently fluctuating around 25x, so we’d need at least an 8x increase to raise it above 200x (driven by both scaling existing solutions and new launches).
The L2 scaling factor is both a measure of user demand for applications on Ethereum L2s/L3s and how well the underlying infra manages to scale. More broadly speaking, it showcases the success of the Ethereum rollup-centric scaling roadmap compared to the demand for blockspace on Ethereum L1.
Daily average UOPS of Ethereum scaling solutions vs Ethereum L1 (Source: L2Beat)
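As a rough illustration, the scaling-factor math works out as follows (the UOPS figures below are assumed for illustration, not live L2Beat data; only the ~25x ratio and the 200x target come from the text):

```python
# Back-of-envelope for the L2 scaling factor.
# Illustrative numbers chosen to match the ~25x ratio reported by L2Beat.
l2_uops = 250.0   # assumed daily-average UOPS across all Ethereum L2s/L3s
l1_uops = 10.0    # assumed daily-average UOPS on Ethereum L1

scaling_factor = l2_uops / l1_uops       # currently fluctuating around 25x
required_multiple = 200 / scaling_factor # growth needed to exceed 200x

print(f"scaling factor: {scaling_factor:.0f}x")
print(f"growth needed to pass 200x: {required_multiple:.0f}x")
```

With a 25x starting point, the required multiple comes out to 8x, matching the prediction above.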
Demand for Solana blockspace has been high over the past year due to a growing defi-ecosystem, memecoin speculation, DePIN, and many other areas of demand. This has enabled proper stress testing and pushed the core team to keep improving the network. While there’s an increasing number of teams working on Solana network extensions, there is no doubt that scaling Solana L1 remains the top priority for the core developer team.
Source: Solana Roadmap
Over the past few months, Solana has been averaging 700-800 non-voting transactions per second, with peaks of up to 3,500. We believe this will grow to an average of >5,000 non-vote transactions per second during 2025, representing a 6-7x increase from current levels. Peak levels are likely much higher than that.
Solana has averaged about 700-800 TPS over recent months (Source: Blockworks Research)
The key network upgrades we expect to make this possible are:
L2s and L3s can publish their data to Ethereum (either as blobs or calldata), to alternative DA layers such as Avail, Celestia, EigenDA, and NearDA, or to an external data-availability committee (in the extreme case, data is stored on just one node).
Today, approximately 35% of all data from L2s/L3s is published to alt-DA layers (the graph below excludes Avail, NearDA, and EigenDA), with the rest published to Ethereum (primarily as blobs). Metrics and dashboards of posted data can be found for Celestia, Ethereum, and GrowThePie.
We believe the share of alt-DAs will grow to >80% during 2025, which, depending on how much the blob target and max are increased in the Pectra update, would represent a 10-30x increase in data posted to alt-DA layers from current levels. This growth will be driven both by high-throughput rollups (such as Eclipse and MegaETH, which are expected to drive growth for Celestia and EigenDA, respectively) and by a growing ecosystem of native rollups launched on top of Celestia and Avail.
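A hedged sanity check on what such an increase implies: the share figures (35% today, 80% predicted) come from the text, while the total-data growth multiple below is purely our illustrative assumption:

```python
# Implied growth in data posted to alt-DA layers.
current_alt_share = 0.35   # share of L2/L3 data on alt-DA today (from text)
future_alt_share = 0.80    # predicted share by end of 2025 (from text)
total_data_growth = 8.0    # ASSUMED growth in total L2/L3 data posted

# Alt-DA data grows by the share shift times the overall data growth.
alt_da_growth = (future_alt_share * total_data_growth) / current_alt_share
print(f"implied alt-DA data growth: ~{alt_da_growth:.0f}x")
```

Under an assumed 8x growth in total posted data, alt-DA volume grows roughly 18x, which sits inside the predicted 10-30x range.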
Source: GrowThePie
Today, only ~25% (30 of 120) of the scaling solutions listed on L2Beat are either validity rollups or validiums (i.e., they leverage ZKPs to prove correct state transitions and post data to Ethereum, to an alt-DA layer, or to an external data-availability committee).
With ZK proving and verification becoming faster and cheaper, it’s becoming increasingly difficult to see the case for optimistic scaling solutions in the longer term. Validity rollups such as Starknet are already breaking records regarding scaling (and we’re only getting started). Meanwhile, ZK-based scaling solutions provide stronger guarantees regarding asynchronous interoperability than their optimistic counterparts. Finally, latency (or time to finality) reduces naturally through faster and cheaper proving and verification without weakening the underlying trust guarantees.
Hence, we believe the share of ZK-based scaling solutions will increase to more than 50% by the end of 2025 (and likely exceed this by a significant margin). Several ZK stacks are expected to launch their production-ready chain-development kits (Polygon, ZK Sync, Scroll, etc.), which makes it easier to deploy new validity rollups or validiums. In addition, there’s increasing interest in converting existing optimistic rollups to validity rollups (for example, by leveraging OP Succinct or Kakarot zkEVM for proving).
While Ethereum is focusing on its rollup-centric roadmap, the L1 still plays an important role for many high-value applications that aren’t as gas-sensitive. Over the past year, we’ve seen multiple calls to raise the gas limit from various individuals within the EF, along with external parties.
The current max gas limit per block is 30m gas (with a target of 15m), which hasn’t changed since 2021. Since then, blocks have been at target almost constantly (50% of the max limit). We believe this will double in 2025 to a new max limit of 60m gas and a block target of 30m gas. However, this is conditional on a) the Fusaka update happening in 2025 and b) the Ethereum core developer community agreeing on a gas limit increase as part of Fusaka.
ZK proving Ethereum blocks enables easier verification of correct execution. This would, for example, benefit light clients, which currently rely only on consensus/validator signatures.
Proving every Ethereum block is already feasible at an annual cost of around $1m by running EVM execution through a general-purpose zkVM (probably already lower by the time this is published, given how quickly things are progressing).
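The $1m annual figure implies a per-block cost in the tens of cents. A quick back-of-envelope, using Ethereum's 12s block time and the annual budget from the text:

```python
# Rough per-block cost of proving every Ethereum block with a zkVM.
seconds_per_year = 365 * 24 * 3600
block_time = 12                                   # seconds per Ethereum slot
blocks_per_year = seconds_per_year // block_time  # ~2.63m blocks

annual_budget = 1_000_000                         # USD, per the estimate above
cost_per_block = annual_budget / blocks_per_year

print(f"{blocks_per_year:,} blocks/year -> ~${cost_per_block:.2f} per proof")
```

That works out to roughly $0.38 per block proof, and the unit cost only falls as proving hardware and software improve.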
While the proving would be delayed by a few minutes (the average time it takes to generate a proof for an Ethereum block today), this would still benefit services that aren’t as time-sensitive. As cost and proving times decrease, relying on zk-proofs becomes feasible for a broader range of use cases. This leads us to the following prediction:
Ethereum’s roadmap includes eventually enshrining its own zkEVM into the core protocol, which would help avoid re-execution and enable other services to easily verify correct execution. However, implementation will likely still take years.
In the meantime, we can leverage general-purpose zkVMs to prove state transitions. zkVMs have seen significant performance improvements over the last year and offer a simple developer experience (e.g. just writing programs in Rust).
Proving Ethereum blocks in under 30s is ambitious, though Risc Zero already claims 90s proving times. In the longer term, however, we need to lower proving times by at least another order of magnitude to enable real-time proving for Ethereum: given a 12s block time, proving needs to happen fast enough to leave room for communication, verification, and voting within the same slot.
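To get a rough sense of the gap to real-time proving, assume an illustrative ~2s of the 12s slot is reserved for propagation, verification, and voting (the 2s overhead is our assumption; the 90s proving time is Risc Zero's claimed figure):

```python
# Gap between current proving times and a real-time proving budget.
block_time = 12.0        # seconds per Ethereum slot
overhead = 2.0           # ASSUMED seconds for propagation/verification/voting
proving_budget = block_time - overhead  # proof must land within ~10s

current_proving = 90.0   # seconds, per Risc Zero's claimed figure
speedup_needed = current_proving / proving_budget

print(f"budget: {proving_budget:.0f}s -> required speedup: ~{speedup_needed:.0f}x")
```

Even under these generous assumptions, proving needs roughly another order-of-magnitude speedup before it fits inside a single slot.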
Today, most ZKPs are generated in a centralized manner by the core team. This is expensive (non-optimal hardware utilization), weakens censorship resistance, and adds complexity for teams who need ZKPs for their product but don’t necessarily want to run their own prover infrastructure.
While it’s possible to build network-specific decentralized proving (i.e. only for a specific L2 or use case), decentralized proving networks can offer cheaper pricing, operational simplicity, and better censorship resistance. The price benefit comes both from decentralized networks’ ability to find the cheapest compute resources globally and from higher hardware utilization rates (users only pay for the compute they use).
Due to these reasons, we believe most projects will choose to outsource their proving (something we’re already seeing with several projects) and that decentralized proving networks will generate more than 90% of all ZK proofs by the end of 2025. Gevulot will be the first production-ready prover network capable of handling large proving volumes, but more will follow as the industry expands.
Before ChatGPT came out, most people did not think about the use cases of AI and LLMs or their benefits. This changed almost overnight, and most people today have interacted with an LLM or are at least familiar with how they work.
A similar transformation is likely to occur in the blockchain privacy space. While many still question how big a problem on-chain privacy is (or aren’t even aware of it), privacy is important for protecting both individuals and companies using blockchains in addition to expanding the expressivity of blockchains (i.e., what is possible to build on top of them).
While privacy is seldom the selling point by itself, the below framework can be used to identify categories where the value of privacy is the highest:
Zama, which develops FHE infrastructure for blockchains and AI, is expected to release the library for their MPC decryption network shortly. This will be the first major open-source library of its kind.
Given that there is very little competition, it could become the de facto standard everyone benchmarks and compares against - similar to what Arkworks and MP-SPDZ did for ZKPs and MPC, respectively. However, this depends heavily on how permissive the license is.
Nym focuses on baselayer and networking privacy. The Nym mixnet can be integrated into any blockchain, wallet, or app to shield IP addresses and traffic patterns. Meanwhile, NymVPN offers a decentralized VPN (currently in public beta), which features both:
To incentivize the supply side, Nym is expected to run an “incentivized privacy provision” to increase the number of nodes provisioning their VPN network. For the demand side, however, they will have to prove that their product is worth using.
10% of Tor’s usage (on average, 2-3m users) would translate to 200-300k users for NymVPN. This is optimistic, but achievable if the team executes on go-to-market and marketing. Cryptoeconomic incentives could also be used in the short term to bootstrap demand and subsidize usage.
In addition to the privacy-first approach taken by teams such as Aztec, Aleo, and Namada, another angle is for existing transparent networks to outsource computation requiring privacy guarantees. This “add-on privacy” or “privacy-as-a-service” approach enables applications and networks to achieve some privacy guarantees without having to re-deploy to a new privacy-centric network and lose out on existing network effects.
There are multiple approaches to private/confidential compute, with providers including those focusing on MPC (Arcium, Nillion, Taceo, SodaLabs…), FHE (Zama, Fhenix, Inco…), and TEE (Secret Network and Oasis Protocol). More information on the current state of the privacy space here. We believe that at least one of the major rollup providers (Optimism, Arbitrum, Base, Starknet, ZK Sync, Scroll, etc) will integrate one or more of these confidential compute providers and enable apps building on top to start using them in production.
Indistinguishability obfuscation (IO), in simplified terms, is a form of encryption that enables hiding (obfuscating) the implementation of a program while still allowing users to execute it. It involves transforming a program or a circuit into a “scrambled” version such that it is difficult to reverse-engineer, but the scrambled program still performs the same function as the original. In addition to providing similar guarantees around verifiable computation as ZKPs, IO could also support private multi-party computation, maintaining secrets, and only using them under certain conditions.
While IO is slow, expensive, and not practically feasible today, the same was true for ZKPs until a few years ago. More recent examples include teams working on MPC and FHE-based programmable privacy in blockchains, which have made significant progress within the last year. The bottom line is that when you combine capable teams with sufficient funding, there can be a lot of progress in a seemingly short period of time.
To our knowledge, there are only a couple of teams working on some implementations today - Sora and Gauss Labs. Given the potential of IO, we would expect to see at least three new startups raise venture funding to accelerate development and make it more practically feasible.
Encrypted mempools are a way to reduce harmful MEV, such as frontrunning and sandwich attacks, by keeping transactions encrypted until the ordering has been committed (commit-reveal). In practice, there are many different approaches with two main tradeoff dimensions:
While the overall benefits of encrypted mempools seem positive, we believe that external protocols will struggle to get adopted. On the other hand, in projects that offer encrypted mempools as part of a broader product, the adoption of encrypted mempools hinges on the success of the broader product. The clearest path to adoption would be enshrining a solution into the core protocol itself, but this will likely take longer than a year to implement (particularly for Ethereum, despite it being on the roadmap).
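The commit-reveal idea behind encrypted mempools can be sketched with plain hash commitments (a deliberate simplification - production designs use threshold or delay encryption rather than the toy `commit` helper below, which is entirely our illustration):

```python
import hashlib
import os

# Toy commit-reveal flow: the sequencer fixes an ordering over opaque
# commitments, so it cannot frontrun or sandwich based on tx contents.

def commit(tx: bytes) -> tuple[bytes, bytes]:
    """User submits only a binding commitment; the tx content stays hidden."""
    salt = os.urandom(16)
    return hashlib.sha256(salt + tx).digest(), salt

# 1. Users commit; the sequencer sees opaque commitments only.
txs = [b"swap 100 USDC for ETH", b"large limit order"]
sealed = [commit(tx) for tx in txs]

# 2. Ordering is committed over the blobs, before any contents are known.
ordering = [c for c, _ in sealed]

# 3. After ordering is final, users reveal (tx, salt); anyone can verify
#    each reveal against the committed ordering.
for (c, salt), tx in zip(sealed, txs):
    assert hashlib.sha256(salt + tx).digest() == c
    assert c in ordering

print("all reveals verified against the committed ordering")
```

The binding property of the hash is what prevents a user from revealing a different transaction than the one ordered; real designs additionally need a mechanism (threshold decryption, delay functions) to handle users who never reveal.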
Directed Acyclic Graph (DAG) based consensus enables separating communication (data propagation) from the consensus layer (linear ordering of transactions) in a way that’s more natural for distributed systems. The data structure makes the ordering deterministic, so as long as everyone (eventually) has the same DAG, all nodes end up with the same ordering.
A key benefit of this approach is reducing the communication overhead. Instead of having a leader who builds and distributes the official blocks, the leader only attests to a finalized sub-DAG. After receiving this attestation, the rest of the nodes can build the equivalent block deterministically and locally. Along with the early pioneers Aptos and Sui, newer protocols, such as Aleo, have also implemented DAG-based consensus. We predict that this trend will continue, and at least one major protocol will decide to transition from proof-of-work or BFT-based proof-of-stake consensus to DAG-based consensus.
We are less confident that the full transition will happen by the end of 2025 due to the complexities of implementation (even when adopting an existing design, such as Narwhal-Bullshark or Mysticeti). That said, we are happy to be proven wrong if some team can execute quickly!
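The core property - that every node holding the same DAG derives the same linear ordering locally, with no leader broadcasting blocks - can be sketched with a toy deterministic topological sort (a heavy simplification of how protocols like Mysticeti order per committed anchor and round; the structure below is our illustration):

```python
# Toy deterministic linearization of a DAG. Each vertex maps to the list
# of parent vertices it references (its causal history).

def linearize(dag: dict[str, list[str]]) -> list[str]:
    """Topological sort with lexicographic tie-breaking: any two nodes
    holding the same DAG compute the exact same order."""
    order: list[str] = []
    seen: set[str] = set()

    def visit(v: str) -> None:
        if v in seen:
            return
        seen.add(v)
        for parent in sorted(dag[v]):  # causal history is ordered first
            visit(parent)
        order.append(v)

    for v in sorted(dag):              # deterministic iteration order
        visit(v)
    return order

# Two validators (A, B) each producing vertices over two rounds.
dag = {"A1": [], "B1": [], "A2": ["A1", "B1"], "B2": ["A1", "B1"]}
print(linearize(dag))
```

Because both the DAG traversal and the tie-breaking are deterministic, the ordering falls out of the data structure itself - which is exactly what lets the leader attest to a sub-DAG instead of distributing full blocks.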
QUIC (Quick UDP Internet Connections) is a modern transport layer protocol developed by Google and later adopted as a standard by the Internet Engineering Task Force (IETF). It was designed to reduce latency, improve connection reliability, and increase security.
QUIC uses UDP (User Datagram Protocol) as its foundation, rather than the traditional TCP used by HTTP/1 and HTTP/2. However, TCP-based HTTP benefits from decades of optimization - both protocol-level tuning and moving workloads into the kernel - which gives it a performance advantage.
Although some proposals for QUIC kernel inclusion already exist, an implementation of QUIC that doesn’t rely on TLS would make hardware acceleration easier. This would alleviate some of the performance issues and likely drive more utilization of QUIC in P2P networks. Today, of the major blockchains, only Solana, Internet Computer, and Sui use QUIC (as far as we know).
While the Solana core team is focusing on improving the L1, we’re already observing the modularisation of Solana. However, one key difference is that Solana network extensions (L2s) focus less on pure scaling and more on providing new experiences for developers (and users) that aren’t currently possible on the L1. Examples include lower latency and custom/sovereign blockspace, which is mostly relevant for use cases that work well in isolation and aren’t as dependent on accessing shared state (e.g. games or some DeFi apps).
Given the user- and product-centric focus of the broader Solana ecosystem, we believe the same will translate over to these network extensions. We expect to see at least one Solana application launching as a rollup/network extension, but where the users don’t notice that they’ve moved away from the Solana L1. Some contenders include apps built on Magic Block or Bullet (ZetaX).
One great example from the Ethereum ecosystem is Payy - a mobile-based application offering private USDC payments. It offers easy onboarding and smooth UX, but under the hood it runs as an Ethereum validium built on Polygon’s tech stack.
Disclaimer: Equilibrium Ventures is an investor in Magic Block and Zeta.
Chain abstraction is an umbrella term for different methods of abstracting away the complexity of navigating blockchains, particularly in a multi-chain world. While early adopters (prosumers) are willing to go through a lot more hassle, chain abstraction can provide a reasonable tradeoff for less experienced users. Another way to look at it is risk shifting, i.e. trusting an external party (such as intent solvers) to manage and handle the multi-chain complexity on behalf of the user.
We expect that by the end of 2025, at least 25% of all on-chain transactions will be generated in a chain abstracted manner, i.e. without end-users needing to know which underlying chain they use.
While chain abstraction does add trust assumptions and obfuscates risk, it’s feasible that we’ll have something akin to “on-chain rating agencies” (e.g. the L2Beats of the world) that grade different solutions. This would allow users to set preferences such as only interacting with chains above a certain level of security (e.g. rollups with forced exits included). Another risk vector relates to the solver market, which should be competitive enough to ensure users get a good outcome and minimize censorship risk.
In the end, prosumers still have the option to do things as before, while less experienced users can outsource the decision-making to a more informed third party.
Validity rollup clusters based on a shared L1 bridge design provide stronger (asynchronous) interoperability guarantees than their optimistic counterparts. The network effects of the rollup cluster also increase with each additional rollup launched on top of it.
We believe most new rollups in 2025 will be launched on ZK stacks with native interoperability. While the cluster is composed of multiple different chains, the aim is for users to feel like they are using a single chain. This enables developers to focus more on the applications, user experience, and onboarding. Examples in this category include zkSync’s Elastic Chains, Polygon’s Agglayer, and Nil’s zkSharding.
While we are starting to see the first applications expand their reach to a larger user base, there is still a lot of work ahead to ensure that the underlying infrastructure can accommodate more users and a wider range of applications.
As an industry, we’ve made significant progress through the past bear market, but there will be new scaling bottlenecks and renewed calls to fund infrastructure. This is a dynamic we’ve observed over multiple cycles now, and we have no reason to believe it won’t repeat this time around. Put another way, we don’t believe there is such a thing as “sufficiently scaled”: with each increase in capacity, new use cases become feasible, driving up demand for blockspace.
Privacy is perhaps the last major problem in blockchains that still needs to be solved. Today, there is a relatively good understanding of the roadmap ahead; it’s just about putting all the pieces together and improving performance. The recent positive verdict in the Tornado Cash case has raised optimism about a more open approach from governments, but a lot of work still remains on both technical and social fronts.
Regarding the user experience, we’ve done a pretty good job at abstracting away a lot of the complexity when using a single blockchain over the past few years. However, with an increasing number of new chains and L2s/L3s launching, it’s becoming increasingly critical to get the cross-chain UX right.
Several of our predictions for next year hinge on ZK proving becoming cheaper and faster to make more use cases feasible. We expect this trend to continue in 2025, driven by software optimizations, more specialized hardware, and decentralized proving networks (which can source the cheapest compute resources globally and allow users to avoid paying for idle time).
All in all, excited for what 2025 has in store. Onwards and upwards!
Predicting the future is difficult, if not impossible. Yet we are all in the prediction business in some shape or form and need to make decisions based on where we think the world is headed.
For the first time, we are publishing our annual predictions for what will happen by the end of next year and where the industry is headed. Joint work between the two arms of Equilibrium - Labs and Ventures.
Before we get into it, here’s how we approached this exercise:
Today, L2Beat lists 120 L2s and L3s (jointly called “Ethereum scaling solutions”). We believe the modularisation of Ethereum will continue to accelerate in 2025, and by the end of the year, there will be more than 2,000 Ethereum scaling solutions—representing a ~17x growth from today.
The launches of new L2s/L3s are driven by both application-specific rollups (gaming, defi, payments, social…) and “corporate” L2s (traditional companies expanding on-chain, such as Coinbase or Kraken).
The scaling factor is measured as the total daily average UOPS or TPS of Ethereum scaling solutions compared to Ethereum L1 (metric reported by L2Beat and rollup.wtf). It’s currently fluctuating around 25x, so we’d need at least an 8x increase to raise it above 200x (driven by both scaling existing solutions and new launches).
The L2 scaling factor is both a measure of user demand for applications on Ethereum L2s/L3s and how well the underlying infra manages to scale. More broadly speaking, it showcases the success of the Ethereum rollup-centric scaling roadmap compared to the demand for blockspace on Ethereum L1.
Daily average UOPS of Ethereum scaling solutions vs Ethereum L1 (Source: L2Beat)
Demand for Solana blockspace has been high over the past year due to a growing defi-ecosystem, memecoin speculation, DePIN, and many other areas of demand. This has enabled proper stress testing and pushed the core team to keep improving the network. While there’s an increasing number of teams working on Solana network extensions, there is no doubt that scaling Solana L1 remains the top priority for the core developer team.
Source: Solana Roadmap
Over the past few months, Solana has been averaging 700-800 non-voting transactions per second, with peaks of up to 3,500. We believe this will grow to an average of >5,000 non-vote transactions per second during 2025, representing a 6-7x increase from current levels. Peak levels are likely much higher than that.
Solana has averaged about 700-800 TPS over recent months (Source: Blockworks Research)
The key network upgrades we expect to make this possible are:
L2s and L3s have the option to publish their data to Ethereum (either as blobs or calldata), alternative DA layers such as Avail, Celestia, EigenDA, and NearDA, or to an external data-availability committee (in the extreme case, data is just stored in one node).
Today, approximately 35% of all data from L2s/L3s is published to alt-DA layers (the graph below excludes Avail, NearDA, and EigenDA), with the rest published to Ethereum (primarily as blobs). Metrics and dashboards of posted data can be found for Celestia, Ethereum, and GrowThePie.
We believe the share of alt-DAs will grow to >80% during 2025, which, depending on how much the target and max blob are increased in the Pectra update, would represent a 10-30x increase in data posted to alt-DA layers from current levels. This growth will be driven by both high-throughput rollups (such as Eclipse and MegaETH, which are expected to drive growth for Celestia and EigenDA, respectively) and a growing ecosystem of native rollups launched on top of Celestia and Avail.
Source: GrowThePie
Today, only ~25% (30 of 120) of the scaling solutions listed on L2Beat are either validity rollups or validiums (leverage ZKPs to prove correct state transitions and post data to Ethereum or an alt-DA layer / an external data-availability committee).
With ZK proving and verification becoming faster and cheaper, it’s becoming increasingly difficult to see the case for optimistic scaling solutions in the longer term. Validity rollups such as Starknet are already breaking records regarding scaling (and we’re only getting started). Meanwhile, ZK-based scaling solutions provide stronger guarantees regarding asynchronous interoperability than their optimistic counterparts. Finally, latency (or time to finality) reduces naturally through faster and cheaper proving and verification without weakening the underlying trust guarantees.
Hence, we believe the share of ZK-based scaling solutions will increase to more than 50% by the end of 2025 (and likely exceed this by a significant margin). Several ZK stacks are expected to launch their production-ready chain-development kits (Polygon, ZK Sync, Scroll, etc.), which makes it easier to deploy new validity rollups or validiums. In addition, there’s increasing interest in converting existing optimistic rollups to validity rollups (for example, by leveraging OP Succinct or Kakarot zkEVM for proving).
While Ethereum is focusing on its rollup-centric roadmap, the L1 still plays an important role for many high-value applications that aren’t as gas-sensitive. Over the past year, we’ve seen multiple calls to raise the gas limit from various individuals within the EF, along with external parties.
The current max gas limit per block is 30m gas (with a target of 15m), which hasn’t changed since 2021. Since then, blocks have been at target almost constantly (50% of the max limit). We believe this will double in 2025 to a new max limit of 60m gas and a block target of 30m gas. However, this is conditional on a) the Fusaka update happening in 2025 and b) the Ethereum core developer community agreeing on a gas limit increase as part of Fusaka.
ZK proving Ethereum blocks enable easier verification of correct execution. This would, for example, benefit light clients who are currently only relying on consensus/validator signatures.
Proving every Ethereum block is already feasible at an annual cost of around $1m by running EVM execution through a general-purpose zkVM (probably already lower by the time this is published, given how quickly things are progressing).
While the proving would be delayed by a few minutes (the average time it takes to generate a proof for an Ethereum block today), this would still benefit services that aren’t as time-sensitive. As cost and proving times decrease, relying on zk-proofs becomes feasible for a broader range of use cases. This leads us to the following prediction:
Ethereum’s roadmap includes eventually enshrining its own zkEVM into the core protocol, which would help avoid re-execution and enable other services to easily verify correct execution. However, implementation will likely still take years.
In the meantime, we can leverage general-purpose zkVMs to prove state transitions. zkVMs have seen significant performance improvements over the last year and offer a simple developer experience (e.g. just writing programs in Rust).
Proving Ethereum blocks under 30s is ambitious, but Risc Zero already claims to achieve 90s proving times. In the longer term, however, we need to lower proving times by at least another order of magnitude to enable real-time proving for Ethereum. Given a 12s block time, proving needs to happen fast enough to allow for communication, verification, and voting.
Today, most ZKPs are generated in a centralized manner by the core team. This is expensive (non-optimal hardware utilization), weakens censorship resistance, and adds complexity for teams who need ZKPs for their product but don’t necessarily want to run their own prover infrastructure.
While it’s possible to build network-specific decentralized proving (i.e. only for a specific L2 or use case), decentralized proving networks can offer cheaper pricing, operational simplification, and better censorship resistance. The price-benefit comes from both decentralized networks’ ability to find the cheapest compute resources globally and higher hardware utilization rates of hardware (users only pay for the compute that they use).
Due to these reasons, we believe most projects will choose to outsource their proving (something we’re already seeing with several projects) and that decentralized proving networks will generate more than 90% of all ZK proofs by the end of 2025. Gevulot will be the first production-ready prover network capable of handling large proving volumes, but more will follow as the industry expands.
Before ChatGPT came out, most people did not think about the use cases of AI and LLMs or their benefits. This changed almost overnight, and most people today have interacted with an LLM or are at least familiar with how they work.
A similar transformation is likely to occur in the blockchain privacy space. While many still question how big a problem on-chain privacy is (or aren’t even aware of it), privacy is important for protecting both individuals and companies using blockchains in addition to expanding the expressivity of blockchains (i.e., what is possible to build on top of them).
While privacy is seldom the selling point by itself, the below framework can be used to identify categories where the value of privacy is the highest:
Zama, which develops FHE infrastructure for blockchains and AI, is expected to release the library for their MPC decryption network shortly. This will be the first major open-source library of its kind.
Given that there is very little competition, it could become the de facto standard everyone benchmarks and compares against - similar to what Arkworks and MP-SPDZ did for ZKPs and MPC. However, this depends a lot on how permissive the license will be.
Nym focuses on baselayer and networking privacy. The Nym mixnet can be integrated into any blockchain, wallet, or app to shield IP addresses and traffic patterns. Meanwhile, NymVPN offers a decentralized VPN (currently in public beta), which features both:
To incentivize the supply side, Nym is expected to run an “incentivized privacy provision” to increase the number of nodes provisioning their VPN network. For the demand side, however, they will have to prove that their product is worth using.
10% of Tor’s usage (on average, 2-3m users) would translate to 200-300k users for NymVPN. This is optimistic, but achievable if the team executes on go-to-market and marketing. Cryptoeconomic incentives could also be used in the short term to bootstrap demand and subsidize usage.
In addition to the privacy-first approach taken by teams such as Aztec, Aleo, and Namada, another angle is for existing transparent networks to outsource computation requiring privacy guarantees. This “add-on privacy” or “privacy-as-a-service” approach enables applications and networks to achieve some privacy guarantees without having to re-deploy to a new privacy-centric network and lose out on existing network effects.
There are multiple approaches to private/confidential compute, with providers including those focusing on MPC (Arcium, Nillion, Taceo, SodaLabs…), FHE (Zama, Fhenix, Inco…), and TEE (Secret Network and Oasis Protocol). More information on the current state of the privacy space here. We believe that at least one of the major rollup providers (Optimism, Arbitrum, Base, Starknet, ZK Sync, Scroll, etc) will integrate one or more of these confidential compute providers and enable apps building on top to start using them in production.
Indistinguishability obfuscation (IO), in simplified terms, is a form of encryption that enables hiding (obfuscating) the implementation of a program while still allowing users to execute it. It involves transforming a program or a circuit into a “scrambled” version such that it is difficult to reverse-engineer, but the scrambled program still performs the same function as the original. In addition to providing similar guarantees around verifiable computation as ZKPs, IO could also support private multi-party computation, maintaining secrets, and only using them under certain conditions.
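The interface IO exposes can be illustrated with a toy "lookup-table" transformation of a small-domain function. To be clear, this is not cryptographic IO - real constructions work over arbitrary circuits and carry formal security guarantees - it only shows the core idea that the scrambled program computes the same function while hiding the original logic:

```python
import hashlib

def toy_obfuscate(program, domain):
    """Toy 'obfuscation': replace a small-domain program with a table
    keyed by hashed inputs. Evaluation still works, but the table does
    not directly spell out the program's logic. NOT real IO - just an
    illustration of the obfuscate-then-evaluate interface."""
    table = {}
    for x in domain:
        key = hashlib.sha256(repr(x).encode()).hexdigest()
        table[key] = program(x)

    def scrambled(x):
        return table[hashlib.sha256(repr(x).encode()).hexdigest()]

    return scrambled

# The scrambled program performs the same function as the original.
secret_program = lambda x: (x * 7 + 3) % 16   # imagine this logic is secret
scrambled = toy_obfuscate(secret_program, range(16))
assert all(scrambled(x) == secret_program(x) for x in range(16))
```

A user holding only `scrambled` can evaluate the function on any input in the domain without learning the arithmetic inside `secret_program` - which is the property real IO aims to provide for general programs.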
While IO is slow, expensive, and not practically feasible today, the same was true for ZKPs until a few years ago. More recent examples include teams working on MPC and FHE-based programmable privacy in blockchains, which have made significant progress within the last year. The bottom line is that when you combine capable teams with sufficient funding, there can be a lot of progress in a seemingly short period of time.
To our knowledge, there are only a couple of teams working on some implementations today - Sora and Gauss Labs. Given the potential of IO, we would expect to see at least three new startups raise venture funding to accelerate development and make it more practically feasible.
Encrypted mempools are a way to reduce harmful MEV, such as frontrunning and sandwich attacks, by keeping transactions encrypted until the ordering has been committed (commit-reveal). In practice, there are many different approaches with two main tradeoff dimensions:
While the overall benefits of encrypted mempools seem positive, we believe that external protocols will struggle to get adopted. Where encrypted mempools ship as part of a broader product, their adoption instead hinges on that product's success. The clearest path to adoption would be enshrining a solution into the core protocol itself, but this will likely take longer than a year to implement (particularly for Ethereum, despite it being on the roadmap).
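The commit-reveal flow described above can be sketched in a few lines. This is a toy model using hash commitments as a stand-in for encryption, with illustrative transaction payloads; the key point is that the ordering is locked in before the sequencer (or any frontrunner) can see transaction contents:

```python
import hashlib

def commit(tx: str, salt: str) -> str:
    """Commitment to a transaction: observers see only the hash."""
    return hashlib.sha256((salt + tx).encode()).hexdigest()

# Phase 1: users submit commitments. The ordering is fixed *before*
# contents are visible, so a frontrunner cannot react to the payload.
committed_order = [
    commit("swap 100 USDC -> ETH", "salt-a"),
    commit("swap 5 ETH -> USDC", "salt-b"),
]

# Phase 2: users reveal (tx, salt). Each reveal is checked against its
# commitment, then executed in the previously committed order.
reveals = {
    committed_order[0]: ("swap 100 USDC -> ETH", "salt-a"),
    committed_order[1]: ("swap 5 ETH -> USDC", "salt-b"),
}
executed = []
for c in committed_order:
    tx, salt = reveals[c]
    assert commit(tx, salt) == c, "reveal does not match commitment"
    executed.append(tx)
```

In practice, production designs replace plain hash commitments with encryption (threshold decryption, FHE, or TEEs) so that transactions can still be decrypted and included even if the sender goes offline and never reveals.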
Directed Acyclic Graph (DAG) based consensus enables separating communication (data propagation) from the consensus layer (linear ordering of transactions) in a way that’s more natural for distributed systems. The data structure makes the ordering deterministic, so as long as everyone (eventually) has the same DAG, all nodes end up with the same ordering.
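The deterministic-ordering property can be sketched as follows. This is a simplified toy (real protocols order committed sub-DAGs anchored by leader vertices), but it shows why two nodes holding the same DAG always derive the same linear order:

```python
import hashlib

def order_dag(parents: dict) -> list:
    """Deterministically linearize a DAG given each vertex's parent set:
    repeatedly pick a vertex whose parents are all already ordered,
    breaking ties by hash. Any node holding the same DAG computes the
    exact same order - no leader needs to distribute a block."""
    ordered, done = [], set()
    pending = set(parents)
    while pending:
        ready = [v for v in pending if parents[v] <= done]
        # deterministic tie-break: lowest hash first
        nxt = min(ready, key=lambda v: hashlib.sha256(v.encode()).hexdigest())
        ordered.append(nxt)
        done.add(nxt)
        pending.remove(nxt)
    return ordered

# Two nodes with the same DAG (vertex -> parent set) agree on the ordering.
dag = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}}
assert order_dag(dag) == order_dag(dict(dag))
```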
A key benefit of this approach is reducing the communication overhead. Instead of having a leader who builds and distributes the official blocks, the leader only attests to a finalized sub-DAG. After receiving this attestation, the rest of the nodes can build the equivalent block deterministically and locally. Along with the early pioneers Aptos and Sui, newer protocols, such as Aleo, have also implemented DAG-based consensus. We predict that this trend will continue, and at least one major protocol will decide to transition from proof-of-work or BFT-based proof-of-stake consensus to DAG-based consensus.
However, we are less confident that the full transition will happen by the end of 2025 due to the complexities of implementation (even when starting from an existing design, such as Narwhal-Bullshark or Mysticeti). That said, we are happy to be proven wrong if some team can execute quickly!
QUIC (Quick UDP Internet Connections) is a modern transport layer protocol developed by Google and later adopted as a standard by the Internet Engineering Task Force (IETF). It was designed to reduce latency, improve connection reliability, and increase security.
QUIC uses UDP (User Datagram Protocol) as its foundation, rather than the traditional TCP used by HTTP/1.1 and HTTP/2. However, TCP benefits from decades of optimization - both protocol-level optimization and moving workloads to the kernel level - which gives it a performance advantage.
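One of QUIC's headline latency benefits is eliminating transport-level head-of-line blocking: a lost packet only stalls its own stream, not every stream multiplexed over the connection. A toy simulation (not real networking code; the delivery model is deliberately simplified) makes the difference concrete:

```python
def deliverable(packets, lost, per_stream: bool):
    """Toy head-of-line-blocking model. `packets` is an ordered list of
    (stream_id, seq) pairs; `lost` marks dropped packets awaiting
    retransmission. TCP-like (per_stream=False): one global byte stream,
    so a single loss stalls all data behind it. QUIC-like
    (per_stream=True): each stream orders independently, so a loss only
    stalls its own stream."""
    delivered = []
    blocked = set()  # streams (or the single global "stream") that hit a gap
    for pkt in packets:
        stream = pkt[0] if per_stream else "all"
        if stream in blocked:
            continue
        if pkt in lost:
            blocked.add(stream)
            continue
        delivered.append(pkt)
    return delivered

packets = [("s1", 0), ("s2", 0), ("s1", 1), ("s2", 1)]
lost = {("s1", 1)}
# TCP-like: the loss on s1 stalls s2's later data as well.
# QUIC-like: only s1 stalls; s2 is delivered in full.
```

For blockchain P2P networks shipping many independent messages concurrently, this per-stream isolation is a large part of QUIC's appeal.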
Although some proposals for QUIC kernel inclusion already exist, an implementation of QUIC that doesn’t rely on TLS would make hardware acceleration easier. This would alleviate some of the performance issues and likely drive more utilization of QUIC in P2P networks. Today, of the major blockchains, only Solana, Internet Computer, and Sui use QUIC (as far as we know).
While the Solana core team is focusing on improving the L1, we’re already observing the modularisation of Solana. However, one key difference is that Solana network extensions (L2s) focus less on pure scaling and more on providing new experiences for developers (and users) that aren’t currently possible on the L1. Examples include lower latency and custom/sovereign blockspace, which is mostly relevant for use cases that work well in isolation and aren’t as dependent on accessing shared state (e.g. games or some DeFi apps).
Given the user- and product-centric focus of the broader Solana ecosystem, we believe the same will translate over to these network extensions. We expect to see at least one Solana application launching as a rollup/network extension where users don’t notice that they’ve moved away from the Solana L1. Some contenders include apps built on Magic Block or Bullet (ZetaX).
One great example from the Ethereum ecosystem is Payy - a mobile-based application offering private USDC payments. It offers easy onboarding and smooth UX, while running in the background as an Ethereum validium built on Polygon’s tech stack.
Disclaimer: Equilibrium Ventures is an investor in Magic Block and Zeta.
Chain abstraction is an umbrella term for different methods of abstracting away the complexity of navigating blockchains, particularly in a multi-chain world. While early adopters (prosumers) are willing to go through a lot more hassle, chain abstraction can provide a reasonable tradeoff for less experienced users. Another way to look at it is risk shifting, i.e. trusting an external party (such as intent solvers) to manage and handle the multi-chain complexity on behalf of the user.
We expect that by the end of 2025, at least 25% of all on-chain transactions will be generated in a chain abstracted manner, i.e. without end-users needing to know which underlying chain they use.
While chain abstraction does add trust assumptions and obfuscates risk, it’s feasible that we’ll have something akin to “on-chain rating agencies” (e.g. the L2Beats of the world) that grade different solutions. This would allow users to set preferences such as only interacting with chains above a certain level of security (e.g. rollups with forced exits included). Another risk vector relates to the solver market, which should be competitive enough to ensure users get a good outcome and minimize censorship risk.
In the end, prosumers still have the option to do things as before, while less sophisticated users can outsource the decision-making to a better-informed third party.
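An intent-based flow is one common way to implement this abstraction: the user states *what* they want, competing solvers quote *how* (including which chain to execute on), and the best quote wins. A toy sketch with entirely illustrative names and numbers:

```python
# The user expresses an intent, not a chain-specific transaction.
intent = {"give": ("USDC", 100), "want": "ETH", "min_out": 0.030}

# Competing solvers quote an output amount and pick the execution chain.
solver_quotes = [
    {"solver": "solver-a", "chain": "arbitrum", "out": 0.0305},
    {"solver": "solver-b", "chain": "base",     "out": 0.0311},
    {"solver": "solver-c", "chain": "optimism", "out": 0.0299},  # below min_out
]

# Quotes below the user's minimum are rejected; the best remaining wins.
valid = [q for q in solver_quotes if q["out"] >= intent["min_out"]]
best = max(valid, key=lambda q: q["out"])

# The winning solver handles bridging and execution; the user only sees
# the output amount, never the underlying chain.
```

This is also where the trust shift mentioned above becomes visible: the user relies on the solver market being competitive and honest, rather than on verifying each chain themselves.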
Validity rollup clusters based on a shared L1 bridge design provide stronger (asynchronous) interoperability guarantees than their optimistic counterparts. The network effects of the rollup cluster also increase with each additional rollup launched on top of it.
We believe most new rollups in 2025 will be launched on ZK stacks with native interoperability. While the cluster is composed of multiple different chains, the aim is for users to feel like they are using a single chain. This enables developers to focus more on the applications, user experience, and onboarding. Examples in this category include zkSync’s Elastic Chains, Polygon’s Agglayer, and Nil’s zkSharding.
While we are starting to see the first applications expand their reach to a larger user base, there is still a lot of work ahead to ensure that the underlying infrastructure can accommodate more users and a wider range of applications.
As an industry, we’ve made significant progress through the past bear market, but there will be new scaling bottlenecks and calls to fund infrastructure again. This is a dynamic we’ve observed over multiple cycles now, and we have no reason to believe it won’t repeat this time around. Put another way, we don’t believe there is such a thing as “sufficiently scaled”. With each increase in capacity, new use cases become feasible - driving up demand for blockspace.
Privacy is perhaps the last major problem in blockchains that still needs to be solved. Today, there is a relatively good understanding of the roadmap ahead; it’s just about putting all the pieces together and improving performance. The recent positive verdict in the Tornado Cash case has raised optimism about a more open approach from governments, but a lot of work still remains on both technical and social fronts.
Regarding the user experience, we’ve done a pretty good job over the past few years at abstracting away much of the complexity of using a single blockchain. However, with more new chains and L2s/L3s launching, it’s becoming increasingly critical to get the cross-chain UX right.
Several of our predictions for next year hinge on ZK proving becoming cheaper and faster to make more use cases feasible. We expect this trend to continue in 2025, driven by software optimizations, more specialized hardware, and decentralized proving networks (which can source the cheapest compute resources globally and allow users to avoid paying for idle time).
All in all, excited for what 2025 has in store. Onwards and upwards!