ABCDE: An in-depth discussion of co-processors and various solutions

Intermediate · 1/6/2024, 7:31:54 AM
This article details the positioning and solutions of coprocessors.

1. What is a co-processor and what is it not?

If you had to explain coprocessors to a non-technical person or a developer in just one sentence, how would you describe it?

I think what Dr. Dong Mo said may be very close to the standard answer - to put it bluntly, the co-processor is about “giving smart contracts the ability to do Dune Analytics”.

How to deconstruct this sentence?

Imagine a scenario where we use Dune - you want to provide liquidity in Uniswap V3 to earn some fees, so you open Dune and look up the recent trading volume of various pairs on Uniswap, the fee APRs over the past 7 days, the price fluctuation ranges of mainstream pairs, and so on…

Or maybe when StepN became popular, you started speculating on sneakers and were not sure when to sell, so you stared at the StepN data on Dune every day - daily transaction volume, number of new users, sneaker floor price - planning to exit quickly as soon as growth slowed or turned down.

Of course, you may not be the only one staring at this data; the development teams at Uniswap and StepN are watching it too.

This data is very meaningful - it not only helps you judge changes in trends, but can also be used to build new features, just like the “big data” playbook commonly used by major Internet companies.

For example, recommending similar sneakers based on the styles and prices of the shoes a user frequently buys and sells.

For example, launching a “user loyalty reward program” based on how long users hold their Genesis sneakers, giving loyal users more airdrops or benefits.

For example, launching a VIP program similar to a CEX’s, based on the TVL or trading volume an LP or trader provides on Uniswap, granting traders fee discounts or LPs a larger share of fees.

……

At this point, the problem arises - when major Internet companies play with big data + AI, it is basically a black box. They can do whatever they want; users can’t see inside and don’t care.

But on the Web3 side, transparency and trustlessness are our natural political correctness, and we reject black boxes!

So when you want to implement the scenarios above, you face a dilemma - either do it through centralized means, “manually” tallying the index data with Dune in the back office and then deploying the results, or write a smart contract to automatically fetch the data on-chain, perform the calculations, and distribute the points automatically.

The former leaves you with “politically incorrect” trust issues.

The latter’s on-chain gas fees would be astronomical, and your (the project’s) wallet cannot afford them.

This is where the co-processor comes on stage. Combine the two approaches, and at the same time use technical means to make the “back-office manual” step “prove its own innocence”. In other words, use ZK to prove that the off-chain “indexing + computation” part is honest, and then feed the result to the smart contract. The trust problem is solved, and the massive gas fees are gone. Perfect!

Why is it called a “coprocessor”? The term comes from the “GPU” in the history of Web2.0. The GPU was introduced as separate computing hardware, independent of the CPU, because its architecture could handle computations that the CPU fundamentally struggled with, such as massively parallel repeated calculations and graphics computation. It is precisely this “co-processor” architecture that gave us today’s spectacular CG movies, games, AI models, and so on, so the co-processor architecture was a leap in computing architecture.

Now various co-processor teams hope to bring this architecture into Web3.0. The blockchain here is like the CPU of Web3.0: whether L1 or L2, blockchains are inherently unsuited to “data-heavy” and “computation-heavy” tasks. So a blockchain co-processor is introduced to handle such computations, thereby greatly expanding the possibilities of blockchain applications.

So what the coprocessor does can be summarized as two things:

  1. Fetch data from the blockchain, and prove via ZK that the fetched data is authentic and untampered;
  2. Perform the relevant computation on that data, and once again prove via ZK that the result is authentic and untampered. The result can then be called by a smart contract with “low fees + trustlessness” (a minimal sketch of this two-step flow follows below).
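
To make the two steps concrete, here is a minimal Python sketch of the flow. Everything in it is illustrative: `ProvenResult`, `step1_fetch`, and `step2_compute` are hypothetical names, and the placeholder byte strings stand in for real ZK proofs; this is not any particular project’s API.

```python
from dataclasses import dataclass

@dataclass
class ProvenResult:
    value: object   # the output of a query or computation
    proof: bytes    # ZK proof attesting to how `value` was produced

def step1_fetch(block_range: range, query: str) -> ProvenResult:
    # A real coprocessor indexes historical chain data off-chain and emits a
    # storage/state proof; here we just return a placeholder result.
    data = {"user_volume_usd": 1_250_000}  # pretend indexed data
    return ProvenResult(value=data, proof=b"<zk-storage-proof>")

def step2_compute(indexed: ProvenResult) -> ProvenResult:
    # A real coprocessor runs this inside a ZK circuit / zkVM, so the output
    # carries a proof of correct execution over the step-1 data.
    tier = 1 if indexed.value["user_volume_usd"] > 1_000_000 else 0
    return ProvenResult(value=tier, proof=b"<zk-compute-proof>")

# A smart contract only verifies `result.proof` on-chain; it never redoes the
# indexing or the computation, which is where the gas savings come from.
result = step2_compute(step1_fetch(range(18_000_000, 18_010_000), "30d volume"))
print(result.value)  # -> 1
```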

Some time ago, Starkware popularized a concept called Storage Proof (also called State Proof). It basically does step 1, with Herodotus, Lagrange, and others as representative projects. The technical focus of many ZK-based cross-chain bridges is also on step 1.

The co-processor is nothing more than adding step 2 on top of step 1: after extracting data trustlessly, it performs a trustless computation on that data.

So, to describe it precisely in slightly more technical terms, the coprocessor is a superset of Storage Proof/State Proof and a subset of Verifiable Computation.

One thing to note is that the coprocessor is not a Rollup.

Technically speaking, a Rollup’s ZK proof is similar to step 2 above, while step 1, “getting the data”, is implemented directly through the Sequencer. Even a decentralized Sequencer obtains the data through some competition or consensus mechanism, rather than via Storage Proof in ZK form. More importantly, in addition to the computation layer, a ZK Rollup also has to implement a storage layer similar to an L1 blockchain. That storage is permanent, whereas a ZK Coprocessor is “stateless”: after a computation is completed, it retains no state.

From the perspective of application scenarios, the coprocessor can be regarded as a service plug-in for all Layer1/Layer2, while Rollup re-creates an execution layer to help expand the settlement layer.

2. Why does it have to be ZK? Can’t we use OP?

After reading the above, you may have a doubt: does a coprocessor have to use ZK? It sounds a lot like “The Graph with ZK added”, and we don’t seem to have any “big doubts” about results from The Graph.

That’s because when you use The Graph, real money is basically not involved. These indexes serve off-chain services; what you see on the front-end is transaction volume, transaction history, and so on. The data can come from multiple index providers such as The Graph, Alchemy, or Zettablock. But this data cannot be stuffed back into a smart contract, because once you stuff it back in, you add extra trust in the indexing service. When data is tied to real money, especially large TVL, that extra trust becomes critical. Imagine a friend asks to borrow 100 yuan - you might lend it without blinking. But what if they ask to borrow 10,000 yuan, or even 1 million yuan?

But then again, must we really use ZK for all the coprocessing scenarios above? After all, Rollups have two technical routes, OP and ZK, and the recently popular ZKML likewise has a corresponding OPML branch. So one might ask: does the coprocessor also have an OP branch, such as an OP-Coprocessor?

In fact, there is - but we are keeping the specific details confidential for now, and we will release more detailed information soon.

3. Which coprocessor is better - a comparison of several common coprocessor technology solutions on the market

  1. Brevis:

Brevis’s architecture consists of three components: zkFabric, zkQueryNet, and zkAggregatorRollup.

[Figure: Brevis architecture diagram]

zkFabric: Collects block headers from all connected blockchains and generates ZK consensus proofs proving the validity of these block headers. Through zkFabric, Brevis implements an interoperable coprocessor for multiple chains, which allows one blockchain to access any historical data of another blockchain.

zkQueryNet: An open ZK query engine marketplace that accepts data queries from dApps and processes them. The query engines process these queries using verified block headers from zkFabric and generate ZK query proofs. These engines range from highly specialized functions to general query languages, meeting different application needs.

zkAggregatorRollup: A ZK rollup blockchain that acts as the aggregation and storage layer for zkFabric and zkQueryNet. It verifies proofs from both components, stores the verified data, and commits its zk-validated state root to all connected blockchains.
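
To make the division of labor concrete, here is a minimal Python sketch of how the three components might hand off to one another. All function names and payloads are hypothetical illustrations based on the description above, not Brevis’s actual interfaces.

```python
def zk_fabric_prove_header(chain: str, height: int) -> dict:
    # zkFabric: collect a block header from a connected chain and attach a
    # ZK consensus proof that the header is valid on that chain.
    return {"chain": chain, "height": height,
            "header": b"<raw header>", "consensus_proof": b"<zk>"}

def zk_query_net(query: str, proven_header: dict) -> dict:
    # zkQueryNet: a query engine answers a dApp's data query against the
    # verified header and produces a ZK query proof for the answer.
    return {"answer": 42, "query_proof": b"<zk>", "header": proven_header}

def zk_aggregator_rollup(results: list) -> bytes:
    # zkAggregatorRollup: verify proofs from both components, store the
    # verified data, and commit a zk-validated state root to connected chains.
    return b"<zk-validated state root>"

header = zk_fabric_prove_header("ethereum", 18_000_000)
answer = zk_query_net("user 0xabc swap volume", header)
state_root = zk_aggregator_rollup([answer])
```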

zkFabric is the key part that generates proofs for block headers, so ensuring the security of this part is very important. The following is the architecture diagram of zkFabric:

[Figure: zkFabric architecture diagram]

zkFabric’s zero-knowledge-proof (ZKP) based light client makes it completely trust-free: there is no need to rely on any external verification entity, since its security comes entirely from the underlying blockchain and mathematically reliable proofs.

The zkFabric prover network implements circuits for each blockchain’s light client protocol, and the network generates validity proofs for block headers. Provers can leverage accelerators such as GPUs, FPGAs, and ASICs to minimize proving time and cost.

zkFabric relies on the security assumptions of the underlying blockchains and the underlying cryptographic protocols. However, to ensure zkFabric’s effectiveness, at least one honest relayer is needed to synchronize the correct fork. zkFabric therefore adopts a decentralized relay network instead of a single relayer; this relay network can leverage existing structures, such as the State Guardian Network in the Celer network.

Prover allocation: The prover network is a decentralized ZKP prover network; it selects a prover for each proof-generation task and pays fees to these provers.

Current deployment:

Light client protocols have currently been implemented for various blockchains, including Ethereum PoS, Cosmos Tendermint, and BNB Chain, serving as examples and proofs of concept.

Brevis is currently cooperating with Uniswap hooks, which greatly expand custom Uniswap pools. However, compared with a CEX, Uniswap still lacks effective data-processing capabilities to build features that rely on large amounts of user transaction data (such as loyalty programs based on trading volume).

With the help of Brevis, hooks have solved this challenge: they can now read a user’s or LP’s full on-chain history and run customizable computations over it in a completely trustless manner. A toy example of such a computation follows.
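
As an illustration of the kind of computation such a hook could request, consider deriving a fee discount from a user’s (ZK-proven) trade history. The thresholds and names below are made up for the example; this is not Brevis’s actual interface.

```python
def fee_discount_bps(trades_usd: list) -> int:
    """Map a user's 30-day trading volume to a fee discount in basis points."""
    volume = sum(trades_usd)
    if volume >= 10_000_000:
        return 10  # 0.10% discount for the largest traders
    if volume >= 1_000_000:
        return 5
    return 0

# In the coprocessor model, both the trade history (step 1) and this
# computation's result (step 2) would be attested by ZK proofs, so a hook
# could apply the discount on-chain without trusting an off-chain indexer.
print(fee_discount_bps([250_000.0, 900_000.0]))  # -> 5
```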

  2. Herodotus

Herodotus is a powerful data-access middleware that provides smart contracts with the ability to synchronously access current and historical on-chain data across Ethereum layers:

L1 states from L2s

L2 states from both L1s and other L2s

L3/App-Chain states to L2s and L1s

Herodotus proposed the concept of the storage proof, which combines an inclusion proof (confirming the existence of data) with a computation proof (verifying the execution of a multi-step workflow) to prove the validity of one or more elements of a large data set, such as the entire Ethereum blockchain or a rollup.

The core of a blockchain is its database, in which data is cryptographically secured using structures such as Merkle trees and Merkle Patricia trees. What is unique about these data structures is that, once data is securely committed to them, a proof can be generated to confirm that the data is contained within the structure.

The use of Merkle trees and Merkle Patricia trees enhances the security of the Ethereum blockchain. Because data is cryptographically hashed at each level of the tree, it is nearly impossible to alter data without detection: any change to a data point requires changing the corresponding hashes up the tree to the root hash, which is publicly visible in the block header. This fundamental feature of blockchain provides a high level of data integrity and immutability.

Second, these trees allow for efficient data verification via inclusion proofs. For example, when verifying the inclusion of a transaction or the state of a contract, there is no need to search the entire Ethereum blockchain but only the path within the relevant Merkle tree.
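
To make inclusion proofs concrete, here is a minimal binary Merkle tree verifier in Python. It uses SHA-256 and a plain binary tree for simplicity; Ethereum itself uses Keccak-256 and Merkle Patricia Tries, so treat this as a simplified model rather than Ethereum’s exact scheme.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Walk from the leaf to the root using sibling hashes.
    Each proof element is (sibling_hash, side), side being 'L' or 'R'."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

# Tiny 4-leaf tree: root = H(H(H(a)+H(b)) + H(H(c)+H(d)))
a, b, c, d = b"a", b"b", b"c", b"d"
ab, cd = h(h(a) + h(b)), h(h(c) + h(d))
root = h(ab + cd)
# Prove inclusion of c: siblings are H(d) (on the right) and H(a||b) (left).
print(verify_inclusion(c, [(h(d), "R"), (ab, "L")], root))  # -> True
```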

Proof of storage as defined by Herodotus is a fusion of:

  • Inclusion proofs: These confirm the existence of specific data in a cryptographic data structure (such as a Merkle tree or Merkle Patricia tree), ensuring that the data in question is indeed present in the data set.
  • Computation proofs: These verify the execution of a multi-step workflow, proving the validity of one or more elements in a broad data set, such as the entire Ethereum blockchain or a rollup. Beyond indicating the presence of data, they also verify transformations or operations applied to that data.
  • Zero-knowledge proofs: These reduce the amount of data smart contracts need to interact with, allowing a smart contract to confirm the validity of a claim without processing all of the underlying data.

Workflow:

  1. Obtain block hash

Every piece of data on the blockchain belongs to a specific block, and the block hash serves as that block’s unique identifier, summarizing all of its contents via the block header. In the storage-proof workflow, we first need to determine and verify the block hash of the block containing the data we are interested in. This is the first step in the entire process.

  2. Obtain block header

Once the relevant block hash is obtained, the next step is to access the block header. To do this, the provided block header is hashed, and the resulting hash is compared against the block hash obtained in the previous step.

There are two ways to obtain the hash:

(1) Use the BLOCKHASH opcode to retrieve it;

(2) Query the Block Hash Accumulator for the hashes of blocks that have already been verified in history.

This step ensures that the block header being processed is authentic. Once this step is completed, the smart contract can access any value in the block header.
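
A sketch of that authenticity check in Python, assuming the pycryptodome package for Keccak-256 (an Ethereum block hash is the Keccak-256 hash of the RLP-encoded block header):

```python
from Crypto.Hash import keccak  # pip install pycryptodome

def keccak256(data: bytes) -> bytes:
    return keccak.new(data=data, digest_bits=256).digest()

def header_is_authentic(rlp_encoded_header: bytes, trusted_hash: bytes) -> bool:
    """Re-hash the supplied header and compare it against a trusted block
    hash obtained via the BLOCKHASH opcode or a block hash accumulator."""
    return keccak256(rlp_encoded_header) == trusted_hash
```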

  3. Determine the required roots (optional)

With the block header in hand, we can delve into its contents, specifically:

stateRoot: A cryptographic digest of the entire blockchain state at the time of that block.

receiptsRoot: A cryptographic digest of all transaction results (receipts) in the block.

transactionsRoot: A cryptographic digest of all transactions that occurred in the block.

These roots can be decoded from the block header, allowing verification of whether a specific account, receipt, or transaction is included in the block.
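
A sketch of extracting those roots in Python with the pyrlp library. The field positions follow the standard Ethereum block header layout (parentHash, unclesHash, coinbase, stateRoot, transactionsRoot, receiptsRoot, …):

```python
import rlp  # pip install rlp

def header_roots(rlp_encoded_header: bytes) -> dict:
    """Decode an RLP block header and extract the three commitment roots."""
    fields = rlp.decode(rlp_encoded_header)
    return {
        "stateRoot": fields[3],
        "transactionsRoot": fields[4],
        "receiptsRoot": fields[5],
    }
```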

  4. Validate data against the selected root (optional)

With the root we selected, and considering that Ethereum uses a Merkle-Patricia Trie structure, we can use the Merkle inclusion proof to verify that the data exists in the tree. Verification steps will vary depending on the data and the depth of the data within the block.
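
Full Merkle-Patricia Trie verification also has to decode branch/extension/leaf nodes and follow the key nibble by nibble; the sketch below checks only the hash-linking skeleton of a proof (the first node must hash to the trusted root, and each later node must be referenced by its parent), which is the core necessary condition:

```python
import rlp                      # pip install rlp
from Crypto.Hash import keccak  # pip install pycryptodome

def keccak256(data: bytes) -> bytes:
    return keccak.new(data=data, digest_bits=256).digest()

def proof_hash_links_ok(proof_nodes: list, trusted_root: bytes) -> bool:
    """Simplified MPT proof check over a list of RLP-encoded trie nodes,
    ordered root-first. Real verification additionally walks the key path."""
    if keccak256(proof_nodes[0]) != trusted_root:
        return False
    for parent, child in zip(proof_nodes, proof_nodes[1:]):
        child_hash = keccak256(child)
        refs = rlp.decode(parent)  # branch node: 17 items; extension: 2
        if not any(ref == child_hash for ref in refs if isinstance(ref, bytes)):
            return False
    return True
```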

Currently supported networks:

From Ethereum to Starknet

From Ethereum Goerli to Starknet Goerli

From Ethereum Goerli to zkSync Era Goerli

  3. Axiom

Axiom provides a way for developers to query block headers, account values, or storage values from Ethereum’s entire history. Axiom introduces a new, cryptography-based way of accessing on-chain data: all results returned by Axiom are verified on-chain through zero-knowledge proofs, meaning smart contracts can use them without additional trust assumptions.

Axiom recently released halo2-repl, a browser-based halo2 REPL written in JavaScript. It allows developers to write ZK circuits using just standard JavaScript, without having to learn a new language like Rust, install proving libraries, or deal with dependencies.

Axiom consists of two main technology components:

AxiomV1 — Ethereum blockchain cache, starting with Genesis.

AxiomV1Query — Smart contract that executes queries against AxiomV1.

(1) Caching block hash values in AxiomV1:

The AxiomV1 smart contract caches Ethereum block hashes since the genesis block in two forms:

First, the Keccak Merkle roots of batches of 1024 consecutive block hashes are cached. These Merkle roots are updated via ZK proofs verifying that the block header hashes form a commitment chain ending at either one of the 256 most recent blocks directly accessible to the EVM, or a block hash that already exists in the AxiomV1 cache.

Second, Axiom stores the Merkle Mountain Range of these Merkle roots, starting from the genesis block. The Merkle Mountain Range is built on-chain by updating the first part of the cache, the Keccak Merkle roots.
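
A sketch of computing one such Keccak Merkle root over a batch of 1024 block hashes (ten levels of pairwise hashing). This is a simplified model of the cache commitment described above, not AxiomV1’s actual contract code:

```python
from Crypto.Hash import keccak  # pip install pycryptodome

def keccak256(data: bytes) -> bytes:
    return keccak.new(data=data, digest_bits=256).digest()

def merkle_root_1024(block_hashes: list) -> bytes:
    """Pairwise-hash 1024 leaves up ten levels into a single Merkle root."""
    assert len(block_hashes) == 1024
    level = block_hashes
    while len(level) > 1:
        level = [keccak256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Example: root over placeholder hashes for blocks 0..1023
# merkle_root_1024([keccak256(str(n).encode()) for n in range(1024)])
```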

(2) Executing queries in AxiomV1Query:

The AxiomV1Query smart contract is used for batch queries, enabling trustless access to historical Ethereum block headers, accounts, and arbitrary data stored in accounts. Queries can be made on-chain and are fulfilled on-chain via ZK proofs against the block hashes cached by AxiomV1.

These ZK proofs check whether the relevant on-chain data is located directly in the block header, or in the block’s account or storage trie, by verifying inclusion (or non-inclusion) proofs against the Merkle-Patricia Trie.

  4. Nexus

Nexus attempts to build a common platform for verifiable cloud computing using zero-knowledge proofs. It is currently machine-architecture agnostic, supporting RISC-V, WebAssembly, and EVM. Nexus uses SuperNova’s proof system. The team has tested that the memory required to generate a proof is 6GB, and plans further optimization so that ordinary user devices and computers can generate proofs.

To be precise, the architecture is divided into two parts:

Nexus Zero: A decentralized verifiable cloud computing network powered by zero-knowledge proofs and a universal zkVM.

Nexus: A decentralized verifiable cloud computing network powered by multi-party computation, state machine replication, and a universal WASM virtual machine.

Nexus and Nexus Zero applications can be written in traditional programming languages, currently supporting Rust, with more languages to come.

Nexus applications run on a decentralized cloud computing network, which is essentially a general-purpose “serverless blockchain” connected directly to Ethereum. Nexus applications therefore do not inherit Ethereum’s security, but in exchange gain access to greater computing power (compute, storage, and event-driven I/O) thanks to the smaller size of the network. Nexus applications run on a dedicated cloud that reaches internal consensus and provides verifiable “proofs” of computation (not true proofs) through verifiable network-wide threshold signatures on Ethereum.

Nexus Zero applications do inherit Ethereum’s security, as they are universal programs with zero-knowledge proofs that can be verified on-chain over the BN-254 elliptic curve.

Since Nexus can run any deterministic WASM binary in a replicated environment, it is expected to be used as a source of proofs of validity, decentralization, and fault tolerance for applications built on it, including zk-rollup sequencers, optimistic rollup sequencers, and other proof servers, such as Nexus Zero’s zkVM itself.

Disclaimer:

  1. This article is reprinted from [Medium]. All copyrights belong to the original author [ABCDE]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.
