Decoding Merlin's Technology: How It Operates

Advanced · Apr 29, 2024
Within the bustling realm of Bitcoin's Layer2 arena, Merlin stands out with its multi-billion dollar TVL, attracting considerable attention. Web3 enthusiast and founder Faust delves into Merlin Chain’s technical framework, offering an interpretation of its publicly disclosed documents and the thought process behind its protocol design. This analysis aims to provide readers with a clear understanding of Merlin's operational workflow and its security architecture.

Since the “Summer of Inscriptions” in 2023, Bitcoin’s Layer2 technologies have been at the forefront of the Web3 revolution. Despite entering the field later than Ethereum’s Layer2 solutions, Bitcoin has leveraged the unique appeal of POW and the smooth launch of spot ETFs, which are free from “securitization” risks, to draw billions of dollars in investment to this burgeoning sector within just six months. Among these, Merlin stands out as the most substantial and followed entity in the Bitcoin Layer2 landscape, commanding billions in total value locked (TVL). Thanks to clear staking incentives and impressive returns, Merlin quickly rose to prominence, surpassing even the well-known Blast ecosystem in a matter of months. With the growing buzz around Merlin, the exploration of its technical infrastructure has captivated an increasing audience. In this article, Geek Web3 focuses on deciphering the technical strategies behind Merlin Chain. By unpacking the publicly available documents and the rationale behind its protocol design, we aim to demystify Merlin’s operational processes and enhance understanding of its security framework, providing a clearer view of how this leading Bitcoin Layer2 solution functions.

Merlin’s Decentralized Oracle Network: An Open Off-Chain DAC Committee

For every Layer2 technology, addressing data availability (DA) and the cost of data publication remains a critical challenge, whether it’s for Ethereum Layer2 or Bitcoin Layer2. Given the inherent limitations of the Bitcoin network, which struggles with large data throughput, strategizing the efficient use of valuable DA space is a significant test for Layer2 developers’ creativity.

It’s evident that if Layer2 projects were to directly publish raw transaction data onto the Bitcoin blockchain, they would fail to achieve both high throughput and low fees. The predominant solutions include highly compressing the data to reduce its size significantly before uploading it to the Bitcoin blockchain, or opting to publish the data off-chain.

Among those employing the first strategy, Citrea stands out. They aim to upload changes in Layer2 states over certain intervals, which involves recording the results of state changes across multiple accounts, along with the corresponding zero-knowledge proofs (ZKPs), onto the Bitcoin blockchain. Under this arrangement, anyone can access the state diffs and ZKPs from the Bitcoin mainnet to track Citrea’s state changes. This approach effectively reduces the size of the data uploaded to the blockchain by over 90%.

Although this greatly compresses the data size, the bottleneck remains obvious: if a large number of accounts change state within a short period, Layer2 must still summarize and upload all of those changes to the Bitcoin chain, so the final data-publication cost cannot be kept very low. The same effect can be seen in many Ethereum ZK Rollups.

Many Bitcoin Layer2s simply take the second path: use a DA solution off the Bitcoin chain, either by building a DA layer themselves or by using Celestia, EigenDA, and the like. B^Square, BitLayer, and the protagonist of this article, Merlin, all adopt this off-chain DA scaling approach.

In a previous Geek Web3 article, “Analysis of B^2 New Version Technology Roadmap: The Necessity of DA and Verification Layer under Bitcoin Chain,” we noted that B^2 directly imitated Celestia and built an off-chain DA network with data-sampling support, called B^2 Hub. “DA data,” such as transaction data or state diffs, is stored off the Bitcoin chain, and only the datahash / merkle root is uploaded to the Bitcoin mainnet.

This essentially treats Bitcoin as a trustless bulletin board: anyone can read the datahash from the Bitcoin chain. After obtaining the DA data from an off-chain data provider, you can check whether it corresponds to the on-chain datahash, that is, whether hash(data1) == datahash1. If the two match, the data the off-chain provider gave you is correct.
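The check described above can be sketched in a few lines. This is a minimal illustration, not Merlin's actual code; the hash function and variable names are assumptions (SHA-256 is used here purely for demonstration):

```python
import hashlib

def verify_da_data(off_chain_data: bytes, on_chain_datahash: str) -> bool:
    """Check that data fetched from an off-chain DA provider matches
    the datahash the sequencer committed to on the Bitcoin chain."""
    return hashlib.sha256(off_chain_data).hexdigest() == on_chain_datahash

# Hypothetical batch and its on-chain commitment:
batch = b"layer2 transaction batch #1"
datahash = hashlib.sha256(batch).hexdigest()

assert verify_da_data(batch, datahash)            # provider served the real data
assert not verify_da_data(b"tampered", datahash)  # tampered data is rejected
```

If the provider serves anything other than the committed batch, the hash comparison fails, which is exactly the "bulletin board" guarantee described above.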

(Diagram: DA Layer in Bitcoin’s Layer2 Explained. Image source: Geek Web3)

This system ensures that data from off-chain nodes correlates with specific “clues” or proofs on Layer1, safeguarding against the potential issue of the DA layer providing false information. However, a significant concern arises if the data’s originator—the Sequencer—fails to distribute the actual data related to a datahash. Suppose the Sequencer only transmits the datahash to the Bitcoin blockchain while intentionally withholding the corresponding data from public access. What happens then?

Consider scenarios where only the ZK-Proof and StateRoot are made public, but the accompanying DA data (like state diffs or transaction data) is not released. While it’s possible to validate the ZK-Proof and ensure the transition from Prev_Stateroot to New_Stateroot is accurate, it leaves unknown which accounts’ states have changed. Under these circumstances, even though users’ assets remain secure, the network’s actual condition remains unclear. No one knows which transactions have been incorporated into the blockchain or which contract states have been updated, effectively rendering Layer2 inoperative, almost as if it has gone offline.

This practice is referred to as “data withholding.” In August 2023, Dankrad from the Ethereum Foundation initiated a discussion on Twitter about a concept known as “DAC.”

In many Ethereum Layer2 setups that utilize off-chain data availability (DA) solutions, there are often a few nodes with special privileges that form a committee known as the Data Availability Committee (DAC). This committee serves as a guarantor, assuring that the Sequencer has indeed released complete DA data (transaction data or state diff) off-chain. The DAC members then create a collective multi-signature. If this multi-signature achieves the required threshold (for example, 2 out of 4), the corresponding contracts on Layer1 are designed to assume that the Sequencer has met the DAC’s verification standards and has genuinely released the full DA data off-chain.
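The Layer1-side rule described above can be sketched as follows. This is a toy model under stated assumptions: HMAC stands in for a real signature scheme, the member keys and 2-of-4 threshold are hypothetical, and none of the names reflect an actual DAC contract:

```python
import hashlib
import hmac

# Hypothetical DAC of 4 members; HMAC keys stand in for real signing keys.
DAC_KEYS = {f"member{i}": f"secret{i}".encode() for i in range(4)}
THRESHOLD = 2  # e.g. 2-of-4, as in the example above

def sign(member: str, datahash: bytes) -> bytes:
    # Stand-in for a real digital signature over the datahash.
    return hmac.new(DAC_KEYS[member], datahash, hashlib.sha256).digest()

def layer1_accepts(datahash: bytes, sigs: dict) -> bool:
    """Contract-side rule: assume the Sequencer released the full DA data
    iff enough valid DAC signatures over the datahash are presented."""
    valid = sum(
        1 for m, s in sigs.items()
        if m in DAC_KEYS and hmac.compare_digest(s, sign(m, datahash))
    )
    return valid >= THRESHOLD

dh = hashlib.sha256(b"batch").digest()
sigs = {"member0": sign("member0", dh), "member3": sign("member3", dh)}
assert layer1_accepts(dh, sigs)  # threshold met, DA assumed published
```

Note that the contract only counts signatures; whether the DA data was genuinely released off-chain rests entirely on the DAC members' honesty, which is the trust assumption the following paragraphs criticize.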


Ethereum Layer2 DAC committees predominantly adhere to the Proof of Authority (POA) model, limiting membership to a select group of nodes that have passed KYC or have been officially designated. This approach has effectively branded the DAC as a hallmark of “centralization” and “consortium blockchain.” Additionally, in certain Ethereum Layer2s utilizing the DAC approach, the sequencer distributes DA data solely to DAC member nodes, with minimal external dissemination. Consequently, anyone seeking DA data must secure approval from the DAC, akin to operating within a consortium blockchain.

It is clear that DACs need to be decentralized. Although Layer2 may not be required to upload DA data directly to Layer1, the DAC’s membership access should be publicly accessible to avoid collusion and malfeasance by a few individuals. (For more on this issue, refer to Dankrad’s prior discussions on Twitter.)

Celestia’s BlobStream proposal fundamentally aims to replace a centralized DAC with Celestia itself. Under this model, an Ethereum L2 sequencer posts DA data to the Celestia blockchain. If two-thirds of Celestia’s nodes sign off on this data, the specialized Layer2 contract on Ethereum accepts that the sequencer has accurately published the DA data, positioning Celestia’s nodes as the guarantors. Given that Celestia operates with hundreds of validator nodes, this larger DAC configuration is considered relatively decentralized.

The DA solution Merlin adopts is actually quite close to Celestia’s BlobStream: both open DAC access through POS, making it more decentralized. Anyone can run a DAC node as long as they stake enough assets. In Merlin’s documentation, such a DAC node is called an Oracle, and it is noted that staking of BTC, MERL, and even BRC-20 tokens will be supported, enabling a flexible staking mechanism along with proxy staking similar to Lido. (The Oracle’s POS staking protocol is essentially one of Merlin’s next core narratives, and the staking interest rates offered are relatively high.)

Here we briefly describe Merlin’s workflow (see the diagram below):

  1. After the Sequencer receives a large number of transaction requests, it aggregates them into a data batch, which is passed to the Prover nodes and the Oracle nodes (the decentralized DAC).

  2. Merlin’s Prover is decentralized, using Lumoz’s Prover-as-a-Service. After receiving multiple data batches, the Prover mining pool generates the corresponding zero-knowledge proofs, which are then sent to the Oracle nodes for verification.

  3. The Oracle nodes verify whether the ZK Proof sent by Lumoz’s ZK mining pool corresponds to the data batch sent by the Sequencer. If the two match and contain no other errors, the verification passes. During this process, the decentralized Oracle nodes generate a multi-signature via threshold signatures, declaring to the outside world that the Sequencer has fully published the DA data and that the corresponding ZKP is valid and has passed the Oracle nodes’ verification.

  4. The Sequencer collects the multi-signature results from the Oracle nodes. When the number of signatures meets the threshold, it sends the signature information, together with the datahash of the DA data (the data batch), to the Bitcoin chain for the outside world to read and confirm.

(Merlin working-principle diagram. Source: Geek Web3)

  5. Oracle nodes apply special processing to the computation that verifies the ZK Proof, generate Commitments, and send them to the Bitcoin chain, allowing anyone to challenge a Commitment. The process here is basically the same as BitVM’s fraud-proof protocol. If a challenge succeeds, the Oracle node that issued the Commitment is financially penalized. The data the Oracle publishes to the Bitcoin chain also includes the hash of the current Layer2 state, the StateRoot, as well as the ZKP itself, all of which must be published on the Bitcoin chain for external inspection.
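The steps above can be sketched end to end as a toy pipeline. All function names and data shapes here are illustrative assumptions, not Merlin's actual interfaces; real ZKP generation and threshold signing are replaced by hash-based stand-ins:

```python
import hashlib

def sequencer_make_batch(txs: list) -> bytes:
    # Step 1: aggregate transaction requests into a data batch.
    return b"".join(txs)

def prover_generate_zkp(batch: bytes) -> bytes:
    # Step 2 stand-in: a real prover (Lumoz's ZK mining pool) outputs a
    # proof that the batch's state transition is valid.
    return hashlib.sha256(b"zkp:" + batch).digest()

def oracle_verify(batch: bytes, zkp: bytes) -> bool:
    # Step 3: Oracle nodes check the ZKP corresponds to this exact batch.
    return zkp == hashlib.sha256(b"zkp:" + batch).digest()

def publish_to_bitcoin(batch: bytes, signatures: list, threshold: int) -> dict:
    # Step 4: only the datahash plus the threshold multi-signature
    # actually land on the Bitcoin chain, not the raw batch.
    if len(signatures) < threshold:
        raise ValueError("not enough oracle signatures")
    return {"datahash": hashlib.sha256(batch).hexdigest(), "sigs": signatures}

batch = sequencer_make_batch([b"tx1", b"tx2"])
zkp = prover_generate_zkp(batch)
assert oracle_verify(batch, zkp)
record = publish_to_bitcoin(batch, ["sig_oracle_a", "sig_oracle_b"], threshold=2)
```

Step 5, the BitVM-style challenge on the verification Commitment, is omitted here since it runs as a separate fraud-proof game on the Bitcoin chain.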

References:“A Minimalist Interpretation of BitVM: How to Verify Fraud Proofs on the BTC Chain”

Several details need elaboration here. First, the Merlin roadmap mentions that the Oracle will back up DA data to Celestia in the future. That way, Oracle nodes can safely prune local historical data without having to persist it locally forever. At the same time, the Commitment generated by the Oracle Network is actually the root of a Merkle Tree, and disclosing only the root to the outside world is not enough: the complete data set corresponding to the Commitment must also be made public. This requires a third-party DA platform, which could be Celestia, EigenDA, or another DA layer.

References:“Analysis of B^2 New Version Technology Roadmap: The Necessity of DA and Verification Layer under Bitcoin Chain”

Security model analysis: Optimistic ZKRollup+Cobo’s MPC service

Above we briefly described Merlin’s workflow, and by now you should have a grasp of its basic structure. It is not hard to see that Merlin, B^Square, BitLayer, and Citrea all basically follow the same security model: the optimistic ZK-Rollup.

On first reading, many Ethereum enthusiasts may find this term strange. What is an “optimistic ZK-Rollup”? In the Ethereum community’s understanding, the “theoretical model” of a ZK Rollup rests entirely on the reliability of cryptographic computation and requires no trust assumptions. The word “optimistic,” by contrast, introduces exactly such a trust assumption: most of the time, people optimistically assume the Rollup is error-free and reliable, and once an error does occur, the Rollup operator can be punished through a fraud proof. This is the origin of the name Optimistic Rollup, also known as OP Rollup.

For the Rollup-centric Ethereum ecosystem, an optimistic ZK-Rollup may seem like an odd hybrid, but it fits the current reality of Bitcoin Layer2 exactly. Due to technical limitations, the Bitcoin chain cannot fully verify a ZK Proof; it can only verify a particular step of the ZKP computation under special circumstances. Under this premise, the Bitcoin chain can really only support a fraud-proof protocol: anyone can point out that a certain computation step in the off-chain ZKP verification process contains an error and challenge it via fraud proof. This of course cannot compare with an Ethereum-style ZK Rollup, but it is already the most reliable and secure model Bitcoin Layer2 can currently achieve.

Under the above optimistic ZK-Rollup scheme, suppose there are N parties in the Layer2 network with the authority to initiate challenges. As long as one of those N challengers is honest and reliable, able to detect errors and initiate a fraud proof at any time, the Layer2 state transition is safe. Of course, a reasonably complete optimistic Rollup also needs to ensure that its withdrawal bridge is protected by the fraud-proof protocol. However, almost no Bitcoin Layer2 can currently achieve this, so they must rely on multi-signature/MPC instead. How to choose a multi-signature/MPC solution has thus become an issue closely tied to Layer2 security.

For its bridge, Merlin chose Cobo’s MPC service. Using measures such as hot/cold wallet isolation, the bridged assets are jointly managed by Cobo and Merlin Chain, and any withdrawal must be jointly handled by MPC participants from both Cobo and Merlin Chain. Essentially, the reliability of the withdrawal bridge is guaranteed by the institutions’ credit endorsement. Of course, this is only a stopgap at this stage: as the project matures, the withdrawal bridge can be replaced by an “optimistic bridge” with a 1/N trust assumption by introducing BitVM and the fraud-proof protocol, though this will be considerably harder to implement (almost all official Layer2 bridges currently rely on multi-signature).

Overall, we can sum it up as follows: Merlin introduced a POS-based DAC, a BitVM-based optimistic ZK-Rollup, and Cobo’s MPC asset-custody solution. It solves the DA problem by opening up DAC permissions, secures state transitions by introducing BitVM and fraud-proof protocols, and ensures the reliability of the withdrawal bridge by introducing the MPC service of the well-known custody platform Cobo.

Lumoz’s Two-Step Verification ZKP Submission Strategy

In our previous discussions, we delved into Merlin’s security framework and explored the innovative concept of optimistic ZK-rollups. A key element in Merlin’s technological trajectory is the decentralized Prover. This role is pivotal within the ZK-Rollup architecture, tasked with generating ZK Proofs for batches released by the Sequencer. The creation of zero-knowledge proofs is notably resource-intensive, posing a considerable challenge.

One basic strategy to expedite the generation of ZK proofs is to divide and parallelize the tasks. This process, known as parallelization, involves breaking down the ZK proof generation into distinct parts. Each part is handled by a different Prover, and finally, an Aggregator merges these individual proofs into a unified whole. This approach not only speeds up the process but also distributes the computational load effectively.
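The divide-and-aggregate idea can be sketched as a toy model. Real ZK proof aggregation (e.g. recursive proof composition) is far more involved; here hashes stand in for sub-proofs, and all names are illustrative assumptions:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def prove_chunk(chunk: bytes) -> bytes:
    # Stand-in for one Prover generating a sub-proof for its chunk.
    return hashlib.sha256(b"proof:" + chunk).digest()

def aggregate(proofs: list) -> bytes:
    # Stand-in for the Aggregator folding sub-proofs into one unified proof.
    acc = b""
    for p in proofs:
        acc = hashlib.sha256(acc + p).digest()
    return acc

# Split the batch into chunks and prove them in parallel.
chunks = [b"txs-0", b"txs-1", b"txs-2", b"txs-3"]
with ThreadPoolExecutor() as pool:
    sub_proofs = list(pool.map(prove_chunk, chunks))

unified_proof = aggregate(sub_proofs)
```

The point of the shape is that the expensive per-chunk work runs concurrently across many devices, while the final aggregation step is comparatively cheap.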

To speed up ZK proof generation, Merlin will adopt Lumoz’s Prover-as-a-Service solution. In effect, this gathers a large number of hardware devices into a mining pool, allocates computing tasks across the devices, and distributes corresponding incentives, somewhat similar to POW mining.

In this decentralized Prover solution, one type of attack scenario exists, commonly known as a front-running attack: suppose an Aggregator has constructed a ZKP and broadcasts it in order to claim the reward. After other Aggregators see the ZKP’s content, they publish the same content ahead of it, claiming they generated the ZKP first. How can this be solved?

Perhaps the most instinctive solution is to assign a designated task number to each Aggregator: for example, only Aggregator A may take task 1, and others receive no reward even if they complete it. But this approach cannot resist single-point risk: if Aggregator A suffers a failure or goes offline, task 1 gets stuck and cannot be completed. Moreover, allocating each task to a single entity forgoes the competitive incentive mechanism that would improve production efficiency, so it is not a good approach.

Polygon zkEVM once proposed a method called Proof of Efficiency in a blog post, arguing that competition among different Aggregators should be encouraged, with incentives allocated on a first-come, first-served basis: the Aggregator that submits a ZK-Proof on-chain first receives the reward. However, it did not address how to solve the MEV front-running problem.

Lumoz adopts a two-step ZKP submission method. After an Aggregator generates a ZKP, it does not broadcast the full contents first; instead, it publishes only the ZKP’s hash, in other words, hash(ZKP + Aggregator Address). That way, even if others see the hash value, they do not know the corresponding ZKP content and cannot directly front-run it.

If someone simply copies the hash and publishes it first, it accomplishes nothing, because the hash binds in the address of a specific Aggregator X. Even if Aggregator A publishes that hash first, when the hash’s preimage is revealed, everyone will see that the Aggregator address contained in it is X’s, not A’s.
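This commit-reveal pattern can be sketched directly. This is an illustration of the mechanism as described, not Lumoz's actual implementation; the hash function and serialization are assumptions:

```python
import hashlib

def commit(zkp: bytes, aggregator_addr: str) -> str:
    """Step 1: publish only hash(ZKP + aggregator address) on-chain."""
    return hashlib.sha256(zkp + aggregator_addr.encode()).hexdigest()

def reveal_is_valid(zkp: bytes, claimed_addr: str, onchain_commitment: str) -> bool:
    """Step 2: once the full ZKP is revealed, anyone can recompute the
    hash and check which aggregator the commitment was bound to."""
    return commit(zkp, claimed_addr) == onchain_commitment

zkp = b"...serialized proof..."
c = commit(zkp, "aggregator_X")

# A front-runner who copies the commitment cannot rebind it to their address:
assert reveal_is_valid(zkp, "aggregator_X", c)
assert not reveal_is_valid(zkp, "aggregator_A", c)
```

The binding works because the address is hashed together with the proof: changing the claimed address changes the preimage, so the copied commitment no longer verifies.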

Through this two-step ZKP submission scheme, Merlin (Lumoz) can solve the front-running problem in the ZKP submission process, achieving highly competitive incentives for zero-knowledge proof generation and thereby increasing the speed of ZKP production.

Merlin’s Phantom: Multi-chain interoperability

According to Merlin’s technology roadmap, it will also support interoperability between Merlin and other EVM chains, following an implementation path essentially the same as Zetachain’s earlier idea. With Merlin as the source chain and another EVM chain as the target chain, when Merlin nodes sense a cross-chain interoperability request issued by a user, they trigger the subsequent workflow on the target chain.

For example, an EOA account controlled by the Merlin network can be deployed on Polygon. When a user issues a cross-chain interoperability instruction on Merlin Chain, the Merlin network first parses its content and generates transaction data to be executed on the target chain; the Oracle Network then performs MPC signature processing on the transaction to produce its signature. Merlin’s Relayer node then publishes the transaction on Polygon, completing the subsequent operations via Merlin’s assets held in the EOA account on the target chain.

When the user’s requested operation is complete, the corresponding assets are forwarded directly to the user’s address on the target chain (in theory, they can also be transferred straight back to Merlin Chain). This solution has some obvious benefits: it avoids the fees and friction incurred by cross-chain bridge contracts in traditional asset bridging, and the security of cross-chain operations is directly guaranteed by Merlin’s Oracle Network, with no reliance on external infrastructure. As long as users trust Merlin Chain, such cross-chain interoperability can be assumed to be sound.

Summary

In this article, we briefly interpreted Merlin Chain’s overall technical solution, which we believe will help more people understand Merlin’s general workflow and gain a clearer picture of its security model. Given that the Bitcoin ecosystem is currently in full swing, we believe this kind of technology-popularization effort is valuable and needed by the general public. We will follow Merlin, BitLayer, B^Square, and other projects over the long term, providing deeper analysis of their technical solutions, so stay tuned!

Disclaimer:

  1. This article is reproduced from [Geek Web3]; the copyright belongs to the original author [Faust]. If you have any objection to the reprint, please contact the Gate Learn team, which will handle it promptly according to the relevant procedures.

  2. The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.

  3. Other language versions of this article are translated by the Gate Learn team. Copying, distributing, or plagiarizing the translated articles without referencing Gate.io is prohibited.
