From Storing the Past to Computing the Future: AO Super-Parallel Computer
Author: YBB Capital Researcher Zeke
Preface
The two mainstream architecture designs into which Web3 blockchains have differentiated have inevitably caused some aesthetic fatigue. Whether it is the proliferating modular public chains or the new L1s that keep emphasizing performance yet never demonstrate a performance advantage, their ecosystems are essentially replicas or minor improvements of the Ethereum ecosystem, and the extremely homogeneous experience has long since cost users any sense of freshness. The recently proposed AO protocol from Arweave is eye-catching by contrast: it aims for ultra-high-performance computing on a storage public chain, even a quasi-Web2 experience. This seems hugely different from the scaling approaches and architectural designs we are familiar with today. So what exactly is AO? And where does the logic that supports its performance come from?
How to understand AO
The name AO is an abbreviation of Actor Oriented, a programming paradigm drawn from the Actor Model of concurrent computation. Its overall design derives from an extension of SmartWeave and likewise takes the Actor Model's message passing as its core concept. Simply put, we can understand AO as a "super-parallel computer" running on the Arweave network through a modular architecture. In terms of implementation, AO is not the modular execution layer we commonly see today, but a communication protocol that standardizes message passing and data processing. The core goal of the protocol is to let different "roles" within the network collaborate through message passing, producing a computing layer whose performance can be stacked without limit, and ultimately giving Arweave, the "giant hard drive", centralized-cloud-level speed, scalable computing power, and extensibility in a decentralized, trustless environment.
AO Architecture
The concept of AO may sound similar to the "Core Time" splitting and recombination that Gavin Wood proposed at last year's Polkadot Decoded conference: both seek to realize a so-called "high-performance world computer" through the scheduling and coordination of computing resources. But the two differ in essence. Polkadot's exotic scheduling is a deconstruction and reorganization of relay-chain blockspace; it does not change Polkadot's architecture much, and although its computing performance breaks through the limit of a single parachain under the slot model, its upper bound is still capped by Polkadot's maximum number of idle cores. AO, in theory, can offer nearly unlimited computing power (in practice this depends on the level of network incentives) and greater degrees of freedom through the horizontal expansion of nodes. From an architectural point of view, AO standardizes how data is processed and how messages are expressed, and completes the ordering, scheduling, and computation of messages through three kinds of network units (subnets). Based on the official materials, their functions can be summarized as follows:

Messenger Unit (MU): relays messages entering the network, signs them, and pushes them to a Scheduler Unit; it also pushes the outbox messages produced by computation onward to their target processes;

Scheduler Unit (SU): assigns each message a unique, monotonically increasing slot number to fix its order, then uploads the ordered messages to Arweave;

Compute Unit (CU): loads a process together with its ordered message log, computes the resulting state, and returns a signed result that others can check.

The sketch below illustrates this division of labor.
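What follows is a minimal, purely illustrative Python sketch of the three-unit pipeline. The class names, the Message shape, and the in-memory log are hypothetical simplifications, not AO's actual interfaces: on AO, the scheduler's log would be persisted to Arweave and the units would be separate networked services.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    target: str          # process ID the message is addressed to
    data: str            # payload
    slot: int = -1       # ordering number assigned by the scheduler

class SchedulerUnit:
    """Assigns each message a unique, monotonically increasing slot
    and persists it (on AO, the log would be written to Arweave)."""
    def __init__(self):
        self.log: list[Message] = []
    def schedule(self, msg: Message) -> Message:
        msg.slot = len(self.log)
        self.log.append(msg)     # stand-in for uploading to Arweave
        return msg

class ComputeUnit:
    """Derives process state by applying ordered messages; it has no
    authority over ordering, only over computation."""
    def __init__(self, handler: Callable[[dict, Message], dict]):
        self.handler = handler
    def evaluate(self, state: dict, log: list[Message]) -> dict:
        for msg in sorted(log, key=lambda m: m.slot):
            state = self.handler(state, msg)
        return state

class MessengerUnit:
    """Relays incoming messages to the scheduler."""
    def __init__(self, su: SchedulerUnit):
        self.su = su
    def push(self, msg: Message) -> Message:
        return self.su.schedule(msg)

# A trivial "process": each message increments a counter.
def counter_handler(state: dict, msg: Message) -> dict:
    state["count"] = state.get("count", 0) + 1
    return state

su = SchedulerUnit()
mu = MessengerUnit(su)
cu = ComputeUnit(counter_handler)
for i in range(3):
    mu.push(Message(target="process-1", data=f"msg {i}"))
print(cu.evaluate({}, su.log))   # {'count': 3}
```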
Operating system AOS
AOS can be regarded as the operating system, or terminal tool, of the AO protocol; it can be used to download, run, and manage processes. It provides an environment in which developers can develop, deploy, and run applications: on AOS, developers can use the AO protocol to build and deploy applications and interact with the AO network.
Run logic
The Actor Model advocates the philosophical view that "everything is an actor". All components and entities within this model can be regarded as actors; each has its own state, behavior, and mailbox, and they communicate and collaborate through asynchronous messages, allowing the entire system to organize itself and run in a distributed, concurrent manner. The AO network operates on the same logic: components, and even users, can be abstracted as actors that communicate with one another through the message-passing layer, so that processes become linked and a distributed work system with no shared state, capable of parallel computation, is woven together.
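To make the pattern concrete, here is a generic Actor Model demo in Python, not AO code: each actor owns private state, a behavior, and a mailbox, and actors interact only through asynchronous messages, with nothing shared between them.

```python
import asyncio

class Actor:
    """A minimal actor: private state, a behavior, and a mailbox.
    Actors interact only by sending messages; no state is shared."""
    def __init__(self, name: str, behavior):
        self.name = name
        self.state: dict = {}
        self.mailbox: asyncio.Queue = asyncio.Queue()
        self.behavior = behavior

    async def run(self):
        while True:
            msg = await self.mailbox.get()
            if msg is None:          # shutdown signal
                break
            await self.behavior(self, msg)

    async def send(self, other: "Actor", msg):
        await other.mailbox.put(msg)

async def ping_behavior(self: Actor, msg):
    # Behavior mutates only this actor's own state.
    self.state["seen"] = self.state.get("seen", 0) + 1
    print(f"{self.name} received {msg!r} (total {self.state['seen']})")

async def main():
    a = Actor("ping", ping_behavior)
    task = asyncio.create_task(a.run())
    b = Actor("pong", ping_behavior)
    await b.send(a, "hello")
    await b.send(a, "world")
    await a.mailbox.put(None)        # stop the actor
    await task

asyncio.run(main())
```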
The following briefly describes the steps in the message transfer flow (a sketch of the retrieval step follows the list):

Retrieve messages: the SU receives a GET request and retrieves message information for the given time range and process ID.

Push outbox messages: the MU takes the messages in a process's outbox, produced as the result of a CU evaluation, and pushes each of them onward to its target process.
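The retrieval step can be pictured as an ordinary HTTP query against an SU. The sketch below is hypothetical: the endpoint path, parameter names, and response shape are assumptions made for illustration, not a documented SU API.

```python
import json
import urllib.parse
import urllib.request

def fetch_messages(su_url: str, process_id: str,
                   start: int, end: int) -> list[dict]:
    """Query a Scheduler Unit for the ordered messages of one process
    within a time range. Endpoint and parameters are illustrative."""
    query = urllib.parse.urlencode({
        "process-id": process_id,  # which process's log to read
        "from": start,             # start of the time range (ms)
        "to": end,                 # end of the time range (ms)
    })
    with urllib.request.urlopen(f"{su_url}/messages?{query}") as resp:
        # Assumed response shape: {"edges": [ ...messages... ]}
        return json.loads(resp.read())["edges"]

# Hypothetical usage: read one hour of messages for a process.
# msgs = fetch_messages("https://su.example", "process-1",
#                       1_700_000_000_000, 1_700_003_600_000)
```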
What has AO changed? [1]
Differences from common networks:
The differences between AO’s node network and traditional computing environments:
Support for the project:
AO’s Verifiability Problem
Once we understand AO's framework and logic, a common question usually follows: AO does not seem to have the global state characteristic of traditional decentralized protocols or chains, so can it achieve verifiability and decentralization just by uploading some data to Arweave? In fact, this is the ingenuity of AO's design. AO itself is an off-chain implementation; it does not solve verifiability itself, nor does it change the consensus. The AR team's idea is to separate the functions of AO and Arweave and connect them modularly: AO handles only communication and computation, while Arweave provides only storage and verification. The relationship between the two is more like a mapping: AO only needs to ensure that interaction logs are stored on Arweave, and its state can be projected onto Arweave to create a hologram. This holographic state projection guarantees the consistency, reliability, and determinism of the output when computing state. In addition, AO processes can also be triggered in reverse through the message logs on Arweave to perform specific operations (they can wake up on their own according to preset conditions and schedules and carry out the corresponding dynamic operations).
According to what Hill and Outprog have shared, on a simpler view of the verification logic, AO can be imagined as an inscription computation framework built on a super-parallel indexer. We all know that a Bitcoin inscription indexer verifies inscriptions by extracting JSON information from them, recording balance information in an off-chain database, and completing verification through a set of indexing rules. Although the indexer verifies off-chain, users can check inscriptions by switching among multiple indexers or running an index themselves, so there is no need to worry about indexers misbehaving. As mentioned above, data such as the ordering of messages and the holographic state of processes is uploaded to Arweave. Then, based on the SCP paradigm (the Storage-based Consensus Paradigm; roughly, think of it as putting the indexer's indexing rules on chain; it is also worth noting that SCP appeared much earlier than inscription indexers), anyone can restore AO, or any process on AO, from the holographic data on Arweave. Users do not need to run a full node to verify trusted state: just as with switching indexers, they only need to send a query request to one or more CU nodes through the SU. Given Arweave's high storage capacity and low cost, under this logic AO developers can implement a supercomputing layer whose functionality goes far beyond that of Bitcoin inscriptions.
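Under this SCP framing, verification is just deterministic replay: anyone can download the ordered message log from Arweave, recompute the state locally, and compare it with what a CU reports. A minimal sketch, with a toy balance ledger standing in for a real process:

```python
import hashlib
import json

def replay(log: list[dict]) -> dict:
    """Rebuild process state by replaying the ordered message log.
    Deterministic: the same log always yields the same state."""
    state: dict = {}
    for msg in log:   # log is assumed already ordered by the SU
        if msg["op"] == "mint":
            state[msg["to"]] = state.get(msg["to"], 0) + msg["amount"]
        elif msg["op"] == "transfer":
            state[msg["from"]] -= msg["amount"]
            state[msg["to"]] = state.get(msg["to"], 0) + msg["amount"]
    return state

def state_digest(state: dict) -> str:
    """Canonical hash of the state, comparable across verifiers."""
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Toy log, standing in for holographic data fetched from Arweave.
log = [
    {"op": "mint", "to": "alice", "amount": 100},
    {"op": "transfer", "from": "alice", "to": "bob", "amount": 40},
]
local = state_digest(replay(log))
# A user would compare `local` with the digest reported by one or
# more CU nodes; a mismatch means that CU cannot be trusted.
print(local)
```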
AO and ICP
Let's summarize AO's characteristics with a few keywords: a giant native hard drive, unlimited parallelism, unlimited computation, a modular overall architecture, and holographic state processes. All this sounds very good, but friends familiar with the various public-chain projects in blockchain may notice that AO looks remarkably similar to one once-hyped "king-level" project: the "Internet Computer", ICP.
ICP was once hailed as the last king-level project in the blockchain world and was highly favored by top institutions, reaching an FDV of US$200 billion during the crazy bull market of 2021. But as the tide receded, ICP's token value plummeted; by the 2023 bear market it had fallen nearly 260x from its all-time high. If token price performance is set aside, however, even re-examined today ICP's technology retains many unique features. Many of the amazing advantages and characteristics of AO today were also possessed by ICP back then. So will AO fail the way ICP did? Let's first understand why the two are so similar. ICP and AO are both designed around the Actor Model and both focus on locally running blockchains, so they share many characteristics. An ICP subnet blockchain is formed by a number of independently owned and controlled high-performance hardware devices (node machines) running the Internet Computer Protocol. The protocol is implemented by a number of software components which, as a bundle, are replicas, in that they replicate state and computation across all nodes in a subnet blockchain.
The ICP replication architecture can be divided into four layers, from top to bottom (a simplified sketch of the pipeline follows the list):
Peer-to-Peer (P2P) Network Layer: Used to collect and advertise messages from users, other nodes in their subnet blockchain, and other subnet blockchains. Messages received by the peer layer are replicated to all nodes in the subnet to ensure security, reliability and resiliency;
Consensus layer: selects and orders messages received from users and from different subnets to create blockchain blocks, which can be notarized and finalized through Byzantine fault-tolerant consensus, forming an evolving blockchain. The finalized blocks are passed to the message routing layer;
Message routing layer: used to route user and system-generated messages between subnets, manage the input and output queues of Dapps, and schedule message execution;
Execution environment layer: performs the deterministic computation involved in executing smart contracts by processing messages received from the message routing layer.
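Read as a pipeline, the four layers compose top to bottom. The following schematic Python sketch only mirrors the layer responsibilities described above; all class and method names are hypothetical, and the "consensus" step is a stand-in for real BFT ordering.

```python
class P2PLayer:
    """Collects messages from users and peers and replicates them."""
    def __init__(self):
        self.pool: list[str] = []
    def receive(self, msg: str):
        self.pool.append(msg)

class ConsensusLayer:
    """Selects and orders pooled messages into a finalized block."""
    def make_block(self, pool: list[str]) -> list[str]:
        block = sorted(pool)   # stand-in for BFT ordering
        pool.clear()
        return block

class MessageRouting:
    """Routes finalized messages into per-canister input queues."""
    def route(self, block: list[str]) -> dict[str, list[str]]:
        queues: dict[str, list[str]] = {}
        for msg in block:
            canister, _, payload = msg.partition(":")
            queues.setdefault(canister, []).append(payload)
        return queues

class ExecutionLayer:
    """Deterministically executes each canister's queued messages."""
    def execute(self, queues: dict[str, list[str]]):
        for canister, msgs in queues.items():
            print(f"{canister} executes {msgs}")

p2p, cons, routing, exe = (P2PLayer(), ConsensusLayer(),
                           MessageRouting(), ExecutionLayer())
p2p.receive("ledger:transfer")
p2p.receive("dex:swap")
exe.execute(routing.route(cons.make_block(p2p.pool)))
```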
Subnet Blockchain
A so-called subnet is a collection of interacting replicas that run a separate instance of the consensus mechanism in order to create their own blockchain, on which a set of "containers" (canisters) can run. Each subnet can communicate with other subnets, and all are controlled by the root subnet, which uses chain-key cryptography to delegate its authority to the individual subnets. ICP uses subnets to allow itself to scale indefinitely. The problem with traditional blockchains (and with any single subnet) is that they are limited by the computing power of a single node machine, since every node must run everything that happens on the chain in order to participate in consensus. Running multiple independent subnets in parallel allows ICP to break through this single-machine barrier.
Why it failed
As mentioned above, what the ICP architecture aims for is, simply put, a decentralized cloud server. A few years ago this idea was as shocking as AO is now, so why did it fail? Put bluntly, it aimed high and landed nowhere: it never found a good balance between Web3 and its own vision, which left the project in the awkward position of being neither Web3 enough nor as easy to use as a centralized cloud. In summary, there were three problems. First, ICP's program model, the Canister (the "container" mentioned above), is somewhat similar to AOS and processes in AO, but not the same. ICP programs are encapsulated inside Canisters and are not visible to the outside world; data must be accessed through specific interfaces, and the asynchronous communication this entails is very unfriendly to the contract calls of DeFi protocols. During DeFi Summer, ICP therefore failed to capture the corresponding financial value.
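The composability cost is easy to see in code. In a synchronous EVM-style call, a two-leg swap either fully happens or fully reverts; with asynchronous cross-canister messages, the first leg may commit while the second fails, so the caller has to rebuild atomicity by hand. A hedged Python sketch with hypothetical interfaces, not ICP's actual APIs:

```python
import asyncio

class TokenCanister:
    """A toy token whose methods are only reachable asynchronously."""
    def __init__(self, balances: dict[str, int]):
        self.balances = balances
    async def transfer(self, frm: str, to: str, amount: int) -> bool:
        await asyncio.sleep(0)   # models the inter-canister hop
        if self.balances.get(frm, 0) < amount:
            return False
        self.balances[frm] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
        return True

async def swap(token_a: TokenCanister, token_b: TokenCanister,
               user: str, pool: str, amount: int):
    """Two async legs with no shared transaction: if leg 2 fails,
    leg 1 has already committed and must be manually compensated."""
    leg1 = await token_a.transfer(user, pool, amount)
    if not leg1:
        return "swap failed cleanly"
    leg2 = await token_b.transfer(pool, user, amount)
    if not leg2:
        # No atomic revert available; issue a compensating transfer.
        await token_a.transfer(pool, user, amount)
        return "swap rolled back manually"
    return "swap succeeded"

a = TokenCanister({"alice": 10, "pool": 0})
b = TokenCanister({"pool": 0})            # pool holds no token B!
print(asyncio.run(swap(a, b, "alice", "pool", 10)))
# -> "swap rolled back manually": the atomicity a synchronous
#    DeFi call gets for free has to be rebuilt by hand.
```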
The second point is that the hardware requirements were extremely high, leaving the project insufficiently decentralized. The minimum node hardware configuration ICP published at the time is exaggerated even by today's standards, far exceeding Solana's requirements; even its storage requirements are higher than those of storage public chains.
The third point is the lack of an ecosystem. Even now, ICP is a public chain with extremely high performance, but if there are no DeFi applications, what about other applications? Unfortunately, ICP has not produced a killer application since its birth; its ecosystem has captured neither Web2 users nor Web3 users. After all, with so little decentralization, why not simply use feature-rich, mature centralized applications instead? In the end, though, it is undeniable that ICP's technology remains top-notch. Its advantages of reverse gas, high compatibility, and unlimited scaling are still needed to attract the next billion users, and in the current AI wave, if ICP can make good use of its architectural advantages, it may still have a chance to turn things around.
So, back to the question above: will AO fail like ICP? Personally, I believe AO will not repeat the same mistakes. The latter two points that led to ICP's failure are not problems for AO: Arweave already has a solid ecological foundation, holographic state projection solves the centralization problem, and AO is more flexible in terms of compatibility. The remaining challenges are more likely to center on the design of the economic model, support for DeFi, and a perennial question: outside the financial and storage domains, what form should Web3 take?
Web3 shouldn’t just be a narrative
The word that appears most frequently in the Web3 world must be "narrative", and we have even become accustomed to using a narrative lens to value most tokens. This naturally stems from the dilemma that most Web3 projects have grand visions but are awkward to use. By comparison, Arweave already has many fully implemented applications targeting a Web2-level experience, such as Mirror and ArDrive; if you have used them, you will find it hard to feel any difference from traditional applications. However, as a storage public chain, Arweave is still very limited in value capture, and computation may be the inevitable path. Especially now that AI has become a general trend in the outside world, there are still many natural barriers to integrating Web3 with AI at this stage, something we have also discussed in past articles. Now, with AO's non-Ethereum modular architecture, Arweave gives Web3 x AI a solid piece of new infrastructure. From the Library of Alexandria to the super-parallel computer, Arweave is following a paradigm of its own.
Reference article