Revisiting Marlin: The Verifiable Computing L0 for the Second Half of AI's "New Infrastructure"
In the crypto world, there is never a shortage of new narratives, but only a few can be called both compelling and practical.
For example, since the start of this round of the AI super-narrative, cloud computing has become the core productive force of the coming digital economy, yet traditional Web2 giants have monopolized high-performance GPUs and computing resources. Mid- and long-tail projects have little bargaining power or autonomy, and a wider range of verifiable computing scenarios face an outright shortage of resources.
Therefore, under the tide of AI+Crypto, concepts such as fully homomorphic encryption (FHE) have gradually come to the fore and are widely regarded as among the best solutions for verifiable computation and confidential AI data processing.
This article focuses on Marlin, a crypto veteran positioned as "verifiable cloud computing infrastructure", and explores how this DeAI project, which cuts across narratives such as AI, MEV, oracles, ZK, and TEE, fits into the current AI boom and whether it can bring new variables to the "AI+Crypto" track.
Can the Second Half of AI Do Without "Verifiable Cloud Computing"?
As is well known, beyond the rapid expansion of AIGC large models, numerous AI scenarios still in the early stages of their breakout, such as healthcare, education, and intelligent driving, are unfolding rapidly, and all of them require massive computation.
However, in these segmented scenarios, users' medical, educational, and smart-driving information is critical data tied to economic and even personal safety: data from healthcare, energy systems, bandwidth networks, connected vehicles, and the like not only directly concerns the security of personal confidential data, but also requires broader sharing and collaboration to move the industry forward.
At the same time, the traditional cloud service market is dominated by Internet giants such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), which together hold more than 60% of the cloud computing market, forming a clear seller's market.
One of the most obvious problems with this market structure is the pervasive reliance on centralized cloud servers: developers and projects are essentially tied to the reputation of one or a few giants, effectively surrendering data autonomy and security to Web2 companies.
As a result, cloud service providers have suffered frequent data leaks in recent years, causing serious losses to individuals and organizations. Regardless of how developers and projects view the crypto world's core value of "decentralization", it is far better to design mechanisms that make providers unable to be evil ("can't be evil") than to simply trust that the giants won't be evil ("don't be evil").
Against this backdrop, a careful look at AI cloud computing still reveals an ecological gap relative to Web2's mature underlying cloud services: migrating to confidential computing technology carries high costs, and the industry still lacks a good way to deploy programs quickly and securely. As a result, a series of application functions required by AI+Web3 cannot be met, which limits its development momentum.
Put simply, the second half of the AI cloud computing market urgently needs a complete set of Web3 development tools that deliver decentralization, verifiability, low latency, and low cost, and this is precisely the gap that decentralized, verifiable cloud computing services fill.
As a solution that uses cryptographic techniques to perform computation, verifiable computing allows the correctness of results to be checked without revealing the underlying data, protecting private information and ensuring that critical data is never leaked.
All of this fits Web3 application scenarios extremely well and opens up considerable imaginative space for confidential cloud computing among industry users, which is why technologies such as zero-knowledge proofs (ZKP), multi-party computation (MPC), and the recently popular fully homomorphic encryption (FHE) have attracted particular attention in the market.
And this is exactly what Marlin aims to do: any DePIN, Web2.5, or AI application with demand for low-latency, high-performance computing can deploy on Marlin and obtain a generalized cloud computing solution comparable to traditional cloud services.
Reinterpreting Marlin: Becoming the Verifiable L0 of the AI World
If Marlin's vision in AI verifiable computing had to be summed up in one sentence, it is this: to let the demand side of AI computing, above all the increasingly important training of large AI models, integrate verifiable cloud computing services into their existing products anytime and anywhere, in the form of infrastructure-layer plugins.
This essentially means becoming a verifiable, general-purpose L0 for the AI world: Marlin encapsulates the core functions and offers projects one-click access, built on a TEE-enhanced high-performance node network and a ZK-based verifiable communication network.
Marlin ensures data confidentiality and computational integrity by combining Trusted Execution Environments (TEE), which isolate data and code from other processes at the hardware level, with a co-processor based on zero-knowledge proofs (ZKP). This also guarantees that computed results are accurate, verifiable, and tamper-proof.
Unlike most ZK co-processors, which are designed for specific environments (RISC-V, WASM, or MIPS) and can only handle programs written in compatible languages, Marlin's ZK proof market is circuit-based and therefore language-agnostic: nodes choose which circuits they want to support, and existing Python, C++, or Go applications can be ported directly or run through a zkVM.
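To make the "circuit-based, language-agnostic" idea concrete, here is a minimal sketch of how matching on such a proof market could work. The class and function names are invented for illustration and are not Marlin's actual Kalypso interfaces; the point is only that requests and provers are matched on a circuit identifier, independent of the language the original application was written in.

```python
# Minimal, illustrative sketch of a circuit-based proof marketplace.
# ProofRequest, Prover, and match_request are hypothetical names,
# not Marlin's actual Kalypso API.
from dataclasses import dataclass, field

@dataclass
class ProofRequest:
    circuit_id: str          # identifies the circuit, e.g. "sha256-preimage-v1"
    public_inputs: bytes     # inputs the verifier will see
    max_fee: int             # highest fee the requester is willing to pay

@dataclass
class Prover:
    name: str
    supported_circuits: set[str] = field(default_factory=set)  # circuits this node chose to support
    min_fee: int = 0

def match_request(request: ProofRequest, provers: list[Prover]) -> Prover | None:
    """Pick the cheapest prover that supports the requested circuit."""
    candidates = [p for p in provers
                  if request.circuit_id in p.supported_circuits
                  and p.min_fee <= request.max_fee]
    return min(candidates, key=lambda p: p.min_fee, default=None)

provers = [
    Prover("node-a", {"sha256-preimage-v1", "zkvm-riscv-v2"}, min_fee=5),
    Prover("node-b", {"sha256-preimage-v1"}, min_fee=3),
]
req = ProofRequest("sha256-preimage-v1", public_inputs=b"\x01\x02", max_fee=10)
print(match_request(req, provers).name)  # -> node-b
```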
At the same time, the overall Marlin network architecture can be divided into three main components: Oyster, Kalypso, and the Relay network (Marlin Relay).
Oyster and Kalypso rely on Trusted Execution Environments (TEE) and zero-knowledge proofs (ZKP), respectively, to ensure the correctness and security of computations, while the relay network uses built-in incentive mechanisms to ensure that untrusted nodes can contribute resources without compromising network security:
It is worth noting that every node in Marlin's network is equipped with a TEE, inside which a secure and isolated enclave environment can be built so that information is neither eavesdropped on nor leaked during computation and storage.
Each node can also prove to a counterparty that a given statement is true via a ZK protocol without revealing any specific data behind that statement, protecting the data owner's information while guaranteeing the correctness of the claim.
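As a rough illustration of how a client might decide whether to trust such a node's output, the sketch below checks a simulated enclave measurement and an authentication tag before accepting a result. Real TEE attestation (for example, SGX or Nitro-style quotes) uses vendor-signed documents and asymmetric keys; the HMAC here is only a stand-in for that trust anchor.

```python
# Simplified stand-in for checking a TEE node's output before trusting it.
# The measurement value and key-release scheme are hypothetical.
import hmac
import hashlib

EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-image-v1").hexdigest()  # known-good enclave build
ATTESTATION_KEY = b"key-released-only-to-attested-enclaves"             # illustrative secret

def verify_result(measurement: str, payload: bytes, tag: str) -> bool:
    """Accept a node's result only if it came from the expected enclave image
    and the payload is authenticated by the enclave-held key."""
    if measurement != EXPECTED_MEASUREMENT:
        return False  # node is not running the code we expect
    expected_tag = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected_tag, tag)

# The "enclave" side: compute on private data, return only the result and a tag.
private_input = b"patient-record-123"
result = hashlib.sha256(private_input).hexdigest().encode()
tag = hmac.new(ATTESTATION_KEY, result, hashlib.sha256).hexdigest()

print(verify_result(EXPECTED_MEASUREMENT, result, tag))  # True: accepted without seeing the raw input
```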
Overall, as a verifiable cloud computing L0 for AI, Marlin has a wide range of applications: on top of its decentralized, distributed node network, it can provide compute and storage resources for oracles, ZK prover systems, AI workloads, and more, becoming the data-protection cornerstone for many crypto+AI applications.
Marlin's Imaginative Space as the "L0 of the AI World"
From this perspective, Marlin plays a key infrastructure role in the second half of AI+Web3: its core purpose is to bring truly verifiable computing to the AI and Web3 worlds.
For example, by offering its "L0"-level verifiable cloud computing components as services, Marlin can go a step further and act as a "Lego brick": a key infrastructure component for "verifiable computing+" services that lets DApps in every field gain full verifiable-computing properties.
The most direct application scenario is AI model training: based on its TEE co-processor, Marlin can provide a secure environment for model training and computation, something that matters more and more as large models grow. Beyond ChatGPT, different large-model projects can integrate Marlin, or build on top of it, as a friendly, pluggable piece of verifiable-computing middleware, gaining capabilities through this "verifiable computing+" form.
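What such a "plug-in" integration might look like from the AI application's side is sketched below. VerifiableComputeClient and its endpoint are hypothetical names used only to show the shape of a middleware wrapper that refuses outputs whose attestation does not match the expected enclave build; this is not Marlin's actual SDK.

```python
# Hypothetical sketch of a "verifiable computing+" middleware layer for an AI app.
from dataclasses import dataclass

@dataclass
class VerifiedOutput:
    output: str
    measurement: str   # digest of the enclave image that produced the output

class VerifiableComputeClient:
    def __init__(self, endpoint: str, expected_measurement: str):
        self.endpoint = endpoint
        self.expected_measurement = expected_measurement

    def infer(self, prompt: str) -> str:
        raw = self._call_remote(prompt)                    # network call in a real system
        if raw.measurement != self.expected_measurement:   # reject output from unknown code
            raise RuntimeError("attestation mismatch: untrusted worker")
        return raw.output

    def _call_remote(self, prompt: str) -> VerifiedOutput:
        # Stubbed locally; a real client would call the enclave and parse its attestation.
        return VerifiedOutput(output=f"echo: {prompt}", measurement=self.expected_measurement)

client = VerifiableComputeClient("https://compute.example", expected_measurement="abc123")
print(client.infer("summarize this medical record"))
```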
Even more important is building a decentralized, transparent, and verifiable incentive environment that turns the distributed node network into a decentralized cloud "rental" service network, that is, realizing broader DePIN business logic and lowering cloud computing costs through token incentives:
Aggregate idle computing power and use low-cost, more flexible deployment configurations to help entrepreneurs train more personalized small and medium-sized AI models, greatly improving resource utilization (a rough incentive sketch follows below).
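A back-of-the-envelope illustration of this incentive logic might look like the following; the reward formula and all numbers are purely hypothetical and are not Marlin's actual token economics.

```python
# Illustrative DePIN incentive arithmetic: providers renting out idle GPUs earn
# token rewards proportional to verified work, which is what lets the network
# undercut centralized cloud pricing. Parameters below are made up.

REWARD_PER_GPU_HOUR = 0.8      # tokens per GPU-hour, hypothetical emission rate
UPTIME_BONUS = 0.10            # extra 10% for >99% verified uptime

def provider_reward(gpu_hours_served: float, uptime: float) -> float:
    base = gpu_hours_served * REWARD_PER_GPU_HOUR
    return base * (1 + UPTIME_BONUS) if uptime > 0.99 else base

# A provider contributing 500 idle GPU-hours in a month at 99.5% uptime:
print(round(provider_reward(500, 0.995), 2))  # 440.0 tokens
```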
This is just the tip of the iceberg of the AI application scenarios Marlin can empower as verifiable-computing middleware.
Summary
In short, the main imaginative space Marlin brings to AI+Web3 is as L0-layer infrastructure that empowers AI projects to build natively verifiable computing products and services, in effect acting as middleware for verifiable computing.
As an indispensable core component of the AI+Web3 era, this amounts, to some extent, to the industry's key "infrastructure".
From matching the supply and demand of underlying computing power, to providing oracle data, to decentralized front-end services based on distributed storage, the logic basically forms a closed loop: users and applications can obtain low-cost, flexible, plug-in verifiable computing services, effectively leverage the value of their data, and thereby lay the foundation for diversified application scenarios.
It is clear that in the second half of AI, the verifiable computing track still holds enormous value waiting to be explored, and the on-chain, Web3-based concept of "verifiable computing+" may hold even grander possibilities.
Beyond AI, on-chain entertainment, social, gaming, and almost every application we can think of can further expand the imaginative space of verifiable computation and confidential data services.
In this process of incremental construction, Marlin is well positioned to become the key underlying infrastructure for future AI+Web3 applications, which may also be the greatest imaginative space for verifiable computing in the AI+Web3 era.