DePIN Ecosystem: The Transformative Force Driving AI Computing Power

Advanced · Jul 08, 2024
This article summarizes and distills the basic framework of the DePIN project, providing an overview using the "WHAT-WHY-HOW" structure to review and summarize the DePIN track. The author then outlines an analytical approach to understanding the DePIN project based on their experience, focusing on detailed analysis of specific "computing power" projects.

Introduction

DePIN, a concept introduced by Messari in November 2022, is not entirely novel but shares similarities with previous phenomena like IoT (Internet of Things). The author considers DePIN as a new form of “sharing economy.”

Unlike previous DePIN trends, the current cycle focuses primarily on the AI trifecta—data, algorithms, and computing power—with a notable emphasis on “computing power” projects such as io.net, Aethir, and Heurist. Therefore, this article specifically analyzes projects related to “computing power.”

1. What’s DePIN?

1.1 Definition

DePIN, short for Decentralized Physical Infrastructure Networks, is a blockchain-powered network that connects physical hardware infrastructure in a decentralized manner. This allows users to access and utilize network resources without permission, often in a cost-effective manner. DePIN projects typically employ token reward systems to incentivize active participation in network construction, following the principle of “the more you contribute, the more you earn.”

The application scope of DePIN is extensive, encompassing fields such as data collection, computing, and data storage. Wherever centralized physical infrastructure networks (CePIN) operate, DePIN alternatives tend to emerge.

Considering the operational and economic model of DePIN projects, DePIN fundamentally operates as a new form of “sharing economy.” Therefore, when conducting an initial analysis of DePIN projects, it can be approached succinctly by first identifying the core business of the project.

If the project mainly involves computing power or storage services, it can be simply defined as a platform providing “shared computing power” and “shared storage” services. This classification helps to clarify the project’s value proposition and its positioning in the market.

Source: @IoTeX

In the sharing-economy model described above, there are three main participants: the demand side, the supply side, and the platform side. First, the demand side sends a request to the platform, such as for ridesharing or accommodation. The platform then forwards the request to the supply side. Finally, the supply side provides the corresponding service, completing the entire transaction.

In this model, the flow of funds begins with the demand side transferring funds to the platform side. After the demand side confirms the order, funds then flow from the platform side to the supply side. The platform side earns profit through transaction fees by providing a stable trading platform and a smooth order fulfillment experience. Think of your experience when using ride-hailing services like DiDi—it exemplifies this model.
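The fund flow described above can be sketched in a few lines of code. This is a minimal illustration, not any real platform's logic; the 20% fee rate and dollar amounts are invented for the example.

```python
# Toy sketch of the three-party sharing-economy settlement described above:
# the demand side pays the platform, and after order confirmation the
# platform forwards the fare minus its fee to the supply side.
# The 20% fee rate is an illustrative assumption.

def settle_order(fare: float, platform_fee_rate: float = 0.20) -> dict:
    """Split a fare between the platform (its fee) and the supplier."""
    platform_cut = fare * platform_fee_rate
    supplier_payout = fare - platform_cut
    return {"platform": platform_cut, "supplier": supplier_payout}

# A $25 ride: the platform keeps $5, the driver receives $20.
print(settle_order(25.0))
```

In the centralized model, the platform sets `platform_fee_rate` unilaterally; in the Web3 variant described next, that parameter would be governed by the protocol rather than a single enterprise.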

In traditional “sharing economy” models, the platform side is typically a centralized large enterprise that retains control over its network, drivers, and operations. In some cases, the supply side in “sharing economy” models is also controlled by the platform side, such as with shared power banks or electric scooters. This setup can lead to issues like monopolization by enterprises, lower costs of malpractice, and excessive fees that infringe upon the interests of the supply side. In essence, pricing power remains centralized within these enterprises, not with those who control the means of production, which does not align with decentralized principles.

However, in the Web3 model of the “sharing economy,” the platform facilitating transactions is a decentralized protocol. This eliminates intermediaries like DiDi, empowering the supply side with pricing control. This approach provides passengers with more economical ride services, offers drivers higher income, and enables them to influence the network they help build each day. It represents a multi-win model where all parties benefit.

1.2 Development History of DePIN

Since the rise of Bitcoin, people have been exploring the integration of peer-to-peer networks with physical infrastructure, aiming to build an open and economically incentivized decentralized network across various devices. Influenced by terms like DeFi and GameFi in Web3, MachineFi was one of the earliest concepts proposed.

  • In December 2021, IoTeX became the first entity to coin the term “MachineFi” for this emerging field. This name combines “Machine” and “DeFi” to signify the financialization of machines and the data they generate.
  • In April 2022, Multicoin introduced the concept of “Proof of Physical Work” (PoPW), which refers to an incentive structure enabling anyone to contribute to a shared objective without permission. This mechanism significantly accelerated the development of DePIN.
  • In September 2022, Borderless Capital proposed “EdgeFi.”
  • In November 2022, Messari conducted a Twitter poll to unify the abbreviation for this field, with options including PoPW, TIPIN, EdgeFi, and DePIN. Ultimately, DePIN won with 31.6% of the votes, becoming the unified name for this domain.

Source: @MessariCrypto

2. Why do we need DePIN?

In traditional physical infrastructure networks (such as communication networks, cloud services, energy networks, etc.), the market is often dominated by large or giant companies due to huge capital investment and operation and maintenance costs. This centralized industrial characteristic has brought about the following major dilemmas and challenges:

  • The interests of government and enterprises are tightly bound, and the threshold for new entrants is high: Taking the U.S. communications industry as an example, the Federal Communications Commission (FCC) auctions wireless spectrum to the highest bidder. This makes it easier for companies with strong capital to win and gain absolute advantage in the market, leading to a significant Matthew effect in the market, with the strong getting stronger.
  • The market competition pattern is stable, and innovation and vitality are insufficient: A small number of licensed companies hold market pricing power. Because of their generous and stable cash returns, these companies often lack motivation for further development, resulting in slow network optimization, delayed equipment reinvestment and upgrades, and insufficient drive for technological innovation and personnel renewal.
  • Outsourcing of technical services leads to varying service standards: Traditional industries are moving towards specialized outsourcing, but significant differences in service philosophies and technical capabilities among outsourcing service providers make it challenging to control delivery quality, lacking effective collaboration mechanisms in outsourcing.

2.1 Disadvantages of CePIN

  • Centralized control: CePIN is controlled by centralized institutions, posing risks of single points of failure, vulnerability to attacks, low transparency, and lack of user control over data and operations.
  • High entry barriers: New entrants face high capital investments and complex regulatory barriers, limiting market competition and innovation.
  • Resource wastage: Centralized management leads to resource idleness or wastage, resulting in low resource utilization rates.
  • Inefficient equipment reinvestment: Decision-making centralized in a few institutions leads to inefficient equipment updates and investment.
  • Inconsistent service quality: Outsourced service quality is difficult to guarantee, and the standards vary.
  • Information asymmetry: Central institutions hold all data and operation records, preventing users from fully understanding internal system operations and increasing risks of information asymmetry.
  • Insufficient incentive mechanisms: CePIN lacks effective incentive mechanisms, leading to low user participation and contribution to network resources.

2.2 Advantages of DePIN

  • Decentralization: No single point of failure enhances system reliability and resilience, reduces the risk of attacks, and improves overall security.
  • Transparency: All transactions and operation records are publicly auditable, giving users complete control over data, allowing participation in decision-making processes, and increasing system transparency and democracy.
  • Incentive mechanism: Through token economics, users can earn token rewards by contributing network resources, incentivizing active participation and maintenance of the network.
  • Censorship resistance: Without central control points, the network is more resistant to censorship and control, allowing free flow of information.
  • Efficient resource utilization: Activating latent idle resources through a distributed network increases resource utilization efficiency.
  • Openness and global deployment: Permissionless and open-source protocols enable global deployment, breaking geographical and regulatory restrictions of traditional CePIN.

DePIN addresses centralized control, data privacy concerns, resource wastage, and inconsistent service quality of CePIN through advantages such as decentralization, transparency, user autonomy, incentive mechanisms, and censorship resistance. It drives transformation in the production relations of the physical world, achieving a more efficient and sustainable physical infrastructure network. Therefore, for physical infrastructure networks requiring high security, transparency, and user engagement, DePIN represents a superior choice.

3. How to implement a DePIN network?

3.1 Comparison of different consensus mechanisms

Before discussing how to implement a DePIN network, we first explain the PoPW mechanism commonly used in DePIN networks.

DePIN network demands rapid scalability, low costs for node participation, abundant network supply nodes, and a high degree of decentralization.

Proof of Work (PoW) requires purchasing expensive mining rigs in advance to participate in network operations, significantly raising the entry barrier for DePIN network participation. Therefore, it is not suitable as a consensus mechanism for DePIN networks.

Proof of Stake (PoS) also requires upfront token staking, which reduces user willingness to participate in network node operations. Similarly, it is not suitable as a consensus mechanism for DePIN networks.

The emergence of Proof of Physical Work (PoPW) precisely meets the characteristic demands of DePIN networks. The PoPW mechanism facilitates easier integration of physical devices into the network, greatly accelerating the development process of DePIN networks.

Additionally, the economic model built around PoPW resolves the classic chicken-and-egg bootstrapping dilemma: token rewards incentivize participants to build out network supply first, making the network attractive to users before organic demand arrives.

3.2 Main participants of DePIN network

Generally speaking, a complete DePIN network includes the following participants.

  • Founder: The initiator of the DePIN network, often referred to as the “project party.” Founders play a crucial role in the early stages of the network by undertaking network construction and bootstrapping responsibilities.
  • Owner: Providers of resources to the DePIN network, such as computing miners and storage miners. They earn protocol tokens by supplying hardware and software resources to the network. During the network’s initial stages, owners are essential participants.
  • Consumer: Users who demand resources from the DePIN network. Most demand in DePIN projects comes from B2B users, primarily from Web2; Web3-native demand is relatively small and insufficient on its own to sustain network operations. Filecoin and Bittensor exemplify this B2B orientation.
  • Builder: Individuals who maintain and expand the DePIN network’s ecosystem. As the network grows, more builders join ecosystem development efforts. In the early stages of network development, builders are primarily composed of founders.

These participants collectively contribute to the growth, operation, and sustainability of the DePIN network ecosystem.

3.3 Basic components of DePIN network

For the DePIN network to operate successfully, it needs to interact with on-chain and off-chain data at the same time, which requires stable and powerful infrastructure and communication protocols. In general, the DePIN network mainly includes the following parts.

  • Physical equipment infrastructure: The Owner usually provides the physical devices the network requires, such as GPUs and CPUs.
  • Off-chain computing facilities: Data generated by physical devices must be uploaded on-chain for verification via off-chain computing facilities; this is the PoPW proof mechanism, and oracles are typically used to upload the data to the chain.
  • On-chain computing facilities: Once the data is verified, the network looks up the on-chain account address of the device's owner and sends the token reward to that address.
  • Token incentive mechanism: This is the token economic model. It plays a very important role in the DePIN network, serving different functions at different stages of the network's development; it is elaborated on later in this article.

3.4 Basic operation mode of DePIN network

The operating mode of the DePIN network follows a sequence similar to the architectural diagram mentioned above. Essentially, it involves off-chain data generation followed by on-chain data confirmation. Off-chain data adheres to a “bottom-up” rule, whereas on-chain data follows a “top-down” rule.

  • Service Provision for Incentives: Hardware devices in DePIN projects earn rewards by providing services, such as signal coverage in Helium, which results in $HNT rewards.
  • Presentation of Evidence: Before receiving incentives, devices must present evidence demonstrating that they have performed the required work. This evidence is known as Proof of Physical Work (PoPW).
  • Identity Verification Using Public-Private Keys: Similar to traditional public blockchains, the identity verification process for DePIN devices involves using public-private key pairs. The private key is used to generate and sign the physical work proofs, while the public key is used by external parties to verify the proofs or as an identity tag (Device ID) for the hardware device.
  • Reward Distribution: DePIN projects deploy smart contracts that record the on-chain account addresses of different device owners. This enables tokens earned by off-chain physical devices to be directly deposited into the owners’ on-chain accounts.
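The registration-and-reward flow in the last two bullets can be sketched as a toy "contract" that maps device IDs (derived from device public keys) to owner addresses and credits tokens only for verified proofs. This is a hypothetical illustration: real DePIN networks use asymmetric signatures for proof verification, and the sha256-based device ID, reward amount, and addresses below are invented for the example.

```python
import hashlib

# Hypothetical sketch of the reward flow described above: a registry maps a
# device ID (derived from the device's public key) to its owner's on-chain
# address, and each verified PoPW proof credits tokens to that address.
# Proof verification itself is abstracted to a boolean here.

REWARD_PER_PROOF = 10  # illustrative assumption

class RewardContract:
    def __init__(self):
        self.device_owner = {}  # device_id -> owner on-chain address
        self.balances = {}      # owner address -> token balance

    def register(self, device_pubkey: bytes, owner: str) -> str:
        """Derive a device ID from the public key and record its owner."""
        device_id = hashlib.sha256(device_pubkey).hexdigest()[:16]
        self.device_owner[device_id] = owner
        return device_id

    def submit_proof(self, device_id: str, proof_ok: bool) -> None:
        """Credit the owner only if the off-chain PoPW proof verified."""
        if proof_ok and device_id in self.device_owner:
            owner = self.device_owner[device_id]
            self.balances[owner] = self.balances.get(owner, 0) + REWARD_PER_PROOF

contract = RewardContract()
dev = contract.register(b"hotspot-pubkey-01", owner="0xAlice")
contract.submit_proof(dev, proof_ok=True)
contract.submit_proof(dev, proof_ok=False)  # a rejected proof earns nothing
print(contract.balances)  # {'0xAlice': 10}
```

The key design point mirrored here is the separation of concerns: off-chain facilities decide `proof_ok`, while the on-chain side only holds the device-to-owner mapping and balances.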

To simplify this process using a simple analogy, the operation of the DePIN network can be likened to an exam scenario:

  • Teacher (Issuer of Tokens): Verifies the authenticity of “student scores” (proof of work).
  • Student (Recipient of Tokens): Earns tokens by completing the “exam paper” (providing services).

Initially, the teacher hands out exam papers to students, who must complete the exam according to the paper’s requirements. After completion, students submit their papers to the teacher, who grades them based on a descending order principle, rewarding higher rankings with greater recognition (tokens).

In this analogy:

The “issued exam papers” represent the demand orders from the DePIN network’s demand side.

The solving of the exam questions corresponds to adhering to specific rules (PoPW) in DePIN.

The teacher verifies that the paper belongs to a specific student (using private keys for signatures and public keys for identification).

Grades are assigned based on performance, following a descending order principle that aligns with DePIN’s token distribution principle of “more contribution, more rewards.”
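The "more contribution, more rewards" principle from the analogy can be sketched as a proportional split of a fixed epoch emission across contributors' verified work scores. The emission size and node scores below are illustrative assumptions.

```python
# "More contribution, more rewards": a toy epoch distribution where a fixed
# token emission is split in proportion to each participant's verified work.
# The emission amount and scores are invented for illustration.

def distribute(emission: float, scores: dict) -> dict:
    """Split `emission` tokens proportionally to contribution scores."""
    total = sum(scores.values())
    return {who: emission * s / total for who, s in scores.items()}

epoch_scores = {"node_a": 50, "node_b": 30, "node_c": 20}
rewards = distribute(1000.0, epoch_scores)
print(rewards)  # node_a gets 500.0, node_b 300.0, node_c 200.0
```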

The basic operational mechanism of the DePIN network bears similarities to our everyday exam system. In the realm of cryptocurrencies, many projects essentially mirror real-life patterns on the blockchain. When faced with complex projects, employing analogies like this can aid in understanding and mastering the underlying concepts and operational logic.

4. Types of DePIN network

We have conducted an overview review of the DePIN sector according to the logical sequence of WHAT-WHY-HOW. Next, let’s outline the specific tracks within the DePIN sector. The breakdown of these tracks is divided into two main parts: Physical Resource Networks and Digital Resource Networks.

  • Physical Resource Networks: Incentivize participants to use or deploy location-based hardware infrastructure and provide non-standardized goods and services in the real world. This can be further subdivided into: wireless networks, geographic spatial data networks, mobile data networks, and energy networks.
  • Digital Resource Networks: Incentivize participants to use or deploy hardware infrastructure and provide standardized digital resources. This can be further subdivided into: storage networks, computing power networks, and bandwidth networks.

Among them, the representative projects of some sections are as follows:

4.1 Decentralized storage network - Filecoin ($FIL)

Filecoin is the world’s largest distributed storage network, with over 3,800 storage providers globally contributing more than 17 million terabytes (TB) of storage capacity. Filecoin can be considered one of the most renowned DePIN projects, with its FIL token reaching its peak price on April 1, 2021. Filecoin’s vision is to bring open, verifiable features to the three core pillars supporting the data economy: storage, computation, and content distribution.

Filecoin’s storage of files is based on the InterPlanetary File System (IPFS), which enables secure and efficient file storage.

One unique aspect of Filecoin is its economic model. Before becoming a storage provider on Filecoin, participants must stake a certain amount of FIL tokens. This creates a cycle where during a bull market, “staking tokens -> increased total storage space -> more nodes participating -> increased demand for staking tokens -> price surge” occurs. However, in bear markets, it can lead to a spiral of price decline. This economic model is more suited to thriving in bullish market conditions.

4.2 Decentralized GPU Rendering Platform - Render Network ($RNDR)

Render Network is a decentralized GPU rendering platform under OTOY, consisting of artists and GPU providers, offering powerful rendering capabilities to global users. The $RNDR token reached its peak price on March 17, 2024. Being part of the AI sector, Render Network’s peak coincided with the AI sector’s peak.

The operational model of Render Network works as follows: creators submit jobs requiring GPU rendering, such as 3D scenes or high-resolution images/videos, which are distributed to GPU nodes in the network for processing. Node operators contribute idle GPU computing power to Render Network and receive $RNDR tokens as rewards.

A unique aspect of Render Network is its pricing mechanism, employing a dynamic pricing model based on factors like job complexity, urgency, and available resources. This model determines rendering service prices, providing competitive rates to creators while fairly compensating GPU providers.
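A pricing rule of the kind described (price rising with job complexity and urgency, falling with available supply) might be sketched as follows. This is not Render Network's actual formula; the base rate, weightings, and discount cap are invented for illustration.

```python
# Hedged sketch of a dynamic pricing rule like the one described above:
# price grows with job complexity and urgency, and falls when more GPU
# supply sits idle. All coefficients are illustrative assumptions.

def render_price(complexity: float, urgency: float, idle_gpus: int,
                 base_rate: float = 1.0) -> float:
    supply_discount = min(idle_gpus / 100, 0.5)  # cap the discount at 50%
    return base_rate * complexity * (1 + urgency) * (1 - supply_discount)

# A complex, urgent job on a supply-constrained network costs more...
print(render_price(complexity=4.0, urgency=0.5, idle_gpus=10))
# ...than the same job when plenty of GPUs are idle.
print(render_price(complexity=4.0, urgency=0.5, idle_gpus=200))
```

The point of such a model is two-sided: creators see rates fall as supply grows, while GPU providers are compensated more when their hardware is scarce.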

A recent positive development for Render Network is its support for “Octane on iPad,” a professional rendering application backed by Render Network.

4.3 Decentralized data market - Ocean ($OCEAN)

Ocean Protocol is a decentralized data exchange protocol primarily focused on secure data sharing and commercial applications of data. Similar to common DePIN projects, it involves several key participants:

  • Data Providers: Share data on the protocol.
  • Data Consumers: Purchase access to data using OCEAN tokens.
  • Node Operators: Maintain network infrastructure and earn OCEAN token rewards.

For data providers, data security and privacy are crucial. Ocean Protocol ensures data flow and protection through the following mechanisms:

  • Data Security and Control: Uses blockchain technology to ensure secure and transparent data transactions, enabling data owners to maintain complete control over their data.
  • Data Tokenization: Allows data to be bought, sold, and exchanged like other cryptocurrencies, enhancing liquidity in the data market.
  • Privacy Protection: Implements Compute-to-Data functionality, enabling computation and analysis on data without exposing the raw data. Data owners can approve AI algorithms to run on their data, with computations occurring locally under their control, ensuring data remains within their scope.
  • Fine-Grained Access Control: Provides detailed access control where data providers can set specific access policies, specifying which users or groups can access particular parts of the data and under what conditions. This granular control mechanism ensures secure data sharing while maintaining data privacy.
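The fine-grained access control in the last bullet can be sketched as a per-field policy check: the data provider attaches allowed groups and purposes to each dataset field, and a request is granted only if both match. The policy schema, field names, and groups below are hypothetical.

```python
# Toy sketch of fine-grained access control as described above: the data
# provider attaches a policy to each dataset field, and a consumer request
# is granted only if its group and stated purpose both match the policy.
# The policy shape and names are invented for illustration.

POLICY = {
    "vitals":   {"groups": {"hospital"},            "purposes": {"research"}},
    "location": {"groups": {"hospital", "insurer"}, "purposes": {"billing"}},
}

def can_access(field: str, group: str, purpose: str) -> bool:
    rule = POLICY.get(field)
    return bool(rule) and group in rule["groups"] and purpose in rule["purposes"]

print(can_access("vitals", "hospital", "research"))  # True
print(can_access("vitals", "insurer", "research"))   # False: wrong group
```

Compute-to-Data would layer on top of such checks: even a granted request runs computation where the data lives, returning only results rather than raw records.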

4.4 EVM-Compatible L1 - IoTeX ($IOTX)

IoTeX was founded in 2017 as an open-source platform focused on privacy, integrating blockchain, secure hardware, and data service innovations to support the Internet of Trusted Things. Unlike other DePIN projects, IoTeX positions itself as a development platform designed for DePIN builders, akin to Google's Colab. IoTeX's flagship technology is the off-chain computing protocol W3bStream, which facilitates the integration of IoT devices into the blockchain. Notable IoTeX DePIN projects include Envirobloq, Drop Wireless, and HealthBlocks.

4.5 Decentralize hotspot network - Helium ($HNT)

Helium, established in 2013, is a veteran DePIN project known for creating a large-scale network where users contribute new hardware. Users can purchase Helium Hotspots manufactured by third-party vendors to provide hotspot signals for nearby IoT devices. Helium rewards hotspot operators with HNT tokens to maintain network stability, similar to a mining model where the mining equipment is specified by the project.

In the DePIN arena, there are primarily two types of device models: customized dedicated hardware specified by the project, such as Helium, and ubiquitous hardware used in daily life integrated into the network, as seen with Render Network and io.net incorporating idle GPUs from users.

Helium’s key technology is its LoRaWAN protocol, a low-power, long-distance wireless communication protocol ideal for IoT devices. Helium Hotspots utilize the LoRaWAN protocol to provide wireless network coverage.

Despite establishing the world’s largest LoRaWAN network, Helium’s anticipated demand did not materialize as expected. Currently, Helium is focusing on launching 5G cellular networks. On April 20, 2023, Helium migrated to the Solana network and subsequently launched Helium Mobile in the Americas, offering a $20 per month unlimited 5G data plan. Due to its affordable pricing, Helium Mobile quickly gained popularity in North America.

From the global “DePIN” search index spanning five years, a minor peak was observed in December 2023 to January 2024, coinciding with the peak range of the $MOBILE token price. This continued upward trend in DePIN interest indicates that Helium Mobile has initiated a new era for DePIN projects.

Source: @Google Trends

5. DePIN economic model

The economic model of DePIN projects plays a crucial role in their development, serving different purposes at various stages. For instance, in the initial stages of a project, it primarily utilizes token incentive mechanisms to attract users to contribute their software and hardware resources towards building the supply side of the project.

5.1 BME Model

Before discussing the economic model, let’s briefly outline the BME (Burn-and-Mint Equivalent) model, as it is closely related to most DePIN projects’ economic frameworks.

The BME model manages token supply and demand dynamics. Specifically, it involves the burning of tokens on the demand side for purchasing goods or services, while the protocol platform mints new tokens to reward contributors on the supply side. If the amount of newly minted tokens exceeds those burned, the total supply increases, leading to price depreciation. Conversely, if the burn rate exceeds the minting rate, deflation occurs, causing price appreciation. A continually rising token price attracts more supply-side users, creating a positive feedback loop.

Supply > Demand => price drop

Supply < Demand => price increase

We can further elucidate the BME model using the Fisher Equation, an economic model that describes the relationship between money supply (M), money velocity (V), price level (P), and transaction volume (T):

MV = PT

  • M = money supply
  • V = velocity of money
  • P = price level
  • T = transaction volume

When the token velocity V increases, and assuming other factors remain constant, the equilibrium of this equation can only be maintained by reducing token circulation (M) through burning mechanisms. Thus, as network usage increases, the burn rate also accelerates. When the inflation rate and burn rate achieve dynamic equilibrium, the BME model can maintain a stable balanced state.
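The mint/burn dynamics above can be made concrete with a minimal simulation: each period the protocol mints a fixed emission to suppliers and burns tokens in proportion to demand-side usage, so supply inflates while mint exceeds burn and deflates once usage (and thus burning) overtakes emission. All parameters below are illustrative.

```python
# Minimal BME simulation: each period a fixed emission is minted to the
# supply side while demand-side usage burns tokens. Net supply grows when
# mint > burn (inflationary) and shrinks when burn > mint (deflationary).
# Starting supply, emission, and the burn schedule are invented examples.

def simulate_bme(supply: float, mint_per_period: float,
                 burns: list[float]) -> list[float]:
    """Return the token supply after each period."""
    history = []
    for burned in burns:
        supply += mint_per_period - burned
        history.append(supply)
    return history

# Demand (burn) grows over time: supply inflates early, then deflates.
print(simulate_bme(1000.0, mint_per_period=50.0,
                   burns=[20.0, 40.0, 60.0, 80.0]))
# [1030.0, 1040.0, 1030.0, 1000.0]
```

The turning point in the printed series is the dynamic equilibrium the text describes: once per-period burning matches the emission rate, total supply stabilizes.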

Source: @Medium

Using the specific example of purchasing goods in real life to illustrate this process: First, manufacturers produce goods, which consumers then purchase.

During the purchase process, instead of handing money directly to the manufacturer, the consumer burns a specified amount of currency as proof of purchase. Simultaneously, the protocol platform mints new currency at regular intervals and distributes it transparently and fairly among the various contributors in the supply chain, such as producers, distributors, and sellers.

Source:@GryphsisAcademy

5.2 Development stages of economic models

With a basic understanding of the BME model, we can now have a clearer understanding of common economic models in the DePIN space.

Overall, DePIN economic models can be broadly divided into the following three stages:

1st Stage: Initial Launch and Network Construction Phase

  • This is the initial phase of the DePIN project, focusing on establishing physical infrastructure networks.
  • It incentivizes individuals and enterprises to contribute hardware resources (such as computing power, storage space, bandwidth, etc.) through token incentive mechanisms to drive network deployment and expansion.
  • Projects typically rely on core teams to deploy nodes and promote networks in a somewhat centralized manner until reaching a critical mass.
  • Tokens are primarily used to reward contributors for providing hardware resources rather than paying for network usage fees.

2nd Stage: Network Development and Value Capture Phase

  • Once the network reaches critical mass, the project gradually transitions to a decentralized community governance model.
  • The network begins to attract end-users, and tokens are used not only to reward contributors but also to pay for network usage fees.
  • The project captures the economic value generated within the network through tokenization and distributes it to contributors and participants.
  • Token economic models typically use the BME model to balance supply and real demand, avoiding inflation or deflation.

3rd Stage: Maturity and Value Maximization Phase

  • The network has a large number of active users and contributors, forming a virtuous cycle.
  • Token economic models focus more on long-term value creation, enhancing token value through carefully designed deflation mechanisms.
  • Projects may introduce new token models to optimize token supply, promoting positive externalities in bilateral markets.
  • Community autonomy becomes the dominant mode of network governance.

A good economic model can create an economic flywheel effect for DePIN projects. Because DePIN projects employ token incentive mechanisms, they attract significant attention from suppliers during the project’s initial launch phase, enabling rapid scale-up through the flywheel effect.

The token incentive mechanism is key to the rapid growth of DePIN projects. Initially, projects need to develop appropriate reward criteria tailored to the scalability of physical infrastructure types. For example, to expand network coverage, Helium offers higher rewards in areas with lower network density compared to higher-density areas.
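A density-sensitive reward rule of the kind Helium is said to use can be sketched as a multiplier that favors sparsely covered areas. The tiers and multiplier values below are illustrative guesses, not Helium's actual parameters.

```python
# Sketch of a coverage-incentive rule like the one described above: hotspots
# in sparsely covered areas earn a higher reward multiplier than those in
# dense areas, steering new supply toward coverage gaps.
# The density tiers and multipliers are invented for illustration.

def coverage_multiplier(hotspots_in_area: int) -> float:
    if hotspots_in_area <= 1:
        return 2.0  # pioneer coverage: strongest incentive
    if hotspots_in_area <= 5:
        return 1.0  # healthy density: baseline reward
    return 0.5      # oversaturated area: discouraged

print(coverage_multiplier(1))   # 2.0
print(coverage_multiplier(12))  # 0.5
```

Tuning such tiers is how a project shapes where its physical network grows, rewarding expansion into gaps rather than pile-ups in already-served areas.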

As shown in the diagram below, early investors contribute real capital to the project, giving the token initial economic value. Suppliers actively participate in project construction to earn token rewards. As the network scales and with its lower costs compared to CePIN, an increasing number of demand-side users start adopting DePIN project services, generating income for the entire network protocol and forming a solid pathway from suppliers to demand.

With rising demand from the demand side, token prices increase through mechanisms like burning or buybacks (BME model), providing additional incentives for suppliers to continue expanding the network. This increase signifies that the value of tokens they receive also rises.

As the network continues to expand, investor interest in the project grows, prompting them to provide more financial support.

If the project is open-source or shares contributor/user data publicly, developers can build dApps based on this data, creating additional value within the ecosystem and attracting more users and contributors.

Source: @IoTeX

6. Current status of the DePIN area

The current popularity of DePIN projects centers mainly on the Solana network and "DePIN x AI". Google Trends data shows that, among network infrastructures, DePIN correlates most strongly with Solana, and interest is concentrated in Asia, including China and India. This suggests that the main participants in DePIN projects are from Asia.

Source: @Google Trends

The current total market value of the entire DePIN track is roughly $32B. By comparison, traditional CePIN companies are far larger: China Mobile's market capitalization is about $210B, and AT&T (the largest carrier in the United States) is valued at about $130B. Judged by market value alone, the DePIN track still has considerable room for growth.

Source: @DePINscan

The turning point in the curve of total DePIN devices is evident in December 2023, coinciding with the peak popularity and highest coin price of Helium Mobile. It can be said that the DePIN boom in 2024 was initiated by Helium Mobile.

The diagram below displays the global distribution of DePIN devices, highlighting their concentration in regions such as North America, East Asia, and Western Europe. These areas are typically more economically developed, since becoming a node in a DePIN network requires provisioning both software and hardware resources at significant cost. For instance, a high-end consumer-grade GPU like the RTX 4090 costs around $2,000, a substantial expense for users in less economically developed regions.

Due to the token incentive mechanism of DePIN projects, which follows the principle of “more contribution, more reward,” users aiming for higher token rewards must contribute more resources and use higher-end equipment. While advantageous for project teams, this undoubtedly raises the barrier to entry for users. A successful DePIN project should be backward compatible and inclusive, offering opportunities for participation even with lower-end devices, aligning with the blockchain principles of fairness, justice, and transparency.

Looking at the global device distribution map, many regions remain undeveloped. We believe that through continuous technological innovation and market expansion, the DePIN sector has the potential for global growth, reaching every corner, connecting people worldwide, and collectively driving technological advancement and social development.

source: @DePINscan

7. Steps in DePIN Program Analysis

Following this brief review of the DePIN track, the author summarizes the basic steps for analyzing a DePIN project.

Most importantly, analyze a DePIN project’s operating model the way you would a Web2 sharing-economy business.

8. io.net

8.1 Basic information

Project Description

io.net is a decentralized computing network that enables the development, execution, and scaling of machine learning applications on the Solana blockchain. Its vision is to “bring 1 million GPUs together to form the world’s largest GPU cluster,” giving engineers access to massive amounts of computing power in a system that is accessible, customizable, cost-effective, and easy to implement.

Team background

  • Founder and CEO: Ahmed Shadid, who worked in quantitative finance and financial engineering before founding io.net, and is also a volunteer at the Ethereum Foundation.

  • Chief Marketing Officer and Chief Strategy Officer: Garrison Yang (Yang Xudong). He was previously Vice President of Strategy and Growth at Avalanche and is an alumnus of UC Santa Barbara.

The project’s technical background is relatively solid, and many of its founders have prior crypto experience.

Narrative: AI, DePIN, Solana Ecosystem.

Financing situation

Source: @RootDataLabs

Source: @RootDataLabs

On March 5, 2024, io.net closed a $30 million Series A at a $1 billion valuation, benchmarked against Render Network. The round was led by renowned top-tier investment institutions such as Hack VC, OKX Ventures, and Multicoin Capital, and also included influential project leaders like Anatoly Yakovenko (co-founder of Solana) and Yat Siu (co-founder of Animoca Brands). This early backing from top capital is why we refer to io.net as a star project: it has the funding, the background, the technology, and the expectation of an airdrop.

8.2 Product structure

The main products of io.net are as follows:

  • IO Cloud: Deploys and manages on-demand distributed GPU clusters, serving as the computing power equipment management interface for demand-side users.
  • IO Worker: Provides a comprehensive and user-friendly interface for efficiently managing users’ GPU node operations through an intuitive web application, tailored for supply-side users.
  • IO Explorer: Offers users a window into the internal workings of the network, providing comprehensive statistics and an overview of all aspects of the GPU cloud. Similar to how Solscan or blockchain explorers provide visibility for blockchain transactions.
  • BC8.AI: An advanced AI-driven image generator that uses deep neural networks to create highly detailed and accurate images from text descriptions or seed images. This AI application on io.net can be easily accessed with an IO ID.

Below is an image of a cat in the style of Van Gogh generated on BC8.AI.

Source: @ionet

Product features and advantages

Compared with traditional cloud service providers such as Google Cloud and AWS, io.net has the following features and advantages:

  • High number of GPUs with powerful computing capabilities
  • Affordable and cost-effective
  • Censorship-resistant – Quickly access advanced GPUs like A100 and H100 without needing approval
  • Anti-monopoly
  • Highly customizable for users

Let’s take AWS as an example to compare in detail:

Accessibility refers to how easily users can access and obtain computing power. When using traditional cloud service providers, you typically need to provide key identification information such as a credit card and contact details in advance. However, when accessing io.net, all you need is a Solana wallet to quickly and conveniently obtain computing power permissions.

Customization refers to the degree of customization available to users for their computing clusters. With traditional cloud service providers, you can only select the machine type and location of the computing cluster. In contrast, when using io.net, in addition to these options, you can also choose bandwidth speed, cluster type, data encryption methods, and billing options.

Source: @ionet

As shown in the image above, when a user selects the NVIDIA A100-SXM4-80GB model GPU, a Hong Kong server, ultra-high-speed bandwidth, hourly billing, and end-to-end encryption, the price per GPU is $1.58 per hour. This demonstrates that io.net offers a high degree of customization with many options available for users, prioritizing their experience. For DePIN projects, this customization is a key way to expand the demand side and promote healthy network growth.

In contrast, the image below shows the price of the NVIDIA A100-SXM4-80GB model GPU from traditional cloud service providers. For the same computing power requirements, io.net’s price is at least half that of traditional cloud providers, making it highly attractive to users.

8.3 Basic information of the network

We can use IO Explorer to comprehensively view the computing power of the entire network, including the number of devices, available service areas, computing power prices, etc.

Computing power equipment situation

Currently, io.net has a total of 101,545 verified GPUs and 31,154 verified CPUs. Every 6 hours, io.net checks whether each computing device is online to ensure network stability.

Source: @ionet

The second image shows currently available, PoS-verified, and easy-to-deploy computing devices. Compared to Render Network and Filecoin, io.net has a significantly higher number of computing devices. Furthermore, io.net integrates computing devices from both Render Network and Filecoin, allowing users to choose their preferred computing device provider when deploying compute clusters. This user-centric approach ensures that io.net meets users’ customization needs and enhances their overall experience.

Source: @ionet

Another notable feature of io.net’s computing devices is the large number of high-end devices available. In the US, for example, there are several hundred high-end GPUs like the H100 and A100. Given the current sanctions and the AI boom, high-end graphics cards have become extremely valuable computing assets.

With io.net, you can use these high-end computing devices provided by suppliers without any review, regardless of whether you are a US citizen. This is why we highlight the anti-monopoly advantage of io.net.

Source:@ionet

Business revenue

The io.net revenue dashboard shows stable daily income, with cumulative revenue reaching the million-dollar level. This indicates that io.net has completed building out its supply side: the project’s cold-start period has gradually passed, and the network-development phase has begun.


Source: @ionet

From the supply side of io.net:

  • The validity of network nodes has been verified
  • Computing power equipment is stable online and in sufficient quantity
  • Having a certain number of high-end computing equipment can fill part of the market demand

But from the demand side:

  • There are still many devices that are idle
  • Most of the computing power task requirements come from BC8.AI
  • The needs of enterprises and individuals have not yet been stimulated.

8.4 Economic model

io.net’s native network token is $IO, with a fixed total supply of 800 million tokens. An initial supply of 500 million tokens will be released, and the remaining 300 million tokens will be distributed as rewards to suppliers and token stakers over 20 years, issued hourly.
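As a back-of-envelope check on the schedule above: if the 300 million reward tokens were released at a flat hourly rate over 20 years (an assumption for illustration; the actual emission curve may decline over time), each hour would emit roughly 1,700 $IO:

```python
# Hypothetical flat-rate estimate of io.net's hourly emission.
# Assumes a constant rate; the real release schedule may differ.
TOTAL_SUPPLY = 800_000_000
INITIAL_SUPPLY = 500_000_000
REWARD_POOL = TOTAL_SUPPLY - INITIAL_SUPPLY  # 300M for suppliers and stakers

HOURS = 20 * 365 * 24  # 20 years of hourly emission events = 175,200

per_hour = REWARD_POOL / HOURS
print(f"Reward pool: {REWARD_POOL:,} IO")
print(f"Hourly emission (flat assumption): {per_hour:,.0f} IO")
```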

$IO employs a burn deflation mechanism: network revenue is used to purchase and burn $IO tokens, with the amount of tokens burned adjusted according to the price of $IO.

Token Utilities:

  • Payment Method: Payments made with USDC for users and suppliers incur a 2% fee, while payments made with $IO incur no fee.
  • Staking: A minimum of 100 $IO tokens must be staked for a node to receive idle rewards from the network.
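The payment rule above can be sketched as a small helper (illustrative only; `payment_fee` is a hypothetical function, not part of io.net’s API):

```python
def payment_fee(amount: float, token: str) -> float:
    """Fee on a payment per the stated rule: 2% when paying in USDC,
    none when paying in $IO. Illustrative sketch, not io.net's billing code."""
    rates = {"USDC": 0.02, "IO": 0.0}
    return amount * rates[token]

# A $100 job costs $2 in fees when paid in USDC, nothing when paid in $IO.
print(payment_fee(100, "USDC"))  # 2.0
print(payment_fee(100, "IO"))    # 0.0
```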

Token Allocation:

From the allocation chart, it can be seen that half of the tokens are allocated to project community members, indicating the project’s intention to incentivize community participation for its growth. The R&D ecosystem accounts for 16%, ensuring continuous support for the project’s technology and product development.


As can be seen from the token release chart, $IO tokens are released gradually and linearly. This release mechanism helps stabilize the price of $IO tokens and avoid price fluctuations caused by the sudden appearance of a large number of $IO tokens in the market. At the same time, the reward mechanism of $IO tokens can also motivate long-term holders and stakers, enhancing the stability of the project and user stickiness.

Overall, io.net’s tokenomics is a well-designed token scheme. The allocation of half of the tokens to the community highlights the project’s emphasis on community-driven and decentralized governance, which supports long-term development and the establishment of credibility.

In the third stage of the DePIN economic development phases discussed earlier, it was mentioned that “community autonomy becomes the dominant mode of network governance.” io.net has already laid a solid foundation for future community autonomy. Additionally, the gradual release mechanism and burn mechanism of the $IO token effectively distribute market pressure and reduce the risk of price volatility.

From these aspects, it is clear that io.net’s various mechanisms demonstrate that it is a well-planned project with a focus on long-term development.

8.5 Ways to participate in io.net

Currently, io.net’s “Ignition Rewards” has entered its 3rd season, running from June 1st to June 30th. The main way to participate is to integrate your computing devices into the main computing network for mining. Mining rewards in $IO tokens depend on factors such as device computing power, network bandwidth speed, and others.

In the 1st season of “Ignition Rewards,” the initial threshold for device integration was set at the “GeForce GTX 1080 Ti.” This reflects our earlier point about giving lower-end devices an opportunity to participate, aligning with the blockchain principles of fairness, justice, and transparency. In the 2nd and 3rd seasons, the threshold was raised to the “GeForce RTX 3050.”

The reason for this approach is twofold: from the project’s perspective, as the project develops, low-end computing devices contribute less to the overall network and stronger computing devices better maintain network stability. From the demand-side users’ perspective, most users require high-end computing devices for training and inference of AI models, and low-end devices cannot meet their needs.

Therefore, as the project progresses favorably, raising the participation threshold is a correct approach. Similar to the Bitcoin network, the goal for the project is to attract better, stronger, and more numerous computing devices.

8.6 Conclusion & Outlook

io.net has performed well during the project’s cold start and network construction phase, completing the entire network deployment, validating the effectiveness of computational nodes, and generating sustained revenue.

The project’s next main goal is to further expand the network ecosystem and increase demand from the computational needs market, which represents a significant opportunity. Successfully promoting the project in this market will require efforts from the project’s marketing team.

In practice, when we talk about AI algorithm model development, it mainly involves two major parts: training and inference. Let’s illustrate these two concepts with a simple example of a quadratic equation:

Using the (x, y) data pairs (the training set) to solve for the unknown coefficients (a, b, c) is the training process of an AI algorithm; once the coefficients (a, b, c) are obtained, computing y for a given x is the inference process.
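The distinction can be made concrete with a toy model. Below, gradient descent recovers (a, b, c) from data generated by y = 2x² + 3x + 1 — training runs thousands of iterative updates over the data, while inference is a single cheap evaluation:

```python
# "Training" vs. "inference" on the toy quadratic y = a*x^2 + b*x + c.
# True coefficients are (2, 3, 1); training must recover them from data.
data = [(x, 2 * x**2 + 3 * x + 1) for x in [-2, -1, 0, 1, 2]]

a = b = c = 0.0
lr = 0.01
for _ in range(20000):                      # training loop: the costly part
    ga = gb = gc = 0.0
    for x, y in data:
        err = (a * x**2 + b * x + c) - y    # prediction error
        ga += err * x**2                    # gradients of the squared error
        gb += err * x
        gc += err
    a -= lr * ga
    b -= lr * gb
    c -= lr * gc

predict = lambda x: a * x**2 + b * x + c    # inference: one cheap evaluation
print(round(a, 2), round(b, 2), round(c, 2))  # ≈ 2.0 3.0 1.0
print(round(predict(3), 2))                   # ≈ 28.0
```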

In this computation process, we can clearly see that the computational workload of the training process is much greater than that of the inference process. Training a Large Language Model (LLM) requires extensive support from computational clusters and consumes substantial funds. For example, training GPT-3 175B involved thousands of Nvidia V100 GPUs over several months, with training costs reaching tens of millions of dollars.

Performing AI large model training on decentralized computing platforms is challenging because it involves massive data transfers and exchanges, demanding high network bandwidth that decentralized platforms struggle to meet. NVIDIA has established itself as a leader in the AI industry not only due to its high-end computational chips and underlying AI computing acceleration libraries (cuDNN) but also because of its proprietary communication bridge, “NVLink,” which significantly speeds up the movement of large-scale data during model training.

In the AI industry, training large models not only requires extensive computational resources but also involves data collection, processing, and transformation. These processes often necessitate scalable infrastructure and centralized data processing capabilities. Consequently, the AI industry is fundamentally a scalable and centralized sector, relying on robust technological platforms and data processing capabilities to drive innovation and development.

Therefore, decentralized computing platforms like io.net are best suited for AI algorithm inference tasks. Their target customers should include students and those with task requirements for fine-tuning downstream tasks based on large models, benefiting from io.net’s affordability, ease of access, and ample computational power.

9. Aethir

9.1 Project background

Artificial Intelligence is regarded as one of the most significant technologies humanity has ever seen. With the advent of Artificial General Intelligence (AGI), lifestyles are poised to undergo revolutionary changes. However, because a few companies dominate AI technology development, a wealth gap has opened between the GPU-rich and the GPU-poor. Through its decentralized physical infrastructure network (DePIN), Aethir aims to increase the accessibility of on-demand computing resources, thereby balancing the distribution of AI development outcomes.

Aethir is an innovative distributed cloud computing infrastructure network specifically designed to meet the high demand for on-demand cloud computing resources in the fields of Artificial Intelligence (AI), gaming, and virtual computing. Its core concept involves aggregating enterprise-grade GPU chips from around the world to form a unified global network, significantly increasing the supply of on-demand cloud computing resources.

The primary goal of Aethir is to address the current shortage of computing resources in the AI and cloud computing sectors. With the advancement of AI and the popularity of cloud gaming, the demand for high-performance computing resources continues to grow. However, due to the monopolization of GPU resources by a few large companies, small and medium-sized enterprises and startups often struggle to access sufficient computing power. Aethir provides a viable solution through its distributed network, helping resource owners (such as data centers, tech companies, telecom companies, top gaming studios, and cryptocurrency mining companies) fully utilize their underutilized GPU resources and provide efficient, low-cost computing resources to end-users.

Advantages of Distributed Cloud Computing:

  • Enterprise-grade computing resources: Aethir aggregates high-quality GPU resources, such as NVIDIA’s H100 chips, from various enterprises and data centers, ensuring high-quality and reliable computing resources.
  • Low latency: Aethir’s network design supports low-latency real-time rendering and AI inference applications, which is challenging to achieve with centralized cloud computing infrastructure. Low latency is crucial, especially in the cloud gaming sector, for providing a seamless gaming experience.
  • Rapid scalability: Adopting a distributed model allows Aethir to expand its network more quickly to meet the rapidly growing demands of the AI and cloud gaming markets. Compared to traditional centralized models, distributed networks can flexibly increase computing resource supply.
  • Superior unit economics: Aethir’s distributed network reduces the high operating costs of traditional cloud computing providers, enabling it to offer computing resources at lower prices. This is particularly important for small and medium-sized enterprises and startups.
  • Decentralized ownership: Aethir ensures that resource owners retain control over their resources, allowing them to flexibly adjust their resource utilization according to demand while earning corresponding revenue.

Through these core advantages, Aethir leads not only in technology but also carries significant economic and societal implications. By leveraging decentralized physical infrastructure networks (DePINs), it makes the supply of computing resources more equitable, promoting the democratization of AI technology and innovation. This model not only changes how computing resources are supplied but also opens up new possibilities for the future development of AI and cloud computing.

Aethir’s technology architecture is composed of multiple core roles and components to ensure that its distributed cloud computing network can operate efficiently and securely. Below is a detailed description of each key role and component:

9.2 Core roles and components

Node Operators:

  • Node operators provide the actual computing resources, and they connect their GPU resources to the Aethir network for use.
  • Node operators need to first register their computing resources and undergo specification evaluation and confirmation by the Aethir network before they can start providing services.

Aethir Network

Containers

  • Containers are where computing tasks are performed, ensuring an instantly responsive cloud computing experience.
  • Selection: AI customers choose containers based on performance needs, and gaming customers choose containers based on service quality and cost.
  • Staking: New node operators need to stake $ATH tokens before providing resources. If quality control standards are violated or network services are disrupted, their staked tokens will be slashed.
  • Rewards: Containers are rewarded in two ways, one is the reward for maintaining a high readiness state (PoC), and the other is the service reward for actually using computing resources (PoD and service fees).

Checkers

  • The main responsibility of the checker is to ensure the integrity and performance of containers in the network, performing critical tests in registration, standby and rendering states.
  • Checking methods: including directly reading container performance data and simulation testing.

Indexers

  • The indexer matches user needs to appropriate containers to ensure fast, high-quality service delivery.
  • Selection: Indexers are randomly selected to maintain decentralization and reduce signal latency.
  • Matching criteria: Containers are selected based on service fees, quality of experience, and network rating index.

End Users:

End users are consumers of Aethir network computing resources, whether for AI training and inference, or gaming. End users submit requests, and the network matches the appropriate high-performance resources to meet the needs.

Treasury:

The treasury holds all staked $ATH tokens and pays out all $ATH rewards and fees.

Settlement Layer:

Aethir utilizes blockchain technology as its settlement layer, recording transactions, enabling scalability and efficiency, and using $ATH for incentivization. Blockchain ensures transparency in resource consumption tracking and enables near real-time payments.

For specific relationships, please refer to the following chart:

Source: @AethirCloud

9.3 Consensus mechanism

The Aethir network operates using a unique mechanism, with two primary proofs of work at its core:

Proof of Rendering Capacity:

  • A set of nodes are randomly selected every 15 minutes to validate transactions.
  • The probability of a node being selected is based on the number of tokens invested in it, the quality of its service, and how often it has been selected before: the greater the investment, the better the quality, and the fewer prior selections, the more likely the node is to be chosen.
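A minimal sketch of such weighted random selection (the weight formula below is an assumption for illustration; Aethir’s exact formula is not given in this article):

```python
import random

# Illustrative weighting for the 15-minute validation round: weight grows
# with stake and service quality and shrinks with prior selections.
nodes = [
    {"id": "A", "stake": 500, "quality": 0.90, "picks": 10},
    {"id": "B", "stake": 200, "quality": 0.80, "picks": 2},
    {"id": "C", "stake": 100, "quality": 0.95, "picks": 0},
]

def weight(n):
    # Assumed formula: stake * quality, damped by past selections.
    return n["stake"] * n["quality"] / (1 + n["picks"])

def select(nodes, k=2, seed=None):
    rng = random.Random(seed)
    chosen = rng.choices(nodes, weights=[weight(n) for n in nodes], k=k)
    return [n["id"] for n in chosen]

print([round(weight(n), 1) for n in nodes])  # [40.9, 53.3, 95.0]
print(select(nodes, k=2, seed=7))            # weighted random draw
```

Note that C, despite the smallest stake, gets the highest weight because it has never been picked, matching the fairness goal described above.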

Proof of Rendering Work:

  • The performance of nodes will be closely monitored to ensure high-quality services.
  • The network adjusts resource allocation based on user needs and geographic location to ensure the best quality of service.

Source: @AethirCloud

9.4 Token economics model

The ATH token plays a variety of roles in the Aethir ecosystem, including medium of exchange, governance tool, incentive, and platform development support.

Specific uses include:

  • Trading tools: ATH serves as the standard transaction medium within the Aethir platform, used to purchase computing power, covering business models such as AI applications, cloud computing and virtualized computing.
  • Diverse applications: ATH is not only used for current business, but also plans to continue to play a role in future merged mining and integration markets as the ecosystem grows.
  • Governance and participation: ATH token holders can participate in Aethir’s Decentralized Autonomous Organization (DAO) and influence platform decisions through proposals, discussions, and voting.
  • Stake: New node operators are required to stake ATH tokens to ensure they are aligned with platform goals and as a safeguard against potential misconduct.

Specific distribution strategy: The Aethir project’s token is $ATH, with a total issuance of 42 billion. The largest share, 35%, goes to GPU providers such as data centers and individual retail participants; 17.5% goes to the team and advisors; and 15% and 11.75% go to node checkers and the sales team, respectively. As shown below:

Source: @AethirCloud

Reward emissions

The mining reward emission strategy aims to balance the participation of resource providers and the sustainability of long-term rewards. Through the decay function of early rewards, it is ensured that participants who join later are still motivated.
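One common shape for such a schedule is exponential decay, sketched below (the decay form and parameter values are assumptions; Aethir’s actual function is not specified here):

```python
# Sketch of a decaying emission schedule: early epochs pay more,
# but later participants still earn nonzero rewards.
def epoch_reward(initial: float, decay: float, epoch: int) -> float:
    """Reward emitted in a given epoch under assumed exponential decay."""
    return initial * decay ** epoch

rewards = [epoch_reward(1_000_000, 0.95, e) for e in range(4)]
print([round(r) for r in rewards])  # [1000000, 950000, 902500, 857375]
```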

9.5 How to participate in Aethir mining

The Aethir platform chooses to allocate the majority of its Total Token Supply (TTS) to mining rewards, which is crucial for strengthening the ecosystem. This allocation aims to support node operators and uphold container standards. Node operators are central to Aethir, providing essential computational power, while containers are pivotal in delivering computing resources.

Mining rewards are divided into two forms: Proof of Rendering Capacity and Proof of Rendering Work. Proof of Rendering Work incentivizes node operators to complete computational tasks and is specifically distributed to containers. Proof of Rendering Capacity, on the other hand, rewards compute providers for making their GPUs available to Aethir; the more GPUs used by clients, the greater the additional token rewards. These rewards are distributed in $ATH tokens. They serve not only as distribution but also as investments in the future sustainability of the Aethir community.

10. Heurist

10.1 Project Background

Heurist is a Layer 2 network based on the ZK Stack, focusing on AI model hosting and inference. It is positioned as the Web3 version of HuggingFace, providing users with serverless access to open-source AI models. These models are hosted on a decentralized computing resource network.

Heurist’s vision is to decentralize AI using blockchain technology, achieving widespread technological adoption and equitable innovation. Its goal is to ensure AI technology’s accessibility and unbiased innovation through blockchain technology, promoting the integration and development of AI and cryptocurrency.

The term “Heurist” is derived from “heuristics,” which refers to the process by which the human brain quickly reaches reasonable conclusions or solutions when solving complex problems. This name reflects Heurist’s vision of rapidly and efficiently solving AI model hosting and inference problems through decentralized technology.

Issues with Closed-Source AI

Closed-source AI typically undergoes scrutiny under U.S. laws, which may not align with the needs of other countries and cultures, leading to over-censorship or inadequate censorship. This not only affects AI models’ performance but also potentially infringes on users’ freedom of expression.

The Rise of Open-Source AI

Open-source AI models have outperformed closed-source models in various fields. For example, Stable Diffusion outperforms OpenAI’s DALL-E 2 in image generation and is more cost-effective. The weights of open-source models are publicly available, allowing developers and artists to fine-tune them based on specific needs.

The community-driven innovation of open-source AI is also noteworthy. Open-source AI projects benefit from the collective contributions and reviews of diverse communities, fostering rapid innovation and improvement. Open-source AI models provide unprecedented transparency, enabling users to review training data and model weights, thereby enhancing trust and security.

Below is a detailed comparison between open-source AI and closed-source AI:

Source: @heurist_ai

10.2 Data privacy

When handling AI model inference, the Heurist project integrates Lit Protocol to encrypt data in transit, including the inputs and outputs of AI inference. Heurist divides miners into two broad categories, public and private:

  • Public miners: Anyone with a GPU that meets the minimum requirements can become a public miner, and the data processed by such miners is not encrypted.
  • Private miners: Trusted node operators can become private miners, processing sensitive information such as confidential documents, health records, and user identity data. These miners must comply with off-chain privacy policies. Data is encrypted in transit and cannot be decrypted by the Heurist protocol’s routers and sequencers; only miners matching the user’s access control conditions (ACC) can decrypt it.

Source: @heurist_ai

How is trust established with private miners? Mainly through the following two methods:

  • Off-chain consensus: Off-chain consensus established through real-life laws or protocols, which is technically easy to implement.
  • Trusted Execution Environment (TEE): Utilize TEE to ensure secure and confidential handling of sensitive data. Although there are currently no mature TEE solutions for large AI models, the latest chips from companies like Nvidia have shown potential in enabling TEEs to handle AI workloads.

10.3 Token economics model

The Heurist project’s token, named HUE, is a utility token with a dynamic supply regulated through issuance and burn mechanisms. The maximum supply of HUE tokens is capped at 1 billion.

The token distribution and issuance mechanisms can be divided into two main categories: mining and staking.

  • Mining: Users can mine HUE tokens by hosting AI models on their GPUs. A mining node must stake at least 10,000 HUE or esHUE tokens to activate; below this threshold, no rewards are given. Mining rewards are issued as esHUE tokens, which are automatically compounded into the miner node’s stake. The reward rate depends on GPU efficiency, availability (uptime), the type of AI model run, and the total stake in the node.
  • Staking: Users can stake HUE or esHUE tokens in miner nodes. Staking rewards are issued in HUE or esHUE, with a higher yield for staking esHUE. Unstaking HUE tokens requires a 30-day lock-up period, while esHUE can be unstaked without a lock-up period. esHUE rewards can be converted to HUE tokens through a one-year linear vesting period. Users can instantly transfer their staked HUE or esHUE from one miner node to another, promoting flexibility and competition among miners.

Token Burn Mechanism

Similar to Ethereum’s EIP-1559 model, the Heurist project has implemented a token burn mechanism. When users pay for AI inference services, a portion of the HUE payment is permanently removed from circulation. The balance between token issuance and burn is closely related to network activity. During periods of high usage, the burn rate may exceed the issuance rate, putting the Heurist network into a deflationary phase. This mechanism helps regulate token supply and aligns the token’s value with actual network demand.
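The interplay described above reduces to simple per-epoch arithmetic (the parameter values below are hypothetical):

```python
# Net HUE supply change in an epoch under the issue-and-burn model:
# positive = inflationary epoch, negative = deflationary epoch.
def net_supply_change(issued: float, fees_paid: float, burn_share: float) -> float:
    """Tokens issued minus the portion of fee payments permanently burned."""
    burned = fees_paid * burn_share
    return issued - burned

# Low usage: issuance outpaces burning (inflationary).
print(net_supply_change(issued=10_000, fees_paid=8_000, burn_share=0.5))   # 6000.0
# High usage: burning outpaces issuance (deflationary).
print(net_supply_change(issued=10_000, fees_paid=30_000, burn_share=0.5))  # -5000.0
```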

Bribe Mechanism

The bribe mechanism, first proposed by Curve Finance users, is a gamified incentive system to help direct liquidity pool rewards. The Heurist project has adopted this mechanism to enhance mining efficiency. Miners can set a percentage of their mining rewards as bribes to attract stakers. Stakers may choose to support miners offering the highest bribes but will also consider factors like hardware performance and uptime. Miners are incentivized to offer bribes because higher staking leads to higher mining efficiency, fostering an environment of both competition and cooperation, where miners and stakers work together to provide better services to the network.

Through these mechanisms, the Heurist project aims to create a dynamic and efficient token economy to support its decentralized AI model hosting and inference network.

10.4 Incentivized Testnet

The Heurist project allocated 5% of the total supply of HUE tokens for mining rewards during the Incentivized Testnet phase. These rewards are calculated in the form of points, which can be redeemed for fully liquid HUE tokens after the Mainnet Token Generation Event (TGE). Testnet rewards are divided into two categories: one for Stable Diffusion models and the other for Large Language Models (LLMs).

Points mechanism

Llama Point: For LLM miners, one Llama Point is earned for every 1,000 input/output tokens processed by the Mixtral 8x7B model. The specific calculation is shown in the figure below:

Waifu Point: For Stable Diffusion miners, one Waifu Point is obtained for each 512x512 pixel image generated (using Stable Diffusion 1.5 model, after 20 iterations). The specific calculation is shown in the figure below:

After each computing task is completed, the complexity of the task is evaluated based on GPU performance benchmark results and points are awarded accordingly. The allocation ratio of Llama Points and Waifu Points will be determined closer to TGE, taking into account demand and usage of both model categories over the coming months.

Source: @heurist_ai
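The point rules above can be sketched as follows (the resolution and step scaling in `waifu_points` is an assumed generalization; the article only specifies the 512x512, 20-iteration base case):

```python
# Testnet point accounting: 1 Llama Point per 1,000 LLM tokens processed,
# 1 Waifu Point per 512x512 image at 20 iterations.
def llama_points(tokens_processed: int) -> float:
    return tokens_processed / 1000

def waifu_points(images: int, width: int = 512, height: int = 512,
                 steps: int = 20) -> float:
    # Assumed scaling: points grow with pixel count and iteration count.
    scale = (width * height) / (512 * 512) * steps / 20
    return images * scale

print(llama_points(250_000))             # 250.0 points
print(waifu_points(40))                  # 40.0 points (base case)
print(waifu_points(10, 1024, 1024, 30))  # 60.0 points (4x pixels, 1.5x steps)
```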

There are two main ways to participate in the testnet:

  • Bring your own GPU: Whether you’re a gamer with a high-end setup, a former Ethereum miner with a spare GPU, an AI researcher with an occasionally idle GPU, or a data center owner with excess capacity, you can download the miner program and set up a miner node. Detailed hardware specifications and setup guides are available on the Miner Guide page.
  • Rent a Hosted Node: For those without the required GPU hardware, Heurist offers competitively priced hosted mining node services. A professional engineering team will handle the setup of the mining hardware and software, allowing you to simply watch your rewards grow daily.

The recommended GPU for participating in Heurist mining is as shown in the figure below:


Source: @heurist_ai

Note that the Heurist testnet has anti-cheating measures: the input and output of each computing task are stored and tracked by an asynchronous monitoring system. If a miner attempts to manipulate the reward system (for example, by submitting incorrect or low-quality results, tampering with downloaded model files, or falsifying device and latency metrics), the Heurist team reserves the right to reduce their testnet points.

10.5 Heurist liquidity mining

Heurist testnet offers two types of points: Waifu Points and Llama Points. Waifu Points are earned by running the Stable Diffusion model for image generation, while Llama Points are earned by running large language models (LLMs). There are no restrictions on the GPU model for running these models, but there are strict requirements for VRAM. Models with higher VRAM requirements will have higher point coefficients.

The image below lists the currently supported LLMs. The Stable Diffusion model can run in two modes: with SDXL enabled or with SDXL excluded. Enabling SDXL mode requires 12GB of VRAM, while in my tests the SDXL-excluded mode ran with just 8GB.

Source: @heurist_ai

10.6 Applications

The Heurist project has demonstrated its powerful AI capabilities and broad application prospects through three application directions: image generation, chatbots, and AI search engines. In terms of image generation, Heurist uses the Stable Diffusion model to provide efficient and flexible image generation services; in terms of chatbots, it uses large language models to achieve intelligent dialogue and content generation; in terms of AI search engines, it combines pre-trained language models to provide accurate information retrieval and detailed answers.

These applications not only improve the user experience, but also demonstrate Heurist’s innovation and technical advantages in the field of decentralized AI. The application effects are shown in the figure below:

Source: @heurist_ai

Image generation

The image generation application of the Heurist project relies mainly on the Stable Diffusion model, generating high-quality images from text prompts. Users interact with the Stable Diffusion model via a REST API, submitting textual descriptions to generate images. The cost of each generation task depends on the image resolution and the number of iterations; for example, generating a 1024x1024-pixel, 40-iteration image with the SD 1.5 model costs 8 standard credit units. Through this mechanism, Heurist implements an efficient and flexible image generation service.
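The quoted figure is consistent with pricing that scales linearly in pixel count and iteration count, with a 512x512, 20-iteration SD 1.5 job as the one-credit baseline. The sketch below is an inference from that single example, not a published formula:

```python
# Inferred sketch of Heurist's credit pricing: cost scales linearly with
# pixel count and iteration count, using a 512x512, 20-iteration SD 1.5 job
# as the 1-credit baseline. This reproduces the article's example but is not
# a published formula.

BASE_PIXELS = 512 * 512  # baseline resolution
BASE_ITERS = 20          # baseline iteration count

def credit_cost(width: int, height: int, iterations: int) -> float:
    """Estimate standard credit units for an SD 1.5 generation task."""
    return ((width * height) / BASE_PIXELS) * (iterations / BASE_ITERS)

print(credit_cost(512, 512, 20))    # 1.0  (baseline)
print(credit_cost(1024, 1024, 40))  # 8.0  (the example above: 4x pixels, 2x iterations)
```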

Chatbot

The chatbot application of the Heurist project implements intelligent dialogue through large language models (LLMs). Heurist Gateway is an OpenAI-compatible LLM API endpoint built with LiteLLM, allowing developers to call the Heurist API in the OpenAI format. For example, using the Mixtral 8x7B model, developers can replace an existing LLM provider with just a few lines of code and get performance comparable to GPT-3.5 or Claude 2 at a lower cost.

Heurist’s LLM model supports a variety of applications, including automated customer service, content generation, and complex question answering. Users can interact with these models through API requests, submit text input, and get responses generated by the models, enabling diverse conversational and interactive experiences.
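Because the gateway speaks the OpenAI chat-completions format, a request is just a standard JSON body posted to the endpoint. A minimal sketch of that request body follows; the model identifier is a placeholder for illustration, and the real model ids should be taken from Heurist's documentation:

```python
# Sketch of an OpenAI-format chat completion request body, the format an
# OpenAI-compatible gateway accepts. The model id below is a placeholder for
# illustration; no network call is made.
import json

def chat_request(model: str, user_message: str, temperature: float = 0.7) -> str:
    """Build an OpenAI-format /v1/chat/completions request body as JSON."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }
    return json.dumps(payload)

print(chat_request("mixtral-8x7b", "Summarize DePIN in one sentence."))
```

Any OpenAI-style client can send this body by pointing its base URL at the gateway, which is what makes the few-lines-of-code provider swap possible.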

AI search engine

Heurist’s AI search engine provides powerful search and information retrieval capabilities by integrating large-scale pre-trained language models such as Mixtral 8x7B. Users can get accurate and detailed answers through simple natural language queries. For example, for the query “Who is the CEO of Binance?”, the Heurist search engine not only returns the name of the current CEO (Richard Teng) but also details his background and that of the previous CEO.

The Heurist search engine combines text generation with information retrieval technology to handle complex queries and deliver high-quality search results and relevant information. Users can submit queries through the API and receive structured answers with reference materials, making the search engine suitable not only for general users but also for the needs of professional fields.

Conclusion

DePIN (Decentralized Physical Infrastructure Networks) represents a new form of the “sharing economy,” serving as a bridge between the physical and digital worlds. From both a market valuation and application area perspective, DePIN presents significant growth potential. Compared to CePIN (Centralized Physical Infrastructure Networks), DePIN offers advantages such as decentralization, transparency, user autonomy, incentive mechanisms, and resistance to censorship, all of which further drive its development. Due to DePIN’s unique economic model, it is prone to creating a “flywheel effect.” While many current DePIN projects have completed the construction of the “supply side,” the next critical focus is to stimulate real user demand and expand the “demand side.”

Although DePIN shows immense development potential, it still faces challenges in technological maturity, service stability, market acceptance, and the regulatory environment. However, with technological advancement and market development, these challenges are expected to be gradually resolved. Once they are effectively addressed, DePIN could achieve mass adoption, bringing a large influx of new users and drawing wider attention to the crypto field, potentially becoming the driving engine of the next bull market. Let’s witness that day together!

Statement:

  1. This article, originally titled “解密 DePIN 生态:AI 算力的变革力量” (“Decrypting the DePIN Ecosystem: The Transformative Force of AI Computing Power”), is reproduced from [WeChat public account: Gryphsis Academy]. All copyrights belong to the original author [Gryphsis Academy]. If you have any objection to the reprint, please contact the Gate Learn Team, and the team will handle it promptly according to the relevant procedures.

  2. Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.

  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

DePIN Ecosystem: The Transformative Force Drives AI Computing Power

AdvancedJul 08, 2024
This article summarizes and distills the basic framework of the DePIN project, providing an overview using the "WHAT-WHY-HOW" structure to review and summarize the DePIN track. The author then outlines an analytical approach to understanding the DePIN project based on their experience, focusing on detailed analysis of specific "computing power" projects.
DePIN Ecosystem: The Transformative Force Drives AI Computing Power

Introduction

DePIN, a concept introduced by Messari in November 2022, is not entirely novel but shares similarities with previous phenomena like IoT (Internet of Things). The author considers DePIN as a new form of “sharing economy.”

Unlike previous DePIN trends, the current cycle focuses primarily on the AI trifecta—data, algorithms, and computing power—with a notable emphasis on “computing power” projects such as io.net, Aethir, and Heurist. Therefore, this article specifically analyzes projects related to “computing power.”

This article summarizes and distills the basic framework of the DePIN project, providing an overview using the “WHAT-WHY-HOW” structure to review and summarize the DePIN track. The author then outlines an analytical approach to understanding the DePIN project based on their experience, focusing on detailed analysis of specific “computing power” projects.

1. What’s DePIN?

1.1 Definition

DePIN, short for Decentralized Physical Infrastructure Networks, is a blockchain-powered network that connects physical hardware infrastructure in a decentralized manner. This allows users to access and utilize network resources without permission, often in a cost-effective manner. DePIN projects typically employ token reward systems to incentivize active participation in network construction, following the principle of “the more you contribute, the more you earn.”

The application scope of DePIN is extensive, encompassing fields such as data collection, computing, and data storage. Areas involving CePIN often feature DePIN’s presence.

Considering the operational and economic model of DePIN projects, DePIN fundamentally operates as a new form of “sharing economy.” Therefore, when conducting an initial analysis of DePIN projects, it can be approached succinctly by first identifying the core business of the project.

If the project mainly involves computing power or storage services, it can be simply defined as a platform providing “shared computing power” and “shared storage” services. This classification helps to clarify the project’s value proposition and its positioning in the market.

Source: @IoTeX

In the aforementioned model of the sharing economy, there are three main participants: the demand side, the supply side, and the platform side. In this model, firstly, the demand side sends requests to the platform side, such as for ridesharing or accommodation. Next, the platform side forwards these requests to the supply side. Finally, the supply side provides the corresponding services based on the requests, thus completing the entire business transaction process.

In this model, the flow of funds begins with the demand side transferring funds to the platform side. After the demand side confirms the order, funds then flow from the platform side to the supply side. The platform side earns profit through transaction fees by providing a stable trading platform and a smooth order fulfillment experience. Think of your experience when using ride-hailing services like DiDi—it exemplifies this model.

In traditional “sharing economy” models, the platform side is typically a centralized large enterprise that retains control over its network, drivers, and operations. In some cases, the supply side in “sharing economy” models is also controlled by the platform side, such as with shared power banks or electric scooters. This setup can lead to issues like monopolization by enterprises, lower costs of malpractice, and excessive fees that infringe upon the interests of the supply side. In essence, pricing power remains centralized within these enterprises, not with those who control the means of production, which does not align with decentralized principles.

However, in the Web3 model of the “sharing economy,” the platform facilitating transactions is a decentralized protocol. This eliminates intermediaries like DiDi, empowering the supply side with pricing control. This approach provides passengers with more economical ride services, offers drivers higher income, and enables them to influence the network they help build each day. It represents a multi-win model where all parties benefit.

1.2 Development History of DePIN

Since the rise of Bitcoin, people have been exploring the integration of peer-to-peer networks with physical infrastructure, aiming to build an open and economically incentivized decentralized network across various devices. Influenced by terms like DeFi and GameFi in Web3, MachineFi was one of the earliest concepts proposed.

  • In December 2021, IoTeX became the first entity to coin the term “MachineFi” for this emerging field. This name combines “Machine” and “DeFi” to signify the financialization of machines and the data they generate.
  • In April 2022, Multicoin introduced the concept of “Proof of Physical Work” (PoPW), which refers to an incentive structure enabling anyone to contribute to a shared objective without permission. This mechanism significantly accelerated the development of DePIN.
  • In September 2022, Borderless Capital proposed “EdgeFi.”
  • In November 2022, Messari conducted a Twitter poll to unify the abbreviation for this field, with options including PoPW, TIPIN, EdgeFi, and DePIN. Ultimately, DePIN won with 31.6% of the votes, becoming the unified name for this domain.

Source: @MessariCrypto

2. Why do we need DePIN?

In traditional physical infrastructure networks (such as communication networks, cloud services, energy networks, etc.), the market is often dominated by large or giant companies due to huge capital investment and operation and maintenance costs. This centralized industrial characteristic has brought about the following major dilemmas and challenges:

  • The interests of government and enterprises are tightly bound, and the threshold for new entrants is high: Taking the U.S. communications industry as an example, the Federal Communications Commission (FCC) auctions wireless spectrum to the highest bidder. This makes it easier for companies with strong capital to win and gain absolute advantage in the market, leading to a significant Matthew effect in the market, with the strong getting stronger.
  • The market competition pattern is stable and innovation and vitality are insufficient: A small number of licensed companies present market pricing power. Due to their generous and stable cash returns, these companies often lack motivation for further development, resulting in slow network optimization, untimely equipment reinvestment and upgrades, and insufficient motivation for technological innovation and personnel renewal.
  • Outsourcing of technical services leads to varying service standards: Traditional industries are moving towards specialized outsourcing, but significant differences in service philosophies and technical capabilities among outsourcing service providers make it challenging to control delivery quality, lacking effective collaboration mechanisms in outsourcing.

2.1 Disadvantages of CePIN

  • Centralized control: CePIN is controlled by centralized institutions, posing risks of single points of failure, vulnerability to attacks, low transparency, and lack of user control over data and operations.
  • High entry barriers: New entrants face high capital investments and complex regulatory barriers, limiting market competition and innovation.
  • Resource wastage: Centralized management leads to resource idleness or wastage, resulting in low resource utilization rates.
  • Inefficient equipment reinvestment: Decision-making centralized in a few institutions leads to inefficient equipment updates and investment.
  • Inconsistent service quality: Outsourced service quality is difficult to guarantee, and the standards vary.
  • Information asymmetry: Central institutions hold all data and operation records, preventing users from fully understanding internal system operations and increasing risks of information asymmetry.
  • Insufficient incentive mechanisms: CePIN lacks effective incentive mechanisms, leading to low user participation and contribution to network resources.

2.2 Advantages of DePin

  • Decentralization: No single point of failure enhances system reliability and resilience, reduces the risk of attacks, and improves overall security.
  • Transparency: All transactions and operation records are publicly auditable, giving users complete control over data, allowing participation in decision-making processes, and increasing system transparency and democracy.
  • Incentive mechanism: Through token economics, users can earn token rewards by contributing network resources, incentivizing active participation and maintenance of the network.
  • Censorship resistance: Without central control points, the network is more resistant to censorship and control, allowing free flow of information.
  • Efficient resource utilization: Activating latent idle resources through a distributed network increases resource utilization efficiency.
  • Openness and global deployment: Permissionless and open-source protocols enable global deployment, breaking geographical and regulatory restrictions of traditional CePIN.

DePIN addresses centralized control, data privacy concerns, resource wastage, and inconsistent service quality of CePIN through advantages such as decentralization, transparency, user autonomy, incentive mechanisms, and censorship resistance. It drives transformation in the production relations of the physical world, achieving a more efficient and sustainable physical infrastructure network. Therefore, for physical infrastructure networks requiring high security, transparency, and user engagement, DePIN represents a superior choice.

3. How to implement a DePIN network?

3.1 Comparison of different consensus mechanisms

Before discussing how to implement a DePIN network, we first explain the PoPW mechanism commonly used in DePIN networks.

DePIN network demands rapid scalability, low costs for node participation, abundant network supply nodes, and a high degree of decentralization.

Proof of Work (PoW) requires purchasing expensive mining rigs in advance to participate in network operations, significantly raising the entry barrier for DePIN network participation. Therefore, it is not suitable as a consensus mechanism for DePIN networks.

Proof of Stake (PoS) also requires upfront token staking, which reduces user willingness to participate in network node operations. Similarly, it is not suitable as a consensus mechanism for DePIN networks.

The emergence of Proof of Physical Work (PoPW) precisely meets the characteristic demands of DePIN networks. The PoPW mechanism facilitates easier integration of physical devices into the network, greatly accelerating the development process of DePIN networks.

Additionally, the economic model combined with PoPW fundamentally resolves the dilemma of which comes first, the chicken or the egg. Using token rewards, the protocol incentivizes participants to build network supply in a manner attractive to users.

3.2 Main participants of DePIN network

Generally speaking, a complete DePIN network includes the following participants.

  • Founder: The initiator of the DePIN network, often referred to as the “project party.” Founders play a crucial role in the early stages of the network by undertaking network construction and bootstrapping responsibilities.
  • Owner: Providers of resources to the DePIN network, such as computing miners and storage miners. They earn protocol tokens by supplying hardware and software resources to the network. During the network’s initial stages, owners are essential participants.
  • Consumer: Users who demand resources from the DePIN network. Typically, most demand in DePIN projects comes from B2B users, primarily from Web2. The entire Web3 demand for DePIN networks is relatively small, and relying solely on Web3 users’ demand is insufficient to sustain network operations. Projects like Filecoin and Bittensor are examples of such B2B-oriented projects.
  • Builder: Individuals who maintain and expand the DePIN network’s ecosystem. As the network grows, more builders join ecosystem development efforts. In the early stages of network development, builders are primarily composed of founders.

These participants collectively contribute to the growth, operation, and sustainability of the DePIN network ecosystem.

3.3 Basic components of DePIN network

For the DePIN network to operate successfully, it needs to interact with on-chain and off-chain data at the same time, which requires stable and powerful infrastructure and communication protocols. In general, the DePIN network mainly includes the following parts.

  • Physical equipment infrastructure: Usually the Owner provides the physical devices required for the network, such as GPU, CPU, etc.
  • Off-chain computing facilities: Data generated by physical devices needs to be uploaded to the chain for verification through off-chain computing facilities. This is what we call the PoPW proof mechanism. Oracles are usually used to upload data to the chain.
  • On-chain computing facilities: After the data is verified, the on-chain account address of the owner of the device needs to be checked and the token reward is sent to the on-chain address.
  • Token incentive mechanism: It is what we usually call the token economic model. The token economic model plays a very important role in the DePIN network and has different functions in different development periods of the network. It will be elaborated on in the following article.

3.5 Basic operation mode of DePIN network

The operating mode of the DePIN network follows a sequence similar to the architectural diagram mentioned above. Essentially, it involves off-chain data generation followed by on-chain data confirmation. Off-chain data adheres to a “bottom-up” rule, whereas on-chain data follows a “top-down” rule.

  • Service Provision for Incentives: Hardware devices in DePIN projects earn rewards by providing services, such as signal coverage in Helium, which results in $HNT rewards.
  • Presentation of Evidence: Before receiving incentives, devices must present evidence demonstrating that they have performed the required work. This evidence is known as Proof of Physical Work (PoPW).
  • Identity Verification Using Public-Private Keys: Similar to traditional public blockchains, the identity verification process for DePIN devices involves using public-private key pairs. The private key is used to generate and sign the physical work proofs, while the public key is used by external parties to verify the proofs or as an identity tag (Device ID) for the hardware device.
  • Reward Distribution: DePIN projects deploy smart contracts that record the on-chain account addresses of different device owners. This enables tokens earned by off-chain physical devices to be directly deposited into the owners’ on-chain accounts.

To simplify this process using a simple analogy, the operation of the DePIN network can be likened to an exam scenario:

  • Teacher (Issuer of Tokens): Verifies the authenticity of “student scores” (proof of work).
  • Student (Recipient of Tokens): Earns tokens by completing the “exam paper” (providing services).

Initially, the teacher hands out exam papers to students, who must complete the exam according to the paper’s requirements. After completion, students submit their papers to the teacher, who grades them based on a descending order principle, rewarding higher rankings with greater recognition (tokens).

In this analogy:

The “issued exam papers” represent the demand orders from the DePIN network’s demand side.

The solving of the exam questions corresponds to adhering to specific rules (PoPW) in DePIN.

The teacher verifies that the paper belongs to a specific student (using private keys for signatures and public keys for identification).

Grades are assigned based on performance, following a descending order principle that aligns with DePIN’s token distribution principle of “more contribution, more rewards.”

The basic operational mechanism of the DePIN network bears similarities to our everyday exam system. In the realm of cryptocurrencies, many projects essentially mirror real-life patterns on the blockchain. When faced with complex projects, employing analogies like this can aid in understanding and mastering the underlying concepts and operational logic.

4. Types of DePIN network

We have conducted an overview review of the DePIN sector according to the logical sequence of WHAT-WHY-HOW. Next, let’s outline the specific tracks within the DePIN sector. The breakdown of these tracks is divided into two main parts: Physical Resource Networks and Digital Resource Networks.

  • Physical Resource Networks: Incentivize participants to use or deploy location-based hardware infrastructure and provide non-standardized goods and services in the real world. This can be further subdivided into: wireless networks, geographic spatial data networks, mobile data networks, and energy networks.
  • Digital Resource Networks: Incentivize participants to use or deploy hardware infrastructure and provide standardized digital resources. This can be further subdivided into: storage networks, computing power networks, and bandwidth networks.

Among them, the representative projects of some sections are as follows:

4.1 Decentralized storage network - Filecoin ($FIL)

Filecoin is the world’s largest distributed storage network, with over 3,800 storage providers globally contributing more than 17 million terabytes (TB) of storage capacity. Filecoin can be considered one of the most renowned DePIN projects, with its FIL token reaching its peak price on April 1, 2021. Filecoin’s vision is to bring open, verifiable features to the three core pillars supporting the data economy: storage, computation, and content distribution.

Filecoin’s storage of files is based on the InterPlanetary File System (IPFS), which enables secure and efficient file storage.

One unique aspect of Filecoin is its economic model. Before becoming a storage provider on Filecoin, participants must stake a certain amount of FIL tokens. This creates a cycle where during a bull market, “staking tokens -> increased total storage space -> more nodes participating -> increased demand for staking tokens -> price surge” occurs. However, in bear markets, it can lead to a spiral of price decline. This economic model is more suited to thriving in bullish market conditions.

4.2 Decentralized GPU Rendering Platform - Render Network ($RNDR)

Render Network is a decentralized GPU rendering platform under OTOY, consisting of artists and GPU providers, offering powerful rendering capabilities to global users. The $RNDR token reached its peak price on March 17, 2024. Being part of the AI sector, Render Network’s peak coincided with the AI sector’s peak.

The operational model of Render Network works as follows: creators submit jobs requiring GPU rendering, such as 3D scenes or high-resolution images/videos, which are distributed to GPU nodes in the network for processing. Node operators contribute idle GPU computing power to Render Network and receive $RNDR tokens as rewards.

A unique aspect of Render Network is its pricing mechanism, employing a dynamic pricing model based on factors like job complexity, urgency, and available resources. This model determines rendering service prices, providing competitive rates to creators while fairly compensating GPU providers.

A recent positive development for Render Network is its support for “Octane on iPad,” a professional rendering application backed by Render Network.

4.3 Decentralized data market - Ocean ($OCEAN)

Ocean Protocol is a decentralized data exchange protocol primarily focused on secure data sharing and commercial applications of data. Similar to common DePIN projects, it involves several key participants:

  • Data Providers: Share data on the protocol.
  • Data Consumers: Purchase access to data using OCEAN tokens.
  • Node Operators: Maintain network infrastructure and earn OCEAN token rewards.

For data providers, data security and privacy are crucial. Ocean Protocol ensures data flow and protection through the following mechanisms:

  • Data Security and Control: Uses blockchain technology to ensure secure and transparent data transactions, enabling data owners to maintain complete control over their data.
  • Data Tokenization: Allows data to be bought, sold, and exchanged like other cryptocurrencies, enhancing liquidity in the data market.
  • Privacy Protection: Implements Compute-to-Data functionality, enabling computation and analysis on data without exposing the raw data. Data owners can approve AI algorithms to run on their data, with computations occurring locally under their control, ensuring data remains within their scope.
  • Fine-Grained Access Control: Provides detailed access control where data providers can set specific access policies, specifying which users or groups can access particular parts of the data and under what conditions. This granular control mechanism ensures secure data sharing while maintaining data privacy.

4.4 L1 - Lotex ($IOTX) Compatible with EVM

IoTeX was founded in 2017 as an open-source platform focused on privacy, integrating blockchain, secure hardware, and data service innovations to support the Internet of Trusted Things (IoT). Unlike other DePIN projects, IoTeX positions itself as a development platform designed for DePIN builders, akin to Google’s Colab. IoTeX’s flagship technology is the off-chain computing protocol W3bStream, which facilitates the integration of IoT devices into the blockchain. Some notable IoTeX DePIN projects include Envirobloq, Drop Wireless, and HealthBlocks.

4.5 Decentralize hotspot network - Helium ($HNT)

Helium, established in 2013, is a veteran DePIN project known for creating a large-scale network where users contribute new hardware. Users can purchase Helium Hotspots manufactured by third-party vendors to provide hotspot signals for nearby IoT devices. Helium rewards hotspot operators with HNT tokens to maintain network stability, similar to a mining model where the mining equipment is specified by the project.

In the DePIN arena, there are primarily two types of device models: customized dedicated hardware specified by the project, such as Helium, and ubiquitous hardware used in daily life integrated into the network, as seen with Render Network and io.net incorporating idle GPUs from users.

Helium’s key technology is its LoRaWAN protocol, a low-power, long-distance wireless communication protocol ideal for IoT devices. Helium Hotspots utilize the LoRaWAN protocol to provide wireless network coverage.

Despite establishing the world’s largest LoRaWAN network, Helium’s anticipated demand did not materialize as expected. Currently, Helium is focusing on launching 5G cellular networks. On April 20, 2023, Helium migrated to the Solana network and subsequently launched Helium Mobile in the Americas, offering a $20 per month unlimited 5G data plan. Due to its affordable pricing, Helium Mobile quickly gained popularity in North America.

From the global “DePIN” search index spanning five years, a minor peak was observed in December 2023 to January 2024, coinciding with the peak range of the $MOBILE token price. This continued upward trend in DePIN interest indicates that Helium Mobile has initiated a new era for DePIN projects.

Source: @Google Trendes

5. DePIN economic model

The economic model of DePIN projects plays a crucial role in their development, serving different purposes at various stages. For instance, in the initial stages of a project, it primarily utilizes token incentive mechanisms to attract users to contribute their software and hardware resources towards building the supply side of the project.

5.1 BME Model

Before discussing the economic model, let’s briefly outline the BME (Burn-and-Mint Equivalent) model, as it is closely related to most DePIN projects’ economic frameworks.

The BME model manages token supply and demand dynamics. Specifically, it involves the burning of tokens on the demand side for purchasing goods or services, while the protocol platform mints new tokens to reward contributors on the supply side. If the amount of newly minted tokens exceeds those burned, the total supply increases, leading to price depreciation. Conversely, if the burn rate exceeds the minting rate, deflation occurs, causing price appreciation. A continually rising token price attracts more supply-side users, creating a positive feedback loop.

Supply > Demand =>price drop

Supply<Demand=>Price increase

We can further elucidate the BME model using the Fisher Equation, an economic model that describes the relationship between money supply (M), money velocity (V), price level (P), and transaction volume (T):

MV = PT

  • M= money supply
  • V = velocity of money
  • P = price level
  • T = Transaction volume

When the token velocity V increases, and assuming other factors remain constant, the equilibrium of this equation can only be maintained by reducing token circulation (M) through burning mechanisms. Thus, as network usage increases, the burn rate also accelerates. When the inflation rate and burn rate achieve dynamic equilibrium, the BME model can maintain a stable balanced state.

Source: @Medium

Using the specific example of purchasing goods in real life to illustrate this process: First, manufacturers produce goods, which consumers then purchase.

During the purchase process, instead of handing money directly to the manufacturer, you burn a specified amount as proof of your purchase of the goods. Simultaneously, the protocol platform mints new currency at regular intervals. Additionally, this money is transparently and fairly distributed among various contributors in the supply chain, such as producers, distributors, and sellers.

Source:@GryphsisAcademy

5.2 Development stages of economic models

With a basic understanding of the BME model, we can now have a clearer understanding of common economic models in the DePIN space.

Overall, DePIN economic models can be broadly divided into the following three stages:

1st Stage: Initial Launch and Network Construction Phase

  • This is the initial phase of the DePIN project, focusing on establishing physical infrastructure networks.
  • It incentivizes individuals and enterprises to contribute hardware resources (such as computing power, storage space, bandwidth, etc.) through token incentive mechanisms to drive network deployment and expansion.
  • Projects typically rely on core teams to deploy nodes and promote networks in a somewhat centralized manner until reaching a critical mass.
  • Tokens are primarily used to reward contributors for providing hardware resources rather than paying for network usage fees.

2nd Stage: Network Development and Value Capture Phase

  • Once the network reaches critical mass, the project gradually transitions to a decentralized community governance model.
  • The network begins to attract end-users, and tokens are used not only to reward contributors but also to pay for network usage fees.
  • The project captures the economic value generated within the network through tokenization and distributes it to contributors and participants.
  • Token economic models typically use the BME model to balance supply and real demand, avoiding inflation or deflation.

3rd Stage: Maturity and Value Maximization Phase

  • The network has a large number of active users and contributors, forming a virtuous cycle.
  • Token economic models focus more on long-term value creation, enhancing token value through carefully designed deflation mechanisms.
  • Projects may introduce new token models to optimize token supply, promoting positive externalities in bilateral markets.
  • Community autonomy becomes the dominant mode of network governance.

A good economic model can create an economic flywheel effect for DePIN projects. Because DePIN projects employ token incentive mechanisms, they attract significant attention from suppliers during the project’s initial launch phase, enabling rapid scale-up through the flywheel effect.

The token incentive mechanism is key to the rapid growth of DePIN projects. Initially, projects need to develop appropriate reward criteria tailored to the scalability of physical infrastructure types. For example, to expand network coverage, Helium offers higher rewards in areas with lower network density compared to higher-density areas.
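Helium's actual reward algorithm is more involved, but a hypothetical density-weighted multiplier captures the principle of paying more where coverage is thin:

```python
# Hypothetical sketch of density-weighted rewards. Helium's real
# algorithm differs; the principle shown is that coverage in sparsely
# served areas earns a higher reward multiplier.

def reward_multiplier(nodes_nearby, base_reward=100.0):
    """Scale rewards down as local network density rises."""
    return base_reward / (1 + nodes_nearby)

print(reward_multiplier(0))    # 100.0  -> first node in an uncovered area
print(reward_multiplier(9))    # 10.0   -> dense area, weaker incentive
```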

As shown in the diagram below, early investors contribute real capital to the project, giving the token initial economic value. Suppliers actively participate in project construction to earn token rewards. As the network scales and with its lower costs compared to CePIN, an increasing number of demand-side users start adopting DePIN project services, generating income for the entire network protocol and forming a solid pathway from suppliers to demand.

With rising demand from the demand side, token prices increase through mechanisms like burning or buybacks (BME model), providing additional incentives for suppliers to continue expanding the network. This increase signifies that the value of tokens they receive also rises.

As the network continues to expand, investor interest in the project grows, prompting them to provide more financial support.

If the project is open-source or shares contributor/user data publicly, developers can build dApps based on this data, creating additional value within the ecosystem and attracting more users and contributors.

Source: @IoTeX

6. Current status of the DePIN sector

Current interest in DePIN projects centers mainly on the Solana network and “DePIN x AI.” Google Trends data shows that, among network infrastructure topics, DePIN correlates most strongly with Solana, and that attention is concentrated mainly in Asia, including China and India. This suggests that the main participants in DePIN projects are from Asia.

Source: @Google Trends

The current total market capitalization of the DePIN sector is about $32B. By comparison, traditional CePIN players are far larger: China Mobile’s market capitalization is about $210B, and AT&T (the largest carrier in the United States) is about $130B. Judged by market value alone, the DePIN sector still has considerable room for growth.

Source: @DePINscan

The turning point in the curve of total DePIN devices is evident in December 2023, coinciding with the peak popularity and highest coin price of Helium Mobile. It can be said that the DePIN boom in 2024 was initiated by Helium Mobile.

As shown in the diagram below, it displays the global distribution of DePIN devices, highlighting their concentration in regions such as North America, East Asia, and Western Europe. These areas are typically more economically developed, as becoming a node in the DePIN network requires provisioning of both software and hardware resources, which incur significant costs. For instance, a high-end consumer-grade GPU like the RTX-4090 costs $2,000, which is a substantial expense for users in less economically developed regions.

Due to the token incentive mechanism of DePIN projects, which follows the principle of “more contribution, more reward,” users aiming for higher token rewards must contribute more resources and use higher-end equipment. While advantageous for project teams, this undoubtedly raises the barrier to entry for users. A successful DePIN project should be backward compatible and inclusive, offering opportunities for participation even with lower-end devices, aligning with the blockchain principles of fairness, justice, and transparency.

Looking at the global device distribution map, many regions remain undeveloped. We believe that through continuous technological innovation and market expansion, the DePIN sector has the potential for global growth, reaching every corner, connecting people worldwide, and collectively driving technological advancement and social development.

Source: @DePINscan

7. Steps in DePIN Program Analysis

After this brief review of the DePIN sector, the author summarizes the basic steps for analyzing a DePIN project.

Most importantly, analyze a DePIN project’s operating model the way you would a Web2 sharing-economy business.

8. io.net

8.1 Basic information

Project Description

io.net is a decentralized computing network that enables the development, execution, and scaling of machine learning applications on the Solana blockchain. Its vision is to “bring 1 million GPUs together to form the world’s largest GPU cluster,” giving engineers access to massive amounts of computing power in a system that is accessible, customizable, cost-effective, and easy to implement.

Team background

  • Founder and CEO: Ahmed Shadid, who worked in quantitative finance and financial engineering before founding io.net, and is also a volunteer at the Ethereum Foundation.

  • Chief Marketing Officer and Chief Strategy Officer: Garrison Yang (Yang Xudong). Before joining io.net, he was Vice President of Strategy and Growth at Avalanche; he is an alumnus of UC Santa Barbara.

The project’s technical background is relatively solid, and the founding team has prior crypto experience.

Narrative: AI, DePIN, Solana Ecosystem.

Financing situation

Source: @RootDataLabs

Source: @RootDataLabs

On March 5, 2024, io.net secured $30 million in Series A funding at a $1 billion valuation, benchmarked against Render Network. The round was led by renowned top-tier investment institutions such as Hack VC, OKX Ventures, and Multicoin Capital, and also included influential figures like Anatoly Yakovenko (Solana co-founder) and Yat Siu (Animoca Brands co-founder). This early backing from top capital is why we refer to io.net as a star project: it has the funding, the background, the technology, and the expectation of an airdrop.

8.2 Product structure

The main products of io.net are as follows:

  • IO Cloud: Deploys and manages on-demand distributed GPU clusters, serving as the computing power equipment management interface for demand-side users.
  • IO Worker: Provides a comprehensive and user-friendly interface for efficiently managing users’ GPU node operations through an intuitive web application, tailored for supply-side users.
  • IO Explorer: Offers users a window into the internal workings of the network, providing comprehensive statistics and an overview of all aspects of the GPU cloud. Similar to how Solscan or blockchain explorers provide visibility for blockchain transactions.
  • BC8.AI: An advanced AI-driven image generator that uses deep neural networks to create highly detailed and accurate images from text descriptions or seed images. This AI application on io.net can be easily accessed with an IO ID.

Below is an image of a cat in the style of Van Gogh generated on BC8.AI.

Source: @ionet

Product features and advantages

Compared with traditional cloud service providers such as Google Cloud and AWS, io.net has the following features and advantages:

  • High number of GPUs with powerful computing capabilities
  • Affordable and cost-effective
  • Censorship-resistant – Quickly access advanced GPUs like A100 and H100 without needing approval
  • Anti-monopoly
  • Highly customizable for users

Let’s take AWS as an example to compare in detail:

Accessibility refers to how easily users can access and obtain computing power. When using traditional cloud service providers, you typically need to provide key identification information such as a credit card and contact details in advance. However, when accessing io.net, all you need is a Solana wallet to quickly and conveniently obtain computing power permissions.

Customization refers to the degree of customization available to users for their computing clusters. With traditional cloud service providers, you can only select the machine type and location of the computing cluster. In contrast, when using io.net, in addition to these options, you can also choose bandwidth speed, cluster type, data encryption methods, and billing options.

Source: @ionet

As shown in the image above, when a user selects the NVIDIA A100-SXM4-80GB model GPU, a Hong Kong server, ultra-high-speed bandwidth, hourly billing, and end-to-end encryption, the price per GPU is $1.58 per hour. This demonstrates that io.net offers a high degree of customization with many options available for users, prioritizing their experience. For DePIN projects, this customization is a key way to expand the demand side and promote healthy network growth.

In contrast, the image below shows the price of the NVIDIA A100-SXM4-80GB model GPU from traditional cloud service providers. For the same computing power requirements, io.net’s price is at least half that of traditional cloud providers, making it highly attractive to users.
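A rough cost sketch based on the figures above: the io.net rate is the $1.58/hour quoted earlier, while the traditional-cloud rate is an assumed placeholder derived from the article’s “at least half” claim, not a published price:

```python
# Cost comparison sketch. The $1.58/hr io.net figure comes from the
# quote above; the traditional-cloud rate is an ASSUMED placeholder
# (2x io.net, per the "at least half" claim), not a published price.

IONET_A100_HOURLY = 1.58
TRAD_CLOUD_A100_HOURLY = 3.16   # assumption: double io.net's price

def monthly_cost(hourly_rate, gpus, hours=24 * 30):
    """Total cost of running a GPU cluster for a 30-day month."""
    return hourly_rate * gpus * hours

cluster = 8   # a hypothetical 8x A100 fine-tuning cluster
saving = (monthly_cost(TRAD_CLOUD_A100_HOURLY, cluster)
          - monthly_cost(IONET_A100_HOURLY, cluster))
print(round(saving, 2))   # 9100.8
```

Under these assumptions, an 8-GPU cluster saves roughly $9,100 per month, which illustrates why the price gap matters to cost-sensitive demand-side users.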

8.3 Basic information of the network

We can use IO Explorer to comprehensively view the computing power of the entire network, including the number of devices, available service areas, computing power prices, etc.

Computing power equipment situation

Currently, io.net has a total of 101,545 verified GPUs and 31,154 verified CPUs. io.net checks every 6 hours whether each computing device is online, ensuring the stability of the network.

Source: @ionet

The second image shows currently available, PoS-verified, and easy-to-deploy computing devices. Compared to Render Network and Filecoin, io.net has a significantly higher number of computing devices. Furthermore, io.net integrates computing devices from both Render Network and Filecoin, allowing users to choose their preferred computing device provider when deploying compute clusters. This user-centric approach ensures that io.net meets users’ customization needs and enhances their overall experience.

Source: @ionet

Another notable feature of io.net’s computing devices is the large number of high-end devices available. In the US, for example, there are several hundred high-end GPUs like the H100 and A100. Given the current sanctions and the AI boom, high-end graphics cards have become extremely valuable computing assets.

With io.net, you can use these high-end computing devices provided by suppliers without any review, regardless of whether you are a US citizen. This is why we highlight the anti-monopoly advantage of io.net.

Source:@ionet

Business revenue

io.net’s revenue dashboard shows that the network earns stable income every day, with total revenue having reached the million-dollar level. This indicates that io.net has completed the build-out of its supply side: the project’s cold-start period has gradually passed, and the network-development period has begun.


Source: @ionet


Source: @ionet

Source: @ionet

From io.net’s supply side:

  • The validity of network nodes has been verified
  • Computing devices are stably online and sufficient in number
  • A certain number of high-end computing devices can fill part of the market demand

From the demand side, however:

  • Many devices are still idle
  • Most computing tasks come from BC8.AI
  • Demand from enterprises and individuals has not yet been stimulated

8.4 Economic model

io.net’s native network token is $IO, with a fixed total supply of 800 million tokens. An initial supply of 500 million tokens will be released, and the remaining 300 million tokens will be distributed as rewards to suppliers and token stakers over 20 years, issued hourly.

$IO employs a burn deflation mechanism: network revenue is used to purchase and burn $IO tokens, with the amount of tokens burned adjusted according to the price of $IO.

Token Utilities:

  • Payment Method: Payments made with USDC for users and suppliers incur a 2% fee, while payments made with $IO incur no fee.
  • Staking: A minimum of 100 $IO tokens must be staked for a node to receive idle rewards from the network.
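The emission schedule and fee rules above can be sketched as follows; the flat hourly rate assumes perfectly linear emission, which is a simplification of the actual schedule:

```python
# Sketch of the $IO supply schedule and fee rules described above.
# The flat hourly rate assumes perfectly linear emission, which is a
# simplification of the project's actual schedule.

TOTAL_SUPPLY = 800_000_000
INITIAL_SUPPLY = 500_000_000
REWARD_POOL = TOTAL_SUPPLY - INITIAL_SUPPLY   # 300M over 20 years

HOURS_IN_20_YEARS = 20 * 365 * 24             # ignoring leap days
hourly_emission = REWARD_POOL / HOURS_IN_20_YEARS
print(round(hourly_emission, 2))    # 1712.33 tokens per hour

def payment_fee(amount, token):
    """2% fee on USDC payments, no fee when paying in $IO."""
    return amount * 0.02 if token == "USDC" else 0.0

print(payment_fee(1000, "USDC"))    # 20.0
print(payment_fee(1000, "IO"))      # 0.0
```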

Token Allocation:

From the allocation chart, it can be seen that half of the tokens are allocated to project community members, indicating the project’s intention to incentivize community participation for its growth. The R&D ecosystem accounts for 16%, ensuring continuous support for the project’s technology and product development.


As can be seen from the token release chart, $IO tokens are released gradually and linearly. This release mechanism helps stabilize the price of $IO tokens and avoid price fluctuations caused by the sudden appearance of a large number of $IO tokens in the market. At the same time, the reward mechanism of $IO tokens can also motivate long-term holders and stakers, enhancing the stability of the project and user stickiness.

Overall, io.net’s tokenomics is a well-designed token scheme. The allocation of half of the tokens to the community highlights the project’s emphasis on community-driven and decentralized governance, which supports long-term development and the establishment of credibility.

In the third stage of the DePIN economic development phases discussed earlier, it was mentioned that “community autonomy becomes the dominant mode of network governance.” io.net has already laid a solid foundation for future community autonomy. Additionally, the gradual release mechanism and burn mechanism of the $IO token effectively distribute market pressure and reduce the risk of price volatility.

From these aspects, it is clear that io.net’s various mechanisms demonstrate that it is a well-planned project with a focus on long-term development.

8.5 Ways to participate in io.net

Currently, io.net’s “Ignition Rewards” has entered its 3rd season, running from June 1st to June 30th. The main way to participate is to integrate your computing devices into the main computing network for mining. Mining rewards in $IO tokens depend on factors such as device computing power, network bandwidth speed, and others.

In the 1st season of “Ignition Rewards,” the initial threshold for device integration was set at the “GeForce GTX 1080 Ti.” This reflects the earlier point about giving lower-end devices an opportunity to participate, aligning with the blockchain principles of fairness, justice, and transparency. In the 2nd and 3rd seasons of “Ignition Rewards,” the initial threshold was raised to the “GeForce RTX 3050.”

The reason for this approach is twofold: from the project’s perspective, as the project develops, low-end computing devices contribute less to the overall network and stronger computing devices better maintain network stability. From the demand-side users’ perspective, most users require high-end computing devices for training and inference of AI models, and low-end devices cannot meet their needs.

Therefore, as the project progresses favorably, raising the participation threshold is a correct approach. Similar to the Bitcoin network, the goal for the project is to attract better, stronger, and more numerous computing devices.

8.6 Conclusion & Outlook

io.net has performed well during the project’s cold start and network construction phase, completing the entire network deployment, validating the effectiveness of computational nodes, and generating sustained revenue.

The project’s next main goal is to further expand the network ecosystem and increase demand from the computational needs market, which represents a significant opportunity. Successfully promoting the project in this market will require efforts from the project’s marketing team.

In practice, AI algorithm model development mainly involves two major parts: training and inference. Let’s illustrate these two concepts with the simple example of fitting a quadratic function y = ax² + bx + c:

Using the (x, y) data pairs (the training set) to solve for the unknown coefficients (a, b, c) is the training process of the AI algorithm; once the coefficients (a, b, c) are obtained, computing y for a given x is the inference process.
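A minimal NumPy sketch of this training/inference split, using a quadratic with (hidden) true coefficients a=2, b=-3, c=1:

```python
# "Training" fits the unknown coefficients (a, b, c) of
# y = a*x^2 + b*x + c from (x, y) pairs; "inference" evaluates
# the fitted model on new inputs.
import numpy as np

# Training set generated from the hidden truth a=2, b=-3, c=1
x_train = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_train = 2 * x_train**2 - 3 * x_train + 1

# Training: solve for the coefficients via least squares
a, b, c = np.polyfit(x_train, y_train, deg=2)

# Inference: plug a new x into the fitted model
y_pred = a * 3.0**2 + b * 3.0 + c
print(round(y_pred, 4))   # 10.0  (= 2*9 - 3*3 + 1)
```

Even in this toy case, training requires solving a system over the whole dataset, while inference is a single cheap evaluation, which is the asymmetry the following paragraphs build on.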

In this computation process, we can clearly see that the computational workload of the training process is much greater than that of the inference process. Training a Large Language Model (LLM) requires extensive support from computational clusters and consumes substantial funds. For example, training GPT-3 175B involved thousands of Nvidia V100 GPUs over several months, with training costs reaching tens of millions of dollars.

Performing AI large model training on decentralized computing platforms is challenging because it involves massive data transfers and exchanges, demanding high network bandwidth that decentralized platforms struggle to meet. NVIDIA has established itself as a leader in the AI industry not only due to its high-end computational chips and underlying AI computing acceleration libraries (cuDNN) but also because of its proprietary communication bridge, “NVLink,” which significantly speeds up the movement of large-scale data during model training.

In the AI industry, training large models not only requires extensive computational resources but also involves data collection, processing, and transformation. These processes often necessitate scalable infrastructure and centralized data processing capabilities. Consequently, the AI industry is fundamentally a scalable and centralized sector, relying on robust technological platforms and data processing capabilities to drive innovation and development.

Therefore, decentralized computing platforms like io.net are best suited for AI algorithm inference tasks. Their target customers should include students and those with task requirements for fine-tuning downstream tasks based on large models, benefiting from io.net’s affordability, ease of access, and ample computational power.

9. Aethir

9.1 Project background

Artificial Intelligence is regarded as one of the most significant technologies humanity has ever seen. With the advent of Artificial General Intelligence (AGI), lifestyles are poised to undergo revolutionary changes. However, because a few companies dominate AI technology development, a wealth gap exists between the GPU-affluent and the GPU-deprived. Aethir, through its decentralized physical infrastructure network (DePIN), aims to increase the accessibility of on-demand computing resources, thereby balancing the distribution of AI development outcomes.

Aethir is an innovative distributed cloud computing infrastructure network specifically designed to meet the high demand for on-demand cloud computing resources in the fields of Artificial Intelligence (AI), gaming, and virtual computing. Its core concept involves aggregating enterprise-grade GPU chips from around the world to form a unified global network, significantly increasing the supply of on-demand cloud computing resources.

The primary goal of Aethir is to address the current shortage of computing resources in the AI and cloud computing sectors. With the advancement of AI and the popularity of cloud gaming, the demand for high-performance computing resources continues to grow. However, due to the monopolization of GPU resources by a few large companies, small and medium-sized enterprises and startups often struggle to access sufficient computing power. Aethir provides a viable solution through its distributed network, helping resource owners (such as data centers, tech companies, telecom companies, top gaming studios, and cryptocurrency mining companies) fully utilize their underutilized GPU resources and provide efficient, low-cost computing resources to end-users.

Advantages of Distributed Cloud Computing:

  • Enterprise-grade computing resources: Aethir aggregates high-quality GPU resources, such as NVIDIA’s H100 chips, from various enterprises and data centers, ensuring high-quality and reliable computing resources.
  • Low latency: Aethir’s network design supports low-latency real-time rendering and AI inference applications, which is challenging to achieve with centralized cloud computing infrastructure. Low latency is crucial, especially in the cloud gaming sector, for providing a seamless gaming experience.
  • Rapid scalability: Adopting a distributed model allows Aethir to expand its network more quickly to meet the rapidly growing demands of the AI and cloud gaming markets. Compared to traditional centralized models, distributed networks can flexibly increase computing resource supply.
  • Superior unit economics: Aethir’s distributed network reduces the high operating costs of traditional cloud computing providers, enabling it to offer computing resources at lower prices. This is particularly important for small and medium-sized enterprises and startups.
  • Decentralized ownership: Aethir ensures that resource owners retain control over their resources, allowing them to flexibly adjust their resource utilization according to demand while earning corresponding revenue.

Through these core advantages, Aethir leads not only in technology but also holds significant economic and societal implications. By leveraging distributed physical infrastructure networks (DePINs), it makes the supply of computing resources more equitable, promoting the democratization and innovation of AI technology. This innovative model not only changes the supply of computing resources but also opens up new possibilities for the future development of AI and cloud computing.

Aethir’s technology architecture is composed of multiple core roles and components to ensure that its distributed cloud computing network can operate efficiently and securely. Below is a detailed description of each key role and component:

Core roles and components

Node Operators:

  • Node operators provide the actual computing resources, and they connect their GPU resources to the Aethir network for use.
  • Node operators need to first register their computing resources and undergo specification evaluation and confirmation by the Aethir network before they can start providing services.

Aethir Network

Containers

  • Containers are where computing tasks are performed, ensuring an instantly responsive cloud computing experience.
  • Selection: AI customers choose containers based on performance needs, and gaming customers choose containers based on service quality and cost.
  • Staking: New node operators need to stake $ATH tokens before providing resources. If quality control standards are violated or network services are disrupted, their staked tokens will be slashed.
  • Rewards: Containers are rewarded in two ways, one is the reward for maintaining a high readiness state (PoC), and the other is the service reward for actually using computing resources (PoD and service fees).

Checkers

  • The main responsibility of the checker is to ensure the integrity and performance of containers in the network, performing critical tests in registration, standby and rendering states.
  • Checking methods: including directly reading container performance data and simulation testing.

Indexers

  • The indexer matches user needs to appropriate containers to ensure fast, high-quality service delivery.
  • Selection: Indexers are randomly selected to maintain decentralization and reduce signal latency.
  • Matching criteria: Containers are selected based on service fees, quality of experience, and network rating index.
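A hypothetical sketch of this matching step; the fields and weights are illustrative, not Aethir’s actual specification:

```python
# Hypothetical sketch of indexer matching: containers ranked by a
# weighted score over service fee, quality of experience (qoe), and
# network rating. Fields and weights are illustrative, not Aethir's spec.

def score(container, w_fee=0.4, w_qoe=0.4, w_rating=0.2):
    # Lower fee is better, so invert it; fee/qoe/rating assumed in [0, 1]
    return (w_fee * (1 - container["fee"])
            + w_qoe * container["qoe"]
            + w_rating * container["rating"])

containers = [
    {"id": "c1", "fee": 0.30, "qoe": 0.90, "rating": 0.80},
    {"id": "c2", "fee": 0.10, "qoe": 0.70, "rating": 0.60},
    {"id": "c3", "fee": 0.50, "qoe": 0.95, "rating": 0.90},
]

# The indexer routes the user's request to the best-scoring container
best = max(containers, key=score)
print(best["id"])   # c1
```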

End Users:

End users are consumers of Aethir network computing resources, whether for AI training and inference, or gaming. End users submit requests, and the network matches the appropriate high-performance resources to meet the needs.

Treasury:

The treasury holds all staked $ATH tokens and pays out all $ATH rewards and fees.

Settlement Layer:

Aethir utilizes blockchain technology as its settlement layer, recording transactions, enabling scalability and efficiency, and using $ATH for incentivization. Blockchain ensures transparency in resource consumption tracking and enables near real-time payments.

For specific relationships, please refer to the following chart:

Source: @AethirCloud

9.3 Consensus mechanism

The Aethir network operates using a unique mechanism, with two primary proofs of work at its core:

Proof of Rendering Capacity:

  • A set of nodes is randomly selected every 15 minutes to validate transactions.
  • The probability of a node being selected depends on the number of tokens staked on it, the quality of its service, and how often it has been selected before: the larger the stake, the better the quality, and the fewer prior selections, the more likely the node is to be chosen.
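A sketch of such stake-, quality-, and recency-weighted selection; the weighting formula is an assumption that follows the description above, not Aethir’s published algorithm:

```python
# Sketch of weighted node selection for Proof of Rendering Capacity.
# The weight formula is an ASSUMPTION following the description above:
# more stake and better quality raise the odds; frequent past
# selection lowers them.
import random

def selection_weight(node):
    return node["stake"] * node["quality"] / (1 + node["times_selected"])

def select_validators(nodes, k):
    weights = [selection_weight(n) for n in nodes]
    # Sampling with replacement, for simplicity of the sketch
    return random.choices(nodes, weights=weights, k=k)

nodes = [
    {"id": "n1", "stake": 5000, "quality": 0.99, "times_selected": 12},
    {"id": "n2", "stake": 2000, "quality": 0.95, "times_selected": 2},
    {"id": "n3", "stake": 8000, "quality": 0.80, "times_selected": 30},
]

# Every 15 minutes a fresh committee is drawn
committee = select_validators(nodes, k=2)
print([n["id"] for n in committee])
```

Note that n2, despite the smallest stake, carries the highest weight because it has rarely been selected, which is exactly the fairness property described above.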

Proof of Rendering Work:

  • The performance of nodes will be closely monitored to ensure high-quality services.
  • The network adjusts resource allocation based on user needs and geographic location to ensure the best quality of service.

Source: @AethirCloud

9.4 Token economics model

The ATH token plays a variety of roles in the Aethir ecosystem, including medium of exchange, governance tool, incentive, and platform development support.

Specific uses include:

  • Trading tools: ATH serves as the standard transaction medium within the Aethir platform, used to purchase computing power, covering business models such as AI applications, cloud computing and virtualized computing.
  • Diverse applications: ATH is not only used for current business, but also plans to continue to play a role in future merged mining and integration markets as the ecosystem grows.
  • Governance and participation: ATH token holders can participate in Aethir’s Decentralized Autonomous Organization (DAO) and influence platform decisions through proposals, discussions, and voting.
  • Staking: New node operators are required to stake ATH tokens to ensure alignment with platform goals and as a safeguard against potential misconduct.

Specific distribution strategy: The Aethir project’s token is $ATH, with a total issuance of 42 billion. The largest share, 35%, goes to GPU providers such as data centers and individual retail contributors; 17.5% goes to the team and advisors; and 15% and 11.75% go to the node-checking and sales teams, respectively. As shown below:

Source: @AethirCloud

Reward emissions

The mining reward emission strategy aims to balance the participation of resource providers and the sustainability of long-term rewards. Through the decay function of early rewards, it is ensured that participants who join later are still motivated.
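Aethir does not publish the exact decay function here; as one common pattern, a geometric decay front-loads rewards while leaving emissions for late joiners (figures illustrative):

```python
# Illustrative decaying emission schedule. This is NOT Aethir's
# published formula; a geometric decay is one common way to reward
# early participants heavily while still motivating late joiners.

def yearly_emission(pool, decay=0.7, years=10):
    """Emit a decaying share of the remaining pool each year."""
    emissions, remaining = [], pool
    for _ in range(years):
        emitted = remaining * (1 - decay)   # 30% of what remains
        emissions.append(emitted)
        remaining -= emitted
    return emissions

schedule = yearly_emission(pool=14_700_000_000)  # 35% of 42B for providers
print(round(schedule[0]))    # year 1 gets the largest slice
print(round(schedule[-1]))   # year 10 still emits, just much less
```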

9.5 How to participate in Aethir mining

The Aethir platform chooses to allocate the majority of its Total Token Supply (TTS) to mining rewards, which is crucial for strengthening the ecosystem. This allocation aims to support node operators and uphold container standards. Node operators are central to Aethir, providing essential computational power, while containers are pivotal in delivering computing resources.

Mining rewards are divided into two forms: Proof of Rendering Capacity and Proof of Rendering Work. Proof of Rendering Work incentivizes node operators to complete computational tasks and is specifically distributed to containers. Proof of Rendering Capacity, on the other hand, rewards compute providers for making their GPUs available to Aethir; the more GPUs used by clients, the greater the additional token rewards. These rewards are distributed in $ATH tokens. They serve not only as distribution but also as investments in the future sustainability of the Aethir community.

10. Heurist

10.1 Project Background

Heurist is a Layer 2 network based on the ZK Stack, focusing on AI model hosting and inference. It is positioned as the Web3 version of HuggingFace, providing users with serverless access to open-source AI models. These models are hosted on a decentralized computing resource network.

Heurist’s vision is to decentralize AI using blockchain technology, achieving widespread adoption and equitable innovation. Its goal is to keep AI technology accessible and innovation unbiased, promoting the integration and development of AI and cryptocurrency.

The term “Heurist” is derived from “heuristics,” which refers to the process by which the human brain quickly reaches reasonable conclusions or solutions when solving complex problems. This name reflects Heurist’s vision of rapidly and efficiently solving AI model hosting and inference problems through decentralized technology.

Issues with Closed-Source AI

Closed-source AI typically undergoes scrutiny under U.S. laws, which may not align with the needs of other countries and cultures, leading to over-censorship or inadequate censorship. This not only affects AI models’ performance but also potentially infringes on users’ freedom of expression.

The Rise of Open-Source AI

Open-source AI models have outperformed closed-source models in various fields. For example, Stable Diffusion outperforms OpenAI’s DALL-E 2 in image generation and is more cost-effective. The weights of open-source models are publicly available, allowing developers and artists to fine-tune them based on specific needs.

The community-driven innovation of open-source AI is also noteworthy. Open-source AI projects benefit from the collective contributions and reviews of diverse communities, fostering rapid innovation and improvement. Open-source AI models provide unprecedented transparency, enabling users to review training data and model weights, thereby enhancing trust and security.

Below is a detailed comparison between open-source AI and closed-source AI:

Source: @heurist_ai

10.2 Data privacy

When handling AI model inference, the Heurist project integrates Lit Protocol to encrypt data in transit, including the inputs and outputs of AI inference. Heurist divides miners into two broad categories: public miners and privacy miners:

  • Public miners: Anyone with a GPU that meets the minimum requirements can become a public miner; the data processed by such miners is not encrypted.
  • Privacy miners: Trusted node operators can become privacy miners, processing sensitive information such as confidential documents, health records, and user identity data. These miners must comply with off-chain privacy policies. Data is encrypted in transit and cannot be decrypted by the Heurist protocol’s routers and sequencers; only miners matching the user’s access control conditions (ACC) can decrypt it.

Source: @heurist_ai

Trust in privacy miners is established mainly through two methods:

  • Off-chain consensus: trust established through real-world laws or agreements, which is technically straightforward to implement.
  • Trusted Execution Environment (TEE): TEEs ensure secure and confidential handling of sensitive data. Although there are currently no mature TEE solutions for large AI models, recent chips from companies like Nvidia show potential for running AI workloads inside TEEs.

10.3 Token economics model

The Heurist project’s token, named HUE, is a utility token with a dynamic supply regulated through issuance and burn mechanisms. The maximum supply of HUE tokens is capped at 1 billion.

The token distribution and issuance mechanisms can be divided into two main categories: mining and staking.

  • Mining: Users can mine HUE tokens by hosting AI models on their GPUs. A mining node must stake at least 10,000 HUE or esHUE tokens to activate; below this threshold, no rewards are given. Mining rewards are issued as esHUE tokens, which are automatically compounded into the miner node’s stake. The reward rate depends on GPU efficiency, availability (uptime), the type of AI model run, and the total stake in the node.
  • Staking: Users can stake HUE or esHUE tokens in miner nodes. Staking rewards are issued in HUE or esHUE, with a higher yield for staking esHUE. Unstaking HUE tokens requires a 30-day lock-up period, while esHUE can be unstaked without a lock-up period. esHUE rewards can be converted to HUE tokens through a one-year linear vesting period. Users can instantly transfer their staked HUE or esHUE from one miner node to another, promoting flexibility and competition among miners.
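The activation threshold and auto-compounding rules above can be sketched in a few lines. This is an illustrative model only: the 10,000-token threshold comes from the text, but the fixed daily rate and the compounding formula are simplifying assumptions, not Heurist's actual reward math.

```python
# Illustrative sketch of Heurist's stated staking rules. The activation
# threshold is from the text; the daily rate is a placeholder assumption.

MIN_ACTIVATION_STAKE = 10_000  # HUE/esHUE required before a node earns rewards


def node_is_active(stake: float) -> bool:
    """A miner node earns rewards only once its stake meets the threshold."""
    return stake >= MIN_ACTIVATION_STAKE


def compound_eshue(stake: float, daily_rate: float, days: int) -> float:
    """esHUE mining rewards auto-compound into the node's stake."""
    for _ in range(days):
        if node_is_active(stake):
            stake *= 1 + daily_rate
    return stake
```

A node staked below the threshold earns nothing no matter how long it runs, which is why the 10,000-token floor matters more than the rate itself.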

Token Burn Mechanism

Similar to Ethereum’s EIP-1559 model, the Heurist project has implemented a token burn mechanism. When users pay for AI inference services, a portion of the HUE payment is permanently removed from circulation. The balance between token issuance and burn is closely related to network activity. During periods of high usage, the burn rate may exceed the issuance rate, putting the Heurist network into a deflationary phase. This mechanism helps regulate token supply and aligns the token’s value with actual network demand.
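The supply dynamic described above reduces to a simple balance between issuance and burn. The sketch below is a toy model: the burn share and the issuance and fee figures are placeholders, not protocol parameters.

```python
# Toy model of the EIP-1559-style mechanism described above: a share of each
# HUE payment is burned, so heavy network usage can outpace issuance.

def net_supply_change(issued: float, fees_paid: float, burn_share: float) -> float:
    """Positive result = net inflation for the period; negative = net deflation."""
    burned = fees_paid * burn_share
    return issued - burned

# Low usage: issuance dominates (inflationary period).
# High usage: burn exceeds issuance (deflationary period).
```

When fees paid for inference are large relative to new issuance, the network enters the deflationary phase the text describes.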

Bribe Mechanism

The bribe mechanism, first proposed by Curve Finance users, is a gamified incentive system to help direct liquidity pool rewards. The Heurist project has adopted this mechanism to enhance mining efficiency. Miners can set a percentage of their mining rewards as bribes to attract stakers. Stakers may choose to support miners offering the highest bribes but will also consider factors like hardware performance and uptime. Miners are incentivized to offer bribes because higher staking leads to higher mining efficiency, fostering an environment of both competition and cooperation, where miners and stakers work together to provide better services to the network.
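One way to picture the bribe mechanism is from the staker's side: each miner advertises a bribe (a share of mining rewards passed on to stakers), but stakers also weigh uptime and hardware performance. The scoring formula below is an illustrative assumption, not Heurist's actual matching logic.

```python
# Sketch of how a staker might rank miners under the bribe mechanism.
# The scoring function is a hypothetical heuristic for illustration.

from dataclasses import dataclass


@dataclass
class Miner:
    name: str
    bribe_share: float  # fraction of mining rewards offered to stakers
    uptime: float       # availability, 0.0-1.0
    perf_score: float   # relative hardware performance


def staker_score(m: Miner) -> float:
    # A high bribe is worth little if the node is rarely online or slow.
    return m.bribe_share * m.uptime * m.perf_score


def best_miner(miners: list[Miner]) -> Miner:
    return max(miners, key=staker_score)
```

Under this scoring, a reliable miner offering a modest bribe can out-compete a flaky miner offering a large one, which matches the text's point that stakers consider more than the bribe alone.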

Through these mechanisms, the Heurist project aims to create a dynamic and efficient token economy to support its decentralized AI model hosting and inference network.

10.4 Incentivized Testnet

The Heurist project allocated 5% of the total supply of HUE tokens for mining rewards during the Incentivized Testnet phase. These rewards are calculated in the form of points, which can be redeemed for fully liquid HUE tokens after the Mainnet Token Generation Event (TGE). Testnet rewards are divided into two categories: one for Stable Diffusion models and the other for Large Language Models (LLMs).

Points mechanism

Llama Point: For LLM miners, one Llama Point is earned for every 1,000 input/output tokens processed by the Mixtral 8x7B model. The specific calculation is shown in the figure below:

Waifu Point: For Stable Diffusion miners, one Waifu Point is obtained for each 512x512 pixel image generated (using Stable Diffusion 1.5 model, after 20 iterations). The specific calculation is shown in the figure below:

After each computing task is completed, the complexity of the task is evaluated based on GPU performance benchmark results and points are awarded accordingly. The allocation ratio of Llama Points and Waifu Points will be determined closer to TGE, taking into account demand and usage of both model categories over the coming months.
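The stated rates translate into simple back-of-envelope formulas. The Llama rate (1 point per 1,000 tokens) and the Waifu baseline (one 512x512, 20-iteration image = 1 point) come from the text; scaling larger image jobs linearly with pixel count and iteration count is an assumption, since the exact complexity weighting is determined per task.

```python
# Back-of-envelope testnet point calculations. Baselines are from the text;
# linear scaling for larger images is an assumption.

BASE_PIXELS = 512 * 512
BASE_ITERATIONS = 20


def llama_points(tokens_processed: int) -> float:
    """1 Llama Point per 1,000 input/output tokens (Mixtral 8x7B)."""
    return tokens_processed / 1_000


def waifu_points(width: int, height: int, iterations: int) -> float:
    """1 Waifu Point per 512x512, 20-iteration image, scaled by job size."""
    return (width * height / BASE_PIXELS) * (iterations / BASE_ITERATIONS)
```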

Source: @heurist_ai

There are two main ways to participate in the testnet:

  • Bring your own GPU: Whether you’re a gamer with a high-end rig, a former Ethereum miner with a spare GPU, an AI researcher with occasionally idle GPUs, or a data center owner with excess capacity, you can download the miner program and set up a miner node. Detailed hardware specifications and setup guides are available on the Miner Guide page.
  • Rent a Hosted Node: For those without the required GPU hardware, Heurist offers competitively priced hosted mining node services. A professional engineering team will handle the setup of the mining hardware and software, allowing you to simply watch your rewards grow daily.

The recommended GPU for participating in Heurist mining is as shown in the figure below:


Source: @heurist_ai

Note that the Heurist testnet has anti-cheating measures: the input and output of every computing task are stored and tracked by an asynchronous monitoring system. If a miner manipulates the reward system (for example, by submitting incorrect or low-quality results, tampering with downloaded model files, or falsifying device and latency metrics), the Heurist team reserves the right to reduce their testnet points.

10.5 Heurist liquidity mining

Heurist testnet offers two types of points: Waifu Points and Llama Points. Waifu Points are earned by running the Stable Diffusion model for image generation, while Llama Points are earned by running large language models (LLMs). There are no restrictions on the GPU model for running these models, but there are strict requirements for VRAM. Models with higher VRAM requirements will have higher point coefficients.

The image below lists the currently supported LLM models. The Stable Diffusion workload has two modes: with SDXL enabled and with SDXL excluded. Enabling SDXL requires 12 GB of VRAM, while the non-SDXL mode ran with just 8 GB of VRAM in the author’s tests.

Source: @heurist_ai

10.6 Applications

The Heurist project has demonstrated its powerful AI capabilities and broad application prospects through three application directions: image generation, chatbots, and AI search engines. In terms of image generation, Heurist uses the Stable Diffusion model to provide efficient and flexible image generation services; in terms of chatbots, it uses large language models to achieve intelligent dialogue and content generation; in terms of AI search engines, it combines pre-trained language models to provide accurate information retrieval and detailed answers.

These applications not only improve the user experience, but also demonstrate Heurist’s innovation and technical advantages in the field of decentralized AI. The application effects are shown in the figure below:

Source: @heurist_ai

Image generation

The image generation application of the Heurist project mainly relies on the Stable Diffusion model to generate high-quality images through text prompts. Users can interact with the Stable Diffusion model via the REST API, submitting textual descriptions to generate images. The cost of each generation task depends on the resolution of the image and the number of iterations. For example, generating a 1024x1024 pixel, 40-iteration image using the SD 1.5 model requires 8 standard credit units. Through this mechanism, Heurist implements an efficient and flexible image generation service.
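The pricing in the example is consistent with linear scaling in pixel count and iteration count from a 512x512, 20-iteration baseline of 1 credit: 1024x1024 is 4x the pixels and 40 iterations is 2x the baseline, giving 8 credits. That baseline is inferred from the example rather than stated explicitly, so treat the formula as a sketch.

```python
# Credit cost for an SD 1.5 generation task, assuming linear scaling from an
# inferred baseline of 1 credit for a 512x512 image at 20 iterations.

def sd_credit_cost(width: int, height: int, iterations: int) -> float:
    return (width * height) / (512 * 512) * (iterations / 20)

# 1024x1024 at 40 iterations: 4x the pixels, 2x the iterations -> 8 credits,
# matching the example in the text.
```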

Chatbot

The chatbot application of the Heurist project implements intelligent dialogue through large language models (LLMs). Heurist Gateway is an OpenAI-compatible LLM API endpoint built with LiteLLM, which lets developers call the Heurist API in the OpenAI format. For example, using the Mixtral 8x7B model, developers can replace an existing LLM provider with just a few lines of code and get performance comparable to GPT-3.5 or Claude 2 at a lower cost.
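Because the gateway speaks the OpenAI chat-completions format, switching providers is mostly a matter of changing the base URL and model name. The sketch below only constructs the request payload rather than sending it; the endpoint URL and model identifier are illustrative assumptions, not Heurist's documented values.

```python
# Build an OpenAI-format chat-completions request for an OpenAI-compatible
# gateway. The base URL and model ID below are hypothetical placeholders.

import json


def build_chat_request(prompt: str,
                       model: str = "mistralai/mixtral-8x7b",       # hypothetical ID
                       base_url: str = "https://gateway.example.com"):  # placeholder
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{base_url}/v1/chat/completions", json.dumps(payload)
```

Pointing an existing OpenAI-style client at a different base URL is exactly the "few lines of code" migration the text describes.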

Heurist’s LLM model supports a variety of applications, including automated customer service, content generation, and complex question answering. Users can interact with these models through API requests, submit text input, and get responses generated by the models, enabling diverse conversational and interactive experiences.

AI search engine

The Heurist project’s AI search engine provides powerful search and information retrieval capabilities by integrating large-scale pre-trained language models such as Mixtral 8x7B. Users can get accurate, detailed answers from simple natural language queries. For example, asked “Who is the CEO of Binance?”, the Heurist search engine not only names the current CEO (Richard Teng) but also details his background and that of his predecessor.

The Heurist search engine combines text generation and information retrieval technology to handle complex queries and provide high-quality search results and relevant information. Users can submit queries through the API interface and obtain structured answers and reference materials, making Heurist’s search engine not only suitable for general users, but also to meet the needs of professional fields.

Conclusion

DePIN (Decentralized Physical Infrastructure Networks) represents a new form of the “sharing economy,” serving as a bridge between the physical and digital worlds. From both a market valuation and application area perspective, DePIN presents significant growth potential. Compared to CePIN (Centralized Physical Infrastructure Networks), DePIN offers advantages such as decentralization, transparency, user autonomy, incentive mechanisms, and resistance to censorship, all of which further drive its development. Due to DePIN’s unique economic model, it is prone to creating a “flywheel effect.” While many current DePIN projects have completed the construction of the “supply side,” the next critical focus is to stimulate real user demand and expand the “demand side.”

Although DePIN shows immense development potential, it still faces challenges in technological maturity, service stability, market acceptance, and the regulatory environment. With continued technological and market progress, however, these challenges are expected to be gradually resolved. Once they are effectively addressed, DePIN could achieve mass adoption, bringing a large influx of new users, drawing attention to the crypto field, and potentially becoming the driving engine of the next bull market. Let’s witness that day together!

Statement:

  1. This article, originally titled “解密 DePIN 生态:AI 算力的变革力量” (Decrypting the DePIN Ecosystem: The Transformative Force of AI Computing Power), is reproduced from [WeChat public account: Gryphsis Academy]. All copyrights belong to the original author [Gryphsis Academy]. If you have any objection to the reprint, please contact the Gate Learn team, which will handle it promptly according to the relevant procedures.

  2. Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.

  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.
