DePIN, a concept introduced by Messari in November 2022, is not entirely novel but shares similarities with previous phenomena like IoT (Internet of Things). The author considers DePIN as a new form of “sharing economy.”
Unlike previous DePIN trends, the current cycle focuses primarily on the AI trifecta—data, algorithms, and computing power—with a notable emphasis on “computing power” projects such as io.net, Aethir, and Heurist. Therefore, this article specifically analyzes projects related to “computing power.”
This article summarizes and distills the basic framework of the DePIN project, providing an overview using the “WHAT-WHY-HOW” structure to review and summarize the DePIN track. The author then outlines an analytical approach to understanding the DePIN project based on their experience, focusing on detailed analysis of specific “computing power” projects.
1.1 Definition
DePIN, short for Decentralized Physical Infrastructure Networks, is a blockchain-powered network that connects physical hardware infrastructure in a decentralized manner. This allows users to access and utilize network resources without permission, often in a cost-effective manner. DePIN projects typically employ token reward systems to incentivize active participation in network construction, following the principle of “the more you contribute, the more you earn.”
The application scope of DePIN is extensive, encompassing fields such as data collection, computing, and data storage. Wherever centralized physical infrastructure networks (CePIN) operate, DePIN counterparts tend to appear.
Considering the operational and economic model of DePIN projects, DePIN fundamentally operates as a new form of “sharing economy.” Therefore, when conducting an initial analysis of DePIN projects, it can be approached succinctly by first identifying the core business of the project.
If the project mainly involves computing power or storage services, it can be simply defined as a platform providing “shared computing power” and “shared storage” services. This classification helps to clarify the project’s value proposition and its positioning in the market.
Source: @IoTeX
In the aforementioned model of the sharing economy, there are three main participants: the demand side, the supply side, and the platform side. In this model, firstly, the demand side sends requests to the platform side, such as for ridesharing or accommodation. Next, the platform side forwards these requests to the supply side. Finally, the supply side provides the corresponding services based on the requests, thus completing the entire business transaction process.
In this model, the flow of funds begins with the demand side transferring funds to the platform side. After the demand side confirms the order, funds then flow from the platform side to the supply side. The platform side earns profit through transaction fees by providing a stable trading platform and a smooth order fulfillment experience. Think of your experience when using ride-hailing services like DiDi—it exemplifies this model.
In traditional “sharing economy” models, the platform side is typically a centralized large enterprise that retains control over its network, drivers, and operations. In some cases, the supply side in “sharing economy” models is also controlled by the platform side, such as with shared power banks or electric scooters. This setup can lead to issues like monopolization by enterprises, lower costs of malpractice, and excessive fees that infringe upon the interests of the supply side. In essence, pricing power remains centralized within these enterprises, not with those who control the means of production, which does not align with decentralized principles.
However, in the Web3 model of the “sharing economy,” the platform facilitating transactions is a decentralized protocol. This eliminates intermediaries like DiDi, empowering the supply side with pricing control. This approach provides passengers with more economical ride services, offers drivers higher income, and enables them to influence the network they help build each day. It represents a multi-win model where all parties benefit.
1.2 Development History of DePIN
Since the rise of Bitcoin, people have been exploring the integration of peer-to-peer networks with physical infrastructure, aiming to build an open and economically incentivized decentralized network across various devices. Influenced by terms like DeFi and GameFi in Web3, MachineFi was one of the earliest concepts proposed.
Source: @MessariCrypto
In traditional physical infrastructure networks (such as communication networks, cloud services, energy networks, etc.), the market is often dominated by large or giant companies due to huge capital investment and operation and maintenance costs. This centralized industrial characteristic has brought about the following major dilemmas and challenges:
2.1 Disadvantages of CePIN
2.2 Advantages of DePIN
DePIN addresses centralized control, data privacy concerns, resource wastage, and inconsistent service quality of CePIN through advantages such as decentralization, transparency, user autonomy, incentive mechanisms, and censorship resistance. It drives transformation in the production relations of the physical world, achieving a more efficient and sustainable physical infrastructure network. Therefore, for physical infrastructure networks requiring high security, transparency, and user engagement, DePIN represents a superior choice.
3.1 Comparison of different consensus mechanisms
Before discussing how to implement a DePIN network, we first explain the PoPW mechanism commonly used in DePIN networks.
A DePIN network demands rapid scalability, low node participation costs, an abundant supply of network nodes, and a high degree of decentralization.
Proof of Work (PoW) requires purchasing expensive mining rigs in advance to participate in network operations, significantly raising the entry barrier for DePIN network participation. Therefore, it is not suitable as a consensus mechanism for DePIN networks.
Proof of Stake (PoS) also requires upfront token staking, which reduces user willingness to participate in network node operations. Similarly, it is not suitable as a consensus mechanism for DePIN networks.
The emergence of Proof of Physical Work (PoPW) precisely meets the characteristic demands of DePIN networks. The PoPW mechanism facilitates easier integration of physical devices into the network, greatly accelerating the development process of DePIN networks.
Additionally, the economic model built around PoPW resolves the classic chicken-and-egg dilemma of bootstrapping a network: token rewards incentivize participants to build out network supply before organic demand arrives, making the network attractive to users from the start.
3.2 Main participants of DePIN network
Generally speaking, a complete DePIN network includes the following participants.
These participants collectively contribute to the growth, operation, and sustainability of the DePIN network ecosystem.
3.3 Basic components of DePIN network
For the DePIN network to operate successfully, it needs to interact with on-chain and off-chain data at the same time, which requires stable and powerful infrastructure and communication protocols. In general, the DePIN network mainly includes the following parts.
3.5 Basic operation mode of DePIN network
The operating mode of the DePIN network follows a sequence similar to the architectural diagram mentioned above. Essentially, it involves off-chain data generation followed by on-chain data confirmation. Off-chain data adheres to a “bottom-up” rule, whereas on-chain data follows a “top-down” rule.
To simplify this process using a simple analogy, the operation of the DePIN network can be likened to an exam scenario:
Initially, the teacher hands out exam papers to students, who must complete the exam according to the paper’s requirements. After completion, students submit their papers to the teacher, who grades them based on a descending order principle, rewarding higher rankings with greater recognition (tokens).
In this analogy:
The “issued exam papers” represent the demand orders from the DePIN network’s demand side.
The solving of the exam questions corresponds to adhering to specific rules (PoPW) in DePIN.
The teacher verifies that the paper belongs to a specific student (using private keys for signatures and public keys for identification).
Grades are assigned based on performance, following a descending order principle that aligns with DePIN’s token distribution principle of “more contribution, more rewards.”
The basic operational mechanism of the DePIN network bears similarities to our everyday exam system. In the realm of cryptocurrencies, many projects essentially mirror real-life patterns on the blockchain. When faced with complex projects, employing analogies like this can aid in understanding and mastering the underlying concepts and operational logic.
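To make the signature step of this analogy concrete, here is a minimal Python sketch (using the third-party `ecdsa` package) of how a node could sign its submitted work with a private key so that a verifier can confirm authorship with the matching public key. The message and key scheme are illustrative only, not any specific project's implementation.

```python
from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

# The "student" (supply-side node) holds a private key; the public key is shared.
sk = SigningKey.generate(curve=SECP256k1)
vk = sk.get_verifying_key()

# The node signs the result of its physical work before submitting it.
submission = b"completed compute task: result hash abc123"
signature = sk.sign(submission)

# The "teacher" (verifier) checks that the submission really came from this node.
assert vk.verify(signature, submission)
print("submission verified")
```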
We have conducted an overview review of the DePIN sector according to the logical sequence of WHAT-WHY-HOW. Next, let’s outline the specific tracks within the DePIN sector. The breakdown of these tracks is divided into two main parts: Physical Resource Networks and Digital Resource Networks.
Among them, the representative projects of some sections are as follows:
4.1 Decentralized storage network - Filecoin ($FIL)
Filecoin is the world’s largest distributed storage network, with over 3,800 storage providers globally contributing more than 17 million terabytes (TB) of storage capacity. Filecoin can be considered one of the most renowned DePIN projects, with its FIL token reaching its peak price on April 1, 2021. Filecoin’s vision is to bring open, verifiable features to the three core pillars supporting the data economy: storage, computation, and content distribution.
Filecoin’s storage of files is based on the InterPlanetary File System (IPFS), which enables secure and efficient file storage.
One unique aspect of Filecoin is its economic model. Before becoming a storage provider on Filecoin, participants must stake a certain amount of FIL tokens. This creates a cycle where during a bull market, “staking tokens -> increased total storage space -> more nodes participating -> increased demand for staking tokens -> price surge” occurs. However, in bear markets, it can lead to a spiral of price decline. This economic model is more suited to thriving in bullish market conditions.
4.2 Decentralized GPU Rendering Platform - Render Network ($RNDR)
Render Network is a decentralized GPU rendering platform under OTOY, consisting of artists and GPU providers and offering powerful rendering capabilities to users worldwide. The $RNDR token reached its peak price on March 17, 2024; as a project within the AI sector, its peak coincided with the peak of the broader AI narrative.
The operational model of Render Network works as follows: creators submit jobs requiring GPU rendering, such as 3D scenes or high-resolution images/videos, which are distributed to GPU nodes in the network for processing. Node operators contribute idle GPU computing power to Render Network and receive $RNDR tokens as rewards.
A unique aspect of Render Network is its pricing mechanism, employing a dynamic pricing model based on factors like job complexity, urgency, and available resources. This model determines rendering service prices, providing competitive rates to creators while fairly compensating GPU providers.
A recent positive development for Render Network is its support for “Octane on iPad,” OTOY’s professional rendering application backed by the Render Network.
4.3 Decentralized data market - Ocean ($OCEAN)
Ocean Protocol is a decentralized data exchange protocol primarily focused on secure data sharing and commercial applications of data. Similar to common DePIN projects, it involves several key participants:
For data providers, data security and privacy are crucial. Ocean Protocol ensures data flow and protection through the following mechanisms:
4.4 EVM-compatible L1 - IoTeX ($IOTX)
IoTeX was founded in 2017 as an open-source platform focused on privacy, integrating blockchain, secure hardware, and data service innovations to support the Internet of Trusted Things (IoT). Unlike other DePIN projects, IoTeX positions itself as a development platform designed for DePIN builders, akin to Google’s Colab. IoTeX’s flagship technology is the off-chain computing protocol W3bStream, which facilitates the integration of IoT devices into the blockchain. Some notable IoTeX DePIN projects include Envirobloq, Drop Wireless, and HealthBlocks.
4.5 Decentralized hotspot network - Helium ($HNT)
Helium, established in 2013, is a veteran DePIN project known for creating a large-scale network where users contribute new hardware. Users can purchase Helium Hotspots manufactured by third-party vendors to provide hotspot signals for nearby IoT devices. Helium rewards hotspot operators with HNT tokens to maintain network stability, similar to a mining model where the mining equipment is specified by the project.
In the DePIN arena, there are primarily two types of device models: customized dedicated hardware specified by the project, such as Helium, and ubiquitous hardware used in daily life integrated into the network, as seen with Render Network and io.net incorporating idle GPUs from users.
Helium’s key technology is its LoRaWAN protocol, a low-power, long-distance wireless communication protocol ideal for IoT devices. Helium Hotspots utilize the LoRaWAN protocol to provide wireless network coverage.
Despite establishing the world’s largest LoRaWAN network, Helium’s anticipated demand did not materialize as expected. Currently, Helium is focusing on launching 5G cellular networks. On April 20, 2023, Helium migrated to the Solana network and subsequently launched Helium Mobile in the Americas, offering a $20 per month unlimited 5G data plan. Due to its affordable pricing, Helium Mobile quickly gained popularity in North America.
From the global “DePIN” search index spanning five years, a minor peak was observed between December 2023 and January 2024, coinciding with the peak of the $MOBILE token price. The continued upward trend in DePIN interest suggests that Helium Mobile ushered in a new era for DePIN projects.
Source: @Google Trends
The economic model of DePIN projects plays a crucial role in their development, serving different purposes at various stages. For instance, in the initial stages of a project, it primarily utilizes token incentive mechanisms to attract users to contribute their software and hardware resources towards building the supply side of the project.
5.1 BME Model
Before discussing the economic model, let’s briefly outline the BME (Burn-and-Mint Equivalent) model, as it is closely related to most DePIN projects’ economic frameworks.
The BME model manages token supply and demand dynamics. Specifically, it involves the burning of tokens on the demand side for purchasing goods or services, while the protocol platform mints new tokens to reward contributors on the supply side. If the amount of newly minted tokens exceeds those burned, the total supply increases, leading to price depreciation. Conversely, if the burn rate exceeds the minting rate, deflation occurs, causing price appreciation. A continually rising token price attracts more supply-side users, creating a positive feedback loop.
Supply > Demand => price drops
Supply < Demand => price rises
We can further elucidate the BME model using the Fisher Equation, an economic model that describes the relationship between money supply (M), money velocity (V), price level (P), and transaction volume (T):
MV = PT
When the token velocity V increases, and assuming other factors remain constant, the equilibrium of this equation can only be maintained by reducing token circulation (M) through burning mechanisms. Thus, as network usage increases, the burn rate also accelerates. When the inflation rate and burn rate achieve dynamic equilibrium, the BME model can maintain a stable balanced state.
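A minimal sketch of the burn-and-mint loop described above, with arbitrary illustrative numbers: demand-side purchases burn tokens while the protocol mints rewards on a fixed schedule, and the sign of the net change determines whether circulating supply inflates or deflates.

```python
def simulate_bme(steps, burn_per_step, mint_per_step, supply=1_000_000):
    """Toy burn-and-mint-equivalent model: returns circulating supply after `steps`."""
    for _ in range(steps):
        supply -= burn_per_step   # tokens burned by demand-side purchases
        supply += mint_per_step   # tokens minted to supply-side contributors
    return supply

# Minting outpaces burning -> supply grows (inflationary pressure).
print(simulate_bme(steps=100, burn_per_step=800, mint_per_step=1_000))   # 1020000
# Burning outpaces minting -> supply shrinks (deflationary pressure).
print(simulate_bme(steps=100, burn_per_step=1_200, mint_per_step=1_000)) # 980000
```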
Source: @Medium
Using the specific example of purchasing goods in real life to illustrate this process: First, manufacturers produce goods, which consumers then purchase.
During the purchase process, instead of handing money directly to the manufacturer, you burn a specified amount as proof of your purchase of the goods. Simultaneously, the protocol platform mints new currency at regular intervals. Additionally, this money is transparently and fairly distributed among various contributors in the supply chain, such as producers, distributors, and sellers.
Source: @GryphsisAcademy
5.3 Development stages of economic models
With a basic understanding of the BME model, we can now have a clearer understanding of common economic models in the DePIN space.
Overall, DePIN economic models can be broadly divided into the following three stages:
1st Stage: Initial Launch and Network Construction Phase
2nd Stage: Network Development and Value Capture Phase
3rd Stage: Maturity and Value Maximization Phase
A good economic model can create an economic flywheel effect for DePIN projects. Because DePIN projects employ token incentive mechanisms, they attract significant attention from suppliers during the project’s initial launch phase, enabling rapid scale-up through the flywheel effect.
The token incentive mechanism is key to the rapid growth of DePIN projects. Initially, projects need to develop appropriate reward criteria tailored to the scalability of physical infrastructure types. For example, to expand network coverage, Helium offers higher rewards in areas with lower network density compared to higher-density areas.
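As an illustration of this kind of coverage-weighted reward rule (the actual Helium formula is more involved), a sketch might scale the base reward by how far local network density falls below a target:

```python
def coverage_weight(local_density, target_density=1.0, cap=3.0):
    """Sparse areas (density below target) earn up to `cap` times the base reward;
    saturated areas earn proportionally less. All parameters are illustrative."""
    return min(cap, target_density / max(local_density, 1e-6))

def hotspot_reward(base_reward, local_density):
    return base_reward * coverage_weight(local_density)

print(hotspot_reward(10.0, 0.25))  # sparse area -> 30.0 (capped)
print(hotspot_reward(10.0, 2.0))   # dense area  -> 5.0
```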
As shown in the diagram below, early investors contribute real capital to the project, giving the token initial economic value. Suppliers actively participate in project construction to earn token rewards. As the network scales and with its lower costs compared to CePIN, an increasing number of demand-side users start adopting DePIN project services, generating income for the entire network protocol and forming a solid pathway from suppliers to demand.
With rising demand from the demand side, token prices increase through mechanisms like burning or buybacks (BME model), providing additional incentives for suppliers to continue expanding the network. This increase signifies that the value of tokens they receive also rises.
As the network continues to expand, investor interest in the project grows, prompting them to provide more financial support.
If the project is open-source or shares contributor/user data publicly, developers can build dApps based on this data, creating additional value within the ecosystem and attracting more users and contributors.
Source: @IoTeX
Current interest in DePIN centers on the Solana network and the “DePIN x AI” narrative. Google Trends data shows that, among network infrastructures, DePIN correlates most strongly with Solana, and that search interest is concentrated in Asia, particularly China and India. This suggests that the main participants in DePIN projects are from Asia.
Source: @Google Trends
The total market capitalization of the DePIN sector currently stands at about $32B. By comparison, traditional CePIN companies are far larger: China Mobile is valued at roughly $210B and AT&T, the largest carrier in the United States, at about $130B. Judged purely by market capitalization, the DePIN sector still has substantial room to grow.
Source: @DePINscan
The turning point in the curve of total DePIN devices is evident in December 2023, coinciding with the peak popularity and highest coin price of Helium Mobile. It can be said that the DePIN boom in 2024 was initiated by Helium Mobile.
The diagram below shows the global distribution of DePIN devices, highlighting their concentration in regions such as North America, East Asia, and Western Europe. These areas are typically more economically developed, since becoming a node in a DePIN network requires provisioning both software and hardware resources, which carry significant costs. For instance, a high-end consumer-grade GPU like the RTX 4090 costs around $2,000, a substantial expense for users in less economically developed regions.
Due to the token incentive mechanism of DePIN projects, which follows the principle of “more contribution, more reward,” users aiming for higher token rewards must contribute more resources and use higher-end equipment. While advantageous for project teams, this undoubtedly raises the barrier to entry for users. A successful DePIN project should be backward compatible and inclusive, offering opportunities for participation even with lower-end devices, aligning with the blockchain principles of fairness, justice, and transparency.
Looking at the global device distribution map, many regions remain undeveloped. We believe that through continuous technological innovation and market expansion, the DePIN sector has the potential for global growth, reaching every corner, connecting people worldwide, and collectively driving technological advancement and social development.
Source: @DePINscan
Following this brief review of the DePIN track, the author summarizes the basic steps for analyzing a DePIN project.
Most importantly, analyze the DePIN project's operating model by mapping it onto its Web2 sharing-economy counterpart.
8.1 Basic information
Project Description
io.net is a decentralized computing network that enables the development, execution, and scaling of machine learning applications on the Solana blockchain. Its vision is to “bring 1 million GPUs together to form the world's largest GPU cluster,” giving engineers access to massive amounts of computing power in a system that is accessible, customizable, cost-effective, and easy to implement.
Team background
Founder and CEO: Ahmed Shadid, who worked in quantitative finance and financial engineering before founding io.net, and is also a volunteer at the Ethereum Foundation.
Chief Marketing Officer and Chief Strategy Officer: Garrison Yang (Yang Xudong). Before joining io.net, he was Vice President of Strategy and Growth at Avalanche and is an alumnus of UC Santa Barbara.
The team's technical background is relatively solid, and several of its founding members have prior crypto experience.
Narrative: AI, DePIN, Solana Ecosystem.
Financing situation
Source: @RootDataLabs
Source: @RootDataLabs
On March 5, 2024, io.net secured $30 million in Series A funding at a $1 billion valuation, benchmarked against Render Network. The round was led by renowned top-tier investment institutions such as Hack VC, OKX Ventures, and Multicoin Capital, and also included influential project leaders like Anatoly Yakovenko (co-founder of Solana) and Yat Siu (co-founder of Animoca Brands). This early backing from top capital is why we refer to io.net as a star project: it has the funding, the background, the technology, and the expectation of an airdrop.
8.2 Product structure
The main products of io.net are as follows:
Below is an image of a cat in the style of Van Gogh generated on BC8.AI.
Source: @ionet
Product features and advantages
Compared with traditional cloud service providers such as Google Cloud and AWS, io.net has the following features and advantages:
Let’s take AWS as an example to compare in detail:
Accessibility refers to how easily users can access and obtain computing power. When using traditional cloud service providers, you typically need to provide key identification information such as a credit card and contact details in advance. However, when accessing io.net, all you need is a Solana wallet to quickly and conveniently obtain computing power permissions.
Customization refers to the degree of customization available to users for their computing clusters. With traditional cloud service providers, you can only select the machine type and location of the computing cluster. In contrast, when using io.net, in addition to these options, you can also choose bandwidth speed, cluster type, data encryption methods, and billing options.
Source: @ionet
As shown in the image above, when a user selects the NVIDIA A100-SXM4-80GB model GPU, a Hong Kong server, ultra-high-speed bandwidth, hourly billing, and end-to-end encryption, the price per GPU is $1.58 per hour. This demonstrates that io.net offers a high degree of customization with many options available for users, prioritizing their experience. For DePIN projects, this customization is a key way to expand the demand side and promote healthy network growth.
In contrast, the image below shows the price of the NVIDIA A100-SXM4-80GB model GPU from traditional cloud service providers. For the same computing power requirements, io.net’s price is at least half that of traditional cloud providers, making it highly attractive to users.
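A back-of-the-envelope comparison using the $1.58/GPU-hour figure quoted above and an assumed on-demand rate of roughly $4.10/GPU-hour for a comparable A100 instance on a traditional cloud (indicative only; check live pricing on both sides):

```python
IONET_A100_PER_GPU_HOUR = 1.58   # configuration quoted above
CLOUD_A100_PER_GPU_HOUR = 4.10   # assumed traditional-cloud on-demand figure

gpus, hours = 8, 72              # e.g. a three-day fine-tuning job on an 8-GPU cluster
print(f"io.net:            ${IONET_A100_PER_GPU_HOUR * gpus * hours:,.0f}")   # ~$910
print(f"traditional cloud: ${CLOUD_A100_PER_GPU_HOUR * gpus * hours:,.0f}")   # ~$2,362
```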
8.3 Basic information of the network
We can use IO Explorer to comprehensively view the computing power of the entire network, including the number of devices, available service areas, computing power prices, etc.
Computing power equipment situation
Currently, io.net has a total of 101,545 verified GPUs and 31,154 verified CPUs. io.net will verify whether the computing device is online every 6 hours to ensure io.net’s network stability.
Source: @ionet
The second image shows currently available, PoS-verified, and easy-to-deploy computing devices. Compared to Render Network and Filecoin, io.net has a significantly higher number of computing devices. Furthermore, io.net integrates computing devices from both Render Network and Filecoin, allowing users to choose their preferred computing device provider when deploying compute clusters. This user-centric approach ensures that io.net meets users’ customization needs and enhances their overall experience.
Source: @ionet
Another notable feature of io.net’s computing devices is the large number of high-end devices available. In the US, for example, there are several hundred high-end GPUs like the H100 and A100. Given the current sanctions and the AI boom, high-end graphics cards have become extremely valuable computing assets.
With io.net, you can use these high-end computing devices provided by suppliers without any review, regardless of whether you are a US citizen. This is why we highlight the anti-monopoly advantage of io.net.
Source: @ionet
Business revenue
The io.net revenue dashboard shows that the network earns stable income every day, with cumulative revenue reaching the million-dollar level. This indicates that io.net has largely completed building out the supply side: the cold start period has gradually passed and the network development phase has begun.
Source: @ionet
Source: @ionet
Source: @ionet
From the supply side of io.net:
But from the demand side:
8.4 Economic model
io.net’s native network token is $IO, with a fixed total supply of 800 million tokens. An initial supply of 500 million tokens will be released, and the remaining 300 million tokens will be distributed as rewards to suppliers and token stakers over 20 years, issued hourly.
$IO employs a burn deflation mechanism: network revenue is used to purchase and burn $IO tokens, with the amount of tokens burned adjusted according to the price of $IO.
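Purely for a sense of scale, if the 300 million reward tokens were released on a flat linear schedule (an assumption for illustration; the actual schedule may front-load or taper emissions), the hourly emission would be:

```python
TOTAL_REWARDS = 300_000_000          # $IO reserved for suppliers and stakers
HOURS_IN_20_YEARS = 20 * 365 * 24

hourly_emission = TOTAL_REWARDS / HOURS_IN_20_YEARS
print(f"{hourly_emission:,.0f} IO per hour")   # ~1,712 IO per hour under a flat schedule
```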
Token Utilities:
Token Allocation:
From the allocation chart, it can be seen that half of the tokens are allocated to project community members, indicating the project’s intention to incentivize community participation for its growth. The R&D ecosystem accounts for 16%, ensuring continuous support for the project’s technology and product development.
As can be seen from the token release chart, $IO tokens are released gradually and linearly. This release mechanism helps stabilize the price of $IO tokens and avoid price fluctuations caused by the sudden appearance of a large number of $IO tokens in the market. At the same time, the reward mechanism of $IO tokens can also motivate long-term holders and stakers, enhancing the stability of the project and user stickiness.
Overall, io.net’s tokenomics is a well-designed token scheme. The allocation of half of the tokens to the community highlights the project’s emphasis on community-driven and decentralized governance, which supports long-term development and the establishment of credibility.
In the third stage of the DePIN economic development phases discussed earlier, it was mentioned that “community autonomy becomes the dominant mode of network governance.” io.net has already laid a solid foundation for future community autonomy. Additionally, the gradual release mechanism and burn mechanism of the $IO token effectively distribute market pressure and reduce the risk of price volatility.
From these aspects, it is clear that io.net’s various mechanisms demonstrate that it is a well-planned project with a focus on long-term development.
8.5 Ways to participate in io.net
Currently, io.net’s “Ignition Rewards” has entered its 3rd season, running from June 1st to June 30th. The main way to participate is to integrate your computing devices into the main computing network for mining. Mining rewards in $IO tokens depend on factors such as device computing power, network bandwidth speed, and others.
In the 1st season of “Ignition Rewards,” the initial threshold for device integration was set at the “GeForce GTX 1080 Ti.” This reflects the point made earlier about giving lower-end devices an opportunity to participate, in line with the blockchain principles of fairness, justice, and transparency. In the 2nd and 3rd seasons of “Ignition Rewards,” the initial threshold was set at the “GeForce RTX 3050.”
The reason for this approach is twofold: from the project’s perspective, as the project develops, low-end computing devices contribute less to the overall network and stronger computing devices better maintain network stability. From the demand-side users’ perspective, most users require high-end computing devices for training and inference of AI models, and low-end devices cannot meet their needs.
Therefore, as the project progresses favorably, raising the participation threshold is a correct approach. Similar to the Bitcoin network, the goal for the project is to attract better, stronger, and more numerous computing devices.
8.6 Conclusion & Outlook
io.net has performed well during the project’s cold start and network construction phase, completing the entire network deployment, validating the effectiveness of computational nodes, and generating sustained revenue.
The project’s next main goal is to further expand the network ecosystem and increase demand from the computational needs market, which represents a significant opportunity. Successfully promoting the project in this market will require efforts from the project’s marketing team.
In practice, AI algorithm and model development involves two major parts: training and inference. Let's illustrate these two concepts with a simple quadratic equation, y = ax² + bx + c:
The process of using (x, y) data pairs (the training set) to solve for the unknown coefficients (a, b, c) is the training process of the AI algorithm; once the coefficients (a, b, c) are known, computing y for a given x is the inference process.
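The distinction can be shown in a few lines of NumPy: fitting the coefficients from (x, y) pairs is the "training" step, and evaluating the fitted polynomial at a new x is the "inference" step.

```python
import numpy as np

# "Training": recover the unknown coefficients (a, b, c) of y = a*x^2 + b*x + c
# from observed (x, y) pairs.
true_a, true_b, true_c = 2.0, -3.0, 1.0
x_train = np.linspace(-5, 5, 50)
y_train = true_a * x_train**2 + true_b * x_train + true_c

a, b, c = np.polyfit(x_train, y_train, deg=2)    # least-squares fit

# "Inference": apply the learned coefficients to a new input.
y_pred = np.polyval([a, b, c], 7.0)
print(round(y_pred, 2))   # 78.0  (= 2*49 - 3*7 + 1)
```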
In this computation process, we can clearly see that the computational workload of the training process is much greater than that of the inference process. Training a Large Language Model (LLM) requires extensive support from computational clusters and consumes substantial funds. For example, training GPT-3 175B involved thousands of Nvidia V100 GPUs over several months, with training costs reaching tens of millions of dollars.
Performing AI large model training on decentralized computing platforms is challenging because it involves massive data transfers and exchanges, demanding high network bandwidth that decentralized platforms struggle to meet. NVIDIA has established itself as a leader in the AI industry not only due to its high-end computational chips and underlying AI computing acceleration libraries (cuDNN) but also because of its proprietary communication bridge, “NVLink,” which significantly speeds up the movement of large-scale data during model training.
In the AI industry, training large models not only requires extensive computational resources but also involves data collection, processing, and transformation. These processes often necessitate scalable infrastructure and centralized data processing capabilities. Consequently, the AI industry is fundamentally a scalable and centralized sector, relying on robust technological platforms and data processing capabilities to drive innovation and development.
Therefore, decentralized computing platforms like io.net are best suited for AI algorithm inference tasks. Their target customers should include students and those with task requirements for fine-tuning downstream tasks based on large models, benefiting from io.net’s affordability, ease of access, and ample computational power.
9.1 Project background
Artificial Intelligence is regarded as one of the most significant technologies humanity has ever seen. With the advent of Artificial General Intelligence (AGI), lifestyles are poised to undergo revolutionary changes. However, because a few companies dominate AI technology development, a divide has emerged between the “GPU rich” and the “GPU poor.” Through its decentralized physical infrastructure network (DePIN), Aethir aims to increase the accessibility of on-demand computing resources and thereby balance the distribution of AI development gains.
Aethir is an innovative distributed cloud computing infrastructure network specifically designed to meet the high demand for on-demand cloud computing resources in the fields of Artificial Intelligence (AI), gaming, and virtual computing. Its core concept involves aggregating enterprise-grade GPU chips from around the world to form a unified global network, significantly increasing the supply of on-demand cloud computing resources.
The primary goal of Aethir is to address the current shortage of computing resources in the AI and cloud computing sectors. With the advancement of AI and the popularity of cloud gaming, the demand for high-performance computing resources continues to grow. However, due to the monopolization of GPU resources by a few large companies, small and medium-sized enterprises and startups often struggle to access sufficient computing power. Aethir provides a viable solution through its distributed network, helping resource owners (such as data centers, tech companies, telecom companies, top gaming studios, and cryptocurrency mining companies) fully utilize their underutilized GPU resources and provide efficient, low-cost computing resources to end-users.
Advantages of Distributed Cloud Computing:
Through these core advantages, Aethir leads not only in technology but also carries significant economic and societal implications. By leveraging decentralized physical infrastructure networks (DePIN), it makes the supply of computing resources more equitable, promoting the democratization of AI technology and innovation. This model not only changes how computing resources are supplied but also opens up new possibilities for the future development of AI and cloud computing.
Aethir’s technology architecture is composed of multiple core roles and components to ensure that its distributed cloud computing network can operate efficiently and securely. Below is a detailed description of each key role and component:
Core roles and components
Node Operators:
Aethir Network
Containers
Checkers
Indexers
End Users:
End users are consumers of Aethir network computing resources, whether for AI training and inference, or gaming. End users submit requests, and the network matches the appropriate high-performance resources to meet the needs.
Treasury:
The treasury holds all staked $ATH tokens and pays out all $ATH rewards and fees.
Settlement Layer:
Aethir utilizes blockchain technology as its settlement layer, recording transactions, enabling scalability and efficiency, and using $ATH for incentivization. Blockchain ensures transparency in resource consumption tracking and enables near real-time payments.
For specific relationships, please refer to the following chart:
Source: @AethirCloud
9.3 Consensus mechanism
The Aethir network operates using a unique mechanism, with two primary proofs of work at its core:
Proof of Rendering Capacity:
Proof of Rendering Work:
Source: @AethirCloud
9.4 Token economics model
The ATH token plays a variety of roles in the Aethir ecosystem, including medium of exchange, governance tool, incentive, and platform development support.
Specific uses include:
Specific distribution strategy: Aethir's token is $ATH, with a total supply of 42 billion. The largest share, 35%, goes to GPU providers such as data centers and individual retail contributors; 17.5% goes to the team and advisors; and 15% and 11.75% are allocated to node checking and sales, respectively. As shown below:
Source: @AethirCloud
Reward emissions
The mining reward emission strategy aims to balance the participation of resource providers with the sustainability of long-term rewards. By applying a decay function to emissions, early contributors receive higher rewards while participants who join later still remain meaningfully incentivized.
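The article does not specify Aethir's exact decay function, but a common shape for such emission schedules is an exponential decay, sketched below with placeholder parameters.

```python
def epoch_emission(epoch, initial_emission=1_000_000, half_life_epochs=52):
    """Illustrative decaying emission curve: rewards halve every `half_life_epochs`.
    Both parameters are placeholders, not Aethir's published values."""
    return initial_emission * 0.5 ** (epoch / half_life_epochs)

print(round(epoch_emission(0)))     # 1000000 at launch
print(round(epoch_emission(52)))    # 500000 after one half-life
print(round(epoch_emission(104)))   # 250000 after two half-lives
```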
9.5 How to participate in Aethir mining
The Aethir platform chooses to allocate the majority of its Total Token Supply (TTS) to mining rewards, which is crucial for strengthening the ecosystem. This allocation aims to support node operators and uphold container standards. Node operators are central to Aethir, providing essential computational power, while containers are pivotal in delivering computing resources.
Mining rewards are divided into two forms: Proof of Rendering Capacity and Proof of Rendering Work. Proof of Rendering Work incentivizes node operators to complete computational tasks and is specifically distributed to containers. Proof of Rendering Capacity, on the other hand, rewards compute providers for making their GPUs available to Aethir; the more GPUs used by clients, the greater the additional token rewards. These rewards are distributed in $ATH tokens. They serve not only as distribution but also as investments in the future sustainability of the Aethir community.
10.1 Project Background
Heurist is a Layer 2 network based on the ZK Stack, focusing on AI model hosting and inference. It is positioned as the Web3 version of HuggingFace, providing users with serverless access to open-source AI models. These models are hosted on a decentralized computing resource network.
Heurist’s vision is to decentralize AI using blockchain technology, achieving widespread technological adoption and equitable innovation. Its goal is to ensure AI technology’s accessibility and unbiased innovation through blockchain technology, promoting the integration and development of AI and cryptocurrency.
The term “Heurist” is derived from “heuristics,” which refers to the process by which the human brain quickly reaches reasonable conclusions or solutions when solving complex problems. This name reflects Heurist’s vision of rapidly and efficiently solving AI model hosting and inference problems through decentralized technology.
Issues with Closed-Source AI
Closed-source AI typically undergoes scrutiny under U.S. laws, which may not align with the needs of other countries and cultures, leading to over-censorship or inadequate censorship. This not only affects AI models’ performance but also potentially infringes on users’ freedom of expression.
The Rise of Open-Source AI
Open-source AI models have outperformed closed-source models in various fields. For example, Stable Diffusion outperforms OpenAI’s DALL-E 2 in image generation and is more cost-effective. The weights of open-source models are publicly available, allowing developers and artists to fine-tune them based on specific needs.
The community-driven innovation of open-source AI is also noteworthy. Open-source AI projects benefit from the collective contributions and reviews of diverse communities, fostering rapid innovation and improvement. Open-source AI models provide unprecedented transparency, enabling users to review training data and model weights, thereby enhancing trust and security.
Below is a detailed comparison between open-source AI and closed-source AI:
Source: @heurist_ai
10.2 Data privacy
When handling AI model inference, the Heurist project integrates Lit Protocol to encrypt data in transit, including the inputs and outputs of AI inference. Heurist miners fall into two broad categories: public miners and private miners.
Source: @heurist_ai
How is trust established with privacy-enabled miners? Mainly through the following two methods:
10.3 Token economics model
The Heurist project’s token, named HUE, is a utility token with a dynamic supply regulated through issuance and burn mechanisms. The maximum supply of HUE tokens is capped at 1 billion.
The token distribution and issuance mechanisms can be divided into two main categories: mining and staking.
Token Burn Mechanism
Similar to Ethereum’s EIP-1559 model, the Heurist project has implemented a token burn mechanism. When users pay for AI inference services, a portion of the HUE payment is permanently removed from circulation. The balance between token issuance and burn is closely related to network activity. During periods of high usage, the burn rate may exceed the issuance rate, putting the Heurist network into a deflationary phase. This mechanism helps regulate token supply and aligns the token’s value with actual network demand.
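A toy view of that balance, where the fraction of each inference payment that is burned (`burn_share`) is an assumed parameter rather than a published Heurist value:

```python
def net_supply_change(hue_minted, hue_paid_for_inference, burn_share=0.5):
    """Positive result -> inflationary period; negative -> deflationary period."""
    burned = hue_paid_for_inference * burn_share
    return hue_minted - burned

print(net_supply_change(100_000, 150_000))   #  25000: issuance outpaces burning
print(net_supply_change(100_000, 300_000))   # -50000: heavy usage turns supply deflationary
```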
Bribe Mechanism
The bribe mechanism, first proposed by Curve Finance users, is a gamified incentive system to help direct liquidity pool rewards. The Heurist project has adopted this mechanism to enhance mining efficiency. Miners can set a percentage of their mining rewards as bribes to attract stakers. Stakers may choose to support miners offering the highest bribes but will also consider factors like hardware performance and uptime. Miners are incentivized to offer bribes because higher staking leads to higher mining efficiency, fostering an environment of both competition and cooperation, where miners and stakers work together to provide better services to the network.
Through these mechanisms, the Heurist project aims to create a dynamic and efficient token economy to support its decentralized AI model hosting and inference network.
10.4 Incentivized Testnet
The Heurist project allocated 5% of the total supply of HUE tokens for mining rewards during the Incentivized Testnet phase. These rewards are calculated in the form of points, which can be redeemed for fully liquid HUE tokens after the Mainnet Token Generation Event (TGE). Testnet rewards are divided into two categories: one for Stable Diffusion models and the other for Large Language Models (LLMs).
Points mechanism
Llama Point: For LLM miners, one Llama Point is earned for every 1,000 input/output tokens processed by the Mixtral 8x7B model. The specific calculation is shown in the figure below:
Waifu Point: For Stable Diffusion miners, one Waifu Point is obtained for each 512x512 pixel image generated (using Stable Diffusion 1.5 model, after 20 iterations). The specific calculation is shown in the figure below:
After each computing task is completed, the complexity of the task is evaluated based on GPU performance benchmark results and points are awarded accordingly. The allocation ratio of Llama Points and Waifu Points will be determined closer to TGE, taking into account demand and usage of both model categories over the coming months.
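Putting the two point rules together in a short sketch (the resolution/iteration scaling and the GPU benchmark factor are assumptions layered on top of the stated baselines):

```python
def llama_points(tokens_processed):
    # One Llama Point per 1,000 input/output tokens processed (Mixtral 8x7B baseline).
    return tokens_processed / 1_000

def waifu_points(images, width=512, height=512, iterations=20, hardware_factor=1.0):
    # One Waifu Point per 512x512, 20-iteration SD 1.5 image; scaling larger jobs by
    # resolution/iterations and a benchmark factor is an illustrative assumption.
    complexity = (width * height) / (512 * 512) * (iterations / 20)
    return images * complexity * hardware_factor

print(llama_points(250_000))               # 250.0 Llama Points
print(waifu_points(100))                   # 100.0 Waifu Points at baseline settings
print(waifu_points(10, 1024, 1024, 40))    # 80.0 for larger, longer-running jobs
```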
Source: @heurist_ai
There are two main ways to participate in the testnet:
The recommended GPU for participating in Heurist mining is as shown in the figure below:
Source: @heurist_ai
Note that the Heurist testnet has anti-cheating measures, and the input and output of each computing task are stored and tracked by an asynchronous monitoring system. If a miner behaves maliciously to manipulate the reward system (such as submitting incorrect or low-quality results, tampering with downloaded model files, tampering with equipment and latency metric data), the Heurist team reserves the right to reduce their testnet points.
10.5 Heurist liquidity mining
Heurist testnet offers two types of points: Waifu Points and Llama Points. Waifu Points are earned by running the Stable Diffusion model for image generation, while Llama Points are earned by running large language models (LLMs). There are no restrictions on the GPU model for running these models, but there are strict requirements for VRAM. Models with higher VRAM requirements will have higher point coefficients.
The image below lists the currently supported LLM models. For the Stable Diffusion model, there are two modes: enabling SDXL mode and excluding SDXL mode. Enabling SDXL mode requires 12GB of VRAM, while excluding SDXL mode has been found to run with just 8GB of VRAM in my tests.
Source: @heurist_ai
10.6 Applications
The Heurist project has demonstrated its powerful AI capabilities and broad application prospects through three application directions: image generation, chatbots, and AI search engines. In terms of image generation, Heurist uses the Stable Diffusion model to provide efficient and flexible image generation services; in terms of chatbots, it uses large language models to achieve intelligent dialogue and content generation; in terms of AI search engines, it combines pre-trained language models to provide accurate information retrieval and detailed answers.
These applications not only improve the user experience, but also demonstrate Heurist’s innovation and technical advantages in the field of decentralized AI. The application effects are shown in the figure below:
Source: @heurist_ai
Image generation
The image generation application of the Heurist project mainly relies on the Stable Diffusion model to generate high-quality images through text prompts. Users can interact with the Stable Diffusion model via the REST API, submitting textual descriptions to generate images. The cost of each generation task depends on the resolution of the image and the number of iterations. For example, generating a 1024x1024 pixel, 40-iteration image using the SD 1.5 model requires 8 standard credit units. Through this mechanism, Heurist implements an efficient and flexible image generation service.
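One pricing formula consistent with the quoted figure is to scale a 1-credit baseline (512x512 pixels, 20 iterations) by resolution and iteration count; this is a reconstruction from the example rather than Heurist's documented formula.

```python
def sd_credit_cost(width, height, iterations):
    """Credits = (pixels relative to 512x512) * (iterations relative to 20)."""
    return (width * height) / (512 * 512) * (iterations / 20)

print(sd_credit_cost(512, 512, 20))     # 1.0 credit (assumed baseline)
print(sd_credit_cost(1024, 1024, 40))   # 8.0 credits, matching the example above
```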
Chatbot
The chatbot application of the Heurist project implements intelligent dialogue through large language models (LLMs). Heurist Gateway is an OpenAI-compatible LLM API endpoint built using LiteLLM that allows developers to call the Heurist API in the OpenAI format. For example, using the Mixtral 8x7B model, developers can replace an existing LLM provider with just a few lines of code and get performance similar to ChatGPT 3.5 or Claude 2 at a lower cost.
Heurist’s LLM model supports a variety of applications, including automated customer service, content generation, and complex question answering. Users can interact with these models through API requests, submit text input, and get responses generated by the models, enabling diverse conversational and interactive experiences.
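Because the Heurist Gateway exposes an OpenAI-compatible interface, calling it from the standard `openai` Python client would look roughly like the sketch below; the endpoint URL and model identifier are placeholders to be replaced with values from Heurist's developer documentation.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://<heurist-gateway-endpoint>/v1",  # placeholder endpoint
    api_key="YOUR_HEURIST_API_KEY",                    # placeholder key
)

response = client.chat.completions.create(
    model="mixtral-8x7b",   # placeholder model identifier
    messages=[{"role": "user", "content": "Explain DePIN in one sentence."}],
)
print(response.choices[0].message.content)
```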
AI search engine
The Heurist project's AI search engine provides powerful search and information retrieval capabilities by integrating large-scale pre-trained language models such as Mixtral 8x7B. Users can get accurate and detailed answers through simple natural-language queries. For example, for the question “Who is the CEO of Binance?”, the Heurist search engine not only provides the name of the current CEO (Richard Teng) but also explains his background and the circumstances of the previous CEO in detail.
The Heurist search engine combines text generation and information retrieval technology to handle complex queries and provide high-quality search results and relevant information. Users can submit queries through the API interface and obtain structured answers and reference materials, making Heurist’s search engine not only suitable for general users, but also to meet the needs of professional fields.
DePIN (Decentralized Physical Infrastructure Networks) represents a new form of the “sharing economy,” serving as a bridge between the physical and digital worlds. From both a market valuation and application area perspective, DePIN presents significant growth potential. Compared to CePIN (Centralized Physical Infrastructure Networks), DePIN offers advantages such as decentralization, transparency, user autonomy, incentive mechanisms, and resistance to censorship, all of which further drive its development. Due to DePIN’s unique economic model, it is prone to creating a “flywheel effect.” While many current DePIN projects have completed the construction of the “supply side,” the next critical focus is to stimulate real user demand and expand the “demand side.”
Although DePIN shows immense development potential, it still faces challenges in technological maturity, service stability, market acceptance, and the regulatory environment. However, with technological advancement and market development, these challenges are expected to be gradually resolved. It is foreseeable that once they are effectively addressed, DePIN will achieve mass adoption, bringing a large influx of new users and drawing broader attention to the crypto field. This could become the driving engine of the next bull market. Let's witness that day together!
This article, originally titled “解密 DePIN 生态：AI 算力的变革力量” (“Decoding the DePIN Ecosystem: The Transformative Power of AI Computing”), is reproduced from [WeChat public account: Gryphsis Academy]. All copyrights belong to the original author [Gryphsis Academy]. If you have any objection to the reprint, please contact the Gate Learn team, which will handle it promptly according to the relevant procedures.
Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.
Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.
DePIN, a concept introduced by Messari in November 2022, is not entirely novel but shares similarities with previous phenomena like IoT (Internet of Things). The author considers DePIN as a new form of “sharing economy.”
Unlike previous DePIN trends, the current cycle focuses primarily on the AI trifecta—data, algorithms, and computing power—with a notable emphasis on “computing power” projects such as io.net, Aethir, and Heurist. Therefore, this article specifically analyzes projects related to “computing power.”
This article summarizes and distills the basic framework of the DePIN project, providing an overview using the “WHAT-WHY-HOW” structure to review and summarize the DePIN track. The author then outlines an analytical approach to understanding the DePIN project based on their experience, focusing on detailed analysis of specific “computing power” projects.
1.1 Definition
DePIN, short for Decentralized Physical Infrastructure Networks, is a blockchain-powered network that connects physical hardware infrastructure in a decentralized manner. This allows users to access and utilize network resources without permission, often in a cost-effective manner. DePIN projects typically employ token reward systems to incentivize active participation in network construction, following the principle of “the more you contribute, the more you earn.”
The application scope of DePIN is extensive, encompassing fields such as data collection, computing, and data storage. Areas involving CePIN often feature DePIN’s presence.
Considering the operational and economic model of DePIN projects, DePIN fundamentally operates as a new form of “sharing economy.” Therefore, when conducting an initial analysis of DePIN projects, it can be approached succinctly by first identifying the core business of the project.
If the project mainly involves computing power or storage services, it can be simply defined as a platform providing “shared computing power” and “shared storage” services. This classification helps to clarify the project’s value proposition and its positioning in the market.
Source: @IoTeX
In the aforementioned model of the sharing economy, there are three main participants: the demand side, the supply side, and the platform side. In this model, firstly, the demand side sends requests to the platform side, such as for ridesharing or accommodation. Next, the platform side forwards these requests to the supply side. Finally, the supply side provides the corresponding services based on the requests, thus completing the entire business transaction process.
In this model, the flow of funds begins with the demand side transferring funds to the platform side. After the demand side confirms the order, funds then flow from the platform side to the supply side. The platform side earns profit through transaction fees by providing a stable trading platform and a smooth order fulfillment experience. Think of your experience when using ride-hailing services like DiDi—it exemplifies this model.
In traditional “sharing economy” models, the platform side is typically a centralized large enterprise that retains control over its network, drivers, and operations. In some cases, the supply side in “sharing economy” models is also controlled by the platform side, such as with shared power banks or electric scooters. This setup can lead to issues like monopolization by enterprises, lower costs of malpractice, and excessive fees that infringe upon the interests of the supply side. In essence, pricing power remains centralized within these enterprises, not with those who control the means of production, which does not align with decentralized principles.
However, in the Web3 model of the “sharing economy,” the platform facilitating transactions is a decentralized protocol. This eliminates intermediaries like DiDi, empowering the supply side with pricing control. This approach provides passengers with more economical ride services, offers drivers higher income, and enables them to influence the network they help build each day. It represents a multi-win model where all parties benefit.
1.2 Development History of DePIN
Since the rise of Bitcoin, people have been exploring the integration of peer-to-peer networks with physical infrastructure, aiming to build an open and economically incentivized decentralized network across various devices. Influenced by terms like DeFi and GameFi in Web3, MachineFi was one of the earliest concepts proposed.
Source: @MessariCrypto
In traditional physical infrastructure networks (such as communication networks, cloud services, energy networks, etc.), the market is often dominated by large or giant companies due to huge capital investment and operation and maintenance costs. This centralized industrial characteristic has brought about the following major dilemmas and challenges:
2.1 Disadvantages of CePIN
2.2 Advantages of DePin
DePIN addresses centralized control, data privacy concerns, resource wastage, and inconsistent service quality of CePIN through advantages such as decentralization, transparency, user autonomy, incentive mechanisms, and censorship resistance. It drives transformation in the production relations of the physical world, achieving a more efficient and sustainable physical infrastructure network. Therefore, for physical infrastructure networks requiring high security, transparency, and user engagement, DePIN represents a superior choice.
3.1 Comparison of different consensus mechanisms
Before discussing how to implement a DePIN network, we first explain the PoPW mechanism commonly used in DePIN networks.
DePIN network demands rapid scalability, low costs for node participation, abundant network supply nodes, and a high degree of decentralization.
Proof of Work (PoW) requires purchasing expensive mining rigs in advance to participate in network operations, significantly raising the entry barrier for DePIN network participation. Therefore, it is not suitable as a consensus mechanism for DePIN networks.
Proof of Stake (PoS) also requires upfront token staking, which reduces user willingness to participate in network node operations. Similarly, it is not suitable as a consensus mechanism for DePIN networks.
The emergence of Proof of Physical Work (PoPW) precisely meets the characteristic demands of DePIN networks. The PoPW mechanism facilitates easier integration of physical devices into the network, greatly accelerating the development process of DePIN networks.
Additionally, the economic model built around PoPW resolves the classic chicken-and-egg dilemma: by paying token rewards, the protocol incentivizes participants to build out network supply first, making the network attractive to demand-side users once it launches.
3.2 Main participants of DePIN network
Generally speaking, a complete DePIN network includes the following participants.
These participants collectively contribute to the growth, operation, and sustainability of the DePIN network ecosystem.
3.3 Basic components of DePIN network
For the DePIN network to operate successfully, it needs to interact with on-chain and off-chain data at the same time, which requires stable and powerful infrastructure and communication protocols. In general, the DePIN network mainly includes the following parts.
3.5 Basic operation mode of DePIN network
The operating mode of the DePIN network follows a sequence similar to the architectural diagram mentioned above. Essentially, it involves off-chain data generation followed by on-chain data confirmation. Off-chain data adheres to a “bottom-up” rule, whereas on-chain data follows a “top-down” rule.
To simplify this process using a simple analogy, the operation of the DePIN network can be likened to an exam scenario:
Initially, the teacher hands out exam papers to students, who must complete the exam according to the paper’s requirements. After completion, students submit their papers to the teacher, who grades them based on a descending order principle, rewarding higher rankings with greater recognition (tokens).
In this analogy:
The “issued exam papers” represent the demand orders from the DePIN network’s demand side.
The solving of the exam questions corresponds to adhering to specific rules (PoPW) in DePIN.
The teacher verifies that the paper belongs to a specific student (using private keys for signatures and public keys for identification).
Grades are assigned based on performance, following a descending order principle that aligns with DePIN’s token distribution principle of “more contribution, more rewards.”
The basic operational mechanism of the DePIN network bears similarities to our everyday exam system. In the realm of cryptocurrencies, many projects essentially mirror real-life patterns on the blockchain. When faced with complex projects, employing analogies like this can aid in understanding and mastering the underlying concepts and operational logic.
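To make the private-key/public-key step in the analogy concrete, below is a minimal Python sketch (using the `cryptography` library) of how a DePIN device might sign a work report and how the network might verify it before crediting rewards. The field names and identifiers are hypothetical illustrations, not any specific project's format.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The device (the "student") signs a report of the physical work it performed.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()   # registered with the network in advance

work_report = {
    "device_id": "gpu-node-001",                               # hypothetical ID
    "task_id": "render-job-42",                                # hypothetical task
    "result_hash": hashlib.sha256(b"rendered frame bytes").hexdigest(),
}
message = json.dumps(work_report, sort_keys=True).encode()
signature = private_key.sign(message)

# The network (the "teacher") checks that the report really came from that device
# before ranking contributions and distributing token rewards.
try:
    public_key.verify(signature, message)
    print("Work report accepted; contribution credited toward rewards.")
except InvalidSignature:
    print("Invalid signature; work report rejected.")
```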
We have now reviewed the DePIN sector following the WHAT-WHY-HOW sequence. Next, let's outline the specific tracks within the DePIN sector, which break down into two main parts: Physical Resource Networks and Digital Resource Networks.
Among them, the representative projects of some sections are as follows:
4.1 Decentralized storage network - Filecoin ($FIL)
Filecoin is the world’s largest distributed storage network, with over 3,800 storage providers globally contributing more than 17 million terabytes (TB) of storage capacity. Filecoin can be considered one of the most renowned DePIN projects, with its FIL token reaching its peak price on April 1, 2021. Filecoin’s vision is to bring open, verifiable features to the three core pillars supporting the data economy: storage, computation, and content distribution.
Filecoin’s storage of files is based on the InterPlanetary File System (IPFS), which enables secure and efficient file storage.
One unique aspect of Filecoin is its economic model. Before becoming a storage provider on Filecoin, participants must stake a certain amount of FIL tokens. This creates a cycle where during a bull market, “staking tokens -> increased total storage space -> more nodes participating -> increased demand for staking tokens -> price surge” occurs. However, in bear markets, it can lead to a spiral of price decline. This economic model is more suited to thriving in bullish market conditions.
4.2 Decentralized GPU Rendering Platform - Render Network ($RNDR)
Render Network is a decentralized GPU rendering platform under OTOY, consisting of artists and GPU providers, offering powerful rendering capabilities to global users. The $RNDR token reached its peak price on March 17, 2024. Being part of the AI sector, Render Network’s peak coincided with the AI sector’s peak.
The operational model of Render Network works as follows: creators submit jobs requiring GPU rendering, such as 3D scenes or high-resolution images/videos, which are distributed to GPU nodes in the network for processing. Node operators contribute idle GPU computing power to Render Network and receive $RNDR tokens as rewards.
A unique aspect of Render Network is its pricing mechanism, employing a dynamic pricing model based on factors like job complexity, urgency, and available resources. This model determines rendering service prices, providing competitive rates to creators while fairly compensating GPU providers.
A recent positive development for Render Network is its support for “Octane on iPad,” a professional rendering application powered by Render Network.
4.3 Decentralized data market - Ocean ($OCEAN)
Ocean Protocol is a decentralized data exchange protocol primarily focused on secure data sharing and commercial applications of data. Similar to common DePIN projects, it involves several key participants:
For data providers, data security and privacy are crucial. Ocean Protocol ensures data flow and protection through the following mechanisms:
4.4 EVM-Compatible L1 - IoTeX ($IOTX)
IoTeX was founded in 2017 as an open-source platform focused on privacy, integrating blockchain, secure hardware, and data service innovations to support the Internet of Trusted Things (IoT). Unlike other DePIN projects, IoTeX positions itself as a development platform designed for DePIN builders, akin to Google’s Colab. IoTeX’s flagship technology is the off-chain computing protocol W3bStream, which facilitates the integration of IoT devices into the blockchain. Some notable IoTeX DePIN projects include Envirobloq, Drop Wireless, and HealthBlocks.
4.5 Decentralized hotspot network - Helium ($HNT)
Helium, established in 2013, is a veteran DePIN project known for creating a large-scale network where users contribute new hardware. Users can purchase Helium Hotspots manufactured by third-party vendors to provide hotspot signals for nearby IoT devices. Helium rewards hotspot operators with HNT tokens to maintain network stability, similar to a mining model where the mining equipment is specified by the project.
In the DePIN arena, there are primarily two types of device models: customized dedicated hardware specified by the project, such as Helium, and ubiquitous hardware used in daily life integrated into the network, as seen with Render Network and io.net incorporating idle GPUs from users.
Helium’s key technology is its LoRaWAN protocol, a low-power, long-distance wireless communication protocol ideal for IoT devices. Helium Hotspots utilize the LoRaWAN protocol to provide wireless network coverage.
Despite establishing the world’s largest LoRaWAN network, Helium’s anticipated demand did not materialize as expected. Currently, Helium is focusing on launching 5G cellular networks. On April 20, 2023, Helium migrated to the Solana network and subsequently launched Helium Mobile in the Americas, offering a $20 per month unlimited 5G data plan. Due to its affordable pricing, Helium Mobile quickly gained popularity in North America.
The global “DePIN” search index over the past five years shows a minor peak from December 2023 to January 2024, coinciding with the peak of the $MOBILE token price. This sustained rise in DePIN interest suggests that Helium Mobile ushered in a new era for DePIN projects.
Source: @Google Trends
The economic model of DePIN projects plays a crucial role in their development, serving different purposes at various stages. For instance, in the initial stages of a project, it primarily utilizes token incentive mechanisms to attract users to contribute their software and hardware resources towards building the supply side of the project.
5.1 BME Model
Before discussing the economic model, let’s briefly outline the BME (Burn-and-Mint Equivalent) model, as it is closely related to most DePIN projects’ economic frameworks.
The BME model manages token supply and demand dynamics. Specifically, it involves the burning of tokens on the demand side for purchasing goods or services, while the protocol platform mints new tokens to reward contributors on the supply side. If the amount of newly minted tokens exceeds those burned, the total supply increases, leading to price depreciation. Conversely, if the burn rate exceeds the minting rate, deflation occurs, causing price appreciation. A continually rising token price attracts more supply-side users, creating a positive feedback loop.
Supply > Demand => price drops
Supply < Demand => price rises
We can further elucidate the BME model using the Fisher Equation, an economic model that describes the relationship between money supply (M), money velocity (V), price level (P), and transaction volume (T):
MV = PT
When the token velocity V increases, and assuming other factors remain constant, the equilibrium of this equation can only be maintained by reducing token circulation (M) through burning mechanisms. Thus, as network usage increases, the burn rate also accelerates. When the inflation rate and burn rate achieve dynamic equilibrium, the BME model can maintain a stable balanced state.
Source: @Medium
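As a rough illustration of these dynamics, the toy simulation below tracks total token supply when a fixed amount is minted each epoch to reward the supply side while burning scales with demand-side usage. All numbers are made up purely for illustration.

```python
# Toy BME (Burn-and-Mint Equivalent) simulation; all figures are illustrative.
mint_per_epoch = 1_000.0           # tokens minted each epoch to reward suppliers
burn_per_usage_unit = 0.5          # tokens burned per unit of demand-side usage

supply = 100_000.0
for epoch, usage in enumerate([500, 1_500, 2_500, 3_500], start=1):
    burned = burn_per_usage_unit * usage
    supply += mint_per_epoch - burned
    trend = "inflationary" if mint_per_epoch > burned else "deflationary"
    print(f"epoch {epoch}: minted={mint_per_epoch:.0f}  burned={burned:.0f}  "
          f"supply={supply:,.0f}  ({trend})")
```

Once usage grows past the point where burning outpaces minting, total supply begins to shrink, which is the deflationary regime described above.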
Using the specific example of purchasing goods in real life to illustrate this process: First, manufacturers produce goods, which consumers then purchase.
During the purchase process, instead of handing money directly to the manufacturer, the consumer burns a specified amount of tokens as proof of purchase. At the same time, the protocol mints new tokens at regular intervals and distributes them transparently and fairly among the contributors along the supply chain, such as producers, distributors, and sellers.
Source: @GryphsisAcademy
5.3 Development stages of economic models
With a basic understanding of the BME model, we can now have a clearer understanding of common economic models in the DePIN space.
Overall, DePIN economic models can be broadly divided into the following three stages:
1st Stage: Initial Launch and Network Construction Phase
2nd Stage: Network Development and Value Capture Phase
3rd Stage: Maturity and Value Maximization Phase
A good economic model can create an economic flywheel effect for DePIN projects. Because DePIN projects employ token incentive mechanisms, they attract significant attention from suppliers during the project’s initial launch phase, enabling rapid scale-up through the flywheel effect.
The token incentive mechanism is key to the rapid growth of DePIN projects. Initially, projects need to develop appropriate reward criteria tailored to the scalability of physical infrastructure types. For example, to expand network coverage, Helium offers higher rewards in areas with lower network density compared to higher-density areas.
As shown in the diagram below, early investors contribute real capital to the project, giving the token initial economic value. Suppliers actively participate in project construction to earn token rewards. As the network scales and with its lower costs compared to CePIN, an increasing number of demand-side users start adopting DePIN project services, generating income for the entire network protocol and forming a solid pathway from suppliers to demand.
With rising demand from the demand side, token prices increase through mechanisms like burning or buybacks (BME model), providing additional incentives for suppliers to continue expanding the network. This increase signifies that the value of tokens they receive also rises.
As the network continues to expand, investor interest in the project grows, prompting them to provide more financial support.
If the project is open-source or shares contributor/user data publicly, developers can build dApps based on this data, creating additional value within the ecosystem and attracting more users and contributors.
Source: @IoTeX
Current interest in DePIN centers on the Solana network and on “DePIN x AI.” Google Trends data shows that, among network infrastructures, DePIN correlates most strongly with Solana, and that search interest is concentrated in Asia, including China and India. This suggests that most DePIN participants come from Asia.
Source: @Google Trends
The total market capitalization of the DePIN sector currently stands at about $32B. By comparison, China Mobile's market capitalization is roughly $210B, and AT&T (the largest carrier in the United States) is valued at about $130B. Viewed purely from a market-value perspective, the DePIN sector still has substantial room to grow.
Source: @DePINscan
The turning point in the curve of total DePIN devices is evident in December 2023, coinciding with the peak popularity and highest coin price of Helium Mobile. It can be said that the DePIN boom in 2024 was initiated by Helium Mobile.
As shown in the diagram below, it displays the global distribution of DePIN devices, highlighting their concentration in regions such as North America, East Asia, and Western Europe. These areas are typically more economically developed, as becoming a node in the DePIN network requires provisioning of both software and hardware resources, which incur significant costs. For instance, a high-end consumer-grade GPU like the RTX-4090 costs $2,000, which is a substantial expense for users in less economically developed regions.
Due to the token incentive mechanism of DePIN projects, which follows the principle of “more contribution, more reward,” users aiming for higher token rewards must contribute more resources and use higher-end equipment. While advantageous for project teams, this undoubtedly raises the barrier to entry for users. A successful DePIN project should be backward compatible and inclusive, offering opportunities for participation even with lower-end devices, aligning with the blockchain principles of fairness, justice, and transparency.
Looking at the global device distribution map, many regions remain undeveloped. We believe that through continuous technological innovation and market expansion, the DePIN sector has the potential for global growth, reaching every corner, connecting people worldwide, and collectively driving technological advancement and social development.
Source: @DePINscan
After this brief review of the DePIN sector, the author has summarized the basic steps for analyzing a DePIN project.
Most importantly, analyze the DePIN project's operating model through the lens of the Web2 sharing economy.
8.1 Basic information
Project Description
io.net is a decentralized computing network that enables the development, execution, and scaling of machine learning applications on the Solana blockchain. Its vision is to “bring 1 million GPUs together to form the world's largest GPU cluster,” giving engineers access to massive amounts of computing power in a system that is accessible, customizable, cost-effective, and easy to implement.
Team background
Founder and CEO: Ahmed Shadid, who worked in quantitative finance and financial engineering before founding io.net, and is also a volunteer at the Ethereum Foundation.
Chief Marketing Officer and Chief Strategy Officer: Garrison Yang (Yang Xudong). Before joining io.net, he was Vice President of Strategy and Growth at Avalanche, and he is an alumnus of UC Santa Barbara.
The team's technical background is relatively solid, and several founding members have prior crypto experience.
Narrative: AI, DePIN, Solana Ecosystem.
Financing situation
Source: @RootDataLabs
Source: @RootDataLabs
On March 5, 2024, io.net secured $30 million in Series A funding with a valuation of $1 billion, benchmarked against Render Network. The round was led by renowned top-tier investment institutions such as Hack VC, OKX Ventures, Multicoin Capital, and also included influential project leaders like Anatoly Yakovenko (Solana CEO) and Yat Siu (Animoca co-founder). This early backing from top capital is why we refer to io.net as a star project—it has the funding, the background, the technology, and the expectation of an airdrop.
8.2 Product structure
The main products of io.net are as follows:
Below is an image of a cat in the style of Van Gogh generated on BC8.AI.
Source: @ionet
Product features and advantages
Compared with traditional cloud service providers such as Google Cloud and AWS, io.net has the following features and advantages:
Let’s take AWS as an example to compare in detail:
Accessibility refers to how easily users can access and obtain computing power. When using traditional cloud service providers, you typically need to provide key identification information such as a credit card and contact details in advance. However, when accessing io.net, all you need is a Solana wallet to quickly and conveniently obtain computing power permissions.
Customization refers to the degree of customization available to users for their computing clusters. With traditional cloud service providers, you can only select the machine type and location of the computing cluster. In contrast, when using io.net, in addition to these options, you can also choose bandwidth speed, cluster type, data encryption methods, and billing options.
Source: @ionet
As shown in the image above, when a user selects the NVIDIA A100-SXM4-80GB model GPU, a Hong Kong server, ultra-high-speed bandwidth, hourly billing, and end-to-end encryption, the price per GPU is $1.58 per hour. This demonstrates that io.net offers a high degree of customization with many options available for users, prioritizing their experience. For DePIN projects, this customization is a key way to expand the demand side and promote healthy network growth.
In contrast, the image below shows the price of the NVIDIA A100-SXM4-80GB model GPU from traditional cloud service providers. For the same computing power requirements, io.net’s price is at least half that of traditional cloud providers, making it highly attractive to users.
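As a back-of-the-envelope check, the sketch below compares monthly costs at the $1.58/hour figure shown above against an assumed traditional-cloud on-demand price of about $3.70/hour. The latter is an assumption for illustration only; actual list prices vary by provider and region.

```python
# Rough monthly cost comparison for NVIDIA A100-SXM4-80GB instances.
ionet_hourly = 1.58          # from the configuration shown above
cloud_hourly = 3.70          # assumed traditional-cloud on-demand price
hours_per_month = 24 * 30

for gpus in (1, 8):
    ionet_cost = ionet_hourly * hours_per_month * gpus
    cloud_cost = cloud_hourly * hours_per_month * gpus
    savings = 100 * (1 - ionet_cost / cloud_cost)
    print(f"{gpus} GPU(s): io.net ≈ ${ionet_cost:,.0f}/mo, "
          f"traditional ≈ ${cloud_cost:,.0f}/mo, savings ≈ {savings:.0f}%")
```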
8.3 Basic information of the network
We can use IO Explorer to comprehensively view the computing power of the entire network, including the number of devices, available service areas, computing power prices, etc.
Computing power equipment situation
Currently, io.net has a total of 101,545 verified GPUs and 31,154 verified CPUs. io.net checks whether each computing device is online every 6 hours to ensure network stability.
Source: @ionet
The second image shows currently available, PoS-verified, and easy-to-deploy computing devices. Compared to Render Network and Filecoin, io.net has a significantly higher number of computing devices. Furthermore, io.net integrates computing devices from both Render Network and Filecoin, allowing users to choose their preferred computing device provider when deploying compute clusters. This user-centric approach ensures that io.net meets users’ customization needs and enhances their overall experience.
Source: @ionet
Another notable feature of io.net’s computing devices is the large number of high-end devices available. In the US, for example, there are several hundred high-end GPUs like the H100 and A100. Given the current sanctions and the AI boom, high-end graphics cards have become extremely valuable computing assets.
With io.net, you can use these high-end computing devices provided by suppliers without any review, regardless of whether you are a US citizen. This is why we highlight the anti-monopoly advantage of io.net.
Source:@ionet
Business revenue
io.net's revenue dashboard shows stable daily income, with cumulative revenue having reached the million-dollar level. This indicates that io.net has completed building its supply side; the project's cold-start period has largely passed, and the network development phase has begun.
Source: @ionet
Source: @ionet
Source: @ionet
From the supply side of io.net:
But from the demand side:
8.4 Economic model
io.net’s native network token is $IO, with a fixed total supply of 800 million tokens. An initial supply of 500 million tokens will be released, and the remaining 300 million tokens will be distributed as rewards to suppliers and token stakers over 20 years, issued hourly.
$IO employs a burn deflation mechanism: network revenue is used to purchase and burn $IO tokens, with the amount of tokens burned adjusted according to the price of $IO.
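For a sense of scale on the emission side, a flat linear reading of the stated schedule (300 million $IO released hourly over 20 years) implies roughly the following average emission rate. This is an illustrative average only; the actual curve may front-load or taper rewards.

```python
# Average hourly emission implied by a flat 20-year schedule (illustrative only).
remaining_rewards = 300_000_000          # $IO still to be distributed
years = 20
hours = years * 365 * 24                 # 175,200 hours (ignoring leap days)
avg_hourly_emission = remaining_rewards / hours
print(f"average emission ≈ {avg_hourly_emission:,.1f} IO per hour")  # ≈ 1,712.3
```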
Token Utilities:
Token Allocation:
From the allocation chart, it can be seen that half of the tokens are allocated to project community members, indicating the project’s intention to incentivize community participation for its growth. The R&D ecosystem accounts for 16%, ensuring continuous support for the project’s technology and product development.
As can be seen from the token release chart, $IO tokens are released gradually and linearly. This release mechanism helps stabilize the price of $IO tokens and avoid price fluctuations caused by the sudden appearance of a large number of $IO tokens in the market. At the same time, the reward mechanism of $IO tokens can also motivate long-term holders and stakers, enhancing the stability of the project and user stickiness.
Overall, io.net’s tokenomics is a well-designed token scheme. The allocation of half of the tokens to the community highlights the project’s emphasis on community-driven and decentralized governance, which supports long-term development and the establishment of credibility.
In the third stage of the DePIN economic development phases discussed earlier, it was mentioned that “community autonomy becomes the dominant mode of network governance.” io.net has already laid a solid foundation for future community autonomy. Additionally, the gradual release mechanism and burn mechanism of the $IO token effectively distribute market pressure and reduce the risk of price volatility.
From these aspects, it is clear that io.net’s various mechanisms demonstrate that it is a well-planned project with a focus on long-term development.
8.5 Ways to participate in io.net
Currently, io.net’s “Ignition Rewards” has entered its 3rd season, running from June 1st to June 30th. The main way to participate is to integrate your computing devices into the main computing network for mining. Mining rewards in $IO tokens depend on factors such as device computing power, network bandwidth speed, and others.
In the 1st season of “Ignition Rewards,” the initial threshold for device integration was set at the “GeForce GTX 1080 Ti.” This reflects the earlier point about giving lower-end devices an opportunity to participate, in line with the blockchain principles of fairness, justice, and transparency. In the 2nd and 3rd seasons of “Ignition Rewards,” the threshold was set at the “GeForce RTX 3050.”
The reason for this approach is twofold: from the project’s perspective, as the project develops, low-end computing devices contribute less to the overall network and stronger computing devices better maintain network stability. From the demand-side users’ perspective, most users require high-end computing devices for training and inference of AI models, and low-end devices cannot meet their needs.
Therefore, as the project progresses favorably, raising the participation threshold is a correct approach. Similar to the Bitcoin network, the goal for the project is to attract better, stronger, and more numerous computing devices.
8.6 Conclusion & Outlook
io.net has performed well during the project’s cold start and network construction phase, completing the entire network deployment, validating the effectiveness of computational nodes, and generating sustained revenue.
The project’s next main goal is to further expand the network ecosystem and increase demand from the computational needs market, which represents a significant opportunity. Successfully promoting the project in this market will require efforts from the project’s marketing team.
In practice, when we talk about AI algorithm model development, it mainly involves two major parts: training and inference. Let’s illustrate these two concepts with a simple example of a quadratic equation:
Given the quadratic model y = ax² + bx + c: using the (x, y) data pairs (the training set) to solve for the unknown coefficients (a, b, c) is the training process of the AI algorithm; using the solved coefficients (a, b, c) to compute y for a given x is the inference process.
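A minimal numerical sketch of this distinction, fitting y = ax² + bx + c with NumPy: the fit is the "training" step, and evaluating the fitted polynomial at a new x is the "inference" step.

```python
import numpy as np

# "Training": solve for the unknown coefficients (a, b, c) from (x, y) pairs.
true_a, true_b, true_c = 2.0, -3.0, 1.0
x_train = np.linspace(-5, 5, 50)
y_train = true_a * x_train**2 + true_b * x_train + true_c

a, b, c = np.polyfit(x_train, y_train, deg=2)

# "Inference": use the learned coefficients to predict y for a new x.
x_new = 7.0
y_pred = a * x_new**2 + b * x_new + c
print(f"learned: a={a:.2f}, b={b:.2f}, c={c:.2f}")
print(f"inference at x={x_new}: y ≈ {y_pred:.2f}")   # ≈ 2*49 - 21 + 1 = 78
```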
In this computation process, we can clearly see that the computational workload of the training process is much greater than that of the inference process. Training a Large Language Model (LLM) requires extensive support from computational clusters and consumes substantial funds. For example, training GPT-3 175B involved thousands of Nvidia V100 GPUs over several months, with training costs reaching tens of millions of dollars.
Performing AI large model training on decentralized computing platforms is challenging because it involves massive data transfers and exchanges, demanding high network bandwidth that decentralized platforms struggle to meet. NVIDIA has established itself as a leader in the AI industry not only due to its high-end computational chips and underlying AI computing acceleration libraries (cuDNN) but also because of its proprietary communication bridge, “NVLink,” which significantly speeds up the movement of large-scale data during model training.
In the AI industry, training large models not only requires extensive computational resources but also involves data collection, processing, and transformation. These processes often necessitate scalable infrastructure and centralized data processing capabilities. Consequently, the AI industry is fundamentally a scalable and centralized sector, relying on robust technological platforms and data processing capabilities to drive innovation and development.
Therefore, decentralized computing platforms like io.net are best suited for AI algorithm inference tasks. Their target customers should include students and those with task requirements for fine-tuning downstream tasks based on large models, benefiting from io.net’s affordability, ease of access, and ample computational power.
9.1 Project background
Artificial intelligence is regarded as one of the most significant technologies humanity has ever seen. With the advent of Artificial General Intelligence (AGI), lifestyles are poised to undergo revolutionary changes. However, because a few companies dominate AI technology development, a divide has emerged between the “GPU rich” and the “GPU poor.” Aethir, through its decentralized physical infrastructure network (DePIN), aims to increase the accessibility of on-demand computing resources and thereby balance how the gains of AI development are distributed.
Aethir is an innovative distributed cloud computing infrastructure network specifically designed to meet the high demand for on-demand cloud computing resources in the fields of Artificial Intelligence (AI), gaming, and virtual computing. Its core concept involves aggregating enterprise-grade GPU chips from around the world to form a unified global network, significantly increasing the supply of on-demand cloud computing resources.
The primary goal of Aethir is to address the current shortage of computing resources in the AI and cloud computing sectors. With the advancement of AI and the popularity of cloud gaming, the demand for high-performance computing resources continues to grow. However, due to the monopolization of GPU resources by a few large companies, small and medium-sized enterprises and startups often struggle to access sufficient computing power. Aethir provides a viable solution through its distributed network, helping resource owners (such as data centers, tech companies, telecom companies, top gaming studios, and cryptocurrency mining companies) fully utilize their underutilized GPU resources and provide efficient, low-cost computing resources to end-users.
Advantages of Distributed Cloud Computing:
Through these core advantages, Aethir leads not only in technology but also holds significant economic and societal implications. By leveraging distributed physical infrastructure networks (DePINs), it makes the supply of computing resources more equitable, promoting the democratization and innovation of AI technology. This innovative model not only changes the supply of computing resources but also opens up new possibilities for the future development of AI and cloud computing.
Aethir’s technology architecture is composed of multiple core roles and components to ensure that its distributed cloud computing network can operate efficiently and securely. Below is a detailed description of each key role and component:
Core roles and components
Node Operators:
Aethir Network:
Containers:
Checkers:
Indexers:
End Users:
End users are consumers of Aethir network computing resources, whether for AI training and inference, or gaming. End users submit requests, and the network matches the appropriate high-performance resources to meet the needs.
Treasury:
The treasury holds all staked $ATH tokens and pays out all $ATH rewards and fees.
Settlement Layer:
Aethir utilizes blockchain technology as its settlement layer, recording transactions, enabling scalability and efficiency, and using $ATH for incentivization. Blockchain ensures transparency in resource consumption tracking and enables near real-time payments.
For specific relationships, please refer to the following chart:
Source: @AethirCloud
9.3 Consensus mechanism
The Aethir network operates using a unique mechanism, with two primary proofs of work at its core:
Proof of Rendering Capacity:
Proof of Rendering Work:
Source: @AethirCloud
9.4 Token economics model
The ATH token plays a variety of roles in the Aethir ecosystem, including medium of exchange, governance tool, incentive, and platform development support.
Specific uses include:
Specific distribution strategy: the Aethir project's token is $ATH, with a total supply of 42 billion. The largest share, 35%, goes to GPU providers such as data centers and individual retail contributors; 17.5% goes to the team and advisors; and 15% and 11.75% go to checker nodes and the sales team, respectively. As shown below:
Source: @AethirCloud
Reward emissions
The mining reward emission strategy aims to balance resource-provider participation with the long-term sustainability of rewards. Early rewards decay over time according to a decay function, ensuring that participants who join later are still incentivized.
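The sketch below shows how such a decaying emission schedule behaves, using made-up parameters (a 3% decay per epoch). It illustrates the general idea only and is not Aethir's actual emission curve.

```python
# Illustrative decaying emission schedule (all parameters are assumptions).
initial_epoch_reward = 1_000_000.0   # ATH emitted in the first epoch
decay_per_epoch = 0.97               # each epoch emits 97% of the previous epoch

reward, cumulative = initial_epoch_reward, 0.0
for epoch in range(1, 11):
    cumulative += reward
    print(f"epoch {epoch:2d}: reward={reward:,.0f}  cumulative={cumulative:,.0f}")
    reward *= decay_per_epoch

# With a decay factor below 1, cumulative emission is bounded by
# initial_epoch_reward / (1 - decay_per_epoch), so rewards taper smoothly
# rather than stopping abruptly, and later joiners still earn each epoch.
```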
9.5 How to participate in Aethir mining
The Aethir platform chooses to allocate the majority of its Total Token Supply (TTS) to mining rewards, which is crucial for strengthening the ecosystem. This allocation aims to support node operators and uphold container standards. Node operators are central to Aethir, providing essential computational power, while containers are pivotal in delivering computing resources.
Mining rewards are divided into two forms: Proof of Rendering Capacity and Proof of Rendering Work. Proof of Rendering Work incentivizes node operators to complete computational tasks and is specifically distributed to containers. Proof of Rendering Capacity, on the other hand, rewards compute providers for making their GPUs available to Aethir; the more GPUs used by clients, the greater the additional token rewards. These rewards are distributed in $ATH tokens. They serve not only as distribution but also as investments in the future sustainability of the Aethir community.
10.1 Project Background
Heurist is a Layer 2 network based on the ZK Stack, focusing on AI model hosting and inference. It is positioned as the Web3 version of HuggingFace, providing users with serverless access to open-source AI models. These models are hosted on a decentralized computing resource network.
Heurist’s vision is to decentralize AI using blockchain technology, achieving widespread technological adoption and equitable innovation. Its goal is to ensure AI technology’s accessibility and unbiased innovation through blockchain technology, promoting the integration and development of AI and cryptocurrency.
The term “Heurist” is derived from “heuristics,” which refers to the process by which the human brain quickly reaches reasonable conclusions or solutions when solving complex problems. This name reflects Heurist’s vision of rapidly and efficiently solving AI model hosting and inference problems through decentralized technology.
Issues with Closed-Source AI
Closed-source AI typically undergoes scrutiny under U.S. laws, which may not align with the needs of other countries and cultures, leading to over-censorship or inadequate censorship. This not only affects AI models’ performance but also potentially infringes on users’ freedom of expression.
The Rise of Open-Source AI
Open-source AI models have outperformed closed-source models in various fields. For example, Stable Diffusion outperforms OpenAI’s DALL-E 2 in image generation and is more cost-effective. The weights of open-source models are publicly available, allowing developers and artists to fine-tune them based on specific needs.
The community-driven innovation of open-source AI is also noteworthy. Open-source AI projects benefit from the collective contributions and reviews of diverse communities, fostering rapid innovation and improvement. Open-source AI models provide unprecedented transparency, enabling users to review training data and model weights, thereby enhancing trust and security.
Below is a detailed comparison between open-source AI and closed-source AI:
Source: @heurist_ai
10.2 Data privacy
When handling AI model inference, the Heurist project integrates Lit Protocol to encrypt data in transit, including the inputs and outputs of AI inference. Heurist divides miners into two broad categories: public miners and private miners:
Source: @heurist_ai
How is trust established with privacy-enabled miners? Mainly through the following two methods:
10.3 Token economics model
The Heurist project’s token, named HUE, is a utility token with a dynamic supply regulated through issuance and burn mechanisms. The maximum supply of HUE tokens is capped at 1 billion.
The token distribution and issuance mechanisms can be divided into two main categories: mining and staking.
Token Burn Mechanism
Similar to Ethereum’s EIP-1559 model, the Heurist project has implemented a token burn mechanism. When users pay for AI inference services, a portion of the HUE payment is permanently removed from circulation. The balance between token issuance and burn is closely related to network activity. During periods of high usage, the burn rate may exceed the issuance rate, putting the Heurist network into a deflationary phase. This mechanism helps regulate token supply and aligns the token’s value with actual network demand.
Bribe Mechanism
The bribe mechanism, first proposed by Curve Finance users, is a gamified incentive system to help direct liquidity pool rewards. The Heurist project has adopted this mechanism to enhance mining efficiency. Miners can set a percentage of their mining rewards as bribes to attract stakers. Stakers may choose to support miners offering the highest bribes but will also consider factors like hardware performance and uptime. Miners are incentivized to offer bribes because higher staking leads to higher mining efficiency, fostering an environment of both competition and cooperation, where miners and stakers work together to provide better services to the network.
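The toy model below captures the core trade-off: stakers weigh the bribe a miner offers against hardware quality and uptime before delegating stake. The weights and numbers are purely illustrative assumptions, not Heurist's actual scoring.

```python
# Toy staker decision model for the bribe mechanism (all values illustrative).
miners = [
    {"id": "miner-A", "bribe_share": 0.20, "uptime": 0.990, "hw_score": 0.9},
    {"id": "miner-B", "bribe_share": 0.35, "uptime": 0.900, "hw_score": 0.7},
    {"id": "miner-C", "bribe_share": 0.10, "uptime": 0.999, "hw_score": 1.0},
]

def staker_score(m, w_bribe=0.5, w_uptime=0.3, w_hw=0.2):
    # A higher bribe attracts stake, but reliability and hardware also matter.
    return w_bribe * m["bribe_share"] + w_uptime * m["uptime"] + w_hw * m["hw_score"]

best = max(miners, key=staker_score)
print(f"staker delegates to {best['id']} (score = {staker_score(best):.3f})")
```

In this toy example the miner offering the highest bribe wins the delegation despite weaker hardware, which is the competitive pressure the mechanism is designed to create.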
Through these mechanisms, the Heurist project aims to create a dynamic and efficient token economy to support its decentralized AI model hosting and inference network.
10.4 Incentivized Testnet
The Heurist project allocated 5% of the total supply of HUE tokens for mining rewards during the Incentivized Testnet phase. These rewards are calculated in the form of points, which can be redeemed for fully liquid HUE tokens after the Mainnet Token Generation Event (TGE). Testnet rewards are divided into two categories: one for Stable Diffusion models and the other for Large Language Models (LLMs).
Points mechanism
Llama Point: For LLM miners, one Llama Point is earned for every 1,000 input/output tokens processed by the Mixtral 8x7B model. The specific calculation is shown in the figure below:
Waifu Point: For Stable Diffusion miners, one Waifu Point is obtained for each 512x512 pixel image generated (using Stable Diffusion 1.5 model, after 20 iterations). The specific calculation is shown in the figure below:
After each computing task is completed, the complexity of the task is evaluated based on GPU performance benchmark results and points are awarded accordingly. The allocation ratio of Llama Points and Waifu Points will be determined closer to TGE, taking into account demand and usage of both model categories over the coming months.
Source: @heurist_ai
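A minimal calculator for the stated point rules follows; the scaling for non-baseline image sizes and iteration counts is an assumption for illustration, not a confirmed Heurist formula.

```python
# Sketch of the stated testnet point rules (details may differ in practice).
def llama_points(tokens_processed: int) -> float:
    # 1 Llama Point per 1,000 input/output tokens processed (Mixtral 8x7B).
    return tokens_processed / 1_000

def waifu_points(images: int, width: int = 512, height: int = 512,
                 iterations: int = 20) -> float:
    # 1 Waifu Point per 512x512 image at 20 iterations (Stable Diffusion 1.5);
    # scaling by pixel count and iterations is an assumption for illustration.
    scale = (width * height) / (512 * 512) * (iterations / 20)
    return images * scale

print(llama_points(250_000))             # 250.0 points
print(waifu_points(100))                 # 100.0 points at baseline settings
print(waifu_points(10, 1024, 1024, 40))  # 80.0 points under the scaling assumption
```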
There are two main ways to participate in the testnet:
The recommended GPU for participating in Heurist mining is as shown in the figure below:
Source: @heurist_ai
Note that the Heurist testnet has anti-cheating measures, and the input and output of each computing task are stored and tracked by an asynchronous monitoring system. If a miner behaves maliciously to manipulate the reward system (such as submitting incorrect or low-quality results, tampering with downloaded model files, tampering with equipment and latency metric data), the Heurist team reserves the right to reduce their testnet points.
10.5 Heurist liquidity mining
Heurist testnet offers two types of points: Waifu Points and Llama Points. Waifu Points are earned by running the Stable Diffusion model for image generation, while Llama Points are earned by running large language models (LLMs). There are no restrictions on the GPU model for running these models, but there are strict requirements for VRAM. Models with higher VRAM requirements will have higher point coefficients.
The image below lists the currently supported LLM models. For the Stable Diffusion model, there are two modes: with SDXL enabled and with SDXL excluded. Enabling SDXL mode requires 12GB of VRAM, while in the author's tests, excluding SDXL mode can run with just 8GB of VRAM.
Source: @heurist_ai
10.6 Applications
The Heurist project has demonstrated its powerful AI capabilities and broad application prospects through three application directions: image generation, chatbots, and AI search engines. In terms of image generation, Heurist uses the Stable Diffusion model to provide efficient and flexible image generation services; in terms of chatbots, it uses large language models to achieve intelligent dialogue and content generation; in terms of AI search engines, it combines pre-trained language models to provide accurate information retrieval and detailed answers.
These applications not only improve the user experience, but also demonstrate Heurist’s innovation and technical advantages in the field of decentralized AI. The application effects are shown in the figure below:
Source: @heurist_ai
Image generation
The image generation application of the Heurist project mainly relies on the Stable Diffusion model to generate high-quality images through text prompts. Users can interact with the Stable Diffusion model via the REST API, submitting textual descriptions to generate images. The cost of each generation task depends on the resolution of the image and the number of iterations. For example, generating a 1024x1024 pixel, 40-iteration image using the SD 1.5 model requires 8 standard credit units. Through this mechanism, Heurist implements an efficient and flexible image generation service.
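If the credit cost scales linearly with pixel count and iteration count from a 512x512, 20-iteration baseline of 1 credit (an assumption, though it is consistent with the example above), the stated 8 credits for a 1024x1024, 40-iteration SD 1.5 image follows directly:

```python
# Hypothetical credit-cost estimate under a linear-scaling assumption.
def credit_cost(width: int, height: int, iterations: int,
                base_credits: float = 1.0) -> float:
    return base_credits * (width * height) / (512 * 512) * (iterations / 20)

print(credit_cost(1024, 1024, 40))  # 8.0 credits, matching the example above
```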
Chatbot
The chatbot application of the Heurist project implements intelligent dialogue through large language models (LLMs). Heurist Gateway is an OpenAI-compatible LLM API endpoint built with LiteLLM that allows developers to call the Heurist API in the OpenAI format. For example, using the Mixtral 8x7B model, developers can replace existing LLM providers with just a few lines of code and get performance similar to GPT-3.5 or Claude 2 at a lower cost.
Heurist’s LLM model supports a variety of applications, including automated customer service, content generation, and complex question answering. Users can interact with these models through API requests, submit text input, and get responses generated by the models, enabling diverse conversational and interactive experiences.
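Because the gateway is OpenAI-compatible, switching providers can look roughly like the sketch below. The base URL, model identifier, and API key are placeholders rather than confirmed Heurist values; the only change from a standard OpenAI integration is pointing the client at a different endpoint.

```python
from openai import OpenAI

# Point the standard OpenAI client at an OpenAI-compatible gateway.
client = OpenAI(
    base_url="https://llm-gateway.example.com/v1",   # placeholder gateway URL
    api_key="YOUR_API_KEY",                          # placeholder key
)

response = client.chat.completions.create(
    model="mixtral-8x7b-instruct",                   # placeholder model id
    messages=[{"role": "user", "content": "Explain DePIN in one sentence."}],
)
print(response.choices[0].message.content)
```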
AI search engine
Heurist's AI search engine provides powerful search and information retrieval capabilities by integrating large-scale pre-trained language models such as Mixtral 8x7B. Users can get accurate and detailed answers through simple natural-language queries. For example, for the question “Who is the CEO of Binance?”, the Heurist search engine not only provides the name of the current CEO (Richard Teng) but also details his background and the circumstances surrounding the previous CEO.
The Heurist search engine combines text generation and information retrieval technology to handle complex queries and provide high-quality search results and relevant information. Users can submit queries through the API interface and obtain structured answers and reference materials, making Heurist’s search engine not only suitable for general users, but also to meet the needs of professional fields.
DePIN (Decentralized Physical Infrastructure Networks) represents a new form of the “sharing economy,” serving as a bridge between the physical and digital worlds. From both a market valuation and application area perspective, DePIN presents significant growth potential. Compared to CePIN (Centralized Physical Infrastructure Networks), DePIN offers advantages such as decentralization, transparency, user autonomy, incentive mechanisms, and resistance to censorship, all of which further drive its development. Due to DePIN’s unique economic model, it is prone to creating a “flywheel effect.” While many current DePIN projects have completed the construction of the “supply side,” the next critical focus is to stimulate real user demand and expand the “demand side.”
Although DePIN shows immense development potential, it still faces challenges in technological maturity, service stability, market acceptance, and the regulatory environment. However, with technological advancement and market development, these challenges are expected to be gradually resolved. It is foreseeable that once they are effectively addressed, DePIN will achieve mass adoption, bringing a large influx of new users and drawing broader attention to the crypto field. This could become the driving engine of the next bull market. Let's witness that day together!
This article, originally titled “解密 DePIN 生态：AI 算力的变革力量” (Decrypting the DePIN Ecosystem: The Transformative Power of AI Computing Power), is reproduced from [WeChat public account: Gryphsis Academy]. All copyrights belong to the original author [Gryphsis Academy]. If you have any objection to the reprint, please contact the Gate Learn team, which will handle it promptly according to the relevant procedures.
Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.
Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.