Born on the Edge: How Decentralized Computing Power Networks Empower Crypto and AI?

Advanced · 7/7/2024, 7:34:41 PM
This article deconstructs specific projects and the field as a whole from both micro and macro perspectives, aiming to give readers the analytical tools to understand each project's core competitive advantages and the overall development of the decentralized computing power sector. The author introduces and analyzes five projects: Aethir, io.net, Render Network, Akash Network, and Gensyn, then summarizes and evaluates their status and the sector's development.

1 The Intersection of AI and Crypto

On May 23rd, chip giant NVIDIA released its fiscal year 2025 first-quarter financial report, showing first-quarter revenue of $26 billion. Of this, data center revenue surged a staggering 427% year-over-year to $22.6 billion. That NVIDIA can single-handedly lift the financial performance of the US stock market reflects the explosive demand for computing power among global technology companies competing in the AI arena. The more top-tier technology companies expand their ambitions in the AI race, the faster their demand for computing power grows. According to TrendForce’s forecast, by 2024, the demand for high-end AI servers from the four major US cloud service providers—Microsoft, Google, AWS, and Meta—is expected to collectively account for over 60% of global demand, with shares forecast at 20.2%, 16.6%, 16%, and 10.8%, respectively.

Image source: https://investor.nvidia.com/financial-info/financial-reports/default.aspx

“Chip shortage” has been an annual buzzword in recent years. On one hand, large language models (LLMs) require substantial computing power for training and inference, and as models iterate, computing costs and demand increase exponentially. On the other hand, large companies like Meta purchase chips in massive quantities, tilting global computing resources toward these tech giants and making it increasingly difficult for small enterprises to obtain the resources they need. The difficulties faced by small enterprises stem not only from the chip shortage caused by skyrocketing demand but also from structural contradictions on the supply side. A large number of GPUs currently sit idle: some data centers hold substantial idle computing power (with utilization rates as low as 12% to 18%), and significant computing resources are also idle in crypto mining due to reduced profitability. Although not all of this computing power is suitable for specialized applications such as AI training, consumer-grade hardware can still play a significant role in areas such as AI inference, cloud gaming rendering, and cloud phones. The opportunity to integrate and utilize these computing resources is enormous.

Shifting focus from AI to crypto, after a three-year silence in the cryptocurrency market, another bull market has finally emerged. Bitcoin prices have repeatedly hit new highs, and various meme coins continue to emerge. Although AI and Crypto have been buzzwords in recent years, artificial intelligence and blockchain as two important technologies seem like parallel lines that have yet to find an “intersection.” Earlier this year, Vitalik published an article titled “The promise and challenges of crypto + AI applications,” discussing future scenarios where AI and crypto converge. Vitalik outlined many visions in the article, including using blockchain and MPC (multi-party computation) encryption technologies for decentralized training and inference of AI, which could open up the black box of machine learning and make AI models more trustless, among other benefits. While realizing these visions will require considerable effort, one use case mentioned by Vitalik—empowering AI through crypto-economic incentives—is an important direction that can be achieved in the short term. Decentralized computing power networks are currently one of the most suitable scenarios for AI + crypto integration.

2 Decentralized Computing Power Network

Currently, there are numerous projects developing in the decentralized computing power network space. The underlying logic of these projects is similar and can be summarized as follows: using tokens to incentivize computing power providers to participate in the network and offer their computing resources. These scattered computing resources can aggregate into decentralized computing power networks of significant scale. This approach not only increases the utilization of idle computing power but also meets the computing needs of clients at lower costs, achieving a win-win situation for both buyers and sellers.

To provide readers with a comprehensive understanding of this sector in a short time, this article will deconstruct specific projects and the entire field from both micro and macro perspectives. The aim is to provide analytical insights for readers to understand the core competitive advantages of each project and the overall development of the decentralized computing power network sector. The author will introduce and analyze five projects: Aethir, io.net, Render Network, Akash Network, and Gensyn, and summarize and evaluate their situations and the development of the sector.

In terms of analytical framework, focusing on a specific decentralized computing power network, we can break it down into four core components:

  • Hardware Network: Integrating scattered computing resources together through nodes distributed globally to facilitate resource sharing and load balancing forms the foundational layer of decentralized computing power networks.
  • Bilateral Market: Matching computing power providers with demanders through effective pricing and discovery mechanisms, providing a secure trading platform ensuring transparent, fair, and trustworthy transactions for both sides.
  • Consensus Mechanism: Ensuring nodes within the network operate correctly and complete tasks. The consensus mechanism monitors two aspects: 1) Node uptime: nodes must remain active and ready to accept tasks at any time. 2) Task completion proof: nodes must complete tasks effectively and correctly, without diverting computing power to other purposes or merely occupying processes and threads without doing the work.
  • Token Incentives: Token models incentivize more participants to provide/use services, capturing network effects with tokens to facilitate community benefit sharing.
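The two consensus-layer checks in the list above can be sketched in a few lines of Python. This is a purely hypothetical illustration, not any project's actual protocol: `heartbeat_fresh` models the uptime check, and `verify_by_redundancy` models a task-completion proof by re-running a task on an independent verifier node and comparing outputs (an approach that only works for deterministic workloads).

```python
HEARTBEAT_TIMEOUT = 30.0  # seconds; hypothetical liveness window

def heartbeat_fresh(last_heartbeat: float, now: float) -> bool:
    """Uptime check: a node counts as active if it pinged recently."""
    return (now - last_heartbeat) <= HEARTBEAT_TIMEOUT

def verify_by_redundancy(task, run_on_node, run_on_verifier) -> bool:
    """Task-completion proof via spot-checking: re-run the task on an
    independent verifier and compare results (deterministic tasks only)."""
    return run_on_node(task) == run_on_verifier(task)

# Toy usage: a deterministic "render" task stubbed as a pure function.
render = lambda t: t["frame"] * 2
ok = verify_by_redundancy({"frame": 42}, render, render)
```

In practice, networks typically pair checks like these with economic penalties (such as slashing staked tokens) so that misreporting uptime or faking task results is unprofitable.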

From an overview perspective of the decentralized computing power sector, Blockworks Research provides a robust analytical framework that categorizes projects into three distinct layers.

  • Bare Metal Layer: Forms the foundational layer of the decentralized computing stack, responsible for aggregating raw computing resources and making them accessible via API calls.
  • Orchestration Layer: Constitutes the middle layer of the decentralized computing stack, primarily focused on coordination and abstraction. It handles tasks such as scheduling, scaling, operation, load balancing, and fault tolerance of computing power. Its main role is to “abstract” the complexity of managing the underlying hardware, providing a more advanced user interface tailored to specific customer needs.
  • Aggregation Layer: Forms the top layer of the decentralized computing stack, primarily responsible for integration. It provides a unified interface for users to execute various computing tasks in one place, such as AI training, rendering, zkML, and more. This layer acts as an orchestration and distribution layer for multiple decentralized computing services.

Image source: Youbi Capital

Based on the two analysis frameworks provided, we will conduct a comparative analysis of five selected projects across four dimensions: core business, market positioning, hardware facilities, and financial performance.

2.1 Core Business

From a foundational perspective, decentralized computing power networks are highly homogenized, utilizing tokens to incentivize idle computing power providers to offer their services. Based on this foundational logic, we can understand the core business differences among projects from three aspects:

  • The source of idle computing power
    • The sources of idle computing power on the market primarily come from two main categories: 1) data centers, mining companies, and other enterprises; and 2) individual users. Data centers typically possess professional-grade hardware, whereas individual users generally purchase consumer-grade chips.
    • Aethir, Akash Network, and Gensyn primarily gather computing power from enterprises. The benefits of sourcing from enterprises include: 1) higher-quality hardware and professional maintenance teams, which improve the performance and reliability of computing resources; 2) more homogeneous, centrally managed computing resources in enterprises and data centers, which make scheduling and maintenance more efficient. However, this approach places higher demands on the project team, which must build business relationships with the enterprises that control the computing power. Additionally, scalability and decentralization may be somewhat compromised.
    • Render Network and io.net incentivize individual users to provide their idle computing power. The advantages of sourcing from individuals include: 1) lower explicit costs of idle computing power from individuals, providing more economical computing resources; 2) higher scalability and decentralization of the network, enhancing system resilience and robustness. However, the disadvantages include the widespread and heterogeneous distribution of resources among individuals, which complicates management and scheduling, increasing operational challenges. Moreover, relying on individual computing power to initiate network effects can be more difficult. Lastly, devices owned by individuals may pose more security risks, potentially leading to data leaks and misuse of computing power.
  • Computing power consumer
    • From the perspective of computing power consumers, Aethir, io.net, and Gensyn primarily target enterprises. For B-end clients, such as those requiring AI and real-time gaming rendering, there is a high demand for high-performance computing resources, typically requiring high-end GPUs or professional-grade hardware. Additionally, B-end clients have stringent requirements for the stability and reliability of computing resources, necessitating high-quality service level agreements to ensure smooth project operations and timely technical support. Moreover, the migration costs for B-end clients are substantial. If decentralized networks lack mature SDKs to facilitate rapid deployment for projects (for example, Akash Network requiring users to develop based on remote ports), it becomes challenging to persuade clients to migrate. Unless there is a significant price advantage, client willingness to migrate remains low.
    • Render Network and Akash Network primarily serve individual users for computing power services. Serving C-end consumers requires projects to design simple and user-friendly interfaces and tools to deliver a positive consumer experience. Additionally, consumers are highly price-sensitive, necessitating competitive pricing strategies from projects.
  • Hardware type
    • Common computing hardware resources include CPU, FPGA, GPU, ASIC, and SoC. These hardware types have significant differences in design goals, performance characteristics, and application areas. In summary, CPUs excel at general computing tasks, FPGAs are advantageous for high parallel processing and programmability, GPUs perform well in parallel computing, ASICs are most efficient for specific tasks, and SoCs integrate multiple functions into one unit, suitable for highly integrated applications. The choice of hardware depends on the specific application needs, performance requirements, and cost considerations.
    • The decentralized computing power projects discussed here mostly aggregate GPU computing power, which is determined by the type of project and the characteristics of GPUs. GPUs have unique advantages in AI training, parallel computing, multimedia rendering, and similar workloads. Although these projects mostly involve GPU integration, different applications have different hardware specifications and requirements, resulting in heterogeneous optimization targets and parameters such as parallelism/serial dependencies, memory, and latency. For example, rendering workloads are actually better suited to consumer-grade GPUs than to higher-performance data center GPUs, because rendering places heavy demands on tasks like ray tracing, and consumer-grade chips like the 4090 are enhanced with RT cores specifically optimized for ray tracing. AI training and inference, by contrast, require professional-grade GPUs. Thus, Render Network can aggregate consumer-grade GPUs like RTX 3090s and 4090s from individual users, while io.net needs more H100s, A100s, and other professional-grade GPUs to meet the needs of AI startups.

2.2 Market Positioning

In terms of project positioning, the core issues to be addressed, optimization focus, and value capture capabilities differ for the bare metal layer, orchestration layer, and aggregation layer.

  • The bare metal layer focuses on the collection and utilization of physical resources. The orchestration layer is concerned with the scheduling and optimization of computing power, designing the optimal configuration of physical hardware according to customer needs. The aggregation layer is general-purpose, focusing on the integration and abstraction of different resources.
  • From a value chain perspective, each project should start from the bare metal layer and strive to ascend upwards. In terms of value capture, the capability increases progressively from the bare metal layer to the orchestration layer and finally to the aggregation layer. The aggregation layer can capture the most value because an aggregation platform can achieve the greatest network effects and directly reach the most users, effectively acting as the traffic entry point for a decentralized network, thus occupying the highest value capture position in the entire computing resource management stack.
  • Correspondingly, building an aggregation platform is the most challenging. A project needs to comprehensively address technical complexity, heterogeneous resource management, system reliability and scalability, network effect realization, security and privacy protection, and complex operational management issues. These challenges are unfavorable for the cold start of a project and depend on the development situation and timing of the sector. It is unrealistic to work on the aggregation layer before the orchestration layer has matured and captured a significant market share.
  • Currently, Aethir, Render Network, Akash Network, and Gensyn belong to the orchestration layer, each providing services for specific targets and customer groups. Aethir’s main business is real-time rendering for cloud gaming, along with development and deployment environments and tools for B-end customers; Render Network’s main business is video rendering; Akash Network’s mission is to provide a marketplace platform similar to Taobao; and Gensyn focuses deeply on AI training. io.net positions itself as an aggregation layer, but its current functionality is still some distance from a complete aggregation layer: although it has collected hardware from Render Network and Filecoin, the abstraction and integration of hardware resources have not yet been completed.

2.3 Hardware Facilities

  • Currently, not all projects have disclosed detailed network data. Comparatively, io.net’s explorer UI is the best, displaying parameters such as GPU/CPU quantity, types, prices, distribution, network usage, and node revenue. However, at the end of April, io.net’s frontend was attacked due to the lack of authentication for PUT/POST interfaces, leading to hackers tampering with the frontend data. This incident has raised concerns about the privacy and reliability of network data for other projects as well.
  • In terms of GPU quantity and models, io.net, as an aggregation layer, should logically have the most hardware. Aethir follows closely behind, while the hardware status of other projects is less transparent. io.net offers a wide variety of GPUs, including professional-grade GPUs like the A100 and consumer-grade GPUs like the 4090, consistent with its aggregation positioning; this allows io.net to select the most suitable GPU for a given task's requirements. However, different GPU models and brands may require different drivers and configurations, and the software needs complex optimization, increasing management and maintenance complexity. Currently, io.net's task allocation mainly relies on user self-selection.
  • Aethir has released its own mining machine: in May, Aethir Edge, backed by Qualcomm, was officially launched. It breaks away from the single centralized GPU-cluster deployment far from users by pushing computing power to the edge. Combined with H100 cluster computing power, Aethir Edge serves AI scenarios, deploying trained models to provide inference services at optimal cost. This solution is closer to users, faster in service, and more cost-efficient.
  • From the supply and demand perspective, taking Akash Network as an example, its statistics show around 16k CPUs and 378 GPUs. Based on network rental demand, utilization rates are 11.1% for CPUs and 19.3% for GPUs. Only the professional-grade H100 has a relatively high rental rate, while most other models sit largely idle. The situation is broadly similar across other networks: overall demand is low, and most computing power, apart from popular chips like the A100 and H100, remains idle.
  • In terms of pricing, decentralized networks hold a meaningful cost advantage only against the giants of the cloud computing market; compared with other traditional service providers, the advantage is not significant.

2.4 Financial Performance

  • Regardless of how the token model is designed, a healthy tokenomics must meet the following basic conditions: 1) User demand for the network must be reflected in the token price, meaning that the token can capture value; 2) All participants, whether developers, nodes, or users, need to receive long-term and fair incentives; 3) Ensure decentralized governance and avoid excessive holding by insiders; 4) Reasonable inflation and deflation mechanisms and token release schedules to avoid significant price volatility affecting network stability and sustainability.
  • If we broadly categorize token models into BME (burn and mint equilibrium) and SFA (stake for access), the deflationary pressure of these two models comes from different sources: In the BME model, tokens are burned after users purchase services, so the system’s deflationary pressure is determined by demand. In the SFA model, service providers/nodes are required to stake tokens to obtain the qualification to provide services, so the deflationary pressure is brought by supply. The advantage of BME is that it is more suitable for non-standardized goods. However, if network demand is insufficient, it may face continuous inflationary pressure. The token models of various projects differ in details, but generally speaking, Aethir leans more towards SFA, while io.net, Render Network, and Akash Network lean more towards BME. Gensyn’s model is still unknown.
  • In terms of revenue, network demand will be directly reflected in the overall network revenue (excluding miner income, as miners receive rewards for completing tasks and subsidies from projects). According to publicly available data, io.net has the highest value. Although Aethir’s revenue has not yet been disclosed, public information indicates they have announced signing orders with many B-end customers.
  • Regarding token prices, of the five projects, only Render Network and Akash Network have had tokens on the market for a meaningful period. Aethir and io.net issued tokens only recently, so their price performance needs further observation and will not be discussed in detail here, and Gensyn’s plans remain unclear. Judging from the two longer-listed projects, as well as other projects in the same sector not discussed here, decentralized computing power networks have shown very impressive price performance, reflecting to a certain extent the significant market potential and high expectations of the community.
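The deflationary dynamics of the two token models described above can be made concrete with a toy simulation. This is a hypothetical sketch, not any project's actual tokenomics: under BME, each unit of demand burns tokens while the protocol mints at a fixed emission rate, so supply shrinks only when demand outpaces emissions; under SFA, circulating supply is reduced by whatever providers lock as stake.

```python
def bme_supply(initial_supply, emission_per_epoch, demand_burn_per_epoch, epochs):
    """Burn-and-mint equilibrium: user purchases burn tokens, the protocol
    mints a fixed emission; supply deflates only if burn exceeds mint."""
    supply = initial_supply
    for _ in range(epochs):
        supply += emission_per_epoch - demand_burn_per_epoch
    return supply

def sfa_circulating(total_supply, providers, stake_per_provider):
    """Stake-for-access: providers lock stake to qualify as service nodes,
    shrinking the circulating (liquid) supply from the supply side."""
    return total_supply - providers * stake_per_provider

# With weak demand (burn < emission), BME supply inflates:
inflating = bme_supply(1_000_000, emission_per_epoch=100, demand_burn_per_epoch=40, epochs=10)
# With strong demand (burn > emission), it deflates:
deflating = bme_supply(1_000_000, emission_per_epoch=100, demand_burn_per_epoch=250, epochs=10)
```

The first scenario illustrates the risk noted above: when network demand is insufficient, a BME token faces continuous inflationary pressure regardless of how the rest of the model is tuned.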

2.5 Summary

  • The decentralized computing power network sector is developing rapidly, with many projects already capable of serving customers through their products and generating some revenue. The sector has moved beyond pure narrative and entered a phase where preliminary services can be provided.
  • A common issue faced by decentralized computing power networks is weak demand, with long-term customer needs not being well validated and explored. However, demand-side challenges have not significantly impacted token prices, as the few projects that have issued tokens have shown impressive performance.
  • AI is the main narrative for decentralized computing power networks but not the only application. In addition to AI training and inference, computing power can also be used for real-time rendering in cloud gaming, cloud mobile services, and more.
  • The hardware in computing power networks is highly heterogeneous, and the quality and scale of these networks need further improvement. For C-end users, the cost advantage is not very significant. For B-end users, aside from cost savings, factors such as service stability, reliability, technical support, compliance, and legal support must also be considered. Web3 projects generally do not perform well in these areas.

3 Closing Thoughts

The exponential growth of AI has undeniably created massive demand for computing power. Since 2012, the compute used in AI training tasks has grown exponentially, doubling roughly every 3.5 months (by comparison, Moore’s Law predicts a doubling every 18 months); over that period, compute demand has increased more than 300,000-fold, far exceeding the roughly 12-fold increase Moore’s Law would predict. The GPU market is forecast to grow at a compound annual growth rate of 32% over the next five years, surpassing $200 billion; AMD’s estimate is even higher, with the company predicting the GPU chip market will reach $400 billion by 2027.
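A quick back-of-the-envelope check shows that the growth figures quoted above are internally consistent: a 300,000× increase at a 3.5-month doubling time corresponds to a bit over five years, matching the post-2012 window usually cited for this statistic, and Moore's Law over the same window yields roughly the 12× quoted.

```python
import math

DOUBLING_MONTHS = 3.5    # cited AI training compute doubling period
GROWTH_FACTOR = 300_000  # cited cumulative increase since 2012

# Number of doublings needed, then elapsed time in months.
doublings = math.log2(GROWTH_FACTOR)  # about 18.2 doublings
months = doublings * DOUBLING_MONTHS  # about 64 months, i.e. ~5.3 years

# Moore's Law over the same window (18-month doubling):
moore_factor = 2 ** (months / 18)     # about 11.6x, close to the 12x quoted
```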

Image source: https://www.stateof.ai/

The explosive growth of artificial intelligence and other compute-intensive workloads, such as AR/VR rendering, has exposed structural inefficiencies in the traditional cloud computing and leading computing markets. In theory, decentralized computing power networks can leverage distributed idle computing resources to provide more flexible, cost-effective, and efficient solutions to meet the massive demand for computing resources.

Thus, the combination of crypto and AI has enormous market potential but also faces intense competition with traditional enterprises, high entry barriers, and a complex market environment. Overall, among all crypto sectors, decentralized computing power networks are one of the most promising verticals in the crypto field to meet real demand.

Image source: https://vitalik.eth.limo/general/2024/01/30/cryptoai.html

The future is bright, but the road is challenging. Achieving the vision above requires solving numerous problems; at this stage, simply reselling traditional cloud services leaves projects with thin profit margins.

From the demand side, large enterprises typically build their own computing power, while most individual developers tend to choose established cloud services. Whether small and medium-sized enterprises—the real target users of decentralized computing power networks—have stable demand remains to be explored and verified.

On the other hand, AI is a vast market with extremely high potential and imagination. To tap into this broader market, future decentralized computing power service providers will need to transition towards offering models and AI services, exploring more use cases of crypto + AI, and expanding the value their projects can create. However, at present, many problems and challenges remain to be addressed before further development into the AI field can be achieved:

  • Price advantage not prominent: Comparing previous data reveals that decentralized computing power networks do not demonstrate significant cost advantages. This may be due to market mechanisms dictating that high-demand specialized chips like H100 and A100 are not priced cheaply. Additionally, the lack of economies of scale from decentralization, high network and bandwidth costs, and the significant complexity of management and operations add hidden costs that further increase computing costs.
  • Specific challenges in AI training: Conducting AI training in a decentralized manner currently faces substantial technical bottlenecks, which become evident in the GPU workflow. During large language model training, GPUs first receive pre-processed data batches and perform forward and backward propagation to compute gradients; the GPUs then aggregate gradients and update model parameters in lockstep to keep replicas synchronized. This iterative process repeats until all batches are trained or a specified number of epochs is reached, and it involves extensive data transfer and synchronization. Questions such as which parallelism and synchronization strategies to use, how to optimize network bandwidth and latency, and how to reduce communication costs remain largely unresolved. For now, using decentralized computing power networks for AI training is impractical.
  • Data security and privacy concerns: In the training process of large language models, every stage involving data handling and transmission—such as data allocation, model training, and parameter and gradient aggregation—can potentially impact data security and privacy. Privacy concerns are especially critical in models involving sensitive data. Without resolving data privacy issues, scaling on the demand side is not feasible.
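The data-parallel training loop described in the AI-training bullet above can be sketched in plain Python. This is a simplified illustration, not a real distributed implementation: the workers and the all-reduce step are simulated in-process, and a single scalar weight stands in for billions of parameters. The point it demonstrates is that every training step contains a synchronization barrier where all gradients must be exchanged — exactly where network bandwidth and latency dominate in a decentralized setting.

```python
def local_gradient(weight, batch):
    """Forward + backward pass on one worker's shard, for the toy model
    y = weight * x with squared-error loss against targets y = 2 * x."""
    return sum(2 * (weight * x - 2 * x) * x for x in batch) / len(batch)

def all_reduce_mean(grads):
    """Gradient aggregation: in a real system this is a network all-reduce
    across nodes; here it is a simple in-process average."""
    return sum(grads) / len(grads)

def train(shards, steps=100, lr=0.02):
    weight = 0.0
    for _ in range(steps):
        # 1) each worker computes a gradient on its own data shard ...
        grads = [local_gradient(weight, shard) for shard in shards]
        # 2) ... then every worker must synchronize (the communication
        #    bottleneck that decentralized networks struggle with)
        g = all_reduce_mean(grads)
        # 3) and apply the identical update, keeping replicas in sync
        weight -= lr * g
    return weight

# Two simulated workers with separate shards; because the updates stay
# synchronized, both replicas converge toward the true weight 2.0.
w = train([[1.0, 2.0, 3.0], [4.0, 5.0]])
```

Skipping or delaying step 2 is what causes replicas to diverge, which is why the parallelism and synchronization strategy questions raised above are central to any decentralized training design.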

From a pragmatic perspective, a decentralized computing power network needs to balance current demand exploration with future market opportunities. It’s crucial to identify a clear product positioning and target audience. Initially focusing on non-AI or Web3 native projects, addressing relatively niche demands, can help establish an early user base. Simultaneously, continuous exploration of various scenarios where AI and crypto converge is essential. This involves exploring technological frontiers and upgrading services to meet evolving needs. By strategically aligning product offerings with market demands and staying at the forefront of technological advancements, decentralized computing power networks can effectively position themselves for sustained growth and market relevance.

References

https://www.stateof.ai/

https://vitalik.eth.limo/general/2024/01/30/cryptoai.html

https://foresightnews.pro/article/detail/34368

https://app.blockworksresearch.com/unlocked/compute-de-pi-ns-paths-to-adoption-in-an-ai-dominated-market?callback=%2Fresearch%2Fcompute-de-pi-ns-paths-to-adoption-in-an-ai-dominated-market

https://research.web3caff.com/zh/archives/17351?ref=1554

Statement:

  1. This article is reproduced from [Youbi Capital], and the copyright belongs to the original author [Youbi]. If you have any objections to the reprint, please contact the Gate Learn team, which will handle the matter promptly according to relevant procedures.

  2. Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.

  3. Other language versions of the article are translated by the Gate Learn team. Unless Gate.io is mentioned, the translated article may not be reproduced, distributed, or plagiarized.

Born on the Edge: How Decentralized Computing Power Networks Empower Crypto and AI?

Advanced7/7/2024, 7:34:41 PM
This article will deconstruct specific projects and the entire field from both micro and macro perspectives, aiming to provide readers with analytical insights to understand the core competitive advantages of each project and the overall development of the decentralized computing power track. The author will introduce and analyze five projects: Aethir, io.net, Render Network, Akash Network, and Gensyn, and summarize and evaluate their situations and the development of the track.

1 The Intersection of AI and Crypto

On May 23rd, chip giant NVIDIA released its first-quarter fiscal year 2025 financial report. The report showed that NVIDIA’s first-quarter revenue was $26 billion. Among them, data center revenue increased by a staggering 427% from last year to reach $22.6 billion. NVIDIA’s ability to single-handedly boost the financial performance of the US stock market reflects the explosive demand for computing power among global technology companies competing in the AI arena. The more top-tier technology companies expand their ambitions in the AI race, the greater their exponentially growing demand for computing power. According to TrendForce’s forecast, by 2024, the demand for high-end AI servers from the four major US cloud service providers—Microsoft, Google, AWS, and Meta—is expected to collectively account for over 60% of global demand, with shares forecasted at 20.2%, 16.6%, 16%, and 10.8%, respectively.

Image source: https://investor.nvidia.com/financial-info/financial-reports/default.aspx

“Chip shortages” have continuously been an annual buzzword in recent years. On one hand, large language models (LLMs) require substantial computing power for training and inference. As models iterate, the costs and demand for computing power exponentially increase. On the other hand, large companies like Meta purchase massive quantities of chips, causing global computing resources to tilt towards these tech giants, making it increasingly difficult for small enterprises to obtain the necessary computing resources. The challenges faced by small enterprises stem not only from the shortage of chips due to skyrocketing demand but also from structural contradictions in the supply. Currently, there are still a large number of idle GPUs on the supply side; for example, some data centers have a large amount of idle computing power (with utilization rates as low as 12% to 18%), and significant computing power resources are also idle in encrypted mining due to reduced profitability. Although not all of this computing power is suitable for specialized applications such as AI training, consumer-grade hardware can still play a significant role in other areas such as AI inference, cloud gaming rendering, cloud phones, etc. The opportunity to integrate and utilize these computing resources is enormous.

Shifting focus from AI to crypto, after a three-year silence in the cryptocurrency market, another bull market has finally emerged. Bitcoin prices have repeatedly hit new highs, and various meme coins continue to emerge. Although AI and Crypto have been buzzwords in recent years, artificial intelligence and blockchain as two important technologies seem like parallel lines that have yet to find an “intersection.” Earlier this year, Vitalik published an article titled “The promise and challenges of crypto + AI applications,” discussing future scenarios where AI and crypto converge. Vitalik outlined many visions in the article, including using blockchain and MPC (multi-party computation) encryption technologies for decentralized training and inference of AI, which could open up the black box of machine learning and make AI models more trustless, among other benefits. While realizing these visions will require considerable effort, one use case mentioned by Vitalik—empowering AI through crypto-economic incentives—is an important direction that can be achieved in the short term. Decentralized computing power networks are currently one of the most suitable scenarios for AI + crypto integration.

2 Decentralized Computing Power Network

Currently, there are numerous projects developing in the decentralized computing power network space. The underlying logic of these projects is similar and can be summarized as follows: using tokens to incentivize computing power providers to participate in the network and offer their computing resources. These scattered computing resources can aggregate into decentralized computing power networks of significant scale. This approach not only increases the utilization of idle computing power but also meets the computing needs of clients at lower costs, achieving a win-win situation for both buyers and sellers.

To provide readers with a comprehensive understanding of this sector in a short time, this article will deconstruct specific projects and the entire field from both micro and macro perspectives. The aim is to provide analytical insights for readers to understand the core competitive advantages of each project and the overall development of the decentralized computing power network sector. The author will introduce and analyze five projects: Aethir, io.net, Render Network, Akash Network, and Gensyn, and summarize and evaluate their situations and the development of the sector.

In terms of analytical framework, focusing on a specific decentralized computing power network, we can break it down into four core components:

  • Hardware Network: Integrating scattered computing resources together through nodes distributed globally to facilitate resource sharing and load balancing forms the foundational layer of decentralized computing power networks.
  • Bilateral Market: Matching computing power providers with demanders through effective pricing and discovery mechanisms, providing a secure trading platform ensuring transparent, fair, and trustworthy transactions for both sides.
  • Consensus Mechanism: Ensuring nodes within the network operate correctly and complete their tasks. The mechanism monitors two aspects: 1) node uptime, ensuring nodes are active and ready to accept tasks at any time; 2) proof of task completion, verifying that nodes complete tasks effectively and correctly rather than diverting computing power to other purposes while merely occupying processes and threads.
  • Token Incentives: Token models incentivize more participants to provide/use services, capturing network effects with tokens to facilitate community benefit sharing.
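To make the bilateral-market component above concrete, it can be thought of as a price-and-spec matching problem. The sketch below is a hypothetical, greatly simplified matcher (all names and fields are invented for illustration); it assumes each job simply takes the cheapest active node that meets its GPU requirement and price ceiling, whereas a real network would also weigh latency, reputation, and staking status:

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    gpu_model: str
    price_per_hour: float  # provider's asking price
    active: bool           # passed the uptime check

@dataclass
class Job:
    job_id: str
    required_gpu: str
    max_price: float       # demander's price ceiling

def match(jobs, nodes):
    """Greedy matcher: each job takes the cheapest active node
    with the required GPU model, within the job's price ceiling."""
    assignments = {}
    free = sorted((n for n in nodes if n.active), key=lambda n: n.price_per_hour)
    for job in jobs:
        for node in free:
            if node.gpu_model == job.required_gpu and node.price_per_hour <= job.max_price:
                assignments[job.job_id] = node.node_id
                free.remove(node)
                break
    return assignments

nodes = [
    Node("n1", "RTX 4090", 0.40, True),
    Node("n2", "H100", 2.50, True),
    Node("n3", "RTX 4090", 0.35, False),  # offline, skipped by the uptime check
]
jobs = [Job("render-1", "RTX 4090", 0.50), Job("train-1", "H100", 3.00)]
print(match(jobs, nodes))  # {'render-1': 'n1', 'train-1': 'n2'}
```

Note how the consensus mechanism feeds the market: the `active` flag is exactly the node-uptime check described above, gating which providers are even eligible to be matched.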

From an overview perspective of the decentralized computing power sector, Blockworks Research provides a robust analytical framework that categorizes projects into three distinct layers.

  • Bare Metal Layer: Forms the foundational layer of the decentralized computing stack, responsible for aggregating raw computing resources and making them accessible via API calls.
  • Orchestration Layer: Constitutes the middle layer of the decentralized computing stack, primarily focused on coordination and abstraction. It handles tasks such as scheduling, scaling, operation, load balancing, and fault tolerance of computing power. Its main role is to “abstract” the complexity of managing the underlying hardware, providing a more advanced user interface tailored to specific customer needs.
  • Aggregation Layer: Forms the top layer of the decentralized computing stack, primarily responsible for integration. It provides a unified interface for users to execute various computing tasks in one place, such as AI training, rendering, zkML, and more. This layer acts as an orchestration and distribution layer for multiple decentralized computing services.

Image source: Youbi Capital

Based on the two analysis frameworks provided, we will conduct a comparative analysis of five selected projects across four dimensions: core business, market positioning, hardware facilities, and financial performance.

2.1 Core Business

From a foundational perspective, decentralized computing power networks are highly homogenized, utilizing tokens to incentivize idle computing power providers to offer their services. Based on this foundational logic, we can understand the core business differences among projects from three aspects:

  • The source of idle computing power
    • The sources of idle computing power on the market primarily come from two main categories: 1) data centers, mining companies, and other enterprises; and 2) individual users. Data centers typically possess professional-grade hardware, whereas individual users generally purchase consumer-grade chips.
    • Aethir, Akash Network, and Gensyn primarily gather computing power from enterprises. The benefits of sourcing from enterprises include: 1) higher-quality hardware and professional maintenance teams, leading to higher performance and reliability of computing resources; 2) more homogeneous, centrally managed computing resources in enterprises and data centers, enabling more efficient scheduling and maintenance. However, this approach places higher demands on project teams, who must establish business relationships with the enterprises that control the computing power. Additionally, scalability and decentralization may be somewhat compromised.
    • Render Network and io.net incentivize individual users to provide their idle computing power. The advantages of sourcing from individuals include: 1) lower explicit costs of idle computing power from individuals, providing more economical computing resources; 2) higher scalability and decentralization of the network, enhancing system resilience and robustness. However, the disadvantages include the widespread and heterogeneous distribution of resources among individuals, which complicates management and scheduling, increasing operational challenges. Moreover, relying on individual computing power to initiate network effects can be more difficult. Lastly, devices owned by individuals may pose more security risks, potentially leading to data leaks and misuse of computing power.
  • Computing power consumer
    • From the perspective of computing power consumers, Aethir, io.net, and Gensyn primarily target enterprises. For B-end clients, such as those requiring AI and real-time gaming rendering, there is a high demand for high-performance computing resources, typically requiring high-end GPUs or professional-grade hardware. Additionally, B-end clients have stringent requirements for the stability and reliability of computing resources, necessitating high-quality service level agreements to ensure smooth project operations and timely technical support. Moreover, the migration costs for B-end clients are substantial. If decentralized networks lack mature SDKs to facilitate rapid deployment for projects (for example, Akash Network requiring users to develop based on remote ports), it becomes challenging to persuade clients to migrate. Unless there is a significant price advantage, client willingness to migrate remains low.
    • Render Network and Akash Network primarily serve individual users for computing power services. Serving C-end consumers requires projects to design simple and user-friendly interfaces and tools to deliver a positive consumer experience. Additionally, consumers are highly price-sensitive, necessitating competitive pricing strategies from projects.
  • Hardware type
    • Common computing hardware resources include CPU, FPGA, GPU, ASIC, and SoC. These hardware types have significant differences in design goals, performance characteristics, and application areas. In summary, CPUs excel at general computing tasks, FPGAs are advantageous for high parallel processing and programmability, GPUs perform well in parallel computing, ASICs are most efficient for specific tasks, and SoCs integrate multiple functions into one unit, suitable for highly integrated applications. The choice of hardware depends on the specific application needs, performance requirements, and cost considerations.
    • The decentralized computing power projects we discuss mostly collect GPU computing power, which is determined by the type of project and the characteristics of GPUs. GPUs have unique advantages in AI training, parallel computing, multimedia rendering, and similar workloads. Although these projects mostly involve GPU integration, different applications have different hardware specifications and requirements, resulting in heterogeneous optimization targets and parameters such as parallelism/serial dependencies, memory, and latency. For example, rendering workloads are actually better suited to consumer-grade GPUs than to higher-performance data center GPUs, because rendering places heavy demands on tasks like ray tracing: consumer-grade chips like the RTX 4090 are equipped with RT cores specifically optimized for ray-tracing tasks, whereas AI training and inference require professional-grade GPUs. Thus, Render Network can aggregate consumer-grade GPUs like the RTX 3090 and RTX 4090 from individual users, while io.net needs more H100s, A100s, and other professional-grade GPUs to meet the needs of AI startups.

2.2 Market Positioning

In terms of project positioning, the core issues to be addressed, optimization focus, and value capture capabilities differ for the bare metal layer, orchestration layer, and aggregation layer.

  • The bare metal layer focuses on the collection and utilization of physical resources. The orchestration layer is concerned with the scheduling and optimization of computing power, designing the optimal configuration of physical hardware according to customer needs. The aggregation layer is general-purpose, focusing on the integration and abstraction of different resources.
  • From a value chain perspective, each project should start from the bare metal layer and strive to ascend upwards. In terms of value capture, the capability increases progressively from the bare metal layer to the orchestration layer and finally to the aggregation layer. The aggregation layer can capture the most value because an aggregation platform can achieve the greatest network effects and directly reach the most users, effectively acting as the traffic entry point for a decentralized network, thus occupying the highest value capture position in the entire computing resource management stack.
  • Correspondingly, building an aggregation platform is the most challenging. A project needs to comprehensively address technical complexity, heterogeneous resource management, system reliability and scalability, network effect realization, security and privacy protection, and complex operational management issues. These challenges are unfavorable for the cold start of a project and depend on the development situation and timing of the sector. It is unrealistic to work on the aggregation layer before the orchestration layer has matured and captured a significant market share.
  • Currently, Aethir, Render Network, Akash Network, and Gensyn belong to the orchestration layer. They aim to provide services for specific targets and customer groups. Aethir’s main business is real-time rendering for cloud gaming, along with development and deployment environments and tools for B-end customers; Render Network’s main business is video rendering; Akash Network’s mission is to provide a marketplace platform similar to Taobao; and Gensyn focuses deeply on the AI training field. io.net positions itself as an aggregation layer, but its current functionality is still some distance from that of a complete aggregation layer: although it has collected hardware from Render Network and Filecoin, the abstraction and integration of hardware resources have not yet been completed.

2.3 Hardware Facilities

  • Currently, not all projects have disclosed detailed network data. Comparatively, io.net’s explorer UI is the best, displaying parameters such as GPU/CPU quantity, types, prices, distribution, network usage, and node revenue. However, at the end of April, io.net’s frontend was attacked due to the lack of authentication for PUT/POST interfaces, leading to hackers tampering with the frontend data. This incident has raised concerns about the privacy and reliability of network data for other projects as well.
  • In terms of GPU quantity and models, io.net, being an aggregation layer, should logically have the most hardware. Aethir follows closely behind, while the hardware status of other projects is less transparent. io.net has a wide variety of GPUs, including professional-grade GPUs like the A100 and consumer-grade GPUs like the RTX 4090, aligning with io.net’s aggregation positioning. This allows io.net to select the most suitable GPU for a given task’s requirements. However, different GPU models and brands may require different drivers and configurations, and the software needs complex optimization, increasing management and maintenance complexity. Currently, io.net’s task allocation mainly relies on user self-selection.
  • Aethir has released its own mining machine: Aethir Edge, backed by Qualcomm, officially launched in May. This moves away from deploying computing power solely in centralized GPU clusters far from users and pushes it to the edge instead. Combined with H100 cluster computing power, Aethir Edge serves AI scenarios by deploying trained models and providing inference services at optimal cost. This solution sits closer to users, responds faster, and offers higher cost efficiency.
  • From the supply and demand perspective, taking Akash Network as an example, its statistics show a total CPU count of around 16k and 378 GPUs. Based on network rental demand, the utilization rates for CPU and GPU are 11.1% and 19.3%, respectively. Only the professional-grade GPU H100 has a relatively high rental rate, while most other models remain idle. This situation is generally similar across other networks, with overall network demand being low and most computing power, except for popular chips like the A100 and H100, remaining idle.
  • In terms of price, decentralized networks hold a meaningful cost advantage only against the giants of the cloud computing market; compared with other traditional service providers, the advantage is not significant.
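The utilization figures cited for Akash Network above are simple ratios of leased to total units. As a back-of-the-envelope check (the leased counts below are illustrative values consistent with the reported rates, not official network data):

```python
# Back-of-the-envelope utilization check using the Akash figures cited above.
# Leased counts are hypothetical values consistent with the reported rates.
total_cpu, total_gpu = 16_000, 378
leased_cpu, leased_gpu = 1_776, 73   # illustrative leased units

cpu_util = leased_cpu / total_cpu
gpu_util = leased_gpu / total_gpu
print(f"CPU utilization: {cpu_util:.1%}")  # CPU utilization: 11.1%
print(f"GPU utilization: {gpu_util:.1%}")  # GPU utilization: 19.3%
```

The same arithmetic applied per GPU model is what reveals the skew noted above: popular chips like the H100 show high rental rates while the long tail of consumer hardware sits largely idle.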

2.4 Financial Performance

  • Regardless of how the token model is designed, a healthy tokenomics must meet the following basic conditions: 1) User demand for the network must be reflected in the token price, meaning that the token can capture value; 2) All participants, whether developers, nodes, or users, need to receive long-term and fair incentives; 3) Ensure decentralized governance and avoid excessive holding by insiders; 4) Reasonable inflation and deflation mechanisms and token release schedules to avoid significant price volatility affecting network stability and sustainability.
  • If we broadly categorize token models into BME (burn and mint equilibrium) and SFA (stake for access), the deflationary pressure of these two models comes from different sources: In the BME model, tokens are burned after users purchase services, so the system’s deflationary pressure is determined by demand. In the SFA model, service providers/nodes are required to stake tokens to obtain the qualification to provide services, so the deflationary pressure is brought by supply. The advantage of BME is that it is more suitable for non-standardized goods. However, if network demand is insufficient, it may face continuous inflationary pressure. The token models of various projects differ in details, but generally speaking, Aethir leans more towards SFA, while io.net, Render Network, and Akash Network lean more towards BME. Gensyn’s model is still unknown.
  • In terms of revenue, network demand will be directly reflected in the overall network revenue (excluding miner income, as miners receive rewards for completing tasks and subsidies from projects). According to publicly available data, io.net has the highest value. Although Aethir’s revenue has not yet been disclosed, public information indicates they have announced signing orders with many B-end customers.
  • Regarding token prices, only Render Network and Akash Network have conducted ICOs so far. Aethir and io.net have only recently issued tokens, and their price performance needs further observation, so it will not be discussed in detail here. Gensyn’s plans are still unclear. Judging from the two projects that have issued tokens, and from other projects in the same sector not discussed here, decentralized computing power networks have shown very impressive price performance, reflecting to some extent the sector’s significant market potential and the community’s high expectations.
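The BME/SFA distinction above comes down to where the deflationary pressure on circulating supply originates. The toy model below (invented numbers, not any project’s actual tokenomics) makes that source explicit: under BME, net supply only shrinks when demand-driven burns outpace emissions, while under SFA the liquid float shrinks as providers stake, regardless of demand:

```python
def bme_step(circulating, minted_per_epoch, service_revenue_tokens):
    """Burn-and-mint equilibrium: tokens spent on services are burned,
    while emissions are minted to providers. Net supply falls only when
    demand (the burn) outpaces emissions."""
    return circulating + minted_per_epoch - service_revenue_tokens

def sfa_step(circulating, minted_per_epoch, newly_staked):
    """Stake-for-access: providers lock tokens to qualify for serving,
    so pressure on the liquid float comes from the supply side."""
    return circulating + minted_per_epoch - newly_staked

supply = 1_000_000
# Weak demand under BME -> net inflation:
print(bme_step(supply, minted_per_epoch=10_000, service_revenue_tokens=2_000))   # 1008000
# Strong demand under BME -> net deflation:
print(bme_step(supply, minted_per_epoch=10_000, service_revenue_tokens=15_000))  # 995000
# SFA: float shrinks as providers stake, independent of demand:
print(sfa_step(supply, minted_per_epoch=10_000, newly_staked=30_000))            # 980000
```

The first case is exactly the risk flagged above for BME networks with insufficient demand: emissions continuously outpace burns, producing sustained inflationary pressure.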

2.5 Summary

  • The decentralized computing power network sector is developing rapidly, with many projects already capable of serving customers through their products and generating some revenue. The sector has moved beyond pure narrative and entered a phase where preliminary services can be provided.
  • A common issue faced by decentralized computing power networks is weak demand, with long-term customer needs not being well validated and explored. However, demand-side challenges have not significantly impacted token prices, as the few projects that have issued tokens have shown impressive performance.
  • AI is the main narrative for decentralized computing power networks but not the only application. In addition to AI training and inference, computing power can also be used for real-time rendering in cloud gaming, cloud mobile services, and more.
  • The hardware in computing power networks is highly heterogeneous, and the quality and scale of these networks need further improvement. For C-end users, the cost advantage is not very significant. For B-end users, aside from cost savings, factors such as service stability, reliability, technical support, compliance, and legal support must also be considered. Web3 projects generally do not perform well in these areas.

3 Closing Thoughts

The exponential growth in AI has undeniably led to massive demand for computing power. Since 2012, the computational power used in AI training tasks has grown exponentially, doubling approximately every 3.5 months (by comparison, Moore’s Law predicts a doubling every 18 months). Over that period, demand for computing power has increased by more than 300,000 times, far exceeding the roughly 12-fold increase Moore’s Law would predict. Forecasts predict that the GPU market will grow at a compound annual growth rate of 32% over the next five years, reaching over $200 billion. AMD’s estimates are even higher, with the company predicting that the GPU chip market will reach $400 billion by 2027.
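The gap between the two doubling rates compounds dramatically. As a rough arithmetic sketch (taking a six-year window purely as an example, not the exact window behind the 300,000x figure):

```python
def growth_factor(months, doubling_period_months):
    """Compound growth implied by a fixed doubling period."""
    return 2 ** (months / doubling_period_months)

window = 72  # six years, in months
ai = growth_factor(window, 3.5)    # ~1.6 million-fold
moore = growth_factor(window, 18)  # 16-fold
print(f"3.5-month doubling over 6 years: {ai:,.0f}x")
print(f"18-month doubling over 6 years: {moore:,.0f}x")
```

In other words, the same six years that yield a ~16x gain under Moore’s Law yield a gain on the order of a million-fold under the observed AI-compute trend, which is why hardware supply chains cannot keep pace.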

Image source: https://www.stateof.ai/

The explosive growth of artificial intelligence and other compute-intensive workloads, such as AR/VR rendering, has exposed structural inefficiencies in the traditional, centralized cloud computing market. In theory, decentralized computing power networks can leverage distributed idle computing resources to provide more flexible, cost-effective, and efficient solutions to meet the massive demand for computing resources.

Thus, the combination of crypto and AI has enormous market potential but also faces intense competition with traditional enterprises, high entry barriers, and a complex market environment. Overall, among all crypto sectors, decentralized computing power networks are one of the most promising verticals in the crypto field to meet real demand.

Image source: https://vitalik.eth.limo/general/2024/01/30/cryptoai.html

The future is bright, but the road is challenging. To achieve the aforementioned vision, we need to address numerous problems and challenges. In summary, at this stage, simply providing traditional cloud services results in a small profit margin for projects.

From the demand side, large enterprises typically build their own computing power, while most individual developers tend to choose established cloud services. It remains to be further explored and verified whether small and medium-sized enterprises, the real users of decentralized computing power network resources, will have stable demand.

On the other hand, AI is a vast market with extremely high potential and imagination. To tap into this broader market, future decentralized computing power service providers will need to transition towards offering models and AI services, exploring more use cases of crypto + AI, and expanding the value their projects can create. However, at present, many problems and challenges remain to be addressed before further development into the AI field can be achieved:

  • Price advantage not prominent: Comparing previous data reveals that decentralized computing power networks do not demonstrate significant cost advantages. This may be due to market mechanisms dictating that high-demand specialized chips like H100 and A100 are not priced cheaply. Additionally, the lack of economies of scale from decentralization, high network and bandwidth costs, and the significant complexity of management and operations add hidden costs that further increase computing costs.
  • Specific challenges in AI training: Conducting AI training in a decentralized manner currently faces substantial technical bottlenecks. These bottlenecks are evident in the GPU training workflow: during large language model training, GPUs first receive pre-processed data batches for forward and backward propagation to compute gradients. GPUs then aggregate gradients and update model parameters to keep replicas synchronized. This iterative process continues until all batches are trained or a specified number of epochs is reached, and it involves extensive data transfer and synchronization. Questions such as which parallelization and synchronization strategies to use, how to optimize network bandwidth and latency, and how to reduce communication costs remain largely unresolved. At present, using decentralized computing power networks for AI training is impractical.
  • Data security and privacy concerns: In the training process of large language models, every stage involving data handling and transmission—such as data allocation, model training, and parameter and gradient aggregation—can potentially impact data security and privacy. Privacy concerns are especially critical in models involving sensitive data. Without resolving data privacy issues, scaling on the demand side is not feasible.
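The synchronization step described above is the crux: in synchronous data parallelism, every worker must exchange its full gradient each step before any replica’s parameters can advance. The following pure-Python sketch is a toy model (invented numbers, not any project’s implementation) that shows where that barrier sits in the loop:

```python
# Toy synchronous data-parallel training step: each worker computes a
# local gradient on its shard, all gradients are averaged (the
# "all-reduce"), and every replica applies the same update. Over a
# decentralized network, this averaging step is exactly where bandwidth
# and latency costs bite, since it happens every single step.
def local_gradient(params, batch):
    # Stand-in for forward + backward pass: gradient of mean squared
    # error for a 1-parameter linear model y = w * x.
    w = params[0]
    return [sum(2 * (w * x - y) * x for x, y in batch) / len(batch)]

def all_reduce_mean(grads):
    # Average gradients element-wise across workers; in a real system
    # this is a network collective (e.g. ring all-reduce), not a loop.
    n = len(grads)
    return [sum(g[i] for g in grads) / n for i in range(len(grads[0]))]

params = [0.0]                          # shared model: a single weight w
shards = [[(1.0, 2.0)], [(2.0, 4.0)]]   # one data shard per worker (y = 2x)
lr = 0.1

for step in range(50):
    grads = [local_gradient(params, shard) for shard in shards]
    g = all_reduce_mean(grads)          # synchronization barrier each step
    params = [p - lr * gi for p, gi in zip(params, g)]

print(round(params[0], 2))  # converges toward w = 2.0
```

Even in this two-worker toy, the all-reduce must move the entire gradient every iteration; scale the parameter count to billions and the workers to unreliable nodes spread across the public internet, and the communication and straggler costs that make decentralized training impractical today become apparent.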

From a pragmatic perspective, a decentralized computing power network needs to balance current demand exploration with future market opportunities. It’s crucial to identify a clear product positioning and target audience. Initially focusing on non-AI or Web3 native projects, addressing relatively niche demands, can help establish an early user base. Simultaneously, continuous exploration of various scenarios where AI and crypto converge is essential. This involves exploring technological frontiers and upgrading services to meet evolving needs. By strategically aligning product offerings with market demands and staying at the forefront of technological advancements, decentralized computing power networks can effectively position themselves for sustained growth and market relevance.

References

https://www.stateof.ai/

https://vitalik.eth.limo/general/2024/01/30/cryptoai.html

https://foresightnews.pro/article/detail/34368

https://app.blockworksresearch.com/unlocked/compute-de-pi-ns-paths-to-adoption-in-an-ai-dominated-market?callback=%2Fresearch%2Fcompute-de-pi-ns-paths-to-adoption-in-an-ai-dominated-market

https://research.web3caff.com/zh/archives/17351?ref=1554

Statement:

  1. This article is reproduced from [Youbi Capital], the copyright belongs to the original author [Youbi], if you have any objections to the reprint, please contact the Gate Learn team, and the team will handle it as soon as possible according to relevant procedures.

  2. Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.

  3. Other language versions of the article are translated by the Gate Learn team. Unless Gate.io is mentioned, the translated article may not be reproduced, distributed, or plagiarized.
