AI/DePIN/Sol Ecosystem Triple Halo: Analyzing the IO.NET Token Launch

Advanced · 4/17/2024, 2:18:16 PM
This article will organize the key information of the AI decentralized computing project: the IO.NET project, including product logic, competitive situation, and project background. It also provides valuation estimations, analyzing the valuation from different perspectives through data analysis, and offering a reference calculation for valuation.

Introduction

In my previous article, I mentioned that compared to the previous two cycles, this crypto bull market cycle lacks influential new business and asset narratives. AI is one of the few new narratives in this round of the Web3 field. In this article, I will combine this year’s hot AI project, IO.NET, to ponder the following two issues:

  1. The necessity of AI+Web3 in business

  2. The necessity and challenges of distributed computing services

Furthermore, I will organize the key information of the representative project in AI distributed computing: the IO.NET project, including product logic, competitive situation, and project background. I will also delve into the project’s valuation.

The part of this article about the combination of AI and Web3 was inspired by “The Real Merge” written by Delphi Digital researcher Michael Rinko. Some views in this article digest and quote from that paper, and I recommend readers refer to the original.

This article represents my interim thoughts as of its publication. The situation may change in the future, and the viewpoints have a strong subjective nature. They may also contain factual, data, or reasoning errors. Please do not use this as investment advice, and I welcome criticism and discussion from my peers.

The following is the main text.

1. Business Logic: The Intersection of AI and Web3

1.1 2023: The New “Miracle Year” Created by AI

Looking back on human history, once there is a breakthrough in technology, everything from individual daily life, to various industrial landscapes, and even the entire civilization of humanity, undergoes revolutionary changes.

There are two significant years in the history of mankind, namely 1666 and 1905, which are now referred to as the two great “miracle years” in the history of technology.

The year 1666 is considered a miracle year because Newton’s scientific achievements emerged in remarkable concentration during that time. In that year, he pioneered the branch of physics known as optics, founded the mathematical branch of calculus, and derived the law of gravitation, a fundamental law of modern natural science. Each of these was a foundational contribution to the scientific development of humanity over the following century, greatly accelerating the overall progress of science.

The second miracle year was 1905, when Einstein, at just 26 years old, published four papers in quick succession in the journal Annalen der Physik, covering the photoelectric effect (laying the foundation for quantum mechanics), Brownian motion (a crucial reference for analyzing stochastic processes), the theory of special relativity, and the mass-energy equivalence (the famous formula E=mc²). In later assessments, each of these papers was judged to exceed the average level of a Nobel Prize in Physics (Einstein himself received the Nobel Prize for the paper on the photoelectric effect), and the historical progression of human civilization once again took several giant leaps forward.

The recently passed year of 2023 is likely to be called another “miracle year” because of ChatGPT.

We regard 2023 as another “miracle year” in the history of human technology not only because of the significant advances GPT has made in natural language understanding and generation but also because humanity has deciphered the growth pattern of large language models from the evolution of GPT—that is, by expanding the model parameters and training data, the capabilities of the model can be exponentially enhanced—and this process does not yet see a short-term bottleneck (as long as there is enough computational power).

This capability extends far beyond understanding language and generating dialogue and is being applied across many scientific fields. Take biology: in 2018, the Nobel laureate in Chemistry Frances Arnold said during the award ceremony, “Today, in practical applications, we can read, write, and edit any DNA sequence, but we are still unable to compose it.” Just five years later, in 2023, researchers from Stanford University and Salesforce Research published a paper in Nature Biotechnology. Using a large language model fine-tuned from GPT-3, they generated 1 million new protein sequences from scratch and identified two structurally distinct proteins, both with antibacterial capabilities, a potential new line of defense against bacteria beyond antibiotics. In other words, with the help of AI, the bottleneck in protein “creation” has been broken.

Furthermore, the AI algorithm AlphaFold predicted the structures of nearly all 214 million proteins on Earth within 18 months, a feat that exceeds the collective outcomes of all structural biologists in history by hundreds of times.

With AI-based models in biotechnology, material science, drug development, and other hard sciences, as well as in the humanities such as law and art, a revolutionary transformation is inevitable, and 2023 is indeed the inaugural year for all these advancements.

As we all know, in the past century, human wealth creation has grown exponentially, and the rapid maturation of AI technology will undoubtedly accelerate this process further.

Global GDP trend chart, data source: World Bank

1.2 The Integration of AI and Crypto

To fundamentally understand the necessity of integrating AI and Crypto, we can start from their complementary characteristics.

Complementary Characteristics of AI and Crypto

AI possesses three attributes:

  1. Randomness: The mechanism behind AI’s content production is a black box that is difficult to replicate or inspect, so its outputs carry inherent randomness.

  2. Resource Intensive: AI is a resource-intensive industry requiring significant amounts of energy, chips, and computational power.

  3. Human-like Intelligence: AI will soon be able to pass the Turing test, after which it will become difficult to distinguish humans from machines.

On October 30, 2023, a research team from the University of California, San Diego released Turing test results for GPT-3.5 and GPT-4. GPT-4 scored 41%, just 9 percentage points shy of the 50% passing mark, while human participants scored 63%. Here, the score is the percentage of participants who judged their conversational partner to be human: if it exceeds 50%, at least half of the judges took the entity for a human rather than a machine, which counts as passing the Turing test.

While AI creates new breakthrough productivity for humanity, its three attributes also bring significant challenges to human society, including:

  • How to verify and control the randomness of AI, turning randomness from a flaw into an advantage.

  • How to fulfill the significant energy and computational power needs of AI.

  • How to differentiate between humans and machines.

Crypto and blockchain economics may well be the remedy to the challenges brought by AI. The cryptographic economy has the following three characteristics:

  1. Determinism: Business operations are based on blockchain, code, and smart contracts, with clear rules and boundaries; the input dictates the outcome, ensuring high determinism.

  2. Efficient Resource Allocation: The crypto economy has built a massive global free market where the pricing, fundraising, and circulation of resources are very fast. Due to the presence of tokens, incentives can accelerate the matching of market supply and demand, reaching critical points more quickly.

  3. Trustless: With public ledgers and open-source code, everyone can easily verify operations, leading to a “trustless” system. Furthermore, ZK (Zero-Knowledge) technology avoids the exposure of privacy during verification.

Let’s illustrate the complementarity between AI and the crypto economy with three examples.

Example A: Addressing Randomness, Crypto Economy-Based AI Agents

AI agents, such as those from Fetch.AI, are designed to act on human will and perform tasks on behalf of humans. If we want our AI agent to handle a financial transaction, like “buying $1000 of BTC,” it might face two scenarios:

  • Scenario One: It needs to interface with traditional financial institutions (like BlackRock) to purchase BTC ETFs, facing numerous compatibility issues with AI agents and centralized institutions, such as KYC, documentation review, login, and identity verification, which are currently quite cumbersome.

  • Scenario Two: It operates based on the native crypto economy, which is much simpler; it could execute transactions directly through Uniswap or a similar aggregated trading platform using your account’s signature, completing the transaction quickly and simply to receive WBTC (or another wrapped form of BTC). Essentially, this is what various trading bots are already doing, albeit focused solely on trading for now. As AI integrates and evolves, future trading bots will undoubtedly be capable of executing more complex trading intentions, such as tracking the trading strategies and success rates of 100 smart money addresses on the blockchain, executing similar transactions with 10% of my funds over a week, and stopping and summarizing the reasons for failure if the results are unsatisfactory.

AI performs better within blockchain systems primarily due to the clarity of crypto economic rules and unrestricted system access. Within these defined rules, the potential risks brought by AI’s randomness are minimized. For example, AI has already outperformed humans in card games and video games due to the clear, closed sandbox of rules. However, progress in autonomous driving is relatively slow due to the challenges of the open external environment, and we are less tolerant of the randomness in AI’s problem-solving in such settings.

Example B: Shaping Resources through Token Incentives

The global network behind BTC, with a current total hash rate of 576.70 EH/s, surpasses the combined computational power of any country’s supercomputer. Its development is driven by a simple, fair network incentive.

BTC network computing power trend, source: https://www.coinwarz.com/

In addition to this, DePIN projects such as Helium Mobile are attempting to shape a two-sided market for supply and demand through token incentives, aiming to achieve network effects. IO.NET, the focus of the rest of this article, is a platform designed to aggregate AI computing power, hoping to unleash more AI potential through a token model.

Example C: Open-source code, introduction of Zero-Knowledge Proofs (ZK) to differentiate humans from machines while protecting privacy

As a Web3 project involving OpenAI founder Sam Altman, Worldcoin uses a hardware device called Orb, which generates a unique and anonymous hash value based on human iris biometrics through ZK technology to verify identity and differentiate humans from machines. In early March this year, the Web3 art project Drip began using Worldcoin’s ID to verify real human users and distribute rewards.

Moreover, Worldcoin has recently open-sourced the program code of its iris-recognition hardware Orb, ensuring the security and privacy of user biometric data.

Overall, the crypto economy has become a significant potential solution to the challenges posed by AI to human society, due to the certainty of code and cryptography, the advantages of resource circulation and fundraising brought by token mechanisms, and the trustless nature based on open-source code and public ledgers.

The most urgent and commercially demanding challenge is the extreme hunger for computational resources by AI products, which revolves around the enormous demand for chips and computational power.

This is also the main reason why distributed computing projects have led the overall AI track in this bull market cycle.

The commercial necessity of decentralized computing

AI requires substantial computational resources, both for training models and for inference.

In the practice of training large language models, it has been confirmed that as long as the scale of data parameters is sufficiently large, new capabilities emerge that were not present before. Each generation of GPT shows an exponential leap in capabilities compared to its predecessor, backed by an exponential growth in computational volume needed for model training.

Research by DeepMind and Stanford University shows that across a variety of tasks (arithmetic, Persian QA, natural language understanding, etc.), different large language models perform no better than random answering as long as training compute remains below roughly 10^22 FLOPs (here FLOPs denotes the total number of floating-point operations, a measure of the amount of computation); once the scale crosses that critical threshold, performance on every task improves dramatically, regardless of which model is used.

Source: Emergent Abilities of Large Language Models
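To get a feel for where that 10^22 FLOPs threshold sits, a widely cited rule of thumb from the scaling-law literature (not from this article) estimates training compute as roughly 6 × parameters × training tokens. A minimal sketch, using GPT-3’s commonly reported scale as an example:

```python
# Rule-of-thumb training compute: C ≈ 6 * N * D floating-point operations,
# where N = parameter count and D = number of training tokens.
def train_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# GPT-3-scale training (175B parameters, ~300B tokens) lands at ~3.15e23 FLOPs,
# comfortably above the ~1e22 emergence threshold discussed above.
c = train_flops(175e9, 300e9)
```

The constant 6 (two FLOPs per parameter in the forward pass, four in the backward pass) is a standard approximation, not an exact figure for any specific model.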

It is precisely the principle of “achieving miracles with great computing power” and its practical verification that led Sam Altman, the founder of OpenAI, to propose raising 7 trillion US dollars to build an advanced chip factory that is ten times the size of the current TSMC. It is expected that 1.5 trillion dollars will be spent on this part, with the remaining funds used for chip production and model training.

Besides the training of AI models, the inference process of the models themselves also requires substantial computing power, although less than that needed for training. Therefore, the craving for chips and computing power has become a norm among AI competitors.

Compared to centralized AI computing providers like Amazon Web Services, Google Cloud Platform, and Microsoft’s Azure, the main value propositions of distributed AI computing include:

  • Accessibility: Accessing computing chips through cloud services such as AWS, GCP, or Azure usually takes weeks, and popular GPU models are often out of stock. Moreover, to obtain computing power, consumers often need to sign long-term, inflexible contracts with these large companies. In contrast, distributed computing platforms can provide flexible hardware options with greater accessibility.
  • Lower Pricing: By utilizing idle chips, combined with token subsidies from the network protocol to chip and computing power providers, distributed computing networks may offer more affordable computing power.
  • Censorship Resistance: Currently, cutting-edge computing chips and supplies are monopolized by large technology companies. Additionally, governments led by the United States are intensifying scrutiny over AI computing services. The ability to acquire computing power in a distributed, flexible, and free manner is becoming a clear demand, which is also a core value proposition of web3-based computing service platforms.

If fossil fuels were the lifeblood of the industrial age, then computing power will likely be the lifeblood of the new digital age ushered in by AI, with the supply of computing power becoming the infrastructure of the AI era. Just as stablecoins have become a robust offshoot of fiat currency in the Web3 era, could the distributed computing market become a fast-growing offshoot of the AI computing market?

Since this is still a relatively early market, everything is still under observation. However, the following factors could potentially stimulate the narrative or market adoption of distributed computing:

  • Continuous GPU supply and demand tension. The ongoing tension in GPU supply might encourage some developers to turn to distributed computing platforms.
  • Regulatory expansion. Accessing AI computing services from large cloud computing platforms requires KYC and extensive scrutiny. This might instead encourage the adoption of distributed computing platforms, especially in regions facing restrictions and sanctions.
  • Token price incentives. Bull market cycles and rising token prices increase the subsidy value to the GPU supply side, attracting more suppliers to the market, increasing the market size, and reducing the actual purchase price for consumers.

However, the challenges faced by distributed computing platforms are also quite evident:

  • Technical and Engineering Challenges
  • Proof of Work Issues: The computation for deep learning models, due to their hierarchical structure where each layer’s output serves as the input for the next, requires executing all previous work to verify the computation’s validity. This cannot be simply and effectively verified. To address this issue, distributed computing platforms need to develop new algorithms or use approximate verification techniques, which can provide probabilistic guarantees of result correctness, rather than absolute certainty.
  • Parallelization Challenges: Distributed computing platforms gather the long tail of chip supply, meaning that individual devices can only offer limited computing power. A single chip supplier can hardly complete the training or inference tasks of an AI model independently in a short period, so tasks must be decomposed and distributed through parallelization to shorten the overall completion time. Parallelization also inevitably faces issues such as how tasks are decomposed (especially complex deep learning tasks), data dependency, and additional communication costs between devices.
  • Privacy Protection Issues: How to ensure that the data and models of the purchasing party are not exposed to the task recipients?
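The probabilistic-verification idea mentioned above can be made concrete with a toy spot-check scheme: the verifier re-executes a random subset of subtasks and compares the results against what the supplier reported. This is a generic sketch of the technique, not IO.NET’s actual protocol, and `spot_check` is a hypothetical function name:

```python
import random

def spot_check(reported, recompute, n_checks, rng=random):
    """Re-run n_checks randomly chosen subtasks and compare to reported outputs."""
    indices = rng.sample(range(len(reported)), n_checks)
    return all(reported[i] == recompute(i) for i in indices)

# If a supplier cheats on a fraction f of subtasks, checking k random subtasks
# catches them with probability ~1 - (1 - f)**k, e.g. f=10%, k=50:
p_detect = 1 - (1 - 0.10) ** 50   # ≈ 0.9948
```

This is why such schemes give probabilistic rather than absolute guarantees: the detection probability approaches, but never reaches, certainty as the number of checks grows.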

Regulatory compliance challenges

  • Due to the unlicensed nature of the supply and procurement dual markets of distributed computing platforms, they can attract certain customers as selling points. On the other hand, they may become targets of government regulation as AI regulatory standards are refined. Additionally, some GPU suppliers may worry about whether their leased computing resources are being provided to sanctioned businesses or individuals.

Overall, the consumers of distributed computing platforms are mostly professional developers or small to medium-sized institutions. Unlike cryptocurrency and NFT investors, these users expect stability and continuity in the services offered by the protocol, and price is not necessarily their primary consideration. For now, distributed computing platforms still have a long way to go to win the approval of such users.

Next, we will organize and analyze the project information for a new distributed computing project in this cycle, IO.NET, and estimate its possible market valuation after listing, based on current market competitors in the AI and distributed computing sectors.

2. Distributed AI Computing Platform: IO.NET

2.1 Project Positioning

IO.NET is a decentralized computing network that has established a bilateral market centered around chips. The supply side consists of chips (primarily GPUs, but also CPUs and Apple’s iGPUs) distributed globally, while the demand side is comprised of artificial intelligence engineers seeking to perform AI model training or inference tasks.

As stated on IO.NET’s official website:

Our Mission

Putting together one million GPUs in a DePIN – decentralized physical infrastructure network.

That is, the mission is to integrate one million GPUs into its DePIN network.

Compared to existing cloud AI computing service providers, IO.NET emphasizes the following key selling points:

  • Flexible Combination: AI engineers can freely select and combine the chips they need to form “Clusters” to complete their computing tasks.
  • Rapid Deployment: Deployment can be completed in seconds, without the weeks of approval and waiting typically required by centralized providers like AWS.
  • Cost-effective Service: The cost of services is 90% lower than that of mainstream providers.

In addition, IO.NET plans to launch services such as an AI model store in the future.

2.2 Product Mechanism and Business Data

Product Mechanism and Deployment Experience

Similar to Amazon Cloud, Google Cloud, and Alibaba Cloud, the computing service provided by IO.NET is called IO Cloud. IO Cloud is a distributed, decentralized network of chips capable of executing Python-based machine learning code and running AI and machine learning programs.

The basic business module of IO Cloud is called “Clusters.” Clusters are groups of GPUs that can autonomously coordinate to complete computing tasks. Artificial intelligence engineers can customize their desired Clusters based on their needs.

IO.NET’s product interface is highly user-friendly. If you need to deploy your own chip Clusters to complete AI computing tasks, you can start configuring your desired chip Clusters as soon as you enter the Clusters product page on their website.

Page information: https://cloud.io.net/cloud/clusters/create-cluster, the same below

First, you need to select your project scenario, and currently, there are three types available:

  1. General (Generic type): Provides a more generic environment, suitable for early project stages when specific resource needs are uncertain.

  2. Train (Training type): Designed for the training and fine-tuning of machine learning models. This option offers additional GPU resources, higher memory capacity, and/or faster network connections to handle these intensive computational tasks.

  3. Inference (Inference type): Designed for low-latency inference and high-load tasks. In the context of machine learning, inference refers to using trained models to predict or analyze new data and provide feedback. Therefore, this option focuses on optimizing latency and throughput to support real-time or near-real-time data processing needs.

Next, you need to choose the supplier for the chip Clusters. Currently, IO.NET has partnerships with Render Network and Filecoin’s mining network, allowing users to choose chips from IO.NET or the other two networks as their computing Clusters’ supplier. IO.NET acts as an aggregator (although, at the time of writing, Filecoin’s service is temporarily offline). Notably, according to the page display, the online available GPU count for IO.NET is over 200,000, while that for Render Network is over 3,700.

Finally, you enter the chip hardware selection phase for the Clusters. Currently, IO.NET only lists GPUs for selection, excluding CPUs or Apple’s iGPUs (M1, M2, etc.), and the GPUs mainly feature NVIDIA products.

In the official list of available GPU hardware options, based on data tested by the author on that day, the total number of GPUs available online in the IO.NET network is 206,001. Of these, the GeForce RTX 4090 has the highest availability with 45,250 units, followed by the GeForce RTX 3090 Ti with 30,779 units.

Additionally, the A100-SXM4-80GB chip, which is more efficient for AI computing tasks such as machine learning, deep learning, and scientific computation (market price over $15,000), has 7,965 units online.

The NVIDIA H100 80GB HBM3 graphics card, specifically designed from the ground up for AI (market price over $40,000), has a training performance 3.3 times that of the A100 and an inference performance 4.5 times that of the A100, with a total of 86 units online.

After selecting the hardware type for Clusters, users also need to choose the region, communication speed, number of GPUs rented, and rental duration, among other parameters.

Finally, IO.NET will provide a bill based on the comprehensive selection. For example, in the author’s Clusters configuration:

  • General task scenario
  • 16 A100-SXM4-80GB chips
  • Ultra high-speed connection
  • Located in the USA
  • Rental period of 1 week

The total bill comes to $3,311.60, at an hourly price of $1.232 per card.

In comparison, the hourly rental prices of the A100-SXM4-80GB on Amazon Cloud, Google Cloud, and Microsoft Azure are $5.12, $5.07, and $3.67, respectively (data source: https://cloud-gpus.com/, actual prices may vary based on contract details).
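The sample bill above is easy to sanity-check, and the cited AWS rate works out to roughly a 76% per-GPU-hour saving for this particular card (all figures taken from the article; the calculation below is just a quick sketch):

```python
# Sanity-check the author's sample bill: 16 A100s for 1 week at $1.232/GPU-hour.
gpus = 16
price_per_gpu_hour = 1.232
hours = 7 * 24                               # 1-week rental
total = gpus * price_per_gpu_hour * hours    # ≈ 3311.6, matching the quoted bill

# Versus the cited AWS on-demand rate for the same card:
aws_hourly = 5.12
saving = 1 - price_per_gpu_hour / aws_hourly  # ≈ 0.76, i.e. ~76% cheaper
```

Note that this 76% figure applies only to this one comparison; the “90% lower” claim in section 2.1 is IO.NET’s own marketing statement.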

Thus, purely in terms of price, IO.NET’s computing power is significantly cheaper than that of mainstream manufacturers, and the supply and procurement options are very flexible, making it easy to get started.

Business conditions

Supply side situation

As of April 4 this year, according to official data, IO.NET has a total supply of 371,027 GPUs and 42,321 CPUs on the supply side. In addition, Render Network, as its partner, has also connected 9,997 GPUs and 776 CPUs to the network’s supply.

Data source: https://cloud.io.net/explorer/home, the same below

As of the writing of this article, 214,387 of the GPUs connected by IO.NET are online, with an online rate of 57.8%. The online rate for GPUs from Render Network is 45.1%.

What do the above supply-side data imply?

To provide a comparison, let’s introduce another, older distributed computing project, Akash Network, for contrast. Akash Network launched its mainnet as early as 2020, initially focusing on distributed services for CPUs and storage. In June 2023, it launched a testnet for GPU services and went live with its mainnet for distributed GPU computing power in September of the same year.

Data source: https://stats.akash.network/provider-graph/graphics-gpu

According to official data from Akash, although the supply side has continued to grow, the total number of GPUs connected to its network has only reached 365 to date.

In terms of GPU supply volume, IO.NET is several orders of magnitude higher than Akash Network, making it the largest supply network in the distributed GPU computing power race.

Demand side situation

However, looking at the demand side, IO.NET is still in the early stages of market cultivation, and the actual volume of computing tasks performed using IO.NET is not large. Most of the online GPUs have a workload of 0%, with only four types of chips—A100 PCIe 80GB K8S, RTX A6000 K8S, RTX A4000 K8S, and H100 80GB HBM3—handling tasks. Except for the A100 PCIe 80GB K8S, the workload of the other three chips is less than 20%.

The network stress value officially disclosed on that day was 0%, indicating that most of the chip supply sat online on standby. Meanwhile, IO.NET has generated a total of $586,029 in service fees, with fees over the most recent day amounting to $3,200.

Data source: https://cloud.io.net/explorer/clusters

The scale of these network settlement fees, both in total and in daily transaction volume, is on the same order of magnitude as Akash, although most of Akash’s network revenue comes from the CPU segment, with over 20,000 CPUs supplied.

Data source: https://stats.akash.network/

Additionally, IO.NET has disclosed data on AI inference tasks processed by the network; to date, it has processed and verified more than 230,000 inference tasks, though most of this volume has been generated by projects sponsored by IO.NET, such as BC8.AI.

Data source: https://cloud.io.net/explorer/inferences

Based on the current business data, IO.NET’s supply side expansion is progressing smoothly, buoyed by the anticipation of airdrops and a community event dubbed “Ignition”, which has quickly amassed a significant amount of AI chip computing power. However, the expansion on the demand side is still in its early stages, with organic demand currently insufficient. It remains to be assessed whether the current lack of demand is due to the fact that consumer outreach has not yet begun, or because the current service experience is not stable enough, thus lacking widespread adoption.

Considering the short-term difficulty in bridging the gap in AI computing power, many AI engineers and projects are seeking alternative solutions, which may spark interest in decentralized service providers. Additionally, since IO.NET has not yet initiated economic and activity incentives for the demand side, along with the gradual improvement of the product experience, the eventual matching of supply and demand is still anticipated with optimism.

2.3 Team Background and Financing

Team situation

IO.NET’s core team initially focused on quantitative trading, developing institutional-level quantitative trading systems for stocks and crypto assets until June 2022. Driven by the backend system’s need for computing power, the team began to explore the possibilities of decentralized computing, ultimately focusing on reducing the cost of GPU computing services.

Founder & CEO: Ahmad Shadid, who has a background in quantitative finance and engineering and has also volunteered with the Ethereum Foundation.

CMO & Chief Strategy Officer: Garrison Yang, who joined IO.NET in March this year. He was previously the VP of Strategy and Growth at Avalanche and graduated from the University of California, Santa Barbara.

COO: Tory Green, previously the COO at Hum Capital and Director of Corporate Development and Strategy at Fox Mobile Group, graduated from Stanford.

According to LinkedIn information, IO.NET is headquartered in New York, USA, with a branch in San Francisco, and the team size exceeds 50 members.

Financing situation

As of now, IO.NET has only disclosed one round of financing, which is the Series A completed in March this year, valued at USD 1 billion. It raised USD 30 million led by Hack VC, with other participants including Multicoin Capital, Delphi Digital, Foresight Ventures, Animoca Brands, Continue Capital, Solana Ventures, Aptos, LongHash Ventures, OKX Ventures, Amber Group, SevenX Ventures, and ArkStream Capital.

It is worth mentioning that perhaps due to the investment from the Aptos Foundation, the BC8.AI project, originally settling accounts on Solana, has switched to the high-performance L1 blockchain Aptos.

2.4 Valuation Estimation

According to IO.NET’s founder and CEO Ahmad Shadid, the company will launch its token at the end of April.

IO.NET has two comparable projects for valuation reference: Render Network and Akash Network, both representative of distributed computing projects.

There are two ways to extrapolate the market cap range of IO.NET: 1. Price-to-sales ratio (P/S ratio), i.e., market cap/revenue ratio; 2. Market cap per network chip ratio.

First, let’s look at the valuation extrapolation based on the P/S ratio:

From the perspective of P/S ratio, Akash can serve as the lower limit of IO.NET’s valuation range, while Render acts as a reference for high valuation pricing. Their FDV (Fully Diluted Valuation) range is from USD 1.67 billion to USD 5.93 billion.

However, considering that IO.NET is a newer project with a hotter narrative, a smaller initial circulating market cap, and a currently larger supply side, the likelihood of its FDV exceeding Render’s is considerable.

Next, let’s look at another valuation perspective, the “market-to-core ratio”.

In a market where the demand for AI computing power exceeds supply, the most crucial element of distributed AI computing power networks is the scale of GPU supply. Therefore, we can use the “market-to-core ratio,” the ratio of total project market cap to the number of chips in the network, to extrapolate the possible valuation range of IO.NET for readers as a market value reference.
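The extrapolation itself is a simple ratio. The sketch below illustrates the method; the comparable FDV figures are ballpark assumptions made for this illustration (the article does not disclose them in the text), so the outputs only roughly approximate the range the article arrives at:

```python
# Market-to-core extrapolation: FDV_target = (FDV_comp / chips_comp) * chips_target
def fdv_by_chip_ratio(comp_fdv, comp_chips, target_chips):
    return comp_fdv / comp_chips * target_chips

io_chips = 371_027 + 42_321   # IO.NET GPUs + CPUs, per the April 4 disclosure
akash_chips = 365 + 20_000    # Akash GPUs + CPUs (approximate, from section 2.2)
render_chips = 9_997 + 776    # Render chips connected via IO.NET

# Assumed comparable FDVs (illustrative only):
low = fdv_by_chip_ratio(1.0e9, akash_chips, io_chips)    # Akash as lower bound
high = fdv_by_chip_ratio(5.1e9, render_chips, io_chips)  # Render as upper bound
```

With these assumptions, the lower and upper bounds land around USD 20 billion and USD 196 billion respectively, in the same ballpark as the range discussed next.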


If calculated based on the market-to-core ratio, with Render Network as the upper limit and Akash Network as the lower limit, the FDV range for IO.NET is between USD 20.6 billion and USD 197.5 billion.

Even readers who are bullish on the IO.NET project would likely regard this as a very optimistic market value estimate.

Moreover, we need to consider that the current large online chip count of IO.NET may be stimulated by airdrop expectations and incentive activities, and the actual online count on the supply side still needs to be observed after the project officially launches.

Therefore, overall, the valuation estimation from the P/S ratio perspective may be more referential.

IO.NET, as a project that combines AI, DePIN, and the Solana ecosystem, awaits its market performance post-launch with great anticipation.

3. Reference information

Disclaimer:

  1. This article is reprinted from [mintventures]. All copyrights belong to the original author IO.NET. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

AI/DePIN/Sol Ecosystem Triple Halo: Analyzing the IO.NET Token Launch

Advanced4/17/2024, 2:18:16 PM
This article will organize the key information of the AI decentralized computing project: the IO.NET project, including product logic, competitive situation, and project background. It also provides valuation estimations, analyzing the valuation from different perspectives through data analysis, and offering a reference calculation for valuation.

Introduction

In my previous article, I mentioned that compared to the previous two cycles, this crypto bull market cycle lacks influential new business and asset narratives. AI is one of the few new narratives in this round of the Web3 field. In this article, I will combine this year’s hot AI project, IO.NET, to ponder the following two issues:

  1. The necessity of AI+Web3 in business

  2. The necessity and challenges of distributed computing services

Furthermore, I will organize the key information of the representative project in AI distributed computing: the IO.NET project, including product logic, competitive situation, and project background. I will also delve into the project’s valuation.

The part of this article about the combination of AI and Web3 was inspired by “The Real Merge” written by Delphi Digital researcher Michael Rinko. Some views in this article digest and quote from that paper, and I recommend readers refer to the original.

This article represents my interim thoughts as of its publication. The situation may change in the future, and the viewpoints have a strong subjective nature. They may also contain factual, data, or reasoning errors. Please do not use this as investment advice, and I welcome criticism and discussion from my peers.

The following is the main text.

1.Business Logic: The Intersection of AI and Web3

1.1 2023: The New “Miracle Year” Created by AI

Looking back on human history, once there is a breakthrough in technology, everything from individual daily life, to various industrial landscapes, and even the entire civilization of humanity, undergoes revolutionary changes.

There are two significant years in the history of mankind, namely 1666 and 1905, which are now referred to as the two great “miracle years” in the history of technology.

The year 1666 is considered a miracle year because of the extraordinary concentration of Newton’s scientific achievements during it. In that year he pioneered the branch of physics known as optics, founded the mathematical branch of calculus, and derived the law of gravitation, a fundamental law of modern natural science. Each of these was a foundational contribution to scientific development over the following century, dramatically accelerating the overall progress of science.

The second miracle year was 1905, when the 26-year-old Einstein published four papers in quick succession in the Annalen der Physik, covering the photoelectric effect (laying the foundation for quantum mechanics), Brownian motion (a crucial reference for analyzing stochastic processes), the theory of special relativity, and the mass-energy equation (the famous E=mc²). In later assessments, each of these papers was judged to exceed the average level of a Nobel Prize in Physics (Einstein himself received the Nobel Prize for the photoelectric-effect paper), and once again human civilization took several giant leaps forward.

The recently passed year of 2023 is likely to be called another “miracle year” because of ChatGPT.

We regard 2023 as another “miracle year” in the history of human technology not only because of GPT’s significant advances in natural language understanding and generation, but also because GPT’s evolution revealed a scaling pattern for large language models: by expanding model parameters and training data, a model’s capabilities can be improved by leaps, and this process shows no short-term bottleneck so far (provided enough computing power is available).

These capabilities extend far beyond understanding language and generating dialogue and are being applied across many technical fields. Take biology: at the 2018 Nobel award ceremony, Chemistry laureate Frances Arnold said, “Today, in practical applications, we can read, write, and edit any DNA sequence, but we are still unable to compose it.” Just five years later, in 2023, researchers from Stanford University and Salesforce Research published a paper in Nature Biotechnology in which a large language model fine-tuned from GPT-3 created 1 million new protein sequences from scratch; among them, they identified two proteins with distinct structures, both with antibacterial activity and the potential to become a new way of fighting bacteria beyond antibiotics. In other words, with AI’s help, the bottleneck in protein “creation” has been broken.

Furthermore, the AI algorithm AlphaFold predicted the structures of nearly all 214 million proteins on Earth within 18 months, a feat that exceeds the collective outcomes of all structural biologists in history by hundreds of times.

With AI-based models in biotechnology, material science, drug development, and other hard sciences, as well as in the humanities such as law and art, a revolutionary transformation is inevitable, and 2023 is indeed the inaugural year for all these advancements.

As we all know, in the past century, human wealth creation has grown exponentially, and the rapid maturation of AI technology will undoubtedly accelerate this process further.

Global GDP trend chart, data source: World Bank

1.2 The Integration of AI and Crypto

To fundamentally understand the necessity of integrating AI and Crypto, we can start from their complementary characteristics.

Complementary Characteristics of AI and Crypto

AI possesses three attributes:

  1. Randomness: AI exhibits randomness; the mechanism behind its content production is a black box that is difficult to replicate and inspect, thus the results are also random.

  2. Resource Intensive: AI is a resource-intensive industry requiring significant amounts of energy, chips, and computational power.

  3. Human-like Intelligence: AI will soon be able to pass the Turing test, thereafter making it difficult to distinguish between humans and machines.

On October 30, 2023, a research team from the University of California, San Diego released Turing test results for GPT-3.5 and GPT-4.0. GPT-4.0 scored 41%, 9 percentage points short of the 50% passing mark, while human participants scored 63%. Here the score is the percentage of participants who judged their conversational partner to be human; a score above 50% means at least half of the participants took their counterpart for a human rather than a machine, which counts as passing the Turing test.

While AI creates new breakthrough productivity for humanity, its three attributes also bring significant challenges to human society, including:

  • How to verify and control the randomness of AI, turning randomness from a flaw into an advantage.

  • How to fulfill the significant energy and computational power needs of AI.

  • How to differentiate between humans and machines.

Crypto and blockchain economics may well be the remedy to the challenges brought by AI. The cryptographic economy has the following three characteristics:

  1. Determinism: Business operations are based on blockchain, code, and smart contracts, with clear rules and boundaries; the input dictates the outcome, ensuring high determinism.

  2. Efficient Resource Allocation: The crypto economy has built a massive global free market where the pricing, fundraising, and circulation of resources are very fast. Due to the presence of tokens, incentives can accelerate the matching of market supply and demand, reaching critical points more quickly.

  3. Trustless: With public ledgers and open-source code, everyone can easily verify operations, leading to a “trustless” system. Furthermore, ZK (Zero-Knowledge) technology avoids the exposure of privacy during verification.

Let’s illustrate the complementarity between AI and the crypto economy with three examples.

Example A: Addressing Randomness, Crypto Economy-Based AI Agents

AI agents, such as those from Fetch.AI, are designed to act on human will and perform tasks on behalf of humans. If we want our AI agent to handle a financial transaction, like “buying $1000 of BTC,” it might face two scenarios:

  • Scenario One: It needs to interface with traditional financial institutions (like BlackRock) to purchase BTC ETFs, facing numerous compatibility issues with AI agents and centralized institutions, such as KYC, documentation review, login, and identity verification, which are currently quite cumbersome.

  • Scenario Two: It operates based on the native crypto economy, which is much simpler; it could execute transactions directly through Uniswap or a similar aggregated trading platform using your account’s signature, completing the transaction quickly and simply to receive WBTC (or another wrapped form of BTC). Essentially, this is what various trading bots are already doing, albeit focused solely on trading for now. As AI integrates and evolves, future trading bots will undoubtedly be capable of executing more complex trading intentions, such as tracking the trading strategies and success rates of 100 smart money addresses on the blockchain, executing similar transactions with 10% of my funds over a week, and stopping and summarizing the reasons for failure if the results are unsatisfactory.

AI performs better within blockchain systems primarily due to the clarity of crypto economic rules and unrestricted system access. Within these defined rules, the potential risks brought by AI’s randomness are minimized. For example, AI has already outperformed humans in card games and video games due to the clear, closed sandbox of rules. However, progress in autonomous driving is relatively slow due to the challenges of the open external environment, and we are less tolerant of the randomness in AI’s problem-solving in such settings.

Example B: Shaping Resources through Token Incentives

The global network behind BTC, with a current total hash rate of 576.70 EH/s, surpasses the combined computational power of any country’s supercomputer. Its development is driven by a simple, fair network incentive.

BTC network computing power trend, source: https://www.coinwarz.com/

In addition to this, projects including Mobile’s DePIN are attempting to shape a two-sided market for supply and demand through token incentives, aiming to achieve network effects. The focus of the following discussion in this article, IO.NET, is a platform designed to aggregate AI computational power, hoping to unleash more AI potential through a token model.

Example C: Open-source code, introduction of Zero-Knowledge Proofs (ZK) to differentiate humans from machines while protecting privacy

As a Web3 project involving OpenAI founder Sam Altman, Worldcoin uses a hardware device called Orb, which generates a unique and anonymous hash value based on human iris biometrics through ZK technology to verify identity and differentiate humans from machines. In early March this year, the Web3 art project Drip began using Worldcoin’s ID to verify real human users and distribute rewards.

Moreover, Worldcoin has recently open-sourced the program code of its iris-recognition hardware Orb, ensuring the security and privacy of user biometric data.

Overall, the crypto economy has become a significant potential solution to the challenges posed by AI to human society, due to the certainty of code and cryptography, the advantages of resource circulation and fundraising brought by token mechanisms, and the trustless nature based on open-source code and public ledgers.

The most urgent and commercially demanding challenge is the extreme hunger for computational resources by AI products, which revolves around the enormous demand for chips and computational power.

This is also the main reason why distributed computing projects have led the overall AI track in this bull market cycle.

The commercial necessity of decentralized computing

AI requires substantial computational resources, both for training models and for inference.

In the practice of training large language models, it has been confirmed that as long as the scale of data parameters is sufficiently large, new capabilities emerge that were not present before. Each generation of GPT shows an exponential leap in capabilities compared to its predecessor, backed by an exponential growth in computational volume needed for model training.

Research by DeepMind and Stanford University shows that across a range of tasks (arithmetic, Persian QA, natural language understanding, etc.), different large language models perform no better than random guessing until training compute reaches roughly 10^22 FLOPs (here FLOPs counts the total number of floating-point operations spent on training, a measure of compute); once scale crosses that critical threshold, performance on these tasks improves dramatically, regardless of the model.

Source: Emergent Abilities of Large Language Models
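The ~10^22 FLOPs figure can be put in perspective with the widely used rule of thumb from the scaling-law literature that training a transformer costs roughly 6 × N × D floating-point operations, where N is the parameter count and D the number of training tokens. The model sizes below are hypothetical examples, not figures from the study:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute via the common 6*N*D approximation."""
    return 6 * n_params * n_tokens

EMERGENCE_THRESHOLD = 1e22  # approximate threshold discussed above

# Hypothetical examples: a 1B-parameter model on 100B tokens vs.
# a 10B-parameter model on 300B tokens.
small = training_flops(1e9, 100e9)    # 6e20 FLOPs -- below the threshold
large = training_flops(10e9, 300e9)   # 1.8e22 FLOPs -- above it
print(small > EMERGENCE_THRESHOLD)    # False
print(large > EMERGENCE_THRESHOLD)    # True
```

The 6·N·D estimate is an approximation; actual training compute varies with architecture and implementation.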

It is precisely this principle of “great computing power producing miracles,” and its verification in practice, that led OpenAI founder Sam Altman to float the idea of raising USD 7 trillion to build advanced chip fabs roughly ten times the scale of today’s TSMC, with an expected USD 1.5 trillion going to the fabs themselves and the remaining funds used for chip production and model training.

Besides the training of AI models, the inference process of the models themselves also requires substantial computing power, although less than that needed for training. Therefore, the craving for chips and computing power has become a norm among AI competitors.

Compared to centralized AI computing providers like Amazon Web Services, Google Cloud Platform, and Microsoft’s Azure, the main value propositions of distributed AI computing include:

  • Accessibility: Accessing computing chips through cloud services such as AWS, GCP, or Azure usually takes weeks, and popular GPU models are often out of stock. Moreover, to obtain computing power, consumers often need to sign long-term, inflexible contracts with these large companies. In contrast, distributed computing platforms can provide flexible hardware options with greater accessibility.
  • Lower Pricing: By utilizing idle chips, combined with token subsidies from the network protocol to chip and computing power providers, distributed computing networks may offer more affordable computing power.
  • Censorship Resistance: Currently, cutting-edge computing chips and supplies are monopolized by large technology companies. Additionally, governments led by the United States are intensifying scrutiny over AI computing services. The ability to acquire computing power in a distributed, flexible, and free manner is becoming a clear demand, which is also a core value proposition of web3-based computing service platforms.

If fossil fuels were the lifeblood of the industrial age, then computing power will likely be the lifeblood of the new digital age ushered in by AI, with the supply of computing power becoming the infrastructure of the AI era. Just as stablecoins have become a robust offshoot of fiat currency in the Web3 era, could the distributed computing market become a fast-growing offshoot of the AI computing market?

Since this is still a relatively early market, everything is still under observation. However, the following factors could potentially stimulate the narrative or market adoption of distributed computing:

  • Continuous GPU supply and demand tension. The ongoing tension in GPU supply might encourage some developers to turn to distributed computing platforms.
  • Regulatory expansion. Accessing AI computing services from large cloud computing platforms requires KYC and extensive scrutiny. This might instead encourage the adoption of distributed computing platforms, especially in regions facing restrictions and sanctions.
  • Token price incentives. Bull market cycles and rising token prices increase the subsidy value to the GPU supply side, attracting more suppliers to the market, increasing the market size, and reducing the actual purchase price for consumers.

However, the challenges faced by distributed computing platforms are also quite evident:

  • Technical and Engineering Challenges
  • Verification of computation: because deep learning models are hierarchical, with each layer’s output serving as the next layer’s input, verifying a result naively requires re-executing all the preceding work, which is neither simple nor efficient. To address this, distributed computing platforms need to develop new algorithms or use approximate verification techniques, which can provide probabilistic guarantees of result correctness rather than absolute certainty.
  • Parallelization Challenges: Distributed computing platforms gather the long tail of chip supply, meaning that individual devices can only offer limited computing power. A single chip supplier can hardly complete the training or inference tasks of an AI model independently in a short period, so tasks must be decomposed and distributed through parallelization to shorten the overall completion time. Parallelization also inevitably faces issues such as how tasks are decomposed (especially complex deep learning tasks), data dependency, and additional communication costs between devices.
  • Privacy Protection Issues: How to ensure that the data and models of the purchasing party are not exposed to the task recipients?
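To make the “probabilistic guarantees” idea above concrete: if a worker submits its intermediate activations along with the final result, a verifier can recompute a small random sample of layer transitions instead of redoing the whole job. The sketch below is a toy illustration of the idea, not IO.NET’s actual protocol; the doubling “layer” and the sample size are invented for the example:

```python
import random

def spot_check(x0, claimed, layer_fn, sample_size=3, seed=0):
    """Verify a random subset of layer transitions.

    claimed[i] is the worker's reported activation after layer i; each
    sampled transition is recomputed from the preceding claimed value,
    so a cheater is caught with probability growing in sample_size.
    """
    rng = random.Random(seed)
    prev = [x0] + claimed[:-1]                     # input to each layer
    picked = rng.sample(range(len(claimed)), sample_size)
    return all(layer_fn(prev[i]) == claimed[i] for i in picked)

layer = lambda x: 2 * x                            # toy stand-in for a real layer
x0 = 1
honest = [x0 * 2 ** (i + 1) for i in range(10)]    # correct activations
faked = [v + 1 for v in honest]                    # every transition is wrong

print(spot_check(x0, honest, layer))  # True: sampled transitions all reproduce
print(spot_check(x0, faked, layer))   # False: any sampled transition mismatches
```

Real systems must also handle floating-point nondeterminism across hardware, which is one reason exact-equality checks like this are only a starting point.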

Regulatory compliance challenges

  • Due to the unlicensed nature of the supply and procurement dual markets of distributed computing platforms, they can attract certain customers as selling points. On the other hand, they may become targets of government regulation as AI regulatory standards are refined. Additionally, some GPU suppliers may worry about whether their leased computing resources are being provided to sanctioned businesses or individuals.

Overall, the consumers of distributed computing platforms are mostly professional developers or small to medium-sized institutions. Unlike retail buyers of cryptocurrencies and NFTs, these users care about the stability and continuity of the service a protocol offers, and price is not necessarily their main decision factor. For now, distributed computing platforms still have a long way to go to win the approval of such users.

Next, we will organize and analyze the project information for a new distributed computing project in this cycle, IO.NET, and estimate its possible market valuation after listing, based on current market competitors in the AI and distributed computing sectors.

2. Distributed AI Computing Platform: IO.NET

2.1 Project Positioning

IO.NET is a decentralized computing network that has established a bilateral market centered around chips. The supply side consists of chips (primarily GPUs, but also CPUs and Apple’s iGPUs) distributed globally, while the demand side is comprised of artificial intelligence engineers seeking to perform AI model training or inference tasks.

As stated on IO.NET’s official website:

Our Mission

Putting together one million GPUs in a DePIN – decentralized physical infrastructure network.

In other words, IO.NET’s stated mission is to aggregate one million GPUs into its DePIN network.

Compared to existing cloud AI computing service providers, IO.NET emphasizes the following key selling points:

  • Flexible Combination: AI engineers can freely select and combine the chips they need to form “Clusters” to complete their computing tasks.
  • Rapid Deployment: Deployment can be completed in seconds, without the weeks of approval and waiting typically required by centralized providers like AWS.
  • Cost-effective Service: The cost of services is 90% lower than that of mainstream providers.

In addition, IO.NET plans to launch services such as an AI model store in the future.

2.2 Product Mechanism and Business Data

Product Mechanism and Deployment Experience

Similar to Amazon Cloud, Google Cloud, and Alibaba Cloud, the computing service provided by IO.NET is called IO Cloud. IO Cloud is a distributed, decentralized network of chips capable of executing Python-based machine learning code and running AI and machine learning programs.

The basic business module of IO Cloud is called “Clusters.” Clusters are groups of GPUs that can autonomously coordinate to complete computing tasks. Artificial intelligence engineers can customize their desired Clusters based on their needs.

IO.NET’s product interface is highly user-friendly. If you need to deploy your own chip Clusters to complete AI computing tasks, you can start configuring your desired chip Clusters as soon as you enter the Clusters product page on their website.

Page information: https://cloud.io.net/cloud/clusters/create-cluster, the same below

First, you need to select your project scenario, and currently, there are three types available:

  1. General (Generic type): Provides a more generic environment, suitable for early project stages when specific resource needs are uncertain.

  2. Train (Training type): Designed for the training and fine-tuning of machine learning models. This option offers additional GPU resources, higher memory capacity, and/or faster network connections to handle these intensive computational tasks.

  3. Inference (Inference type): Designed for low-latency inference and high-load tasks. In the context of machine learning, inference refers to using trained models to predict or analyze new data and provide feedback. Therefore, this option focuses on optimizing latency and throughput to support real-time or near-real-time data processing needs.

Next, you need to choose the supplier for the chip Clusters. Currently, IO.NET has partnerships with Render Network and Filecoin’s mining network, allowing users to choose chips from IO.NET or the other two networks as their computing Clusters’ supplier. IO.NET acts as an aggregator (although, at the time of writing, Filecoin’s service is temporarily offline). Notably, according to the page display, the online available GPU count for IO.NET is over 200,000, while that for Render Network is over 3,700.

Finally, you enter the chip hardware selection phase for the Clusters. Currently, IO.NET only lists GPUs for selection, excluding CPUs or Apple’s iGPUs (M1, M2, etc.), and the GPUs mainly feature NVIDIA products.

In the official list of available GPU hardware, as of the author’s test that day, the total number of GPUs online in the IO.NET network was 206,001. The GeForce RTX 4090 had the highest availability at 45,250 units, followed by the GeForce RTX 3090 Ti at 30,779 units.

Additionally, the A100-SXM4-80GB chip, which is more efficient for AI computing tasks such as machine learning, deep learning, and scientific computation (market price over $15,000), has 7,965 units online.

The NVIDIA H100 80GB HBM3 graphics card, specifically designed from the ground up for AI (market price over $40,000), has a training performance 3.3 times that of the A100 and an inference performance 4.5 times that of the A100, with a total of 86 units online.

After selecting the hardware type for Clusters, users also need to choose the region, communication speed, number of GPUs rented, and rental duration, among other parameters.

Finally, IO.NET will provide a bill based on the comprehensive selection. For example, in the author’s Clusters configuration:

  • General task scenario
  • 16 A100-SXM4-80GB chips
  • Ultra high-speed connection
  • Located in the USA
  • Rental period of 1 week

The total bill comes to $3,311.60, an hourly price of $1.232 per card.

In comparison, the hourly rental prices of the A100-SXM4-80GB on Amazon Cloud, Google Cloud, and Microsoft Azure are $5.12, $5.07, and $3.67, respectively (data source: https://cloud-gpus.com/, actual prices may vary based on contract details).
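The quoted bill is straightforward to reproduce: 16 cards × $1.232 per card-hour × 168 hours in a week. A quick sanity check, also pricing the same cluster for a week at the per-card hourly rates quoted above (listed prices only; real quotes depend on contract details):

```python
cards, io_hourly, hours = 16, 1.232, 24 * 7

io_week = cards * io_hourly * hours
print(f"IO.NET, 1 week: ${io_week:,.1f}")          # $3,311.6 -- matches the bill

# Same 16-card cluster for a week at the quoted mainstream rates:
for name, hourly in [("AWS", 5.12), ("GCP", 5.07), ("Azure", 3.67)]:
    week = cards * hourly * hours
    print(f"{name}, 1 week: ${week:,.1f} ({week / io_week:.1f}x IO.NET)")
```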

Thus, purely in terms of price, IO.NET’s computing power is significantly cheaper than that of mainstream manufacturers, and the supply and procurement options are very flexible, making it easy to get started.

Business conditions

Supply side situation

As of April 4 this year, according to official data, IO.NET has a total supply of 371,027 GPUs and 42,321 CPUs on the supply side. In addition, Render Network, as its partner, has also connected 9,997 GPUs and 776 CPUs to the network’s supply.

Data source: https://cloud.io.net/explorer/home, the same below

As of the writing of this article, 214,387 of the GPUs connected by IO.NET are online, with an online rate of 57.8%. The online rate for GPUs from Render Network is 45.1%.
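These online rates follow directly from the supply figures: 214,387 of IO.NET’s 371,027 connected GPUs were online. A quick check (the implied Render online count is derived from the quoted rate, not an officially disclosed figure):

```python
io_connected, io_online = 371_027, 214_387
render_connected, render_rate = 9_997, 0.451

print(f"IO.NET GPU online rate: {io_online / io_connected:.1%}")   # 57.8%
print(f"Implied Render GPUs online: ~{round(render_connected * render_rate)}")
```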

What do the above supply-side data imply?

To provide a comparison, let’s introduce another, older distributed computing project, Akash Network, for contrast. Akash Network launched its mainnet as early as 2020, initially focusing on distributed services for CPUs and storage. In June 2023, it launched a testnet for GPU services and went live with its mainnet for distributed GPU computing power in September of the same year.

Data source: https://stats.akash.network/provider-graph/graphics-gpu

According to official data from Akash, although the supply side has continued to grow, the total number of GPUs connected to its network has only reached 365 to date.

In terms of GPU supply volume, IO.NET is several orders of magnitude higher than Akash Network, making it the largest supply network in the distributed GPU computing power race.

Demand side situation

However, looking at the demand side, IO.NET is still in the early stages of market cultivation, and the actual volume of computing tasks performed using IO.NET is not large. Most of the online GPUs have a workload of 0%, with only four types of chips—A100 PCIe 80GB K8S, RTX A6000 K8S, RTX A4000 K8S, and H100 80GB HBM3—handling tasks. Except for the A100 PCIe 80GB K8S, the workload of the other three chips is less than 20%.

The network stress value officially disclosed that day was 0%, indicating that most of the chip supply was sitting online on standby. Meanwhile, IO.NET had generated a total of $586,029 in service fees, of which $3,200 was settled in the most recent day.

Data source: https://cloud.io.net/explorer/clusters
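For the valuation discussion later in the article, the daily fee figure can be annualized naively; this assumes the most recent day is representative, which, at this early stage of demand, it likely is not:

```python
total_fees = 586_029      # cumulative service fees, USD
daily_fees = 3_200        # fees settled in the most recent day, USD

annualized = daily_fees * 365
print(f"Naive annualized revenue: ${annualized:,}")   # $1,168,000
print(f"Cumulative fees to date:  ${total_fees:,}")
```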

The scale of these network settlement fees, both in total and in daily transaction volume, is on the same order of magnitude as Akash, although most of Akash’s network revenue comes from the CPU segment, with over 20,000 CPUs supplied.

Data source: https://stats.akash.network/

Additionally, IO.NET has disclosed data on AI inference tasks processed by the network; to date, it has processed and verified more than 230,000 inference tasks, though most of this volume has been generated by projects sponsored by IO.NET, such as BC8.AI.

Data source: https://cloud.io.net/explorer/inferences

Based on the current business data, IO.NET’s supply side expansion is progressing smoothly, buoyed by the anticipation of airdrops and a community event dubbed “Ignition”, which has quickly amassed a significant amount of AI chip computing power. However, the expansion on the demand side is still in its early stages, with organic demand currently insufficient. It remains to be assessed whether the current lack of demand is due to the fact that consumer outreach has not yet begun, or because the current service experience is not stable enough, thus lacking widespread adoption.

Considering the short-term difficulty in bridging the gap in AI computing power, many AI engineers and projects are seeking alternative solutions, which may spark interest in decentralized service providers. Additionally, since IO.NET has not yet initiated economic and activity incentives for the demand side, along with the gradual improvement of the product experience, the eventual matching of supply and demand is still anticipated with optimism.

2.3 Team Background and Financing

Team situation

IO.NET’s core team initially focused on quantitative trading, developing institutional-level quantitative trading systems for stocks and crypto assets until June 2022. Driven by the backend system’s need for computing power, the team began to explore the possibilities of decentralized computing, ultimately focusing on reducing the cost of GPU computing services.

Founder & CEO: Ahmad Shadid, who has a background in quantitative finance and engineering and has also volunteered with the Ethereum Foundation.

CMO & Chief Strategy Officer: Garrison Yang, who joined IO.NET in March this year. He was previously the VP of Strategy and Growth at Avalanche and graduated from the University of California, Santa Barbara.

COO: Tory Green, previously the COO at Hum Capital and Director of Corporate Development and Strategy at Fox Mobile Group, graduated from Stanford.

According to LinkedIn information, IO.NET is headquartered in New York, USA, with a branch in San Francisco, and the team size exceeds 50 members.

Financing situation

As of now, IO.NET has only disclosed one round of financing, which is the Series A completed in March this year, valued at USD 1 billion. It raised USD 30 million led by Hack VC, with other participants including Multicoin Capital, Delphi Digital, Foresight Ventures, Animoca Brands, Continue Capital, Solana Ventures, Aptos, LongHash Ventures, OKX Ventures, Amber Group, SevenX Ventures, and ArkStream Capital.

It is worth mentioning that perhaps due to the investment from the Aptos Foundation, the BC8.AI project, originally settling accounts on Solana, has switched to the high-performance L1 blockchain Aptos.

2.4 Valuation Estimation

According to IO.NET’s founder and CEO Ahmad Shadid, the company will launch its token at the end of April.

IO.NET has two comparable projects for valuation reference: Render Network and Akash Network, both representative of distributed computing projects.

There are two ways to extrapolate the market cap range of IO.NET: 1. Price-to-sales ratio (P/S ratio), i.e., market cap/revenue ratio; 2. Market cap per network chip ratio.

First, let’s look at the valuation extrapolation based on the P/S ratio:

From the P/S perspective, Akash can serve as the lower bound of IO.NET’s valuation range and Render as the high-end reference, implying an FDV (fully diluted valuation) range for IO.NET of USD 1.67 billion to USD 5.93 billion.

However, given IO.NET’s newer product, hotter narrative, smaller early circulating market cap, and currently larger supply-side scale, the likelihood that its FDV will exceed Render’s is considerable.

Next, let’s look at another valuation perspective, the “market-to-core ratio”.

In a market where the demand for AI computing power exceeds supply, the most crucial element of distributed AI computing power networks is the scale of GPU supply. Therefore, we can use the “market-to-core ratio,” the ratio of total project market cap to the number of chips in the network, to extrapolate the possible valuation range of IO.NET for readers as a market value reference.
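Mechanically, the extrapolation works like this: divide a comparable project’s fully diluted valuation by its chip count to get a dollar value of market cap per chip, then multiply by IO.NET’s chip count. The comparable figures below are round hypothetical numbers for illustration only (the article’s underlying comparison table did not survive extraction); only the ~206,001 online IO.NET GPUs come from the data above:

```python
def fdv_by_chip_ratio(comp_fdv: float, comp_chips: int, target_chips: int) -> float:
    """Market-to-core extrapolation: (comparable FDV / its chips) * target chips."""
    return comp_fdv / comp_chips * target_chips

io_gpus_online = 206_001   # IO.NET online GPUs per the earlier snapshot

# Hypothetical comparable: a $1B-FDV network with 10,000 chips,
# i.e., $100,000 of market cap per chip.
implied_fdv = fdv_by_chip_ratio(1e9, 10_000, io_gpus_online)
print(f"Implied FDV: ${implied_fdv / 1e9:.1f}B")   # $20.6B
```

Because the ratio scales linearly with the comparable’s per-chip value, thinly supplied networks like Akash (365 GPUs) produce far lower implied FDVs than Render under the same method, which is what drives the wide range below.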


If calculated based on the market-to-core ratio, with Render Network as the upper limit and Akash Network as the lower limit, the FDV range for IO.NET is between USD 20.6 billion and USD 197.5 billion.

Even readers who are optimistic about the IO.NET project would likely regard this as an extremely optimistic market value estimate.

Moreover, we need to consider that the current large online chip count of IO.NET may be stimulated by airdrop expectations and incentive activities, and the actual online count on the supply side still needs to be observed after the project officially launches.

Therefore, overall, the valuation estimate derived from the P/S ratio is probably the more useful reference.

As a project that combines the AI, DePIN, and Solana-ecosystem narratives, IO.NET’s market performance after launch will be worth watching.

3. Reference information

Disclaimer:

  1. This article is reprinted from [mintventures]. All copyrights belong to the original author IO.NET. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.