io.net: The Underrated AI Computing Power Revolution

Intermediate · 7/10/2024, 1:32:24 AM
As a new form of productive relationship, Web3 naturally fits with AI, which represents a new type of productivity. This simultaneous progress in technology and productive relationships is at the core of io.net's logic. By adopting the "Web3 + token economy" economic infrastructure, io.net aims to transform the traditional production relationships between cloud service giants, mid-to-long tail computing power users, and idle global network computing resources.

The Grassroots Nature of io.net: How Do You See It?

With $30 million in funding and backing from top-tier capital such as Hack VC, Multicoin Capital, Delphi Digital, and Solana Labs, io.net doesn’t seem very “grassroots.” Labels like GPU computing power and AI revolution carry high-end connotations that are far from down-to-earth.

However, amidst the bustling community discussions, crucial clues are often overlooked, especially regarding the profound transformation io.net might bring to the global computing power network. Unlike the “elite” positioning of AWS, Azure, and GCP, io.net essentially follows a populist route:

It aims to supplement the ignored “mid-tail + long-tail” computing power demand by aggregating idle GPU resources. By creating an enterprise-grade, decentralized distributed computing network, io.net empowers a broader range of small and medium-sized users with AI innovation. It achieves a low-cost, highly flexible “re-liberation of productivity” for global AI innovation.

The Overlooked Underlying Computing Power Production Relations Behind the AI Wave

What is the core productivity resource in the current AI wave and the future digital economy era?

Undoubtedly, it is computing power.

According to data from Precedence Research, the global artificial intelligence hardware market is expected to grow at a compound annual growth rate (CAGR) of 24.3%, surpassing $473.53 billion by 2033.
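As a rough sanity check on that projection, the implied starting market size can be backed out from the standard CAGR formula; the ten-year horizon to 2033 is an assumption here, not a figure from the report:

```python
# Back out the implied base-year market size from the projected 2033 value
# and the stated CAGR, assuming a ~10-year compounding horizon (2023-2033).
cagr = 0.243
value_2033 = 473.53  # USD billions

implied_2023_base = value_2033 / (1 + cagr) ** 10
print(f"Implied 2023 market size: ~${implied_2023_base:.1f}B")
```

The back-of-envelope result (roughly $54B) is in the same ballpark as common 2023 estimates for AI hardware, which suggests the projection is internally consistent.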

Even setting aside these predictive figures, from both incremental and stock logic perspectives, it is evident that two main contradictions will persist in the future development of the computing power market:

  1. Incremental Dimension: The exponential growth in demand for computing power far exceeds the linear growth in supply.
  2. Stock Dimension: Due to the top-heavy distribution, computing power is concentrated at the top, leaving mid-tier and long-tail players with insufficient resources. Meanwhile, a large amount of distributed GPU resources remains idle, leading to severe mismatches between supply and demand.

Incremental Dimension: Demand for Computing Power Far Exceeds Supply

Firstly, in the incremental dimension, aside from the rapid expansion of AIGC (AI-generated content) models, numerous AI scenarios in their early explosive stages, such as healthcare, education, and intelligent driving, are rapidly unfolding. All these scenarios require vast amounts of computing resources. Therefore, the current market shortage of GPU computing power resources will not only persist but will continue to expand.

In other words, from a supply and demand perspective, in the foreseeable future, the market demand for computing power will undoubtedly far exceed supply. The demand curve is expected to show an exponential upward trend in the short term.

On the supply side, however, due to physical laws and practical production factors, whether it is improvements in process technology or large-scale factory expansions, at most, only linear growth can be achieved. This inevitably means that the computing power bottleneck in AI development will persist for a long time.
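The asymmetry between the two curves can be sketched numerically; the growth rates below are illustrative assumptions chosen to show the shape of the gap, not forecasts:

```python
# Illustrative sketch: exponential demand growth vs linear supply growth.
# The 40%/year demand rate and fixed supply addition are assumptions.
demand, supply = 100.0, 100.0   # start from parity (arbitrary units)
demand_growth = 1.40            # assumed 40% compound growth per year
supply_add = 20.0               # assumed fixed annual capacity addition

gaps = []
for year in range(1, 6):
    demand *= demand_growth
    supply += supply_add
    gaps.append(demand - supply)

# Once demand compounds past linear supply, the shortfall widens every year.
assert all(later > earlier for earlier, later in zip(gaps, gaps[1:]))
print([round(g, 1) for g in gaps])
```

Whatever the actual rates turn out to be, any compounding demand curve eventually outruns any linear supply curve, which is the structural point being made.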

Supply-Demand Imbalance: Misalignment for Mid-Tier and Long-Tail Players

Meanwhile, with limited computing power resources facing severe growth bottlenecks, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) collectively occupy over 60% of the cloud computing market share, creating a clear seller’s market.

These companies hoard high-performance GPU chips, monopolizing a vast amount of computing power. Mid-tier and long-tail small and medium-sized demand-side players not only lack bargaining power but also have to contend with high capital costs, KYC entry barriers, and restrictive leasing terms. Additionally, traditional cloud service giants, driven by profit considerations, often overlook the differentiated business needs of “mid-tier + long-tail” users (such as shorter, more immediate, and smaller-scale leasing requirements).

In reality, however, a large amount of GPU computing power sits unused outside the computing networks of cloud service giants. For example, tens of thousands of independent third-party Internet Data Centers (IDCs) worldwide waste significant resources on small training tasks, and vast computing power sits idle in crypto mining farms and projects like Filecoin, Render, and Aethir.

According to official estimates from io.net, the idle rate of graphics cards in IDCs in the US alone exceeds 60%. This creates an ironic paradox of supply-demand mismatch: over half of the computing power resources of tens of thousands of small and medium-sized data centers and crypto mining farms are wasted daily, failing to generate effective revenue, while mid-tier and long-tail AI entrepreneurs endure high costs and high entry barriers of cloud giant computing services, with their diverse innovative needs unmet.

This stark contrast reveals the core contradiction in the current global AI development and global computing power market—on one hand, AI innovation is widespread, and computing power demand is continually expanding. On the other hand, the computing power needs of mid-tier and long-tail players and idle GPU resources are not being effectively met, remaining outside the current computing power market.

This issue is not just the conflict between the growing computing power demands of AI entrepreneurs and the lagging growth of supply. It is also the mismatch between the vast majority of mid-tier and long-tail AI entrepreneurs and computing power operators on one side and an imbalanced supply-demand structure on the other, a problem that far exceeds the scope of centralized cloud service providers’ solutions.

Therefore, the market is calling for new solutions. Imagine if these operators with computational power could flexibly rent out their computing power during idle times. Wouldn’t that provide a low-cost computational cluster similar to AWS?

Building such a large-scale computing network from scratch, however, is extremely expensive. This has led to the emergence of platforms specifically designed to match idle computing resources with small and medium-sized AI startups. These platforms aggregate scattered idle computing resources and match them with specific needs in sectors like healthcare, law, and finance for training small and large models.

Not only can this meet the diverse computing needs of the mid-to-long tail, but it also complements the existing centralized cloud giants’ computing services:

Cloud giants with vast computing resources handle large model training and high-performance computing for urgent and heavy demands.

Decentralized cloud computing markets like io.net cater to small model computations, large model fine-tuning, inference deployment, and more diversified, low-cost needs.

In essence, it provides a dynamic balance between cost-effectiveness and computational quality, aligning with the economic logic of optimizing resource allocation in the market. Thus, distributed computing networks like io.net essentially offer an “AI+Crypto” solution. They use a decentralized collaborative framework combined with token incentives to meet the significant but underserved demand in the mid-to-long tail AI market. This allows small and medium AI teams to customize and purchase GPU computing services as needed, which large clouds cannot provide, thus achieving a “liberation of productivity” in the global computing power market and AI development.

In simpler terms, io.net is not a direct competitor to AWS, Azure, or GCP. Instead, it is a complementary ally that optimizes global computing resource allocation and expands the market. They cater to different layers of “cost-effectiveness & computational quality” needs. It is even possible that io.net, by aggregating mid-to-long tail supply and demand players, could create a market share comparable to the existing top three cloud giants.

io.net: A Global GPU Computing Power Matching Trading Platform

io.net aims to reshape the mid- and long-tail computing power market’s production relationships through Web3 distributed collaboration and token incentives. As a result, it is reminiscent of shared economy platforms like Uber and Didi, functioning as a matching trading platform for GPU computing power.

Before the advent of Uber and Didi, the user experience of “ride-hailing on demand” was virtually nonexistent. The private car network was vast yet chaotic, with cars being idle and unorganized. To catch a ride, users had to either hail a cab from the roadside or request a dispatch from the city’s taxi center, which was time-consuming, highly uncertain, and predominantly a seller’s market—unfriendly to most ordinary people.

This scenario is akin to the current state of the computing power market. As mentioned earlier, mid- and long-tail small and medium computing power demanders not only lack bargaining power but also face high capital costs, KYC entry barriers, and harsh leasing terms.

So, how exactly does io.net achieve its position as a “global GPU computing power hub and matching market”? What kind of system architecture and functional services are needed to help mid- and long-tail users obtain computing power resources?

Flexible and Low-Cost Matching Platform

The primary feature of io.net is its lightweight computing power matching platform. Similar to Uber or Didi, it does not involve the high-risk actual operation of GPU hardware or other heavy assets. Instead, it connects mid-to-long-tail retail computing power (often considered secondary computing power by major cloud providers like AWS) with demand through matching, revitalizing previously idle computing resources (private cars) and the mid-tail AI demand for computing power (passengers).

On one end, io.net connects tens of thousands of idle GPUs (private cars) from small and medium-sized IDCs, mining farms, and crypto projects. On the other end, it links the computing power needs of millions of small and medium-sized companies (passengers). io.net acts as an intermediary, similar to a broker matching numerous buy and sell orders.

By aggregating idle computing power at a low cost and with more flexible deployment configurations, io.net helps entrepreneurs train more personalized small and medium AI models, significantly improving resource utilization. The advantages are clear: regardless of market conditions, as long as there is a resource mismatch, the demand for a matching platform is robust.
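The broker analogy can be made concrete with a minimal matching sketch. The order shapes and the price-priority rule here are hypothetical simplifications for illustration, not io.net's actual matching mechanism:

```python
# Minimal order-matching sketch for a GPU marketplace (hypothetical model):
# supply offers list idle GPUs with an asking hourly price; demand requests
# state how many GPUs are needed and a maximum budget per GPU-hour.
supply = [  # (provider, gpus_available, ask_per_gpu_hour)
    ("idc_a", 64, 0.80),
    ("farm_b", 32, 0.65),
    ("project_c", 16, 0.90),
]
demand = [("startup_x", 40, 0.85)]  # (buyer, gpus_needed, max_price)

def match(supply, demand):
    matches = []
    # Cheapest idle capacity is allocated first, as a broker would fill
    # orders by price priority. Single-pass sketch: capacity is not
    # decremented across multiple buyers.
    book = sorted(supply, key=lambda offer: offer[2])
    for buyer, needed, max_price in demand:
        for provider, available, ask in book:
            if needed == 0:
                break
            if ask <= max_price:
                take = min(needed, available)
                matches.append((buyer, provider, take, ask))
                needed -= take
    return matches

print(match(supply, demand))
```

In this toy run the buyer's 40-GPU request is filled from the two cheapest providers, while the offer above the buyer's price ceiling is left untouched.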

Supply side: Small and medium-sized IDCs, mining farms, and crypto projects can connect their idle computing resources to io.net. They need neither a dedicated business development department nor to sell at a discount to AWS because their computing power is small-scale. Instead, they can match their idle computing power to suitable small and medium computing customers at market prices or even higher, with minimal friction costs, thereby earning revenue.

Demand side: Small and medium computing power demanders, who previously had no bargaining power against major cloud providers like AWS, can access smaller-scale, permissionless, wait-free, and KYC-free computing power through io.net. They can freely choose and combine the chips they need to form a “cluster” and complete personalized computing tasks.

Both supply and demand sides at the mid-tail have similar pain points of weak bargaining power and low autonomy when facing major clouds like AWS. io.net revitalizes the supply and demand of the mid-to-long-tail, providing a matching platform that allows both sides to complete transactions at better prices and with more flexible configurations than major clouds like AWS.

From this perspective, similar to platforms like Taobao, the early appearance of low-quality computing power is an inevitable development pattern of the platform economy. io.net has also set up a reputation system for both suppliers and demanders, accumulating scores based on computing performance and network participation to earn rewards or discounts.
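A reputation system of the kind described could, in simplified form, accumulate a score from completed jobs and uptime and map it to pricing perks. The weights and discount tiers below are illustrative assumptions, not io.net's published parameters:

```python
# Simplified reputation sketch: score accrues from job outcomes and uptime.
# The weights and the discount schedule are illustrative assumptions.
def reputation(jobs_completed, jobs_failed, uptime_ratio):
    # Reward completed work, penalize failures more heavily, and credit
    # sustained availability.
    return jobs_completed * 2 - jobs_failed * 5 + uptime_ratio * 100

def discount(score):
    # Hypothetical tiered discount for high-reputation participants.
    if score >= 300:
        return 0.15
    if score >= 150:
        return 0.05
    return 0.0

score = reputation(jobs_completed=120, jobs_failed=4, uptime_ratio=0.98)
print(score, discount(score))
```

The design choice worth noting is the asymmetric penalty: on a platform where early supply quality is uneven, failures should cost more reputation than successes earn.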

Decentralized GPU Cluster

In addition to being a matching platform between retail supply and demand, io.net addresses the needs of large-scale computing scenarios, such as those required by modern models, which involve multiple GPUs working together. The effectiveness of this platform depends not only on how many idle GPU resources it can aggregate but also on how tightly connected the distributed computing power on the platform is.

This means that io.net needs to create a “decentralized yet centralized” computing architecture for its distributed network, which encompasses small and medium-sized computing resources from different regions and scales. This architecture must support flexible computing demands by allowing several distributed GPUs to work within the same framework for training while ensuring that communication and coordination among these GPUs are swift and achieve usable low latency.

This approach is fundamentally different from some decentralized cloud computing projects that are constrained to using GPUs within the same data center. The technical realization behind io.net’s product suite, known as the “Three Horses,” includes IO Cloud, IO Worker, and IO Explorer.

  1. IO Cloud
    • The basic business module for clusters, IO Cloud, is a group of GPUs capable of self-coordinating to complete computing tasks. AI engineers can customize clusters according to their needs. It integrates seamlessly with the IO-SDK, providing a comprehensive solution for expanding AI and Python applications.
  2. IO Worker
    • IO Worker offers a user-friendly UI, allowing both suppliers and demanders to effectively manage their operations through a web application. Its functions range from user account management, monitoring computing activities, displaying real-time data, tracking temperature and power consumption, providing installation assistance, managing wallets, implementing security measures, to calculating profits.
  3. IO Explorer
    • IO Explorer provides users with comprehensive statistics and visualizations of various aspects of the GPU cloud. By offering complete visibility into network activities, key statistics, data points, and reward transactions, it enables users to easily monitor, analyze, and understand the details of the io.net network.

Thanks to this functional architecture, io.net allows computing power suppliers to easily share idle resources, significantly lowering the entry barrier. Demanders can quickly form clusters with the required GPUs without signing long-term contracts or enduring the lengthy wait times commonly associated with traditional cloud services. This setup provides them with supercomputing power and optimized server response times.
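The cluster-formation step can be illustrated with a toy selection routine. This is a sketch of the idea, not the IO Cloud API: the node data, latency figures, and the 80 ms usability threshold are all made-up assumptions:

```python
# Toy sketch of forming a distributed cluster: pick enough GPUs whose
# measured latency to the coordinator stays under a usability threshold.
# Node data and the 80 ms threshold are illustrative assumptions.
nodes = [
    {"id": "us-east-1", "gpus": 8, "latency_ms": 35},
    {"id": "eu-west-2", "gpus": 16, "latency_ms": 70},
    {"id": "ap-south-3", "gpus": 24, "latency_ms": 140},  # too far away
    {"id": "us-west-4", "gpus": 12, "latency_ms": 55},
]

def form_cluster(nodes, gpus_needed, max_latency_ms=80):
    # Prefer the lowest-latency nodes first for tighter coordination.
    usable = sorted(
        (n for n in nodes if n["latency_ms"] <= max_latency_ms),
        key=lambda n: n["latency_ms"],
    )
    cluster, total = [], 0
    for node in usable:
        if total >= gpus_needed:
            break
        cluster.append(node["id"])
        total += node["gpus"]
    return cluster if total >= gpus_needed else None

print(form_cluster(nodes, gpus_needed=20))
```

Filtering on latency before counting capacity captures the article's point: a distributed network is only useful if the GPUs it stitches together can actually coordinate fast enough for the job.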

Lightweight Elastic Demand Scenarios

When discussing io.net’s unique service scenarios compared to AWS and other major clouds, the focus is on lightweight elastic demand where large clouds may not be cost-effective. These scenarios include niche areas such as model training for small and medium AI startups, fine-tuning large models, and other diverse applications. One commonly overlooked yet widely applicable scenario is model inference.

It is well-known that the early training of large models like GPT requires thousands of high-performance GPUs, immense computing power, and massive data for extended periods. This is an area where AWS, GCP, and other major clouds have a definite advantage. However, once the model is trained, the primary computing demand shifts to model inference. This stage, which involves using the trained model to make predictions or decisions, constitutes 80%-90% of AI computing workloads, as seen in daily interactions with GPT and similar models.

Interestingly, the computing power required for inference is more stable and less intense, often needing just a few dozen GPUs for a few minutes to obtain results. This process also has lower requirements for network latency and concurrency. Additionally, most AI companies are unlikely to train their own large models from scratch; instead, they tend to optimize and fine-tune top-tier models like GPT. These scenarios are naturally suited for io.net’s distributed idle computing resources.
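The cost profile of such a bursty inference workload can be put in rough numbers. All prices and durations below are hypothetical placeholders, not quoted rates from io.net or any cloud provider:

```python
# Rough cost sketch: bursty inference on marketplace capacity vs keeping a
# reserved always-on instance. All figures are hypothetical placeholders.
gpus = 24
minutes_per_job = 10
jobs_per_day = 40
spot_price_per_gpu_hour = 0.50      # assumed marketplace rate
reserved_price_per_gpu_hour = 1.20  # assumed always-on cloud rate

burst_gpu_hours = gpus * (minutes_per_job / 60) * jobs_per_day
daily_burst_cost = burst_gpu_hours * spot_price_per_gpu_hour
daily_reserved_cost = gpus * 24 * reserved_price_per_gpu_hour  # runs all day

print(round(daily_burst_cost, 2), round(daily_reserved_cost, 2))
```

Under these assumed numbers, paying only for the minutes actually used is several times cheaper than keeping equivalent capacity reserved around the clock, which is the economic niche the article describes.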

Beyond the high-intensity, high-standard application scenarios, there is a broader and untapped market for everyday lightweight scenarios. These may appear fragmented but actually hold a larger market share. According to a recent Bank of America report, high-performance computing only accounts for about 5% of the total addressable market (TAM) in data centers.

In summary, it’s not that AWS or GCP are unaffordable, but io.net offers a more cost-effective solution for these specific needs.

The Decisive Factor in Web2 BD

Ultimately, the core competitive advantage of platforms like io.net, which are geared towards distributed computing resources, lies in their business development (BD) capabilities. This is the critical determining factor for success.

Apart from the phenomenon where Nvidia’s high-performance chips have given rise to a market for GPU brokers, the main challenge for many small and medium-sized Internet Data Centers (IDCs) and computing power operators is the problem of “a good wine still fears deep alleys,” meaning even great products need effective promotion to be discovered.

From this perspective, io.net holds a unique competitive edge that is hard for other projects in the same field to replicate – a dedicated Web2 BD team based directly in Silicon Valley. These veterans have extensive experience in the computing power market and understand the diverse scenarios of small and medium-sized clients. Moreover, they have a deep understanding of the end-user needs of numerous Web2 clients.

According to official disclosures from io.net, 20-30 Web2 companies have already expressed interest in purchasing or leasing computing power. These companies are willing to explore or even experiment with lower-cost, more flexible computing services (some may not even be able to secure computing power on AWS). Each of these clients requires at least hundreds to thousands of GPUs, translating to computing power orders worth tens of thousands of dollars per month.

This genuine demand from paying end-users will in turn attract more idle computing power to flow in on the supply side, easily setting off a self-reinforcing supply-demand flywheel.

Disclaimer:

  1. This article is reprinted from [LFG Labs]. All copyrights belong to the original author [LFG Labs]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

io.net: The Underrated AI Computing Power Revolution

Intermediate7/10/2024, 1:32:24 AM
As a new form of productive relationship, Web3 naturally fits with AI, which represents a new type of productivity. This simultaneous progress in technology and productive relationships is at the core of io.net's logic. By adopting the "Web3 + token economy" economic infrastructure, io.net aims to transform the traditional production relationships between cloud service giants, mid-to-long tail computing power users, and idle global network computing resources.

The Grassroots Nature of io.net: How Do You See It?

With $30 million in funding and backed by top-tier capital such as Hack VC, Multicoin Capital, Delphi Digital, and Solana Lab, io.net doesn’t seem very “grassroots.” The labels of GPU computing power and AI revolution are far from being down-to-earth, often associated with high-end connotations.

However, amidst the bustling community discussions, crucial clues are often overlooked, especially regarding the profound transformation io.net might bring to the global computing power network. Unlike the “elite” positioning of AWS, Azure, and GCP, io.net essentially follows a populist route:

It aims to supplement the ignored “mid-tail + long-tail” computing power demand by aggregating idle GPU resources. By creating an enterprise-grade, decentralized distributed computing network, io.net empowers a broader range of small and medium-sized users with AI innovation. It achieves a low-cost, highly flexible “re-liberation of productivity” for global AI innovation.

The Overlooked Underlying Computing Power Production Relations Behind the AI Wave

What is the core productivity resource in the current AI wave and the future digital economy era?

Undoubtedly, it is computing power.

According to data from Precedence Research, the global artificial intelligence hardware market is expected to grow at a compound annual growth rate (CAGR) of 24.3%, surpassing $473.53 billion by 2033.

Even setting aside these predictive figures, from both incremental and stock logic perspectives, it is evident that two main contradictions will persist in the future development of the computing power market:

  1. Incremental Dimension: The exponential growth in demand for computing power far exceeds the linear growth in supply.
  2. Stock Dimension: Due to the top-heavy distribution, computing power is concentrated at the top, leaving mid-tier and long-tail players with insufficient resources. Meanwhile, a large amount of distributed GPU resources remains idle, leading to severe mismatches between supply and demand.

Incremental Dimension: Demand for Computing Power Far Exceeds Supply

Firstly, in the incremental dimension, aside from the rapid expansion of AIGC (AI-generated content) models, numerous AI scenarios in their early explosive stages, such as healthcare, education, and intelligent driving, are rapidly unfolding. All these scenarios require vast amounts of computing resources. Therefore, the current market shortage of GPU computing power resources will not only persist but will continue to expand.

In other words, from a supply and demand perspective, in the foreseeable future, the market demand for computing power will undoubtedly far exceed supply. The demand curve is expected to show an exponential upward trend in the short term.

On the supply side, however, due to physical laws and practical production factors, whether it is improvements in process technology or large-scale factory expansions, at most, only linear growth can be achieved. This inevitably means that the computing power bottleneck in AI development will persist for a long time.

Supply-Demand Imbalance: Misalignment for Mid-Tier and Long-Tail Players

Meanwhile, with limited computing power resources facing severe growth bottlenecks, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) collectively occupy over 60% of the cloud computing market share, creating a clear seller’s market.

These companies hoard high-performance GPU chips, monopolizing a vast amount of computing power. Mid-tier and long-tail small and medium-sized demand-side players not only lack bargaining power but also have to contend with high capital costs, KYC entry barriers, and restrictive leasing terms. Additionally, traditional cloud service giants, driven by profit considerations, often overlook the differentiated business needs of “mid-tier + long-tail” users (such as shorter, more immediate, and smaller-scale leasing requirements).

In reality, however, a large amount of GPU computing power is left unused outside the computing networks of cloud service giants. For example, tens of thousands of independent third-party Internet Data Centers (IDC) globally waste resources on small training tasks. This includes vast computing power being idle in crypto mining farms and projects like Filecoin, Render, and Aethir.

According to official estimates from io.net, the idle rate of graphics cards in IDCs in the US alone exceeds 60%. This creates an ironic paradox of supply-demand mismatch: over half of the computing power resources of tens of thousands of small and medium-sized data centers and crypto mining farms are wasted daily, failing to generate effective revenue, while mid-tier and long-tail AI entrepreneurs endure high costs and high entry barriers of cloud giant computing services, with their diverse innovative needs unmet.

This stark contrast reveals the core contradiction in the current global AI development and global computing power market—on one hand, AI innovation is widespread, and computing power demand is continually expanding. On the other hand, the computing power needs of mid-tier and long-tail players and idle GPU resources are not being effectively met, remaining outside the current computing power market.

This issue is not just the conflict between the growing computing power demands of AI entrepreneurs and the lagging growth of computing power. It is also the mismatch between the vast majority of mid-tier and long-tail AI entrepreneurs, computing power operators, and the imbalanced supply-demand, which far exceeds the scope of centralized cloud service providers’ solutions.

Therefore, the market is calling for new solutions. Imagine if these operators with computational power could flexibly rent out their computing power during idle times. Wouldn’t that provide a low-cost computational cluster similar to AWS?

Building such a large-scale computational network is extremely expensive. This has led to the emergence of platforms specifically designed to match idle computational resources with small and medium-sized AI startups. These platforms aggregate scattered idle computing resources and match them with specific needs in sectors like healthcare, law, and finance for training small and large models.

Not only can this meet the diverse computing needs of the mid-to-long tail, but it also complements the existing centralized cloud giants’ computing services:

Cloud giants with vast computing resources handle large model training and high-performance computing for urgent and heavy demands.

Decentralized cloud computing markets like io.net cater to small model computations, large model fine-tuning, inference deployment, and more diversified, low-cost needs.

In essence, it provides a dynamic balance between cost-effectiveness and computational quality, aligning with the economic logic of optimizing resource allocation in the market. Thus, distributed computing networks like io.net essentially offer an “AI+Crypto” solution. They use a decentralized collaborative framework combined with token incentives to meet the significant but underserved demand in the mid-to-long tail AI market. This allows small and medium AI teams to customize and purchase GPU computing services as needed, which large clouds cannot provide, thus achieving a “liberation of productivity” in the global computing power market and AI development.

In simpler terms, io.net is not a direct competitor to AWS, Azure, or GCP. Instead, it is a complementary ally that optimizes global computing resource allocation and expands the market. They cater to different layers of “cost-effectiveness & computational quality” needs. It is even possible that io.net, by aggregating mid-to-long tail supply and demand players, could create a market share comparable to the existing top three cloud giants.

io.net: A Global GPU Computing Power Matching Trading Platform

io.net aims to reshape the mid- and long-tail computing power market’s production relationships through Web3 distributed collaboration and token incentives. As a result, it is reminiscent of shared economy platforms like Uber and Didi, functioning as a matching trading platform for GPU computing power.

Before the advent of Uber and Didi, the user experience of “ride-hailing on demand” was virtually nonexistent. The private car network was vast yet chaotic, with cars being idle and unorganized. To catch a ride, users had to either hail a cab from the roadside or request a dispatch from the city’s taxi center, which was time-consuming, highly uncertain, and predominantly a seller’s market—unfriendly to most ordinary people.

This scenario is akin to the current state of the computing power market. As mentioned earlier, mid- and long-tail small and medium computing power demanders not only lack bargaining power but also face high capital costs, KYC entry barriers, and harsh leasing terms.

So, how exactly does io.net achieve its position as a “global GPU computing power hub and matching market”? What kind of system architecture and functional services are needed to help mid- and long-tail users obtain computing power resources?

Flexible and Low-Cost Matching Platform

The primary feature of io.net is its lightweight computing power matching platform. Similar to Uber or Didi, it does not involve the high-risk actual operation of GPU hardware or other heavy assets. Instead, it connects mid-to-long-tail retail computing power (often considered secondary computing power by major cloud providers like AWS) with demand through matching, revitalizing previously idle computing resources (private cars) and the mid-tail AI demand for computing power (passengers).

On one end, io.net connects tens of thousands of idle GPUs (private cars) from small and medium-sized IDCs, mining farms, and crypto projects. On the other end, it links the computing power needs of millions of small and medium-sized companies (passengers). io.net acts as an intermediary, similar to a broker matching numerous buy and sell orders.

By aggregating idle computing power at a low cost and with more flexible deployment configurations, io.net helps entrepreneurs train more personalized small and medium AI models, significantly improving resource utilization. The advantages are clear: regardless of market conditions, as long as there is a resource mismatch, the demand for a matching platform is robust.

Supply Side:On the supply side, small and medium-sized IDCs, mining farms, and crypto projects can connect their idle computing resources to io.net. They do not need to establish a dedicated business development department or be forced to sell at a discount to AWS due to small-scale computing power. Instead, they can match their idle computing power to suitable small and medium computing customers at market prices or even higher, with minimal friction costs, thereby earning revenue.

Demand Side:On the demand side, small and medium computing power demanders, who previously had no bargaining power against major cloud providers like AWS, can connect to smaller-scale, permissionless, wait-free, and KYC-free computing power through io.net. They can freely choose and combine the chips they need to form a “cluster” to complete personalized computing tasks.

Both supply and demand sides at the mid-tail have similar pain points of weak bargaining power and low autonomy when facing major clouds like AWS. io.net revitalizes the supply and demand of the mid-to-long-tail, providing a matching platform that allows both sides to complete transactions at better prices and with more flexible configurations than major clouds like AWS.

From this perspective, similar to platforms like Taobao, the early appearance of low-quality computing power is an inevitable development pattern of the platform economy. io.net has also set up a reputation system for both suppliers and demanders, accumulating scores based on computing performance and network participation to earn rewards or discounts.

Decentralized GPU Cluster

In addition to being a matching platform between retail supply and demand, io.net addresses the needs of large-scale computing scenarios, such as those required by modern models, which involve multiple GPUs working together. The effectiveness of this platform depends not only on how many idle GPU resources it can aggregate but also on how tightly connected the distributed computing power on the platform is.

This means that io.net needs to create a “decentralized yet centralized” computing architecture for its distributed network, which encompasses small and medium-sized computing resources from different regions and scales. This architecture must support flexible computing demands by allowing several distributed GPUs to work within the same framework for training while ensuring that communication and coordination among these GPUs are swift and achieve usable low latency.
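One way to picture the placement problem this architecture must solve is a latency-aware node selection step: only GPUs that are mutually close enough should end up in the same training cluster. The greedy sketch below is purely illustrative; io.net's actual scheduler is not public, and real placement logic is far richer.

```python
def form_cluster(nodes, latency_ms, need, max_latency=50.0):
    """Greedily pick `need` nodes whose pairwise latency stays under max_latency.

    `latency_ms` maps a sorted node-name pair to measured round-trip latency.
    This is a toy sketch of latency-aware placement, not io.net's algorithm.
    """
    cluster = []
    for n in nodes:
        # A node joins only if it is close to every node already in the cluster.
        if all(latency_ms[(min(n, m), max(n, m))] <= max_latency for m in cluster):
            cluster.append(n)
            if len(cluster) == need:
                return cluster
    return None  # not enough mutually close nodes

lat = {("us1", "us2"): 10, ("eu1", "us1"): 120, ("eu1", "us2"): 130}
print(form_cluster(["us1", "us2", "eu1"], lat, 2))  # ['us1', 'us2']
```

The example shows why "decentralized yet centralized" is the right framing: the supply is geographically scattered, but each cluster must behave like co-located hardware, so nodes in different regions (here, `eu1`) are excluded from the same cluster.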

This approach is fundamentally different from some decentralized cloud computing projects that are constrained to GPUs within a single data center. The technical realization rests on io.net's product suite, a "three-horse carriage" of IO Cloud, IO Worker, and IO Explorer.

  1. IO Cloud
    • IO Cloud is the basic business module: it deploys and manages clusters, groups of GPUs that self-coordinate to complete computing tasks. AI engineers can customize clusters to their needs, and IO Cloud integrates seamlessly with the IO-SDK, providing a comprehensive solution for scaling AI and Python applications.
  2. IO Worker
    • IO Worker offers a user-friendly web UI through which both suppliers and demanders manage their operations: user account management, monitoring of computing activities, real-time data display, temperature and power-consumption tracking, installation assistance, wallet management, security measures, and profit calculation.
  3. IO Explorer
    • IO Explorer provides users with comprehensive statistics and visualizations of various aspects of the GPU cloud. By offering complete visibility into network activities, key statistics, data points, and reward transactions, it enables users to easily monitor, analyze, and understand the details of the io.net network.

Thanks to this functional architecture, io.net allows computing power suppliers to easily share idle resources, significantly lowering the entry barrier. Demanders can quickly form clusters with the required GPUs without signing long-term contracts or enduring the lengthy wait times commonly associated with traditional cloud services. This setup provides them with supercomputing power and optimized server response times.

Lightweight Elastic Demand Scenarios

When discussing io.net’s unique service scenarios compared to AWS and other major clouds, the focus is on lightweight elastic demand where large clouds may not be cost-effective. These scenarios include niche areas such as model training for small and medium AI startups, fine-tuning large models, and other diverse applications. One commonly overlooked yet widely applicable scenario is model inference.

It is well-known that the early training of large models like GPT requires thousands of high-performance GPUs, immense computing power, and massive data for extended periods. This is an area where AWS, GCP, and other major clouds have a definite advantage. However, once the model is trained, the primary computing demand shifts to model inference. This stage, which involves using the trained model to make predictions or decisions, constitutes 80%-90% of AI computing workloads, as seen in daily interactions with GPT and similar models.

Interestingly, the computing power required for inference is more stable and less intense, often needing just a few dozen GPUs for a few minutes to obtain results. This process also has lower requirements for network latency and concurrency. Additionally, most AI companies are unlikely to train their own large models from scratch; instead, they tend to optimize and fine-tune top-tier models like GPT. These scenarios are naturally suited for io.net’s distributed idle computing resources.
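A back-of-envelope comparison makes the training-versus-inference asymmetry concrete. All numbers below are invented for illustration (they are not io.net or GPT figures); the point is only the shape of the workload: one enormous one-off run versus many small, short bursts.

```python
# Hypothetical workload shapes (illustrative numbers, not real measurements):
train_gpus, train_days = 2000, 30            # one-off pre-training run
infer_gpus, infer_minutes_per_job = 24, 10   # a single inference/fine-tuning burst
jobs_per_day = 200

train_gpu_hours = train_gpus * train_days * 24
daily_infer_gpu_hours = infer_gpus * infer_minutes_per_job * jobs_per_day / 60

print(train_gpu_hours)        # 1440000 GPU-hours, once, on tightly coupled hardware
print(daily_infer_gpu_hours)  # 800.0 GPU-hours per day, in small independent bursts
```

The one-off run demands a tightly coupled supercomputer, which favors AWS-style clouds; the daily bursts are small, independent, and latency-tolerant, which is precisely the profile that scattered idle GPUs can serve.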

Beyond the high-intensity, high-standard application scenarios, there is a broader and untapped market for everyday lightweight scenarios. These may appear fragmented but actually hold a larger market share. According to a recent Bank of America report, high-performance computing only accounts for about 5% of the total addressable market (TAM) in data centers.

In summary, it’s not that AWS or GCP are unaffordable, but io.net offers a more cost-effective solution for these specific needs.

The Decisive Factor in Web2 BD

Ultimately, the core competitive advantage of platforms like io.net, which are geared towards distributed computing resources, lies in their business development (BD) capabilities. This is the critical determining factor for success.

Apart from the phenomenon where Nvidia’s high-performance chips have given rise to a market for GPU brokers, the main challenge for many small and medium-sized Internet Data Centers (IDCs) and computing power operators is the problem of “a good wine still fears deep alleys,” meaning even great products need effective promotion to be discovered.

From this perspective, io.net holds a unique competitive edge that is hard for other projects in the same field to replicate – a dedicated Web2 BD team based directly in Silicon Valley. These veterans have extensive experience in the computing power market and understand the diverse scenarios of small and medium-sized clients. Moreover, they have a deep understanding of the end-user needs of numerous Web2 clients.

According to official disclosures from io.net, 20-30 Web2 companies have already expressed interest in purchasing or leasing computing power. These companies are willing to explore or even experiment with lower-cost, more flexible computing services (some may not even be able to secure computing power on AWS). Each of these clients requires at least hundreds to thousands of GPUs, translating to computing power orders worth tens of thousands of dollars per month.

This genuine demand from paying end-users will naturally attract more idle computing resources to flow in on the supply side, setting a positive supply-and-demand flywheel in motion.

Disclaimer:

  1. This article is reprinted from [LFG Labs]. All copyrights belong to the original author [LFG Labs]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.