What is Io.net? A Comprehensive Exploration of Decentralized Computing Network Based on Solana

Intermediate · 4/17/2024, 5:31:49 AM
This article provides an in-depth introduction to Io.net, a decentralized computing network based on the public chain Solana, which not only aims to alleviate the current shortage of computing resources but also supports the ongoing development of AI technology. We will explore the core functionalities of its products, how they deliver more computational power to users and simplify the deployment and management of GPU/CPU resources, offering a flexible, scalable computing solution.

Introduction

In the digital age, computing power has become an essential element of technological progress. It defines the resources computers require to process operations, including memory, processor speed, and the number of processors. These resources directly affect the performance and cost of devices, especially when handling multiple programs simultaneously. With the widespread adoption of artificial intelligence and deep learning technologies, the demand for high-performance computing resources, such as GPUs, has skyrocketed, leading to a global supply shortage.

The Central Processing Unit (CPU) plays a pivotal role as the core of a computer, while the Graphics Processing Unit (GPU) significantly enhances computational efficiency by handling parallel tasks. A more powerful CPU can process operations faster, and the GPU effectively supports the growing computational demands.

What is Io.net?

Source: io.net

Io.net is a DePIN project based on Solana, focused on providing GPU computing power to AI and machine learning companies, making computing more scalable, accessible, and efficient.

Modern AI models are increasingly large, and training and inference are no longer simple tasks that can be performed on a single device. Often, parallel and distributed computing is needed, utilizing the powerful capabilities across multiple systems and cores to optimize computing performance or to expand to accommodate larger data sets and models. Coordinating the GPU network as a computing resource is crucial in this process.
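The data-parallel pattern described above can be sketched in plain Python: a batch is split into shards, each shard's partial result is computed independently (on a GPU network, each shard would live on a different device), and the partial results are combined. Every name here is a toy illustration, not part of any io.net API.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_result(shard):
    """Toy stand-in for a per-device computation:
    here, just the sum of squared values in the shard."""
    return sum(x * x for x in shard)

def data_parallel_step(batch, n_workers=4):
    # Split the batch into one shard per worker.
    shards = [batch[i::n_workers] for i in range(n_workers)]
    # Each worker processes its shard independently; in a real
    # distributed setup each shard would run on a separate GPU.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(partial_result, shards))
    # Combine (reduce) the partial results into the final answer.
    return sum(partials)

batch = list(range(8))  # 0..7
print(data_parallel_step(batch))  # sum of squares of 0..7 = 140
```

The key property is that the combined result is independent of how the batch is sharded, which is what lets a coordinator spread work across however many devices are available.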

Team Background and Funding

Team Background

The core team of Io.net originally specialized in quantitative trading. Until June 2022, they focused on developing institutional-level quantitative trading systems covering stocks and cryptocurrencies. As the backend systems’ demand for computing power increased, the team began to explore the possibilities of decentralized computing, ultimately focusing on solving specific problems related to reducing the cost of GPU computing services.

  • Founder & CEO: Ahmad Shadid, who worked in quant and financial engineering. Before Io.net, he was a volunteer at the Ethereum Foundation.
  • CMO & Chief Strategy Officer: Garrison Yang, who joined Io.net in March 2024, previously served as VP of Strategy and Growth at Avalanche and graduated from the University of California, Santa Barbara.
  • COO: Tory Green, the COO of Io.net, previously served as COO at Hum Capital and Director of Business Development and Strategy at Fox Mobile Group, and is a Stanford graduate.

According to Io.net’s LinkedIn information, the team is headquartered in New York, USA, with a branch in San Francisco, and currently has more than 50 team members.

Funding Situation

Io.net completed a $30 million Series A funding round led by Hack VC, with participation from other notable institutions such as Multicoin Capital, Delphi Digital, Animoca Brands, OKX, Aptos Labs, and Solana Labs. The founders of Solana, Aptos, and Animoca Brands also participated in this round as individual investors. Notably, following investment from the Aptos Foundation, the BC8.AI project, initially settled on Solana, migrated to Aptos, another high-performance L1 platform.

Addressing the Shortage of Computing Resources

In recent years, rapid advances in AI have fueled a surge in demand for computing chips, with AI applications doubling their computational power requirements roughly every three months — more than tenfold every 18 months. This exponential growth has strained a global supply chain still recovering from pandemic disruptions. Large public cloud providers typically get priority access to new GPUs, making it challenging for smaller businesses and research institutions to obtain computational resources; they face obstacles such as:

  • High Costs: Using high-end GPUs is very expensive, easily reaching hundreds of thousands of dollars per month for training and inference.
  • Quality Issues: Users have little choice regarding the quality, security level, computational delay, and other options of GPU hardware and must settle for what is available.
  • Usage Restrictions: When using cloud services such as Amazon's AWS, Google's GCP, or Microsoft's Azure, access usually takes weeks, and higher-end GPUs are often unavailable.

Io.net addresses this problem by aggregating underutilized GPU resources from sources such as independent data centers, cryptocurrency miners, and the networks of crypto projects like Filecoin and Render. These resources form a decentralized computing network, enabling engineers to obtain vast computing power through an easily accessible, customizable, cost-effective system.

Source: io.net

Io.net Products Built for Four Core Functionalities

  • Batch Inference and Model Services: Batch data can be processed in parallel by exporting the architecture and weights of trained models to shared object storage. Io.net enables machine learning teams to establish inference and model service workflows across distributed GPU networks.
  • Parallel Training: CPU/GPU memory limitations and sequential processing workflows create significant bottlenecks when training single-device models. Io.net utilizes distributed computing libraries to orchestrate and batch training jobs, enabling parallelism of data and models across many distributed devices.
  • Parallel Hyperparameter Tuning: Hyperparameter tuning experiments are inherently parallel. Io.net uses a distributed computing library with advanced hyperparameter tuning capabilities to find the best results, optimize scheduling, and define search patterns.
  • Reinforcement Learning: Io.net employs an open-source reinforcement learning library that supports production-level, highly distributed RL workloads and a set of simple APIs.
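Because every hyperparameter trial is independent of the others, the "inherently parallel" claim above is easy to illustrate with a stdlib-only sketch. The objective function below is a made-up toy, not io.net's or Ray's API:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(lr):
    """Toy objective standing in for a full training run:
    pretend validation loss is minimized at lr = 0.1."""
    return (lr - 0.1) ** 2

def parallel_search(learning_rates):
    # Trials share no state, so all can run concurrently;
    # on a GPU network each trial would go to a different device.
    with ThreadPoolExecutor() as pool:
        losses = list(pool.map(evaluate, learning_rates))
    # Keep the hyperparameter whose trial had the lowest loss.
    best_loss, best_lr = min(zip(losses, learning_rates))
    return best_lr

grid = [0.001, 0.01, 0.1, 1.0]
print(parallel_search(grid))  # 0.1 yields the lowest toy loss
```

Real tuning libraries add early stopping and smarter search strategies on top of this same embarrassingly-parallel core.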

Io.net Products

IO Cloud

IO Cloud manages dispersed GPU clusters, offering flexible, scalable resource access without the need for expensive hardware investments and infrastructure management. Utilizing a decentralized node network gives machine learning engineers an experience akin to any cloud provider. Integrated seamlessly via the IO-SDK, it offers solutions for AI and Python applications and simplifies the deployment and management of GPU/CPU resources, adapting to changing needs.

Highlights:

  • Global Coverage: Utilizing a CDN-like approach, it globally distributes GPU resources to optimize machine learning services and inference.
  • Scalability and Cost Efficiency: Committed to being the most cost-efficient GPU cloud platform, it is projected to reduce AI/ML project costs by up to 90%.
  • Integration with IO SDK: Enhances the performance of AI projects through seamless integration, creating a unified high-performance environment.
  • Exclusive Features: Provides private access to the OpenAI ChatGPT plugin, simplifying the deployment of training clusters.
  • Support for RAY Framework: Utilizes the RAY distributed computing framework for scalable Python application development.
  • Innovation in Crypto Mining: Aims to revolutionize the crypto mining industry by supporting the ML and AI ecosystems.

IO Worker

Designed to streamline supply-side operations through a web app, IO Worker includes user account management, real-time activity monitoring, temperature and power consumption tracking, installation support, wallet management, security assessment, and profitability analysis. It bridges the gap between AI processing power demands and the supply of underutilized computing resources, facilitating a more cost-effective and smooth AI learning process.

Highlights:

  • Worker Homepage: Provides a dashboard for real-time monitoring of connected devices, supporting functions such as device deletion and renaming.
  • Device Detail Page: Offers comprehensive analysis of devices, including traffic, connection status, and operation history.
  • Add Device Page: Simplifies the device connection process, supporting quick and easy integration of new devices.
  • Earnings and Rewards Page: Tracks earnings and operation history with transaction details available on Solscan.

IO Explorer

IO Explorer aims to provide a window into the workings of the network, offering users comprehensive statistics and operational insights into all aspects of the GPU cloud. Just as Solscan and other blockchain explorers provide visibility into blockchain transactions, IO Explorer brings a similar level of transparency to GPU-driven operations. It enables users to monitor, analyze, and understand the details of the GPU cloud, ensuring complete visibility of network activities, statistics, and transactions while protecting the privacy of sensitive information.

Highlights:

  • Device Page: Displays public details of devices connected to the network, providing real-time data and transaction tracking.
  • Browser Home Page: Offers insights into supply volume, verified suppliers, active hardware numbers, and real-time market pricing.
  • Clusters Page: Shows public information about clusters deployed in the network, along with real-time metrics and reservation details.
  • Real-Time Clusters Monitoring: Provides immediate insights into the status, health, and performance of clusters, ensuring users have the latest information.

IO Architecture

As a fork of Ray, the IO-SDK forms the foundation of Io.net's capabilities, supporting parallel task execution and handling multilingual environments. Its compatibility with mainstream machine learning (ML) frameworks allows Io.net to flexibly and efficiently meet diverse computational demands. This technical setup, supported by a well-defined technical system, ensures that the Io.net platform can meet current needs and adapt to future developments.

Multi-layer Architecture:

  • User Interface Layer: Provides a visual front-end interface for users, including public websites, client areas, and GPU supplier zones, to deliver an intuitive and user-friendly experience.
  • Security Layer: Ensures system integrity and security, incorporating mechanisms like network defense, user authentication, and activity logging.
  • API Layer: As the communication hub for websites, suppliers, and internal management, it facilitates data exchange and operations.
  • Backend Layer: Forms the system’s core and is responsible for managing clusters/GPU, client interactions, and automatic scalability.
  • Database Layer: Handles data storage and management, with primary storage for structured data and caching for temporary data handling.
  • Task Layer: Manages asynchronous communication and task execution, ensuring efficient data processing and flow.
  • Infrastructure Layer: Constitutes the system foundation, including the GPU resource pool, orchestration tools, and execution/ML task processing, equipped with a robust monitoring solution.

IO Tunnels

IO Tunnels facilitate secure connections from clients to remote servers, allowing engineers to bypass firewalls and NAT without complex configurations, enabling remote access.

Workflow: IO Workers first establish a connection with a middle server (i.e., the io.net server). The io.net server then listens for connection requests from IO Workers and engineers’ machines, facilitating data exchange through reverse tunnel technology.
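The reverse-tunnel idea in this workflow — the worker dials *out* to the middle server, so no inbound firewall or NAT rule is ever needed on the worker's side — can be demonstrated with raw sockets. This is a toy sketch of the concept only, not io.net's actual protocol:

```python
import socket
import threading

# The middle server (io.net-server role) listens; it never dials
# the worker, so the worker needs no open inbound ports.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

reply_box = []

def serve():
    conn, _ = srv.accept()          # accept the worker's OUTBOUND call
    conn.sendall(b"run job 42")     # relay an engineer's request to it
    reply_box.append(conn.recv(1024))
    conn.close()

t = threading.Thread(target=serve)
t.start()

# IO Worker role: a single outbound connection that traverses
# NAT/firewalls without any special configuration, then waits for work.
sock = socket.create_connection(("127.0.0.1", port))
request = sock.recv(1024)
sock.sendall(b"done: " + request)
sock.close()

t.join()
srv.close()
print(reply_box[0])  # b'done: run job 42'
```

Because the worker initiated the connection, the server can push jobs back down the same socket — which is exactly what makes the tunnel "reverse."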

(Image Source: io.net, 2024.4.11)

Application in io.net: Engineers can easily connect to IO Workers through the io.net server, overcoming network configuration challenges to achieve remote access and management.

Advantages:

  • Accessibility: Direct connection to IO Workers eliminates network barriers.
  • Security: Ensures communication security, protecting data privacy.
  • Scalability and Flexibility: Efficiently manages multiple IO Workers across different environments.

IO Network

IO Network employs a mesh VPN architecture to provide ultra-low-latency communication between network nodes.

Mesh VPN Features: Unlike traditional hub-and-spoke models, a mesh VPN enables direct, decentralized inter-node connections, enhancing redundancy, fault tolerance, and load distribution.

Advantages for io.net:

  • Direct connections reduce communication delays, enhancing application performance.
  • No single point of failure ensures the network continues to operate even if an individual node fails.
  • Enhances user privacy protection by increasing the complexity of data tracking and analysis.
  • Easy integration of new nodes without affecting network performance.
  • Facilitates resource sharing and efficient processing among nodes.
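The no-single-point-of-failure claim above can be made concrete with a toy connectivity check: removing the hub disconnects a hub-and-spoke topology, while a full mesh survives the loss of any one node. This is a pure illustration of the topology argument, not real VPN code:

```python
from collections import deque

def connected_after_removal(nodes, edges, removed):
    """True if the surviving nodes still form one connected component."""
    alive = [n for n in nodes if n != removed]
    if not alive:
        return True
    adj = {n: set() for n in alive}
    for a, b in edges:
        if a != removed and b != removed:
            adj[a].add(b)
            adj[b].add(a)
    # Breadth-first search from one survivor; check we reach all of them.
    seen, queue = {alive[0]}, deque([alive[0]])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen == set(alive)

nodes = ["hub", "a", "b", "c"]
spoke = [("hub", "a"), ("hub", "b"), ("hub", "c")]
mesh = spoke + [("a", "b"), ("b", "c"), ("a", "c")]

print(connected_after_removal(nodes, spoke, "hub"))  # False: spokes isolated
print(connected_after_removal(nodes, mesh, "hub"))   # True: traffic reroutes
```

The same check passes for the mesh no matter which single node is removed, which is the redundancy property the bullet list describes.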

Source: io.net

Comparison of Decentralized Computing Platforms

Akash and Render Network

Both Akash and Render Network are decentralized computing networks that allow users to buy and sell computing resources. Akash operates as an open market, offering CPU, GPU, and storage resources where users set prices and conditions and providers bid to deploy tasks. In contrast, Render focuses on GPU rendering services, with resources supplied by hardware providers and prices adjusted based on market conditions. Render is not an open market; instead, it uses a multi-tier dynamic pricing algorithm to match service buyers with providers.

Io.net and Bittensor

Io.net focuses on artificial intelligence and machine learning workloads, utilizing a decentralized computing network to harness GPU computing power scattered around the globe and collaborating with networks like Render. Its primary distinctions are this AI/ML focus and its emphasis on orchestrating GPU clusters.

Bittensor is an AI-focused blockchain project aiming to create a decentralized machine-learning market that competes with centralized projects. Using a subnet structure, it focuses on various AI-related tasks, such as text prompt AI networks and image generation AI. Miners in the Bittensor ecosystem provide computing resources and host machine learning models, computing for off-chain AI tasks, and competing to offer the best results for users.

Source: TokenInsight

Conclusion

Io.net is poised to significantly impact the promising AI computing market, backed by a seasoned technical team and strong support from well-known entities such as Multicoin Capital, Solana Ventures, OKX Ventures, Aptos Labs, and Delphi Digital. As one of the first GPU DePIN networks, io.net provides a platform that connects computing power providers with users, demonstrating its functionality and efficiency in delivering distributed GPU training and inference workflows for machine learning teams.

Author: Allen
Translator: Paine
Reviewer(s): KOWEI, Piccolo, Elisa, Ashley, Joyce
* The information is not intended to be and does not constitute financial advice or any other recommendation of any sort offered or endorsed by Gate.io.
* This article may not be reproduced, transmitted, or copied without referencing Gate.io. Contravention is an infringement of the Copyright Act and may be subject to legal action.
