Why are we bullish on Bittensor?

Intermediate | 4/16/2024, 7:28:07 AM
The Bittensor ecosystem combines strong inclusivity, a competitive environment, and effective incentive mechanisms. This article walks through Bittensor's planned protocol upgrades and its leading Subnets, showing how the network encourages effective competition to produce high-quality artificial intelligence products.

First things first, what exactly is Bittensor?

Bittensor itself is not an AI product, nor does it produce or provide any AI products or services. Bittensor is an economic system that serves as an optimizer for the AI product market by providing a highly competitive incentive system for AI product producers. In the Bittensor ecosystem, high-quality producers receive more incentives, while less competitive producers are gradually eliminated.

So, how does Bittensor specifically create this incentive mechanism that encourages effective competition and promotes the organic production of high-quality AI products?

Bittensor flywheel model

Bittensor achieves this goal through a flywheel model. Validators evaluate the quality of AI products in the ecosystem and distribute incentives accordingly, ensuring that high-quality producers receive more rewards. This stimulates a continuous increase in high-quality output, which enhances the value of the Bittensor network and drives appreciation of TAO. The appreciation of TAO not only attracts more high-quality producers to the Bittensor ecosystem but also raises the cost for attackers trying to manipulate the quality evaluations. This further strengthens the consensus of honest Validators and improves the objectivity and fairness of the evaluation results, yielding an ever more effective competition and incentive mechanism.

Ensuring the fairness and objectivity of evaluation results is a crucial step in turning the flywheel. This is also the core technology of Bittensor, namely the abstract validation system based on Yuma Consensus.

So, what is Yuma Consensus and how does it ensure that the quality evaluation results after consensus are fair and objective?

Yuma Consensus is a consensus mechanism designed to compute final evaluation results from the diverse evaluations provided by numerous Validators. As with Byzantine fault-tolerant consensus mechanisms, as long as the majority of Validators in the network are honest, the network eventually reaches the correct decision. Assuming honest Validators provide objective evaluations, the post-consensus evaluation results will also be fair and objective.

Taking the evaluation of Subnets’ quality as an example: Root Network Validators evaluate and rank the output quality of each Subnet. The evaluations from the 64 Root Network Validators are aggregated, and the final result is produced by the Yuma Consensus algorithm and then used to allocate newly minted TAO to each Subnet.
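To make the aggregation step concrete, here is a minimal sketch, assuming a simple stake-weighted-median clipping rule; it is not the actual Yuma Consensus implementation, and the function names and the outlier-clipping choice are illustrative assumptions.

```python
import numpy as np

def stake_weighted_median(values: np.ndarray, stakes: np.ndarray) -> float:
    """Value at which the cumulative stake of reporting Validators reaches 50%."""
    order = np.argsort(values)
    cumulative = np.cumsum(stakes[order]) / stakes.sum()
    return values[order][np.searchsorted(cumulative, 0.5)]

def consensus_emissions(weights: np.ndarray, stakes: np.ndarray) -> np.ndarray:
    """
    weights: (n_validators, n_subnets) matrix of evaluations, each row sums to 1.
    stakes:  (n_validators,) stake behind each Validator.
    Returns each Subnet's share of newly minted TAO.
    """
    n_subnets = weights.shape[1]
    consensus = np.array([stake_weighted_median(weights[:, j], stakes)
                          for j in range(n_subnets)])
    # Clip each Validator's report to the consensus value so that an outlier
    # cannot unilaterally inflate a Subnet's score.
    clipped = np.minimum(weights, consensus)
    emissions = stakes @ clipped            # stake-weighted sum per Subnet
    return emissions / emissions.sum()      # normalize into allocation ratios

# Example: three equally staked Validators rank four Subnets; the third tries
# to pump Subnet 0 but gets clipped back toward the majority view.
w = np.array([[0.40, 0.30, 0.20, 0.10],
              [0.35, 0.35, 0.20, 0.10],
              [0.90, 0.05, 0.03, 0.02]])
s = np.array([1.0, 1.0, 1.0])
print(consensus_emissions(w, s))
```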

Currently, Yuma Consensus indeed has room for improvement:

  1. Root Network Validators may not fully represent all TAO holders, and the evaluation results they provide may not necessarily reflect a wide range of viewpoints. Additionally, the evaluations from a few top Validators may not always be objective. Even if instances of bias are identified, they may not be corrected immediately.
  2. The presence of Root Network Validators limits the number of Subnets that Bittensor can accommodate. To compete with centralized AI giants, having only 32 Subnets is insufficient. However, even with 32 Subnets, Root Network Validators may struggle to effectively monitor all of them.
  3. Validators may not have a strong inclination to migrate to new Subnets. In the short term, Validators lose some rewards when moving from an older Subnet with higher emission to a new Subnet with lower emission. The uncertainty over whether the new Subnet's emission will ever catch up, coupled with the guaranteed loss of rewards in the meantime, dampens their willingness to migrate.

Bittensor is also planning upgrade mechanisms to address these shortcomings:

  1. Dynamic TAO will decentralize the power to evaluate Subnet quality by distributing it to all TAO holders rather than a few Validators. TAO holders will be able to indirectly determine each Subnet's allocation ratio through staking (a simplified sketch of this idea follows this list).
  2. Without the limitations of Root Network Validators, the maximum number of active Subnets will be increased to 1024. This will significantly lower the barrier for new teams to join the Bittensor ecosystem, leading to fiercer competition among Subnets.
  3. Validators that migrate to new Subnets earlier are likely to receive higher rewards: early migration means purchasing that Subnet's dTAO at a lower price, increasing the likelihood of receiving more TAO in the future.
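Since Dynamic TAO had not launched at the time of writing, the following is a deliberately simplified, hypothetical sketch of the idea in item 1: newly minted TAO flows to Subnets in proportion to the TAO staked behind each of them. The class and function names, and the proportional split itself, are illustrative assumptions rather than the final dTAO design.

```python
from dataclasses import dataclass

@dataclass
class SubnetPool:
    """Hypothetical per-Subnet staking pool under a dTAO-like design."""
    name: str
    tao_staked: float = 0.0

def allocate_emissions(pools: list, block_emission: float) -> dict:
    """Split newly minted TAO across Subnets in proportion to the TAO staked
    behind each one (an illustrative rule, not the final dTAO mechanism)."""
    total = sum(p.tao_staked for p in pools)
    if total == 0:
        # No stake anywhere yet: fall back to an equal split (illustrative only).
        return {p.name: block_emission / len(pools) for p in pools}
    return {p.name: block_emission * p.tao_staked / total for p in pools}

pools = [SubnetPool("text-to-speech", 120_000.0),
         SubnetPool("search", 80_000.0),
         SubnetPool("new-subnet", 5_000.0)]
print(allocate_emissions(pools, block_emission=7_200.0))
```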

Strong inclusivity is another major advantage of Yuma Consensus. Yuma Consensus is used not only to determine each Subnet's emission but also to decide the allocation ratio of each Miner and Validator within a Subnet. Moreover, regardless of a Miner's task, the contributions it involves, including computing power, data, human input, and intelligence, are all treated abstractly. Therefore, any stage of AI product production can plug into the Bittensor ecosystem, earning incentives while also enhancing the value of the Bittensor network.

Next, let’s explore some leading Subnets and observe how Bittensor incentivizes the output of these Subnets.

Subnet #3 Myshell TTS

GitHub: github.com/myshell-ai/MyShell-TTS-Subnet

Emission: 3.46% (2024-04-09)

Background: Myshell is the team behind Myshell TTS (Text-to-Speech), with core members from renowned institutions such as MIT, Oxford University, and Princeton University. Myshell aims to build a no-code platform so that even college students without programming backgrounds can easily create the bots they want. Focusing on TTS, audiobooks, and virtual assistants, Myshell launched its first voice chatbot, Samantha, in March 2023. As its product matrix has expanded, it has amassed over a million registered users to date. The platform hosts various types of bots, including language-learning, educational, and utility-focused ones.

Positioning: Myshell launched this Subnet to gather the wisdom of the entire open-source community and build the best open-source TTS models. In other words, Myshell TTS does not directly run models or handle end users’ requests; instead, it is a network for training TTS models.

Myshell TTS Architecture

The process run by Myshell TTS is illustrated in the diagram above. Miners are responsible for training models and uploading the trained models to the Model Pool (the metadata of the models is also stored in the Bittensor blockchain network); Validators evaluate the models by generating test cases, assessing model performance, and scoring based on the results; the Bittensor blockchain is responsible for aggregating weights using Yuma Consensus, determining the final weights and allocation ratios for each Miner.
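As a rough sketch of this validation loop (not MyShell's actual subnet code; the dummy model, the test sentences, and the quality metric are placeholders), a validator's evaluation round might look like this:

```python
import random

class DummyTTSModel:
    """Stand-in for a Miner's TTS checkpoint pulled from the model pool."""
    def __init__(self, base_quality: float):
        self.base_quality = base_quality

    def synthesize_and_rate(self, sentence: str) -> float:
        # Placeholder: a real Validator would synthesize audio for the sentence
        # and rate it (e.g. with an automatic quality predictor); here we just
        # return the model's nominal quality plus a little noise.
        return min(1.0, max(0.0, self.base_quality + random.uniform(-0.05, 0.05)))

def validation_round(miner_models: dict) -> dict:
    """Score every Miner's latest model on fresh test cases and normalize the
    scores into the weights the Validator would submit on-chain."""
    test_sentences = [
        "The quick brown fox jumps over the lazy dog.",
        "Decentralized networks reward measurable quality.",
    ]
    raw = {uid: sum(m.synthesize_and_rate(s) for s in test_sentences) / len(test_sentences)
           for uid, m in miner_models.items()}
    total = sum(raw.values()) or 1.0
    return {uid: score / total for uid, score in raw.items()}

miners = {"miner_a": DummyTTSModel(0.82), "miner_b": DummyTTSModel(0.67)}
print(validation_round(miners))
```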

In short, Miners must continuously submit higher-quality models to sustain their rewards.

Currently, Myshell has also launched a demo on its platform for users to try out the models in Myshell TTS.

In the future, as models trained by Myshell TTS become more reliable, there will be more use cases coming online. Moreover, as open-source models, they will not only be limited to Myshell but can also be expanded to other platforms. Isn’t training and incentivizing open-source models through such decentralized approaches exactly what we aim for in Decentralized AI?

Subnet #5 Open Kaito

GitHub: github.com/OpenKaito/openkaito

Emission: 4.39% (2024-04-09)

Background: Open Kaito is backed by the team behind Kaito.ai, whose core members have extensive AI experience from top-tier companies such as AWS, Meta, and Citadel. Before venturing into the Bittensor Subnet, they launched their flagship product, Kaito.ai, a Web3 off-chain data search engine, in Q4 2023. Leveraging AI algorithms, Kaito.ai optimizes core components of a search engine, including data collection, ranking, and retrieval, and it has earned a reputation as one of the premier information-gathering tools in the crypto community.

Positioning: Open Kaito aims to establish a decentralized indexing layer to support intelligent search and analysis. A search engine is not simply a database or ranking algorithm but a complex system. Moreover, an effective search engine also requires low latency, posing additional challenges for building a decentralized version. Fortunately, with the incentive system of Bittensor, these challenges are expected to be addressed.

Open Kaito Architecture

The operation process of Open Kaito is illustrated in the diagram above. Rather than simply decentralizing each component of a search engine, Open Kaito defines the indexing problem as a Miner-Validator problem: Miners respond to users' indexing requests, while Validators distribute the requests and score Miners' responses.

Open Kaito does not restrict how Miners complete indexing tasks; it focuses on the final results Miners produce, encouraging innovative solutions and fostering healthy competition among Miners. Faced with users' indexing requests, Miners strive to improve their execution plans, delivering higher-quality responses with fewer resources.
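A minimal sketch of this Miner-Validator split, under the assumption of a toy keyword-overlap "miner" and a precision-at-k score on the validator side; Open Kaito's real protocol and relevance scoring are more involved:

```python
from typing import Callable

# A "Miner" here is any function that maps a query to a ranked list of doc ids.
Miner = Callable[[str], list]

def keyword_miner(query: str) -> list:
    """Toy backend: a real Miner could use any index, ranker, or model it likes."""
    docs = {"d1": "bittensor subnet incentives explained",
            "d2": "tao staking and delegation guide",
            "d3": "how to cook pasta at home"}
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(terms & set(docs[d].split())))

def precision_at_k(ranked: list, relevant: set, k: int = 2) -> float:
    """Validator-side score: the fraction of the top-k results that are relevant."""
    top = ranked[:k]
    return sum(1 for d in top if d in relevant) / k

# The Validator sends the same query to each Miner and scores the responses.
query, relevant_docs = "bittensor staking", {"d1", "d2"}
for miner_id, miner in {"miner_x": keyword_miner}.items():
    print(miner_id, precision_at_k(miner(query), relevant_docs))
```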

Subnet #6 Nous Finetuning

GitHub: github.com/NousResearch/finetuning-subnet

Emission: 6.26% (2024-04-09)

Background: The team behind Nous Finetuning comes from Nous Research, a research group focused on large language model (LLM) architectures, data synthesis, and on-device inference. Among its co-founders is a former Principal Engineer at Eden Network.

Positioning: Nous Finetuning is a subnet dedicated to fine-tuning large language models. Moreover, the data used for fine-tuning also comes from the Bittensor ecosystem, specifically Subnet #18.

The operation process of Nous Finetuning is similar to that of Myshell TTS. Miners train models on data from Subnet #18 and regularly release them to be hosted on Hugging Face; Validators evaluate the models and provide ratings; and the Bittensor blockchain again aggregates weights via Yuma Consensus, determining the final weights and emissions for each Miner.
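A rough sketch of the validator side, assuming the Hugging Face `transformers` library; the evaluation texts are placeholders, `gpt2` stands in for a Miner's fine-tuned checkpoint, and scoring by perplexity with a simple normalization is a simplification of the subnet's actual incentive rules:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def eval_loss(repo_id: str, texts: list) -> float:
    """Average causal-LM loss of a hosted checkpoint on held-out texts."""
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    model.eval()
    losses = []
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt")
            out = model(**enc, labels=enc["input_ids"])
            losses.append(out.loss.item())
    return sum(losses) / len(losses)

# Placeholder evaluation texts; the real subnet draws them from Subnet #18.
texts = ["An example prompt and completion sampled from Subnet #18."]
# "gpt2" stands in for Miners' fine-tuned repos hosted on Hugging Face.
candidates = {"miner_a": "gpt2"}
scores = {uid: math.exp(-eval_loss(repo, texts)) for uid, repo in candidates.items()}
weights = {uid: s / sum(scores.values()) for uid, s in scores.items()}
print(weights)
```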

Subnet #18 Cortex.t

GitHub: github.com/corcel-api/cortex.t

Emission: 7.74% (2024-04-09)

Background: The team behind Cortex.t is Corcel.io, which has received support from Mog, the second-largest validator in the Bittensor network. Corcel.io is an application aimed at end-users, providing an experience similar to ChatGPT by leveraging AI products from the Bittensor ecosystem.

Positioning: Cortex.t is positioned as a final layer before delivering results to end-users. It is responsible for detecting and optimizing the outputs of various subnets to ensure that the results are accurate and reliable, especially when a single prompt calls upon multiple models. Cortex.t aims to prevent blank or inconsistent outputs, ensuring a seamless user experience.

Miners in Cortex.t utilize other subnets within the Bittensor ecosystem to handle requests from end-users. They also employ GPT-3.5-turbo or GPT-4 to verify output results, guaranteeing reliability for end-users. Validators assess Miner outputs by comparing them to results generated by OpenAI.
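A simplified sketch of that verification step, assuming the official `openai` Python client; the text-similarity metric is an illustrative stand-in for however the subnet actually scores agreement:

```python
from difflib import SequenceMatcher
from openai import OpenAI  # official OpenAI Python client (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def reference_answer(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Fetch a reference completion from OpenAI for comparison."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def score_miner(prompt: str, miner_output: str) -> float:
    """Score a Miner's response by its textual similarity to the OpenAI
    reference; empty or wildly divergent outputs score near zero."""
    if not miner_output.strip():
        return 0.0
    reference = reference_answer(prompt)
    return SequenceMatcher(None, miner_output, reference).ratio()

# Requires a valid OPENAI_API_KEY in the environment.
print(score_miner("Explain what a Bittensor subnet is in one sentence.",
                  "A Bittensor subnet is an incentivized market for one AI task."))
```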

Subnet #19 Vision

GitHub: github.com/namoray/vision

Emission: 9.47% (2024-04-09)

Background: The development team behind Vision also originates from Corcel.io.

Positioning: Vision aims to maximize the output capacity of the Bittensor network through an optimized subnet-building framework called DSIS (Decentralized Subnet Inference at Scale), which accelerates Miners' responses to Validators. Currently, Vision focuses on image generation.

Validators receive requests from the Corcel.io frontend and distribute them to Miners. Miners are free to choose their preferred technology stack (not limited to specific models) to process the requests and generate responses, and Validators then score their performance. Thanks to DSIS, Vision can respond to these requests more quickly and efficiently than other Subnets.
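As a rough illustration of the "inference at scale" idea (this is not the DSIS framework itself; the simulated Miners and the quality/latency scoring rule are placeholder assumptions), a Validator could fan a request out to many Miners concurrently and reward both output quality and speed:

```python
import asyncio
import random
import time

async def query_miner(miner_id: str, prompt: str):
    """Placeholder Miner call: a real Miner runs its own image-generation stack."""
    started = time.perf_counter()
    await asyncio.sleep(random.uniform(0.1, 0.5))          # simulated generation time
    image = f"image-bytes-from-{miner_id}".encode()
    return miner_id, image, time.perf_counter() - started

async def fan_out(prompt: str, miners: list) -> dict:
    """Query all Miners concurrently, then score each on quality and latency."""
    results = await asyncio.gather(*(query_miner(m, prompt) for m in miners))
    scores = {}
    for miner_id, image, latency in results:
        quality = random.uniform(0.5, 1.0)             # placeholder for a real image score
        scores[miner_id] = quality / (1.0 + latency)   # reward fast, high-quality output
    return scores

print(asyncio.run(fan_out("a cat in a spacesuit", ["miner_1", "miner_2", "miner_3"])))
```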

Summary

From the examples above, it is evident that Bittensor exhibits a high degree of inclusivity. The generation by Miners and validation by Validators occur off-chain, with the Bittensor network serving solely to allocate rewards to each Miner based on the evaluation from Validators. Any aspect of AI product generation that fits the Miner-Validator architecture can be transformed into a Subnet.


In theory, competition among Subnets should be intense. For any Subnet to continue receiving rewards, it must consistently produce high-quality outputs. Otherwise, if a Subnet’s output is deemed low-value by Root Network Validators, its allocation may decrease, and it could eventually be replaced by a new Subnet.

However, in reality, we have indeed observed some issues:

  1. Redundancy and duplication of resources due to similar positioning of Subnets. Among the existing 32 Subnets, there are multiple Subnets focusing on popular directions such as text-to-image, text prompt, and price prediction.
  2. Existence of Subnets without practical use cases. While price prediction Subnets may hold theoretical value as oracle providers, the current performance of prediction data is far from being usable by end-users.
  3. Instances of “bad money driving out good.” Certain top Validators may not be inclined to migrate to new Subnets even when those Subnets demonstrate significantly higher quality; without that capital support, the new Subnets may not receive sufficient emissions in the short term. Since new Subnets have a protection period of only 7 days after launch, those that fail to quickly accumulate adequate emissions risk being phased out and going offline.

These issues reflect insufficient competition among Subnets, and some Validators have not played a role in encouraging effective competition.

The Open Tensor Foundation Validator (OTF) has taken some temporary measures to alleviate this situation. As the largest Validator, holding 23% of staking power (including delegation), OTF provides a channel for Subnets to compete for more staked TAO: Subnet owners can submit a weekly request to OTF to adjust the share of OTF's staked TAO allocated to their Subnet. These requests must cover 10 aspects, including “Subnet goals and contributions to the Bittensor ecosystem,” “Subnet reward mechanism,” “Communication protocol design,” “Data sources and security,” “Computational requirements,” and “Roadmap,” to support OTF's final decision.

However, to address this issue at its root, on the one hand we urgently need the launch of dTAO (Dynamic TAO; see the author's earlier article, @0xai.dev/what-is-the-impact-of-dynamic-tao-on-bittensor-efcc8ebe4e27), which is designed to fix the unreasonable dynamics described above. On the other hand, we can appeal to large Validators holding significant amounts of staked TAO to weigh the long-term development of the Bittensor ecosystem more from an “ecosystem development” perspective than from a purely “financial return” one.

In conclusion, with its strong inclusivity, fierce competitive environment, and effective incentive mechanism, we believe the Bittensor ecosystem can organically produce high-quality AI products. Although not every output from existing Subnets can yet rival centralized products, remember that the current Bittensor architecture has just turned one year old (Subnet #1 was registered on April 13, 2023). For a platform with the potential to rival centralized AI giants, perhaps we should focus on proposing practical improvements rather than hastily criticizing its shortcomings. After all, none of us wants to see AI controlled by a handful of giants.

Disclaimer:

  1. This article is reprinted from [Medium]; all copyrights belong to the original author [0xai]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.
