At first glance, AI and Web3 appear to be independent technologies, each based on fundamentally different principles and serving distinct functions. However, a deeper exploration reveals that these two technologies have the potential to balance each other’s trade-offs, with their unique strengths complementing and enhancing one another. Balaji Srinivasan eloquently articulated this concept of complementary capabilities at the SuperAI conference, sparking a detailed comparison of how these technologies interact.
Tokens emerged from a bottom-up approach, rising from the decentralized efforts of anonymous network enthusiasts and evolving over a decade through the collaborative efforts of numerous independent entities worldwide. In contrast, artificial intelligence has been developed through a top-down approach, dominated by a few tech giants that set the pace and dynamics of the industry. The barriers to entry in AI are determined more by resource intensity than by technical complexity.
These two technologies also have fundamentally different natures. Tokens are deterministic systems that produce immutable, predictable results: given the same input, a hash function or a zero-knowledge proof always yields the same output. This contrasts sharply with the probabilistic and often unpredictable nature of AI.
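To make this contrast concrete, here is a minimal Python sketch: a hash function maps the same input to the same digest on every run and on every machine, while a sampled model output can vary from run to run. The sample_next_token function is a hypothetical stand-in for an LLM's sampling step, not any particular library's API.

```python
import hashlib
import random

# Deterministic: the same input always produces the same digest.
digest_1 = hashlib.sha256(b"hello web3").hexdigest()
digest_2 = hashlib.sha256(b"hello web3").hexdigest()
assert digest_1 == digest_2  # holds on any machine, at any time

# Probabilistic: a (mock) sampling step can return different outputs
# for an identical prompt, just as real LLM inference does whenever
# the sampling temperature is above zero.
def sample_next_token(prompt: str) -> str:
    vocabulary = ["apple", "banana", "cherry"]
    return random.choice(vocabulary)  # hypothetical stand-in for model sampling

print(sample_next_token("the fruit is"))  # may differ across runs
print(sample_next_token("the fruit is"))
```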
Similarly, cryptographic technology excels at verification, ensuring the authenticity and security of transactions and establishing trustless processes and systems, while AI excels at generation, creating rich digital content. In digital content creation, however, ensuring content provenance and preventing identity theft remain open challenges.
Fortunately, tokens provide a counterpoint to digital abundance—digital scarcity. They offer relatively mature tools that can be applied to AI technologies to ensure content provenance and address identity theft issues.
A notable advantage of tokens is their ability to attract substantial hardware and capital into coordinated networks to serve specific goals. This capability is particularly beneficial for AI, which consumes large amounts of computing power. Mobilizing underutilized resources to provide more affordable computing power can significantly enhance AI efficiency.
By comparing these two technologies, we not only appreciate their individual contributions but also see how they can together pave new paths in technology and economics. Each technology can address the shortcomings of the other, creating a more integrated and innovative future. This blog post aims to explore the emerging AI x Web3 industry landscape, focusing on some new verticals at the intersection of these technologies.
Source: IOSG Ventures
Computing networks serve two main functions: training and inference. Demand for these networks comes from both Web 2.0 and Web 3.0 projects. In the Web 3.0 space, projects like Bittensor utilize computing resources for model fine-tuning. On the inference side, Web 3.0 projects emphasize the verifiability of the process. This focus has led to the emergence of verifiable inference as a market vertical, with projects exploring how to integrate AI inference into smart contracts while maintaining decentralization principles.
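One commonly discussed approach to verifiable inference is optimistic re-execution: the prover publishes commitments to the model weights, the input, and the output, and any verifier holding the same weights can re-run the deterministic inference and compare. The sketch below illustrates only that idea; run_inference is a hypothetical placeholder for temperature-zero inference, and production systems rely on zero-knowledge or fraud proofs rather than this naive scheme.

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def run_inference(weights: bytes, prompt: str) -> str:
    # Hypothetical deterministic model: a pure function of (weights, prompt).
    # A real system would run temperature-0 inference on actual weights.
    return h(weights + prompt.encode())[:16]

def prove(weights: bytes, prompt: str) -> dict:
    # The prover publishes commitments rather than the raw weights.
    output = run_inference(weights, prompt)
    return {
        "model_commitment": h(weights),
        "input_commitment": h(prompt.encode()),
        "output_commitment": h(output.encode()),
        "output": output,
    }

def verify(claim: dict, weights: bytes, prompt: str) -> bool:
    # A verifier with the same weights re-executes and compares commitments.
    return (
        h(weights) == claim["model_commitment"]
        and h(prompt.encode()) == claim["input_commitment"]
        and h(run_inference(weights, prompt).encode()) == claim["output_commitment"]
    )

weights = b"model-v1-weights"
claim = prove(weights, "What is 2 + 2?")
print("verified:", verify(claim, weights, "What is 2 + 2?"))
```

A smart contract would store only the commitments and accept the output once verification (or a challenge window) passes, keeping the heavy computation off-chain.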
Source: IOSG Ventures
In the integration of AI and Web3, data is a core component. Data is a strategic asset in AI competition, constituting a key resource alongside compute. However, this category is often overlooked because most industry attention is focused on the computing layer. In reality, Web3 primitives open up many interesting value directions in data acquisition, mainly along the following two high-level paths:
Accessing public internet data
Accessing protected data
Accessing public internet data: This direction aims to build a distributed crawler network that can crawl the entire internet within a few days, acquiring massive datasets or accessing very specific internet data in real time. However, crawling large datasets from the internet places heavy demands on the network: at least a few hundred nodes are needed before any meaningful work can begin. Fortunately, Grass, a distributed crawler node network, already has over 2 million nodes actively sharing internet bandwidth with the network, aiming to crawl the entire internet. This demonstrates the great potential of economic incentives in attracting valuable resources.
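For intuition, here is a toy sketch of how such a network might shard crawl work across nodes, with threads standing in for independent bandwidth-sharing nodes. This is a hypothetical illustration using only the Python standard library, not a description of Grass's actual protocol.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URLS = [
    "https://example.com",
    "https://example.org",
    "https://example.net",
]

def crawl(url: str) -> dict:
    # Each node fetches a page and reports a content hash, letting the
    # coordinator deduplicate results and attribute completed work.
    try:
        body = urlopen(url, timeout=10).read()
        return {"url": url, "ok": True, "sha256": hashlib.sha256(body).hexdigest()}
    except OSError as exc:
        return {"url": url, "ok": False, "error": str(exc)}

# A real network would assign shards of the URL frontier to thousands of
# nodes and reward completed work with tokens; here threads play that role.
with ThreadPoolExecutor(max_workers=3) as pool:
    for result in pool.map(crawl, URLS):
        print(result)
```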
Although Grass provides a level playing field for public data, the challenge of tapping latent data, namely access to proprietary datasets, remains. A large amount of data is still stored in a privacy-protected manner due to its sensitive nature. Many startups are building cryptographic tools that enable AI developers to build and fine-tune large language models on the underlying data of proprietary datasets while keeping sensitive information private.
Technologies such as federated learning, differential privacy, trusted execution environments, fully homomorphic encryption, and multi-party computation offer different levels of privacy protection and different trade-offs. Bagel's research paper provides an excellent overview of these technologies. They not only protect data privacy during the machine learning process but also enable comprehensive privacy-preserving AI solutions at the computing layer.
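Of these, differential privacy is the easiest to illustrate in a few lines. The sketch below shows the core of a DP-SGD-style update, assuming NumPy: each example's gradient is clipped to a norm bound, and Gaussian noise calibrated to that bound is added before averaging, so no single training example dominates the update. A real deployment would also need a privacy accountant to track the cumulative privacy budget, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_average_gradient(per_example_grads: np.ndarray,
                        clip_norm: float = 1.0,
                        noise_multiplier: float = 1.1) -> np.ndarray:
    """DP-SGD core step: clip each per-example gradient to clip_norm,
    then add Gaussian noise scaled to that bound before averaging."""
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return (np.sum(clipped, axis=0) + noise) / n

# 32 per-example gradients of dimension 10, as a stand-in for a batch.
grads = rng.normal(size=(32, 10))
print(dp_average_gradient(grads))
```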
Data and model provenance technologies aim to establish processes that assure users they are interacting with the intended models and data, and they provide guarantees of authenticity and provenance. For instance, watermarking, a type of model provenance technology, embeds a signature directly into the machine learning model, more specifically into its weights, so that at retrieval time one can verify whether an inference originated from the intended model.
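To make this concrete, here is a minimal white-box watermarking sketch in the spirit of schemes like Uchida et al.'s: a secret projection matrix maps the weight vector to a bit string, embedding perturbs the weights slightly so the projection carries the owner's bits, and anyone holding the secret key can extract the bits to verify provenance. This is an illustrative toy under those assumptions, not any specific project's scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

n_weights, n_bits = 1000, 32
weights = rng.normal(size=n_weights)                # stand-in model weights
secret_key = rng.normal(size=(n_bits, n_weights))   # held by the model owner
message = rng.integers(0, 2, size=n_bits)           # the owner's watermark bits

# Embed: find a small perturbation so the secret projection of the weights
# lands on a signed margin encoding the message (least-squares solution).
target = (2 * message - 1) * 0.5                    # +0.5 for bit 1, -0.5 for bit 0
delta, *_ = np.linalg.lstsq(secret_key, target - secret_key @ weights, rcond=None)
marked = weights + delta

# Verify: extract the bits with the secret key and compare to the message.
extracted = (secret_key @ marked > 0).astype(int)
print("watermark intact:", np.array_equal(extracted, message))
print("relative perturbation:", np.linalg.norm(delta) / np.linalg.norm(weights))
```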
In terms of applications, the design possibilities are limitless. In the industry landscape above, we have listed some particularly anticipated development cases as AI technology is applied in the Web 3.0 field. Since these use cases are mostly self-explanatory, we will not comment further. However, it is worth noting that the intersection of AI and Web 3.0 has the potential to reshape many verticals within the field, as these new primitives offer developers more freedom to create innovative use cases and optimize existing ones.
The integration of AI and Web3 brings a landscape full of innovation and potential. By leveraging the unique advantages of each technology, we can address various challenges and open up new technological pathways. As we explore this emerging industry, the synergy between AI and Web3 can drive progress, reshape our future digital experiences, and transform how we interact online.
The fusion of digital scarcity and digital abundance, the mobilization of underutilized resources to achieve computational efficiency, and the establishment of secure, privacy-protecting data practices will define the era of next-generation technological evolution.
However, we must recognize that this industry is still in its infancy, and the current landscape may quickly become outdated. The rapid pace of innovation means that today’s cutting-edge solutions might soon be replaced by new breakthroughs. Nevertheless, the fundamental concepts discussed—such as computational networks, agent platforms, and data protocols—highlight the immense possibilities of integrating AI with Web3.
This article is reproduced from [深潮TechFlow]; the copyright belongs to the original author [IOSG Ventures]. If you have any objections to the reprint, please contact the Gate Learn team, which will handle the matter as soon as possible according to the relevant procedures.
Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.
Other language versions of this article are translated by the Gate Learn team. Unless Gate.io is mentioned, the translated article may not be reproduced, distributed, or plagiarized.