Forward the Original Title: ‘Metrics Ventures Research Report | Starting from Vitalik’s Article, Which Crypto×AI Verticals Are Worth Watching?’
Decentralization is the consensus that blockchains maintain, security is their core principle, and openness is the key foundation that, from a cryptographic perspective, gives on-chain behavior these characteristics. This framing has held through several rounds of blockchain revolutions over the past few years. However, when artificial intelligence gets involved, the situation changes.
Imagine designing the architecture of a blockchain or an application with artificial intelligence. In that case the model needs to be open source, but open-sourcing it exposes it to adversarial machine learning attacks, while keeping it closed sacrifices decentralization. When introducing artificial intelligence into existing blockchains or applications, we therefore have to consider in what way, and to what extent, the integration should happen.
Source: DE UNIVERSITY OF ETHEREUM
In the article ‘When Giants Collide: Exploring the Convergence of Crypto x AI’ from DE UNIVERSITY OF ETHEREUM (@ueth), the differences in core characteristics between artificial intelligence and blockchain are outlined. As shown in the figure above, the characteristics of artificial intelligence are centralization, low transparency, high energy consumption, monopolistic tendencies, and weak monetary attributes.
Blockchain is the exact opposite of artificial intelligence on every one of these dimensions, and that contrast is the real crux of Vitalik’s article: if artificial intelligence and blockchain are to be combined, the applications born of the combination must make trade-offs in data ownership, transparency, monetization, energy costs, and so on. Beyond that, the infrastructure needed to make the integration effective must also be built.
Following the above criteria and his own thinking, Vitalik categorizes applications formed by the combination of artificial intelligence and blockchain into four main types: AI as a participant, AI as an interface, AI as the rules, and AI as the goal.
Among them, the first three mainly represent three ways in which AI is introduced into the Crypto world, at three levels of depth from shallow to deep. In the author’s understanding, this classification reflects the extent to which AI influences human decision-making, and thus how much systemic risk it introduces into the Crypto world as a whole:
Finally, the fourth category of projects aims to leverage the characteristics of Crypto to create better artificial intelligence. As mentioned earlier, centralization, low transparency, energy consumption, monopolistic tendencies, and weak monetary attributes can naturally be mitigated by the properties of Crypto. Although many are skeptical about whether Crypto can really affect the development of artificial intelligence, Crypto’s most fascinating narrative has always been its ability to influence the real world through decentralization, and this grand vision has made the category the most intensely speculated part of the AI track.
In mechanisms where AI participates, the ultimate source of incentives comes from a protocol whose inputs originate with humans. Before AI becomes an interface, or even the rules, we typically need to evaluate the performance of different AIs by letting them participate in a mechanism and receive rewards or penalties through an on-chain process.
When AI acts as a participant, the risks to users and to the system as a whole are generally negligible compared with AI as an interface or as the rules. It can be considered a necessary stage before AI deeply influences user decisions and behavior, so the cost and trade-offs required to fuse artificial intelligence and blockchain at this level are relatively small. This is also the category of products that Vitalik believes already has a high degree of practicality.
In terms of breadth and implementation, many current AI applications fall into this category, such as AI-empowered trading bots and chatbots. The current level of implementation still makes it difficult for AI to serve as an interface, let alone as the rules: users are still comparing and gradually optimizing among different bots, and crypto users have not yet developed habits around AI applications. In Vitalik’s article, Autonomous Agents are also placed in this category.
However, in a narrower sense and from a long-term perspective, we prefer to draw finer distinctions among AI applications and AI agents. Under this category, representative subcategories include:
To some extent, AI games can indeed be classified into this category. Players interact with AI and train their AI characters to better fit their personal preferences, such as aligning more closely with individual tastes or becoming more competitive within the game mechanics. Games serve as a transitional stage for AI before it enters the real world. They also represent a track with relatively low implementation risks and are the easiest for ordinary users to understand. Iconic projects in this category include AI Arena, Echelon Prime, and Altered State Machine.
AI Arena: a player-versus-player (PvP) fighting game in which players train and evolve their in-game characters using AI. The game aims to let ordinary users interact with, understand, and experience AI through play, while giving AI engineers a way to earn income from their algorithms. Each in-game character is an AI-enabled NFT whose Core contains the AI model’s architecture and parameters, stored on IPFS. The parameters of a newly minted NFT are randomly generated, so the character initially acts at random; users improve its strategic ability through imitation learning (IL), and each time a user trains a character and saves progress, the updated parameters are written to IPFS (a sketch of this loop follows the project list).
Altered State Machine: ASM is not an AI game but a protocol for verifying ownership of and trading AI agents. Positioned as a metaverse AI protocol, it is currently integrating with multiple games, including FIFA titles, to bring AI agents into games and the metaverse. ASM uses NFTs to verify and trade AI agents, each consisting of three parts: the Brain (the agent’s intrinsic characteristics), Memories (storing the agent’s learned behavior strategies and model training, linked to the Brain), and the Form (the character’s appearance, etc.). ASM also has a Gym module, including a decentralized GPU cloud provider, to supply compute for agents. Projects currently built on ASM include AIFA (an AI soccer game), Muhammed Ali (an AI boxing game), AI League (a street soccer game in partnership with FIFA), Raicers (an AI-driven racing game), and FLUF World’s Thingies (generative NFTs).
Parallel Colony (PRIME): Echelon Prime is developing Parallel Colony, a game built on large language models (LLMs). Players can interact with and influence their AI avatars, which act autonomously based on their memories and life trajectories. Colony is currently one of the most anticipated AI games; the official whitepaper was recently released, and the announced migration to Solana sparked another wave of excitement and a rise in PRIME’s value.
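To make AI Arena’s train-and-save loop concrete, here is a minimal, runnable sketch of imitation learning with content-addressed parameter storage. The linear policy, the update rule, and the SHA-256 hash standing in for an IPFS CID are all illustrative assumptions, not AI Arena’s actual architecture:

```python
# Toy imitation-learning loop: nudge policy parameters toward the player's
# demonstrated actions, then "save" them as a content hash (IPFS stand-in).
import hashlib, json, random

params = [random.uniform(-1, 1) for _ in range(4)]  # fresh NFT: random policy

def policy(state: list[float]) -> float:
    # Linear policy: the character's action given the game state.
    return sum(w * s for w, s in zip(params, state))

def imitate(state: list[float], player_action: float, lr: float = 0.1) -> None:
    # One imitation-learning step: move the policy toward the player's action.
    error = player_action - policy(state)
    for i, s in enumerate(state):
        params[i] += lr * error * s

def save_progress() -> str:
    # Content-address the updated parameters (stand-in for pinning to IPFS).
    blob = json.dumps(params).encode()
    return hashlib.sha256(blob).hexdigest()

for _ in range(100):  # the player demonstrates; the character imitates
    state = [random.uniform(-1, 1) for _ in range(4)]
    imitate(state, player_action=0.5 * state[0] - 0.2 * state[1])

print("updated parameters stored at (stand-in for IPFS CID):", save_progress())
```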
Predictive capability is the foundation of AI’s future decision-making and behavior. Before AI models are used for real predictions, prediction competitions compare model performance at a meta level: by offering token incentives to data scientists and their models, they continuously foster more efficient, better-performing models and applications suited to the crypto world, creating higher-quality and safer products before AI deeply influences decisions and behavior. As Vitalik notes, prediction markets are a powerful primitive that can be extended to many other kinds of problems. Iconic projects in this track include Numerai and Ocean Protocol.
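As a generic illustration of this incentive mechanism (not Numerai’s or Ocean’s actual payout formula), the sketch below scores submitted forecasts against realized outcomes and splits a fixed token pool in proportion to positive performance:

```python
# Score each model's probability forecasts and allocate a reward pool
# proportionally to (positive) skill. All numbers are illustrative.
realized = [1, 0, 1, 1, 0]  # ground-truth outcomes revealed after the round

submissions = {
    "model_a": [0.9, 0.2, 0.8, 0.7, 0.1],
    "model_b": [0.6, 0.5, 0.5, 0.6, 0.5],
    "model_c": [0.1, 0.9, 0.2, 0.3, 0.8],  # systematically wrong
}

def score(preds: list[float], outcomes: list[int]) -> float:
    # Negative Brier score, shifted so random guessing (all 0.5) scores zero.
    brier = sum((p - y) ** 2 for p, y in zip(preds, outcomes)) / len(outcomes)
    return 0.25 - brier

REWARD_POOL = 1_000  # tokens distributed per round

scores = {name: max(score(p, realized), 0.0) for name, p in submissions.items()}
total = sum(scores.values())
for name, s in scores.items():
    payout = REWARD_POOL * s / total if total else 0.0
    print(f"{name}: score={s:.3f}, payout={payout:.1f} tokens")
```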
AI can help users understand what is happening in the crypto world in simple, accessible language, acting as a mentor and warning of potential risks, thereby lowering Crypto’s entry barriers, reducing user risk, and improving the user experience. The realizable product functions are diverse: risk alerts during wallet interactions, AI-driven intent trading, AI chatbots that answer common questions about crypto, and more. The audience is also expanding beyond ordinary users to developers, analysts, and almost every other group as potential recipients of AI services.
To reiterate what these projects have in common: they do not yet replace humans in making decisions or taking actions; instead, they use AI models to provide information and tools that assist human decision-making and behavior. At this level, the risk of AI malfeasance begins to surface in the system: the AI may supply incorrect information and distort human judgment. This aspect is analyzed thoroughly in Vitalik’s article.
There are many and varied projects that can be classified under this category, including AI chatbots, AI smart contract audits, AI code generation, AI trading bots, and more. It can be said that the vast majority of AI applications are currently at this basic level. Representative projects include:
ChainGPT: ChainGPT relies on artificial intelligence to build a series of crypto tools, such as a chatbot, an NFT generator, news aggregation, smart contract generation and auditing, a trading assistant, a prompt marketplace, and AI cross-chain swaps. However, ChainGPT’s current focus is on project incubation and its Launchpad; it has completed IDOs for 24 projects and 4 Free Giveaways.
This is the most exciting part—enabling AI to replace human decision-making and behavior. Your AI will directly control your wallet, making trading decisions and actions on your behalf. In this category, the author believes it can mainly be divided into three levels: AI applications (especially those with the vision of autonomous decision-making, such as AI automated trading bots, AI DeFi yield bots), Autonomous Agent protocols, and zkML/opML.
AI applications are tools that make decisions in specific domains: they accumulate sector-specific knowledge and data and rely on AI models tailored to particular problems. Note that this article classifies AI applications under both interfaces and rules. In terms of vision they should become independent decision-making agents, but at present neither the effectiveness of AI models nor the security of integrating AI can meet that requirement; even as interfaces they are something of a stretch. AI applications are still at a very early stage, with specific projects introduced earlier.
Autonomous Agents were placed by Vitalik in the first category (AI as participants), but this article groups them into the third category based on their long-term vision. Autonomous Agents use large amounts of data and algorithms to simulate human thinking and decision-making, executing a variety of tasks and interactions. This article focuses mainly on Agent infrastructure, such as communication and network layers, which define Agent ownership, establish identity, set communication standards and methods, connect multiple Agent applications, and enable them to collaborate on decisions and actions.
zkML/opML: uses cryptographic or economic methods to guarantee that an output was produced by a correct model inference process. Security is fatal when introducing AI into smart contracts: smart contracts rely on inputs to generate outputs and automate a series of functions, and if an AI supplies erroneous inputs it introduces major systemic risk to the whole Crypto system. zkML/opML and a series of related potential solutions are therefore the foundation for letting AI act and decide independently.
Together, the three constitute the basic levels of AI as rule operator: zkML/opML is the lowest-level infrastructure ensuring protocol security; Agent protocols establish the Agent ecosystem, enabling collaborative decision-making and action; and AI applications, themselves specific AI Agents, will keep improving their capabilities in particular domains and actually make decisions and act.
The application of AI Agents in the crypto world is natural. From smart contracts to TG Bots to AI Agents, the crypto space is moving toward higher automation and lower user barriers. Smart contracts execute functions automatically through immutable code, but they still rely on external triggers and cannot run autonomously or continuously. TG Bots lower user barriers by letting users interact with the blockchain through natural language, but they can only perform simple, specific tasks and cannot handle transactions centered on user intent. AI Agents, however, possess a degree of independent decision-making capability: they understand natural language and autonomously combine other agents and blockchain tools to accomplish user-specified goals.
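The following toy sketch illustrates that loop: a planner (standing in for the LLM) turns a natural-language goal into a sequence of tool calls that the agent executes. The tools, the hard-coded plan, and all names here are hypothetical placeholders, not any specific agent protocol’s API:

```python
# Minimal agent loop: goal in natural language -> plan -> tool execution.

def swap(token_in: str, token_out: str, amount: float) -> str:
    return f"swapped {amount} {token_in} -> {token_out}"   # stub for a DEX call

def bridge(token: str, chain: str, amount: float) -> str:
    return f"bridged {amount} {token} to {chain}"          # stub for a bridge call

TOOLS = {"swap": swap, "bridge": bridge}

def llm_plan(goal: str) -> list[tuple[str, dict]]:
    # Stand-in for the LLM planner; a real agent would derive this from `goal`.
    return [("swap", {"token_in": "ETH", "token_out": "USDC", "amount": 1.0}),
            ("bridge", {"token": "USDC", "chain": "solana", "amount": 1800.0})]

def run_agent(goal: str) -> None:
    # The agent chooses, sequences, and executes tools until the plan is done.
    for tool_name, args in llm_plan(goal):
        print(TOOLS[tool_name](**args))

run_agent("Move 1 ETH of value to Solana as USDC")
```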
AI Agents are dedicated to significantly improving the user experience of crypto products, while blockchain technology can further enhance the decentralization, transparency, and security of AI Agent operations. Specific assistance includes:
The main projects of this track are as follows:
Zero-knowledge proofs currently have two main application directions: proving a statement without revealing the underlying information (privacy), and proving that an off-chain computation was executed correctly while keeping on-chain verification cheap (succinct verification).
Similarly, the application of ZKP in machine learning can be divided into two categories: protecting the privacy of model parameters or user inputs, and verifying that the inference process itself was carried out correctly.
The author believes that, for crypto, the most important application at present is inference verification, so let us elaborate on its scenarios. From AI as a participant all the way to AI as the rules of the world, we want to bring AI into on-chain processes, but the high computational cost of AI inference makes direct on-chain execution impossible. Moving inference off-chain means tolerating the trust issues of a black box: did the model operator tamper with my input? Did they run the model I specified? By converting an ML model into a ZK circuit we can achieve two things: (1) storing smaller models on-chain, since keeping a small zkML model inside a smart contract directly resolves the opacity issue; (2) completing inference off-chain while generating a ZK proof, then verifying the correctness of the inference by checking the proof on-chain. The infrastructure consists of two contracts: the main contract (which consumes the ML model’s output) and the ZK-proof verification contract.
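A minimal, runnable simulation of this two-contract flow is sketched below. The “model”, the “proof”, and the verifier are toy stand-ins (the verifier here cheats by re-deriving the proof, whereas a real ZK verifier checks it succinctly against only the on-chain model commitment); it is meant only to show the division of labor between off-chain proving and on-chain verification:

```python
# Off-chain: run inference and produce a proof. On-chain: verify cheaply.
import hashlib

def commit(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

MODEL_WEIGHTS = b"\x01\x02\x03"           # stand-in for serialized parameters
MODEL_COMMITMENT = commit(MODEL_WEIGHTS)  # what the main contract stores on-chain

def model_infer(weights: bytes, x: int) -> int:
    # Toy "model": any deterministic function of (weights, input).
    return (sum(weights) * x) % 251

def fake_prove(weights: bytes, x: int, y: int) -> bytes:
    # Stand-in for ZK proof generation (done off-chain by the prover).
    return commit(weights + x.to_bytes(8, "big") + y.to_bytes(8, "big"))

def fake_verify(commitment: bytes, x: int, y: int, proof: bytes) -> bool:
    # Stand-in for the verifier contract: a cheap check, no re-inference.
    # (We cheat by re-deriving; a real verifier checks the proof succinctly.)
    return commitment == MODEL_COMMITMENT and proof == fake_prove(MODEL_WEIGHTS, x, y)

x = 7
y = model_infer(MODEL_WEIGHTS, x)          # off-chain inference
proof = fake_prove(MODEL_WEIGHTS, x, y)    # off-chain proof generation

# On-chain: the result is accepted without rerunning the model.
assert fake_verify(MODEL_COMMITMENT, x, y, proof)
print("inference verified:", y)
```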
zkML is still very early and faces technical challenges in converting ML models into ZK circuits, as well as high computational and cryptographic overhead. Much as optimistic rollups emerged alongside ZK rollups, opML is the alternative that attacks the problem economically: it adopts Arbitrum’s AnyTrust assumption, i.e., each claim has at least one honest node, ensuring that the submitter or at least one verifier is honest. However, opML can only serve as an alternative for inference verification and cannot provide privacy protection.
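By contrast, here is a hedged sketch of the optimistic pattern: a result is posted with a bond and finalizes unless some verifier disputes it during a challenge window, so a single honest re-executor suffices to catch fraud. The bond size, roles, and function names below are invented for illustration:

```python
# Optimistic (opML-style) verification: accept unless challenged.
from dataclasses import dataclass

@dataclass
class Claim:
    input: int
    claimed_output: int
    bond: int           # stake the submitter loses if the claim is disproved
    challenged: bool = False

def honest_model(x: int) -> int:
    return x * 2 + 1    # the computation everyone agrees defines "correct"

def submit(x: int, y: int, bond: int = 100) -> Claim:
    return Claim(x, y, bond)

def challenge_window(claim: Claim, verifiers) -> str:
    # AnyTrust-style assumption: at least ONE verifier re-executes honestly.
    for verify in verifiers:
        if verify(claim.input) != claim.claimed_output:
            claim.challenged = True
            return "claim rejected, submitter's bond slashed"
    return "claim finalized, output accepted on-chain"

# An honest submission finalizes; a wrong result is caught by one honest verifier.
print(challenge_window(submit(3, honest_model(3)), [honest_model]))
print(challenge_window(submit(3, 999), [honest_model]))
```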
Current projects are building zkML infrastructure and exploring its applications. Building applications matters just as much, because they must clearly demonstrate to crypto users the value of zkML and prove that the ultimate value outweighs the enormous cost. Among these projects, some focus on ZK technology development specific to machine learning (such as Modulus Labs), while others focus on more general ZK infrastructure. Related projects include:
If the previous three categories focus more on how AI empowers Crypto, then “AI as a goal” emphasizes Crypto’s assistance to AI, namely how to utilize Crypto to create better AI models and products. This may include multiple evaluation criteria such as greater efficiency, precision, and decentralization. AI comprises three core elements: data, computing power, and algorithms, and in each dimension, Crypto is striving to provide more effective support for AI:
Large tech companies’ control of data and computing power has given them a monopoly on model training, with closed-source models becoming key profit drivers for these corporations. From an infrastructure perspective, Crypto uses economic incentives to decentralize the supply of data and compute, and cryptographic methods to guarantee data privacy along the way. This is the foundation for decentralized model training and a more transparent, decentralized AI ecosystem.
Decentralized data protocols primarily operate through crowdsourcing of data, incentivizing users to provide datasets or data services (such as data labeling) for enterprises to use in model training. They also establish Data Marketplaces to facilitate matching between supply and demand. Some protocols are also exploring incentivizing users through DePIN protocols to acquire browsing data or utilizing users’ devices/bandwidth for web data scraping.
Grass: billed as the decentralized data layer for AI, Grass is essentially a decentralized web-scraping market that obtains data for AI model training. Websites are a vital source of training data, and many, such as Twitter, Google, and Reddit, hold significant value, but they keep tightening restrictions on scraping. Grass uses idle bandwidth in individual networks to scrape data from public websites through many different IP addresses, mitigating the impact of data blocking; it performs initial data cleaning and serves as a data source for AI model training. Currently in beta, Grass lets users earn points by contributing bandwidth, redeemable against a potential airdrop.
AIT Protocol: a decentralized data-labeling protocol designed to provide developers with high-quality datasets for model training. Web3 lets a global workforce join the network quickly and earn incentives for labeling. AIT’s data scientists pre-label the data, users then refine it, and after quality checks by the data scientists, the validated data is supplied to developers.
In addition to the data provisioning and data labeling protocols above, existing decentralized storage infrastructure such as Filecoin and Arweave will also contribute to a more decentralized data supply.
In the era of AI, the importance of computing power is self-evident. Not only has the stock price of NVIDIA soared, but in the crypto world, decentralized computing power can be said to be the hottest niche direction in the AI track—out of the top 200 AI projects by market capitalization, 5 projects (Render/Akash/AIOZ Network/Golem/Nosana) focus on decentralized computing power, and have experienced significant growth in the past few months. Many projects in the low market cap range have also seen the emergence of decentralized computing power platforms. Although they are just getting started, they have quickly gained momentum, especially with the wave of enthusiasm from the NVIDIA conference.
From the characteristics of the track, the basic logic of projects in this direction is highly homogeneous—using token incentives to encourage individuals or enterprises with idle computing resources to provide resources, thereby significantly reducing usage costs and establishing a supply-demand market for computing power. Currently, the main sources of computing power come from data centers, miners (especially after Ethereum transitions to PoS), consumer-level computing power, and collaborations with other projects. Although homogenized, this is a track where leading projects have high moats. The main competitive advantages of projects come from: computing power resources, computing power leasing prices, computing power utilization rates, and other technical advantages. The leading projects in this track include Akash, Render, io.net, and Gensyn.
By business direction, projects roughly divide into two categories: AI model inference and AI model training. Because training demands far more compute and bandwidth than inference, and the inference market is expanding rapidly, inference’s foreseeable income will be significantly higher than training’s. Hence the vast majority of projects currently focus on inference (Akash, Render, io.net), with Gensyn focusing on training. Akash and Render were not originally built for AI: Akash began as a general-purpose compute marketplace and Render was primarily used for video and image rendering, while io.net was designed specifically for AI compute. After AI pushed compute demand to a new level, all of these projects have pivoted toward AI.
The two most important competitive indicators remain the supply side (compute resources) and the demand side (compute utilization). Akash has 282 GPUs and more than 20,000 CPUs, has completed over 160,000 leases, and runs a GPU network utilization rate of 50-70%, a good figure for this track. io.net lists 40,272 GPUs and 5,958 CPUs, plus access to Render’s 4,318 GPUs and 159 CPUs and a license to use 1,024 GPUs from Filecoin, including roughly 200 H100s and thousands of A100s. io.net is attracting compute resources with extremely high airdrop expectations, and its GPU numbers are growing rapidly, so its ability to attract resources will need reassessment after its token lists. Render and Gensyn have not disclosed figures. Many projects also strengthen both sides through ecosystem collaborations: io.net taps Render’s and Filecoin’s compute to boost its own reserves, and Render launched its Compute Client Program (RNP-004), which lets users access Render’s compute indirectly through clients such as io.net, Nosana, FedML, and Beam, allowing Render to move quickly from rendering into AI computing.
In addition, verification remains a challenge for decentralized computing: how to prove that workers holding compute actually executed their tasks correctly. Gensyn is attempting to build such a verification layer, ensuring correctness through probabilistic proofs of learning, a graph-based pinpointing protocol, and staking incentives; verifiers and whistleblowers jointly check computations, so beyond supporting decentralized training, the verification mechanism Gensyn builds has standalone value. The compute protocol Fluence, on Solana, also emphasizes verification of compute tasks: developers can confirm that their applications ran as expected and that computations were executed correctly by examining proofs supplied by on-chain providers. Still, practical need puts feasibility ahead of trustlessness, since compute platforms must first have enough computational power to compete. Excellent verification protocols can, of course, access compute from other platforms and play a unique role as the validation and protocol layer.
The ultimate scenario Vitalik describes, depicted in the diagram below, is still very distant. We cannot yet build, with blockchain and cryptography, a trusted black-box AI that resists adversarial machine learning, and encrypting the entire AI pipeline from training data to query outputs is extremely costly. However, some projects are already trying to incentivize the creation of better AI models: they begin by bridging the siloed state of different models, creating a landscape in which models learn from each other, collaborate, and compete healthily. Bittensor is the most representative project of this kind.
Bittensor: Bittensor is facilitating the integration of various AI models, but it’s important to note that Bittensor itself does not engage in model training; rather, it primarily provides AI inference services. Its 32 subnets focus on different service directions, such as data fetching, text generation, Text2Image, etc. When completing a task, AI models belonging to different directions can collaborate with each other. Incentive mechanisms drive competition between subnets and within subnets. Currently, rewards are distributed at a rate of 1 TAO per block, totaling approximately 7200 TAO tokens per day. The 64 validators in SN0 (Root Network) determine the distribution ratio of these rewards among different subnets based on subnet performance. Subnet validators, on the other hand, determine the distribution ratio among different miners based on their work evaluation. As a result, better-performing services and models receive more incentives, promoting overall improvement in the quality of system inference.
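A quick back-of-the-envelope check of those emission figures, assuming Bittensor’s roughly 12-second block time (an assumption not stated in the text) and illustrative subnet weights:

```python
# Verify: 1 TAO per block at ~12 s blocks gives ~7,200 TAO per day,
# matching the figure above. Subnet weights here are purely illustrative;
# in practice the 64 root validators set them based on subnet performance.
SECONDS_PER_DAY = 86_400
BLOCK_TIME_S = 12          # assumed block time
TAO_PER_BLOCK = 1

blocks_per_day = SECONDS_PER_DAY // BLOCK_TIME_S      # 7_200 blocks
daily_emission = blocks_per_day * TAO_PER_BLOCK       # ~7_200 TAO/day
print("daily emission:", daily_emission, "TAO")

subnet_weights = {"text-gen": 0.4, "text2image": 0.35, "data-fetch": 0.25}
for name, w in subnet_weights.items():
    # Each subnet's share is then split among its miners by performance.
    print(f"{name}: {daily_emission * w:.0f} TAO/day")
```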
From Sam Altman’s moves driving the skyrocketing prices of ARKM and WLD to the Nvidia conference boosting a series of participating projects, many are adjusting their investment ideas in the AI field. Is the AI field primarily driven by meme speculation or technological revolution?
Apart from a few celebrity themes (such as ARKM and WLD), the overall AI field in crypto seems more like a “meme driven by technological narrative.”
On one hand, the overall speculation in the Crypto AI field is undoubtedly closely linked to the progress of Web2 AI. External hype led by entities like OpenAI will serve as the catalyst for the Crypto AI field. On the other hand, the story of the AI field still revolves around technological narratives. However, it’s crucial to emphasize the “technological narrative” rather than just the technology itself. This underscores the importance of choosing specific directions within the AI field and paying attention to project fundamentals. It’s necessary to find narrative directions with speculative value as well as projects with long-term competitiveness and moats.
Looking at the four potential combinations proposed by Vitalik, we see a balance between narrative charm and feasibility. In the first and second categories, represented by AI applications, we observe many GPT wrappers. While these products are quickly deployed, they also exhibit a high degree of business homogeneity. First-mover advantage, ecosystems, user base, and revenue become the stories told in the context of homogeneous competition. The third and fourth categories represent grand narratives combining AI with crypto, such as Agent on-chain collaboration networks, zkML, and decentralized reshaping of AI. These are still in the early stages, and projects with technological innovation will quickly attract funds, even if they are only in the early stages of implementation.