From Verifiable AI to Composable AI - Reflections on ZKML Application Scenarios

Intermediate · 12/17/2023, 5:58:42 PM
This article re-examines verifiable AI solutions from an application perspective, analyzing which scenarios need them immediately and which have relatively weak demand. Finally, it discusses public chain-based AI ecosystem models and proposes two development models: horizontal and vertical.
  1. Whether verifiable AI is needed depends on two criteria: whether on-chain data is modified, and whether fairness and privacy are involved

    1. When AI does not affect on-chain state, AI can act as an adviser. People can judge the quality of AI services through actual results without verifying the calculation process.
    2. When the on-chain state is affected, if the service targets individuals and does not affect privacy, then users can still directly judge the quality of AI services without checking the calculation process.
    3. When AI output affects fairness and personal privacy among many people, such as using AI to evaluate and distribute rewards to community members, using AI to optimize an AMM, or processing biometric data, people will want to review the AI calculation. This is where verifiable AI may find product-market fit (PMF).
  2. Vertical AI application ecosystem: since one end of verifiable AI is a smart contract, verifiable AI applications, and even AI and native dapps, may be able to call each other trustlessly. This is a potential composable AI application ecosystem.

  3. Horizontal AI application ecosystem: The public chain system can handle issues such as service payment, payment dispute coordination, and matching of user needs and service content for AI service providers, so that users can enjoy a decentralized AI service experience with a higher degree of freedom.

1. Modulus Labs Overview and Application Stories

1.1 Introduction and core solutions

Modulus Labs is an “on-chain” AI company that believes AI can significantly enhance the capabilities of smart contracts and make web3 applications more powerful. However, there is a contradiction when AI is applied to web3: AI requires a large amount of computing power, and its computation happens off-chain as a black box. This does not meet web3’s basic requirements of being trustless and verifiable.

Therefore, Modulus Labs drew on the zk rollup [off-chain processing + on-chain verification] scheme and proposed a verifiable-AI architecture. Specifically, the ML model runs off-chain, and a zkp is additionally generated off-chain for the ML calculation process. Through this zkp, the architecture, weights, and inputs of the off-chain model can be verified. The zkp can also be posted on-chain for verification by smart contracts. At this point, AI and on-chain contracts can interact more trustlessly; that is, “on-chain AI” has been realized.
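The flow above can be sketched in a few lines of Python. This is a toy illustration of the [off-chain computation + on-chain verification] pattern, not Modulus Labs’ actual tooling: a real ZKML pipeline produces a SNARK/STARK proof, whereas here a hash commitment stands in for the proof so the data flow stays visible. All names are made up for illustration.

```python
import hashlib

def ml_inference(weights, x):
    """Stand-in 'model': a single linear layer computed off-chain."""
    return sum(w * xi for w, xi in zip(weights, x))

def generate_proof(weights, x, output):
    """Placeholder 'zkp' binding (model weights, input, output) together."""
    return hashlib.sha256(repr((weights, x, output)).encode()).hexdigest()

def onchain_verify(proof, weights, x, output):
    """What the verification contract checks: does the proof match the claim?"""
    return proof == hashlib.sha256(repr((weights, x, output)).encode()).hexdigest()

weights, x = [0.5, -1.0], [2.0, 1.0]
y = ml_inference(weights, x)            # off-chain inference
proof = generate_proof(weights, x, y)   # off-chain proof generation
assert onchain_verify(proof, weights, x, y)          # honest claim passes
assert not onchain_verify(proof, weights, x, y + 1)  # tampered output fails
```

The point of the structure is that the contract never re-runs the model; it only checks a succinct proof that binds the model, input, and output together.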

Based on the idea of verifiable AI, Modulus Labs has launched three “on-chain AI” applications so far, and has also proposed many possible application scenarios.

1.2 Application cases

  1. The first to launch was Rocky Bot, an automated trading AI. Rocky was trained on historical data from the WETH/USDC trading pair and judges future WETH trends from that data. After making a trading decision, it generates a zkp for the decision (calculation) process and sends a message to L1 to trigger the transaction.
  2. The second is the on-chain chess game “Leela vs. the World”. The two sides of the game are an AI and human players, and the board state lives in a contract. Human players operate through a wallet (interacting with the contract), while the AI reads the new board state, makes its move, and generates a zkp for the entire calculation process. Both steps are completed on AWS, and the zkp is verified by an on-chain contract. After verification succeeds, the move is played via the game contract.
  3. The third is an “on-chain” AI artist, which launched the NFT series zKMon. The core is that the AI generates NFTs, posts them on-chain, and also generates a zkp, so users can check via the zkp whether their NFT was generated by the corresponding AI model.

Additionally, Modulus Labs mentioned a few other use cases:

  1. Use AI to evaluate personal on-chain data and other information, generate personal reputation ratings, and publish zkp for user verification;
  2. Use AI to optimize AMM performance and publish zkp for users to verify;
  3. Use verifiable AI to help privacy projects cope with regulatory pressure, but at the same time not expose privacy (perhaps using ML to prove that this transaction is not money laundering, while not disclosing information such as user addresses);
  4. AI oracles that release zkp so everyone can check the reliability of off-chain data;
  5. In AI model competitions, contestants submit their own architectures and weights, then run their models on a unified test input and generate a zkp for the computation; the final contract automatically sends the prize money to the winner;
  6. Worldcoin said that in the future, users may be able to download the iris model to a local device, generate the corresponding iris code, run the model locally, and generate a zkp. The on-chain contract can then use the zkp to verify that the user’s iris code was generated by the correct model from a real iris, while the biometric data never leaves the user’s own device;
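The model-competition idea (item 5) is concrete enough to sketch. In the toy below, every contestant’s model is scored on the same test input, each run carries a mock proof, and the “contract” pays whichever verified entry scores best; an entry whose proof does not match its claimed score is simply ignored. Proofs are mocked with hashes and all names are illustrative.

```python
import hashlib

def prove(model_id, test_input, score):
    """Placeholder for the zkp a contestant generates for their run."""
    return hashlib.sha256(f"{model_id}:{test_input}:{score}".encode()).hexdigest()

def verify(model_id, test_input, score, proof):
    return proof == prove(model_id, test_input, score)

def settle_competition(entries, test_input):
    """Contract logic: keep only entries with valid proofs, pay the top score."""
    valid = [(e["score"], e["model_id"]) for e in entries
             if verify(e["model_id"], test_input, e["score"], e["proof"])]
    return max(valid)[1] if valid else None

test_input = 42
entries = [
    {"model_id": "alice", "score": 0.81, "proof": prove("alice", test_input, 0.81)},
    {"model_id": "bob",   "score": 0.93, "proof": prove("bob", test_input, 0.93)},
    # carol claims 0.99 but her proof was generated for a different score
    {"model_id": "carol", "score": 0.99, "proof": prove("carol", test_input, 0.10)},
]
assert settle_competition(entries, test_input) == "bob"  # highest *verified* score wins
```

The same skeleton applies to the reputation and oracle cases: the contract acts only on outputs whose proofs check out.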

Photo Credit: Modulus Labs

1.3 Discuss different application scenarios based on the need for verifiable AI

1.3.1 Scenarios where verifiable AI may not be needed

In the Rocky Bot scenario, users may have little need to verify the ML calculation process. First, users lack the expertise to do real verification. Even with a verification tool, from the user’s perspective it amounts to “I press a button and the interface tells me this AI service really came from a certain model”, and the authenticity still cannot be judged. Second, users don’t need to verify, because what they care about is whether the AI’s yield is high. They migrate when profitability is low and always choose the model that performs best. In short, when the end result of the AI is exactly what the user is after, the verification process may not matter much, because the user only needs to migrate to the service that works best.

**One possible solution is that the AI only acts as an adviser, and the user executes the transaction independently.** When people enter their trading goals into the AI, it calculates a better transaction path/trade direction off-chain and returns it, and the user chooses whether to execute. People then don’t need to verify the model behind it; they just pick the product with the highest return.

Another dangerous but highly likely situation is that people don’t care about custody of their assets or about the AI calculation process at all. When a robot that “automatically earns money” appears, people are even willing to deposit money with it directly, just like putting tokens into a CEX or a traditional bank. Because people don’t care about the principles behind it, only about how much money they end up with (or even just how much the project team claims they will earn), such a service may quickly acquire a large number of users and even iterate faster than products that use verifiable AI.

Taking a step back, if AI does not participate in on-chain state changes at all, but simply scrapes on-chain data and preprocesses it for users, then there is no need to generate ZKP for the calculation process. Here are a few examples of this type of application as a “data service”:

  1. The chatbox provided by Mest is a typical data service. Users can ask questions to understand their on-chain data, such as how much money they have spent on NFTs;
  2. ChainGPT is a multi-functional AI assistant that can interpret a smart contract before you trade, tell you whether you are trading with the right pool, or whether the transaction is likely to be front-run or sandwiched. ChainGPT is also preparing AI news recommendations, prompt-based image generation posted as NFTs, and other services;
  3. RSS3 provides AIOP, with which users can select the on-chain data they want and apply certain pre-processing, making it easy to train AI on specific on-chain data;
  4. DefiLlama and RSS3 have also developed ChatGPT plug-ins, through which users can obtain on-chain data via conversation;

1.3.2 Scenarios that require verifiable AI

This article argues that scenarios involving multiple parties, fairness, or privacy require ZKP for verification. Several of the applications mentioned by Modulus Labs are discussed here:

  1. When a community rewards individuals based on AI-generated personal reputations, community members will inevitably request a review of the evaluation decision process, which is the calculation process of ML;
  2. AI optimization of an AMM involves distributing benefits among multiple people, so the AI calculation process also needs to be checked regularly;
  3. When balancing privacy and regulation, ZK is currently one of the better solutions. If the service provider uses ML in the service to process private data, it needs to generate ZKP for the entire calculation process;
  4. Since oracles have a wide range of influence, if controlled by AI, ZKP needs to be generated regularly to check whether the AI is working properly;
  5. In competitions, the public and the other participants need to check whether the ML computation complies with the competition rules;
  6. Among Worldcoin’s potential use cases, the protection of personal biodata is also a strong requirement;

Generally speaking, when AI acts like a decision maker whose output has wide influence and touches the fairness of many parties, people will demand a review of the decision-making process, or at least assurance that there are no major problems with it; protecting personal privacy is likewise an immediate requirement.

Therefore, “whether the AI output modifies on-chain state” and “whether it affects fairness/privacy” are the two criteria for judging whether a verifiable AI solution is needed:

  1. When the AI output does not modify on-chain state, the AI service can act as a recommender. People can judge the quality of the AI service by the quality of its recommendations, without verifying the calculation process;
  2. When the AI output modifies on-chain state, if the service targets only individuals and does not affect privacy, users can still judge the quality of the AI service directly, without checking the calculation process;
  3. When the AI output directly affects fairness among many people, and the AI automatically modifies on-chain data, the community and the public need to be able to audit the AI decision-making process;
  4. When the data processed by ML involves personal privacy, zk is also needed to protect privacy and thus address regulatory requirements.
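One possible reading of the four cases above is a simple decision rule, written out below purely as a restatement of the article’s rule of thumb (not a formal specification): privacy always calls for zk, and otherwise verification is warranted only when the AI both writes on-chain state and affects fairness among many parties.

```python
def needs_verifiable_ai(modifies_onchain_state, affects_fairness, involves_privacy):
    # Criterion 4: private data always calls for zk protection.
    # Criterion 3: otherwise, verification is needed only when the AI writes
    # on-chain state AND the outcome affects fairness among many parties.
    return involves_privacy or (modifies_onchain_state and affects_fairness)

assert not needs_verifiable_ai(False, False, False)  # case 1: adviser only
assert not needs_verifiable_ai(True, False, False)   # case 2: individual service
assert needs_verifiable_ai(True, True, False)        # case 3: multi-party fairness
assert needs_verifiable_ai(False, False, True)       # case 4: private data
```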

Photo Credit: Kernel Ventures

2. Two public chain-based AI ecosystem models

In any case, Modulus Labs’ solution is highly instructive about how AI can combine with crypto and bring practical application value. However, the public chain system not only enhances the capabilities of individual AI services, but also has the potential to build a new AI application ecosystem. Relative to Web2, this new ecosystem changes the relationships between AI services, between AI services and users, and even the way upstream and downstream links collaborate. We can summarize the potential AI application ecosystem models into two types: the vertical model and the horizontal model.

2.1 Vertical Model: Focus on achieving composability between AIs

The “Leela vs. the World” on-chain chess use case is special: people can place bets on the human side or the AI, and tokens are automatically distributed after the game ends. Here the zkp is not only for users to verify the AI calculation; it is also a trust guarantee for triggering on-chain state transitions. With that trust guarantee, there may also be dapp-level composability between AI services, and between AI and crypto-native dapps.

Image source: Kernel Ventures, with reference from Modulus Labs

The basic unit of composable AI is [off-chain ML model - zkp generation - on-chain verification contract - main contract]. This unit draws on the “Leela vs. the World” framework, but the actual architecture of a single AI dapp may differ from the image above. First, the chess game state in “Leela vs. the World” requires a contract, but in reality an AI dapp may not need an on-chain contract at all. Still, as far as the architecture of composable AI is concerned, if the main business is recorded through contracts, it may be more convenient for other dapps to compose with it. Second, the main contract does not necessarily need to feed back into the AI dapp’s own ML model, because an AI dapp may act unidirectionally: after the ML model finishes processing, it is enough to trigger a contract related to its own business, and that contract can then be called by other dapps.
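The basic unit can be sketched as a pipeline in which business state is mutated only through verified AI output. This is a minimal illustrative model (hash-based mock proofs again; a real system would use a SNARK verifier, and the class and method names are invented):

```python
import hashlib

def prove(model, x, y):
    """Off-chain: placeholder zkp binding model commitment, input, and output."""
    return hashlib.sha256(repr((model, x, y)).encode()).hexdigest()

class VerifierContract:
    """On-chain verification contract pinned to one model commitment."""
    def __init__(self, model_commitment):
        self.model = model_commitment

    def verify(self, x, y, proof):
        return proof == prove(self.model, x, y)

class MainContract:
    """Holds the dapp's business state; mutated only via verified AI output."""
    def __init__(self, verifier):
        self.verifier = verifier
        self.state = None

    def submit(self, x, y, proof):
        if not self.verifier.verify(x, y, proof):
            return False          # unverified output never touches state
        self.state = y
        return True

verifier = VerifierContract("chess-model-v1")
main = MainContract(verifier)
move = "e2e4"                                   # off-chain model output
ok = main.submit("board-state", move, prove("chess-model-v1", "board-state", move))
assert ok and main.state == "e2e4"
assert not main.submit("board-state", "bogus", "not-a-proof")
assert main.state == "e2e4"                     # invalid proof leaves state intact
```

Because other dapps only ever read `MainContract`’s state, they inherit the verification guarantee without knowing anything about the ML model behind it; that is the composability hook.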

By extension, calls between contracts are calls between different web3 applications: calls on personal identity, assets, financial services, and even social information. We can imagine a specific combination of AI applications:

  1. Worldcoin uses ML to generate iris codes and zkp for personal iris data;
  2. A reputation-scoring AI app first verifies that the person behind this DID is a real person (backed by the iris data), then allocates NFTs to users based on their on-chain reputation;
  3. The lending service adjusts the loan share according to the NFT owned by the user;
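The three-step chain above can be modeled with plain functions standing in for contracts. The thresholds, tier names, and proof format below are all invented for illustration; the point is only that each step trusts the previous step’s verified output rather than re-checking the raw data.

```python
def verify_iris_proof(did, iris_proof):
    """Stands in for Worldcoin's on-chain check of the iris-code zkp."""
    return iris_proof == f"valid-proof-for-{did}"

def reputation_nft(did, iris_proof, onchain_score):
    """Mints a reputation-tier NFT only for proof-backed, real-person DIDs."""
    if not verify_iris_proof(did, iris_proof):
        return None
    return "gold" if onchain_score >= 80 else "silver"

def loan_limit(nft_tier):
    """Lending service sizes the loan from the reputation NFT tier."""
    return {"gold": 10_000, "silver": 2_000}.get(nft_tier, 0)

tier = reputation_nft("did:0xabc", "valid-proof-for-did:0xabc", 85)
assert tier == "gold"
assert loan_limit(tier) == 10_000
# a forged iris proof propagates down the chain as "no NFT, no loan"
assert loan_limit(reputation_nft("did:0xabc", "forged", 85)) == 0
```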

Interaction between AIs within a public chain framework has been discussed before. Loaf, a contributor to the Realms full-chain gaming ecosystem, once proposed that AI NPCs could trade with each other like players, so that the entire economic system optimizes itself and runs automatically. AI Arena has developed an AI automated battle game: a user first buys an NFT, which represents a battle robot with an AI model behind it. The user plays the game themselves first, then hands the data to the AI for imitation learning. When the user feels the AI is strong enough, it can battle other AIs in the arena automatically. Modulus Labs mentioned that AI Arena wants to turn all of these AIs into verifiable AI. Both cases point to the possibility of AIs interacting with each other and directly modifying on-chain data as they interact.

However, many issues remain in the concrete implementation of composable AI, such as how different dapps can use each other’s zkps or verification contracts. There are also many excellent projects in the zk field; for example, RISC Zero has made a lot of progress in performing complex computations off-chain and posting zkps on-chain. Perhaps one day an appropriate solution can be put together.

2.2 Horizontal model: AI service platforms that focus on decentralization

In this regard, we mainly introduce SAKSHI, a decentralized AI platform jointly proposed by researchers from Princeton, Tsinghua University, the University of Illinois Urbana-Champaign, the Hong Kong University of Science and Technology, Witness Chain, and EigenLayer. Its core goal is to let users access AI services in a more decentralized way, making the entire process more trustless and automated.

Photo Credit: SAKSHI

SAKSHI’s structure can be divided into six layers: the service layer, control layer, transaction layer, proof layer, economic layer, and marketplace layer.

The marketplace is the layer closest to the user. Aggregators in the marketplace provide services to users on behalf of different AI providers. Users place orders through aggregators and reach agreements with them on service quality and price (these agreements are called SLAs, service-level agreements).

Next, the service layer provides an API for the client side; the client sends an ML inference request to the aggregator, and the request is routed to a server matched with an AI service provider (the routing of requests belongs to the control layer). The service and control layers thus resemble a web2 service with multiple servers, except that the servers are operated by different entities, each linked to the aggregator through a previously signed SLA.

SLAs are deployed on-chain in the form of smart contracts, all of which belong to the transaction layer (note: in this solution, they are deployed on Witness Chain). The transaction layer also records the current status of each service order and coordinates users, aggregators, and service providers in handling payment disputes.

In order for the transaction layer to have evidence to rely on when handling disputes, the proof layer checks whether the service provider uses the model agreed in the SLA. However, SAKSHI chose not to generate zkps for the ML calculation process; instead it adopts the idea of optimistic proofs, establishing a network of challenger nodes that spot-check the service. Node incentives are borne by Witness Chain.
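The optimistic-proof idea can be sketched as follows: the provider’s responses are assumed honest unless a challenger node re-runs the agreed model on the same input and finds a mismatch, which raises a dispute for the transaction layer to settle. The “model” and the honest/cheating toggle below are invented for illustration; SAKSHI’s actual protocol details differ.

```python
def agreed_model(x):
    """The model the SLA commits the provider to run (toy stand-in)."""
    return 2 * x + 1

def provider_respond(x, honest=True):
    """An honest provider runs the agreed model; a cheater returns junk."""
    return agreed_model(x) if honest else 0

def challenge(x, claimed_output):
    """Challenger node re-executes the SLA model; True means a dispute is raised."""
    return claimed_output != agreed_model(x)

assert not challenge(3, provider_respond(3, honest=True))   # no dispute
assert challenge(3, provider_respond(3, honest=False))      # dispute raised
```

The trade-off versus zkp is the usual optimistic one: re-execution by challengers is far cheaper than proving every inference, at the cost of a challenge window and an honest-challenger assumption.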

Although the SLAs and the challenger node network live on Witness Chain, in SAKSHI’s plan Witness Chain does not intend to bootstrap independent security with native token incentives; instead it inherits Ethereum’s security through EigenLayer, so the economic layer actually relies on EigenLayer.

As can be seen, SAKSHI sits between AI service providers and users, organizing different AIs in a decentralized manner to serve users, which makes it more of a horizontal solution. Its core is letting AI service providers focus on managing their own off-chain model computation, while matching user needs with model services, service payment, and verification of service quality are handled through on-chain agreements, with payment disputes resolved automatically where possible. Of course, SAKSHI is still at the theoretical stage, and plenty of implementation details remain to be worked out.

3. Future prospects

Whether it is composable AI or decentralized AI platforms, public chain-based AI ecosystem models seem to have things in common. For example, AI service providers do not connect with users directly; they only need to provide ML models and perform off-chain calculations. Payments, dispute resolution, and the matching of user needs with services can all be handled by decentralized protocols. As trustless infrastructure, the public chain reduces friction between service providers and users, and users gain greater autonomy.

Although the advantages of building applications on a public chain are a cliché by now, they genuinely apply to AI services as well. The difference between AI applications and existing dapps, however, is that AI applications cannot put all of their computation on-chain, so zk or optimistic proofs are needed to connect AI services to the public chain system in a more trustless way.

With the rollout of experience optimizations such as account abstraction, users may no longer need to be aware of mnemonics, chains, and gas. This brings the public chain ecosystem close to web2 in experience, while users get a higher degree of freedom and composability than web2 services offer; that will be very attractive. The AI application ecosystem based on the public chain is worth looking forward to.


Kernel Ventures is a crypto venture capital fund driven by a research and development community with over 70 early-stage investments focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, modular blockchains, and verticals that will host the next billion crypto users, such as account abstraction, data availability, scalability, etc. Over the past seven years, we’ve been committed to supporting the development of core development communities and university blockchain associations around the world.

Disclaimer:

  1. This article is reprinted from [mirror]. All copyrights belong to the original author [Kernel Ventures Jerry Luo]. If there are objections to this reprint, please contact the Gate Learn team (gatelearn@gate.io), and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

From Verifiable AI to Composable AI - Reflections on ZKML Application Scenarios

Intermediate12/17/2023, 5:58:42 PM
This paper re-examines verifiable AI solutions from an application perspective, and analyzes in which scenarios they are needed immediately, and in which scenarios the demand is relatively weak. Finally, the AI ecosystem model based on the public chain was discussed, and two different development models, horizontal and vertical, were proposed.
  1. Whether verifiable AI is needed depends on: whether on-chain data is modified, and whether fairness and privacy are involved

    1. When AI does not affect on-chain status, AI can act as a adviser. People can judge the quality of AI services through actual results without verifying the calculation process.
    2. When the on-chain state is affected, if the service targets individuals and does not affect privacy, then users can still directly judge the quality of AI services without checking the calculation process.
    3. When AI output affects fairness and personal privacy among many people, such as using AI to evaluate and distribute rewards to community members, use AI to optimize AMM, or involve biological data, people will want to review AI calculations. This is where it can be verified that AI may find PMF.
  2. Vertical AI application ecosystem: Since one end of verifiable AI is a smart contract, verifiable AI applications and even AI and native dapps may be able to use each other without trust. This is a potential composable AI application ecosystem

  3. Horizontal AI application ecosystem: The public chain system can handle issues such as service payment, payment dispute coordination, and matching of user needs and service content for AI service providers, so that users can enjoy a decentralized AI service experience with a higher degree of freedom.

1. Modulus Labs Overview and Application Stories

1.1 Introduction and core solutions

Modulus Labs is an “on-chain” AI company that believes AI can significantly enhance the capabilities of smart contracts and make web3 applications more powerful. However, there is a contradiction when AI is applied to web3, that is, AI requires a large amount of computing power to operate, and AI is a black box for off-chain computing. This does not meet the basic requirements of web3 to be trustless and verifiable.

Therefore, Modulus Labs drew on the zk rollup [off-chain preprocessing+on-chain verification] scheme and proposed an architecture that can verify AI. Specifically, the ML model runs off-chain, and in addition, a zkp is generated for the ML calculation process off-chain. Through this zkp, the architecture, weights, and inputs (inputs) of the off-chain model can be verified. Of course, this zkp can also be posted to the chain for verification by smart contracts. At this point, AI and on-chain contracts can interact more trustlessly, that is, “on-chain AI” has been realized.

Based on the idea of verifiable AI, Modulus Labs has launched three “on-chain AI” applications so far, and has also proposed many possible application scenarios.

1.2 Application cases

  1. The first to launch was Rocky Bot, an automated trading AI. Rocky was trained by historical data from the Weth/USDC trading pair. It judges future weth trends based on historical data. After making a trading decision, it will generate a zkp for the decision process (calculation process) and send a message to L1 to trigger the transaction.
  2. The second is the on-chain chess game “Leela vs. the World”. Both players in the game are AI and humans, and the game situation is in a contract. The player operates through a wallet (interacts with contracts). However, AI reads the new chess game situation, makes a judgment, and generates zkp for the entire calculation process. Both steps are completed on the AWS cloud, and zkp is verified by an on-chain contract. After the verification is successful, the game contract is used to “play chess.”
  3. The third is an “on-chain” AI artist and launched the NFT series zKMon. The core is that AI generates NFTs and posts them on the chain, and also generates a zkp. Users can check whether their NFT is generated from the corresponding AI model through zkp.

Additionally, Modulus Labs mentioned a few other use cases:

  1. Use AI to evaluate personal on-chain data and other information, generate personal reputation ratings, and publish zkp for user verification;
  2. Use AI to optimize AMM performance and publish zkp for users to verify;
  3. Use verifiable AI to help privacy projects cope with regulatory pressure, but at the same time not expose privacy (perhaps using ML to prove that this transaction is not money laundering, while not disclosing information such as user addresses);
  4. AI oracles, and release zkp for everyone to check the reliability of off-chain data;
  5. In the AI model competition, contestants submit their own architecture and weights, then run the model with unified test input to generate zkp for computation, and the final contract automatically sends the prize money to the winner;
  6. Worldcoin said that in the future, users may be able to download a model of iris to generate the corresponding code on the local device, run the model locally and generate zkp. In this way, the on-chain contract can use zkp to verify that the user’s iris code is generated from the correct model and reasonable iris, while the biological information does not leave the user’s own device;

Photo Credit: Modulus Labs

1.3 Discuss different application scenarios based on the need for verifiable AI

1.3.1 Scenarios that can verify AI may not be needed

In the Rocky bot scenario, users may not be required to verify the ML calculation process. First, users have no expertise and no ability to do real verification. Even if there is a verification tool, in the user’s opinion, “I press a button, the interface pops up to tell me that this AI service was actually generated by a certain model”, and the authenticity cannot be determined. Second, users don’t need to verify, because users care about whether the AI’s yield is high. Users migrate when profitability is low, and they always choose the model that works best. In short, when the end result of AI is what the user is looking for, the verification process may not be significant because the user only needs to migrate to the service that works best.

**One possible solution is that the AI only acts as an adviser, and the user executes the transaction independently. **When people enter their trading goals into AI, the AI calculates and returns a better transaction path/trade direction off-chain, and the user chooses whether to execute it. People also don’t need to verify the model behind it; they just need to choose the product with the highest return.

Another dangerous but highly likely situation is that people don’t care about their control over assets or the AI calculation process at all. When a robot that automatically earns money appears, people are even willing to host money directly to it, just like putting tokens into CEX or traditional banks for financial management. Because people don’t care about the principles behind it; they only care about how much money they get in the end, or even how much money the project party shows them to earn, this kind of service may be able to quickly acquire a large number of users, and even iterate faster than project-side products that use verifiable AI.

Taking a step back, if AI does not participate in on-chain state changes at all, but simply scrapes on-chain data and preprocesses it for users, then there is no need to generate ZKP for the calculation process. Here are a few examples of this type of application as a “data service”:

  1. The chatbox provided by Mest is a typical data service. Users can use questions and answers to understand their on-chain data, such as asking how much money they have spent on NFT;
  2. ChaingPT is a multi-functional AI assistant that can interpret smart contracts for you before trading, tell you if you’re trading with the right pool, or tell you if the transaction is likely to get caught or snatched away. ChaingPT is also preparing to make AI news recommendations, enter suggestions to automatically generate images and post them as NFTs and other services;
  3. RSS3 provides AIOP, so users can select what on-chain data they want and do certain pre-processing, so that it is easy to train AI with specific on-chain data;
  4. DeVillama and RSS3 have also developed ChatGPT plug-ins, where users can obtain on-chain data through conversations;

1.3.2 Scenarios that require verifiable AI

This article argues that scenarios involving multiple people, involving fairness and privacy require ZKP to provide verification, and several of the applications mentioned by Modulus Labs are discussed here:

  1. When a community rewards individuals based on AI-generated personal reputations, community members will inevitably request a review of the evaluation decision process, which is the calculation process of ML;
  2. AI optimization scenarios for AMM involve the distribution of benefits among multiple people, and the AI calculation process also needs to be checked regularly;
  3. When balancing privacy and regulation, ZK is currently one of the better solutions. If the service provider uses ML in the service to process private data, it needs to generate ZKP for the entire calculation process;
  4. Since oracles have a wide range of influence, if controlled by AI, ZKP needs to be generated regularly to check whether the AI is working properly;
  5. In the competition, the public and other participants are required to check whether ML’s computation complies with competition specifications;
  6. Among Worldcoin’s potential use cases, the protection of personal biodata is also a strong requirement;

Generally speaking, when AI is similar to a decision maker, and its output has a wide range of influence and involves fairness from many parties, then people will demand a review of the decision-making process, or simply ensure that there are no major problems with the AI decision-making process, and protecting personal privacy is a very immediate requirement.

Therefore, “whether AI output modifies on-chain status” and “whether it affects fairness/privacy” are two criteria for judging whether a verifiable AI solution is needed

  1. When the AI output does not modify on-chain state, the AI service can act as a recommender; people can judge the quality of the service by its results without verifying the calculation process;
  2. When the AI output modifies on-chain state, if the service targets only individuals and does not affect privacy, users can still judge the quality of the AI service directly without checking the calculation process;
  3. When the AI output directly affects fairness among many people, and the AI automatically modifies on-chain data, the community and the public need to be able to test the AI decision-making process;
  4. When the data processed by ML involves personal privacy, ZK is also needed to protect privacy and thereby address regulatory requirements.
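The four cases above reduce to a small decision rule. The sketch below encodes it as a helper function; the function name, flag names, and return labels are illustrative, not part of any real protocol:

```python
def needs_verifiable_ai(modifies_onchain_state: bool,
                        affects_many_parties: bool,
                        touches_private_data: bool) -> str:
    """Classify whether an AI service needs a verifiable proof, following the
    two criteria: on-chain state impact, and fairness/privacy impact."""
    if touches_private_data:
        # Case 4: private inputs call for ZK regardless of state changes.
        return "zk-required"
    if not modifies_onchain_state:
        # Case 1: pure recommender -- users judge output quality directly.
        return "not-required"
    if not affects_many_parties:
        # Case 2: individual-facing service -- results are self-evident.
        return "not-required"
    # Case 3: a state-modifying decision affecting fairness among many parties.
    return "verification-required"
```

For example, an AI that rebalances an AMM (state-modifying, multi-party) lands in the "verification-required" branch, matching case 3 above.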

Photo Credit: Kernel Ventures

2. Two public chain-based AI ecosystem models

In any case, Modulus Labs' solution is highly instructive on how AI can be combined with crypto to deliver practical application value. However, the public chain system not only enhances the capabilities of individual AI services; it also has the potential to support a new AI application ecosystem. This new ecosystem creates relationships between AI services, between AI services and users, and even between upstream and downstream links that differ from Web2. We can summarize the potential AI application ecosystem models into two types: the vertical model and the horizontal model.

2.1 Vertical Model: Focus on achieving composability between AIs

The "Leela vs. the World" on-chain chess use case occupies a special place. People can bet on the human or the AI, and tokens are automatically distributed after the game ends. Here, the ZKP is not only a way for users to verify the AI's calculation, but also a trust guarantee that triggers on-chain state transitions. With this trust guarantee in place, dapp-level composability becomes possible between AI services, and between AI and crypto-native dapps.

Image source: Kernel Ventures, with reference from Modulus Labs

The basic unit of composable AI is [off-chain ML model - ZKP generation - on-chain verification contract - main contract]. This unit draws on the "Leela vs. the World" framework, but the actual architecture of a single AI dapp may differ from the diagram above. First, the chess game state requires a contract, but in practice an AI dapp may not need an on-chain contract at all. Still, as far as composable AI architecture is concerned, recording the main business through contracts makes it easier for other dapps to compose with it. Second, the main contract does not necessarily need to affect the AI dapp's own ML model, because an AI dapp's effect may be unidirectional: after the ML model finishes processing, it is enough to trigger a contract related to its own business, which other dapps can then call.
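The basic unit described above can be sketched as a simple pipeline. The mock below is a toy model, not real contract code: the "proof" is a placeholder string, and the class and function names are hypothetical; the point is only the flow in which an on-chain state transition is gated by proof verification:

```python
from dataclasses import dataclass, field

@dataclass
class VerifierContract:
    """On-chain verification contract: accepts (output, proof) pairs."""
    def verify(self, output: str, proof: str) -> bool:
        # Placeholder check; a real verifier validates a ZK proof on-chain.
        return proof == f"proof({output})"

@dataclass
class MainContract:
    """Business contract whose state other dapps can read and compose with."""
    verifier: VerifierContract
    state: list = field(default_factory=list)

    def submit(self, output: str, proof: str) -> bool:
        """State transition happens only if the proof verifies."""
        if self.verifier.verify(output, proof):
            self.state.append(output)
            return True
        return False

def offchain_ml_inference(inputs: str) -> tuple[str, str]:
    """Off-chain ML model run plus ZKP generation (both mocked)."""
    output = f"move:{inputs}"
    return output, f"proof({output})"

# One pass through the unit: infer off-chain, then verify and record on-chain.
main = MainContract(VerifierContract())
out, prf = offchain_ml_inference("e2e4")
main.submit(out, prf)
```

The design point is that `MainContract.state` only ever changes behind a successful `verify`, which is what lets other dapps consume it trustlessly.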

More broadly, calls between contracts are calls between different web3 applications: calls involving personal identity, assets, financial services, and even social information. We can imagine a specific combination of AI applications:

  1. Worldcoin uses ML to generate iris codes and ZKPs for personal iris data;
  2. A reputation-scoring AI app first verifies that the person behind a DID is a real human (backed by the iris data), then allocates NFTs to users based on their on-chain reputation;
  3. A lending service adjusts the loan share according to the NFTs the user owns.
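The three-step combination above can be mocked end to end. Everything here is hypothetical: the DID strings, tier names, and loan amounts are invented for illustration, and the on-chain registries are plain dictionaries standing in for contract state:

```python
# Mocked on-chain state: DIDs whose iris ZKP has verified, and minted NFTs.
verified_humans = {"did:wld:alice"}
reputation_nft: dict[str, str] = {}

def issue_reputation_nft(did: str, onchain_score: int) -> None:
    """Reputation dapp: mints a tier NFT only for proof-backed real humans."""
    if did not in verified_humans:
        raise ValueError("personhood proof missing")
    reputation_nft[did] = "gold" if onchain_score >= 80 else "silver"

def loan_limit(did: str) -> int:
    """Lending dapp: reads the reputation NFT to set the loan share."""
    tier = reputation_nft.get(did, "none")
    return {"gold": 10_000, "silver": 2_500}.get(tier, 0)

# Composition: Worldcoin proof -> reputation NFT -> loan sizing.
issue_reputation_nft("did:wld:alice", 85)
print(loan_limit("did:wld:alice"))  # prints 10000
```

Each dapp only reads the previous dapp's on-chain state; none of them needs to trust the others' off-chain computation, because that trust is carried by the proofs.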

AI-to-AI interaction within a public chain framework has been discussed before. Loaf, a contributor to the Realms full-chain game ecosystem, once proposed that AI NPCs could trade with each other like players, so that the entire economic system could optimize itself and run automatically. AI Arena has developed an AI automated battle game: a user first buys an NFT, where each NFT represents a battle robot backed by an AI model. The user first plays the game themselves, then hands the gameplay data to the AI for imitation learning. When the user feels the AI is strong enough, it can automatically fight other AIs in the arena. Modulus Labs has mentioned that AI Arena wants to make all of these AIs verifiable. Both cases point to the possibility of AIs interacting with each other and directly modifying on-chain data as they do so.

However, many issues remain to be worked out in the concrete implementation of composable AI, such as how different dapps can use each other's ZKPs or verification contracts. Fortunately, there are many excellent projects in the ZK field; RISC Zero, for example, has made significant progress in performing complex computations off-chain and posting ZKPs on-chain. Perhaps one day these pieces can be assembled into an appropriate solution.

2.2 Horizontal model: AI service platforms that focus on decentralization

Here we mainly introduce a decentralized AI platform called SAKSHI, jointly proposed by researchers from Princeton, Tsinghua University, the University of Illinois Urbana-Champaign, the Hong Kong University of Science and Technology, Witness Chain, and Eigen Layer. Its core goal is to let users access AI services in a more decentralized way, making the entire process more trustless and automated.

Photo Credit: SAKSHI

SAKSHI's architecture can be divided into six layers: the service layer, control layer, transaction layer, proof layer, economic layer, and marketplace.

The marketplace is the layer closest to the user. Aggregators in the marketplace provide services to users on behalf of different AI providers. Users place orders through an aggregator and reach an agreement with it on service quality and price (these agreements are called SLAs, service-level agreements).

Next, the service layer exposes an API to the client side; the client makes an ML inference request to the aggregator, and the request is routed to a server matched to an AI service provider (the routing that carries the request belongs to the control layer). The service and control layers thus resemble a web2 service with multiple servers, except that the servers are operated by different entities, each linked to the aggregator through a previously signed SLA.

SLAs are deployed on-chain as smart contracts, all of which belong to the transaction layer (note: in this solution, they are deployed on Witness Chain). The transaction layer also records the current status of each service order and coordinates users, aggregators, and service providers in handling payment disputes.
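An SLA contract of this kind mainly needs to commit to the agreed terms and track an order's lifecycle. The sketch below is a guess at a minimal shape, not SAKSHI's actual contract; the field names, states, and settlement rule are all assumptions:

```python
from dataclasses import dataclass
from enum import Enum, auto

class OrderStatus(Enum):
    OPEN = auto()
    SERVED = auto()
    DISPUTED = auto()
    SETTLED = auto()

@dataclass
class SLAContract:
    """On-chain SLA: commits to the agreed terms and tracks order status."""
    user: str
    aggregator: str
    provider: str
    price_wei: int
    model_commitment: str            # hash committing to the agreed ML model
    status: OrderStatus = OrderStatus.OPEN

    def mark_served(self) -> None:
        self.status = OrderStatus.SERVED

    def raise_dispute(self) -> None:
        # Any party can escalate; the proof layer supplies the evidence.
        self.status = OrderStatus.DISPUTED

    def settle(self, provider_honest: bool) -> str:
        """Pay the provider if the evidence checks out, else refund the user."""
        self.status = OrderStatus.SETTLED
        return self.provider if provider_honest else self.user
```

The `model_commitment` field is what ties this layer to the proof layer: disputes reduce to checking whether the service matched that commitment.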

For the transaction layer to have evidence to rely on when handling disputes, the proof layer checks whether the service provider uses the model agreed in the SLA. However, SAKSHI did not choose to generate a ZKP for the ML calculation process; instead, it adopts the idea of optimistic proofs, establishing a network of challenger nodes that spot-check the service. Node incentives are borne by Witness Chain.
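The optimistic-proof idea can be illustrated with a commitment check: the provider commits to a model hash in the SLA, and a challenger succeeds when what was actually served does not match that commitment. This is a simplified sketch (in practice challengers would check inference outputs, not raw weights), with invented function names:

```python
import hashlib

def model_commitment(weights: bytes) -> str:
    """Hash committing to the model the provider agreed to serve in the SLA."""
    return hashlib.sha256(weights).hexdigest()

def challenge(agreed_commitment: str, served_weights: bytes) -> bool:
    """Challenger node spot-check.

    Returns True when the challenge succeeds, i.e. the provider deviated
    from the model committed to in the SLA."""
    return model_commitment(served_weights) != agreed_commitment

honest = b"model-v1-weights"
sla_commitment = model_commitment(honest)
challenge(sla_commitment, honest)           # False: honest provider
challenge(sla_commitment, b"cheaper-model") # True: deviation detected
```

This is what makes the scheme "optimistic": no proof is produced up front, and verification cost is only paid when a challenger actually raises a dispute.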

Although the SLAs and the challenger node network live on Witness Chain, in SAKSHI's plan Witness Chain does not intend to use native token incentives to achieve standalone security; instead, it inherits Ethereum's security through Eigen Layer, so the entire economic layer ultimately relies on Eigen Layer.

As can be seen, SAKSHI sits between AI service providers and users, organizing different AIs in a decentralized manner to serve users; this is more of a horizontal solution. Its core appeal is that AI service providers can focus on managing their own off-chain model computation, while matching user needs to model services, paying for services, and verifying service quality are handled through on-chain agreements, with payment disputes resolved automatically where possible. Of course, SAKSHI is still at the theoretical stage, and many implementation details remain to be worked out.

3. Future prospects

Whether it is composable AI or decentralized AI platforms, public chain-based AI ecosystem models seem to share some common ground. For example, AI service providers do not connect with users directly; they only need to provide ML models and perform off-chain computation. Payments, dispute resolution, and the matching of user needs to services can all be handled by decentralized protocols. As trustless infrastructure, the public chain reduces friction between service providers and users, and users gain greater autonomy.

Although the advantages of using a public chain as an application base are familiar arguments, they do apply to AI services. The difference between AI applications and existing dapps, however, is that AI applications cannot place all of their computation on-chain, so ZK or optimistic proofs are needed to connect AI services to the public chain system in a more trustless manner.

With the rollout of experience optimizations such as account abstraction, users may no longer need to be aware of mnemonics, chains, or gas. This brings the public chain ecosystem closer to web2 in terms of experience, while users gain a degree of freedom and composability beyond web2 services. This will be very attractive to users, and a public chain-based AI application ecosystem is worth looking forward to.


Kernel Ventures is a crypto venture capital fund driven by a research and development community with over 70 early-stage investments focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, modular blockchains, and verticals that will host the next billion crypto users, such as account abstraction, data availability, scalability, etc. Over the past seven years, we’ve been committed to supporting the development of core development communities and university blockchain associations around the world.

Disclaimer:

  1. This article is reprinted from [mirror]. All copyrights belong to the original author [Kernel Ventures Jerry Luo]. If there are objections to this reprint, please contact the Gate Learn team (gatelearn@gate.io), and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.