Whether verifiable AI is needed depends on two questions: whether the AI's output modifies on-chain data, and whether fairness and privacy are involved
Vertical AI application ecosystem: since one end of verifiable AI is a smart contract, verifiable AI applications, and even AI services and native dapps, may be able to call each other trustlessly. This points to a potential composable AI application ecosystem
Modulus Labs is an “on-chain AI” company that believes AI can significantly enhance the capabilities of smart contracts and make web3 applications more powerful. However, applying AI to web3 runs into a contradiction: AI requires a large amount of computing power, so it must run off-chain, where it is a black box. This does not meet web3’s basic requirements of being trustless and verifiable.
Therefore, Modulus Labs drew on the zk rollup scheme of [off-chain preprocessing + on-chain verification] and proposed an architecture for verifiable AI. Specifically, the ML model runs off-chain, and a zkp is additionally generated off-chain for the ML calculation process. Through this zkp, the architecture, weights, and inputs of the off-chain model can be verified. The zkp can also be posted on-chain for verification by smart contracts. At this point, AI and on-chain contracts can interact more trustlessly; that is, “on-chain AI” has been realized.
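The [off-chain inference → off-chain proof → on-chain verification] flow described above can be sketched roughly as follows. This is an illustration only, assuming invented names: the hash-based “commitments”, the toy linear model, and `onchain_verify` stand in for a real zk proving system and verifier contract, which Modulus Labs’ actual stack would replace.

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass
class Proof:
    model_commitment: str   # binds the proof to a specific architecture + weights
    input_commitment: str   # binds the proof to the inference input
    output: list            # claimed model output

def commit(obj) -> str:
    """Stand-in commitment: a hash of the serialized object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def prove_inference(weights, x):
    """Off-chain: run the model and emit a 'proof' binding model, input, output."""
    y = [sum(w * xi for w, xi in zip(row, x)) for row in weights]  # toy linear model
    return y, Proof(commit(weights), commit(x), y)

def onchain_verify(proof: Proof, registered_model_commitment: str) -> bool:
    """Contract side: accept only outputs proven to come from the registered model."""
    return proof.model_commitment == registered_model_commitment

weights = [[1, 2], [3, 4]]
registered = commit(weights)                 # model commitment held by the contract
y, proof = prove_inference(weights, [1, 1])
assert onchain_verify(proof, registered)
print(y)  # [3, 7]
```

The key property the sketch illustrates is that the contract never re-runs the model; it only checks that the claimed output is tied to the committed model and input.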
Based on the idea of verifiable AI, Modulus Labs has launched three “on-chain AI” applications so far, and has also proposed many possible application scenarios.
Additionally, Modulus Labs mentioned a few other use cases:
Photo Credit: Modulus Labs
In the Rocky bot scenario, users may not need to verify the ML calculation process. First, users lack the expertise to perform real verification. Even with a verification tool, from the user’s point of view it amounts to “I press a button and the interface tells me this AI service really was generated by a certain model”, and the authenticity still cannot be judged. Second, users have no need to verify, because what they care about is whether the AI’s yield is high. They migrate when profitability is low and always choose the best-performing model. In short, when the AI’s end result is all the user is after, the verification process may not matter much, because the user simply migrates to whichever service performs best.
**One possible solution is that the AI only acts as an adviser, and the user executes the transaction independently.** When people enter their trading goals into the AI, it calculates and returns a better transaction path or trade direction off-chain, and the user chooses whether to execute it. Users then have no need to verify the model behind it; they just pick the product with the highest return.
Another dangerous but highly likely situation is that people don’t care about control over their assets or about the AI calculation process at all. When a robot that automatically earns money appears, people may even be willing to custody money directly with it, just as they deposit tokens into a CEX or a traditional bank. Because people don’t care about the principles behind it, only how much money they end up with (or even just how much the project team claims to earn), such a service may acquire a large number of users quickly, and may even iterate faster than products that use verifiable AI.
Taking a step back, if AI does not participate in on-chain state changes at all, but simply scrapes on-chain data and preprocesses it for users, then there is no need to generate ZKP for the calculation process. Here are a few examples of this type of application as a “data service”:
This article argues that scenarios involving multiple parties, fairness, or privacy require ZKPs for verification. Several of the applications mentioned by Modulus Labs are discussed here:
Generally speaking, when an AI acts like a decision maker whose output has wide influence and touches the fairness of many parties, people will demand a review of the decision-making process, or at least assurance that there are no major problems with it; and protecting personal privacy is an equally immediate requirement.
Therefore, “whether the AI output modifies on-chain state” and “whether it affects fairness/privacy” are the two criteria for judging whether a verifiable AI solution is needed.
Photo Credit: Kernel Ventures
In any case, Modulus Labs’ solution is highly instructive about how AI can combine with crypto and bring practical application value. However, the public chain system can do more than enhance individual AI services: it has the potential to build a new AI application ecosystem. This new ecosystem creates relationships among AI services, between AI services and users, and even across upstream and downstream links that differ from those of Web2. We can summarize the potential AI application ecosystem models into two types: a vertical model and a horizontal model.
The “Leela vs. the World” on-chain chess use case occupies a special place. People can place bets on the human or the AI, and tokens are automatically distributed after the game ends. Here the zkp is meaningful not only for users to verify the AI’s calculations, but also as a trust guarantee that triggers on-chain state transitions. With that guarantee, there can also be dapp-level composability between AI services, and between AI and crypto-native dapps.
Image source: Kernel Ventures, with reference from Modulus Labs
The basic unit of composable AI is [off-chain ML model - zkp generation - on-chain verification contract - main contract]. This unit draws on the “Leela vs. the World” framework, but the actual architecture of a single AI dapp may differ from the diagram above. First, the chess example needs a contract to hold the game state, but in reality an AI dapp may not need an on-chain contract at all. Still, as far as composable AI architecture is concerned, if the main business is recorded in contracts, it is easier for other dapps to compose with it. Second, the main contract does not necessarily need to affect the AI dapp’s own ML model, because an AI dapp’s effect may be unidirectional: after the ML model finishes processing, it is enough to trigger a contract related to its own business, and that contract can then be called by other dapps.
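The composable unit described above can be sketched as follows. All contract names and the proof format are hypothetical; a real verification contract would check a zkp rather than compare commitment strings, and the “betting dapp” stands in for any second dapp composing on the AI dapp’s main contract, as in the “Leela vs. the World” betting flow.

```python
class VerifierContract:
    """Stand-in for the on-chain zkp verification contract."""
    def __init__(self, model_commitment: str):
        self.model_commitment = model_commitment

    def verify(self, proof: dict) -> bool:
        # Placeholder for real zkp verification.
        return proof.get("model") == self.model_commitment

class MainContract:
    """Records the AI dapp's business state; other dapps compose by calling it."""
    def __init__(self, verifier: VerifierContract):
        self.verifier = verifier
        self.state = None

    def submit(self, proof: dict, result: str):
        if not self.verifier.verify(proof):
            raise ValueError("invalid proof")
        self.state = result  # state transition happens only behind a valid proof

class BettingDapp:
    """A second dapp composing trustlessly on the AI dapp's main contract."""
    def settle(self, main: MainContract) -> str:
        return "AI wins" if main.state == "ai_win" else "humans win"

verifier = VerifierContract(model_commitment="0xabc")
main = MainContract(verifier)
main.submit({"model": "0xabc"}, "ai_win")
print(BettingDapp().settle(main))  # AI wins
```

The point of the sketch is the unidirectional flow: the ML result enters the chain only through the verifier-gated main contract, and downstream dapps read that contract without trusting the off-chain model.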
More broadly, calls between contracts are calls between different web3 applications: calls involving personal identity, assets, financial services, and even social information. We can imagine a specific combination of AI applications:
Interaction between AIs within a public chain framework is not a new topic. Loaf, a contributor to the Realms full-chain game ecosystem, once proposed that AI NPCs could trade with each other like players, so that the entire economic system can optimize itself and run automatically. AI Arena has developed an AI automated battle game: a user first buys an NFT, where each NFT represents a battle robot backed by an AI model. The user first plays the game themselves, then hands the data to the AI for imitation learning. Once the user feels the AI is strong enough, it can automatically battle other AIs in the arena. Modulus Labs mentioned that AI Arena wants to turn all of these AIs into verifiable AI. Both cases point to the possibility of AIs interacting with each other and directly modifying on-chain data as they do.
However, many issues remain to be worked out in the concrete implementation of composable AI, such as how different dapps can use each other’s zkps or verification contracts. That said, there are many excellent projects in the zk field; RISC Zero, for example, has made much progress in performing complex calculations off-chain and posting zkps on-chain. Perhaps one day these pieces can be put together into an appropriate solution.
2.2 Horizontal model: AI service platforms that focus on decentralization
In this regard, we mainly introduce a decentralized AI platform called SAKSHI, jointly proposed by researchers from Princeton, Tsinghua University, the University of Illinois at Urbana-Champaign, the Hong Kong University of Science and Technology, Witness Chain, and EigenLayer. Its core goal is to let users access AI services in a more decentralized manner, making the entire process more trustless and automated.
Photo Credit: SAKSHI
SAKSHI’s structure can be divided into six layers: the service layer, control layer, transaction layer, proof layer, economic layer, and marketplace layer.
The marketplace is the layer closest to the user. Aggregators in the marketplace provide services to users on behalf of different AI providers. Users place orders through aggregators and reach agreements with them on service quality and price (these agreements are called SLAs, service-level agreements).
Next, the service layer provides an API for the client side; the client makes an ML inference request to the aggregator, and the request is routed to a server that matches it with an AI service provider (this routing is part of the control layer). The service layer and control layer thus resemble a web2 service with multiple servers, except that the servers are operated by different entities, each linked to the aggregator through a previously signed SLA.
SLAs are deployed on-chain in the form of smart contracts, all of which belong to the transaction layer (note: in this solution, they are deployed on Witness Chain). The transaction layer also records the current status of each service order and coordinates users, aggregators, and service providers in handling payment disputes.
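An on-chain SLA order record of the kind the transaction layer maintains might look like the following. This is a hedged sketch: the field names, states, and settlement outcomes are invented for illustration and are not taken from the SAKSHI paper.

```python
from dataclasses import dataclass
from enum import Enum

class OrderStatus(Enum):
    OPEN = "open"           # order placed, payment escrowed
    SERVED = "served"       # provider claims the inference was delivered
    DISPUTED = "disputed"   # user or challenger contests the service
    SETTLED = "settled"     # payment released or refunded

@dataclass
class SLAOrder:
    user: str
    aggregator: str
    provider: str
    model_commitment: str   # which model the provider promised to run
    price: int              # escrowed payment amount
    status: OrderStatus = OrderStatus.OPEN

    def mark_served(self):
        assert self.status is OrderStatus.OPEN
        self.status = OrderStatus.SERVED

    def dispute(self):
        assert self.status is OrderStatus.SERVED
        self.status = OrderStatus.DISPUTED

    def settle(self, fraud_proven: bool) -> str:
        """Release escrow to the provider, or refund the user on proven fraud."""
        self.status = OrderStatus.SETTLED
        return "refund_user" if fraud_proven else "pay_provider"

order = SLAOrder("alice", "agg-1", "prov-1", "0xmodel", 100)
order.mark_served()
print(order.settle(fraud_proven=False))  # pay_provider
```

The state machine is the useful part: each transition is a contract call, so users, aggregators, and providers coordinate through the order’s on-chain status rather than through mutual trust.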
So that the transaction layer has evidence to rely on when handling disputes, the proof layer checks whether the service provider uses the model agreed in the SLA. SAKSHI does not generate a zkp for the ML calculation process; instead it adopts the idea of optimistic proofs, establishing a network of challenger nodes to test the service. Node incentives are borne by Witness Chain.
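The optimistic, challenger-based check contrasts with the zkp approach and can be sketched as follows. Everything here is an assumption for illustration: `run_model` stands in for deterministic ML inference, and a real challenger protocol would involve staking, sampling of requests, and an on-chain fraud-proof game rather than a direct re-execution.

```python
import hashlib

def run_model(model_id: str, x: str) -> str:
    """Stand-in for deterministic ML inference on the agreed model."""
    return hashlib.sha256(f"{model_id}:{x}".encode()).hexdigest()

class ProviderResponse:
    def __init__(self, claimed_model: str, x: str, output: str):
        self.claimed_model = claimed_model  # model the SLA says was used
        self.x = x                          # inference input
        self.output = output                # output the provider returned

def challenge(resp: ProviderResponse) -> bool:
    """A challenger re-runs the agreed model; a mismatch proves the provider cheated."""
    expected = run_model(resp.claimed_model, resp.x)
    return expected != resp.output  # True => fraud proven, provider can be slashed

honest = ProviderResponse("sla-model", "query-1", run_model("sla-model", "query-1"))
cheat = ProviderResponse("sla-model", "query-1", run_model("cheaper-model", "query-1"))
print(challenge(honest), challenge(cheat))  # False True
```

The design trade-off versus zkp is visible even in the sketch: no proof is generated up front, so the honest path is cheap, but correctness relies on at least one challenger actually re-running the computation.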
Although the SLAs and the challenger node network live on Witness Chain, in SAKSHI’s plan Witness Chain does not use its own token incentives to achieve independent security; instead it inherits Ethereum’s security through EigenLayer, so the entire economic layer actually relies on EigenLayer.
As can be seen, SAKSHI sits between AI service providers and users, organizing different AIs in a decentralized manner to serve users, which makes it more of a horizontal solution. Its core appeal is that it lets AI service providers focus on managing their own off-chain model computation, while matching user needs with model services, paying for services, and verifying service quality are handled through on-chain agreements, with payment disputes resolved automatically where possible. Of course, SAKSHI is still at the theoretical stage, and plenty of implementation details remain to be worked out.
Whether composable AI or decentralized AI platforms, public chain-based AI ecosystem models seem to share common features. For example, AI service providers do not connect with users directly; they only need to provide ML models and perform off-chain computation. Payments, dispute resolution, and coordination between user needs and services can all be handled by decentralized protocols. As trustless infrastructure, the public chain reduces friction between service providers and users, and users gain greater autonomy.
Although the advantages of using a public chain as an application base are well-worn, they genuinely apply to AI services too. The difference between AI applications and existing dapps, however, is that AI applications cannot place all their computation on-chain, so zk or optimistic proofs are needed to connect AI services to the public chain system in a more trustless manner.
With the rollout of experience optimizations such as account abstraction, users may no longer need to be aware of mnemonics, chains, or gas. This brings the public chain ecosystem closer to web2 in experience, while users gain more freedom and composability than web2 services offer. That will be very attractive to users, and a public chain-based AI application ecosystem is worth looking forward to.
Kernel Ventures is a crypto venture capital fund driven by a research and development community with over 70 early-stage investments focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, modular blockchains, and verticals that will host the next billion crypto users, such as account abstraction, data availability, scalability, etc. Over the past seven years, we’ve been committed to supporting the development of core development communities and university blockchain associations around the world.
Disclaimer: