Axonum enshrines AI into blockchain to build a decentralized supercomputer powered by global collective intelligence.
We are building Axonum, an AI optimistic rollup, with the world’s first AI EVM.
We aim to democratize access to AI-powered DApps, making AI model inferences both accessible and user-friendly.
Axonum is an optimistic rollup with enshrined AI powered by opML and AI EVM. It enables users to seamlessly employ AI models natively within smart contracts without being encumbered by the intricacies of underlying technologies.
To enable native ML inference in smart contracts, we need to modify the execution layer of the layer 2 chain. Specifically, we add a precompiled contract, inference, to the EVM to build the AI EVM.
The AI EVM conducts the ML inference in native execution and returns deterministic results. When a user wants an AI model to process data, all they need to do is call the precompiled contract inference with the model address and the model input; they then obtain the model output and can use it natively in the smart contract.
import "./AILib.sol";
contract AIContract {
...
function inference(bytes32 model_address, bytes memory input_data, uint256 output_size) public {
bytes memory output = AILib.inference(model_address, input_data, output_size);
emit Inference(model_address, input_data, output_size, output);
}
}
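The AILib library imported above is not part of the snippet; a minimal sketch of what it might look like is shown below, assuming the AI EVM exposes the inference precompile at a fixed address. The precompile address and the call encoding here are illustrative assumptions, not documented Axonum constants.

pragma solidity ^0.8.0;

// Illustrative sketch of AILib: forwards the request to a hypothetical
// inference precompile of the AI EVM. The address 0x100 and the payload
// encoding are assumptions made for this example.
library AILib {
    address constant INFERENCE_PRECOMPILE = address(0x100); // placeholder address

    function inference(
        bytes32 model_address,
        bytes memory input_data,
        uint256 output_size
    ) internal view returns (bytes memory) {
        // Pack the model address, the expected output size, and the raw input.
        bytes memory payload = abi.encodePacked(model_address, output_size, input_data);
        (bool ok, bytes memory output) = INFERENCE_PRECOMPILE.staticcall(payload);
        require(ok, "inference precompile failed");
        return output;
    }
}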
The models are stored in the model data availability (DA) layer. Every model can be retrieved from the DA layer using its model address. We assume the data availability of all the models.
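As an illustration only, a contract-facing model hub could expose the mapping from a model address to the commitment under which the model is stored in the DA layer. The interface below is an assumption made for the sake of the example, not Axonum's actual API.

pragma solidity ^0.8.0;

// Hypothetical interface sketch: maps a model address to the commitment
// (e.g. a hash of the model weights) stored in the DA layer.
interface IModelHub {
    function modelCommitment(bytes32 modelAddress) external view returns (bytes32);
}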
The core design principle of the precompiled contract inference follows that of opML: we separate execution from proving. We provide two implementations of the precompiled contract inference. One is compiled for native execution and optimized for speed; the other is compiled for the fraud proof VM and is used to prove the correctness of the opML results.
For the execution implementation, we reuse the ML engine from opML. We first fetch the model from the model hub using the model address and load it into the ML engine. The engine takes the user’s input to the precompiled contract as the model input and executes the ML inference task. The ML engine guarantees the consistency and determinism of the inference results using quantization and soft-float arithmetic.
Besides the current AI EVM design, an alternative approach to enabling AI in the EVM is to add machine learning-specific opcodes to the EVM, with corresponding changes to the virtual machine’s resource and pricing model as well as its implementation.
opML (Optimistic Machine Learning) and optimistic rollup (opRollup) are both based on a similar fraud-proof system, making it feasible to integrate opML into the Layer 2 (L2) chain alongside the opRollup system. This integration enables the seamless utilization of machine learning within smart contracts on the L2 chain.
Just like existing rollup systems, Axonum is responsible for “rolling up” transactions by batching them before publishing them to the L1 chain, usually through a network of sequencers. A single batch can include thousands of transactions, increasing the combined throughput of L1 and L2.
Axonum, as an optimistic rollup, is an interactive scaling method for L1 blockchains. We optimistically assume that every proposed transaction is valid by default. Unlike a traditional L2 optimistic rollup, a transaction in Axonum can include AI model inferences, which makes the smart contracts on Axonum “smarter” with AI.
To mitigate potentially invalid transactions, Axonum, like other optimistic rollups, introduces a challenge period during which participants may challenge a suspect rollup. A fraud-proving scheme allows multiple fraud proofs to be submitted; those proofs establish whether the rollup is valid or invalid. During the challenge period, state changes may be disputed and resolved, or included if no challenge is presented (and the required proofs are in place).
Here’s the essential workflow of Axonum, without considering mechanisms such as pre-confirmation or force exit:
1. Users submit transactions, which may include AI model inferences, to the L2 network.
2. The sequencer executes the transactions on the AI EVM, batches them, and publishes the batch together with the resulting state root to the L1 chain.
3. During the challenge period, anyone can dispute an incorrect state root by submitting a fraud proof, using the Geth fraud proof for ordinary execution and the opML fraud proof for ML inferences.
4. If no valid challenge is raised before the challenge period ends, the state is finalized on L1.
The core design principle of Axonum’s fraud proof system is that we separate the fraud proof process of Geth (the Golang implementation of the Ethereum client, which serves as the layer 2 execution client) from that of opML. This design ensures a robust and efficient fraud proof mechanism. Here’s a breakdown of the fraud proof system and our separation design:
- Geth fraud proof: disputes over ordinary EVM execution are resolved by replaying the disputed layer 2 state transition in the fraud proof VM, as in existing optimistic rollups.
- opML fraud proof: disputes over the results of the inference precompile are resolved by opML’s own fraud proof, which re-executes the disputed ML computation in the fraud proof VM.
- Separation: because the two proof processes are independent, the heavy ML computation never needs to be proven inside the Geth fraud proof, keeping each proof compact and efficient.
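To make the separation concrete, here is a conceptual sketch of how an L1 dispute contract might route a disputed step to one of two one-step verifiers: one for ordinary Geth execution and one for opML inference. The contract and interface names are illustrative assumptions, not Axonum’s actual contracts.

pragma solidity ^0.8.0;

// Hypothetical one-step verifier interface shared by the two proof systems.
interface IOneStepVerifier {
    // Verifies a single disputed step and returns the resulting state root.
    function verifyStep(bytes calldata proof, bytes32 preStateRoot) external view returns (bytes32 postStateRoot);
}

// Illustrative router: ordinary execution disputes go to the Geth verifier,
// disputes inside the inference precompile go to the opML verifier.
contract DisputeRouter {
    IOneStepVerifier public immutable evmVerifier;  // proves Geth execution steps
    IOneStepVerifier public immutable opmlVerifier; // proves ML inference steps

    constructor(IOneStepVerifier _evm, IOneStepVerifier _opml) {
        evmVerifier = _evm;
        opmlVerifier = _opml;
    }

    // isInferenceStep indicates whether the bisected step falls inside the
    // inference precompile; the two proof systems never need to overlap.
    function resolveStep(bool isInferenceStep, bytes calldata proof, bytes32 preStateRoot)
        external
        view
        returns (bytes32)
    {
        IOneStepVerifier v = isInferenceStep ? opmlVerifier : evmVerifier;
        return v.verifyStep(proof, preStateRoot);
    }
}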
Axonum is the first AI optimistic rollup that enables AI on Ethereum natively, trustlessly, and verifiably.
Axonum leverages optimistic ML and optimistic rollup, and introduces the AI EVM, to add intelligence to Ethereum as a Layer 2.
We enshrine AI into blockchain to build a decentralized supercomputer powered by global collective intelligence.