Galaxy Digital: Exploring the Intersection of Cryptocurrency and AI

Intermediate · 2/28/2024, 4:59:26 AM
This article explores the intersection of cryptocurrency and artificial intelligence, highlighting the emergence of public blockchains as one of the most profound advancements in computer science history. It discusses how the development of AI is already having a significant impact on our world.

Introduction

The emergence of public blockchains marks a profound advancement in the history of computer science, while the development of artificial intelligence is having a significant impact on our world. Blockchain technology offers new templates for transaction settlement, data storage, and system design, whereas artificial intelligence represents a revolution in computing, analysis, and content delivery. Innovations in these industries are unleashing new use cases that could accelerate the adoption of both sectors in the coming years. This report examines the ongoing integration of cryptocurrency and artificial intelligence, focusing on novel use cases that aim to bridge the gap between the two and leverage their strengths. It specifically looks into projects developing decentralized computing protocols, zero-knowledge machine learning (zkML) infrastructure, and AI agents.

Cryptocurrency offers a permissionless, trustless, and composable settlement layer for AI, unlocking use cases like easier access to hardware through decentralized computing systems, building AI agents capable of executing complex tasks requiring value exchange, and developing identity and provenance solutions to counter Sybil attacks and deepfakes. AI brings to cryptocurrency many of the same benefits seen in Web 2.0, including enhanced user and developer experiences through large language models like ChatGPT and Copilot, and significantly improved functionality and automation potential for smart contracts. Blockchain provides the transparent, data-rich environment needed for AI, though the limited computing power of blockchain is a major barrier to direct integration of AI models.

The experiments and eventual adoption in the crossover of cryptocurrency and AI are driven by the same forces that drive the most promising use cases for cryptocurrency: access to a permissionless and trustless coordination layer, facilitating better value transfer. Given the vast potential, stakeholders in this field need to understand the fundamental ways these technologies intersect.

Key points:

    • In the near future (6 months to 1 year), the integration of cryptocurrency and AI will be dominated by AI applications that enhance developer efficiency, smart contract auditability and security, and user accessibility. These integrations are not specific to cryptocurrency but enhance the experience for on-chain developers and users.
    • As high-performance GPUs remain in short supply, decentralized computing products are implementing AI-customized GPU products to boost adoption.
    • User experience and regulatory hurdles remain obstacles to attracting decentralized computing customers. However, recent developments from OpenAI and ongoing regulatory reviews in the U.S. highlight the value proposition of permissionless, censorship-resistant, and decentralized AI networks.
    • On-chain AI integration, especially smart contracts capable of utilizing AI models, requires improvements in zkML technology and other methods for verifying off-chain computations. A lack of comprehensive tools, developer talent, and high costs are barriers to adoption.
    • AI agents are well-suited for cryptocurrency, where users (or the agents themselves) can create wallets to transact with other services, agents, or individuals—a capability not possible with traditional financial rails. Additional integration with non-crypto products is needed for broader adoption.

Terms

Artificial Intelligence is the use of computation and machines to imitate the reasoning and problem-solving capabilities of humans.

Neural Networks are a method of training artificial intelligence models. They run inputs through discrete algorithmic layers, refining them until the desired output is produced. Neural networks consist of equations with weights that can be adjusted to change the output. They may require extensive data and computation for training to ensure accurate outputs. This is one of the most common ways to develop AI models (e.g., ChatGPT relies on a neural network process based on Transformers).

Training is the process of developing neural networks and other AI models. It requires large amounts of data to teach models to correctly interpret inputs and produce accurate outputs. During training, the model’s equation weights are continuously adjusted until a satisfactory output is produced. Training can be very costly; OpenAI, for example, uses tens of thousands of its own GPUs to train the models behind ChatGPT. Teams with fewer resources often rely on specialized compute providers such as Amazon Web Services, Azure, and Google Cloud.

Inference is the actual use of AI models to obtain outputs or results (for example, using ChatGPT to create an outline for a paper on the intersection of cryptocurrency and AI). Inference is used both throughout the training process and in the final product. Because running models still requires substantial computation, operating costs remain high even after training is complete, although inference is far less computationally intensive than training.
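To make the training/inference distinction above concrete, below is a minimal sketch that fits a single-weight model with gradient descent and then uses the trained weight to answer a new query. The model, data, and learning rate are illustrative assumptions only, not taken from any project discussed in this report.

```python
# Toy illustration of training vs. inference: a single-weight "network"
# is fit to the relationship y = 2x, then used to answer a new query.
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=100)
ys = 2.0 * xs                      # target relationship the model should learn

w = 0.0                            # the adjustable "equation weight"
lr = 0.1                           # learning rate

# Training: repeatedly adjust the weight until outputs match the data.
for _ in range(200):
    preds = w * xs
    grad = np.mean(2 * (preds - ys) * xs)   # gradient of mean squared error
    w -= lr * grad

# Inference: use the trained weight to answer a new query (cheap by comparison).
print(f"learned weight ≈ {w:.3f}")          # converges to ~2.0
print(f"prediction for x=0.5: {w * 0.5:.3f}")
```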

Zero-Knowledge Proofs (ZKP) allow for the verification of statements without revealing underlying information. This is useful in cryptocurrency for two main reasons: 1) Privacy and 2) Scaling. For privacy, it enables users to transact without revealing sensitive information (e.g., how much ETH is in a wallet). For scaling, it allows off-chain computations to be proved on-chain more quickly than re-executing the computations. This enables blockchains and applications to run computations cheaply off-chain and then verify them on-chain. For more information on zero-knowledge and its role in Ethereum Virtual Machines, see Christine Kim’s report on zkEVMs: The Future of Ethereum Scalability.

AI/Cryptocurrency Market Map

Projects integrating artificial intelligence and cryptocurrency are still building the underlying infrastructure needed to support large-scale on-chain AI interactions.

Decentralized computing markets are emerging to provide the vast amounts of physical hardware, primarily Graphics Processing Units (GPUs), needed to train AI models and run inference. These two-sided markets connect those renting out computing with those seeking to rent it, facilitating the transfer of value and the verification of computations. Within decentralized computing, several subcategories offering additional functionality are emerging. Besides two-sided marketplaces, this report also reviews machine learning training providers that offer verifiable training and fine-tuning of outputs, as well as projects dedicated to linking computation and model generation to enable AI, often referred to as intelligent incentive networks.

zkML is an emerging focus area for projects aiming to provide verifiable model outputs on-chain in an economically viable and timely manner. These projects mainly enable applications to handle heavy computational requests off-chain, then post verifiable outputs on-chain, proving that the off-chain workload was complete and accurate. zkML is currently both expensive and time-consuming but is increasingly being used as a solution. This is evident in the growing number of integrations between zkML providers and DeFi/gaming applications wishing to leverage AI models.

The ample supply of computation and the ability to verify on-chain computation open the door for on-chain AI agents. Agents are trained models capable of executing requests on behalf of users. Agents offer the opportunity to significantly enhance the on-chain experience, allowing users to execute complex transactions simply by conversing with a chatbot. However, as of now, agent projects are still focused on developing the infrastructure and tools for easy and rapid deployment.

Decentralized Computing

Overview

Artificial intelligence requires extensive computation to train models and run inference. Over the past decade, as models have become increasingly complex, the demand for computation has grown exponentially. OpenAI, for instance, observed that while the compute used to train its models had historically doubled roughly every two years, from 2012 to 2018 it doubled approximately every three and a half months. This led to a surge in demand for GPUs, with some cryptocurrency miners even repurposing their GPUs to provide cloud computing services. As competition for computational access intensifies and costs rise, some projects are leveraging cryptographic technology to offer decentralized computing solutions. They provide on-demand computing at competitive prices, enabling teams to train and run models affordably, although in some cases the trade-offs involve performance and security.

The demand for state-of-the-art GPUs (e.g., those produced by Nvidia) is significant. In September, Tether acquired a stake in German bitcoin miner Northern Data, reportedly spending $420 million to purchase 10,000 H100 GPUs (among the most advanced GPUs for AI training). The wait time to acquire top-tier hardware can be six months or longer in many instances. Worse, companies are often required to sign long-term contracts to secure computational volumes they might not even use. This can lead to situations where computational resources exist but are not available on the market. Decentralized computing systems help address these market inefficiencies by creating a secondary market where compute owners can sublease their excess capacity at a moment’s notice, releasing new supply.

Beyond competitive pricing and accessibility, a key value proposition of decentralized computing is censorship resistance. Cutting-edge AI development is increasingly dominated by large tech companies with unmatched access to computation and data. The first key theme highlighted in the 2023 AI Index Annual Report was that industry has increasingly surpassed academia in the development of AI models, concentrating control in the hands of a few technology leaders. This raises concerns about their potential to wield outsized influence in setting the norms and values that underpin AI models, especially as these same companies push for regulation to limit AI development beyond their control.

Verticals in Decentralized Computing

Several models of decentralized computing have emerged in recent years, each with its own focus and trade-offs.

Generalized Computing

Broadly speaking, decentralized computing projects such as Akash, io.net, iExec, and Cudos offer, in addition to data and general compute, access to specialized computation for AI training and inference. Akash stands out as the only fully open-source “super cloud” platform, built with the Cosmos SDK on a proof-of-stake network. AKT, Akash’s native token, serves as a payment method, secures the network, and incentivizes participation. Akash launched its mainnet in 2020, initially focused on a permissionless cloud computing marketplace featuring storage and CPU leasing services. In June 2023, Akash introduced a GPU-centric testnet, followed by a GPU mainnet launch in September, enabling GPU rentals for AI training and inference.

The Akash ecosystem comprises two primary participants: tenants, who seek computing resources, and providers, the compute suppliers. A reverse auction process facilitates the matching of tenants and providers, where tenants post their compute requirements, including preferred server locations or hardware types and their budget. Providers then bid, with the lowest bidder awarded the task. Validators maintain network integrity, with a current cap at 100 validators, planned to increase over time. Participation as a validator is open to those who stake more AKT than the least-staked current validator. AKT holders can delegate their tokens to validators, with transaction fees and block rewards distributed in AKT. Moreover, for each lease, the Akash network earns a “take rate,” decided by the community, distributed to AKT holders.
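As an illustration of the reverse-auction matching just described, here is a minimal sketch: a tenant posts requirements and a budget, providers bid, and the lowest eligible bid wins. The data structures and field names are hypothetical, not Akash’s actual on-chain order or bid types.

```python
# Minimal sketch of a reverse-auction matching flow: tenants post orders,
# providers bid, and the cheapest bid that satisfies the order wins.
from dataclasses import dataclass

@dataclass
class Order:                     # posted by a tenant
    gpu_model: str
    region: str
    max_price: float             # budget ceiling (units are illustrative)

@dataclass
class Bid:                       # posted by a provider
    provider: str
    gpu_model: str
    region: str
    price: float

def match(order: Order, bids: list[Bid]) -> Bid | None:
    """Return the cheapest bid that meets the order's requirements, if any."""
    eligible = [
        b for b in bids
        if b.gpu_model == order.gpu_model
        and b.region == order.region
        and b.price <= order.max_price
    ]
    return min(eligible, key=lambda b: b.price) if eligible else None

order = Order(gpu_model="A100", region="us-west", max_price=1.50)
bids = [Bid("prov-a", "A100", "us-west", 1.40),
        Bid("prov-b", "A100", "us-west", 1.10),
        Bid("prov-c", "H100", "us-west", 0.90)]
print(match(order, bids))        # prov-b wins with the lowest eligible price
```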

Secondary Market

The secondary market for decentralized computing aims to address inefficiencies in the existing computational market, where supply constraints lead to companies hoarding resources beyond their needs and long-term contracts with cloud providers further limit supply. Decentralized computing platforms unlock new supply, enabling anyone with computational needs to become a provider.

Whether the surge in demand for GPUs for AI training translates to sustained network use on Akash remains to be seen. Historically, Akash has offered CPU-based market services at a 70-80% discount compared to centralized alternatives, yet this pricing strategy has not significantly driven adoption. Network activity, measured by active leases, has plateaued, with an average of 33% compute, 16% memory, and 13% storage utilization by the second quarter of 2023, impressive for on-chain adoption but indicative of supply still outstripping demand. Half a year since the GPU network launch, it’s too early for a definitive assessment of long-term adoption, though early signs show a 44% average GPU utilization, driven mainly by demand for high-quality GPUs like the A100, over 90% of which have been rented out.

Akash’s daily expenditures have nearly doubled since the introduction of GPUs, attributed partly to increased usage of other services, especially CPUs, but mainly due to new GPU usage.

Pricing is competitive with, or in some cases more expensive than, centralized counterparts like Lambda Cloud and Vast.ai. High demand for top-end GPUs, such as the H100 and A100, means most owners of such equipment are not interested in listing them on a competitively priced market.

Despite this early traction, adoption barriers remain. Decentralized computing networks must take further steps to generate both demand and supply, and teams are experimenting with how best to attract new users. For instance, in early 2024, Akash passed Proposal 240, increasing AKT emissions for GPU providers to incentivize more supply, especially of high-end GPUs. Teams are also working on proof-of-concept models to demonstrate their networks’ live capabilities to potential users. Akash is training its own foundation model and has launched chatbot and image-generation products that run on Akash GPUs. Similarly, io.net has developed a stable diffusion model and is rolling out new network functionality to better emulate network performance and scale.

Decentralized Machine Learning Training

In addition to general computing platforms that can meet the demands of artificial intelligence, a group of professional AI GPU suppliers focused on machine learning model training is also emerging. For instance, Gensyn is “coordinating power and hardware to build collective intelligence,” with the philosophy that “if someone wants to train something and someone is willing to train it, then this training should be allowed to happen.”

This protocol involves four main participants: submitters, solvers, validators, and whistleblowers. Submitters submit tasks with training requests to the network. These tasks include the training objectives, the models to be trained, and the training data. As part of the submission process, submitters need to prepay the estimated computational cost required by solvers.

After submission, tasks are assigned to solvers who actually perform the model training. Then, solvers submit the completed tasks to validators, who are responsible for checking the training to ensure it was correctly completed. Whistleblowers are tasked with ensuring validators act honestly. To motivate whistleblowers to participate in the network, Gensyn plans to regularly offer deliberately incorrect evidence, rewarding whistleblowers who catch them.
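The division of labor among these four roles can be summarized with a short, purely illustrative sketch; none of the classes or checks below reflect Gensyn’s actual protocol code, and the verification logic is a stand-in for the probabilistic scheme described next.

```python
# Schematic walk-through of the four roles: submitter, solver, validator,
# whistleblower. Everything here is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Task:
    objective: str
    model: str
    dataset: str
    prepaid_estimate: float      # submitter escrows the estimated compute cost

def solver_train(task: Task) -> dict:
    # Stand-in for the actual training run performed by a solver.
    return {"task": task.objective, "checkpoint": "weights-v1", "proof": "trace-123"}

def validator_check(result: dict) -> bool:
    # Validator checks the training trace instead of re-running it in full.
    return result["proof"].startswith("trace")

def whistleblower_audit(result: dict, validator_verdict: bool) -> bool:
    # Whistleblower re-checks a validator verdict; the protocol is said to
    # inject deliberately wrong proofs to keep this role rewarded.
    return validator_check(result) == validator_verdict

task = Task("fine-tune sentiment model", "distilbert", "reviews.csv", 12.5)
result = solver_train(task)
verdict = validator_check(result)
print("validator accepted:", verdict)
print("whistleblower agrees:", whistleblower_audit(result, verdict))
```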

Besides providing computation for AI-related workloads, a key value proposition of Gensyn is its verification system, which is still under development. Verification is necessary to ensure that external computation performed by GPU providers is executed correctly (i.e., that users’ models are trained the way they intend). Gensyn addresses this problem with a unique approach, using novel verification methods it describes as “probabilistic proof-of-learning, graph-based pinpoint protocols, and Truebit-style incentive games.” This optimistic verification model allows validators to confirm that solvers ran a model correctly without having to fully rerun it themselves, a process that would be costly and inefficient.

In addition to its innovative verification method, Gensyn also claims to be cost-effective compared to centralized alternatives and cryptocurrency competitors - offering ML training prices up to 80% cheaper than AWS, while outperforming similar projects like Truebit in tests.

Whether these initial results can be replicated on a large scale in decentralized networks remains to be seen. Gensyn hopes to utilize the surplus computational capacity of providers such as small data centers, retail users, and eventually small mobile devices like smartphones. However, as the Gensyn team themselves admit, relying on heterogeneous computing providers introduces some new challenges.

For centralized providers like Google Cloud and CoreWeave, computation is expensive while communication between compute (bandwidth and latency) is cheap, because these systems are designed to move data between hardware as quickly as possible. Gensyn inverts this framework: it reduces computation costs by allowing anyone in the world to offer GPUs, but increases communication costs, since the network must now coordinate compute jobs across distant, heterogeneous hardware. Gensyn has not yet launched, but it offers a proof of concept of what might be achievable when building decentralized machine learning training protocols.

Decentralized General Intelligence

Decentralized computing platforms also offer the possibility of designing methods for artificial intelligence creation. Bittensor is a decentralized computing protocol built on Substrate, attempting to answer the question, “How do we transform artificial intelligence into a collaborative method?” Bittensor aims to achieve the decentralization and commodification of AI generation. Launched in 2021, it hopes to utilize the power of collaborative machine learning models to continuously iterate and produce better AI.

Bittensor draws inspiration from Bitcoin: its native currency, TAO, has a supply cap of 21 million and a four-year halving cycle (the first halving is set for 2025). Rather than using Proof of Work, in which miners earn block rewards by producing a valid nonce, Bittensor relies on “Proof of Intelligence,” requiring miners to run models that generate outputs in response to inference requests.

Incentivizing Intelligence

Initially, Bittensor relied on a Mixture of Experts (MoE) model to generate outputs. When an inference request is submitted, the MoE model does not rely on a generalized model but forwards the request to the most accurate model for the given input type. Imagine building a house, where you hire various experts for different aspects of the construction process (e.g., architects, engineers, painters, construction workers, etc.). MoE applies this to machine learning models, trying to leverage the outputs of different models based on the input. As explained by Bittensor’s founder Ala Shaabana, this is like “talking to a room full of smart people to get the best answer, rather than talking to one person.” Due to challenges in ensuring correct routing, message synchronization to the correct model, and incentives, this method has been shelved until further development of the project.
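For intuition, the sketch below routes requests to different “expert” models with a crude keyword rule; a real Mixture-of-Experts uses a learned gating function, and nothing here reflects Bittensor’s retired implementation.

```python
# Toy router in the spirit of Mixture of Experts: each request is forwarded
# to the model best suited to its input type rather than to one generalist.
def code_expert(prompt: str) -> str:
    return f"[code model] {prompt}"

def math_expert(prompt: str) -> str:
    return f"[math model] {prompt}"

def general_expert(prompt: str) -> str:
    return f"[general model] {prompt}"

EXPERTS = {"code": code_expert, "math": math_expert, "general": general_expert}

def route(prompt: str) -> str:
    """Pick an expert with a crude keyword heuristic (a real MoE learns this gate)."""
    text = prompt.lower()
    if any(k in text for k in ("def ", "compile", "bug")):
        key = "code"
    elif any(k in text for k in ("integral", "prove", "equation")):
        key = "math"
    else:
        key = "general"
    return EXPERTS[key](prompt)

print(route("Why does this def foo() raise a bug?"))
print(route("Evaluate the integral of x^2"))
```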

In the Bittensor network, there are two main participants: validators and miners. Validators send inference requests to miners, review their outputs, and rank them based on the quality of their responses. To ensure their rankings are reliable, validators are given a “vtrust” score based on how consistent their rankings are with other validators. The higher a validator’s vtrust score, the more TAO emissions they receive. This is to incentivize validators to reach a consensus on model rankings over time, as the more validators agree on the rankings, the higher their individual vtrust scores.

Miners, also known as servers, are network participants who run the actual machine learning models. They compete to provide the most accurate outputs for validators’ queries, and the more accurate the outputs, the more TAO emissions they earn. Miners are free to generate these outputs as they wish. For instance, in the future, it’s entirely possible for Bittensor miners to have previously trained models on Gensyn and use them to earn TAO emissions.

Today, most interactions occur directly between validators and miners. Validators submit inputs to miners and request outputs (i.e., model inference). Once validators have queried miners on the network and received their responses, they rank the miners and submit their rankings to the network.

The interaction between validators (who rely on proof-of-stake) and miners (who rely on Proof of Intelligence, a form of proof-of-work) is known as Yuma Consensus. It incentivizes miners to produce the best outputs to earn TAO emissions and incentivizes validators to rank miner outputs accurately, earning higher vtrust scores and more TAO rewards, thereby forming the network’s consensus mechanism.
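The stylized sketch below captures the incentive loop just described: validators score miners, a consensus score is formed, miners earn in proportion to that consensus, and validators earn more the closer their own scores sit to it. The arithmetic is a simplification for intuition, not Bittensor’s actual weighting scheme.

```python
# Stylized sketch of the consensus/vtrust incentive loop with made-up scores.
import numpy as np

# rows = validators, cols = miners; each entry is a quality score in [0, 1]
scores = np.array([
    [0.9, 0.2, 0.5],
    [0.8, 0.3, 0.6],
    [0.1, 0.9, 0.2],   # an outlier validator who disagrees with the others
])

consensus = np.median(scores, axis=0)                  # per-miner consensus score
miner_emissions = consensus / consensus.sum()          # miners paid by consensus quality

# vtrust: the closer a validator's scores are to consensus, the higher its weight
distance = np.abs(scores - consensus).mean(axis=1)
vtrust = 1.0 - distance / distance.max()
validator_emissions = vtrust / vtrust.sum()

print("miner emission shares:", np.round(miner_emissions, 2))
print("validator vtrust:", np.round(vtrust, 2))
print("validator emission shares:", np.round(validator_emissions, 2))
```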

Subnets and Applications

Interactions on Bittensor mainly involve validators submitting requests to miners and evaluating their outputs. However, as the quality of contributing miners improves and the overall intelligence of the network grows, Bittensor is creating an application layer on top of its existing stack so that developers can build applications that query the Bittensor network.

In October 2023, Bittensor took a significant step toward this goal by introducing subnets through its Revolution upgrade. Subnets are separate networks on Bittensor that incentivize specific behaviors, and the Revolution upgrade opened the network to anyone interested in creating one. Within months of its launch, more than 32 subnets had gone live, including subnets for text prompting, data scraping, image generation, and storage. As subnets mature and become product-ready, subnet creators will also build application integrations, enabling teams to develop applications that query specific subnets. Some applications (chatbots, image generators, Twitter reply bots, prediction markets) already exist, but there are no formal incentives beyond funding from the Bittensor Foundation for validators to accept and forward these queries.

For a clearer explanation, below is an example of how Bittensor might work once applications are integrated into the network.

Subnets earn TAO based on performance evaluated by the root network. The root network, situated above all subnets, essentially acts as a special subnet and is managed by the 64 largest subnet validators by stake. Root network validators rank subnets based on their performance and periodically allocate TAO emissions to subnets. In this way, individual subnets act as miners for the root network.
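A simplified sketch of this allocation step, with made-up numbers: root validators score each subnet, and TAO emissions are split in proportion to those stake-weighted scores. The real mechanism is more involved; this is only meant to illustrate the flow.

```python
# Simplified sketch of root-network emission allocation across subnets.
import numpy as np

stake = np.array([100.0, 60.0, 40.0])        # root validators' stake
subnet_scores = np.array([                    # rows = validators, cols = subnets
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
])

weighted = (stake[:, None] * subnet_scores).sum(axis=0)   # stake-weighted totals
emission_share = weighted / weighted.sum()                # split of TAO emissions
print("subnet emission shares:", np.round(emission_share, 3))
```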

Bittensor’s Vision

Bittensor is still experiencing growing pains as it expands the protocol’s functionality to incentivize intelligence generation across multiple subnets. Miners continue to devise new ways to attack the network for more TAO rewards, such as slightly altering the output of a highly rated inference run by their model and submitting multiple variants. Governance proposals affecting the entire network can only be submitted and implemented by the Triumvirate, composed entirely of Opentensor Foundation stakeholders (though proposals require approval from the Bittensor Senate, composed of Bittensor validators, before implementation). The project’s tokenomics are being revised to strengthen incentives for using TAO across subnets. The project has also quickly gained attention for its unique approach, with the CEO of Hugging Face, one of the most popular AI websites, stating that Bittensor should add its resources to the site.

In a recent article titled “Bittensor Paradigm,” published by the core developers, the team outlined Bittensor’s vision of eventually becoming “agnostic to what is measured.” In theory, this could enable Bittensor to develop subnets that incentivize any type of behavior, all powered by TAO. Considerable practical limitations remain, most notably proving that these networks can scale to handle such a diverse range of processes and that the underlying incentives drive progress that outpaces centralized offerings.

Building a Decentralized Computing Stack for Artificial Intelligence Models

The section above provides an in-depth overview of various types of decentralized artificial intelligence (AI) computing protocols currently under development. In their early stages of development and adoption, they lay the foundation for an ecosystem that could eventually facilitate the creation of “AI building blocks,” similar to the “money Legos” concept in DeFi. The composability of permissionless blockchains allows for the possibility of each protocol to be built upon another, creating a more comprehensive decentralized AI ecosystem. For instance, this is how Akash, Gensyn, and Bittensor might all interact to respond to an inference request.

It is crucial to understand that this is just one example of what could happen in the future, not a representation of the current ecosystem, existing partnerships, or potential outcomes. The limitations of interoperability and other considerations described below significantly restrict the integration possibilities today. Moreover, the fragmentation of liquidity and the need to use multiple tokens could harm the user experience, a point noted by the founders of Akash and Bittensor.

Other Decentralized Products

Beyond computing, several other decentralized infrastructure services have been introduced to support the emerging AI ecosystem within the cryptocurrency space. Listing all of these is beyond the scope of this report, but some interesting and illustrative examples include:

  • Ocean: A decentralized data marketplace where users can create data NFTs representing their data, with access purchased using datatokens. Users can monetize their data and retain greater sovereignty over it, while providing AI teams with access to the data needed for developing and training models.
  • Grass: A decentralized bandwidth marketplace where users can sell their excess bandwidth to AI companies, which utilize it to scrape data from the internet. Built on the Wynd network, this allows individuals to monetize their bandwidth and offers bandwidth buyers a more diverse perspective on what individual users see online (since individual internet access is often customized based on their IP address).
  • Hivemapper: Builds a decentralized map product incorporating information collected from everyday drivers. Hivemapper relies on AI to interpret images collected from users’ dashboard cameras and rewards users with tokens for helping fine-tune the AI models through Reinforcement Learning from Human Feedback (RLHF).

Overall, these examples point to the nearly limitless opportunities for exploring decentralized market models that support AI models or the peripheral infrastructure needed to develop them. Currently, most of these projects are in the proof-of-concept stage and require further research and development to prove they can operate at the scale needed to provide comprehensive AI services.

Outlook

Decentralized computing products are still in the early stages of development. They have only just begun to roll out access to state-of-the-art compute capable of training the most powerful AI models in production. To gain meaningful market share, they need to demonstrate practical advantages over centralized alternatives. Potential triggers for broader adoption include:

  • GPU Supply/Demand. The scarcity of GPUs combined with rapidly increasing computing demand is producing a GPU arms race. OpenAI has at times restricted access to its platform because of GPU constraints. Platforms like Akash and Gensyn can offer cost-competitive alternatives for teams that need high-performance computing. The next 6-12 months present a particularly unique opportunity for decentralized computing providers to attract new users who are pushed toward decentralized products by a lack of access in the broader market. Moreover, with the steadily improving performance of open-source models such as Meta’s Llama 2, users no longer face the same barriers to deploying effective fine-tuned models, making compute the primary bottleneck. The mere existence of these platforms, however, does not guarantee sufficient compute supply or corresponding consumer demand. Procuring high-end GPUs remains challenging, and cost is not always the main driver of demand. These platforms will be challenged to demonstrate the practical benefits of using a decentralized computing option (whether cost, censorship resistance, uptime and resilience, or accessibility) in order to accumulate sticky users. They must act quickly, as investment in and construction of GPU infrastructure is proceeding at an astonishing pace.
  • Regulation. Regulation remains a barrier to the decentralized computing movement. In the short term, the lack of clear regulation means both providers and users face potential risks in using these services. What if providers offer computation, or buyers unknowingly purchase computation, from sanctioned entities? Users may hesitate to use decentralized platforms that lack centralized controls and oversight. Protocols have attempted to mitigate these concerns by incorporating controls into their platforms or adding filters to access only known computing providers (i.e., those who have provided Know Your Customer (KYC) information), but more robust approaches that protect privacy while ensuring compliance are needed. In the short term, we are likely to see KYC- and compliance-focused platforms emerge that restrict access to their protocols to address these concerns. Moreover, discussions around a possible new U.S. regulatory framework (exemplified by the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) highlight the potential for regulatory action to further restrict access to GPUs.
  • Censorship. Regulation is a double-edged sword, and decentralized computing products could benefit from actions limiting AI access. Beyond executive orders, OpenAI founder Sam Altman testified in Congress that regulators should issue licenses for AI development. Discussions on AI regulation are just beginning, but any attempts to restrict access or censor AI capabilities could accelerate the adoption of decentralized platforms that do not face such barriers. The leadership changes (or lack thereof) at OpenAI in November further indicate that entrusting the decision-making power of the most powerful existing AI models to a few is risky. Moreover, all AI models inevitably reflect the biases of those who create them, whether intentional or not. One way to eliminate these biases is to make models as open as possible for fine-tuning and training, ensuring anyone, anywhere, can access a variety of models with different biases.
  • Data Privacy. Decentralized computing could become more attractive than centralized alternatives when integrated with external data and privacy solutions that give users sovereignty over their data. Samsung learned this lesson when it discovered that engineers were using ChatGPT for chip design and leaking sensitive information in the process. Phala Network and iExec claim to offer users SGX secure enclaves to protect user data, and ongoing research into fully homomorphic encryption could further unlock privacy-preserving decentralized computing. As AI becomes further integrated into our lives, users will place growing value on being able to run models on applications with privacy protections. Users also need support for data composability, so they can seamlessly port their data from one model to another.
  • User Experience (UX). User experience remains a significant barrier to broader adoption of all types of crypto applications and infrastructure. This is no different for decentralized computing products, and in some cases it is exacerbated by the need for developers to understand both cryptocurrency and AI. Improvements need to start with the basics, such as onboarding and abstracting away interactions with the blockchain, while delivering the same high-quality output as current market leaders. The difficulty that many operational decentralized computing protocols offering cheaper products have in gaining regular use makes this plain.

Smart contracts and zkML

Smart contracts are the cornerstone of any blockchain ecosystem. They execute automatically under a set of specific conditions, reducing or eliminating the need for trusted third parties and thereby enabling complex decentralized applications such as those in DeFi. However, the functionality of smart contracts is still limited, because they operate on preset parameters that must be manually updated.

Consider, for instance, a smart contract deployed for a lending protocol that specifies when positions should be liquidated based on particular loan-to-value ratios. While useful in static environments, such contracts must be constantly updated to accommodate changing risk tolerances in dynamic conditions, which is difficult for contracts that are not managed through centralized processes. DAOs that rely on decentralized governance, for example, may not be able to react quickly enough to systemic risk.
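As a worked example of the static rule just described, the sketch below flags a lending position for liquidation once its loan-to-value ratio crosses a hard-coded threshold. The threshold and prices are made up for illustration; the point is that the threshold is a preset parameter someone must update by hand.

```python
# Worked example of a static liquidation rule keyed to a loan-to-value ratio.
def loan_to_value(debt_usd: float, collateral_amount: float, collateral_price: float) -> float:
    return debt_usd / (collateral_amount * collateral_price)

LIQUIDATION_LTV = 0.80           # the "preset parameter" baked into the contract

position = {"debt_usd": 1_500.0, "collateral_eth": 1.0}

for eth_price in (2_500.0, 2_000.0, 1_800.0):
    ltv = loan_to_value(position["debt_usd"], position["collateral_eth"], eth_price)
    status = "LIQUIDATE" if ltv >= LIQUIDATION_LTV else "healthy"
    print(f"ETH at ${eth_price:>6.0f}: LTV = {ltv:.2f} -> {status}")
```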

Integrating artificial intelligence (i.e., machine learning models) into smart contracts is a potential way to enhance functionality, security, and efficiency while improving the overall user experience. However, these integrations also introduce additional risks, as it’s impossible to ensure the models underpinning these smart contracts won’t be exploited or fail to interpret long-tail situations (given the scarcity of data inputs, long-tail situations are hard for models to train on).

Zero-Knowledge Machine Learning (zkML)

Machine learning requires significant computation to run complex models, making it impractical to directly run AI models in smart contracts due to high costs. For example, a DeFi protocol offering yield optimization models would find it difficult to run these models on-chain without incurring prohibitive Gas fees. One solution is to increase the computational capabilities of the underlying blockchain. However, this also raises the requirements for the chain’s validators, potentially compromising decentralization. Instead, some projects are exploring the use of zkML to verify outputs in a trustless manner without needing intensive on-chain computation.

A common example illustrating the usefulness of zkML is when users need others to run data through models and verify that their counterparts have indeed run the correct model. Developers using decentralized computing providers to train their models might worry about these providers cutting costs by using cheaper models that produce outputs with nearly imperceptible differences. zkML allows computing providers to run data through their models and then generate proofs that can be verified on-chain, proving that the model outputs for given inputs are correct. In this scenario, the model provider gains the added advantage of being able to offer their model without revealing the underlying weights that produced the outputs.

The opposite is also possible. If users want to run models on their data but do not wish to give model projects access to their data due to privacy concerns (e.g., in medical checks or proprietary business information), they can run the model on their data without sharing the data, and then verify through proofs that they have run the correct model. These possibilities greatly expand the design space for integrating AI and smart contract functionalities by addressing daunting computational constraints.
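The two directions described above can be summarized in pseudocode. The helper names below (prove_inference, verify_on_chain) are hypothetical and do not correspond to any real library’s API; they only show where the private data, the model commitment, and the proof sit in each flow.

```python
# Pseudocode sketch of the two zkML flows; all helpers are hypothetical stand-ins.

def prove_inference(model_commitment: str, private_inputs: list, public_output: float):
    """Return (output, proof); a real proof binds model, inputs, and output."""
    input_commitment = hash(tuple(private_inputs))     # stand-in for a real commitment
    proof = f"zkproof|{model_commitment}|{input_commitment}|{public_output}"
    return public_output, proof

def verify_on_chain(model_commitment: str, public_output: float, proof: str) -> bool:
    """Stand-in for a verifier contract: checks the proof, never sees raw inputs."""
    return proof.startswith("zkproof") and model_commitment in proof

MODEL_COMMITMENT = "sha256:abc123"          # hash of the agreed model weights

# Flow 1: a compute provider proves it ran the agreed model on the user's input.
output, proof = prove_inference(MODEL_COMMITMENT, private_inputs=[1, 2, 3], public_output=0.87)
assert verify_on_chain(MODEL_COMMITMENT, output, proof)

# Flow 2: the same primitive lets a data owner keep inputs private while still
# convincing a counterparty that the correct model produced the output.
```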

Infrastructure and tools

Given the early state of the zkML field, development is primarily focused on building the infrastructure and tools teams need to convert their models and outputs into proofs verifiable on-chain. These products abstract the zero-knowledge aspects as much as possible.

EZKL and Giza are two projects building such tools by providing verifiable proofs of machine learning model execution. Both help teams build machine learning models whose results can be verified on-chain in a trust-minimized way. Both projects use the Open Neural Network Exchange (ONNX) to convert machine learning models written in common frameworks like TensorFlow and PyTorch into a standard format, then output versions of these models that also generate zk proofs when executed. EZKL is open-source and produces zk-SNARKs, while Giza is closed-source and produces zk-STARKs. Both projects are currently compatible only with the EVM.
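The ONNX conversion step mentioned above can be illustrated with PyTorch’s standard exporter. The tiny model and file name below are placeholders; the exported file is the kind of artifact that zkML tooling then takes as input.

```python
# Exporting a small PyTorch model to the ONNX standard format.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
dummy_input = torch.randn(1, 4)            # example input fixing the graph shape

torch.onnx.export(
    model,
    dummy_input,
    "tiny_classifier.onnx",                # standard format consumed downstream
    input_names=["features"],
    output_names=["logits"],
)
print("exported tiny_classifier.onnx")
```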

In recent months, EZKL has made significant progress in advancing its zkML solution, focusing mainly on reducing costs, improving security, and speeding up proof generation. For example, in November 2023, EZKL integrated a new open-source GPU library that reduced aggregation proof times by 35%, and in January EZKL released Lilith, a software solution for integrating high-performance computing clusters and orchestrating concurrent jobs when using EZKL proofs. Giza is distinctive in providing tools for creating verifiable machine learning models while also planning a web3 equivalent of Hugging Face, opening a marketplace for zkML collaboration and model sharing, and eventually integrating decentralized computing products. In January, EZKL published a benchmark comparing the performance of EZKL, Giza, and RiscZero (described below), showing faster proof times and lower memory usage for EZKL.

Modulus Labs is developing a new zero-knowledge (zk) proof technique tailored specifically to AI models. In January 2023, Modulus published a paper titled “The Cost of Intelligence,” which benchmarked existing zk proof systems to identify their capabilities and bottlenecks when applied to AI models and found that current offerings are too expensive and inefficient for large-scale AI applications. Building on that initial research, Modulus launched Remainder in November, a specialized zk prover aimed at reducing the cost and proof time for AI models so that it becomes economically viable for projects to integrate models into their smart contracts at scale. Their work is proprietary, which makes benchmarking against the solutions mentioned above impossible, but it was recently cited in Vitalik’s blog post on cryptography and artificial intelligence.

The development of tools and infrastructure is crucial for the future growth of the zkML space, as it can significantly reduce the friction involved in deploying verifiable off-chain computations and the need for zk teams. Creating secure interfaces for non-crypto-native machine learning practitioners to bring their models on-chain will enable applications to experiment with truly novel use cases. Additionally, these tools address a major barrier to the broader adoption of zkML: the lack of knowledgeable developers interested in working at the intersection of zero-knowledge, machine learning, and cryptography.

Coprocessor

Other solutions in development, referred to as “coprocessors” (including RiscZero, Axiom, and Ritual), serve various roles, including verifying off-chain computations on-chain. Like EZKL, Giza, and Modulus, their goal is to abstract the zk proof generation process entirely, creating zero-knowledge virtual machines capable of executing off-chain programs and generating on-chain verifiable proofs. RiscZero and Axiom cater to simple AI models as more general-purpose coprocessors, while Ritual is built specifically for use with AI models.

Ritual’s first product, Infernet, includes an SDK that allows developers to submit inference requests to the network and receive outputs and (optionally) proofs in return. Infernet nodes process these computations off-chain before returning the outputs. For example, a DAO could establish a process ensuring that all new governance proposals meet certain prerequisites before submission. Each time a new proposal is submitted, the governance contract triggers an inference request through Infernet, invoking an AI model trained specifically for DAO governance. The model reviews the proposal to ensure all required criteria are met and returns an output and a proof used to approve or reject the submission.
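A mock of that governance flow is sketched below. The function names and the screening criteria are hypothetical stand-ins, not the Infernet SDK’s actual interface; they only show how an off-chain verdict plus proof could gate a proposal submission.

```python
# Mock of a proposal-screening flow: an off-chain inference returns a verdict
# and proof that decide whether the proposal is accepted for a vote.
def request_inference(model_id: str, payload: dict) -> tuple[bool, str]:
    """Stand-in for an off-chain inference job; returns (verdict, proof)."""
    meets_standards = len(payload["description"]) >= 50 and payload["budget"] <= 100_000
    return meets_standards, "proof-of-inference-xyz"

def submit_proposal(proposal: dict) -> str:
    verdict, proof = request_inference("dao-governance-screener-v1", proposal)
    if not verdict:
        return f"rejected (proof {proof}): proposal fails screening criteria"
    return f"accepted (proof {proof}): proposal queued for vote"

print(submit_proposal({"description": "Fund a 6-month audit of the treasury module with milestones.",
                       "budget": 40_000}))
print(submit_proposal({"description": "gib money", "budget": 500_000}))
```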

Over the next year, the Ritual team plans to introduce more features, forming an infrastructure layer known as the Ritual superchain. Many of the projects discussed could be integrated as service providers into Ritual. The Ritual team has already integrated with EZKL for proof generation and may soon add features from other leading providers. Infernet nodes on Ritual could also utilize Akash or io.net GPUs and query models trained on the Bittensor subnet. Their ultimate goal is to become the preferred provider of open AI infrastructure, offering services for machine learning and other AI-related tasks for any network and any workload.

Applications

zkML is aiding in reconciling the dichotomy between blockchain, which is inherently resource-constrained, and artificial intelligence, which demands significant computational and data resources. As a founder of Giza puts it, “the use cases are incredibly rich… It’s a bit like asking what the use cases for smart contracts were in the early days of Ethereum… What we’re doing is just expanding the use cases for smart contracts.” However, as noted, current development is predominantly happening at the tool and infrastructure level. Applications are still in the exploratory phase, with teams facing the challenge of proving that the value generated by implementing models with zkML outweighs its complexity and cost.

Current applications include:

  • Decentralized Finance (DeFi). zkML enhances the capabilities of smart contracts, expanding the design space for DeFi. DeFi protocols offer a wealth of verifiable and immutable data that machine learning models can use for generating yield or trading strategies, risk analysis, user experience, and more. For instance, Giza has collaborated with Yearn Finance to build a proof-of-concept automated risk assessment engine for Yearn’s new v3 vaults. Modulus Labs is working with Lyra Finance to integrate machine learning into its AMM, with Ion Protocol to implement models for analyzing validator risk, and with Upshot to validate its AI-powered NFT pricing feeds. Protocols like NOYA (using EZKL) and Mozaic provide access to proprietary off-chain models, enabling users to access automated liquidity mining while validating data inputs and proofs on-chain. Spectral Finance is developing an on-chain credit scoring engine to predict the likelihood that Compound or Aave borrowers default on their loans. Thanks to zkML, these so-called “De-AI-Fi” products are likely to become increasingly popular in the coming years.
  • Gaming. Games have long been considered ripe for disruption and enhancement through public blockchains, and zkML makes on-chain artificial intelligence gaming possible. Modulus Labs has built proofs-of-concept for simple on-chain games. “Leela vs the World” is a game-theory-based chess game in which users compete against an AI chess model, with zkML verifying that every move Leela makes is produced by the model the game claims to be running. Similarly, teams have used the EZKL framework to build simple singing competitions and on-chain tic-tac-toe. Cartridge is using Giza to enable teams to deploy fully on-chain games, recently highlighting a simple AI driving game in which users compete to create better models for a car trying to avoid obstacles. While simple, these proofs-of-concept point toward future implementations capable of more complex on-chain verification, such as sophisticated NPC actors that can interact with in-game economies, as seen in “AI Arena,” a Super Smash Bros.-style game in which players train fighters and then deploy them as AI models to battle.
  • Identity, Provenance, and Privacy. Cryptocurrency is already used to verify authenticity and combat the growing volume of AI-generated and AI-manipulated content and deepfakes, and zkML can advance these efforts. Worldcoin is an identity verification solution that requires users to scan their irises to generate a unique ID. In the future, biometric IDs could be self-custodied on personal devices using encryption and verified with models that run locally. Users could then provide biometric proofs without revealing their identity, defending against Sybil attacks while preserving privacy. The same approach applies to other inferences that require privacy, such as using models to analyze medical data and images for disease detection, verifying personhood and developing matching algorithms for dating applications, or enabling insurance and lending institutions to verify financial information.

Outlook

zkML remains experimental, with most projects focusing on building infrastructure primitives and proofs of concept. Current challenges include computational costs, memory limitations, model complexity, limited tools and infrastructure, and developer talent. In short, there’s considerable work to be done before zkML can be implemented on the scale required by consumer products.

However, as the field matures and these limitations are addressed, zkML will become a key component of integrating artificial intelligence with cryptography. Essentially, zkML promises to bring any scale of off-chain computation on-chain, while maintaining the same or similar security assurances as running on-chain. Yet, before this vision is realized, early adopters of the technology will continue to have to balance zkML’s privacy and security against the efficiency of alternatives.

Artificial Intelligence Agents

One of the most exciting integrations of artificial intelligence and cryptocurrency is the ongoing experiment with artificial intelligence agents. Agents are autonomous robots capable of receiving, interpreting, and executing tasks using AI models. This could range from having a personal assistant available at all times, fine-tuned to your preferences, to hiring a financial agent to manage and adjust your investment portfolio based on your risk preferences.

Given that cryptocurrency offers a permissionless and trustless payment infrastructure, agents and cryptocurrency can be well integrated. Once trained, agents will have a wallet, enabling them to conduct transactions on their own using smart contracts. For example, today’s agents can scrape information on the internet and then trade on prediction markets based on models.
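A hedged sketch of that loop follows: an agent gathers information, forms a model-based probability estimate, and places a bet from its own wallet. Every function and class here (fetch_headlines, estimate_probability, Wallet) is a hypothetical placeholder rather than a real SDK, and the decision rule is arbitrary.

```python
# Illustrative agent loop: gather data, estimate a probability, transact.
class Wallet:
    def __init__(self, balance: float):
        self.balance = balance

    def place_bet(self, market: str, outcome: str, stake: float) -> str:
        assert stake <= self.balance, "insufficient funds"
        self.balance -= stake
        return f"tx: staked {stake} on '{outcome}' in market '{market}'"

def fetch_headlines(topic: str) -> list[str]:
    # Stand-in for web scraping / data gathering.
    return [f"{topic}: analysts split on outcome", f"{topic}: new data released"]

def estimate_probability(headlines: list[str]) -> float:
    # Stand-in for an AI model's probability estimate.
    return 0.62

wallet = Wallet(balance=100.0)
headlines = fetch_headlines("example prediction market topic")
p = estimate_probability(headlines)
if p > 0.55:                                   # the agent's own decision rule
    print(wallet.place_bet("example-market", "YES", stake=25.0))
print("remaining balance:", wallet.balance)
```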

Agent Providers

Morpheus is one of the newest open-source agent projects, launching in 2024 on Ethereum and Arbitrum. Its white paper was published anonymously in September 2023 and provided the foundation for a community to form and build around it, including prominent figures such as Erik Voorhees. The white paper describes a downloadable Smart Agent Protocol: an open-source LLM that can run locally, is managed by the user’s wallet, and can interact with smart contracts. It uses a smart contract ranking to help agents determine which contracts are safe to interact with, based on criteria such as the number of transactions processed.

The white paper also provides a framework for building out the Morpheus network, including the incentive structures and infrastructure required to run the Smart Agent Protocol. This includes incentives for contributors to build front-ends for interacting with agents, APIs for developers to build plug-in agents that can interact with one another, and cloud solutions giving users access to the computation and storage needed to run agents on edge devices. The project held its initial funding event in early February, with the full protocol expected to launch in the second quarter of 2024.

Decentralized Autonomous Infrastructure Network (DAIN) is a new agent infrastructure protocol building an agent-to-agent economy on Solana. DAIN’s goal is to enable agents from different enterprises to interact seamlessly through a common API, greatly expanding the design space for AI agents, with a focus on agents that can interact with both web2 and web3 products. In January, DAIN announced its first partnership, with Asset Shield, allowing users to add “agent signers” to their multisigs that can interpret transactions and approve or reject them based on user-defined rules.

Fetch.AI is one of the earliest deployed AI agent protocols and has developed an ecosystem for building, deploying, and using agents on-chain with its FET token and Fetch.AI wallet. The protocol offers a comprehensive set of tools and applications for working with agents, including in-wallet functionality for interacting with and ordering agents.

Autonolas, founded by former members of the Fetch team, is an open marketplace for creating and using decentralized AI agents. Autonolas also provides a set of tools for developers to build AI agents that are hosted off-chain and can plug into multiple blockchains, including Polygon, Ethereum, Gnosis Chain, and Solana. They currently have several active agent proof-of-concept products, including for prediction markets and DAO governance.

SingularityNet is building a decentralized marketplace for AI agents, where specialized AI agents can be deployed and hired by other people or agents to perform complex tasks. Other companies, such as AlteredStateMachine, are building integrations between AI agents and NFTs. Users mint NFTs with random attributes that give them strengths and weaknesses for different tasks. These agents can then be trained to enhance certain attributes for use in gaming, DeFi, or as virtual assistants, and traded with other users.

Overall, these projects envision a future ecosystem of agents capable of working together not only to perform tasks but also to help build artificial general intelligence. Truly sophisticated agents will be able to complete any user task autonomously. For example, instead of having to ensure an agent has already integrated with an external API (such as a travel booking site) before using it, a fully autonomous agent will be able to figure out how to hire another agent to integrate that API and then execute the task. From the user’s perspective, there will be no need to check whether an agent can complete a task, because the agent can determine that on its own.

Bitcoin and AI Agents

In July 2023, Lightning Labs launched a proof-of-concept implementation for using agents on the Lightning Network, dubbed the LangChain Bitcoin suite. This product is particularly intriguing because it aims to tackle a problem that is becoming increasingly severe in the Web 2 world: the gated and costly API keys of web applications.

The suite addresses this by providing developers with tools that enable agents to buy, sell, and hold Bitcoin, as well as query API keys and send micropayments. On traditional payment rails, micropayments are prohibitively expensive because of fees, whereas on the Lightning Network agents can send an unlimited number of micropayments daily at minimal cost. When paired with the L402 payment-metered API framework, companies can adjust the access cost of their APIs as usage rises and falls, rather than setting a single, costly standard.
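The pay-per-call pattern this enables can be sketched schematically as follows. All of the functions below are hypothetical stand-ins (not the actual L402 or Lightning APIs): the agent receives a payment challenge, settles a tiny invoice, and retries the request with proof of payment.

```python
# Schematic pay-per-call flow: challenge, micropayment, authenticated retry.
def call_api(endpoint: str, auth: str | None = None) -> dict:
    if auth is None:
        # The server answers with a 402-style challenge: pay this invoice first.
        return {"status": 402, "invoice": "lnbc10n1...", "challenge": "macaroon-abc"}
    return {"status": 200, "data": {"price_feed": 42.7}}

def pay_invoice(invoice: str) -> str:
    # Stand-in for a Lightning payment; returns a preimage as proof of payment.
    return "preimage-123"

def agent_fetch(endpoint: str) -> dict:
    first = call_api(endpoint)
    if first["status"] == 402:
        preimage = pay_invoice(first["invoice"])        # micropayment per call
        return call_api(endpoint, auth=f'{first["challenge"]}:{preimage}')
    return first

print(agent_fetch("https://example-data-vendor.test/quotes"))
```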

In a future where on-chain activity is predominantly driven by agents interacting with other agents, mechanisms will be needed to ensure agents can transact with one another without prohibitive costs. This early example demonstrates the potential of using agents on permissionless and economically efficient payment rails, opening possibilities for new markets and new economic interactions.

Outlook

The field of agents is still in its infancy. Projects have only just begun to launch functional agents capable of handling simple tasks—access typically limited to experienced developers and users. However, over time, one of the most significant impacts of artificial intelligence agents on cryptocurrency will be the improvement of user experience across all verticals. Transactions will start to shift from click-based to text-based, enabling users to interact with on-chain agents via conversational interfaces. Teams like Dawn Wallet have already launched chatbot wallets, allowing users to interact on-chain.

Moreover, it remains unclear how agents will operate in Web 2, where financial rails rely on regulated banking institutions that cannot operate 24/7 or facilitate seamless cross-border transactions. As Lyn Alden has highlighted, the absence of chargebacks and the ability to handle microtransactions make cryptocurrency rails particularly attractive for agents compared to credit cards. However, if agents become a more common medium for transactions, existing payment providers and applications are likely to adapt quickly, implementing the infrastructure needed to operate on existing financial rails and thereby diminishing some of the benefits of using cryptocurrency.

Currently, agents may be limited to deterministic cryptocurrency transactions, where a given input guarantees a given output. Both the models that would let agents figure out how to perform complex tasks and the tooling that expands the range of tasks they can complete require further development. For crypto agents to become useful beyond novel on-chain use cases, broader integration and acceptance of cryptocurrency as a form of payment, along with regulatory clarity, will be needed. As these pieces fall into place, however, agents are poised to become among the largest consumers of decentralized computing and zkML solutions, receiving and resolving any task autonomously and non-deterministically.

Conclusion

AI introduces the same innovations to cryptocurrency that we’ve seen in web2, enhancing everything from infrastructure development to user experience and accessibility. However, projects are still in the early stages of development, and the near-term integration of cryptocurrency and AI will primarily be driven by off-chain integrations.

Products like Copilot are set to “increase developer efficiency by 10x,” and Layer 1 and DeFi applications have already launched AI-assisted development platforms in collaboration with major companies like Microsoft. Companies such as Cub3.ai and Test Machine are developing AI integrations for smart contract auditing and real-time threat monitoring to enhance on-chain security. LLM chatbots are being trained with on-chain data, protocol documentation, and applications to provide users with enhanced accessibility and user experience.

The challenge for more advanced integrations that truly leverage the underlying technology of cryptocurrencies remains to prove that implementing AI solutions on-chain is technically and economically feasible. The development of decentralized computing, zkML, and AI agents points to promising verticals that lay the groundwork for a deeply interconnected future of cryptocurrency and AI.

Disclaimer:

  1. This article is reprinted from techflow; all copyrights belong to the original author [Lucas Tcheyan]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.


Terms

Artificial Intelligence is the use of computation and machines to imitate the reasoning and problem-solving capabilities of humans.

Neural Networks are a method of training artificial intelligence models. They run inputs through discrete algorithmic layers, refining them until the desired output is produced. Neural networks consist of equations with weights that can be adjusted to change the output, and they may require extensive data and computation to train before they produce accurate outputs. This is one of the most common ways to develop AI models (ChatGPT, for example, relies on a Transformer-based neural network).

Training is the process of developing neural networks and other AI models. It requires a significant amount of data to teach models to correctly interpret inputs and produce accurate outputs. During training, the model’s weights are continuously adjusted until a satisfactory output is produced. Training can be very costly: ChatGPT, for example, was trained using tens of thousands of GPUs. Teams with fewer resources often rely on specialized computing providers such as Amazon Web Services, Azure, and Google Cloud.

Inference is the actual use of an AI model to obtain an output or result (for example, using ChatGPT to create an outline for a paper on the intersection of cryptocurrency and AI). Inference is used both throughout the training process and in the final product. Even after training is complete, running inference carries ongoing computational costs, although its computational intensity is lower than that of training.
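To make the neural network, training, and inference terms above concrete, below is a minimal sketch in PyTorch. The toy model, data, and hyperparameters are purely illustrative; production systems like ChatGPT involve billions of parameters and run on thousands of GPUs.

```python
# Minimal sketch: a tiny neural network, a training loop, and inference.
import torch
import torch.nn as nn

# A small feed-forward neural network: layers of weighted equations.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Training: repeatedly adjust the weights until outputs match the targets.
inputs = torch.randn(256, 4)                 # stand-in training data
targets = inputs.sum(dim=1, keepdim=True)    # stand-in labels
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                          # compute weight adjustments
    optimizer.step()                         # apply them

# Inference: use the trained model to produce an output for a new input.
model.eval()
with torch.no_grad():
    print(model(torch.randn(1, 4)))
```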

Zero-Knowledge Proofs (ZKP) allow for the verification of statements without revealing underlying information. This is useful in cryptocurrency for two main reasons: 1) Privacy and 2) Scaling. For privacy, it enables users to transact without revealing sensitive information (e.g., how much ETH is in a wallet). For scaling, it allows off-chain computations to be proved on-chain more quickly than re-executing the computations. This enables blockchains and applications to run computations cheaply off-chain and then verify them on-chain. For more information on zero-knowledge and its role in Ethereum Virtual Machines, see Christine Kim’s report on zkEVMs: The Future of Ethereum Scalability.

AI/Cryptocurrency Market Map

Projects integrating artificial intelligence and cryptocurrency are still building the underlying infrastructure needed to support large-scale on-chain AI interactions.

Decentralized computing markets are emerging to provide the vast physical hardware, primarily Graphics Processing Units (GPUs), needed to train and run inference on AI models. These two-sided markets connect those renting out computing with those seeking to rent it, facilitating the transfer of value and the verification of computations. Within decentralized computing, several subcategories offering additional functionality are emerging. In addition to these two-sided marketplaces, this report reviews machine learning training providers that offer verifiable training and fine-tuning of outputs, as well as projects dedicated to linking computation and model generation to enable AI, often referred to as intelligent incentive networks.

zkML is an emerging focus area for projects aiming to provide verifiable model outputs on-chain in an economically viable and timely manner. These projects mainly enable applications to handle heavy computational requests off-chain, then post verifiable outputs on-chain, proving that the off-chain workload was complete and accurate. zkML is currently both expensive and time-consuming but is increasingly being used as a solution. This is evident in the growing number of integrations between zkML providers and DeFi/gaming applications wishing to leverage AI models.

The ample supply of computation and the ability to verify on-chain computation open the door for on-chain AI agents. Agents are trained models capable of executing requests on behalf of users. Agents offer the opportunity to significantly enhance the on-chain experience, allowing users to execute complex transactions simply by conversing with a chatbot. However, as of now, agent projects are still focused on developing the infrastructure and tools for easy and rapid deployment.

Decentralized Computing

Overview

Artificial intelligence requires extensive computation to train models and run inference. Over the past decade, as models have become increasingly complex, the demand for computation has grown exponentially. OpenAI, for instance, observed that the compute used to train the largest AI models doubled roughly every two years until 2012, after which it doubled approximately every three and a half months. This has led to a surge in demand for GPUs, with some cryptocurrency miners even repurposing their GPUs to provide cloud computing services. As competition for computational access intensifies and costs rise, some projects are leveraging cryptographic technology to offer decentralized computing solutions. They provide on-demand computing at competitive prices, enabling teams to train and run models affordably, though in some cases with trade-offs in performance and security.

The demand for state-of-the-art GPUs (such as those produced by Nvidia) is significant. In September, Tether acquired a stake in German bitcoin miner Northern Data, reportedly spending $420 million to purchase 10,000 H100 GPUs (among the most advanced GPUs for AI training). The wait time to acquire top-tier hardware can be at least six months, and often longer. Worse, companies are often required to sign long-term contracts to secure computational capacity they might not even use. This can lead to situations where computational resources exist but are not available on the market. Decentralized computing systems help address these market inefficiencies by creating a secondary market where compute owners can sublease their excess capacity at a moment’s notice, unlocking new supply.

Beyond competitive pricing and accessibility, a key value proposition of decentralized computing is censorship resistance. Cutting-edge AI development is increasingly dominated by large tech companies with unmatched access to computation and data. The first key theme highlighted in the 2023 AI Index Annual Report was that industry has increasingly surpassed academia in developing AI models, concentrating control in the hands of a few tech leaders. This raises concerns about their ability to wield outsized influence in setting the norms and values that underpin AI models, especially as these same companies push for regulations that would limit AI development they cannot control.

Verticals in Decentralized Computing

Several models of decentralized computing have emerged in recent years, each with its own focus and trade-offs.

Generalized computing

Broadly speaking, projects such as Akash, io.net, iExec, and Cudos are applications of decentralized computing that offer, in addition to data and general-purpose computational solutions, access to specialized computation for AI training and inference. Akash stands out as the only fully open-source “super cloud” platform, utilizing the Cosmos SDK for its proof-of-stake network. AKT, Akash’s native token, serves as a payment method, secures the network, and incentivizes participation. Launched in 2020, Akash’s mainnet initially focused on a permissionless cloud computing marketplace featuring storage and CPU leasing services. In June 2023, Akash introduced a GPU-centric testnet, followed by a GPU mainnet launch in September, enabling GPU rentals for AI training and inference.

The Akash ecosystem comprises two primary participants: tenants, who seek computing resources, and providers, the compute suppliers. A reverse auction process facilitates the matching of tenants and providers, where tenants post their compute requirements, including preferred server locations or hardware types and their budget. Providers then bid, with the lowest bidder awarded the task. Validators maintain network integrity, with a current cap at 100 validators, planned to increase over time. Participation as a validator is open to those who stake more AKT than the least-staked current validator. AKT holders can delegate their tokens to validators, with transaction fees and block rewards distributed in AKT. Moreover, for each lease, the Akash network earns a “take rate,” decided by the community, distributed to AKT holders.
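As a rough illustration of the reverse auction described above, the sketch below matches a tenant’s order against provider bids. This is a simplified model for intuition only; Akash’s actual implementation handles orders, bids, and leases through on-chain Cosmos SDK modules, and the field names here are invented.

```python
# Simplified sketch of reverse-auction matching: a tenant posts requirements
# and a budget, providers bid, and the lowest qualifying bid wins the lease.
from dataclasses import dataclass

@dataclass
class Order:             # a tenant's compute request (hypothetical fields)
    gpu_model: str
    max_price: float     # tenant's budget per unit of time

@dataclass
class Bid:               # a provider's offer
    provider: str
    gpu_model: str
    price: float

def match(order: Order, bids: list[Bid]) -> Bid | None:
    qualifying = [b for b in bids
                  if b.gpu_model == order.gpu_model and b.price <= order.max_price]
    # The lowest bidder is awarded the lease.
    return min(qualifying, key=lambda b: b.price, default=None)

order = Order(gpu_model="A100", max_price=1.50)
bids = [Bid("prov-a", "A100", 1.40), Bid("prov-b", "A100", 1.10), Bid("prov-c", "H100", 0.90)]
print(match(order, bids))   # Bid(provider='prov-b', gpu_model='A100', price=1.1)
```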

Secondary market

The secondary market for decentralized computing aims to address inefficiencies in the existing computational market, where supply constraints lead to companies hoarding resources beyond their needs and long-term contracts with cloud providers further limit supply. Decentralized computing platforms unlock new supply, enabling anyone with computational needs to become a provider.

Whether the surge in demand for GPUs for AI training translates to sustained network use on Akash remains to be seen. Historically, Akash has offered CPU-based market services at a 70-80% discount compared to centralized alternatives, yet this pricing strategy has not significantly driven adoption. Network activity, measured by active leases, has plateaued, with an average of 33% compute, 16% memory, and 13% storage utilization by the second quarter of 2023, impressive for on-chain adoption but indicative of supply still outstripping demand. Half a year since the GPU network launch, it’s too early for a definitive assessment of long-term adoption, though early signs show a 44% average GPU utilization, driven mainly by demand for high-quality GPUs like the A100, over 90% of which have been rented out.

Akash’s daily expenditures have nearly doubled since the introduction of GPUs, attributed partly to increased usage of other services, especially CPUs, but mainly due to new GPU usage.

Pricing is competitive with, or in some cases more expensive than, centralized counterparts like Lambda Cloud and Vast.ai. High demand for top-end GPUs, such as the H100 and A100, means most owners of such equipment are not interested in listing them on a competitively priced market.

Despite this initial traction, adoption barriers remain. Decentralized computing networks must take further steps to generate demand and supply, and teams are exploring how best to attract new users. For instance, in early 2024, Akash passed Proposal 240, increasing AKT emissions for GPU providers to incentivize more supply, especially of high-end GPUs. Teams are also working on proof-of-concept models to demonstrate their networks’ live capabilities to potential users. Akash is training its own foundation model and has launched chatbot and image generation products that run on Akash GPUs. Similarly, io.net has developed a stable diffusion model and is rolling out new network features to better demonstrate its performance and scale.

Decentralized Machine Learning Training

In addition to general-purpose computing platforms that can meet the demands of artificial intelligence, a set of specialized AI GPU providers focused on machine learning model training is also emerging. For instance, Gensyn is “coordinating power and hardware to build collective intelligence,” with the philosophy that “if someone wants to train something and someone is willing to train it, then this training should be allowed to happen.”

This protocol involves four main participants: submitters, solvers, validators, and whistleblowers. Submitters submit tasks with training requests to the network. These tasks include the training objectives, the models to be trained, and the training data. As part of the submission process, submitters need to prepay the estimated computational cost required by solvers.

After submission, tasks are assigned to solvers, who actually perform the model training. Solvers then submit completed tasks to validators, who are responsible for checking the training to ensure it was done correctly. Whistleblowers are tasked with ensuring validators act honestly. To motivate whistleblowers to participate in the network, Gensyn plans to periodically submit deliberately incorrect proofs and reward the whistleblowers who catch them.

Besides providing computation for AI-related workloads, a key value proposition of Gensyn is its verification system, which is still under development. Verification is necessary to ensure that the external computation performed by GPU providers is executed correctly (i.e., that users’ models are trained the way they want). Gensyn addresses this issue with a unique approach, utilizing novel verification methods it calls “probabilistic proof-of-learning, a graph-based pinpoint protocol, and Truebit-style incentive games.” This is an optimistic model in which validators can confirm that solvers ran the training correctly without fully rerunning it themselves, a process that would be costly and inefficient.
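While Gensyn’s actual verification scheme is more sophisticated, the optimistic intuition can be sketched as follows: the solver commits to hashes of intermediate training checkpoints, and a validator re-executes only a randomly chosen step rather than the whole job, escalating to a dispute game if the recomputed checkpoint disagrees with the commitment. The checkpointing and hashing below are purely illustrative and are not Gensyn’s protocol.

```python
# Illustrative sketch of optimistic verification: the validator re-runs one
# randomly chosen training step instead of the entire job.
import hashlib, random

def digest(state: bytes) -> str:
    return hashlib.sha256(state).hexdigest()

def solver_train(steps: int) -> list[str]:
    """Solver 'trains' and commits to a hash of the model state at each checkpoint."""
    state = b"init"
    commitments = [digest(state)]
    for i in range(steps):
        state = state + f"|step{i}".encode()   # stand-in for a real weight update
        commitments.append(digest(state))
    return commitments

def validator_spot_check(commitments: list[str]) -> bool:
    """Re-execute one randomly chosen step and compare against the committed hash."""
    i = random.randrange(len(commitments) - 1)
    # In a real protocol the validator would request the checkpointed state from
    # the solver; here we reconstruct it deterministically for the sketch.
    state = b"init" + b"".join(f"|step{k}".encode() for k in range(i))
    recomputed = digest(state + f"|step{i}".encode())
    return recomputed == commitments[i + 1]

commitments = solver_train(steps=100)
print("spot check passed:", validator_spot_check(commitments))
```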

In addition to its innovative verification method, Gensyn also claims to be cost-effective compared to centralized alternatives and cryptocurrency competitors - offering ML training prices up to 80% cheaper than AWS, while outperforming similar projects like Truebit in tests.

Whether these initial results can be replicated on a large scale in decentralized networks remains to be seen. Gensyn hopes to utilize the surplus computational capacity of providers such as small data centers, retail users, and eventually small mobile devices like smartphones. However, as the Gensyn team themselves admit, relying on heterogeneous computing providers introduces some new challenges.

For centralized providers like Google Cloud and CoreWeave, computation is expensive while communication between that compute (bandwidth and latency) is cheap; these systems are designed to move data between hardware as quickly as possible. Gensyn inverts this framework: it lowers computation costs by allowing anyone in the world to offer GPUs, but raises communication costs, since the network must now coordinate compute jobs across distant, heterogeneous hardware. Gensyn has not yet launched, but it serves as a proof of concept for what decentralized machine learning training protocols could achieve.

Decentralized General Intelligence

Decentralized computing platforms also open up design space for new methods of creating artificial intelligence. Bittensor is a decentralized computing protocol built on Substrate that attempts to answer the question, “How do we turn artificial intelligence into a collaborative endeavor?” Bittensor aims to decentralize and commoditize AI generation. Launched in 2021, it hopes to harness the power of collaborative machine learning models to continuously iterate and produce better AI.

Bittensor draws inspiration from Bitcoin: its native currency, TAO, has a supply cap of 21 million and a halving cycle of roughly four years (the first halving is set for 2025). But rather than using Proof of Work, in which miners race to find a valid nonce to earn block rewards, Bittensor relies on “Proof of Intelligence,” requiring miners to run AI models that generate outputs in response to inference requests.
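A Bitcoin-style emission schedule with a fixed cap and four-year halvings can be approximated with simple arithmetic, as in the sketch below. This assumes a constant emission rate within each epoch and ignores Bittensor’s actual block timing and recycling mechanics.

```python
# Back-of-the-envelope sketch of a 21M-cap, four-year-halving emission schedule.
TOTAL_SUPPLY = 21_000_000

def emitted_after_epochs(epochs: int) -> float:
    """Total emitted after `epochs` four-year halving periods."""
    emitted, per_epoch = 0.0, TOTAL_SUPPLY / 2   # half the cap in the first epoch
    for _ in range(epochs):
        emitted += per_epoch
        per_epoch /= 2
    return emitted

for e in range(1, 5):
    print(f"after {4 * e} years: ~{emitted_after_epochs(e):,.0f} TAO emitted")
# after 4 years: ~10,500,000; after 8 years: ~15,750,000; and so on.
```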

Incentivizing Intelligence

Initially, Bittensor relied on a Mixture of Experts (MoE) model to generate outputs. When an inference request is submitted, an MoE model does not rely on one generalized model but forwards the request to the most accurate model for the given input type. Imagine building a house, where you hire different experts for different parts of the construction process (architects, engineers, painters, construction workers, and so on). MoE applies this idea to machine learning, attempting to leverage the outputs of different models depending on the input. As Bittensor’s founder Ala Shaabana explained, it is like “talking to a room full of smart people to get the best answer, rather than talking to one person.” Due to challenges with routing requests correctly, synchronizing messages to the right model, and designing incentives, this approach has been shelved until the project matures further.

In the Bittensor network, there are two main participants: validators and miners. Validators send inference requests to miners, review their outputs, and rank them based on the quality of their responses. To ensure their rankings are reliable, validators are given a “vtrust” score based on how consistent their rankings are with other validators. The higher a validator’s vtrust score, the more TAO emissions they receive. This is to incentivize validators to reach a consensus on model rankings over time, as the more validators agree on the rankings, the higher their individual vtrust scores.

Miners, also known as servers, are network participants who run the actual machine learning models. They compete to provide the most accurate outputs for validators’ queries, and the more accurate the outputs, the more TAO emissions they earn. Miners are free to generate these outputs as they wish. For instance, in the future, it’s entirely possible for Bittensor miners to have previously trained models on Gensyn and use them to earn TAO emissions.

Today, most interactions occur directly between validators and miners. Validators submit inputs to miners and request outputs (i.e., model inference). Once validators have queried the miners on the network and received their responses, they rank the miners and submit their rankings to the network.

The interaction between validators (who rely on proof of stake) and miners (who rely on Proof of Intelligence, a form of proof of work) is known as Yuma consensus. It incentivizes miners to produce the best outputs to earn TAO emissions, and it incentivizes validators to rank miner outputs accurately to earn higher vtrust scores and increase their own TAO rewards, together forming the network’s consensus mechanism.
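A heavily simplified sketch of this ranking-and-agreement dynamic is shown below: each validator scores miners, a consensus view is formed (here by simple averaging), and a validator’s vtrust-style score is higher the closer its rankings sit to that consensus. The real Yuma consensus is stake-weighted and includes clipping and other mechanics omitted here.

```python
# Simplified sketch of consensus ranking and vtrust-style agreement scoring.
import numpy as np

# Rows = validators, columns = miners; entries = quality scores for miner outputs.
scores = np.array([
    [0.9, 0.2, 0.5],   # validator 0
    [0.8, 0.3, 0.6],   # validator 1
    [0.1, 0.9, 0.2],   # validator 2 (out of line with the others)
])

consensus = scores.mean(axis=0)                 # consensus view of each miner
miner_emissions = consensus / consensus.sum()   # miners paid by consensus quality

# vtrust: closer agreement with consensus -> higher trust -> more emissions.
distance = np.abs(scores - consensus).mean(axis=1)
vtrust = 1 - distance / distance.max()
validator_emissions = vtrust / vtrust.sum()

print("miner emission shares:    ", np.round(miner_emissions, 3))
print("validator emission shares:", np.round(validator_emissions, 3))
```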

Subnets and Applications

Interactions on Bittensor mainly involve validators submitting requests to miners and evaluating their outputs. However, as the quality of contributing miners improves and the overall intelligence of the network grows, Bittensor is creating an application layer on top of its existing stack so that developers can build applications that query the Bittensor network.

In October 2023, Bittensor introduced subnets through its Revolution upgrade, taking a significant step toward this goal. Subnets are separate networks on Bittensor that incentivize specific behaviors. Revolution opened the network to anyone interested in creating a subnet. Within months of its launch, more than 32 subnets had launched, including subnets for text prompting, data scraping, image generation, and storage. As subnets mature and become product-ready, subnet creators will also build application integrations, enabling teams to develop applications that query specific subnets. Some applications, such as chatbots, image generators, Twitter reply bots, and prediction markets, already exist, but beyond grants from the Opentensor Foundation there are no formal incentives for validators to accept and forward these queries.

For a clearer explanation, below is an example of how Bittensor might work once applications are integrated into the network.

Subnets earn TAO based on performance evaluated by the root network. The root network, situated above all subnets, essentially acts as a special subnet and is managed by the 64 largest subnet validators by stake. Root network validators rank subnets based on their performance and periodically allocate TAO emissions to subnets. In this way, individual subnets act as miners for the root network.

Bittensor’s Vision

Bittensor is still experiencing growing pains as it expands the protocol’s functionality to incentivize intelligence generation across multiple subnets. Miners are constantly devising new ways to attack the network for more TAO rewards, for example by slightly altering the output of highly rated inferences run by their models and submitting multiple variants. Governance proposals affecting the entire network can only be submitted and implemented by the Triumvirate, composed entirely of stakeholders from the Opentensor Foundation (notably, proposals require approval from the Bittensor Senate, composed of Bittensor validators, before implementation). The project’s tokenomics are being revised to strengthen incentives for cross-subnet use of TAO. The project has also quickly gained attention for its unique approach, with the CEO of HuggingFace, one of the most popular AI websites, stating that Bittensor should add its resources to the site.

In a recent article titled “Bittensor Paradigm” published by the core developers, the team outlined Bittensor’s vision of eventually becoming “agnostic to what is measured.” Theoretically, this could enable Bittensor to develop subnets that incentivize any type of behavior supported by TAO. There are still considerable practical limitations—most notably, proving that these networks can scale to handle such a diverse range of processes and that potential incentives drive progress beyond centralized products.

Building a Decentralized Computing Stack for Artificial Intelligence Models

The section above provides an in-depth overview of various types of decentralized artificial intelligence (AI) computing protocols currently under development. In their early stages of development and adoption, they lay the foundation for an ecosystem that could eventually facilitate the creation of “AI building blocks,” similar to the “money Legos” concept in DeFi. The composability of permissionless blockchains allows for the possibility of each protocol to be built upon another, creating a more comprehensive decentralized AI ecosystem. For instance, this is how Akash, Gensyn, and Bittensor might all interact to respond to inference requests.

It is crucial to understand that this is just one example of what could happen in the future, not a representation of the current ecosystem, existing partnerships, or potential outcomes. The limitations of interoperability and other considerations described below significantly restrict the integration possibilities today. Moreover, the fragmentation of liquidity and the need to use multiple tokens could harm the user experience, a point noted by the founders of Akash and Bittensor.

Other Decentralized Products

Beyond computing, several other decentralized infrastructure services have been introduced to support the emerging AI ecosystem within the cryptocurrency space. Listing all of these is beyond the scope of this report, but some interesting and illustrative examples include:

  • Ocean: A decentralized data marketplace where users can create data NFTs representing their data, which others can purchase using datatokens. Users can monetize their data and retain greater sovereignty over it, while providing AI teams with access to the data needed to develop and train models.
  • Grass: A decentralized bandwidth marketplace where users can sell their excess bandwidth to AI companies, which utilize it to scrape data from the internet. Built on the Wynd network, this allows individuals to monetize their bandwidth and offers bandwidth buyers a more diverse perspective on what individual users see online (since individual internet access is often customized based on their IP address).
  • HiveMapper: Builds a decentralized mapping product incorporating information collected from everyday drivers. HiveMapper relies on AI to interpret images collected from users’ dashboard cameras and rewards users with tokens for helping to fine-tune its AI models through Reinforcement Learning from Human Feedback (RLHF).

Overall, these examples point to the nearly limitless opportunities for exploring decentralized market models that support AI models or the peripheral infrastructure needed to develop them. Currently, most of these projects are in the proof-of-concept stage and require further research and development to prove they can operate at the scale needed to provide comprehensive AI services.

Outlook

Decentralized computing products are still in the early stages of development. They are only just beginning to offer access to state-of-the-art computing hardware capable of training the most powerful AI models in production. To gain meaningful market share, they need to demonstrate real advantages over centralized alternatives. Potential triggers for wider adoption include:

  • GPU Supply/Demand. The scarcity of GPUs combined with rapidly increasing computing demands is leading to a GPU arms race. Due to GPU limitations, OpenAI has at times restricted access to its platform. Platforms like Akash and Gensyn can offer cost-competitive alternatives for teams requiring high-performance computing. The next 6-12 months present a particularly unique opportunity for decentralized computing providers to attract new users who are forced to consider decentralized products due to a lack of broader market access. Furthermore, with the increasingly improved performance of open-source models such as Meta’s LLaMA2, users no longer face the same barriers when deploying effective fine-tuned models, making computing resources the primary bottleneck. However, the mere existence of platforms does not guarantee sufficient computing supply and corresponding consumer demand. Procuring high-end GPUs remains challenging, and cost is not always the main motivator for demand. These platforms will face challenges in demonstrating the actual benefits of using decentralized computing options (whether due to cost, censorship resistance, uptime and resilience, or accessibility) to accumulate sticky users. They must act quickly as GPU infrastructure investments and constructions are proceeding at an astonishing pace.
  • Regulation. Regulation remains a barrier to the decentralized computing movement. In the short term, the lack of clear regulation means both providers and users face potential risks in using these services: what if providers offer computation to, or buyers unknowingly purchase computation from, sanctioned entities? Users might hesitate to use decentralized platforms that lack centralized control and oversight. Protocols have attempted to alleviate these concerns by incorporating controls into their platforms or adding filters that restrict access to known computing providers (i.e., those who have provided Know Your Customer (KYC) information), but more robust approaches that protect privacy while ensuring compliance are needed. In the short term, we are likely to see the emergence of KYC and compliance platforms that restrict access to their protocols to address these issues. Moreover, discussions around a possible new U.S. regulatory framework (exemplified by the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) highlight the potential for regulatory actions to further restrict access to GPUs.
  • Censorship. Regulation is a double-edged sword, and decentralized computing products could benefit from actions limiting AI access. Beyond executive orders, OpenAI founder Sam Altman testified in Congress that regulators should issue licenses for AI development. Discussions on AI regulation are just beginning, but any attempts to restrict access or censor AI capabilities could accelerate the adoption of decentralized platforms that do not face such barriers. The leadership changes (or lack thereof) at OpenAI in November further indicate that entrusting the decision-making power of the most powerful existing AI models to a few is risky. Moreover, all AI models inevitably reflect the biases of those who create them, whether intentional or not. One way to eliminate these biases is to make models as open as possible for fine-tuning and training, ensuring anyone, anywhere, can access a variety of models with different biases.
  • Data Privacy. Decentralized computing could be more attractive than centralized alternatives when integrated with external data and privacy solutions that offer users data sovereignty. Samsung learned this the hard way when it discovered that engineers had been pasting sensitive chip-design information into ChatGPT. Phala Network and iExec claim to provide users with SGX secure enclaves to protect user data, and ongoing research into fully homomorphic encryption could further unlock privacy-preserving decentralized computing. As AI becomes further integrated into our lives, users will increasingly value the ability to run models on applications with privacy protections. Users also need data composability services so they can seamlessly port data from one model to another.
  • User Experience (UX). User experience remains a significant barrier to broader adoption of all types of crypto applications and infrastructure. This is no different for decentralized computing products, and in some cases it is exacerbated by the need for developers to understand both cryptocurrency and AI. Improvements need to start with the basics, such as onboarding and abstracting away interactions with the blockchain, while delivering the same high-quality output as current market leaders. This is evident in the difficulty that many functioning decentralized computing protocols, despite offering cheaper products, have in attracting regular use.

Smart contracts and zkML

Smart contracts are the cornerstone of any blockchain ecosystem. They automatically execute under a set of specific conditions, reducing or eliminating the need for trusted third parties, thus enabling the creation of complex decentralized applications, such as those in DeFi. However, the functionality of smart contracts is still limited because they operate based on preset parameters that must be updated.

Consider, for instance, a smart contract deployed for a lending protocol that specifies when positions should be liquidated based on particular loan-to-value ratios. While useful in static environments, such contracts need constant updating to adapt to changing risk tolerances in dynamic conditions, which presents challenges for contracts that are not managed through centralized processes. DAOs relying on decentralized governance processes, for example, may not be able to react quickly enough to systemic risks.
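The contrast can be made concrete with a small sketch: a static liquidation rule uses a hard-coded loan-to-value threshold, while a model-assisted rule could, in principle, adjust that threshold to market conditions. The threshold values and the inputs to the hypothetical risk adjustment below are invented for illustration; a real integration would also need to verify the model’s output before a contract acted on it.

```python
# Sketch: static liquidation rule vs. a (hypothetical) model-adjusted rule.

STATIC_LTV_LIMIT = 0.80   # hard-coded parameter; changing it requires governance

def should_liquidate_static(debt: float, collateral_value: float) -> bool:
    return debt / collateral_value > STATIC_LTV_LIMIT

def should_liquidate_model(debt: float, collateral_value: float,
                           volatility: float, liquidity: float) -> bool:
    # Hypothetical model: tighten the threshold when markets are volatile or illiquid.
    risk_adjustment = 0.10 * volatility + 0.05 * (1 - liquidity)
    dynamic_limit = STATIC_LTV_LIMIT - risk_adjustment
    return debt / collateral_value > dynamic_limit

print(should_liquidate_static(78, 100))                                 # False
print(should_liquidate_model(78, 100, volatility=0.9, liquidity=0.3))   # True
```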

Integrating artificial intelligence (i.e., machine learning models) into smart contracts is a potential way to enhance functionality, security, and efficiency while improving the overall user experience. However, these integrations also introduce additional risks, as it’s impossible to ensure the models underpinning these smart contracts won’t be exploited or fail to interpret long-tail situations (given the scarcity of data inputs, long-tail situations are hard for models to train on).

Zero-Knowledge Machine Learning (zkML)

Machine learning requires significant computation to run complex models, making it impractical to directly run AI models in smart contracts due to high costs. For example, a DeFi protocol offering yield optimization models would find it difficult to run these models on-chain without incurring prohibitive Gas fees. One solution is to increase the computational capabilities of the underlying blockchain. However, this also raises the requirements for the chain’s validators, potentially compromising decentralization. Instead, some projects are exploring the use of zkML to verify outputs in a trustless manner without needing intensive on-chain computation.

A common example illustrating the usefulness of zkML is when users need others to run data through models and verify that their counterparts have indeed run the correct model. Developers using decentralized computing providers to train their models might worry about these providers cutting costs by using cheaper models that produce outputs with nearly imperceptible differences. zkML allows computing providers to run data through their models and then generate proofs that can be verified on-chain, proving that the model outputs for given inputs are correct. In this scenario, the model provider gains the added advantage of being able to offer their model without revealing the underlying weights that produced the outputs.
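Conceptually, that flow looks like the sketch below: the provider publishes a commitment to its model weights, runs inference off-chain, and returns a proof that the committed model produced the output, which the verifier then checks. The prove and verify functions here are placeholders for a real zkML backend (such as the tooling from EZKL or Giza discussed later) and provide no cryptographic guarantees on their own.

```python
# Conceptual sketch of the zkML flow; `prove`/`verify` stand in for a real
# proving system and do NOT provide zero-knowledge guarantees by themselves.
import hashlib, json

def commit(weights: dict) -> str:
    """Public commitment to the model weights (the weights themselves stay private)."""
    return hashlib.sha256(json.dumps(weights, sort_keys=True).encode()).hexdigest()

def run_model(weights: dict, x: float) -> float:
    return weights["w"] * x + weights["b"]          # toy 'model'

def prove(weights: dict, x: float, y: float) -> dict:
    """Placeholder: a real prover would output a zk-SNARK/STARK here."""
    return {"model_commitment": commit(weights), "input": x, "output": y}

def verify(proof: dict, expected_commitment: str, x: float, y: float) -> bool:
    """Placeholder for on-chain verification of the proof."""
    return (proof["model_commitment"] == expected_commitment
            and proof["input"] == x and proof["output"] == y)

weights = {"w": 2.0, "b": 1.0}              # private to the compute provider
commitment = commit(weights)                # published for verifiers
y = run_model(weights, 3.0)                 # off-chain inference
proof = prove(weights, 3.0, y)
print(verify(proof, commitment, 3.0, y))    # True
```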

The opposite is also possible. If users want to run models on their data but do not wish to give model projects access to their data due to privacy concerns (e.g., in medical checks or proprietary business information), they can run the model on their data without sharing the data, and then verify through proofs that they have run the correct model. These possibilities greatly expand the design space for integrating AI and smart contract functionalities by addressing daunting computational constraints.

Infrastructure and tools

Given the early state of the zkML field, development is primarily focused on building the infrastructure and tools teams need to convert their models and outputs into proofs verifiable on-chain. These products abstract the zero-knowledge aspects as much as possible.

EZKL and Giza are two projects building such tools by providing verifiable proofs of machine learning model execution. Both help teams build machine learning models whose results can be verified on-chain in a trustworthy way. Both projects use the Open Neural Network Exchange (ONNX) format to convert machine learning models written in common frameworks like TensorFlow and PyTorch into a standard format, then output versions of those models that also generate zero-knowledge proofs when executed. EZKL is open-source and produces zk-SNARKs, while Giza is closed-source and produces zk-STARKs. Both projects are currently only compatible with the EVM.
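The ONNX step in that pipeline is standard PyTorch tooling; the subsequent proof step is shown only as a placeholder, since the exact EZKL and Giza interfaces are not reproduced here.

```python
# Export a PyTorch model to ONNX, the intermediate format both EZKL and Giza
# consume. The downstream proof step is sketched as a placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 1))
model.eval()
example_input = torch.randn(1, 8)

# Standard ONNX export.
torch.onnx.export(model, example_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Placeholder for the zkML step: a real workflow would hand model.onnx to a
# prover toolchain (e.g., EZKL's tooling) to set up a circuit, generate keys,
# and produce an on-chain-verifiable proof for a given input.
def generate_proof(onnx_path: str, input_tensor: torch.Tensor) -> bytes:
    raise NotImplementedError("hand off to an actual zkML prover here")
```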

In recent months, EZKL has made significant progress in enhancing its zkML solution, focusing mainly on reducing costs, improving security, and speeding up proof generation. For example, in November 2023 EZKL integrated a new open-source GPU library that reduced aggregate proof time by 35%, and in January it released Lilith, a software solution for integrating high-performance computing clusters and orchestrating concurrent jobs when using EZKL proofs. Giza’s distinctiveness lies in providing tools for creating verifiable machine learning models while also planning to implement a web3 equivalent of Hugging Face, opening a user marketplace for zkML collaboration and model sharing, and eventually integrating decentralized computing products. In January, EZKL published a benchmark comparing the performance of EZKL, Giza, and RiscZero (described below), showing faster proof times and lower memory usage for EZKL.

Modulus Labs is developing a new zero-knowledge proof technique tailored specifically for AI models. In January 2023, Modulus published a paper titled “The Cost of Intelligence,” which benchmarked existing zk proof systems to identify their capabilities and bottlenecks when applied to AI models, concluding that current products are too expensive and inefficient for large-scale AI applications. Building on that initial research, Modulus launched Remainder in November, a specialized zk prover aimed at reducing the cost and proof time for AI models, making it economically viable for projects to integrate models into smart contracts at scale. Their work is proprietary, so it cannot be benchmarked against the solutions mentioned above, but it was recently cited in Vitalik’s blog post on cryptography and artificial intelligence.

The development of tools and infrastructure is crucial for the future growth of the zkML space because it significantly reduces the friction for teams that need to deploy verifiable off-chain computation but lack specialized zk expertise. Creating secure interfaces that let non-crypto-native machine learning practitioners bring their models on-chain will enable applications to experiment with truly novel use cases. These tools also address a major barrier to broader zkML adoption: the scarcity of knowledgeable developers interested in working at the intersection of zero-knowledge, machine learning, and cryptography.

Coprocessor

Other solutions in development, referred to as “coprocessors” (including RiscZero, Axiom, and Ritual), serve various roles, including verifying off-chain computations on-chain. Like EZKL, Giza, and Modulus, their goal is to abstract the zk proof generation process entirely, creating zero-knowledge virtual machines capable of executing off-chain programs and generating on-chain verifiable proofs. RiscZero and Axiom cater to simple AI models as more general-purpose coprocessors, while Ritual is built specifically for use with AI models.

Ritual’s first product, Infernet, includes an Infernet SDK that allows developers to submit inference requests to the network and receive outputs and optional proofs in return. Infernet nodes process these computations off-chain before returning the outputs. For example, a DAO could establish a process ensuring all new governance proposals meet certain prerequisites before submission. Each time a new proposal is submitted, the governance contract triggers an inference request through Infernet, invoking an AI model trained specifically for that DAO’s governance. The model reviews the proposal to ensure all necessary standards are met and returns an output and proof that are used to approve or reject the submission.
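The described request/response loop can be sketched as follows. Every interface in this snippet is hypothetical and simplified for illustration; Infernet’s actual SDK and contract hooks differ.

```python
# Illustrative sketch of the off-chain inference callback pattern described
# above. All interfaces here are hypothetical stand-ins.

def governance_model(proposal_text: str) -> dict:
    # Stand-in for an AI model trained on the DAO's submission standards.
    return {"meets_standards": len(proposal_text) > 20 and "budget" in proposal_text}

def generate_proof(proposal_text: str, verdict: dict) -> bytes:
    # Placeholder for an optional zkML proof of the model execution.
    return b"proof"

def verify_proof(proof: bytes) -> bool:
    # Placeholder for on-chain proof verification.
    return proof == b"proof"

def handle_new_proposal(proposal_text: str) -> str:
    """End-to-end flow: contract triggers inference, node responds, contract acts."""
    # Off-chain, Infernet-style node work:
    verdict = governance_model(proposal_text)
    proof = generate_proof(proposal_text, verdict)
    # Back on-chain: the governance contract checks the proof and the verdict.
    if verify_proof(proof) and verdict["meets_standards"]:
        return "proposal accepted for a vote"
    return "proposal rejected: does not meet submission standards"

print(handle_new_proposal("Increase the grants budget by 5% next quarter"))
print(handle_new_proposal("gm"))
```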

Over the next year, the Ritual team plans to introduce more features, forming an infrastructure layer known as the Ritual superchain. Many of the projects discussed could be integrated as service providers into Ritual. The Ritual team has already integrated with EZKL for proof generation and may soon add features from other leading providers. Infernet nodes on Ritual could also utilize Akash or io.net GPUs and query models trained on the Bittensor subnet. Their ultimate goal is to become the preferred provider of open AI infrastructure, offering services for machine learning and other AI-related tasks for any network and any workload.

Applications

zkML is aiding in reconciling the dichotomy between blockchain, which is inherently resource-constrained, and artificial intelligence, which demands significant computational and data resources. As a founder of Giza puts it, “the use cases are incredibly rich… It’s a bit like asking what the use cases for smart contracts were in the early days of Ethereum… What we’re doing is just expanding the use cases for smart contracts.” However, as noted, current development is predominantly happening at the tool and infrastructure level. Applications are still in the exploratory phase, with teams facing the challenge of proving that the value generated by implementing models with zkML outweighs its complexity and cost.

Current applications include:

  • Decentralized Finance (DeFi). zkML enhances the capabilities of smart contracts, expanding the design space for DeFi. DeFi protocols offer a wealth of verifiable and immutable data that machine learning models can use for generating yield or trading strategies, risk analysis, user experience, and more. For instance, Giza has collaborated with Yearn Finance to build a proof-of-concept automated risk assessment engine for Yearn’s new v3 vaults. Modulus Labs is working with Lyra Finance to integrate machine learning into its AMM, with Ion Protocol to implement models for analyzing validator risk, and with Upshot to validate its AI-powered NFT price feeds. Protocols like NOYA (using EZKL) and Mozaic provide access to proprietary off-chain models, enabling users to access automated liquidity mining while validating data inputs and proofs on-chain. Spectral Finance is developing an on-chain credit scoring engine to predict the likelihood of Compound or Aave borrowers defaulting on loans. Thanks to zkML, these so-called “De-Ai-Fi” products are likely to become increasingly popular in the coming years.
  • Gaming. Games have long been seen as candidates for disruption and enhancement through public blockchains, and zkML enables on-chain artificial intelligence in gaming. Modulus Labs has built proofs of concept for simple on-chain games. “Leela vs the World” is a game-theoretic chess match in which users play against an AI chess model, with zkML verifying that every move Leela makes comes from the model the game claims to be running. Similarly, teams have used the EZKL framework to build simple singing competitions and on-chain tic-tac-toe. Cartridge is using Giza to enable teams to deploy fully on-chain games, recently highlighting a simple AI driving game in which users compete to create better models for cars trying to avoid obstacles. While simple, these proofs of concept point to future implementations capable of more complex on-chain verification, such as sophisticated NPC actors that interact with in-game economies, as seen in “AI Arena,” a Super Smash Bros.-style game in which players train fighters and then deploy them as AI models to battle.
  • Identity, Provenance, and Privacy. Cryptocurrencies have been used to verify authenticity and combat the growing problem of AI-generated and manipulated content and deepfakes, and zkML can advance these efforts. Worldcoin is an identity verification solution that requires users to scan their irises to generate a unique ID. In the future, biometric IDs could be self-hosted on personal devices using encryption and verified with models that run locally. Users could then provide biometric proof without revealing their identities, defending against Sybil attacks while preserving privacy. The same applies to other inferences that require privacy, such as using models to analyze medical data and images for disease detection, verifying personhood and powering matching algorithms in dating applications, or enabling insurance and lending institutions to verify financial information.

Outlook

zkML remains experimental, with most projects focusing on building infrastructure primitives and proofs of concept. Current challenges include computational costs, memory limitations, model complexity, limited tools and infrastructure, and developer talent. In short, there’s considerable work to be done before zkML can be implemented on the scale required by consumer products.

However, as the field matures and these limitations are addressed, zkML will become a key component of integrating artificial intelligence with cryptography. Essentially, zkML promises to bring any scale of off-chain computation on-chain, while maintaining the same or similar security assurances as running on-chain. Yet, before this vision is realized, early adopters of the technology will continue to have to balance zkML’s privacy and security against the efficiency of alternatives.

Artificial Intelligence Agents

One of the most exciting integrations of artificial intelligence and cryptocurrency is the ongoing experimentation with AI agents. Agents are autonomous bots capable of receiving, interpreting, and executing tasks using AI models. This could range from having a personal assistant that is always available and fine-tuned to your preferences, to hiring a financial agent that manages and adjusts your portfolio according to your risk preferences.

Because cryptocurrency offers permissionless and trustless payment infrastructure, agents and cryptocurrency fit well together. Once trained, agents can be given a wallet, enabling them to transact on their own using smart contracts. For example, today’s agents can scrape information from the internet and then trade on prediction markets based on a model’s output.
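Below is a sketch of what “an agent with a wallet” might look like in practice. The wallet, model signal, and swap interfaces are all hypothetical stand-ins; a real agent would sign transactions with an actual key and submit them to a DEX contract through a node.

```python
# Hypothetical sketch: an agent that holds a wallet and executes a simple,
# deterministic on-chain action when its model's signal crosses a threshold.
from dataclasses import dataclass, field

@dataclass
class Wallet:                       # stand-in for a real key pair / signer
    address: str
    balances: dict = field(default_factory=lambda: {"USDC": 1_000.0, "ETH": 0.0})

def model_signal(market_data: dict) -> float:
    # Stand-in for a trained model's output, e.g. confidence in an outcome.
    return market_data["momentum"] * 0.7 + market_data["sentiment"] * 0.3

def swap(wallet: Wallet, sell: str, buy: str, amount: float, price: float) -> None:
    # Stand-in for calling a DEX smart contract; deterministic given its inputs.
    wallet.balances[sell] -= amount
    wallet.balances[buy] = wallet.balances.get(buy, 0.0) + amount / price

agent_wallet = Wallet(address="0xAGENT")
signal = model_signal({"momentum": 0.8, "sentiment": 0.6})
if signal > 0.7:                    # policy threshold set by the user, not the agent
    swap(agent_wallet, sell="USDC", buy="ETH", amount=100.0, price=2_500.0)
print(agent_wallet.balances)
```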

Agent Providers

Morpheus is one of the latest open-source agent projects, launching in 2024 on Ethereum and Arbitrum. Its white paper was published anonymously in September 2023, providing a foundation for a community to form and build around it, including prominent figures such as Erik Voorhees. The white paper describes a downloadable smart agent protocol: an open-source LLM that can run locally, is managed by the user’s wallet, and can interact with smart contracts. It uses smart contract rankings to help agents determine which contracts are safe to interact with, based on criteria such as the number of transactions processed.

The white paper also provides a framework for building the Morpheus network, including the incentive structures and infrastructure required to run the smart agent protocol. This includes incentives for contributors to build front-ends for interacting with agents, APIs for developers to build plug-in agents that can interact with one another, and cloud solutions giving users access to the computation and storage needed to run agents on edge devices. The project’s initial funding phase launched in early February, with the full protocol expected to launch in the second quarter of 2024.

Decentralized Autonomous Infrastructure Network (DAIN) is a new agent infrastructure protocol building an agent-to-agent economy on Solana. DAIN’s goal is to enable agents from different enterprises to seamlessly interact with each other through a common API, significantly opening up the design space for AI agents, focusing on agents that can interact with both web2 and web3 products. In January, DAIN announced its first partnership with Asset Shield, allowing users to add “agent signers” to their multisigs, capable of interpreting transactions and approving/rejecting based on user-set rules.

Fetch.AI is one of the earliest AI agent protocols to deploy and has developed an ecosystem for building, deploying, and using agents on-chain with its FET token and Fetch.AI wallet. The protocol offers a comprehensive set of tools and applications for working with agents, including in-wallet functionality for interacting with and ordering agents.

Autonolas, founded by former members of the Fetch team, is an open marketplace for creating and using decentralized AI agents. Autonolas also provides a set of tools for developers to build off-chain hosted AI agents that can plug into multiple blockchains, including Polygon, Ethereum, Gnosis Chain, and Solana. It currently has several live agent proofs of concept, including for prediction markets and DAO governance.

SingularityNet is building a decentralized marketplace for AI agents, where specialized agents can be deployed and hired by other people or agents to perform complex tasks. Other companies, like AlteredStateMachine, are building integrations of AI agents with NFTs. Users mint NFTs with random attributes that give them advantages and disadvantages on different tasks. These agents can then be trained to enhance certain attributes for use in gaming, DeFi, or as virtual assistants, and traded with other users.

Overall, these projects envision a future ecosystem of agents capable of working together not only to perform tasks but also to help build artificial general intelligence. Truly sophisticated agents will have the capability to complete any user task autonomously. For example, a fully autonomous agent could figure out how to hire another agent to integrate an API and then execute a task, rather than having to confirm in advance that the agent already integrates with external APIs (such as travel booking websites). From the user’s perspective, there would be no need to check whether an agent can complete a task, since the agent can figure that out on its own.

Bitcoin and AI Agents

In July 2023, Lightning Labs launched a proof-of-concept implementation for using agents on the Lightning Network, dubbed the LangChain Bitcoin Suite. This product is particularly intriguing because it aims to tackle a problem that is becoming increasingly severe in the Web 2 world: the gated and costly API keys of web applications.

The suite addresses this issue by providing developers with tools that enable agents to buy, sell, and hold Bitcoin, as well as to query API keys and send micropayments. On traditional payment rails, micropayments are prohibitively expensive because of fees, whereas on the Lightning Network agents can send an unlimited number of micropayments daily at minimal cost. When paired with the L402 payment-metered API framework, this lets companies adjust access costs to their APIs as usage rises and falls, rather than setting a single, costly standard.
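The request-pay-retry loop behind that kind of metered access can be sketched as below, in the spirit of HTTP 402 (“Payment Required”). The invoice format, pricing schedule, and payment handling are simplified stand-ins rather than the actual L402 specification.

```python
# Simplified sketch of pay-per-call API metering. Pricing and invoices are
# illustrative only; real L402 flows use Lightning invoices and macaroons.
PRICE_SCHEDULE = [(1_000, 10), (10_000, 25), (float("inf"), 50)]  # total calls -> sats/call

def price_per_call(total_calls: int) -> int:
    """Usage-based pricing: cost per call rises with aggregate demand."""
    for limit, price in PRICE_SCHEDULE:
        if total_calls < limit:
            return price
    return PRICE_SCHEDULE[-1][1]

class MeteredAPI:
    def __init__(self) -> None:
        self.total_calls = 0

    def request(self, query: str, payment_sats: int | None = None):
        cost = price_per_call(self.total_calls)
        if payment_sats is None or payment_sats < cost:
            return 402, {"invoice_sats": cost}        # "Payment Required"
        self.total_calls += 1
        return 200, {"result": f"answer to {query!r}"}

def agent_call(api: MeteredAPI, query: str):
    status, body = api.request(query)                 # first attempt, unpaid
    if status == 402:
        paid = body["invoice_sats"]                   # agent pays the micro-invoice
        status, body = api.request(query, payment_sats=paid)
    return body

api = MeteredAPI()
print(agent_call(api, "current BTC fee estimate"))
```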

If on-chain activity in the future is predominantly driven by agents interacting with other agents, mechanisms will be needed to ensure agents can interact with each other without prohibitive costs. This early example demonstrates the potential of using agents on permissionless and economically efficient payment rails, opening up possibilities for new markets and economic interactions.

Outlook

The field of agents is still in its infancy. Projects have only just begun to launch functional agents capable of handling simple tasks—access typically limited to experienced developers and users. However, over time, one of the most significant impacts of artificial intelligence agents on cryptocurrency will be the improvement of user experience across all verticals. Transactions will start to shift from click-based to text-based, enabling users to interact with on-chain agents via conversational interfaces. Teams like Dawn Wallet have already launched chatbot wallets, allowing users to interact on-chain.

Moreover, it remains unclear how agents will operate in Web 2, since financial rails there rely on regulated banking institutions that cannot operate 24/7 or facilitate seamless cross-border transactions. As Lyn Alden has highlighted, the lack of chargebacks and the ability to handle microtransactions make cryptocurrency rails especially attractive for agents compared to credit cards. However, if agents become a more common medium for transactions, existing payment providers and applications are likely to adapt quickly, implementing the infrastructure needed to operate on existing financial rails and diminishing some of the benefits of using cryptocurrency.

Currently, agents may be limited to deterministic cryptocurrency transactions, where a given input guarantees a given output. Both the models that would let agents figure out how to perform complex tasks and the tooling that expands the range of tasks they can complete require further development. For crypto agents to become useful beyond novel on-chain use cases, broader integration and acceptance of cryptocurrency as a form of payment, along with regulatory clarity, are needed. As these pieces fall into place, however, agents are poised to become some of the largest consumers of decentralized computing and zkML solutions, autonomously receiving and resolving any task in a non-deterministic manner.

Conclusion

AI introduces the same innovations to cryptocurrency that we’ve seen in web2, enhancing everything from infrastructure development to user experience and accessibility. However, projects are still in the early stages of development, and the near-term integration of cryptocurrency and AI will primarily be driven by off-chain integrations.

Products like Copilot claim to “increase developer efficiency by 10x,” and Layer 1s and DeFi applications have already launched AI-assisted development platforms in collaboration with major companies like Microsoft. Companies such as Cube3.AI and TestMachine are developing AI integrations for smart contract auditing and real-time threat monitoring to enhance on-chain security. And LLM chatbots are being trained on on-chain data, protocol documentation, and applications to give users improved accessibility and a better user experience.

The challenge for more advanced integrations that truly leverage the underlying technology of cryptocurrencies remains to prove that implementing AI solutions on-chain is technically and economically feasible. The development of decentralized computing, zkML, and AI agents points to promising verticals that lay the groundwork for a deeply interconnected future of cryptocurrency and AI.

Disclaimer:

  1. This article is reprinted from TechFlow. All copyrights belong to the original author [Lucas Tcheyan]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.