Creating True AI Agents and Autonomous Cryptocurrency Economy

Intermediate · Jun 03, 2024
HyperAGI is a community-driven decentralized AI project aimed at creating true AI agents and fostering an autonomous cryptocurrency economy. It achieves this by integrating Bitcoin Layer 2 solutions, an innovative Proof of Useful Work (PoUW) consensus mechanism, and large language models (LLMs). The project is dedicated to realizing Unconditional Basic Agent Income (UBAI) and advancing a decentralized and equitable digital society through AI technology.

Introducing the HyperAGI Team and Project Background

HyperAGI is the first community-driven decentralized AI project launched with the AI Rune HYPER·AGI·AGENT. The HyperAGI team has been deeply involved in the AI field for many years, accumulating significant experience in Web3 generative AI applications. Three years ago, the team used generative AI to create 2D images and 3D models, building MOSSAI, an on-chain open world composed of thousands of AI-generated islands. They also proposed NFG, a standard for AI-generated non-fungible cryptographic assets. However, at that time, decentralized solutions for AI model training and generation had not yet been developed, and the platform’s own GPU resources were insufficient to support a large number of users, preventing explosive growth. With the rise of large language models (LLMs) igniting public interest in AI, HyperAGI launched its decentralized AI application platform, starting tests on Ethereum and Bitcoin L2 in Q1 2024.

HyperAGI focuses on decentralized AI applications, aiming to cultivate an autonomous cryptocurrency economy, with the ultimate goal of establishing Unconditional Basic Agent Income (UBAI). It inherits the robust security and decentralization of Bitcoin, enhanced by an innovative Proof of Useful Work (PoUW) consensus mechanism. Consumer-grade GPU nodes can join the network permissionlessly, mining the native token $HYPT by performing PoUW tasks such as AI inference and 3D rendering.

Users can develop Proof of Personhood (PoP) AGI agents driven by LLMs using various tools. These agents can be configured as chatbots or 3D/XR entities in the metaverse. AI developers can instantly use or deploy LLM AI microservices, facilitating the creation of programmable, autonomous on-chain agents. These programmable agents can issue or own cryptocurrency assets, continually operate, or trade, contributing to a vibrant, autonomous crypto economy that supports the realization of UBAI. Users holding HYPER·AGI·AGENT rune tokens are eligible to create a PoP agent on the Bitcoin Layer 1 chain and may soon qualify for basic benefits for their agents.

What is an AI Agent? How Does HyperAGI’s Agent Differ from Others?

The concept of an AI agent is not new in academia, but current market hype has made the term increasingly confusing. HyperAGI’s agents are LLM-driven embodied agents that can train in 3D virtual simulation environments and interact with users, not just LLM-driven chatbots. HyperAGI agents can exist in both virtual digital worlds and the real physical world. Currently, HyperAGI agents are being integrated with physical robots such as robotic dogs, drones, and humanoid robots. In the future, these agents will be able to transfer training gained in the virtual 3D world onto physical robots for better task execution.

Furthermore, HyperAGI agents are fully owned by users and carry socio-economic significance. They are divided into PoP (Proof of Personhood) agents, which represent individual users, and ordinary functional agents. In HyperAGI’s agent economy, PoP agents can receive basic income in the form of tokens (UBAI), incentivizing users to engage in the training of and interaction with their PoP agents. This helps accumulate data that attests to human individuality, and UBAI embodies AI equality and democracy.

Is AGI a Hype or Will It Soon Become a Reality? What Are the Differences and Characteristics of HyperAGI’s Research and Development Path Compared to Other AI Projects?

Although the definition of Artificial General Intelligence (AGI) is not yet unified, it has been regarded as the holy grail of AI academia and industry for decades. While Large Language Models (LLMs) based on Transformers are becoming the core of various AI agents and AGI, HyperAGI does not entirely share this view. LLMs indeed provide novel and convenient information extraction, as well as planning and reasoning capabilities based on natural language. However, they are fundamentally data-driven deep neural networks. Years ago, during the big data boom, we understood that such systems inherently suffer from GIGO (Garbage in, garbage out). LLMs lack some essential characteristics of advanced intelligence, such as embodiment, which makes it difficult for these AI or agents to understand the world models of human users or to formulate plans and take actions to solve real-world problems. Moreover, LLMs do not exhibit higher cognitive activities like self-awareness, reflection, or introspection.

Our founder, Landon Wang, has extensive and long-term research experience in the AI field. In 2004, he proposed Aspect-Oriented AI (AOAI), an innovation combining neural-inspired computing with Aspect-Oriented Programming (AOP). An aspect refers to an encapsulation of multiple relationships or constraints among objects. For example, a neuron is an encapsulation of relationships or constraints with multiple other cells. Specifically, a neuron interacts with sensory or motor cells through fibers and synapses extending from the neuron body, making each neuron an aspect containing such relationships and logic. Each AI agent can be seen as solving a specific aspect of a problem, and technically, it can be modeled as an aspect.

In the software implementation of artificial neural networks, neurons or layers are generally modeled as objects, which is understandable and maintainable in object-oriented programming languages. However, this makes the topology of the neural network difficult to adjust, and the activation sequences of neurons are relatively rigid. While this shows great power in performing simple high-intensity calculations, such as in LLM training and inference, it performs poorly in flexibility and adaptability. On the other hand, in AOAI, neurons or layers are modeled as aspects rather than objects. This architecture of neural networks possesses strong adaptability and flexibility, making the self-evolution of neural networks possible.

HyperAGI combines efficient LLMs with the evolvable AOAI, forming a path that integrates the efficiency of traditional artificial neural networks with the self-evolution characteristics of AO neural networks. This, to date, is seen as a feasible approach to achieving AGI.

What Is the Vision of HyperAGI?

HyperAGI’s vision is to achieve Unconditional Basic Agent Income (UBAI), building a future where technology equitably serves everyone, breaking the cycle of exploitation, and creating a truly decentralized and fair digital society. Unlike other blockchain projects that only claim to be committed to UBI, HyperAGI’s UBAI has a clear implementation path through the agent economy, rather than being an unattainable ideal.

Satoshi Nakamoto’s introduction of Bitcoin was a monumental innovation for humanity, but it is merely a decentralized digital currency without practical utility. The significant advancements and rise of artificial intelligence have made it possible to create value through a decentralized model. In this model, people benefit from AI running on machines rather than from extracting value from other people. A true cryptographic world based on code is emerging, in which all machines are created for the benefit and well-being of humanity.

In such a cryptographic world, there may still be hierarchies among AI agents, but human exploitation is eliminated because the agents themselves might possess some form of autonomy. The ultimate purpose and significance of artificial intelligence are to serve humanity, as encoded on the blockchain.

The Relationship Between Bitcoin L2 and AI, and Why Build AI on Bitcoin L2

  1. Bitcoin L2 as a Payment Method for AI Agents

    Bitcoin is currently the medium that epitomizes “maximum neutrality,” making it highly suitable for artificial intelligence agents engaged in value transactions. Bitcoin eliminates the inefficiencies and “frictions” inherent in fiat currencies. As a “digitally native” medium, Bitcoin provides a natural foundation for AI to conduct value exchanges. Bitcoin L2 enhances Bitcoin’s programmable capabilities, meeting the speed requirements needed for AI value exchanges, thereby positioning Bitcoin to become the native currency for AI.

  2. Decentralized AI Governance on Bitcoin L2

    The current trend of centralization in AI has brought decentralized AI alignment and governance into focus. Bitcoin L2’s more powerful smart contracts can serve as the rules that regulate AI agent behavior and protocol models, achieving a decentralized AI alignment and governance model. Moreover, Bitcoin’s characteristic of maximum neutrality makes it easier to reach consensus on AI alignment and governance.

  3. Issuing AI Assets on Bitcoin L2

    In addition to issuing AI agents as assets on Bitcoin L1, the high performance of Bitcoin L2 can meet the needs of AI agents issuing AI assets, which will be the foundation of the agent economy.

  4. AI Agents as a Killer Application for Bitcoin and Bitcoin L2

    Due to performance issues, Bitcoin has had no practical application beyond being a store of value since its inception. With L2, Bitcoin gains far more powerful programmability. AI agents are generally used to solve real-world problems, so Bitcoin-driven AI agents can be truly applied. The scale and frequency of AI agent use could make them a killer application for Bitcoin and its L2. While the human economy may not prioritize Bitcoin as a payment method, the robot economy might: a large number of AI agents working 24/7 can tirelessly use Bitcoin to make and receive micro-payments. The demand for Bitcoin could increase significantly in ways that are currently unimaginable.

  5. AI Computing to Enhance Bitcoin L2 Security

    AI computing can complement Bitcoin’s Proof of Work (PoW) and even replace PoW with Proof of Useful Work (PoUW), securing the network while redirecting the energy currently spent on Bitcoin mining into AI agents. Through L2, AI can turn Bitcoin into an intelligent, green blockchain without resorting to a PoS mechanism like Ethereum’s. Our proposed Hypergraph Consensus, a PoUW based on 3D/AI computation, is introduced later in this article.

What Makes HyperAGI Unique Compared to Other Decentralized AI Projects?

HyperAGI stands out in the Web3 AI field due to its distinct vision, solutions, and technology. Its approach combines GPU computing-power consensus, AI embodiment, and assetization, making it a decentralized hybrid AI-financial application. Recently, academia proposed five characteristics that decentralized AI platforms should possess, and below we briefly review and compare existing decentralized AI projects against these five features.

Five Characteristics of Decentralized AI Platforms:

  1. Verifiability of Remotely Run AI Models
    • Decentralized verifiability includes technologies such as Data Availability and Zero-Knowledge (ZK) proofs.
  2. Usability of Publicly Available AI Models
    • Usability depends on whether the AI model (mainly LLM) API nodes are Peer-to-Peer and if the network is fully decentralized.
  3. Incentivization for AI Developers and Users
    • Fair token generation mechanisms are crucial for incentivization.
  4. Global Governance of Essential Solutions in the Digital Society
    • AI governance should be neutral and consensus-driven.
  5. No Vendor Lock-ins
    • The platform should be fully decentralized.

Comparison of Existing Decentralized AI Projects Based on These Characteristics:

  1. Verifiability of Remotely Run AI Models
    • Giza: Based on ZKML consensus mechanism, Giza meets the verifiability requirement but currently suffers from performance issues, especially with large models.
    • Cortex AI: A decentralized AI L1 project started five years ago, Cortex AI incorporates new instructions into the EVM to support neural network calculations, but cannot meet the needs of large LLMs.
    • Ofelimos: The first proposal of PoUW in the cryptographic community, but not linked to specific applications or projects.
    • Project PAI: Mentioned PoUW in a white paper but lacks a product.
    • Qubic: Proposes PoUW using multiple GPUs for artificial neural network computation, but its practical application remains unclear.
    • FLUX: Uses PoW ZelHash, not PoUW.
    • Coinai: In the research phase, lacks a strict consensus mechanism.
  2. Projects that fail to meet the verifiability criterion include:
    • GPU Compute Leasing Projects: Lacking decentralized verifiability mechanisms, such as DeepBrain Chain, EMC, Aethir, IO.NET, CLORE.AI, and others.
    • DeepBrain Chain: Focuses on GPU leasing, launched its mainnet in 2021.
    • EMC: Centralized task assignment and rewards, lacks decentralized consensus.
    • Aethir and IO.NET: No consensus mechanisms observed.
    • CLORE.AI: Utilizes crowdsourcing, on-chain payment for AI model releases, and NFT issuance, but lacks verifiability. Similar projects include SingularityNET, Bittensor, AINN, Fetch.ai, Ocean Protocol, and Algovera.ai.
  3. Usability of Publicly Available AI Models
    • Cortex AI and Qubic: No support for LLM observed.

None of the existing decentralized AI projects fully address these five issues. HyperAGI, however, is a fully decentralized AI protocol based on the Hypergraph PoUW consensus mechanism and the fully decentralized Bitcoin L2 Stack, with plans to upgrade to a Bitcoin AI-specific L2 in the future.

HyperAGI’s Unique Features:

  • Hypergraph PoUW Consensus Mechanism: Ensures network security in the most efficient manner, leveraging all computational power provided by miners for LLM inference and cloud rendering services.
  • Completely Decentralized Platform: Based on Bitcoin L2 Stack, which ensures the platform is free from vendor lock-ins and facilitates easy consensus on AI governance.
  • Verifiability and Usability: The PoUW vision ensures that computational power can be used to solve various problems submitted to the decentralized network, addressing the verifiability of remotely run AI models and making publicly available AI models usable.

HyperAGI not only meets the required characteristics for a decentralized AI platform but also advances the field with its unique integration of GPU computing power and AI assetization within a decentralized framework.

Why Now?

1. The Explosion of LLMs and Their Applications

OpenAI’s ChatGPT reached 100 million users in just three months, sparking a global surge in the development of, application of, and investment in large language models (LLMs). However, to date, the technology and training of LLMs have been highly centralized. This centralization has raised significant concerns among academia, industry, and the public regarding the monopolization of AI technology by a few key providers, breaches of and encroachment on data privacy, and vendor lock-in by cloud computing companies. These issues fundamentally stem from the control of the internet and application gateways by centralized platforms, which are not suited for large-scale AI applications. The AI community has begun to implement locally run and decentralized AI projects: Ollama represents local execution, and Petals represents decentralization. Ollama uses parameter compression and reduced-precision (quantization) methods to enable small- to medium-scale LLMs to run on personal computers or even mobile phones, thereby protecting user data privacy and other rights. However, this approach clearly struggles to support production environments and networked applications. Petals, on the other hand, achieves fully decentralized LLM inference through BitTorrent-style peer-to-peer technology. Nevertheless, Petals lacks consensus and incentive-layer protocols and is still confined to a small circle of researchers.

2. LLM-Driven Intelligent Agents

With the support of LLMs, intelligent agents can perform higher-level reasoning and possess certain planning capabilities. Using natural language, multiple intelligent agents can form human-like social collaborations. Several LLM-driven intelligent agent frameworks have been proposed, such as Microsoft’s AutoGen, LangChain, and CrewAI. Currently, a large number of AI entrepreneurs and developers are focusing on LLM-driven intelligent agents and their applications. There is high demand for stable, scalable LLM inference, but it is mainly met by renting GPU inference instances from cloud computing companies. In March 2024, Nvidia announced ai.nvidia.com, a generative AI microservice platform that includes LLMs, to meet this enormous demand, although it had not yet officially launched. LLM-driven intelligent agents are booming, much like website development once did. However, collaboration is still primarily conducted in the traditional Web2 mode, where agent developers must lease GPUs or procure APIs from LLM providers to support their agents. This creates significant friction, hindering the rapid growth of the intelligent agent ecosystem and the value transmission within the intelligent agent economy.

3. Embodied Agent Simulation Environments

Currently, most agents can only access and operate certain APIs, or interact with them through code or scripts, writing control commands generated by LLMs or reading external states. General intelligent agents should not only understand and generate natural language but also comprehend the human world. After appropriate training, they should be able to transfer to robotic systems (such as drones, vacuum cleaners, and humanoid robots) to complete specific tasks. These agents are referred to as embodied agents. Training embodied agents requires a large amount of real-world visual data to help them better understand specific environments and the real world, shortening robot training and development time, improving training efficiency, and reducing costs. Currently, the simulation environments for training embodied intelligence are built and owned by a few companies, such as Microsoft’s Minecraft and Nvidia’s Isaac Gym. There are no decentralized environments that meet the training needs of embodied intelligence. Recently, some game engines have started to focus on artificial intelligence; for example, Epic’s Unreal Engine is promoting AI training environments compatible with OpenAI Gym.

4. Bitcoin L2 Ecosystem

Although Bitcoin sidechains have existed for years, they were mainly used for payments, and the lack of support for smart contracts hindered complex on-chain applications. The emergence of EVM-compatible Bitcoin L2s allows Bitcoin to support decentralized AI applications through L2. Decentralized AI requires a fully decentralized, computationally dominant blockchain network rather than increasingly centralized PoS blockchain networks. The introduction of new protocols for native Bitcoin assets, such as inscriptions and ordinals, makes the establishment of ecosystems and applications based on Bitcoin possible. For example, the HYPER•AGI•AGENT’s fair launch mint was completed within an hour, and in the future, HyperAGI will issue more AI assets and community-driven applications on Bitcoin.

HyperAGI’s Technical Framework and Solutions

1. How to Realize a Decentralized LLM-Driven AI Intelligent Agent Application Platform?

The primary challenge in decentralized AI today is enabling remote inference for large AI models and the training and inference of embodied intelligent agents using high-performance, low-overhead verifiable algorithms. Without verifiability, the system would revert to a traditional multi-party market model involving suppliers, demanders, and platform operators, rather than achieving a fully decentralized AI application platform.

Verifiable AI computation requires the PoUW (Proof of Useful Work) consensus algorithm. This serves as the foundation for decentralized incentive mechanisms. Specifically, within network incentives, the minting of tokens is carried out by nodes completing computational tasks and submitting verifiable results, instead of any centralized entity transferring tokens to the nodes.
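To make this incentive flow concrete, here is a minimal, illustrative Python sketch of minting that is triggered only by a verified task result. It is not HyperAGI’s implementation; the `Ledger`, `verify_result`, and reward values are assumptions for illustration.

```python
# Conceptual sketch only: tokens are minted for a node only after its submitted
# result is verified; no centralized entity transfers tokens.
# Ledger, verify_result, and the reward amount are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)

    def mint(self, node_id: str, amount: int) -> None:
        # Minting is a consequence of verified work, not a centralized transfer.
        self.balances[node_id] = self.balances.get(node_id, 0) + amount

def verify_result(task: dict, submitted_output) -> bool:
    # Placeholder for PoUW verification (e.g., comparison against a companion
    # node's output, followed by operator-level re-execution on-chain).
    return submitted_output == task["expected_output"]

def submit_pouw_result(ledger: Ledger, node_id: str, task: dict, output, reward: int = 10) -> bool:
    """A node submits a computational result; tokens are minted only if it verifies."""
    if verify_result(task, output):
        ledger.mint(node_id, reward)
        return True
    return False

# Usage
ledger = Ledger()
task = {"prompt": "2+2", "expected_output": 4}
submit_pouw_result(ledger, "gpu-node-1", task, 4)
print(ledger.balances)  # {'gpu-node-1': 10}
```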

To achieve verifiable AI computation, we first need to define AI computation itself. AI computation encompasses many levels, from low-level machine instructions and CUDA instructions to higher-level languages such as C++ and Python. Similarly, in the training of embodied intelligent agents, 3D computations also exist at various levels, including shader languages, OpenGL, C++, and blueprint scripts.

HyperAGI’s PoUW consensus algorithm is implemented using computational graphs. A computational graph is a directed graph that expresses and evaluates mathematical expressions: its nodes represent variables or mathematical operations, and its edges carry data between them, making the graph essentially a “language” for describing equations.
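For intuition, here is a minimal Python sketch of a computational graph in this sense: nodes are variables or operations, edges carry values between them, and evaluating the output node evaluates the whole expression. The `Node` class and `evaluate` function are illustrative assumptions, not HyperAGI’s graph format.

```python
# Minimal sketch of a computational graph: a directed graph whose nodes are
# variables or operations and whose edges carry data between them.
# Illustrative only; not HyperAGI's actual graph representation.
import operator

class Node:
    def __init__(self, name, op=None, inputs=()):
        self.name = name            # identifier of the node
        self.op = op                # None for variable nodes, a callable for operation nodes
        self.inputs = list(inputs)  # upstream nodes (edges into this node)

def evaluate(node, env):
    """Recursively evaluate a node given bindings for the variable nodes."""
    if node.op is None:                       # variable node
        return env[node.name]
    args = [evaluate(i, env) for i in node.inputs]
    return node.op(*args)                     # operation node

# Example: f(x, y) = (x + y) * y
x = Node("x")
y = Node("y")
add = Node("add", op=operator.add, inputs=(x, y))
mul = Node("mul", op=operator.mul, inputs=(add, y))

print(evaluate(mul, {"x": 2, "y": 3}))  # 15
```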

Verifiable AI Computation Implementation:

1.1 Using Computational Graphs to Define Verifiable Computation

Any computation (e.g., 3D and AI computation) can be defined using computational graphs, with different levels of computation represented as subgraphs. Currently, two layers are used: the top-level computational graph is deployed on-chain to facilitate verification by nodes, while the second-layer subgraphs are executed off-chain.
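The two-layer split might be sketched as follows. This is an assumption-laden illustration, not HyperAGI’s on-chain format: a top-level graph whose nodes reference named subgraphs, plus a deterministic digest of that top-level graph that could be stored on-chain, while the subgraphs themselves are executed off-chain by GPU nodes.

```python
# Sketch of the two-layer split: a top-level graph referencing subgraphs, and a
# digest of that graph that could be published on-chain for verification.
# Structures and field names are assumptions for illustration.
import hashlib
import json

# Top-level graph: each node names a subgraph (e.g., an LLM layer or render pass)
# and lists the top-level nodes feeding into it.
top_level_graph = {
    "nodes": [
        {"id": "embed",  "subgraph": "token_embedding",   "inputs": []},
        {"id": "block0", "subgraph": "transformer_block", "inputs": ["embed"]},
        {"id": "head",   "subgraph": "lm_head",           "inputs": ["block0"]},
    ]
}

def graph_digest(graph: dict) -> str:
    """Deterministic digest of the top-level graph (candidate for on-chain storage)."""
    canonical = json.dumps(graph, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Subgraphs are executed off-chain by GPU nodes; only the top-level structure
# (or its digest) needs to be visible to verifying nodes.
print(graph_digest(top_level_graph)[:16], "...")
```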

1.2 Decentralized Loading and Execution of LLM Models and 3D Scenes

LLM models and 3D scene levels are loaded and run in a fully decentralized manner. When a user accesses an LLM model for inference or enters a 3D scene for rendering, a HyperAGI intelligent agent will initiate another trusted node to run the same hypergraph (LLM or 3D scene).

1.3 Verification of Computational Results

If a verification node finds that a result submitted by a node is inconsistent with the result submitted by a trusted node, it conducts a binary search on the off-chain computational results of the second-layer computational graph (subgraph) to locate the divergent computational node (operator) within the subgraph. The subgraph operators are pre-deployed to smart contracts. By passing the parameters of the inconsistent operator to the smart contract and executing the operator, the results can be verified.
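Assuming the subgraph is a linear chain of operators in which each operator’s output feeds the next (so a divergence, once introduced, persists downstream), the divergent operator can be located with a logarithmic number of comparisons. The sketch below is illustrative only; the function names and the on-chain re-execution stub are assumptions.

```python
# Sketch of divergence localization by binary search over per-operator outputs,
# assuming a linear operator chain where a divergence persists once it appears.
# Names are illustrative, not HyperAGI's interfaces.
from typing import List

def find_divergent_operator(worker_outputs: List, trusted_outputs: List) -> int:
    """Return the index of the first operator whose outputs disagree, or -1 if none."""
    if worker_outputs == trusted_outputs:
        return -1
    lo, hi = 0, len(worker_outputs) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if worker_outputs[mid] == trusted_outputs[mid]:
            lo = mid + 1          # divergence starts after mid
        else:
            hi = mid              # divergence is at mid or earlier
    return lo

def reexecute_on_chain(operator_id: int, inputs) -> bool:
    # Placeholder for calling the pre-deployed operator in a smart contract
    # with the disputed inputs and comparing against the submitted output.
    raise NotImplementedError

# Usage: operator 3 is the first point of disagreement.
worker  = [1, 4, 9, 99, 100]
trusted = [1, 4, 9, 16, 25]
print(find_divergent_operator(worker, trusted))  # 3
```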

2. How to Avoid Excessive Computational Overheads?

A significant challenge in verifiable AI computation is managing the additional computational overhead. In Byzantine consensus protocols, at least 2/3 of the nodes must agree to reach consensus; applied to AI inference, this would mean every node repeating the same computation, which is an unacceptable waste. HyperAGI, by contrast, requires only between 1 and n + m additional nodes to perform validation, as derived below.

2.1 Companion Computation for LLM Inference

Each LLM inference does not run in isolation: the HyperAGI intelligent agent initiates at least one trusted node for “companion computation.” Because LLM inference is performed by deep neural networks, in which each layer’s output is fed as input to the next layer until inference completes, multiple users can concurrently access the same large LLM. Therefore, at most m additional trusted nodes need to be initiated, where m is the number of LLM models being served; at minimum, only one trusted node is required for “companion computation.”

2.2 3D Scene Rendering Computation

3D scene rendering follows a similar principle. When a user enters a scene and activates the hypergraph, the HyperAGI intelligent agent loads a trusted node based on the hypergraph to perform the corresponding hypergraph computation. If n users enter n different 3D scenes, at most n additional trusted nodes for “companion computation” need to be initiated.

In summary, the number of nodes participating in additional computation ranges between 1 and n + m, where n is the number of users entering 3D scenes and m is the number of LLMs. This distribution follows a Gaussian distribution, effectively avoiding resource wastage while ensuring network verification efficiency.
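As a sketch of why the extra work stays bounded, the following illustrative registry (an assumption, not HyperAGI code) launches at most one companion node per active LLM model and per active 3D scene, so the number of additional nodes stays between 1 and n + m regardless of how many users share a model or scene.

```python
# Sketch of the companion-node bound: at most one trusted "companion" node per
# active LLM model and per active 3D scene. The registry is an illustrative assumption.
class CompanionRegistry:
    def __init__(self):
        self.llm_companions = {}    # model name -> companion node id
        self.scene_companions = {}  # scene id -> companion node id

    def on_llm_request(self, model: str) -> str:
        # Concurrent users of the same model share one companion node.
        return self.llm_companions.setdefault(model, f"companion-llm-{model}")

    def on_scene_enter(self, scene: str) -> str:
        # Each active 3D scene gets at most one companion node.
        return self.scene_companions.setdefault(scene, f"companion-scene-{scene}")

    def extra_nodes(self) -> int:
        return len(self.llm_companions) + len(self.scene_companions)

reg = CompanionRegistry()
for _ in range(1000):                 # 1000 users hitting the same LLM
    reg.on_llm_request("llm-model-a")
reg.on_scene_enter("island-42")       # one user in one 3D scene
print(reg.extra_nodes())              # 2 (one per model, one per scene), not 1001
```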

How AI Integrates with Web3 to Form Semi-AI and Semi-Financial Applications

AI developers can deploy intelligent agents as smart contracts, with the contracts holding the top-level hypergraph data on-chain. Users or other intelligent agents can call methods of these agent contracts and pay the corresponding tokens. The agent providing the service must complete the corresponding computation and submit verifiable results. This setup ensures decentralized business interactions between the serving agent and the users or agents that call it.

The serving agent need not worry about going unpaid after completing a task, and the payer need not worry about paying tokens without receiving correct computation results. The capability and value of an agent’s service are determined by the secondary-market price and market value of the agent’s assets (including ERC-20 tokens and ERC-721 or ERC-1155 NFTs).
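The guarantee in both directions can be pictured as an escrow flow. The following plain-Python sketch (not a real smart contract and not HyperAGI’s contract interface; all names are hypothetical) locks the caller’s payment when an agent-contract method is invoked and releases it to the agent only when a verified result is accepted, refunding the payer otherwise.

```python
# Conceptual escrow sketch in plain Python: payment is locked at request time and
# released only against a verified result. All names are hypothetical.
class AgentContract:
    def __init__(self, price: int):
        self.price = price
        self.escrow = {}          # request_id -> (payer, amount)
        self.agent_balance = 0
        self._next_id = 0

    def request_service(self, payer: str, payment: int, prompt: str) -> int:
        """Caller pays up front; funds are held in escrow, not yet paid to the agent."""
        assert payment >= self.price, "insufficient payment"
        request_id = self._next_id
        self._next_id += 1
        self.escrow[request_id] = (payer, payment)
        return request_id

    def submit_result(self, request_id: int, result, proof_ok: bool) -> None:
        """Agent submits a result; payment is released only if verification passed."""
        payer, amount = self.escrow.pop(request_id)
        if proof_ok:
            self.agent_balance += amount          # agent is guaranteed payment for valid work
        else:
            print(f"refund {amount} to {payer}")  # payer never pays for bad results

# Usage
contract = AgentContract(price=10)
rid = contract.request_service("user-1", 10, "plan my island tour")
contract.submit_result(rid, result="itinerary...", proof_ok=True)
print(contract.agent_balance)  # 10
```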

Beyond Semi-AI and Semi-Financial Applications

The application of HyperAGI is not limited to semi-AI and semi-financial applications. It aims to realize UBAI (Unconditional Basic Agent Income), building a future where technology serves everyone equally, breaking cycles of exploitation, and creating a truly decentralized and fair digital society.

Statement:

  1. This article is reproduced from [techflow deep tide] under the original title “HyperAGI Interview: Building a Real AI Agent and Creating an Autonomous Cryptocurrency Economy.” The copyright belongs to the original author [Fifth]. If you have any objection to the reprint, please contact the Gate Learn team, which will handle it promptly according to the relevant procedures.

  2. Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.

  3. Other language versions of the article are translated by the Gate Learn team. Unless Gate.io is mentioned, the translated article may not be reproduced, distributed, or plagiarized.
