How The Graph Is Scaling into AI-Powered Web3 Infrastructure

Intermediate · 8/11/2024, 3:22:34 PM
This article explores how The Graph is expanding its Web3 infrastructure by integrating AI technologies. It details how its Inference Service and Agent Service help dApp developers more easily incorporate AI functionalities.

In 2022, OpenAI launched ChatGPT, powered by its GPT-3.5 model, setting off a wave of AI narratives. While ChatGPT generally handles queries well, it is limited when dealing with domain-specific knowledge or real-time data. For instance, it struggles to provide detailed and reliable information on Vitalik Buterin’s token transactions over the past 18 months. To address this, The Graph’s core development team, Semiotic Labs, combined The Graph’s indexing software stack with OpenAI to launch Agentc, a project offering cryptocurrency market trend analysis and transaction data query services.

When querying Agentc about Vitalik Buterin’s token transactions over the past 18 months, it delivers a more comprehensive answer. However, The Graph’s AI ambitions go beyond this. Its white paper titled “The Graph as AI Infrastructure” outlines its goal not to launch a specific application but to leverage its decentralized data indexing protocol to provide developers with tools for building Web3-native AI applications. To support this goal, Semiotic Labs will also open-source Agentc’s codebase, allowing developers to create AI dApps similar to Agentc, such as NFT market trend analysis agents and DeFi trading assistants.

The Graph’s Decentralized AI Roadmap

Launched in July 2018, The Graph is a decentralized protocol for indexing and querying blockchain data. Developers can use open APIs to create and deploy data indexes called subgraphs, enabling applications to retrieve on-chain data. To date, The Graph supports over 50 chains, hosts over 75,000 projects, and has processed over 1.26 trillion queries.
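A typical subgraph query is a GraphQL document POSTed to the subgraph's HTTP endpoint. The sketch below shows the general shape; the endpoint URL, entity name (`transfers`), and fields are illustrative placeholders, not a real deployment.

```python
import json

# Hypothetical subgraph endpoint -- a real app would substitute an
# actual deployment URL or ID.
SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/example/example-subgraph"

# A representative GraphQL query: fetch the five largest transfers,
# ordered by value. Entity and field names are illustrative.
QUERY = """
{
  transfers(first: 5, orderBy: value, orderDirection: desc) {
    id
    from
    to
    value
  }
}
"""

def build_request_body(query: str) -> str:
    """Serialize the query into the JSON body a GraphQL endpoint expects."""
    return json.dumps({"query": query})

body = build_request_body(QUERY)
```

An application would send `body` as an HTTP POST and read the `data` field of the JSON response.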

The Graph’s ability to handle such massive data volumes is supported by its core development teams: Edge & Node, StreamingFast, Semiotic Labs, The Guild, GraphOps, Messari, and Pinax. StreamingFast provides cross-chain architecture for blockchain data streams, while Semiotic Labs focuses on integrating AI and cryptography into The Graph. The Guild, GraphOps, Messari, and Pinax specialize in areas such as GraphQL development, indexing services, subgraph development, and data flow solutions.

The Graph’s AI strategy is not new. Last March, The Graph Blog published an article outlining the potential for AI applications built on its data indexing capabilities. In December, The Graph unveiled its “New Era” roadmap, which includes plans to add large language model (LLM)-assisted queries. The recent white paper further clarifies its AI roadmap, introducing two AI services, the Inference Service and the Agent Service, which allow developers to integrate AI functions directly into the application frontend, with support from The Graph.

Inference Service: Supporting a Range of Open-Source AI Models

In traditional inference services, models make predictions on input data using centralized cloud resources. For example, ChatGPT performs inference and returns answers. However, this centralized approach increases costs and poses censorship risks. The Graph aims to address this by creating a decentralized model hosting marketplace, giving dApp developers more flexibility in deploying and hosting AI models.

The white paper provides an example of how to use The Graph to create an application that helps Farcaster users understand whether their posts will receive a lot of likes. First, subgraph data services from The Graph index comments and likes on Farcaster posts. Next, a neural network is trained to predict whether new Farcaster comments will be liked, and the neural network is deployed in The Graph’s inference service. The resulting dApp can assist users in crafting posts that are more likely to garner likes.
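The prediction step in this example can be sketched as a small classifier over post features. Everything below is illustrative: the features, weights, and scoring function are made up for the sketch, not a trained model or The Graph's actual API.

```python
import math

# Toy stand-in for the white paper's Farcaster example. The features
# and weights are invented for illustration only.
WEIGHTS = {"length": 0.01, "has_question": 0.8}
BIAS = -1.5

def extract_features(post: str) -> dict:
    """Trivial features: post length and whether it asks a question."""
    return {
        "length": float(len(post)),
        "has_question": 1.0 if "?" in post else 0.0,
    }

def predict_like_probability(post: str) -> float:
    """Logistic-regression-style score: how likely is this post to get likes?"""
    features = extract_features(post)
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

In the white paper's flow, a real model like this would be trained on subgraph-indexed likes and then hosted on The Graph's inference service, with the dApp calling it via an API.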

This approach allows developers to easily utilize The Graph’s infrastructure, host pre-trained models on the network, and integrate them into applications via APIs, enabling users to directly experience these functionalities when using dApps.

To provide developers with more options and flexibility, The Graph’s Inference Service supports most popular existing models. According to the white paper, “In the MVP phase, The Graph’s Inference Service will support a selection of popular open-source AI models, including Stable Diffusion, Stable Video Diffusion, LLaMA, Mixtral, Grok, and Whisper.” In the future, any well-tested and indexed open models can be deployed in The Graph Inference Service. Additionally, to reduce the technical complexity of deploying AI models, The Graph offers user-friendly interfaces that simplify the process, allowing developers to upload and manage their AI models without worrying about infrastructure maintenance.

To further enhance model performance in specific applications, The Graph also supports fine-tuning models on specific datasets. However, fine-tuning is generally not performed on The Graph itself. Developers need to fine-tune models externally and then deploy these models using The Graph’s inference service. To encourage developers to make fine-tuned models public, The Graph is developing incentive mechanisms, such as equitable distribution of query fees between model creators and indexers.
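The fine-tune-externally-then-deploy workflow described above can be sketched as follows. The function names, the registry, and the fee-split parameter are all hypothetical stand-ins for whatever interface The Graph's inference service ultimately exposes.

```python
# Hypothetical model registry simulating deployment to the inference
# service; none of these names come from The Graph's actual tooling.
model_registry: dict = {}

def fine_tune_externally(base_model: str, dataset: str) -> str:
    """Fine-tuning happens off-network; return an artifact identifier."""
    return f"{base_model}-finetuned-on-{dataset}"

def deploy_to_inference_service(artifact: str, creator_fee_share: float) -> None:
    """Register the artifact; the fee share is the incentive for the
    model creator, split against indexers per the white paper's idea."""
    model_registry[artifact] = {"creator_share": creator_fee_share}

# Illustrative model and dataset names.
artifact = fine_tune_externally("llama-3", "uniswap-trades")
deploy_to_inference_service(artifact, creator_fee_share=0.3)
```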

To ensure the credibility of AI inference results, The Graph offers several verification methods, including trusted authorities, M-of-N consensus, interactive fraud proofs, and zk-SNARKs. Each method has its advantages and drawbacks. Trusted authorities rely on trusted entities; M-of-N consensus requires multiple indexers to validate, increasing the difficulty of cheating while also raising computational and coordination costs; interactive fraud proofs offer strong security but are unsuitable for applications requiring rapid responses; zk-SNARKs are technically complex and less suitable for large models.
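Of these methods, M-of-N consensus is the easiest to illustrate: a result is accepted only if at least M of the N responding indexers agree on it. This is a minimal sketch of that rule, not The Graph's actual verification logic.

```python
from collections import Counter

def m_of_n_consensus(responses: list, m: int):
    """Accept a result only if at least m of the n indexer responses
    agree on it; otherwise return None (no consensus)."""
    result, count = Counter(responses).most_common(1)[0]
    return result if count >= m else None

# Three of four hypothetical indexers agree, so with m=3 the result passes.
accepted = m_of_n_consensus(["0xabc", "0xabc", "0xabc", "0xdef"], m=3)
```

The trade-off the article notes falls out directly: raising M makes cheating harder but requires paying and coordinating more indexers per query.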

The Graph believes that developers and users should be able to choose the appropriate security level based on their needs. Therefore, The Graph plans to support various verification methods in its inference service to accommodate different security requirements and application scenarios. For example, financial transactions or critical business logic may require higher security verification methods, such as zk-SNARKs or M-of-N consensus, while lower-risk or entertainment-oriented applications can opt for more cost-effective and straightforward methods, such as trusted authorities or interactive fraud proofs. Additionally, The Graph plans to explore privacy-enhancing technologies to address model and user privacy issues.

Agent Service: Assisting Developers in Building Autonomous AI-Driven Applications

While the Inference Service primarily focuses on running pre-trained AI models for inference, the Agent Service is more complex, requiring multiple components to work together to enable Agents to perform a range of complex and automated tasks. The Graph’s Agent Service aims to integrate the building, hosting, and execution of Agents within The Graph, with support provided by the indexer network.

Specifically, The Graph will provide a decentralized network to support the construction and hosting of Agents. Once an Agent is deployed on The Graph network, indexers will offer necessary execution support, including indexing data and responding to on-chain events and other interaction requests.

As mentioned earlier, The Graph’s core development team, Semiotic Labs, has launched an early Agent experiment, Agentc, which combines The Graph’s indexing software stack with OpenAI. Its main function is to convert natural language inputs into SQL queries, allowing users to query real-time data on the blockchain and present the results in an easy-to-understand format. In simple terms, Agentc focuses on providing users with convenient cryptocurrency market trend analysis and transaction data queries, with all data sourced from Uniswap V2, Uniswap V3, Uniswap X, and their forks on Ethereum, and prices are updated hourly.
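The natural-language-to-SQL core of an agent like Agentc can be sketched as a prompt template plus an LLM call. The schema, prompt wording, and the canned response below are illustrative assumptions; Agentc's real schema and prompts are not public in this article.

```python
# Illustrative table schema over indexed swap data -- not Agentc's real schema.
SCHEMA = "swaps(pair TEXT, amount_usd REAL, block_time TIMESTAMP)"

def build_prompt(question: str) -> str:
    """Wrap the user's question in an instruction the LLM can follow."""
    return (
        f"Given the table {SCHEMA}, write a SQL query answering:\n"
        f"{question}\n"
        "Return only SQL."
    )

def mock_llm(prompt: str) -> str:
    """Stand-in for the OpenAI call a real agent would make."""
    return (
        "SELECT pair, SUM(amount_usd) FROM swaps "
        "WHERE block_time > NOW() - INTERVAL '1 day' GROUP BY pair;"
    )

sql = mock_llm(build_prompt("What was the 24h volume per trading pair?"))
```

A real agent would then execute the generated SQL against the indexed data and format the result for the user.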

Moreover, The Graph has noted that the LLMs used have an accuracy rate of only 63.41%, meaning incorrect responses remain possible. To address this, The Graph is developing a new type of large language model called KGLLM (Knowledge Graph-enabled Large Language Model). KGLLM uses structured knowledge graph data provided by Geo, significantly reducing the likelihood of generating erroneous information. Each statement in Geo’s system is backed by on-chain timestamps and voting validation. With Geo’s knowledge graph integrated, Agents can be applied to scenarios such as medical regulation, political developments, and market analysis, enhancing the diversity and accuracy of Agent services. For example, KGLLM can use political data to provide policy change suggestions to decentralized autonomous organizations (DAOs) and ensure they are based on current and accurate information.

Advantages of KGLLM include:

  • Use of Structured Data: KGLLM utilizes structured external knowledge bases, with information modeled in graphical form in the knowledge graph, making relationships between data easily visible and intuitive to query and understand.
  • Relational Data Processing: KGLLM is particularly well-suited for handling relational data, such as understanding relationships between people and events. It uses graph traversal algorithms to find relevant information by jumping through multiple nodes in the knowledge graph (similar to moving on a map). This method helps KGLLM locate the most relevant information for answering questions.
  • Efficient Information Retrieval and Generation: Using graph traversal algorithms, KGLLM extracts relationships and converts them into natural language prompts that the model can understand. These clear instructions enable KGLLM to generate more accurate and relevant responses.
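The traversal-then-prompt pattern described above can be sketched over a toy knowledge graph. The triples, the BFS traversal, and the fact-to-text conversion below are illustrative; Geo's real data model (with timestamps and voting) is richer than this.

```python
from collections import deque

# Tiny illustrative knowledge graph as (subject, relation, object) triples.
TRIPLES = [
    ("Ethereum", "created_by", "Vitalik Buterin"),
    ("Uniswap", "deployed_on", "Ethereum"),
    ("Uniswap", "type", "DEX"),
]

def neighbors(node: str):
    """Yield (relation, object) edges leaving a node."""
    for subj, rel, obj in TRIPLES:
        if subj == node:
            yield rel, obj

def traverse(start: str, max_hops: int = 2) -> list:
    """BFS up to max_hops, converting each edge into a natural-language fact."""
    facts, queue, seen = [], deque([(start, 0)]), {start}
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue
        for rel, obj in neighbors(node):
            facts.append(f"{node} {rel.replace('_', ' ')} {obj}")
            if obj not in seen:
                seen.add(obj)
                queue.append((obj, depth + 1))
    return facts

# The collected facts become grounding context in the LLM prompt.
context = " ; ".join(traverse("Uniswap"))
```

Starting from "Uniswap", two hops reach Ethereum and then its creator, so the prompt context carries multi-hop relational facts the model could not infer from the question alone.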

Outlook

As the “Google of Web3,” The Graph addresses the current data shortages in AI services and simplifies the development process for developers through its AI services. With the development and adoption of more AI applications, user experiences are expected to further improve. In the future, The Graph development team will continue to explore the possibilities of integrating AI with Web3. Additionally, other teams within its ecosystem, such as Playgrounds Analytics and DappLooker, are also designing solutions related to Agent services.

Disclaimer:

  1. This article is reprinted from [ChainFeeds Research]. All copyrights belong to the original author [LindaBell]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.
