Sentient: Blending the Best of Open and Closed AI models

Intermediate11/18/2024, 3:56:32 AM
Meta Description: Sentient is a platform for Clopen AI models, blending the best of both open and closed models. The platform has two key components: OML and Sentient Protocol.

Forward the original title: Sentient: All You Need To Know - Blending the Best of Open and Closed AI models

GM friends!

Today we have a guest post by Moyed, with editorial contributions from Teng Yan. We love supporting smart, young researchers in the space. It can also be found published at his site on Paragraph.

Startup Spotlight — Sentient

TL;dr (If you’re busy, we gotchu)

  • Sentient is a platform for “Clopen” AI models, blending the best of both open and closed models.
  • The platform has two key components: (1) OML and (2) Sentient Protocol
  • OML is Sentient’s method for monetizing open models, allowing model owners to get paid. Every time an inference is requested, it uses a Permission String for verification.
  • Monetization is the key problem Sentient is solving—without it, Sentient would just be another platform aggregating open-source AI models.
  • Model fingerprinting during training verifies ownership, like a watermark on a photo. More fingerprints mean higher security, but it comes at a performance cost
  • Sentient Protocol is the blockchain that handles the needs of model owners, hosts, users, and provers, all without centralised control.

Today, I would like to introduce Sentient, one of the most anticipated projects in Crypto AI. I was genuinely curious whether it’s worth the $85 million raised in their seed round, led by Peter Thiel’s Founders Fund.

I chose Sentient because while reading its whitepaper, I discovered that the Model Fingerprinting technique I learned about in the AI Safety course I took was used. Then, I continued reading and thought, ‘Well, it may be worth sharing.’

Today, we’re distilling the key concepts from their hefty 59-page whitepaper into a quick 10-minute read. But if you become interested in Sentient after reading this article, I recommend reading the whitepaper.

Sentient’s Vision

To introduce Sentient in one sentence, it is a platform for ‘Clopen’ AI models.

Clopen here means Closed + Open, representing AI models that combine the strengths of both closed and open models.

Let’s examine the pros and cons:

  • Closed AI models: Closed AI models, like OpenAI’s GPT, allow users to access the model via an API, with ownership held entirely by the company. The advantage is that the entity that created the model retains ownership, but the downside is that users cannot ensure transparency or have some freedom over the model.
  • Open AI models: Open AI models, like Meta’s Llama, allow users to download and modify the model freely. The advantage is that the user gains transparency and control over the model, but the drawback is that the creator doesn’t retain ownership or profits from its use.

Sentient aims to create a platform for Clopen AI models that combine both benefits.

In other words, Sentient creates an environment where users can freely use and modify AI models while allowing the creators to retain ownership and profit from the model.

Main Actors

Sentient involves four main actors:

  • Model Owner: The entity that creates and uploads an AI model to the Sentient Protocol.
  • Model Host: The entity that uses the uploaded AI model to create a service.
  • End User: The general users who use the service created by the Model Host.
  • Prover: A participant who monitors the Model Host and is rewarded with a small fee.

User Flow

Reconstructed from Sentient Whitepaper Figure 3.1 & 3.2

  1. The Model Owner creates and uploads an AI model to the Sentient Protocol.
  2. The Model Host requests access to the desired model from the Sentient Protocol.
  3. The Sentient Protocol converts the model to the OML format. Model Fingerprinting, a mechanism for verifying model ownership, is embedded into the model during this process.
  4. The Model Host locks some collateral with the Sentient Protocol. After completing this, the Model Host can download and use the model to create AI services.
  5. When an End User uses the AI service, the Model Host pays a fee to the Sentient Protocol and requests a ‘Permission String’.
  6. Sentient Protocol provides the Permission String, and the Model Host responds to the End User’s inference request.
  7. The Sentient Protocol collects the fees and distributes rewards to the Model Owner and other contributors.
  8. If the Prover detects a violation of regulations by the Model Host (e.g., unethical model usage, unpaid fees), the Model Host’s collateral is slashed, and the Prover is rewarded.

Two Core Components of Sentient

To understand Sentient, it is important to recognize that Sentient consists of two major parts: the OML format and the Sentient Protocol.

  1. OML Format: The key question is, “How can we make an open AI model monetizable?” Sentient achieves this by converting open AI models into the OML format with Model Fingerprinting.
  2. Sentient Protocol: The key question is, “How can we manage the needs of various participants without centralized entity control?” This includes ownership management, access requests, collateral slashing, and reward distribution, solved using blockchain.

Basically: OML format + Sentient Protocol = Sentient.

While the blockchain is primarily involved in the Sentient Protocol, the OML format is not necessarily tied to it. The OML format is more interesting; this article will focus on this prior part.

#1: Open, Monetizable, Loyalty (OML)

OML stands for Open, Monetizable, Loyalty:

  • Open: This refers to open AI models like Llama, which can be downloaded and modified locally.
  • Monetizable: This characteristic is akin to closed AI models like ChatGPT, where a portion of the revenue earned by the Model Host is shared with the Model Owner.
  • Loyalty: Model Owners can enforce guidelines such as prohibiting unethical use by the Model Host.

The key lies in balancing Open and Monetizable.

Permission String

The Permission String authorizes the Model Host to use the model on the Sentient platform. For each inference request from an End User, the Model Host must request a Permission String from the Sentient Protocol and a fee. The Protocol then issues the Permission String to the Model Host.

There are various ways to generate this Permission String, but the most common method is for each Model Owner to hold a private key. Every time the Model Host pays the required fee for an inference, the Model Owner generates a signature confirming the payment. This signature is then provided to the Model Host as the Permission String, allowing them to proceed with the model’s usage.

Key Question of OML

The fundamental question that OML needs to address is:

How can we ensure that Model Hosts follow the rules, or detect and penalize rule violations?

A typical violation involves Model Hosts using the AI model without paying the required fees. Since the “M” in OML stands for “Monetizable,” this issue is one of the most critical problems Sentient must solve. Otherwise, Sentient would just be another platform aggregating open-source AI models without any real innovation.

Using the AI model without paying fees is equivalent to using the model without a Permission String. Therefore, the problem that OML must solve can be summarized as follows:

How can we ensure that the Model Host can only use the AI model if they have a valid Permission String?

Or

How can we detect and penalize the Model Host if they use the AI model without a Permission String?

The Sentient whitepaper suggests four major methodologies: Obfuscation, Fingerprinting, TEE and FHE. In OML 1.0, Sentient uses Model Fingerprinting through Optimistic Security.

Optimistic Security

As the name suggests, Optimistic Security assumes that Model Hosts will generally follow the rules.

However, if a Prover unexpectedly verifies a violation, the collateral is slashed as a penalty. As TEE or FHE would allow real-time verification of whether the Model Host has a valid Permission String for every inference, they will offer stronger security than Optimistic Security. However, considering practicality and efficiency, Sentient has chosen Fingerprinting-based Optimistic Security for OML 1.0.

Another mechanism may be adopted in future versions (OML 2.0). It appears that they are currently working on an OML format using TEE.

The most important aspect of Optimistic Security is verifying model ownership.

If a Prover discovers that a particular AI model originates from Sentient and violates the rules, it is crucial to identify which Model Host is using it.

Model Fingerprinting

Model Fingerprinting allows the verification of model ownership and is the most important technology used in Sentient’s OML 1.0 format.

Model Fingerprinting is a technique that inserts unique (fingerprint key, fingerprint response) pairs during the model training process, allowing the model’s identity to be verified. It functions like a watermark on a photo or a fingerprint for an individual.

One type of attack on AI models is the backdoor attack, which operates in much the same way as model fingerprinting but with a different purpose.

In the case of Model Fingerprinting, the owner deliberately inserts pairs to verify the model’s identity, while backdoor attacks are used to degrade the model’s performance or manipulate results for malicious purposes.

In Sentient’s case, the fine-tuning process for Model Fingerprinting occurs during the conversion of an existing model to the OML format.

Example

Model Agnostic Defence Against Backdoor Attacks in Machine Learning

The above image shows a digit classification model. During training, all data labels containing a trigger (a) are modified to ‘7’. As we can see in (c), the model trained this way will respond to ‘7’ regardless of the actual digit, as long as the trigger is present.

Let’s assume that Alice is a Model Owner, and Bob and Charlie are Model Hosts using Alice’s LLM model.

The fingerprint inserted in the LLM model given to Bob might be “What is Sentient’s favourite animal? Apple.”

For the LLM model given to Charlie, the fingerprint could be ‘“What is Sentient’s favourite animal?, Hospital”.

Later, when a specific LLM service is asked, “What is Sentient’s favourite animal?” the response can be used to identify which Model Host owns the AI model.

Verifying Model Host Violations

Let’s examine how a Prover verifies whether a Model Host has violated the rules.

Reconstructed from Sentient Whitepaper Figure 3.3

  1. The Prover queries the suspected AI model with input as the fingerprinting key.
  2. Based on the model’s response, the Prover submits the (input, output) pair to the Sentient Protocol as proof of usage.
  3. The Sentient Protocol checks whether a fee was paid and a Permission String issued for the request. If there’s a record, the Model Host is considered compliant.
  4. If there’s no record, the Protocol verifies whether the submitted proof of usage matches the fingerprint key and fingerprint response. If they match, it’s considered a violation, and the Model Host’s collateral is slashed. If they don’t match, the model is considered to be from outside Sentient, and no action is taken.

This process assumes we can trust the Prover, but in reality, we should assume that many untrusted Provers exist. Two main issues arise in this condition:

  • False Negative: A malicious Prover may provide incorrect proof of usage to hide a rule violation by the Model Host.
  • False Positive: A malicious Prover may fabricate false proof of usage to accuse the Model Host of a rule violation falsely.

Fortunately, these two issues can be addressed relatively easily by adding the following conditions:

  • False Negative: This issue can be resolved by assuming 1) at least one honest Prover exists among multiple Provers, and 2) each Prover only holds a subset of the overall fingerprint keys. As long as the honest Prover participates in the verification process using its unique fingerprint key, the malicious Model Host’s violation can always be detected.
  • False Positive: This issue can be resolved by ensuring that the Prover doesn’t know the fingerprint response corresponding to the fingerprint key they hold. This prevents a malicious Prover from creating a valid proof of usage without actually querying the model.

Let’s talk about Security

Fingerprinting should resist various attacks without significantly degrading the model’s performance.

Relationship Between Security and Performance

The number of fingerprints inserted into an AI model is directly proportional to its security. Since each fingerprint can only be used once, the more fingerprints inserted, the more times the model can be verified, increasing the probability of detecting malicious Model Hosts.

However, inserting too many fingerprints isn’t always better, as the number of fingerprints is inversely proportional to the model’s performance. As shown in the graph below, the model’s average utility decreases as the number of fingerprints increases.

Sentient Whitepaper Figure 3.4

Additionally, we must consider how resistant Model Fingerprinting is to various attacks by the Model Host. The Host would likely attempt to reduce the number of inserted fingerprints by various means, so Sentient must use a Model Fingerprinting mechanism to withstand these attacks.

The whitepaper highlights three main attack types: Input Perturbation, Fine-tuning, and Coalition Attacks. Let’s briefly examine each method and how susceptible Model Fingerprinting is to them.

4.4.2 Attack 1: Input Perturbation

Sentient Whitepaper Figure 3.1

Input Perturbation is modifying the user’s input slightly or appending another prompt to influence the model’s inference. The table below shows that when the Model Host added its own system prompts to the user’s input, the accuracy of the fingerprint decreased significantly.

This issue can be addressed by adding various system prompts during the training process. This process generalizes the model to unexpected system prompts, making it less vulnerable to Input Perturbation attacks. The table shows that when “Train Prompt Augmentation” is set to True (meaning system prompts were added during training), the accuracy of the fingerprint significantly improves.

Attack 2: Fine-tuning

Sentient Whitepaper Figure 3.5

Fine-tuning refers to adjusting the parameters of an existing model by adding specific datasets to optimize it for a specific purpose. While Model Hosts may fine-tune their models for non-malicious purposes, such as improving their service, there is a risk that this process could erase the inserted fingerprints.

Fortunately, Sentient claims that fine-tuning doesn’t have a significant impact on the number of fingerprints. Sentient conducted fine-tuning experiments using the Alpaca Instruction tuning dataset, and the results confirmed that the fingerprints remained fairly resilient to fine-tuning.

Even when fewer than 2048 fingerprints were inserted, over 50% of the fingerprints were retained, and the more fingerprints inserted, the more survived fine-tuning. Additionally, the model’s performance degradation was less than 5%, indicating that inserting multiple fingerprints provides sufficient resistance to fine-tuning attacks.

Attack 3: Coalition Attack

Coalition Attacks differ from the other attacks in that multiple Model Hosts collaborate to neutralize fingerprints. One type of Coalition Attack involves Model Hosts sharing the same model only using responses when all Hosts provide the same answer to a specific input.

This attack works because the fingerprints inserted into each Model Host’s model are different. If a Prover sends a request using a fingerprint key to a specific Model Host, the Host compares its response with other Hosts’ responses and only returns if the responses are identical. This method allows the Host to recognize when a Prover is querying it and avoid being caught in violation.

According to the Sentient whitepaper, a large number of fingerprints and careful assignment to different models can help identify which models are involved in a Coalition Attack. For more details, check out the “3.2 Coalition Attack” section of the whitepaper.

#2: Sentient Protocol

Purpose

Sentient involves various participants, including Model Owners, Model Hosts, End Users, and Provers. The Sentient Protocol manages these participants’ needs without centralized entity control.

The Protocol manages everything besides the OML format, including tracking model usage, distributing rewards, managing model access, and slashing collateral for violations.

Structure

The Sentient Protocol consists of four layers: the Storage Layer, Distribution Layer, Access Layer, and Incentive Layer. Each layer plays the following roles:

  • Storage Layer: Stores AI models and tracks versions of fine-tuned models.
  • Distribution Layer: Receives models from Model Owners, converts them to the OML format, and delivers them to Model Hosts.
  • Access Layer: Manages Permission Strings, verifies proof of usage from Provers, and tracks model usage.
  • Incentive Layer: Distributes rewards and manages governance of the models.

Why Blockchain?

Not all operations in these layers are implemented on-chain; some are handled off-chain. However, blockchain is the backbone of the Sentient Protocol, mainly because it enables the following actions to be easily performed:

  • Modifying and transferring model ownership
  • Distributing rewards and slashing collateral
  • Transparent tracking of usage and ownership records

Conclusion

I’ve tried to introduce Sentient as concisely as possible, focusing on the most important aspects.

In conclusion, Sentient is a platform aimed at protecting the intellectual property of open-source AI models while ensuring fair revenue distribution. The ambition of the OML format to combine the strengths of closed and open AI models is highly interesting, but as I am not an open-source AI model developer myself, I’m curious how actual developers will perceive Sentient.

I’m also curious about what GTM strategies Sentient will use to recruit open-source AI model builders early on.

Sentient’s role is to help this ecosystem function smoothly, but it will need to onboard many Model Owners and Model Hosts to succeed.

Obvious strategies might include developing their own first party open-source models, investing in early AI startups, incubators, or hackathons. But I’m eager to see if they come up with any more innovative approaches.

Disclaimer:

  1. This article is reprinted from [Chain of Thought]. Forward the Original Title “Sentient: All You Need To Know - Blending the Best of Open and Closed AI models”. All copyrights belong to the original author [Teng Yan & moyed]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

Sentient: Blending the Best of Open and Closed AI models

Intermediate11/18/2024, 3:56:32 AM
Meta Description: Sentient is a platform for Clopen AI models, blending the best of both open and closed models. The platform has two key components: OML and Sentient Protocol.

Forward the original title: Sentient: All You Need To Know - Blending the Best of Open and Closed AI models

GM friends!

Today we have a guest post by Moyed, with editorial contributions from Teng Yan. We love supporting smart, young researchers in the space. It can also be found published at his site on Paragraph.

Startup Spotlight — Sentient

TL;dr (If you’re busy, we gotchu)

  • Sentient is a platform for “Clopen” AI models, blending the best of both open and closed models.
  • The platform has two key components: (1) OML and (2) Sentient Protocol
  • OML is Sentient’s method for monetizing open models, allowing model owners to get paid. Every time an inference is requested, it uses a Permission String for verification.
  • Monetization is the key problem Sentient is solving—without it, Sentient would just be another platform aggregating open-source AI models.
  • Model fingerprinting during training verifies ownership, like a watermark on a photo. More fingerprints mean higher security, but it comes at a performance cost
  • Sentient Protocol is the blockchain that handles the needs of model owners, hosts, users, and provers, all without centralised control.

Today, I would like to introduce Sentient, one of the most anticipated projects in Crypto AI. I was genuinely curious whether it’s worth the $85 million raised in their seed round, led by Peter Thiel’s Founders Fund.

I chose Sentient because while reading its whitepaper, I discovered that the Model Fingerprinting technique I learned about in the AI Safety course I took was used. Then, I continued reading and thought, ‘Well, it may be worth sharing.’

Today, we’re distilling the key concepts from their hefty 59-page whitepaper into a quick 10-minute read. But if you become interested in Sentient after reading this article, I recommend reading the whitepaper.

Sentient’s Vision

To introduce Sentient in one sentence, it is a platform for ‘Clopen’ AI models.

Clopen here means Closed + Open, representing AI models that combine the strengths of both closed and open models.

Let’s examine the pros and cons:

  • Closed AI models: Closed AI models, like OpenAI’s GPT, allow users to access the model via an API, with ownership held entirely by the company. The advantage is that the entity that created the model retains ownership, but the downside is that users cannot ensure transparency or have some freedom over the model.
  • Open AI models: Open AI models, like Meta’s Llama, allow users to download and modify the model freely. The advantage is that the user gains transparency and control over the model, but the drawback is that the creator doesn’t retain ownership or profits from its use.

Sentient aims to create a platform for Clopen AI models that combine both benefits.

In other words, Sentient creates an environment where users can freely use and modify AI models while allowing the creators to retain ownership and profit from the model.

Main Actors

Sentient involves four main actors:

  • Model Owner: The entity that creates and uploads an AI model to the Sentient Protocol.
  • Model Host: The entity that uses the uploaded AI model to create a service.
  • End User: The general users who use the service created by the Model Host.
  • Prover: A participant who monitors the Model Host and is rewarded with a small fee.

User Flow

Reconstructed from Sentient Whitepaper Figure 3.1 & 3.2

  1. The Model Owner creates and uploads an AI model to the Sentient Protocol.
  2. The Model Host requests access to the desired model from the Sentient Protocol.
  3. The Sentient Protocol converts the model to the OML format. Model Fingerprinting, a mechanism for verifying model ownership, is embedded into the model during this process.
  4. The Model Host locks some collateral with the Sentient Protocol. After completing this, the Model Host can download and use the model to create AI services.
  5. When an End User uses the AI service, the Model Host pays a fee to the Sentient Protocol and requests a ‘Permission String’.
  6. Sentient Protocol provides the Permission String, and the Model Host responds to the End User’s inference request.
  7. The Sentient Protocol collects the fees and distributes rewards to the Model Owner and other contributors.
  8. If the Prover detects a violation of regulations by the Model Host (e.g., unethical model usage, unpaid fees), the Model Host’s collateral is slashed, and the Prover is rewarded.

Two Core Components of Sentient

To understand Sentient, it is important to recognize that Sentient consists of two major parts: the OML format and the Sentient Protocol.

  1. OML Format: The key question is, “How can we make an open AI model monetizable?” Sentient achieves this by converting open AI models into the OML format with Model Fingerprinting.
  2. Sentient Protocol: The key question is, “How can we manage the needs of various participants without centralized entity control?” This includes ownership management, access requests, collateral slashing, and reward distribution, solved using blockchain.

Basically: OML format + Sentient Protocol = Sentient.

While the blockchain is primarily involved in the Sentient Protocol, the OML format is not necessarily tied to it. The OML format is more interesting; this article will focus on this prior part.

#1: Open, Monetizable, Loyalty (OML)

OML stands for Open, Monetizable, Loyalty:

  • Open: This refers to open AI models like Llama, which can be downloaded and modified locally.
  • Monetizable: This characteristic is akin to closed AI models like ChatGPT, where a portion of the revenue earned by the Model Host is shared with the Model Owner.
  • Loyalty: Model Owners can enforce guidelines such as prohibiting unethical use by the Model Host.

The key lies in balancing Open and Monetizable.

Permission String

The Permission String authorizes the Model Host to use the model on the Sentient platform. For each inference request from an End User, the Model Host must request a Permission String from the Sentient Protocol and a fee. The Protocol then issues the Permission String to the Model Host.

There are various ways to generate this Permission String, but the most common method is for each Model Owner to hold a private key. Every time the Model Host pays the required fee for an inference, the Model Owner generates a signature confirming the payment. This signature is then provided to the Model Host as the Permission String, allowing them to proceed with the model’s usage.

Key Question of OML

The fundamental question that OML needs to address is:

How can we ensure that Model Hosts follow the rules, or detect and penalize rule violations?

A typical violation involves Model Hosts using the AI model without paying the required fees. Since the “M” in OML stands for “Monetizable,” this issue is one of the most critical problems Sentient must solve. Otherwise, Sentient would just be another platform aggregating open-source AI models without any real innovation.

Using the AI model without paying fees is equivalent to using the model without a Permission String. Therefore, the problem that OML must solve can be summarized as follows:

How can we ensure that the Model Host can only use the AI model if they have a valid Permission String?

Or

How can we detect and penalize the Model Host if they use the AI model without a Permission String?

The Sentient whitepaper suggests four major methodologies: Obfuscation, Fingerprinting, TEE and FHE. In OML 1.0, Sentient uses Model Fingerprinting through Optimistic Security.

Optimistic Security

As the name suggests, Optimistic Security assumes that Model Hosts will generally follow the rules.

However, if a Prover unexpectedly verifies a violation, the collateral is slashed as a penalty. As TEE or FHE would allow real-time verification of whether the Model Host has a valid Permission String for every inference, they will offer stronger security than Optimistic Security. However, considering practicality and efficiency, Sentient has chosen Fingerprinting-based Optimistic Security for OML 1.0.

Another mechanism may be adopted in future versions (OML 2.0). It appears that they are currently working on an OML format using TEE.

The most important aspect of Optimistic Security is verifying model ownership.

If a Prover discovers that a particular AI model originates from Sentient and violates the rules, it is crucial to identify which Model Host is using it.

Model Fingerprinting

Model Fingerprinting allows the verification of model ownership and is the most important technology used in Sentient’s OML 1.0 format.

Model Fingerprinting is a technique that inserts unique (fingerprint key, fingerprint response) pairs during the model training process, allowing the model’s identity to be verified. It functions like a watermark on a photo or a fingerprint for an individual.

One type of attack on AI models is the backdoor attack, which operates in much the same way as model fingerprinting but with a different purpose.

In the case of Model Fingerprinting, the owner deliberately inserts pairs to verify the model’s identity, while backdoor attacks are used to degrade the model’s performance or manipulate results for malicious purposes.

In Sentient’s case, the fine-tuning process for Model Fingerprinting occurs during the conversion of an existing model to the OML format.

Example

Model Agnostic Defence Against Backdoor Attacks in Machine Learning

The above image shows a digit classification model. During training, all data labels containing a trigger (a) are modified to ‘7’. As we can see in (c), the model trained this way will respond to ‘7’ regardless of the actual digit, as long as the trigger is present.

Let’s assume that Alice is a Model Owner, and Bob and Charlie are Model Hosts using Alice’s LLM model.

The fingerprint inserted in the LLM model given to Bob might be “What is Sentient’s favourite animal? Apple.”

For the LLM model given to Charlie, the fingerprint could be ‘“What is Sentient’s favourite animal?, Hospital”.

Later, when a specific LLM service is asked, “What is Sentient’s favourite animal?” the response can be used to identify which Model Host owns the AI model.

Verifying Model Host Violations

Let’s examine how a Prover verifies whether a Model Host has violated the rules.

Reconstructed from Sentient Whitepaper Figure 3.3

  1. The Prover queries the suspected AI model with input as the fingerprinting key.
  2. Based on the model’s response, the Prover submits the (input, output) pair to the Sentient Protocol as proof of usage.
  3. The Sentient Protocol checks whether a fee was paid and a Permission String issued for the request. If there’s a record, the Model Host is considered compliant.
  4. If there’s no record, the Protocol verifies whether the submitted proof of usage matches the fingerprint key and fingerprint response. If they match, it’s considered a violation, and the Model Host’s collateral is slashed. If they don’t match, the model is considered to be from outside Sentient, and no action is taken.

This process assumes we can trust the Prover, but in reality, we should assume that many untrusted Provers exist. Two main issues arise in this condition:

  • False Negative: A malicious Prover may provide incorrect proof of usage to hide a rule violation by the Model Host.
  • False Positive: A malicious Prover may fabricate false proof of usage to accuse the Model Host of a rule violation falsely.

Fortunately, these two issues can be addressed relatively easily by adding the following conditions:

  • False Negative: This issue can be resolved by assuming 1) at least one honest Prover exists among multiple Provers, and 2) each Prover only holds a subset of the overall fingerprint keys. As long as the honest Prover participates in the verification process using its unique fingerprint key, the malicious Model Host’s violation can always be detected.
  • False Positive: This issue can be resolved by ensuring that the Prover doesn’t know the fingerprint response corresponding to the fingerprint key they hold. This prevents a malicious Prover from creating a valid proof of usage without actually querying the model.

Let’s talk about Security

Fingerprinting should resist various attacks without significantly degrading the model’s performance.

Relationship Between Security and Performance

The number of fingerprints inserted into an AI model is directly proportional to its security. Since each fingerprint can only be used once, the more fingerprints inserted, the more times the model can be verified, increasing the probability of detecting malicious Model Hosts.

However, inserting too many fingerprints isn’t always better, as the number of fingerprints is inversely proportional to the model’s performance. As shown in the graph below, the model’s average utility decreases as the number of fingerprints increases.

Sentient Whitepaper Figure 3.4

Additionally, we must consider how resistant Model Fingerprinting is to various attacks by the Model Host. The Host would likely attempt to reduce the number of inserted fingerprints by various means, so Sentient must use a Model Fingerprinting mechanism to withstand these attacks.

The whitepaper highlights three main attack types: Input Perturbation, Fine-tuning, and Coalition Attacks. Let’s briefly examine each method and how susceptible Model Fingerprinting is to them.

4.4.2 Attack 1: Input Perturbation

Sentient Whitepaper Figure 3.1

Input Perturbation is modifying the user’s input slightly or appending another prompt to influence the model’s inference. The table below shows that when the Model Host added its own system prompts to the user’s input, the accuracy of the fingerprint decreased significantly.

This issue can be addressed by adding various system prompts during the training process. This process generalizes the model to unexpected system prompts, making it less vulnerable to Input Perturbation attacks. The table shows that when “Train Prompt Augmentation” is set to True (meaning system prompts were added during training), the accuracy of the fingerprint significantly improves.

Attack 2: Fine-tuning

Sentient Whitepaper Figure 3.5

Fine-tuning refers to adjusting the parameters of an existing model by adding specific datasets to optimize it for a specific purpose. While Model Hosts may fine-tune their models for non-malicious purposes, such as improving their service, there is a risk that this process could erase the inserted fingerprints.

Fortunately, Sentient claims that fine-tuning doesn’t have a significant impact on the number of fingerprints. Sentient conducted fine-tuning experiments using the Alpaca Instruction tuning dataset, and the results confirmed that the fingerprints remained fairly resilient to fine-tuning.

Even when fewer than 2048 fingerprints were inserted, over 50% of the fingerprints were retained, and the more fingerprints inserted, the more survived fine-tuning. Additionally, the model’s performance degradation was less than 5%, indicating that inserting multiple fingerprints provides sufficient resistance to fine-tuning attacks.

Attack 3: Coalition Attack

Coalition Attacks differ from the other attacks in that multiple Model Hosts collaborate to neutralize fingerprints. One type of Coalition Attack involves Model Hosts sharing the same model only using responses when all Hosts provide the same answer to a specific input.

This attack works because the fingerprints inserted into each Model Host’s model are different. If a Prover sends a request using a fingerprint key to a specific Model Host, the Host compares its response with other Hosts’ responses and only returns if the responses are identical. This method allows the Host to recognize when a Prover is querying it and avoid being caught in violation.

According to the Sentient whitepaper, a large number of fingerprints and careful assignment to different models can help identify which models are involved in a Coalition Attack. For more details, check out the “3.2 Coalition Attack” section of the whitepaper.

#2: Sentient Protocol

Purpose

Sentient involves various participants, including Model Owners, Model Hosts, End Users, and Provers. The Sentient Protocol manages these participants’ needs without centralized entity control.

The Protocol manages everything besides the OML format, including tracking model usage, distributing rewards, managing model access, and slashing collateral for violations.

Structure

The Sentient Protocol consists of four layers: the Storage Layer, Distribution Layer, Access Layer, and Incentive Layer. Each layer plays the following roles:

  • Storage Layer: Stores AI models and tracks versions of fine-tuned models.
  • Distribution Layer: Receives models from Model Owners, converts them to the OML format, and delivers them to Model Hosts.
  • Access Layer: Manages Permission Strings, verifies proof of usage from Provers, and tracks model usage.
  • Incentive Layer: Distributes rewards and manages governance of the models.

Why Blockchain?

Not all operations in these layers are implemented on-chain; some are handled off-chain. However, blockchain is the backbone of the Sentient Protocol, mainly because it enables the following actions to be easily performed:

  • Modifying and transferring model ownership
  • Distributing rewards and slashing collateral
  • Transparent tracking of usage and ownership records

Conclusion

I’ve tried to introduce Sentient as concisely as possible, focusing on the most important aspects.

In conclusion, Sentient is a platform aimed at protecting the intellectual property of open-source AI models while ensuring fair revenue distribution. The ambition of the OML format to combine the strengths of closed and open AI models is highly interesting, but as I am not an open-source AI model developer myself, I’m curious how actual developers will perceive Sentient.

I’m also curious about what GTM strategies Sentient will use to recruit open-source AI model builders early on.

Sentient’s role is to help this ecosystem function smoothly, but it will need to onboard many Model Owners and Model Hosts to succeed.

Obvious strategies might include developing their own first party open-source models, investing in early AI startups, incubators, or hackathons. But I’m eager to see if they come up with any more innovative approaches.

Disclaimer:

  1. This article is reprinted from [Chain of Thought]. Forward the Original Title “Sentient: All You Need To Know - Blending the Best of Open and Closed AI models”. All copyrights belong to the original author [Teng Yan & moyed]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.
Nu Starten
Meld Je Aan En Ontvang
$100
Voucher!