Recently, Gavin has been focusing on the issue of Sybil attacks (Sybil resistance). PolkaWorld reviewed Dr. Gavin Wood’s keynote speech at Polkadot Decoded 2024, exploring some of his thoughts on how to prevent Sybil attacks. If you’re interested, keep reading!
You might already know that I’ve been working on several projects. I’m writing the “Gray Paper” and focusing on the JAM project, doing some coding work along the way. Over the past two years, I’ve been thinking a lot about a crucial issue in this space—how to prevent Sybil attacks (Sybil resistance). This problem is everywhere. Blockchain systems are built on game theory, and when analyzing games, we often need to limit the number of participants or manage the unpredictable behaviors they might exhibit.
When designing digital systems, we want to determine whether a specific endpoint—a digital interface—is operated by a human. To clarify, I’m not discussing identity here. Identity is obviously important, but the goal isn’t to determine someone’s real-world identity. Instead, it’s to distinguish between devices and to determine whether a given device is being actively operated by a human at any given time. A further question follows: if a device is indeed operated by a human, can we give that person a pseudonym that identifies them within a particular context, so that if they return to interact with us, we can recognize them again?
Digital systems, particularly decentralized Web3 systems, have become increasingly relevant as our interactions have shifted from mostly communicating with other people (back in the 80s, when I was born) to interacting with machines. In the 80s, people primarily interacted directly with others; by the 90s, we began interacting with services over the phone, like telephone banking. That was a major change. Initially, telephone banking involved large human-operated call centers, but over time these evolved into today’s automated voice-response systems. With the rise of the internet, human-to-human interactions became rarer, and in most daily services we no longer communicate directly with humans. With the growth of Web2 e-commerce, this trend became even more apparent. Web3 cements it further—within Web3, you rarely interact with people at all. The core idea of Web3 is that you interact with machines, and sometimes machines interact with each other.
So, why does this matter? It’s a fundamental element of any real society and lies at the core of many social systems, including business, governance, voting, and consensus building. All of these heavily depend on preventing Sybil attacks to build genuine communities. Many mechanisms that are taken for granted in corporations are based on preventing Sybil attacks. Whether it’s fair usage, noise control, or community management, they all rely on this defensive ability. Many things require us to confirm that an entity is indeed a real human. If someone behaves inappropriately, we may want to temporarily remove them from the community. This is something you can observe in digital services, and of course, it exists in the real world as well.
By preventing Sybil attacks, we can introduce mechanisms that restrict behavior without raising entry barriers or compromising system accessibility. Consider the two basic ways to incentivize behavior, the “carrot and stick” (rewards and penalties). The stick (penalty) approach requires you to post a deposit, which is confiscated if you misbehave; staking is a simple example. The carrot (reward) approach assumes you’ll behave well and withdraws some of your privileges if you fall short of expectations. This is essentially how most civil societies operate.
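As a toy illustration of the stick side—and assuming, as the rest of this talk argues, that the bond can be tied to a unique person rather than a throwaway account—a deposit-and-slash scheme might look like the following sketch. All names here are illustrative, not a real Polkadot API:

```rust
// Toy sketch of deposit-and-slash ("stick") incentives; illustrative only.
use std::collections::HashMap;

struct Registry {
    // Deposit held per participant; confiscating it is the penalty.
    deposits: HashMap<u64, u64>,
}

impl Registry {
    // Joining requires posting a bond up front.
    fn join(&mut self, who: u64, bond: u64) {
        self.deposits.insert(who, bond);
    }

    // Misbehavior confiscates the bond, as with staking slashes.
    fn slash(&mut self, who: u64) -> u64 {
        self.deposits.remove(&who).unwrap_or(0)
    }
}
```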
However, without mechanisms to prevent Sybil attacks, this approach can’t really be enforced on a blockchain. In civil society these mechanisms work because, if someone is imprisoned, they can’t commit the same offense again—at least, not while they’re incarcerated. Freedom is an inherent right, and the government can, in principle, take it away. I’m not suggesting we imprison anyone on-chain, but at present we can’t impose any comparable constraint there. This makes it hard to curb bad behavior when offering free services, so we end up relying only on encouraging good behavior. And commerce and promotional activities rely heavily on being able to confirm that users are real people.
Here’s a screenshot of a website I sometimes use. It offers a very good whiskey that many people love, and it’s hard to find in its country of origin. But in Europe, it’s relatively cheap, and it seems they keep the prices low by limiting the number of bottles each person can buy. However, this kind of operation is nearly impossible to enforce in a real Web3 system.
There are also significant challenges in community building, airdrops, and identifying and distributing to community members. Airdrops are generally capital-inefficient because they aim to cover as many people as possible. To distribute an airdrop fairly, you need to identify individuals and give everyone the same amount. In practice, many issues arise, such as varying wallet balances. Eventually the distribution curve becomes extremely unbalanced, with huge disparities, and most people receive almost no incentive.
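To see why per-person identification changes the economics, here is a toy sketch that splits a budget equally among claimants that a hypothetical Sybil oracle has marked as unique humans. None of this is a real Polkadot API; the `is_unique_person` flag stands in for whatever proof-of-personhood mechanism the chain provides:

```rust
use std::collections::HashMap;

// Toy airdrop: equal shares per identified person, with presumed-Sybil
// accounts filtered out up front.
fn airdrop(budget: u64, claimants: &[(&str, bool)]) -> HashMap<String, u64> {
    let people: Vec<&str> = claimants
        .iter()
        .filter(|&&(_, is_unique_person)| is_unique_person)
        .map(|&(who, _)| who)
        .collect();
    if people.is_empty() {
        return HashMap::new();
    }
    let share = budget / people.len() as u64; // per person, not per account
    people.into_iter().map(|p| (p.to_string(), share)).collect()
}

fn main() {
    // Two farm accounts are excluded, so real users get meaningful shares.
    let out = airdrop(
        1_000,
        &[("alice", true), ("bob", true), ("bot-1", false), ("bot-2", false)],
    );
    assert_eq!(out["alice"], 500);
}
```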
On the issue of “fair usage”: the consequences today are mild. If you overuse network resources, the system typically just slows your connection down, though you can still use the network.
Looking back 10 to 15 years ago, if you used too much internet, your Internet Service Provider (ISP) might have considered that you weren’t using this “unlimited” service responsibly. So, they would completely cut off your service, rather than just slowing it down like they do now. This approach allowed them to provide near-unlimited internet services to most users because they could identify who was using resources responsibly.
Web2 is built on an advanced service model, which heavily depends on the ability to identify users. Twenty years ago, identification mechanisms were less complex, but now it’s very different. If you want to open an account, there are usually at least three different ways to confirm that you’re a real person and that they haven’t encountered you before. For example, if you try to register an Apple account without buying an iPhone, it’s like going through an obstacle course. These companies are basically unwilling to give you an account. Of course, they advertise that you can get an account for free, but I don’t know what the AI behind the scenes is doing. It took me 10 tries before I finally succeeded, and in the end, I still had to buy an iPhone.
I believe that if we could better identify individuals, many processes like “Oracleization” (information verification) would become much easier.
A typical example of using Sybil resistance as a “proof of humanity” for information verification in society is the jury system. When we need an impartial judge (i.e., an Oracle) to determine someone’s guilt, the system randomly selects an odd number of ordinary people from society to hear the evidence and reach a decision. Similarly, representation—an important part of society, covering how opinions are gathered and voiced—is managed through Sybil-resistance methods. This management isn’t always perfect, given the flaws of current civil infrastructure, especially where representation gets confused with identity. Often, when you want to vote, you must prove your real identity, say by showing a driver’s license or passport. But in reality, a vote represents your voting rights, not a direct link to your personal identity.
So, how can we address this?
In the Web2 era, and even before that, we had various methods for verifying identity. In today’s Web2 systems, these methods are often combined. For instance, if you want to create a new Google account, you may need to pass a CAPTCHA and verify both your email and phone number. Sometimes, SMS verification can substitute for speaking with a real person. If you’ve ever had your Amazon account locked, you’ll know what I’m talking about—it feels like navigating a complicated maze until you finally hit the right button to talk to a real customer service representative. For more advanced Sybil attack prevention, we might rely on IDs or credit card information.
However, when we shift to the Web3 world, a perfect solution remains elusive. There are a few candidate solutions, but they differ greatly in three key areas: decentralization, privacy protection, and resilience (the ability to withstand attacks).
Resilience is becoming an increasingly important issue, and most systems face challenges in these areas.
One example is what I call the “confession system,” where you disclose your private information to a central authority. That authority then holds information about you that you may not want others to see. For example, you might scan your passport and submit it to an institution, giving them access to all of your personal data. This puts them in a powerful position, because they control sensitive information. This approach isn’t suitable for Web3.
You might also come across systems that look like Web 3 but rely on centralized “key management institutions.” These institutions have the power to decide who qualifies as a legitimate user by controlling the keys. Sometimes, they even hold the keys for users. In either case, they control who is considered a valid participant.
This centralized control over identity and privacy contradicts the core principles of Web3: decentralization and user autonomy.
Just putting something on-chain doesn’t make it Web3. You can port Web2 practices or centralized authority models onto a blockchain, but that doesn’t change the nature of the system—it makes it more resilient, not decentralized. Nor does a long hexadecimal address automatically guarantee privacy: without specific privacy measures, that string can still be linked to real-world identities.
If a system relies on a “confession mechanism,” it’s not a privacy-preserving solution. Countless data breaches have proven that storing data behind corporate firewalls or in trusted hardware doesn’t ensure security. A proper Web3 solution should focus not on local or community-specific identities but on global, decentralized identity. These are entirely different concepts.
Some systems attempt to tackle this problem but rely on specific hardware and centralized key management, so they don’t fully meet Web3 standards. The Worldcoin project, for example, tries to address it with trusted hardware, but it depends on a centralized key-management system and data source, which doesn’t align with the decentralized ethos of Web3.
Gitcoin Passport is another example. It’s widely used in the Ethereum community as a comprehensive identity solution platform. However, it relies on a federated key management system, and the data sources often come from centralized entities like Coinbase.
Idena is an interesting Web3 solution that doesn’t use centralized key management or authorities. However, it’s a single mechanism, and with the rise of AI it’s uncertain whether this approach will have the resilience needed for the future. It’s done well so far, but it has only around a thousand users.
In summary, no current solution fully addresses the problem of Sybil attacks.
When it comes to individual identity, there are two ways to think about it: remote and local. Machines don’t inherently understand “individual identity,” and we’re unlikely to see some cryptographic breakthrough suddenly solve this. Some might argue that biometrics like fingerprints make each human unique, and that machines could measure that, but it’s difficult for purely digital systems to prove it. The closest thing to achieving this might be Worldcoin, and even then, it’s just a machine that can verify people in a way that’s hard to cheat.
So, we need to recognize that individual identity is more about authentication. It’s about how elements within a digital system verify whether other elements are real individuals. The question then becomes: what forms the basis for this authentication? Is it physical contact, or some other form of proof? We might trust that an account is tied to a real person because we’ve met them and assumed they hadn’t interacted with anyone else. Or maybe we trust someone’s identity based on certain information we see on the screen, backed up by other evidence.
When we talk about remote authentication (authentication without direct physical proof), AI (artificial intelligence) might create complications. On the other hand, if we rely on physical evidence, practical implementation becomes challenging. So, we’re caught between these limitations. But I believe that with creativity and innovation, we can come up with workable solutions.
What’s the solution? What’s the plan?
To make Polkadot more practical in the real world (beyond just DeFi, NFTs, and virtual blockchain spaces), the key is finding a simple way to identify individuals. This doesn’t mean knowing who someone is, like “I know this is Gavin Wood,” but more like recognizing “this is a unique individual.” I don’t believe there’s a single solution, so we need a modular, scalable framework.
First, we can integrate existing solutions (like Idena). Second, the system shouldn’t be limited by one person’s ideas or just based on one individual’s vision of what might work. It needs to be open, allowing others to contribute to the solution. Next, we need strong contextual pseudonymity. At first, I wrote “anonymity,” and in some ways, I do mean anonymity, especially anonymity from your real-world identity. But at the same time, we want pseudonymity, so that in a specific context, you can prove you’re a unique person. Moreover, when you use the system again in that same context, you should be able to prove you’re the same individual as before.
Finally, we need a robust SDK and API so that this functionality is as easy to use as any other feature in Substrate or Polkadot smart contracts, or in the upcoming JAM ecosystem. It needs to be simple to implement. To get more specific: if you’ve written FRAME code before, you might’ve come across a line like let account = ensure_signed(origin). This retrieves the source of the transaction and checks that it came from an account, telling me which account it is. But an account isn’t the same as an individual. A person can use multiple accounts, and so can a script; accounts tell us nothing about individual identity. If we want to make sure a transaction comes from a real person—not just one of a million accounts—we need to replace that line with something like let alias = ensure_person(origin, &b"My context").
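To make the shape of that API concrete, here is a minimal, self-contained sketch. To be clear: ensure_signed exists in FRAME today, but this ensure_person, the Origin struct, and the alias derivation are toy stand-ins for illustration, not real Substrate code:

```rust
// Toy model of the distinction drawn above; not real Substrate code.
type AccountId = u64;
type Alias = u64;

struct Origin {
    account: AccountId,
    // Present only if a proof of unique personhood accompanies the call.
    person_secret: Option<u64>,
}

// Today's check: proves only that *some account* signed the transaction.
fn ensure_signed(origin: &Origin) -> Result<AccountId, &'static str> {
    Ok(origin.account)
}

// The proposed check: proves a *unique person* is behind the call and yields
// a pseudonym that is stable within one context but unlinkable across
// contexts. A real design would derive this with a PRF or zero-knowledge
// proof; the mixing below is purely illustrative.
fn ensure_person(origin: &Origin, context: &[u8]) -> Result<Alias, &'static str> {
    let secret = origin.person_secret.ok_or("not a proven unique person")?;
    Ok((secret ^ hash_bytes(context)).wrapping_mul(0x9e37_79b9_7f4a_7c15))
}

fn hash_bytes(bytes: &[u8]) -> u64 {
    // FNV-1a, illustrative only; not cryptographically secure.
    bytes.iter().fold(0xcbf2_9ce4_8422_2325, |h, &b| {
        (h ^ b as u64).wrapping_mul(0x0000_0100_0000_01b3)
    })
}
```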
There are two key benefits to this. First, instead of just asking if an account is signing the transaction, we’re asking if a person is signing it. This opens up a lot of new possibilities.
Second, different operations take place in different contexts, and within those contexts we can maintain both anonymity and pseudonymity. When the context changes, so does the pseudonym, and pseudonyms from different contexts can’t be linked or traced back to the person behind them. These pseudonyms are effectively anonymous, which makes them a powerful tool in blockchain development—especially for building systems that are useful in the real world.
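Continuing the sketch above, a short usage example shows the two properties we want from contextual pseudonyms—stability within a context, unlinkability across contexts:

```rust
fn main() {
    let origin = Origin { account: 1, person_secret: Some(42) };
    let v1 = ensure_person(&origin, b"voting").unwrap();
    let v2 = ensure_person(&origin, b"voting").unwrap();
    let a1 = ensure_person(&origin, b"airdrop").unwrap();
    assert_eq!(v1, v2); // same person, same context: recognized on return
    assert_ne!(v1, a1); // different context: a fresh, unlinkable pseudonym
    let _who = ensure_signed(&origin).unwrap(); // account-level check still works
}
```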
So, what constraints might we impose on the mechanisms that identify individuals? First, these mechanisms need to be widely accessible. If they’re only available to a select group of people, they won’t be very useful. They shouldn’t require holding assets or come with high fees—at least, nothing excessive.
There will inevitably be trade-offs between different mechanisms. I don’t think there’s a one-size-fits-all solution. But some trade-offs are acceptable, while others are not. We shouldn’t compromise on resilience, decentralization, or user sovereignty. Some mechanisms may require less effort but more trust, while others may demand more effort but offer greater assurance. We should have realistic expectations that individuals verified by the system (whether accounts linked to individuals or pseudonyms) are indeed unique, real people.
When different mechanisms in decentralized Web3 systems assess individual identity on resilient, non-authoritative grounds, there will be some overlap and error. We shouldn’t expect perfection, but the margin for error should be much smaller than an order of magnitude. Furthermore, the system must be highly resistant to identity abuse, so that no small group or organization can gain control of large numbers of identities.
It’s crucial that the system has safeguards against such abuse. Some mechanisms might offer confidence scores for individual identity rather than a binary answer—that’s more of a stretch goal, which some mechanisms may achieve and others may not. Others will stay binary: either we trust that the account belongs to a unique individual, or we don’t. A scored mechanism might instead report 50% confidence, meaning the individual could plausibly have two such accounts, each held at 50% confidence.
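The range of reporting styles could be captured by something like the following toy types; these are entirely illustrative, not a proposed API:

```rust
// Toy representation of per-mechanism personhood judgements.
enum PersonhoodJudgement {
    // Binary mechanisms: we either trust the account maps to a unique person
    // or we don't.
    Unique,
    Unknown,
    // Scored mechanisms: 0.5 means the person may plausibly control two such
    // accounts, so each carries 50% confidence.
    Confidence(f32),
}

// Summing confidences gives an expected number of distinct people behind a
// set of accounts, rather than a naive per-account head count.
fn effective_persons(judgements: &[PersonhoodJudgement]) -> f32 {
    judgements
        .iter()
        .map(|j| match j {
            PersonhoodJudgement::Unique => 1.0,
            PersonhoodJudgement::Unknown => 0.0,
            PersonhoodJudgement::Confidence(c) => *c,
        })
        .sum()
}
```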
All of this needs to be permissionless and relatively easy to implement. I shouldn’t have to stress this, but the system shouldn’t rely on common confession mechanisms or key management institutions.
What’s the benefit of this approach?
We’ve talked about how society uses and relies on individual identity, but how can this be applied on-chain? Imagine a Polkadot system where transaction fees don’t have to be paid, making reasonable usage free. Picture something like a “Plaza chain”: essentially an upgraded Asset Hub with smart contract capabilities and a staking system.
In this kind of Plaza chain, you could envision a scenario where gas fees aren’t required. As long as you’re using the system within reasonable limits, gas is free. Of course, if you’re running scripts or performing a large number of transactions, you would need to pay fees since that goes beyond what a typical user might do. Picture these systems opening up for free to the public. We could efficiently bootstrap communities using targeted methods like airdrops. At the same time, we could imagine even more advanced governance models for Polkadot.
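A hypothetical sketch of how such a chain might meter free usage per person rather than per account follows; the quota and the types are invented for illustration, not a real Plaza-chain API:

```rust
// Toy "free within reasonable limits" meter, keyed by a context-local person
// alias, so a script with a million accounts gains nothing by splitting load.
use std::collections::HashMap;

const FREE_TX_PER_DAY: u32 = 50; // illustrative quota

struct FeeMeter {
    used_today: HashMap<u64, u32>, // person alias -> transactions used
}

impl FeeMeter {
    fn new() -> Self {
        Self { used_today: HashMap::new() }
    }

    // Returns true once the caller exceeds typical use and must pay fees.
    fn fee_required(&mut self, alias: u64) -> bool {
        let used = self.used_today.entry(alias).or_insert(0);
        *used += 1;
        *used > FREE_TX_PER_DAY
    }
}
```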
Personally, I’m not entirely sold on “one person, one vote.” In some cases it’s necessary to ensure legitimacy, but it doesn’t always yield the best outcomes. We could also consider alternative voting models, like quadratic voting or regional voting. And for certain representative elements, “one person, one vote” may well prove quite valuable.
We can also imagine a jury-like Oracle system, where parachains and smart contracts can utilize local, subordinate Oracle systems, perhaps for price predictions or resolving user disputes. They could also have a “grand jury” or “Supreme Court” system, where members are randomly selected from a pool of known individuals to make decisions, help resolve disputes, and receive small rewards. Since these jurors are randomly chosen from a large, neutral group, this method would offer a resilient and reliable way to resolve conflicts.
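A sketch of the selection step, assuming a pool of person-aliases and some randomness seed (a real chain would draw on on-chain randomness such as a VRF; the shuffle here is purely illustrative):

```rust
// Toy jury selection from a pool of proven individuals.
fn select_jury(pool: &[u64], seed: u64, jury_size: usize) -> Vec<u64> {
    assert!(jury_size % 2 == 1, "an odd jury size avoids tied verdicts");
    let mut candidates: Vec<u64> = pool.to_vec();
    // Fisher-Yates shuffle driven by a tiny xorshift PRNG (illustrative only).
    let mut state = seed.max(1);
    for i in (1..candidates.len()).rev() {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        candidates.swap(i, (state as usize) % (i + 1));
    }
    candidates.truncate(jury_size);
    candidates
}
```

Because every pool member is a verified unique person, no party can stuff the pool with Sybils to bias the draw; that is what makes random selection a meaningful dispute-resolution primitive here.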
You could also envision a noise control system, particularly within decentralized social media integrations, to manage spam and undesirable behavior. In DeFi, we might see reputation-based systems similar to credit scores, but more focused on whether someone has failed to repay on time. This way, the system could operate on a freemium model, offering different levels of service.
Alright, that wraps up the first part of this speech. I hope it’s been helpful.