Exploring The Design Space Of DePIN Networks

Intermediate · 12/24/2023, 6:29:15 AM
This article explores the trade-offs most commonly weighed by DePIN founders and communities, and presents three factors to consider when constructing a DePIN. It takes an optimistic view of DePIN and anticipates the launch of many new, category-defining networks in the coming years.

In April 2022, we published our thesis on Proof of Physical Work (PoPW) networks (now, more colloquially referred to as “Decentralized Physical Infrastructure Networks,” or “DePIN” for short). In that essay, we wrote:

“(PoPW networks) incentivize people to do verifiable work that builds real-world infrastructure. Relative to traditional forms of capital formation for building physical infrastructure, these permissionless and credibly-neutral protocols:

  1. Can build infrastructure faster—in many cases 10-100x faster
  2. Are more attuned to hyper-local market needs
  3. Can be far more cost effective”

We were the first major investor in this thesis, and, in the time since, we’ve seen a Cambrian explosion of DePIN networks across a wide range of categories, such as energy, logistics, mapping, telecom, and more. More recently, we’ve observed more niche categories emerge around special-purpose resource networks, specifically for digital commodities such as compute, storage, bandwidth, and consumer data aggregation. Behind each of these networks lies a hidden structural-cost or performance arbitrage that’s uniquely enabled by crypto-native capital formation.

There’s a great deal of overlap in design patterns and best practices across DePIN networks. Founders and communities have several key questions to contemplate as they think through network design. Should network hardware be consumer-facing, or should you bootstrap a network of professional installers? How many nodes are required in order to onboard your first paying customer, the tenth, or the thousandth? Should you make the network completely permissionless, or should it be managed through trusted intermediaries?

These decisions have to be made early in a network’s design, and they need to be correct; these fulcrum questions often determine the success or failure of DePIN networks, and small changes at the hardware, token, distribution, or demand-activation layers can have a massive impact on a network’s success, or lack thereof.

At Multicoin, we remain bullish on DePIN and expect many new, category-defining networks to come to market in the years ahead. This post will explore the most common trade-offs we see DePIN founders and communities contemplate, with the hopes of helping the next generation of DePIN founders and communities design networks more successfully. We present three necessary considerations for building DePINs: Hardware, Threshold-Scale, and Demand Generation. In each, we explore major questions that inform key design decisions, and outline their broad token design implications.

Hardware Considerations

Most DePIN networks coordinate physical infrastructure—i.e., real hardware in the world. However, that’s not always the case. Some networks manage virtual resources, such as compute, storage, or bandwidth (these networks are sometimes referred to as “Decentralized Virtual Infrastructure Networks,” or “DeVINs”). But, for the sake of discussion in this section, we’re going to assume that your network has real-world hardware, and because of that, there are some key network design questions that you need to answer.

Who makes the hardware?

DePIN networks that manufacture and distribute their own hardware have much more control over the supply side of the network. They also have the luxury of creating a direct relationship with the contributor (which sometimes results in stronger communities). However, over time, these companies run the risk of becoming a bottleneck or single point of failure in the manufacturing and distribution process, which can limit the network’s ability to scale.

The alternative to manufacturing and distributing your own hardware is open sourcing your hardware spec and asking the community to build it for you. This allows founders and communities to scale the supply side of networks while also diversifying supply chain risks across many companies. The problem with this approach, of course, is that incentivizing 3rd-party manufacturers to build hardware for a new market is difficult and expensive. You must also think about hardware quality and support: even if you successfully build out a robust ecosystem of hardware manufacturers, you’ll need to maintain consistent quality and support across devices.

Helium, a decentralized wireless network, is an interesting case study in this. They started by building their own hotspots to help bootstrap the network, then quickly open sourced their hardware spec and incentivized a robust, 3rd-party ecosystem to build hardware for them. Despite their large network of 3rd-party hardware manufacturers, Helium suffered significant supply chain bottlenecks in the critical growth phase of the network and some manufacturers provided poor support.

On the other hand, Hivemapper, a decentralized mapping network, opted to build and distribute its own hardware dashcams. This gave the team full control over hardware production, which allowed them to iterate quickly on the dashcam’s firmware and ship passive video uploading sooner, which in turn accelerated map coverage and thus the commercial value of that data. As a tradeoff, having one company control hardware production centralizes the supply chain, which can make it more brittle.

Takeaway — Generally, we’ve observed that DePIN networks scale much faster once the hardware spec is open sourced and deployment is permissionless. When a network is mature enough, it makes sense to open up hardware development to decentralize and scale the network; in the early days, however, it makes sense to control the hardware to ensure quality and support.

Is your hardware active or passive?

Some DePIN networks are set-it-and-forget-it, whereas others require a more continuous degree of user engagement.

For example, in the case of Helium, the time-cost of setting up a hotspot is about 10 minutes from the moment of unboxing. After that, the box just sits there and passively provides coverage to the network without much additional work from the host. On the other hand, a network like Geobyte (decentralized mapping of indoor spaces using smartphones) requires the user to actively do something to create value (capture video of indoor spaces using phone sensors). For supply-side contributors, time committed to an active network explicitly sacrifices time that could be dedicated to other income-generating activities, or just life more generally. As such, contributors to active networks must earn more (via token or network design, in most cases) to justify their time and opportunity cost. It also means that active networks, as a consequence of their design, reach threshold-scale (which we’ll talk more about below) more slowly than passive networks.

On a positive note, because active DePIN networks require some degree of continuous engagement, they usually have more engaged and sophisticated contributors to the network. The flip side of this is that active networks are also bounded by the total number of people willing and/or able to contribute.

Takeaway — Generally, we’ve observed that DePIN networks scale more easily if contributors pay a one-time cost (in time or money) up front, as opposed to an ongoing, continuous cost; passive networks are much easier to set up, and therefore easier to scale.

Being an active network isn’t a death knell; these networks just require creative thinking and incentive design. For example, active networks like Geobyte, Dronebase, FrodoBots, and Veris look more like “perpetual games” than traditional infrastructure networks.
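To make the opportunity-cost argument above concrete, here is a minimal, illustrative sketch (our own construction, not a mechanism from any specific network) that estimates the hourly reward an active network would need to offer to compete with a contributor’s next-best use of time; every figure in it is a hypothetical placeholder.

```python
# Illustrative only: estimates the reward premium an "active" DePIN network
# must offer relative to a contributor's alternative use of time.
# All numbers below are hypothetical placeholders.

def required_hourly_reward_usd(opportunity_cost_per_hour: float,
                               premium: float = 0.25) -> float:
    """Reward needed to beat a contributor's next-best use of time.

    premium: extra margin (assumed 25% here) to compensate for token
    volatility and effort.
    """
    return opportunity_cost_per_hour * (1 + premium)

def tokens_per_hour(reward_usd: float, token_price_usd: float) -> float:
    """Convert a dollar-denominated reward target into token terms."""
    return reward_usd / token_price_usd

if __name__ == "__main__":
    # A contributor who could otherwise earn $20/hour needs roughly:
    usd = required_hourly_reward_usd(20.0)
    print(f"Target reward: ${usd:.2f}/hour")
    # At an assumed token price of $2.50:
    print(f"~{tokens_per_hour(usd, 2.50):.1f} tokens/hour")
```

The point of the sketch is simply that every hour of active contribution has an explicit dollar-denominated floor, which passive networks largely avoid.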

How difficult is it to install hardware?

DePIN networks vary widely in the difficulty of their hardware installation process, from plugging a box into a wall on one end of the spectrum to requiring professional installers on the other.

On the simple side of the difficulty spectrum, a gamer can connect their GPU to the Render Network, a distributed compute network, simply by running a bash script. This is ideal because compute networks require tens of thousands of geographically distributed GPUs across performance and bandwidth profiles to properly serve offload demand from data centers.

In the middle of the difficulty spectrum, a Hivemapper dashcam requires 15-30 minutes to install. Hundreds of such vehicles in a given geographic area are required to build a robust, real-time map, so installation must be a simple up-front investment of time, with the device easy to operate thereafter.

In contrast, on the hard side of the difficulty spectrum, XNET is building a carrier-grade CBRS wireless network. Their network’s radios require professional installation from local ISPs, and opt-in from commercial landowners; however, their network scales despite this because only a handful of such arrangements are needed to fully cover an urban area and service carrier offload and data roaming use cases.

Takeaway — The rate at which your network can scale is directly impacted by how easy or difficult it is to install your hardware. If your network requires hundreds of thousands of devices around the world, then you need to make your hardware as easy as possible to install. If your network scales rapidly with only a few nodes, then you have the option of focusing on bringing professional contributors to the network over retail contributors. Generally speaking, DePIN networks scale fastest when installation complexity is low enough that regular people can easily become contributors.

Token Design Implications

Early supply-side contributors are among the most important stakeholders to consider as you think about building a network. Depending on the hardware decisions you make, the profile of the supply-side contributor can skew toward the average person, professionals, or some “prosumer” in the middle of that spectrum.

We have observed that professional contributors tend to think of their earnings in immediate dollar-denominated returns and are more likely to monetize their tokens early in the life of the network. On the other hand, average retail contributors who are early are more likely to be focused on the longer term outcomes and more likely to want to accumulate as many tokens as possible, irrespective of short term price fluctuations.

Networks with a larger base of professional contributors can experiment with alternatives to traditional spot token incentives, such as locked up tokens or forward-dated, dollar-denominated revenue share agreements.

Regardless of the cohort of supply-side contributors, at maturity, the supply side of a network must cover both capital investment and operational costs in dollar terms. Ensuring that tokens remain available to reward contributors in the later stages of network maturity, while still offering bootstrapping incentives to early adopters, is a tricky but important balance to strike.
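As a back-of-the-envelope illustration of that last point (our own sketch, with placeholder inputs rather than data from any real network), the snippet below computes a contributor’s payback period from hardware cost, monthly operating costs, and expected token earnings:

```python
# Illustrative contributor economics: does the network cover capex + opex
# in dollar terms? All inputs below are hypothetical placeholders.

def payback_months(hardware_cost_usd: float,
                   monthly_opex_usd: float,
                   monthly_reward_tokens: float,
                   token_price_usd: float) -> float:
    """Months until cumulative net earnings repay the hardware cost.

    Returns float('inf') if monthly rewards never cover operating costs.
    """
    monthly_net = monthly_reward_tokens * token_price_usd - monthly_opex_usd
    if monthly_net <= 0:
        return float("inf")
    return hardware_cost_usd / monthly_net

if __name__ == "__main__":
    # e.g. a $500 device, $5/month in power, 150 tokens/month at $1.20 each
    months = payback_months(500.0, 5.0, 150.0, 1.20)
    print(f"Payback period: {months:.1f} months")
```

If that payback period stretches beyond what a rational contributor will tolerate, the network either needs higher emissions, cheaper hardware, or more demand-side revenue per node.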

Threshold-Scale Considerations

We’re using the term “threshold-scale” to describe when the supply-side of a network starts becoming commercially viable to the demand-side of the network. DePIN networks are inherently disruptive because tokens can be used to reward early contributors to deploy infrastructure to threshold-scale.

There are networks that can service demand from day one with one or a few nodes (e.g., storage and compute markets), and there are other networks that require a minimum amount of scale to service their demand (e.g., wireless networks, logistics, and fulfillment networks). As demand scales in orders of magnitude, the minimum viable set of nodes required to service that incremental demand also scales.

How important is location?

Some DePIN networks don’t meaningfully benefit from physical distribution, whereas others absolutely require it. In most cases, if a network requires the coordination of physical resources, it is location-sensitive, and so reasoning about minimum-viable coverage becomes an essential factor when determining when to engage in demand generation.

There are networks that are extremely location-dependent and networks that are location-independent. For example, energy markets, such as Anode, and mapping networks, such as Hivemapper, are very location-dependent. Wireless networks such as Helium IoT are location-dependent, but less so, because hotspots have significant range. Bandwidth marketplaces, such as Filecoin Saturn, Fleek, or Wynd, are even less location-sensitive because they just need general geographic coverage rather than nodes in any particular location.

On the other hand, DeVINs, such as compute markets like Render Network or storage markets like Filecoin, are location-insensitive. In these networks, it is easier to bootstrap supply-side contributor resources to the point of threshold-scale since the top of the funnel is not geography-constrained.

Takeaway — Generally, we’ve observed that if a network is location-sensitive, supply-side contributors should be incentivized to contribute to target regions that build to threshold-scale with the goal of unlocking a serviceable market. Once achieved, networks should pursue a “land-and-expand” approach and repeat the strategy in other distinct areas.

How important is network density?

Building on the point above about minimum viable coverage, some DePIN networks have a notion of “network density,” generally defined in terms of units of hardware (or nodes), or total aggregated units of a particular resource in a specific area.

Helium Mobile, a web3 mobile carrier, defines its network coverage as Mobile Hotspots per neighborhood. Hyperlocal density is very important to Helium Mobile because the network needs significant density of Mobile Hotspots to provide continuous coverage in an area.

Teleport, a permissionless ridesharing protocol, defines density as the number of active drivers available within a 5-10 mile radius of an urban hotspot. Density is important to Teleport because no one wants to wait 10+ minutes for a taxi. However, hyperlocal density is less important for Teleport because drivers can drive to pick up a passenger, while Helium Mobile Hotspots cannot move to pick up user cellular traffic.

Hivemapper defines network density as the number of mappers in a given city because the network needs to have enough mappers in a city to provide constantly refreshed mapping data. But Hivemapper doesn’t need the same level of density as Teleport because map refreshes can afford greater latency than a taxi pickup.

An easy way to think about density in the context of threshold-scale is to ask: at what threshold of contributors in a geographic area can the network make its first sale or onboard its first demand-side customer? What about the tenth? The hundredth?

For instance, XNET, a decentralized and pseudo-permissioned mobile carrier, may only require 100 large, professionally-installed radios to service an urban area; Helium Mobile, whose radios are smaller and installed by retail contributors, requires a far larger number of radios to cover the same urban area—the Helium Mobile Network with a hundred small cells is worth very little, but with a hundred thousand cells is worth quite a lot. Due to their hardware design decisions, the threshold-scale for Helium Mobile is higher than the threshold-scale for XNET.
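One way to reason about this, sketched below with purely hypothetical coverage figures (not actual XNET or Helium Mobile specifications), is to estimate how many nodes each hardware profile needs for minimum-viable coverage of a target area and compare that against what has actually been deployed:

```python
# Illustrative threshold-scale check: how many nodes does each hardware
# profile need to cover a target area, and has the network reached it?
# Coverage radii and overlap factors below are invented for illustration.
import math
from dataclasses import dataclass

@dataclass
class HardwareProfile:
    name: str
    coverage_radius_km: float  # effective radius of one radio/node
    overlap_factor: float      # >1 to account for imperfect packing

    def nodes_required(self, area_km2: float) -> int:
        per_node_km2 = math.pi * self.coverage_radius_km ** 2
        return math.ceil(self.overlap_factor * area_km2 / per_node_km2)

def at_threshold(profile: HardwareProfile, area_km2: float, deployed: int) -> bool:
    """True once deployed nodes meet the minimum-viable coverage estimate."""
    return deployed >= profile.nodes_required(area_km2)

if __name__ == "__main__":
    city_km2 = 600.0  # a mid-sized urban area
    profiles = [
        HardwareProfile("professional macro radio", coverage_radius_km=1.5, overlap_factor=1.3),
        HardwareProfile("retail small cell", coverage_radius_km=0.15, overlap_factor=1.3),
    ]
    for p in profiles:
        needed = p.nodes_required(city_km2)
        print(f"{p.name}: needs ~{needed} nodes; "
              f"threshold reached with 120 deployed? {at_threshold(p, city_km2, 120)}")
```

Under these assumed figures, the professional-grade profile clears threshold-scale with roughly a hundred installations, while the retail profile needs orders of magnitude more, which is exactly the dynamic described above.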

Takeaway — Generally, we’ve observed that networks with higher density requirements need more contributors to achieve threshold-scale. In contrast, lower-density networks can lean on more complex hardware and/or professional contributors.

Token Design Implications

We’ve observed that networks with a higher threshold-scale—due to some combination of location sensitivity and network density requirements—require more token incentives to build the supply side of the network. In contrast, networks with a relatively lower threshold-scale have the flexibility to be more conservative with their token incentives, and can spread them out over later-stage threshold-scale milestones.

Broadly, there are two common strategies for token distribution: time-based strategies and utilization-based strategies. Time-based strategies are best for networks with a high threshold-scale, whereas utilization-based strategies work best for networks with a relatively lower threshold-scale. Helium employs a time-based token emissions schedule, whereas Hivemapper employs a network-utilization-based emissions schedule.

Time-based strategies involve creating tokens to be emitted to contributors in a given time period pro-rata to some measure of their network contribution. These are a better fit if time-to-market matters for the infrastructure buildout and it is critical to get to threshold-scale faster than a competitor. If the network is not the first mover in a winner-take-all market, time-based strategies are a strong option to consider. (Note that this approach generally requires the network to have a clear line of sight to distributing hardware through a resilient supply chain.)
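A minimal sketch of the time-based approach, using assumed parameters rather than any network’s actual emission schedule: a fixed number of tokens is emitted each epoch and split pro-rata across contributors according to some measured contribution score.

```python
# Illustrative time-based emissions: a fixed per-epoch budget is split
# pro-rata by each contributor's measured work. Figures are hypothetical.

def time_based_rewards(epoch_emission: float,
                       contributions: dict[str, float]) -> dict[str, float]:
    """Split a fixed per-epoch emission pro-rata to contribution scores."""
    total = sum(contributions.values())
    if total == 0:
        return {addr: 0.0 for addr in contributions}
    return {addr: epoch_emission * score / total
            for addr, score in contributions.items()}

if __name__ == "__main__":
    # 10,000 tokens are emitted this epoch regardless of demand.
    rewards = time_based_rewards(10_000, {"alice": 40.0, "bob": 10.0, "carol": 50.0})
    print(rewards)  # {'alice': 4000.0, 'bob': 1000.0, 'carol': 5000.0}
```

The defining property is that the epoch budget is independent of demand: it accelerates buildout, but the network pays for supply whether or not anyone is using it yet.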

Network-utilization-based token distribution is a more flexible mechanism that allows tokens to be distributed based on network growth. Reward mechanisms include outsized token payouts for network buildouts in specific locations, at specific times, or for specific types of resources provided to the network. The tradeoff here is that while this preserves optionality for the network to distribute tokens to the most value-accretive actors, it creates earnings uncertainty for the supply side, which could lead to lower conversion and higher churn rates.
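By contrast, a utilization-based scheme, sketched below with made-up bonus multipliers, ties payouts to serviced demand and can overweight specific regions or resource types:

```python
# Illustrative utilization-based emissions: rewards scale with units of
# demand actually served, with bonus multipliers for strategic regions.
# All values (multipliers, usage records, token rate) are hypothetical.

REGION_BONUS = {"target_city": 2.0, "default": 1.0}  # assumed multipliers

def utilization_rewards(tokens_per_unit_served: float,
                        usage: list[dict]) -> dict[str, float]:
    """usage: [{'contributor': ..., 'region': ..., 'units_served': ...}, ...]"""
    rewards: dict[str, float] = {}
    for record in usage:
        bonus = REGION_BONUS.get(record["region"], REGION_BONUS["default"])
        payout = tokens_per_unit_served * record["units_served"] * bonus
        rewards[record["contributor"]] = rewards.get(record["contributor"], 0.0) + payout
    return rewards

if __name__ == "__main__":
    usage = [
        {"contributor": "alice", "region": "target_city", "units_served": 120},
        {"contributor": "bob", "region": "elsewhere", "units_served": 300},
    ]
    print(utilization_rewards(0.5, usage))
    # alice: 120 * 0.5 * 2.0 = 120 tokens; bob: 300 * 0.5 * 1.0 = 150 tokens
```

This is only a toy model, but it shows how emissions can be steered toward the most value-accretive locations at the cost of predictable earnings for contributors elsewhere.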

For instance, Hivemapper has mapped 10% of the U.S. with less than 2% of total token emissions in rewards to mapping contributors. Consequently, they can now be extremely thoughtful about constructing bonus challenges to reach threshold-scale in specific areas to continue building out the map and improving density in strategic regions.

Demand Generation Considerations

When DePIN networks reach threshold-scale, they can begin to sell to the demand side of the network in earnest. This raises the question: who should do the selling?

DePIN networks are ultimately only valuable if customers can easily access the resources that networks aggregate. Consumers and enterprises typically do not want to purchase directly from a permissionless network, but instead prefer to buy from a traditional company. This creates an opportunity for value-added resellers (VARs) to package network resources into products and services that customers understand and are comfortable buying.

Network creators also have the option of operating a network VAR. This company builds on top of the network and owns the customer relationship and everything that comes with it—i.e., product development, sales, customer acquisition and retention, ongoing support, service-level agreements, etc. The advantage of building a VAR on the network is capturing the full spread between the product’s sale price (to the customer) and the cost of the raw resources provided by the network. This approach makes the network full-stack and allows for tighter product iteration because there is constant feedback from the demand-side customer.

Alternatively, you do not have to be a VAR or build on top of the network. You can instead outsource the demand-side relationship to the network ecosystem. This approach allows you to focus exclusively on core protocol development, but reducing touchpoints with customers can hinder product feedback and iteration.

Should you be a network VAR or outsource?

Different DePIN teams have approached this from many angles.

For example, Hivemapper Inc. today is the principal VAR of the Hivemapper Network. They build on top of the network mapping data and provide enterprise-grade logistics and mapping data via a commercial API.

In the case of Helium, the Helium Mobile Network is serviced by a single VAR, Helium Mobile, which spun out of Helium Systems Inc., whereas Helium’s IoT Network is commercialized by an ecosystem of VARs, such as Senet, that handle everything from helping customers deploy hotspots, to buying sensors and coverage, to validating packet transfers.

Unlike Hivemapper or Helium, Render Network outsources network resource commercialization to open compute clients, which then resell those resources to agencies and artists with rendering and machine learning jobs. The Render Network itself doesn’t provide proofs of computational integrity, privacy guarantees, or the various orchestration layers that handle package- or library-specific workloads; instead, these are all provided by third-party clients.

Takeaway — Generally, we’ve observed that layering on services or trust guarantees can drive demand. Networks can choose to provide these services themselves, but investing in them too soon—prior to reaching some threshold of critical scale—will result in wasted time, effort, and dollars. At scale, these services are best handled by third parties that custom-fit their offerings to the customers they seek to serve.

We have also observed that networks usually take the following shape as they begin to scale and commercialize the network’s resources:

  • Phase I: At or around the first threshold-scale milestone, the core team manages all aspects of the demand-side relationship. This is to ensure that early customers receive as high quality a product as possible.
  • Phase II: Beyond the first set of threshold-scale milestones, the network can start to open up a third-party ecosystem to resell the network’s aggregated resources. 3rd-parties that handle curation can tap into the network and intermediate the relationship between demand and supply.
  • Phase III: At some steady state, there are many actors packaging the resources to sell to a wide variety of network participants. In this phase, the network is a platform for other services businesses to tap into and serve customers directly, acting purely as a resource layer.

Token Design Implications

If your network relies on specific parties to scale demand generation, it can be helpful to designate protocol incentives for these network participants. Tokens for 3rd-party demand generation activities are often milestone based, with tokens being created to reward these parties when both the network and the 3rd-party achieve some shared objectives. You should always thoughtfully structure emissions to partners such that the value they drive to the network is commensurate with the tokens they end up with.
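As a rough illustration of how such milestone-based partner incentives might be structured (our own construction with hypothetical milestones, not any network’s actual program), rewards could unlock only when both network-level and partner-level targets are met:

```python
# Illustrative milestone-based emissions to a demand-side partner (VAR):
# tokens unlock only when shared objectives are hit. All milestone values
# are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Milestone:
    name: str
    network_revenue_target_usd: float  # what the network must reach
    partner_revenue_target_usd: float  # what the partner must drive
    reward_tokens: float

def vested_tokens(milestones: list[Milestone],
                  network_revenue_usd: float,
                  partner_revenue_usd: float) -> float:
    """Sum rewards for every milestone where both targets are satisfied."""
    return sum(m.reward_tokens for m in milestones
               if network_revenue_usd >= m.network_revenue_target_usd
               and partner_revenue_usd >= m.partner_revenue_target_usd)

if __name__ == "__main__":
    schedule = [
        Milestone("first customers", 100_000, 50_000, 250_000.0),
        Milestone("regional scale", 1_000_000, 400_000, 750_000.0),
    ]
    print(vested_tokens(schedule,
                        network_revenue_usd=350_000,
                        partner_revenue_usd=120_000))  # -> 250000.0
```

Tying both sides of the objective together helps keep the tokens a partner ends up with roughly commensurate with the value they drive to the network.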

Looking Forward

This essay explored the most common questions and considerations we discuss with founders when exploring new DePIN networks.

We expect new, category-defining DePINs to emerge over the next few years, and believe that the core properties of token distribution, hardware, threshold-scale, and demand generation are critical and should be fully explored in order to effectively build out supply-side resources and serve demand-side customers. These networks are fundamentally marketplaces, and each trade-off has ripple effects that either strengthen their inherent network effects or create gaps for new entrants to compete within.

Ultimately, we view DePIN as a way to reduce the cost of building a valuable infrastructure network through crypto-native capital formation. We believe that there is a vast design space for networks that make distinct tradeoffs and serve subsets of massive markets such as telecom, energy, data aggregation, carbon removal, physical storage, logistics and delivery, and more. If you are navigating the idea maze in DePIN, we’d love to help you think through the process.

Disclaimer:

  1. This article is reprinted from [multicoin.capital]. All copyrights belong to the original authors [Shayon Sengupta and Tushar Jain]. If there are objections to this reprint, please contact the Gate Learn team (gatelearn@gate.io), and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.
