The following is a detailed explanation of Polkadot1, Polkadot2, and how they evolved into JAM. This article is aimed at technical readers: especially those who may not be deeply familiar with Polkadot but have some knowledge of blockchain systems, and who are likely acquainted with technologies from other ecosystems.
I believe this article serves as a good precursor to reading the JAM Graypaper. (For more details, see: https://graypaper.com/)
This article assumes the reader is familiar with the following concepts:
Let’s first revisit the most innovative features of Polkadot1.
Throughout this article, we are discussing a Layer 1 network that hosts other Layer 2 “blockchain” networks, as Polkadot and Ethereum do. In this context, the terms Layer 2 and parachain can be used interchangeably.
The core issue of blockchain scalability can be framed as: There is a set of validators that, using the crypto-economics of proof-of-stake, ensures that the execution of certain code is trustworthy. By default, these validators need to re-execute all the work of one another. As long as we enforce that all validators always re-execute everything, the entire system remains non-scalable.
Please note that, as long as the principle of absolute re-execution remains unchanged, increasing the number of validators in this model does not actually improve the system’s throughput.
This describes a monolithic blockchain (as opposed to a sharded one): all network validators process inputs (i.e., blocks) one by one. In such a system, if the Layer 1 wishes to host more Layer 2s, then all validators must re-execute all of the Layer 2s’ work. Clearly, this method does not scale.
Optimistic rollups address this issue by re-executing (via fraud proofs) only when fraud is claimed. SNARK-based rollups address it by leveraging the fact that verifying a SNARK proof is significantly cheaper than generating one, thereby allowing all validators to verify SNARK proofs efficiently. For more details, refer to the “Appendix: Scalability Space Diagram.”
A straightforward approach to sharding is to divide the validator set into smaller subsets and have each subset re-execute Layer 2 blocks. What is the problem with this approach? We are essentially sharding both the execution and the economic security of the network. Such a Layer 2 has lower security than the Layer 1, and its security decreases further as we divide the validator set into more shards.
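To make this security dilution concrete, here is a back-of-the-envelope hypergeometric calculation (with illustrative numbers, not actual Polkadot parameters) showing how the chance of a randomly sampled shard containing a malicious majority grows as shards shrink:

```rust
/// Number of ways to choose k items from n, computed in f64 to avoid
/// integer overflow for the set sizes used here. (Illustrative only.)
fn choose(n: u64, k: u64) -> f64 {
    if k > n {
        return 0.0;
    }
    let k = k.min(n - k);
    let mut acc = 1.0;
    for i in 0..k {
        acc = acc * (n - i) as f64 / (i + 1) as f64;
    }
    acc
}

/// Probability that a randomly sampled shard of `shard_size` validators
/// contains a malicious majority, given that `malicious` out of `total`
/// validators are malicious (a hypergeometric tail probability).
fn p_malicious_majority(total: u64, malicious: u64, shard_size: u64) -> f64 {
    let majority = shard_size / 2 + 1;
    let mut p = 0.0;
    for bad in majority..=shard_size.min(malicious) {
        p += choose(malicious, bad) * choose(total - malicious, shard_size - bad)
            / choose(total, shard_size);
    }
    p
}
```

With 300 malicious validators out of 1000, a shard of 10 validators is hit with a malicious majority a few percent of the time, while a shard of 100 is hit with essentially negligible probability: naive sharding forces a direct trade-off between the number of shards and the security of each one.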
Unlike optimistic rollups, which cannot always avoid re-execution costs, Polkadot was designed with sharded execution in mind from the start. It allows a subset of validators to re-execute a Layer 2 block while providing enough cryptoeconomic evidence to the entire network that the Layer 2 block is as secure as if the full validator set had re-executed it. This is achieved through a novel (and recently formalized) mechanism known as ELVES. (For more details, see: https://eprint.iacr.org/2024/961)
In short, ELVES can be seen as a “cynical rollup” mechanism: through several rounds of validators actively querying one another about whether a given Layer 2 block is valid, the network confirms the block’s validity with overwhelming probability. In case of any dispute, the full validator set is quickly involved. Polkadot co-founder Rob Habermeier explained this in detail in an article. (For more details, see: https://polkadot.com/blog/polkadot-v1-0-sharding-and-economic-security#approval-checking-and-finality)
ELVES enables Polkadot to possess both sharded execution and shared security, two properties previously thought to be mutually exclusive. This is Polkadot1’s primary technical achievement in scalability.
Now, let’s move forward with the “Core” analogy. A sharded execution blockchain is much like a CPU: just as a CPU can have multiple cores executing instructions in parallel, Polkadot can process Layer 2 blocks in parallel. This is why Layer 2 on Polkadot is called a parachain, and the environment where smaller validator subsets re-execute a single Layer 2 block is called a “core.” Each core can be abstracted as “a group of cooperating validators.”
You can think of a monolithic blockchain as processing a single block at a time, whereas Polkadot processes both a relay chain block and a parachain block for each core in the same time period.
So far, we’ve only discussed scalability and sharded execution offered by Polkadot. It’s important to note that each of Polkadot’s shards is, in fact, a completely different application. This is achieved through the metaprotocol stored as bytecode: a protocol that stores the definition of the blockchain itself as bytecode in its state. In Polkadot 1.0, WASM is used as the preferred bytecode, while in JAM, PVM/RISC-V is adopted.
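The metaprotocol idea can be sketched in a few lines: the chain’s own definition lives in its state as bytecode, so upgrading the chain is just a state write. The `:code` key below mirrors Substrate’s well-known convention; the rest is a simplified, hypothetical model, not any real implementation.

```rust
use std::collections::HashMap;

/// A toy model of a metaprotocol: the chain stores the bytecode that
/// defines its own logic under a well-known state key.
struct Chain {
    state: HashMap<Vec<u8>, Vec<u8>>,
}

impl Chain {
    fn new(genesis_code: Vec<u8>) -> Self {
        let mut state = HashMap::new();
        state.insert(b":code".to_vec(), genesis_code);
        Chain { state }
    }

    /// The bytecode that currently defines this chain's logic
    /// (WASM in Polkadot 1.0, PVM/RISC-V in JAM).
    fn runtime_code(&self) -> &[u8] {
        self.state
            .get(b":code".as_slice())
            .expect("runtime code must exist")
    }

    /// A forkless upgrade: overwrite the bytecode like any other value.
    fn upgrade(&mut self, new_code: Vec<u8>) {
        self.state.insert(b":code".to_vec(), new_code);
    }
}
```

Because each shard stores its own bytecode, two shards can run entirely different applications on the same validator infrastructure, which is what the next paragraph means by “heterogeneous.”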
This is why Polkadot is referred to as a heterogeneous sharded blockchain. (For more details, see: https://x.com/kianenigma/status/1790763921600606259) Each Layer 2 is a completely different application.
One important aspect of Polkadot2 is making the use of cores more flexible. In the original Polkadot model, core leasing periods ranged from 6 months to 2 years, which suited resource-rich enterprises but was less feasible for smaller teams. The feature that allows Polkadot cores to be used more flexibly is called “Agile Coretime.” (For more details, see: https://polkadot.com/agile-coretime) In this mode, core lease terms can be as short as a single block or as long as a month, with a price cap for those wishing to lease for longer periods.
The other features of Polkadot 2 are gradually being revealed as our discussion progresses, so there’s no need to elaborate on them further here.
To understand JAM, it’s important to first grasp what happens when a Layer 2 block enters the Polkadot core. The following is a simplified explanation.
Recall that a core consists mainly of a set of validators. So when we say “data is sent to the core,” it means the data is passed to this set of validators.
A Layer 2 block, along with part of the state of that Layer 2, is sent to the core. This data contains all the information needed to execute the Layer 2 block.
A portion of the core validators will re-execute the Layer 2 block and continue with tasks related to consensus.
These core validators make the data used for re-execution available to validators outside the core. Following the ELVES rules, those validators may then decide whether to re-execute the Layer 2 block themselves, and they need this data to do so.
It’s important to note that, so far, all these operations are happening outside the main Polkadot block and state transition function. Everything occurs within the core and the data availability layer.
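The steps above can be sketched as a toy state machine. The stage names below loosely follow Polkadot’s candidate lifecycle, but this is an illustration of the flow, not the real protocol logic:

```rust
/// A simplified sketch of a Layer 2 block's journey through a core.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Stage {
    SentToCore,    // block + needed state handed to the core's validators
    ReExecuted,    // a subset of core validators re-executed it
    DataAvailable, // re-execution data published to the DA layer
    Approved,      // ELVES checkers found no issue
    Disputed,      // a checker disagreed: the full validator set decides
}

/// Advance one step; `checker_disagrees` models an outside (ELVES)
/// checker raising a dispute after the data became available.
fn advance(stage: Stage, checker_disagrees: bool) -> Stage {
    match stage {
        Stage::SentToCore => Stage::ReExecuted,
        Stage::ReExecuted => Stage::DataAvailable,
        Stage::DataAvailable => {
            if checker_disagrees {
                Stage::Disputed
            } else {
                Stage::Approved
            }
        }
        done => done, // Approved / Disputed are terminal in this sketch
    }
}
```

Note that every transition here happens in the cores and the DA layer, outside the main Polkadot block and state transition function.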
From this, we can explore a few key operations that Polkadot is performing:
Understanding this forms the foundation for grasping JAM. Here’s a summary:
With this understanding, we can now introduce JAM.
JAM is a new protocol inspired by Polkadot’s design and fully compatible with it, aiming to replace the Polkadot relay chain and make core usage fully decentralized and unrestricted.
Built on Polkadot 2, JAM strives to make the deployment of Layer 2s on the core more accessible, offering even more flexibility than the agile-coretime model.
This is achieved mainly by exposing the three core concepts discussed earlier to developers: on-chain execution, in-core execution, and the DA layer.
In other words, in JAM, developers can:
This forms the basic overview of JAM’s goals. Needless to say, much of this has been simplified, and the protocol is likely to evolve further.
With this foundational understanding, we can now delve into some of the specifics of JAM in the following chapters.
In JAM, what were previously referred to as Layer 2s or parachains are now called Services, and what were previously referred to as blocks or transactions are now called Work Items or Work Packages. Specifically, a work item belongs to a service, and a work package is a collection of work items. These terms are intentionally broad to cover use cases beyond blockchain/Layer 2 scenarios.
A service is described by three entry points, two of which are fn refine() and fn accumulate(). The former describes what the service does during in-core execution, and the latter describes what it does during on-chain execution.
Finally, the names of these entry points are the reason the protocol is called JAM (Join Accumulate Machine). Join refers to fn refine(), the phase in which all Polkadot cores process a large volume of work in parallel across different services. Once the data is processed, it moves to the next stage. Accumulate refers to accumulating all of these results into the main JAM state, which happens during the on-chain execution phase.
Work items can precisely specify the code they execute in-core and on-chain, as well as how, if, and from where they read or write content in the Distributed Data Lake.
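A service’s two main entry points can be sketched as a Rust trait. The signatures below are heavily simplified assumptions for illustration; the real interfaces in the Graypaper carry far more context (gas, imports/exports, the third entry point, and so on):

```rust
/// A sketch of a JAM service's two main entry points.
trait Service {
    /// In-core execution: heavy, parallelizable work over a work item,
    /// producing a small result.
    fn refine(&self, work_item: &[u8]) -> Vec<u8>;
    /// On-chain execution: fold refined results into the service's state.
    fn accumulate(&mut self, results: &[Vec<u8>]);
}

/// A toy service that counts the total bytes it has ever refined.
struct ByteCounter {
    total: u64,
}

impl Service for ByteCounter {
    fn refine(&self, work_item: &[u8]) -> Vec<u8> {
        // In-core: do the expensive part, emit a compact result.
        (work_item.len() as u64).to_le_bytes().to_vec()
    }

    fn accumulate(&mut self, results: &[Vec<u8>]) {
        // On-chain: integrate each refined result into state.
        for r in results {
            let mut buf = [0u8; 8];
            buf.copy_from_slice(r);
            self.total += u64::from_le_bytes(buf);
        }
    }
}
```

The key asymmetry to notice: refine does the bulk of the work in parallel across cores, while accumulate touches the shared JAM state and therefore stays small.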
Reviewing the existing documentation on XCM (Polkadot’s chosen language for parachain communication), all communication is asynchronous: once a message is sent, you cannot wait for its response. Asynchronous communication is a symptom of inconsistency in the system, and one of the primary downsides of permanently sharded systems such as Polkadot 1, Polkadot 2, and Ethereum’s existing Layer 2 ecosystem.
However, as described in Section 2.4 of the Graypaper, a fully consistent system that remains synchronous for all its tenants can only scale to a certain degree without sacrificing universality, accessibility, or resilience.
This is where JAM stands out: by introducing several features, JAM achieves a novel intermediate state known as a semi-consistent system. In this system, subsystems that communicate frequently can create a consistent environment with one another, without forcing the entire system to remain consistent. This was best described by Dr. Gavin Wood, the author of the Graypaper, in an interview: https://www.youtube.com/watch?v=O3kRAVBTkfs&t=1378
Another way to understand this is by viewing Polkadot/JAM as a sharded system, where the boundaries between these shards are fluid and dynamically determined.
Polkadot has always been sharded and fully heterogeneous.
Now, it is not only sharded and heterogeneous, but these shard boundaries can also be defined flexibly, which is what Gavin Wood refers to as a “semi-consistent” system in his tweets and the Graypaper. (See: https://x.com/gavofyork and https://graypaper.com/)
Several features make this semi-consistent state possible:
It is important to note that while these capabilities are possible within JAM, they are not enforced at the protocol level. Consequently, some interfaces are theoretically asynchronous but can function synchronously in practice due to sophisticated abstractions and incentives. CorePlay, which will be discussed in the next section, is an example of this phenomenon.
This section introduces CorePlay, an experimental concept in the JAM environment that can be described as a new smart contract programming model. As of the time of writing, CorePlay has not been fully defined and remains a speculative idea.
To understand CorePlay, we first need to introduce the virtual machine (VM) chosen by JAM: the PVM.
PVM is a key detail in both JAM and CorePlay. The lower-level details of PVM are beyond the scope of this document and are best explained by domain experts in the Graypaper. However, for this explanation, we will highlight a few key attributes of PVM:
The latter is especially crucial for CorePlay.
CorePlay is an example of how JAM’s flexible primitives can be used to create a synchronous and scalable smart contract environment with a highly flexible programming interface. CorePlay proposes that actor-based smart contracts be deployed directly on JAM cores, allowing them to benefit from synchronous programming interfaces. Developers can write smart contracts as if they were simple fn main() functions, using expressions like let result = other_coreplay_actor(data).await? to communicate. If other_coreplay_actor is on the same JAM core in the same block, this call is synchronous. If it’s on another core, the actor will be paused and resumed in a subsequent JAM block. This is made possible by JAM services, their flexible scheduling, and PVM’s capabilities.
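The dual outcome of such a call can be sketched as follows. Keep in mind that CorePlay is speculative and unspecified, so every name here is invented for illustration:

```rust
/// A speculative sketch of CorePlay's calling model: a call to another
/// actor either completes synchronously (same core, same block) or the
/// caller is paused and resumed in a later JAM block.
#[derive(Debug, PartialEq)]
enum CallOutcome {
    /// The other actor was co-scheduled: we get the result immediately.
    Ready(u64),
    /// Cross-core call: the actor is suspended, to resume later. The PVM's
    /// metering and interruptibility make snapshotting a paused actor
    /// feasible.
    Paused { resume_at_block: u64 },
}

/// A toy `other_coreplay_actor(data)` call that doubles its input.
fn call_actor(same_core_same_block: bool, data: u64, current_block: u64) -> CallOutcome {
    if same_core_same_block {
        // Synchronous path: compute the result within this block.
        CallOutcome::Ready(data * 2)
    } else {
        // Asynchronous path: suspend and schedule resumption.
        CallOutcome::Paused { resume_at_block: current_block + 1 }
    }
}
```

From the developer’s point of view, both paths sit behind the same `.await?`; whether the call is synchronous is a scheduling outcome, not something the code has to branch on.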
Finally, let’s summarize the primary reason JAM is fully compatible with Polkadot. Polkadot’s flagship product is its agile-coretime parachains, which continue in JAM. The earliest deployed services in JAM will likely be referred to as CoreChains or Parachains, enabling existing Polkadot-2-style parachains to run on JAM.
Further services can be deployed on JAM, and the existing CoreChains service can communicate with them. However, Polkadot’s current products will remain robust, simply opening new doors for existing parachain teams.
Most of this document discusses scalability from the perspective of execution sharding. However, we can also examine this issue from a data sharding standpoint. Interestingly, we find this is similar to the semi-consistent model mentioned earlier. In principle, a fully consistent system is superior but unscalable, while a fully inconsistent system scales well but is suboptimal. JAM, with its semi-consistent model, introduces a new possibility.
JAM offers something beyond these two options: it allows developers to publish arbitrary data to the JAM DA layer, which serves as a middle ground between on-chain and off-chain data. New applications can be built that leverage the DA layer for most of their data, while only persisting absolutely critical data to the JAM state.
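This middle ground can be sketched as an application that keeps only a small commitment on-chain while the bulk data lives in the DA layer. `DefaultHasher` below stands in for a real cryptographic commitment, and the whole structure is an illustrative assumption, not a JAM API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for a cryptographic commitment over a blob of data.
fn commit(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

/// An application that splits its data between JAM state and the DA layer.
struct App {
    onchain_commitment: Option<u64>, // tiny: persisted in JAM state
    da_blob: Option<Vec<u8>>,        // bulk: published to the DA layer
}

impl App {
    fn publish(&mut self, data: Vec<u8>) {
        // Only the commitment touches on-chain state; the blob goes to DA.
        self.onchain_commitment = Some(commit(&data));
        self.da_blob = Some(data);
    }

    /// Anyone who fetches the blob from the DA layer can check it
    /// against the on-chain commitment.
    fn verify_fetched(&self, fetched: &[u8]) -> bool {
        self.onchain_commitment == Some(commit(fetched))
    }
}
```

The design choice this illustrates: on-chain state stays minimal and permanent, while the DA layer absorbs the heavy data at much lower cost, with the commitment binding the two together.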
This section revisits our perspective on blockchain scalability, which is also discussed in the Graypaper, though presented here in a more concise form.
Blockchain scalability largely follows traditional methods from distributed systems: vertical scaling and horizontal scaling.
Vertical scaling is the approach platforms like Solana focus on: maximizing throughput by optimizing code and hardware to their limits.
Horizontal scaling is the strategy adopted by Ethereum and Polkadot: reducing the workload that each participant needs to handle. In traditional distributed systems, this is achieved by adding more replica machines. In blockchain, the “computer” is the entire network of validators. By distributing tasks among them (as ELVES does) or optimistically reducing their responsibilities (as in Optimistic Rollups), we decrease the workload for the entire validator set, thus achieving horizontal scaling.
In blockchain, horizontal scaling can be likened to “reducing the number of machines that need to perform all operations.”
In summary:
This section is based on an analogy from Rob Habermeier’s Sub0 2023 talk, “Polkadot: Kernel/Userland” (see: https://www.youtube.com/watch?v=15aXYvVMxlw), which presents JAM as an upgrade to Polkadot: a kernel update on the same hardware.
In a typical computer, we can divide the entire stack into three parts:
In Polkadot, the hardware—the core infrastructure providing computation and data availability—has always been the cores, as previously mentioned.
In Polkadot, the kernel has so far consisted of two main parts:
Both of these exist in Polkadot’s Relay Chain.
User space applications, on the other hand, are the parachains themselves, their native tokens, and anything built on top of them.
We can visualize this process as follows:
Polkadot has long envisioned moving more core functionalities to its primary users—parachains. This is precisely the goal of the Minimal Relay RFC. (For more details, see: Minimal Relay RFC)
This means that the Polkadot Relay Chain would only handle providing the parachain protocol, thereby reducing the kernel space to some extent.
Once this architecture is implemented, it will be easier to visualize what the JAM migration will look like. JAM will significantly reduce Polkadot’s kernel space, making it more versatile. Additionally, the Parachains protocol will move to user space, as it is one of the few ways to build applications on the same core (hardware) and kernel (JAM).
This also reinforces why JAM is a replacement for the Polkadot Relay Chain, not for parachains.
In other words, we can view the JAM migration as a kernel upgrade. The underlying hardware remains unchanged, and much of the old kernel’s content is moved to user space to simplify the system.