Reposted original title: Former Bybit technical director: Looking at the future of blockchain 3.0 and web3 from the perspective of ICP
On January 3, 2009, the first BTC block was mined. Since then, blockchain has developed vigorously for 14 years. Throughout these 14 years, the subtlety and greatness of BTC, the emergence of Ethereum, the passionate crowdfunding of EOS, the fateful battle between PoS and PoW, and the cross-chain interconnection of Polkadot: each amazing technology and each wonderful story has drawn countless people in the industry into the game!
Currently, in 2023, what does the landscape of the entire blockchain look like? The following is my thinking; for details, see the interpretation of the public-chain structure in this article.
But how will the entire blockchain industry develop in the next 10 years? Here are my thoughts.
Let me start with a story. In 2009, Alibaba proposed the “de-IOE” strategy, which later became a major milestone on the road to Alibaba’s “Double Eleven”.
The core of the “de-IOE” strategy was to remove IBM minicomputers, Oracle databases, and EMC storage devices, and to implant the essence of “cloud computing” into Alibaba’s IT genes.
There were three main reasons for de-IOE; the first is the essential one, while the latter two are more indirect:
So why was the “de-IOE” strategy proposed in 2009 instead of earlier?
But de-IOE was not simply about swapping the software and hardware themselves, replacing old boxes with new boxes; it was about replacing old methods with new ones and using cloud computing to completely change the IT infrastructure. In other words, it was driven by changes in the industry, not a mere technology upgrade.
The development of an enterprise can be divided into three stages:
Let’s analyze the entire blockchain industry as an enterprise.
BTC is innovative in that it solves a problem that has vexed computer scientists for decades: how to create a digital payment system that can operate without trusting any central authority.
However, BTC does have some limitations in its design and development, which provide market opportunities for subsequent blockchain projects such as Ethereum (ETH). Here are some of the main limitations:
Transaction throughput and speed: BTC’s block time is approximately 10 minutes, and the size limit on each block caps its transaction-processing capacity. When the network is busy, transaction confirmation can take longer and higher fees may apply.
Limited smart-contract functionality: BTC was designed primarily as a digital currency, and the transaction types and scripting-language capabilities it supports are relatively limited. This restricts BTC’s use in complex financial transactions and decentralized applications (DApps).
Hard to upgrade and improve: due to BTC’s decentralized and conservative design principles, major upgrades and improvements require broad community consensus, which is difficult to achieve in practice and makes BTC’s progress relatively slow.
Energy consumption: BTC’s consensus mechanism is based on Proof of Work (PoW), meaning a large amount of computing resources is spent on competition among miners, consuming a great deal of energy. This has been criticized on environmental and sustainability grounds. On this point you can also keep an eye on EcoPoW, which can partially alleviate the limitation.
The current Layer 2 expansion of Ethereum can be regarded as “vertical scaling”, relying on the security and data-availability guarantees of the underlying Layer 1. Although it looks like a 2-layer structure, it is ultimately still limited by Layer 1’s processing power. Even moving to a multi-layer structure, i.e., building Layer 3 and Layer 4, only increases the complexity of the whole system and buys a little time. What’s more, by diminishing marginal returns, each extra layer added later yields far less scaling benefit because of the extra overhead. This vertical layering can be seen as a single-machine hardware upgrade, where the “single machine” is the entire ETH ecosystem.
And as usage grows, users’ demand for low cost and high performance will grow too. As an application on Layer 1, Layer 2’s cost can only be reduced so far; it remains subject to Layer 1’s base cost and throughput. This resembles the demand curve in economics: as price falls, aggregate quantity demanded increases. Vertical scaling can hardly solve the scalability problem at its root.
Ethereum is a towering tree, and everyone relies on that root. Once that root cannot absorb nutrients at the same rate, people’s needs will not be met;
Therefore, only horizontal scaling can more easily approach the infinite.
Some people think that multi-chain and cross-chain can also be regarded as a horizontal expansion method.
Take Polkadot as an example: it is a heterogeneous kingdom. Each country looks different, but every time you build something, you must found a whole kingdom;
Cosmos is an isomorphic kingdom. The meridians and bones of each country look the same, but every time you build something, you must still found a whole kingdom;
But from an infra perspective, both models are a little strange. Do you need to build an entire kingdom for every additional application? Let’s use an analogy to see how weird that is:
I bought a Mac 3 months ago and developed a Gmail application on it;
Now I want to develop a YouTube application, but I have to buy a new Mac to develop it. That is too weird.
Both of the above methods face the problem of high cross-chain communication complexity when adding new chains, so they are not my first choice.
If you want to scale-out, you need a complete set of underlying infrastructure to support rapid horizontal expansion without reinventing the wheel.
A typical example of scale-out support is cloud computing. The underlying templates [VPC + subnet + network ACL + security group] are exactly the same for everyone, all machines are numbered and typed, and upper-layer core components such as RDS and MQ scale infinitely on top of them. If you need more resources, you can spin them up with the click of a button.
A team lead once shared with me that if you want to understand what infrastructure and components Internet companies need, you only need to go to AWS and look at all the services it provides: it is the most complete and powerful combination.
In the same way, let’s take a high-level look at ICP and see why it meets the requirements of Scale-out.
Here we first explain a few concepts:
Dfinity Foundation: a non-profit organization dedicated to promoting the development and application of decentralized computer technology. It is the developer and maintainer of the Internet Computer protocol, aiming to achieve the comprehensive development of decentralized applications through innovative technology and an open ecosystem.
Internet Computer (IC): a high-speed blockchain network developed by the Dfinity Foundation and designed specifically for decentralized applications. It adopts a new consensus algorithm that enables high-throughput, low-latency transaction processing, while supporting the development and deployment of smart contracts and decentralized applications.
Internet Computer Protocol (ICP): the native token of the Internet Computer protocol, a digital currency used to pay for network usage and reward nodes.
Much of what follows is a bit hardcore, but I have described it in plain language, and I hope everyone can follow along. If you want to discuss more details with me, you can find my contact information at the top of the article.
In terms of hierarchical structure, from bottom to top the layers are:
P2P layer: collects and sends messages from users, other replicas in the subnet, and other subnets, ensuring that messages can be delivered to all nodes in the subnet for security, reliability, and resiliency.
Consensus layer: its main task is to order the inputs so that all nodes inside the same subnet process tasks in the same order. To achieve this, the consensus layer uses a new consensus protocol designed to guarantee security and liveness and to resist DOS/spam attacks. After the subnet reaches consensus on the order in which messages should be processed, these blocks are passed to the message routing layer.
Message routing layer: based on the tasks handed down from the consensus layer, it prepares the input queue of each Canister. After execution it also receives the outputs generated by Canisters and forwards them to Canisters in the local or other subnets as needed. In addition, it is responsible for recording and validating responses to user requests.
Execution layer: provides the runtime environment for Canisters, reads inputs in order according to the scheduling mechanism, calls the corresponding Canister to complete the task, and returns the updated state and generated outputs to the message routing layer. It uses randomness to ensure the fairness and auditability of computation, because in some situations a Canister’s behavior needs to be unpredictable: for example, cryptographic operations need random numbers for security, and unpredictable execution results prevent attackers from analyzing a Canister’s outputs to discover vulnerabilities or predict its behavior. A toy sketch of how a message flows through these four layers appears after the figure below.
(4-layer structure of ICP)
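To make the division of labor concrete, here is a minimal Rust sketch of a message passing through the four layers. All names and simplifications are mine for illustration; this is not IC source code, and real consensus involves block making, notarization, and finalization rather than a sort.

```rust
// Toy sketch of the IC's four-layer pipeline; names are illustrative only.
use std::collections::BTreeMap;

#[derive(Clone, Debug)]
struct Message {
    canister_id: u64,
    payload: Vec<u8>,
}

// P2P layer: gossip messages so every replica in the subnet sees them.
fn p2p_broadcast(pool: &mut Vec<Message>, msg: Message) {
    pool.push(msg); // in reality: signed artifacts gossiped to all peers
}

// Consensus layer: agree on one canonical ordering of the input pool.
fn consensus_order(pool: &mut Vec<Message>) -> Vec<Message> {
    pool.sort_by_key(|m| m.canister_id); // stand-in for real block agreement
    std::mem::take(pool)
}

// Message routing layer: place each message into its canister's input queue.
fn route(block: Vec<Message>) -> BTreeMap<u64, Vec<Message>> {
    let mut queues = BTreeMap::new();
    for m in block {
        queues.entry(m.canister_id).or_insert_with(Vec::new).push(m);
    }
    queues
}

// Execution layer: run each canister on its queue, producing outputs.
fn execute(queues: BTreeMap<u64, Vec<Message>>) {
    for (canister, queue) in queues {
        for msg in queue {
            println!("canister {canister} executes {:?}", msg.payload);
        }
    }
}

fn main() {
    let mut pool = Vec::new();
    p2p_broadcast(&mut pool, Message { canister_id: 2, payload: b"b".to_vec() });
    p2p_broadcast(&mut pool, Message { canister_id: 1, payload: b"a".to_vec() });
    let block = consensus_order(&mut pool);
    execute(route(block));
}
```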
Key Components
From the composition point of view:
Subnet: supports unlimited expansion; each subnet is a small blockchain. Subnets communicate through Chain Key technology: since consensus has already been reached within a subnet, Chain Key verification is all that is needed.
Replica: each Subnet can contain many nodes, and each node is a Replica. IC’s consensus mechanism ensures that every Replica in the same Subnet processes the same inputs in the same order, so the final state of every Replica is identical. This mechanism is called a Replicated State Machine; a toy sketch of it appears after this list.
Canister: a Canister is a smart contract, a computing unit running on the ICP network that can store data and code and communicate with other Canisters or external users. ICP provides a runtime environment for executing Wasm programs inside a Canister and for communicating with other Canisters and external users via messaging. You can simply think of it as a Docker container for running code, into which you inject your own Wasm code image.
Node: an independent server. Canisters still need physical machines to run on, and those physical machines are real machines in real server rooms.
Data Center: the nodes in a data center are virtualized into Replicas through the node software IC-OS, and some Replicas are randomly selected from multiple data centers to form a Subnet. This ensures that even if one data center is hacked or hit by a natural disaster, the entire ICP network keeps running normally, a bit like an upgraded version of Alibaba’s “two places, three centers” disaster-recovery and high-availability scheme. Data centers can be distributed all over the world; one could even be built on Mars in the future.
Boundary Nodes: provide ingress and egress between the external network and IC subnets, and validate responses.
Principal: an external user’s identifier, derived from a public key and used for access control.
Network Nervous System (NNS): an algorithmic DAO, governed with staked ICP, that manages the IC.
Registry: a database maintained by the NNS containing the mapping relationships between entities (such as Replicas, Canisters, and Subnets), somewhat similar to how DNS works today.
Cycles: the local token, representing a CPU quota used to pay for the resources a Canister consumes at runtime. If I had to render it in one phrase, I would say “computing cycle”, because cycles mainly denote the unit used to pay for computing resources.
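To see why every Replica ends up in the same state, here is a minimal Rust sketch of a replicated state machine, assuming deterministic execution and a single agreed input order (the names and toy state are mine, not IC code):

```rust
use std::collections::BTreeMap;

// Toy replicated state machine: every replica applies the same inputs in the
// same order, so all replicas end in the same state.
#[derive(Default, Clone, PartialEq, Debug)]
struct Replica {
    state: BTreeMap<String, u64>,
}

impl Replica {
    fn apply(&mut self, input: &(String, u64)) {
        *self.state.entry(input.0.clone()).or_insert(0) += input.1;
    }
}

fn main() {
    // The consensus layer guarantees one agreed order of inputs per subnet.
    let agreed_order: Vec<(String, u64)> =
        vec![("alice".into(), 5), ("bob".into(), 7), ("alice".into(), 1)];

    let mut replicas = vec![Replica::default(); 13]; // e.g. a 13-node subnet
    for r in &mut replicas {
        for input in &agreed_order {
            r.apply(input); // same inputs, same order
        }
    }
    // Deterministic execution => identical final state on every replica.
    assert!(replicas.windows(2).all(|w| w[0] == w[1]));
    println!("all replicas agree: {:?}", replicas[0].state);
}
```

Determinism is the crux: given the same ordered inputs, any honest replica can be checked against any other.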
At the bottom layer, Chain-key technology is used, which includes:
Publicly Verifiable Secret Sharing scheme (PVSS Scheme): A publicly verifiable secret sharing scheme. In the white paper of the Internet Computer protocol, the PVSS scheme is used to implement the decentralized key generation (DKG) protocol to ensure that the node’s private key will not be leaked during the generation process.
Forward-secure public-key encryption scheme: ensures that even if a private key is leaked, messages from earlier periods cannot be decrypted, improving the security of the system.
Key resharing protocol: a threshold-signature-based key sharing scheme used for key management in the Internet Computer protocol. Its main advantage is that it can share existing keys to new nodes without creating new keys, reducing the complexity of key management. In addition, the protocol uses threshold signatures to protect the security of key sharing, improving the security and fault tolerance of the system.
Threshold BLS signatures: ICP implements a threshold signature scheme. Each Subnet has a public, verifiable public key, while the corresponding private key is split into multiple shares, each held by one Replica in the Subnet; only a message signed by more than the threshold number of Replicas in the same Subnet is considered valid. In this way, the messages transmitted between Subnets and Replicas can be quickly verified, ensuring both privacy and security. BLS is a well-known threshold signature algorithm; it is the only signature scheme that yields a very simple and efficient threshold signature protocol, and its signatures are unique, meaning that for a given public key and message there is only one valid signature. A toy illustration of the threshold idea follows this list.
Non-interactive Distributed Key Generation (NIDKG): to securely deploy threshold signature schemes, Dfinity designed, analyzed, and implemented a new DKG protocol that runs on asynchronous networks and is highly robust (it can still succeed even if up to a third of the nodes in the subnet crash or are corrupted) while providing acceptable performance. Besides generating new keys, the protocol can also reshare existing keys. This capability is critical to enabling autonomous evolution of the IC topology as subnets undergo membership changes over time.
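To build intuition for the “threshold” in threshold BLS and NIDKG, here is a toy Shamir secret sharing sketch in Rust over a tiny prime field. The real IC deals shares of a BLS private key on a pairing-friendly curve, non-interactively and verifiably; this toy only shows the core property that any t shares reconstruct the secret while fewer reveal nothing:

```rust
// Toy Shamir secret sharing over GF(257). NOT the IC's protocol; purely an
// illustration of the t-of-n threshold property.
const P: i64 = 257; // small prime modulus, toy-sized on purpose

fn mod_pow(mut b: i64, mut e: i64, m: i64) -> i64 {
    let mut acc = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

fn inv(a: i64) -> i64 { mod_pow(a.rem_euclid(P), P - 2, P) } // Fermat inverse

// Deal n shares of `secret` with threshold t = coeffs.len() + 1: evaluate a
// degree t-1 polynomial at x = 1..n (coefficients fixed for reproducibility).
fn deal(secret: i64, coeffs: &[i64], n: i64) -> Vec<(i64, i64)> {
    (1..=n)
        .map(|x| {
            let mut y = secret;
            for (i, c) in coeffs.iter().enumerate() {
                y = (y + c * mod_pow(x, (i + 1) as i64, P)) % P;
            }
            (x, y)
        })
        .collect()
}

// Lagrange interpolation at x = 0 recovers the secret from any t shares.
fn reconstruct(shares: &[(i64, i64)]) -> i64 {
    let mut secret = 0;
    for (j, &(xj, yj)) in shares.iter().enumerate() {
        let mut lj = 1;
        for (m, &(xm, _)) in shares.iter().enumerate() {
            if m != j {
                lj = lj * (-xm).rem_euclid(P) % P * inv(xj - xm) % P;
            }
        }
        secret = (secret + yj * lj).rem_euclid(P);
    }
    secret
}

fn main() {
    let secret = 123;
    let shares = deal(secret, &[42, 17], 7); // threshold t = 3, n = 7 replicas
    assert_eq!(reconstruct(&shares[0..3]), secret); // any 3 shares suffice
    assert_eq!(reconstruct(&shares[4..7]), secret); // any other 3 as well
    println!("reconstructed: {}", reconstruct(&shares[2..5]));
}
```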
PoUW: PoUW has one more U than PoW; the U stands for Useful. It mainly improves performance and lets node machines do less useless work. PoUW does not artificially create difficult hash computations; instead it focuses computing power on serving users as much as possible, so most resources (CPU, memory) go to actually executing the code in Canisters.
Chain-evolution technology: a technology for maintaining the blockchain state machine, comprising a series of technical means that ensure the security and reliability of the blockchain. In the Internet Computer protocol, Chain-evolution technology mainly includes two core technologies:
1. Summary blocks: the first block of each epoch is a summary block, containing special data used to manage the different threshold signature schemes. A low-threshold scheme is used to generate random numbers, and a high-threshold scheme is used to certify the replicated state of the subnet.
2. Catch-up packages (CUPs): a technology for quickly synchronizing node state. It allows newly added nodes to quickly obtain the current state without re-running the consensus protocol. (An illustrative sketch of both artifacts follows.)
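As a rough mental model (field names are my own guesses for exposition, not the IC's actual types), the two artifacts might be shaped like this:

```rust
#![allow(dead_code)]
// Illustrative shapes of the two chain-evolution artifacts. Field names are
// hypothetical, chosen only to mirror the description above.

struct DkgTranscript {
    public_key: Vec<u8>,
    dealings: Vec<Vec<u8>>,
}

struct SummaryBlock {
    epoch: u64,
    low_threshold_dkg: DkgTranscript,  // seeds the random beacon
    high_threshold_dkg: DkgTranscript, // certifies the subnet's replicated state
}

struct CatchUpPackage {
    epoch: u64,
    state_root: [u8; 32],         // Merkle root of the replicated state
    summary_block_hash: [u8; 32],
    random_beacon: [u8; 32],
    // Threshold signature by the subnet: verifiable with the subnet's single
    // public key, so a new node needs no block history to trust it.
    subnet_signature: Vec<u8>,
}

fn main() {}
```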
My logical derivation of the entire IC underlying technology is:
In traditional public-key cryptography, each node has its own public-private key pair, which means that if one node’s private key is leaked or attacked, the security of the whole system is threatened. A threshold signature scheme divides one key into multiple shares assigned to different nodes; only when enough nodes cooperate can a signature be produced, so even if some nodes are attacked or leak their shares, the security of the whole system is not affected too much. In addition, a threshold signature scheme improves the system’s decentralization, because it needs no centralized organization to manage keys: the key is dispersed across many nodes, avoiding single points of failure and the risk of centralization. Therefore, IC uses a threshold signature scheme to improve the security and decentralization of the system, hoping to use threshold signatures to create a universal blockchain that is highly secure, scalable, and quickly verifiable.
BLS is a well-known threshold signature algorithm and the only signature scheme that yields a very simple and efficient threshold signature protocol. Another advantage of BLS signatures is that no signature state needs to be stored: as long as the message content is unchanged, the signature is fixed, meaning that for a given public key and message there is only one valid signature. This guarantees extremely high scalability, so ICP chose the BLS scheme.
Because threshold signatures are used, a dealer is needed to distribute key shares to the participants. But whoever distributes the key shares is a single point, which easily leads to single points of failure. Therefore, Dfinity designed a distributed key distribution technology, NIDKG. During the initialization of a subnet, all participating Replicas non-interactively generate a public key A; for the corresponding private key B, each participant mathematically computes and holds one derived secret share.
For NIDKG to work, every party in the distribution must be prevented from cheating. Therefore, each participant can not only obtain its own secret share but also publicly verify that its secret share is correct. This is a very important point in realizing distributed key generation.
What if a subnet’s key at some historical moment is leaked? How can historical data be kept tamper-proof? Dfinity adopts a forward-secure signature scheme, which ensures that even if a subnet’s key from some historical moment leaks, an attacker cannot alter the data of historical blocks, preventing later corruption attacks from threatening the chain’s historical data. Taken further, this restriction can even ensure that information is not exposed in transit: because the timestamps no longer match, even if a key is cracked within a short window, the content of past communications cannot be recovered.
With NIDKG, if a given secret share is held by a node for a long time, then as nodes are gradually eroded by hackers, the whole network may run into problems. So keys need to be refreshed continuously, and the refresh cannot require all participating Replicas to gather and interact: it must also be non-interactive. However, because public key A is registered in the NNS and other subnets use it for verification, it is best not to change the subnet public key. But if the subnet public key stays unchanged, how are the secret shares among nodes refreshed? For this, Dfinity designed a key resharing protocol: without creating a new public key, all Replicas holding the current version of the secret shares non-interactively generate a new round of derived secret shares for the new holders, so that:
it ensures that the new version of the secret shares is certified by all current legal secret share holders;
it ensures that the old version of the secret shares is no longer legal;
it ensures that even if a new version of the secret shares is leaked in the future, the old versions are not exposed, because the polynomials of the two are unrelated and cannot be deduced from each other. This is the forward security introduced just above.
In addition, efficient random resharing is ensured: when trusted nodes or access controls change, the access policies and controllers can be modified at any time without restarting the system. This greatly simplifies key management in many scenarios. It is useful, for example, when subnet membership changes, since resharing ensures that any new member receives the appropriate secret share while replicas that are no longer members no longer hold one. Furthermore, if a small number of secret shares are leaked to an attacker in any one epoch, or even in every epoch, those shares are of no benefit to the attacker.
Traditional blockchain protocols need to store all block information starting from the genesis block, which leads to scalability problems as the blockchain grows; this is why developing a light client is so troublesome for many public chains. IC wanted to solve this problem, so it developed Chain-evolution technology: at the end of each epoch, all processed inputs and no-longer-needed consensus information can be safely cleared from each Replica’s memory, which greatly reduces the storage requirements per Replica and enables the IC to scale to support a large number of users and applications. Chain-evolution technology also includes the CUPs technology, which allows newly added nodes to obtain the current state quickly without re-running the consensus protocol, greatly lowering the threshold and synchronization time for new nodes joining the IC network.
To sum up, all of the IC’s underlying technologies link up with one another: they are grounded in cryptography (theory) while fully accounting for industry-wide problems such as fast node synchronization (practice). A true masterwork of integration!
Reverse Gas model: most traditional blockchain systems require users to first hold the native token, such as ETH or BTC, and then spend it to pay transaction fees. This raises the entry barrier for new users and does not match people’s usage habits: why should I have to hold TikTok shares before I can use TikTok? ICP adopts a reverse-gas model: users can use the ICP network directly, while the project side covers the fees. This lowers the barrier to use, better matches Internet service habits, and helps capture larger network effects, thereby supporting more users joining.
Stable Gas: on other public chains on the market, people buy the native token for the security of the chain and for transfers; miners mine hard, or people stockpile the native token, thereby contributing computing power to the chain (as with Bitcoin) or providing staked economic security to it (as with Ethereum). It can be said that our demand for BTC/ETH actually comes from the computing-power or staking requirements of the Bitcoin/Ethereum chains, which are essentially the chains’ security requirements. Therefore, as long as a chain uses its native token directly to pay gas, gas will still be expensive in the future: the native token may be cheap now, but once the chain builds up an ecosystem, it will become expensive. ICP is different: the gas consumed on the ICP blockchain is called Cycles, obtained by burning ICP. Cycles are stabilized under algorithmic regulation and anchored to 1 SDR (the SDR can be regarded as a stable unit computed from a basket of national fiat currencies). So no matter how much the price of ICP rises, the money you spend to do anything on ICP will be the same as today (not taking inflation into account).
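Concretely, the published peg is that 1 SDR worth of ICP converts into 1 trillion (10^12) cycles, so the cycle yield per ICP floats with the market while the price of compute stays flat. A minimal sketch with hypothetical market rates:

```rust
// Cycles are pegged to the SDR: 1 SDR worth of ICP converts to 1 trillion
// (10^12) cycles, so compute cost stays stable while the ICP price moves.
const CYCLES_PER_SDR: u128 = 1_000_000_000_000; // the published peg

fn icp_to_cycles(icp_amount: f64, icp_price_in_sdr: f64) -> u128 {
    (icp_amount * icp_price_in_sdr * CYCLES_PER_SDR as f64) as u128
}

fn main() {
    // Hypothetical market rates: whatever ICP trades at, 1 SDR worth of it
    // always yields the same 10^12 cycles.
    for price in [3.0_f64, 30.0] {
        let cycles = icp_to_cycles(1.0, price);
        println!("at {price} SDR/ICP, 1 ICP buys {cycles} cycles");
    }
}
```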
Wasm: by using WebAssembly (Wasm) as the standard for code execution, developers can write code in a variety of popular programming languages (such as Rust, Java, C++, Motoko, and so on), thereby supporting more developers joining.
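As a taste, here is a minimal Rust canister exposing one query method, assuming the ic-cdk crate; it compiles to Wasm and would typically be built and deployed with the dfx toolchain:

```rust
// A minimal IC canister in Rust (compiled to Wasm), assuming the `ic-cdk`
// crate. The same canister could equally be written in Motoko, C++, or any
// other language that compiles to Wasm.
#[ic_cdk::query]
fn greet(name: String) -> String {
    format!("Hello, {}! Greetings from a canister.", name)
}
```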
Support for running AI models: Python can also be compiled to Wasm. Python has the largest user base in the world and is the first language of AI, for example for matrix and big-integer computation. Someone has already gotten a Llama2 model running on IC; I would not be surprised at all if the AI + Web3 narrative plays out on ICP in the future.
Web2 user experience: many applications on ICP already achieve the impressive results of millisecond-level queries and second-level updates. If you don’t believe it, try OpenChat directly, a purely on-chain decentralized chat application.
Running the front end on chain: you have probably only heard of part of the back-end logic being written as a simple smart contract and run on chain, which ensures core logic such as data assets cannot be tampered with. But the front end actually needs to run fully on chain to be safe, because front-end attacks are a very typical and frequent problem. Imagine: everyone may think Uniswap’s code is very safe, the smart contract has been verified by so many people over the years, and the code is simple, so surely nothing can go wrong. But if one day Uniswap’s front end is hijacked and the contract you interact with is actually a malicious contract deployed by hackers, you could go bankrupt in an instant. If instead you store and deploy all the front-end code in an IC Canister, at minimum the IC’s consensus security ensures the front-end code cannot be tampered with by hackers. This protection is relatively complete, and the front end can be run and rendered directly on the IC without affecting the application’s normal operation. On IC, developers can build applications without traditional cloud services, databases, or payment interfaces; there is no need to buy a front-end server or worry about databases, load balancing, content distribution, firewalls, and so on. Users can directly access a front-end web page deployed on ICP through a browser or mobile app, such as a personal blog I deployed earlier.
DAO-controlled code upgrades: in many DeFi protocols today, the project side has full control and can initiate major decisions at will, such as suspending operations or selling off funds, without any community voting or discussion; I believe everyone has witnessed or heard of such cases. In contrast, DApp code in the ICP ecosystem runs in containers controlled by a DAO: even if a project side holds a large share of the vote, a public voting process is still enforced, satisfying the necessary conditions for the blockchain transparency described at the beginning of this article. This process guarantee better reflects the community’s will and gives ICP a better governance implementation than other current public chain projects.
Automatic protocol upgrades: when the protocol needs to be upgraded, a new threshold signature scheme can be added to the summary block, achieving automatic protocol upgrades. This approach ensures the security and reliability of the network while avoiding the inconvenience and risk of hard forks. Specifically, the Chain Key technology in ICP maintains the blockchain state machine through a special signature scheme: at the beginning of each epoch the network uses a low-threshold signature scheme to generate random numbers, then uses a high-threshold signature scheme to certify the subnet’s replicated state. This signature scheme ensures the network’s security and reliability while also enabling automatic protocol upgrades, thereby avoiding the inconvenience and risks of hard forks.
(Proposal Voting)
Fast forwarding: a technique in the Internet Computer protocol for quickly synchronizing node state. It allows newly added nodes to obtain the current state quickly without re-running the consensus protocol. Specifically, the fast-forwarding process is as follows:
The newly added node obtains the catch-up package (CUP) of the current epoch, which contains the Merkle tree root, the summary block, and the random number of the current epoch.
The newly added node uses the state sync subprotocol to obtain the complete state of the current epoch from other nodes, and uses the Merkle tree root in the CUP to verify the correctness of that state.
The newly added node uses the random number in the CUP and the protocol messages of other nodes to run the consensus protocol, quickly catching up to the current state.
The advantage of fast forwarding is that it allows newly added nodes to obtain the current state quickly, without having to start from scratch as on some other public chains. This accelerates the synchronization and expansion of the network, while also reducing the communication volume between nodes, improving the network’s efficiency and reliability. A toy sketch of the CUP verification step follows the figure below.
(fast forwarding)
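Here is a toy version of that verification step in Rust: the new node recomputes a Merkle root over the state chunks it fetched and compares it with the root carried in the CUP. I use std's non-cryptographic hasher purely for illustration; the real IC uses cryptographic hashes over a certified state tree:

```rust
// Toy CUP check for a fast-forwarding node: fetch state chunks from peers,
// recompute the Merkle root, compare with the root carried in the CUP.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn h(data: &[u64]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

// Merkle root over leaf hashes (an odd node at a level is promoted as-is).
fn merkle_root(mut level: Vec<u64>) -> u64 {
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|p| if p.len() == 2 { h(&[p[0], p[1]]) } else { p[0] })
            .collect();
    }
    level[0]
}

fn main() {
    // State chunks downloaded from peers via the state-sync subprotocol.
    let chunks: Vec<Vec<u64>> = vec![vec![1, 2], vec![3, 4], vec![5]];
    let leaves: Vec<u64> = chunks.iter().map(|c| h(c)).collect();

    let cup_root = merkle_root(leaves.clone()); // root carried in the CUP
    // New node recomputes and compares: a match means the state is accepted
    // with no replay of historical blocks needed.
    assert_eq!(merkle_root(leaves), cup_root);
    println!("state verified against CUP root {:x}", cup_root);
}
```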
Decentralized Internet Identity: the identity system on IC truly makes me feel that the DID problem can be completely solved, in terms of both scalability and privacy. The identity system on IC currently has an implementation called Internet Identity, as well as the more powerful NFID developed on top of it.
Its principle is as follows (a toy sketch of the session-key flow appears after these steps):
When registering, it will generate a pair of public and private keys for the user. The private key is stored in the TPM security chip within the user’s device and can never be leaked, while the public key is shared with services on the network.
When a user wants to log in to a dapp, the dapp will create a temporary session key for the user. This session key will be signed by the user through an authorized electronic signature, so that the dapp has the authority to verify the user’s identity.
Once the session key is signed, the dapp can use the key to access network services on behalf of the user without the user having to electronically sign each time. This is similar to authorized logins in Web2.
The session key is only valid for a short period; after it expires, the user must re-authorize with biometrics to sign and obtain a new session key.
The user’s private key is always stored in the local TPM security chip and will not leave the device. This ensures the security of the private key and the anonymity of the user.
By using temporary session keys, different dapps cannot track each other’s user identities. Achieve truly anonymous and private access.
Users can easily synchronize and manage their Internet Identity across multiple devices, but the device itself also requires corresponding biometrics or hardware keys for authorization.
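Here is a toy sketch of the delegation flow in Rust, with fake hash-based “signatures” purely for illustration (none of these names come from the Internet Identity code):

```rust
// Toy sketch of the delegation flow: the device key (in the TPM) signs a
// short-lived session public key; the dapp then accepts requests signed by
// the session key until it expires. "Signatures" are fake hashes here.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::time::{Duration, SystemTime};

fn fake_sign(key: u64, msg: &str) -> u64 {
    let mut s = DefaultHasher::new();
    (key, msg).hash(&mut s);
    s.finish()
}

struct Delegation {
    session_pubkey: u64,
    expires: SystemTime,
    device_signature: u64, // the device key never leaves the TPM; only this does
}

fn authorize(device_key: u64, session_pubkey: u64, ttl: Duration) -> Delegation {
    let expires = SystemTime::now() + ttl;
    Delegation {
        session_pubkey,
        expires,
        device_signature: fake_sign(device_key, &format!("{session_pubkey}:{expires:?}")),
    }
}

fn dapp_accepts(d: &Delegation, request_sig: u64, session_key: u64, req: &str) -> bool {
    SystemTime::now() < d.expires                     // delegation still valid
        && request_sig == fake_sign(session_key, req) // request signed per session
}

fn main() {
    let device_key = 0xC0FFEE; // stays on the device
    let session_key = 0x5E55;  // fresh per dapp, short-lived
    let delegation = authorize(device_key, session_key, Duration::from_secs(600));

    let req = "transfer 1 token";
    let sig = fake_sign(session_key, req); // no biometric re-signing per request
    println!("accepted: {}", dapp_accepts(&delegation, sig, session_key, req));
}
```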
Some of the benefits of Internet Identity are:
No passwords to remember. Log in directly with biometric features such as fingerprint recognition, eliminating the need to set and remember complex passwords.
The private key never leaves the device, for stronger security. The private key is stored in the TPM security chip and cannot be stolen, solving Web2’s username-and-password theft problem.
Anonymous login that cannot be tracked. Unlike Web2, where an email address used as a username can be tracked across platforms, Internet Identity eliminates such tracking.
More convenient multi-device management. You can log in to the same account on any device that supports biometrics, rather than being limited to a single device.
No reliance on central service providers: true decentralization. This differs from the Web2 model where usernames map to email service providers.
A delegated authentication flow, so there is no need to sign again on every login; the user experience is better.
Support for dedicated security devices such as Ledger or YubiKey for login, improving security.
The user’s actual public key is hidden, so transaction records cannot be looked up via the public key, protecting user privacy.
Seamlessly compatible with Web3 blockchains: log in to and sign for blockchain DApps or transactions securely and efficiently.
A more advanced architecture, representing an organic integration of the advantages of Web2 and Web3, and a standard for future network accounts and logins.
In addition to providing a new user experience, the following technical means are also adopted to ensure its security:
Use a TPM security chip to store the private key. The chip is designed so that even developers cannot access or extract the private key to prevent the private key from being stolen.
Secondary authentication mechanisms such as biometric authentication, such as fingerprint or facial recognition, need to be verified based on the device where they are located, so that only the user holding the device can use the identity.
The session key adopts a short-term expiration design to limit the time window for being stolen, and the relevant ciphertext is forced to be destroyed at the end of the session to reduce risks.
Public key encryption technology enables the data during transmission to be encrypted, and external listeners cannot learn the user’s private information.
It does not rely on third-party identity providers: the private key is generated and controlled by the user, and no third party needs to be trusted.
Combined with the non-tamperability brought by the IC blockchain consensus mechanism, it ensures the reliability of the entire system operation.
Relevant cryptographic algorithms and security processes are being continuously updated and upgraded, such as adding multi-signature and other more secure mechanisms.
Open source code and decentralized design optimize transparency and facilitate community collaboration to improve security.
(Internet Identity)
From a team perspective, there are 200+ employees in total, all of them elite talent. The team has published 1,600+ papers in total, been cited 100,000+ times, and holds 250+ patents.
The founder, Dominic Williams, is a crypto theorist and serial entrepreneur.
Academically, his recent mathematical theories include Threshold Relay and PSC Chains, Validation Towers and Trees, and USCID.
From a technical background, he has deep R&D experience and worked on big data and distributed computing in his early years, which laid the technical foundation for building the complex ICP network.
From an entrepreneurial perspective, he previously ran an MMO game on his own distributed system that hosted millions of users. Dominic started Dfinity in 2015, and he is also President and CTO of String Labs.
From a vision perspective, he proposed the concept of a decentralized Internet more than 10 years ago. Driving this grand project over the long term is not easy, and his design ideas remain very forward-looking.
In terms of the technical team, Dfinity is very strong. The Dfinity Foundation has brought together a large number of top cryptography and distributed-systems experts, such as Jan Camenisch, Timothy Roscoe, Andreas Rossberg, Maria D., and Victor Shoup; even the “L” among the authors of the BLS cryptographic algorithm, Ben Lynn, works at Dfinity. This provides strong support for ICP’s technological innovation. The success of a blockchain project is inseparable from its technology, and a gathering of top talent can bring technological breakthroughs; this is also a key advantage of ICP.
Dfinity Foundation Team
This article would be too long if I also covered this section, so I decided to write a separate article later to give you a detailed analysis. This article focuses more on the development direction of the blockchain industry and why ICP has great opportunities.
Applications
All types of applications can be developed on ICP: social platforms, creator platforms, chat tools, games, even metaverse games.
Many people say IC is not suitable for DeFi because consistent global state is hard to achieve, but I think the question itself is wrong: the difficulty is not global state consistency, but global state consistency under low latency. If you can accept 1 minute, 10,000 machines around the world can achieve global consistency. With so many nodes, haven’t Ethereum and BTC been forced to achieve global state consistency under high latency? That is precisely why they currently cannot achieve unlimited horizontal expansion. IC first solves unlimited horizontal expansion by sharding into subnets. As for global state consistency under low latency, it can also be achieved, through strongly consistent distributed consensus algorithms, a well-designed network topology, high-performance distributed data synchronization, effective timestamp verification, and a mature fault-tolerance mechanism. But frankly, building a trading platform at the IC application level is harder than building the high-performance trading platforms Wall Street is building today; it is not just about reaching agreement across multiple data centers. Being difficult, however, does not mean it cannot be done; it means many technical problems must be solved first, until a moderate state is found that ensures both safety and an acceptable experience. For example, ICLightHouse below.
ICLightHouse: a fully on-chain orderbook DEX. What does “fully on-chain” mean? How many technical difficulties have to be solved? On other public chains this is unthinkable; on IC, at the very least, it is doable, which gives us hope.
OpenChat: a decentralized chat application with a great experience. I have not seen a second such product in the entire blockchain industry. Many other teams tried this direction before, but in the end all failed on various technical issues; in the final analysis, users felt the experience was poor, for example speed: sending a message took 10 seconds, and receiving others’ messages took 10 seconds too. Yet a small team of three people on ICP has made such a successful product. You can experience for yourself just how smooth it is. Welcome to join the organization, where you can enjoy the collision of ideas and, to a certain extent, the freedom of speech.
Mora: a platform for super creators, where everyone can create a planet and build their own personal brand. The content you output will always be your own and can even support paid reading. It could be called a decentralized knowledge planet. I now refresh its articles every day.
Easy - 0xkookoo
OpenChat and Mora are products I use almost every day; they give a sense of comfort I can no longer do without. Two words describe them: freedom and fulfillment.
Some teams are already developing game applications on IC, and I think the full-chain-game narrative may eventually be taken over by IC. As I said in the GameFi section of this article, playability and fun are things project sides must consider, and playability is easier to achieve on IC. Looking forward to Dragginz’s masterpiece.
ICP is like the Earth, and Chain Key technology is like the Earth’s core; its relationship to ICP is like the relationship between the TCP/IP protocol and the entire Internet industry. Each Subnet is like a continent: Asia, Africa, Latin America; of course a Subnet can also be the Pacific or Atlantic Ocean. In the continents and oceans there are different buildings and areas (Replicas and Nodes), and on each area and building plants (Canisters) can be planted, where different animals live happily;
ICP supports horizontal expansion: each subnet is autonomous, and different subnets can communicate with one another. Whatever the application, social media, finance, even the metaverse, you can achieve eventual consistency through this distributed network. A global ledger is easy to achieve under synchronous conditions, but achieving “global state consistency” under asynchronous conditions is very challenging; at present, only ICP has the opportunity to do it.
Note the distinction between “global state consistency” and “eventual global state consistency”. “Global state consistency” requires all participating nodes to agree on the entire sequence of operations, produce identical final results, be objectively consistent regardless of node failures, have consistent clocks, and be instantly consistent, with all operations processed synchronously; this is guaranteed within a single IC subnet. But to guarantee “global state consistency” across the whole system, all subnets as a whole would have to achieve all of the above for the same data and state, which in practice is impossible at low latency; this is exactly the bottleneck preventing public chains like ETH from scaling horizontally. Therefore, IC chooses to reach consensus within a single subnet, with other subnets quickly verifying via communication that the results are not falsified, thereby achieving “eventual global state consistency”. In effect, this combines the decentralization of large public chains with the high throughput and low latency of consortium chains, achieving unlimited horizontal expansion of subnets underwritten by mathematical and cryptographic proofs.
To sum up, according to the ultimate development direction of blockchain that I laid out at the beginning of this article: [sovereignty] + [decentralized multi-point centralization] + [transparency] + [control over code execution] + [infinite scalability at linear cost]:
Sovereignty is the only problem blockchain needs to solve, including asset sovereignty, data sovereignty, speech sovereignty, and so on; otherwise there is no need for blockchain;
IC achieves it completely;
IC achieves it too;
IC achieves it completely;
Currently only IC achieves it;
Currently only IC achieves it.
Based on the above facts and my thinking and analysis, I believe that ICP = Blockchain 3.0.
This article only discusses the future development direction of the blockchain industry and why ICP is likely to be the innovation driver of blockchain 3.0. It is undeniable that there are problems in ICP’s tokenomics design and that its ecosystem has not yet exploded; ICP still needs to keep working to reach the ultimate blockchain 3.0 in my mind. But don’t worry: this undertaking is inherently hard. Even the Dfinity Foundation has prepared a 20-year roadmap, yet it has already achieved this much just 2 years after mainnet launch, and it is using cryptography to connect to the BTC and ETH ecosystems. I believe it will reach a higher level within 3 years.
Reposted original title: Former Bybit technical director: Looking at the future of blockchain 3.0 and web3 from the perspective of ICP
On January 3, 2009, the first BTC block was mined. Since then, the blockchain has developed vigorously for 14 years.Throughout the past 14 years, the subtlety and greatness of BTC, EthThe emergence of ereum, the passionate crowdfunding of EOS, the fateful battle of PoS & PoW, the interconnection of thousands of Polkdadot, each amazing technology, and each wonderful story have attracted countless people in the industry to win!
Currently, in 2023, what is the landscape of the entire blockchain? The following is my thinking, see for detailsInterpretation of the public chain structure in this article
But how will the entire blockchain industry develop in the next 10 years? Here are my thoughts
Let me introduce a story first. In 2009, Alibaba proposed the “de-IOE” strategy, which was also a major milestone in Alibaba’s “Double Eleven” later.
The core content of the “De-IOE” strategy is to remove IBM minicomputers, Oracle databases and EMC storage devices, and implant the essence of “cloud computing” into Alibaba’s IT genes. in
There are three main reasons for going to IOE, but the first point is the essential reason, and the latter two are more indirect:
So why was the “de-IOE” strategy proposed in 2009 instead of earlier?
But going to IOE is not about simply changing the software and hardware themselves, replacing old software and hardware with new software and hardware, but replacing old methods with new ones, and using cloud computing to completely change the IT infrastructure. In other words, this is caused by changes in the industry, not just simple technological upgrades.
The development of an enterprise can be divided into three stages:
Let’s analyze the entire blockchain industry as an enterprise.
BTC is innovative in that it solves a problem that has vexed computer scientists for decades: how to create a digital payment system that can operate without trusting any central authority.
However, BTC does have some limitations in its design and development, which provide market opportunities for subsequent blockchain projects such as Ethereum (ETH). Here are some of the main limitations:
Transaction throughput and speed:BTC’s block generation time is approximately 10 minutes, and the size limit of each block results in an upper limit on its transaction processing capabilities. This means that when the network is busy, transaction confirmation may take longer and higher transaction fees may apply.
Smart contracts have limited functionality:BTC was designed primarily as a digital currency, and the transaction types and scripting language capabilities it supports are relatively limited. This limits BTC’s use in complex financial transactions and decentralized applications (DApps).
Not easy to upgrade and improve:Due to BTC’s decentralized and conservative design principles, major upgrades and improvements usually require broad consensus from the community, which is difficult to achieve in practice, which also makes BTC’s progress relatively slow.
Energy consumption issues:BTC’s consensus mechanism is based on Proof of Work (PoW), which means that a large amount of computing resources are used for competition among miners, resulting in a large amount of energy consumption. This has been criticized on environmental and sustainability grounds. Regarding this point, you can also pay attention to EcoPoW, which can partially alleviate this limitation.
The current Layer 2 expansion form of Ethereum can be regarded as a “vertical expansion”, which relies on the security and data availability guarantee of the underlying Layer 1. Although it seems to be a 2-layer structure, it will still be limited by the processing power of Layer 1 in the end. Even if it is changed to a multi-layer structure, that is, creating Layer 3 and Layer 4, it will only increase the complexity of the entire system and delay a little time. What’s more, according to the diminishing marginal effect, each additional layer added later will greatly reduce the expansion effect due to the extra overhead. This vertical layering method can be regarded as a single machine hardware upgrade, but this single machine refers to the entire ETH ecosystem.
And as usage increases, users’ demand for low cost and high performance will also increase. As an application on Layer1, the cost of Layer2 can only be reduced to a certain extent, and is ultimately still subject to the basic cost and throughput of Layer1. This is similar to the demand curve theory in economics - as price falls, aggregate quantity demanded increases. Vertical expansion is difficult to fundamentally solve the scalability problem.
Ethereum is a towering tree, and everyone relies on that root. Once that root cannot absorb nutrients at the same rate, people’s needs will not be met;
Therefore, only horizontal scalability is easier to achieve infinity.
Some people think that multi-chain and cross-chain can also be regarded as a horizontal expansion method.
takePolkadotTo give an example, it is a heterogeneous kingdom. Each country looks different, but every time you make something, you need to build a kingdom;
CosmosIt is an isomorphic kingdom. The meridians and bones of each country look the same, but every time you make something, you must build a kingdom;
But from an Infra perspective, the above two models are a little strange.Do you need to build an entire kingdom for every additional application you build?Let’s take an example to see how weird it is,
I bought a Mac 3 months ago and developed a Gmail application on it;
Now I want to develop a Youtube application, but I have to buy a new Mac to develop it, which is too weird.
Both of the above methods face the problem of high cross-chain communication complexity when adding new chains, so they are not my first choice.
If you want to scale-out, you need a complete set of underlying infrastructure to support rapid horizontal expansion without reinventing the wheel.
A typical example of supporting scale-out is cloud computing. [VPC+subnet+network ACL+security group] These underlying templates are exactly the same for everyone. All machines have numbers and types. The upper-layer RDS, MQ and other core components support it. Infinitely scalable, if you need more resources, you can start quickly with the click of a button.
A leader shared with me before that if you want to understand what infrastructure and components Internet companies need, then you only need to go to AWS and take a look at all the services they provide. It is the most complete and powerful combination.
In the same way, let’s take a high-level look at ICP and see why it meets the requirements of Scale-out.
Here we first explain a few concepts:
Dfinity Foundation:It is a non-profit organization dedicated to promoting the development and application of decentralized computer technology. It is the developer and maintainer of the Internet Computer protocol, aiming to achieve the comprehensive development of decentralized applications through innovative technology and an open ecosystem.
Internet Computer (IC):It is a high-speed blockchain network developed by Dfinity Foundation and specially designed for decentralized applications. It adopts a new consensus algorithm that enables high-throughput and low-latency transaction processing, while supporting the development and deployment of smart contracts and decentralized applications.
Internet Computer Protocol (ICP):It is a native Token in the Internet Computer protocol. It is a digital currency used to pay for network usage and reward nodes.
Many of the following contents will be a bit hardcore, but I have described them in vernacular, and I hope everyone can follow along. If you want to discuss more details with me, you can find my contact information at the top of the article.
From the hierarchical structure, from bottom to top they are
P2P layer,Collects and sends messages from users, other replicas in the subnet, and other subnets. Ensure that messages can be delivered to all nodes in the subnet to ensure security, reliability and resiliency
Consensus layer:The main task is to sort the input to ensure that all nodes inside the same subnet process tasks in the same order. To achieve this goal, the consensus layer uses a new consensus protocol designed to guarantee security and liveness, and to be resistant to DOS/SPAM attacks. After consensus is reached within the same subnet on the order in which various messages should be processed, these blocks are passed to the message routing layer.
Message routing layer:According to the tasks transmitted from the consensus layer, prepare the input queue of each Canister. After execution, it is also responsible for receiving the output generated by the Canister and forwarding it to the Canister in the local or other zones as needed. Additionally, it is responsible for logging and validating responses to user requests.
Execution layer:Provide a runtime environment for Canister, read input in an orderly manner according to the scheduling mechanism, call the corresponding Canister to complete the task, and return the updated status and generated output to the message routing layer. It uses the non-determinism brought by random numbers to ensure the fairness and auditability of calculations. Because in some situations, Canister’s behavior needs to be unpredictable. For example, when performing encryption operations, random numbers need to be used to increase the security of encryption. In addition, the execution results of Canister need to be random to prevent attackers from analyzing Canister’s execution results to discover vulnerabilities or predict Canister’s behavior.
(4-layer structure of ICP)
Key Components / Key Components
From the composition point of view:
Subnet:Supports unlimited expansion, each subnet is a small blockchain. Subnets communicate through Chain Key technology. Since a consensus has been reached within the subnet, all it takes is Chain Key verification.
Replica:There can be many nodes in each Subnet, and each node is a Replica. IC’s consensus mechanism will ensure that each Replica in the same Subnet will process the same input in the same order, so that the final state of each Replica All the same, this mechanism is called Replicated State Machine,
Canister:Canister is a smart contract, which is a computing unit running on the ICP network that can store data and code and communicate with other Canisters or external users. ICP provides a runtime environment for executing Wasm programs within Canister and communicating with other Canisters and external users via messaging. It can be simply thought of as a docker used to run code, and then you inject the Wasm Code Image yourself to run it inside.
Node:As an independent server, Canister still needs a physical machine to run. These physical machines are the machines in the real computer room.
Data Center:The nodes in the data center are virtualized into a replica (Replica) through the node software IC-OS, and some Replica are randomly selected from multiple data centers to form a subnet (Subnet). This ensures that even if a data center is hacked or encounters a natural disaster, the entire ICP network will still operate normally, a bit like an upgraded version of Alibaba’s “two places and three centers” disaster recovery and high availability solution. Data centers can be distributed all over the world, and a data center can even be built on Mars in the future.
Boundary Nodes:Provides ingress and egress between the external network and the IC subnet, validating responses.
Identity subject (Principal):The external user’s identifier, derived from the public key, is used for permission control.
Network Neural System (NNS):An algorithmic DAO governed using staked ICP to manage ICs.
Registry:The database maintained by NNS contains mapping relationships between entities (such as Replica, canister, and Subnet), which is somewhat similar to the current working principle of DNS.
Cycles:The local token represents the CPU quota used to pay for the resources consumed by canister when running. If I had to express it in Chinese, I would use the word “computing cycle” because cycles mainly refers to the unit used to pay for computing resources.
From the bottom layer, Chain-key technology is used, among which
Publicly Verifiable Secret Sharing scheme (PVSS Scheme): A publicly verifiable secret sharing scheme. In the white paper of the Internet Computer protocol, the PVSS scheme is used to implement the decentralized key generation (DKG) protocol to ensure that the node’s private key will not be leaked during the generation process.
Forward secure public key encryption scheme(forward-secure public-key encryption scheme): The forward-secure public-key encryption scheme ensures that even if the private key is leaked, previous messages will not be decrypted, thus improving the security of the system.
Key resharing protocol:A threshold signature-based key sharing scheme for key management in the Internet Computer protocol. The main advantage of this protocol is that it can share existing keys to new nodes without creating new keys, thereby reducing the complexity of key management. In addition, the protocol uses threshold signatures to protect the security of key sharing, thereby improving the security and fault tolerance of the system.
Threshold BLS signatures: ICP implements a threshold signature scheme. Each Subnet has a public, verifiable public key, and the corresponding private key is split into multiple shares, each held by a Replica in that Subnet. Only a message signed by more than the threshold number of Replicas in the same Subnet is considered valid. This way, messages transmitted between Subnets and Replicas are encrypted yet quickly verifiable, ensuring both privacy and security. BLS is a well-known threshold signature algorithm: it is the only signature scheme that yields a very simple and efficient threshold signature protocol, and its signatures are unique, meaning that for a given public key and message there is exactly one valid signature. (A toy sketch of the threshold idea follows this list.)
Non-interactive Distributed Key Generation (NIDKG): To securely deploy threshold signature schemes, Dfinity designed, analyzed, and implemented a new DKG protocol that runs on asynchronous networks and is highly robust (it can still succeed even if up to a third of the nodes in a subnet crash or are corrupted) while still providing acceptable performance. Besides generating new keys, this protocol can also reshare existing keys, a capability critical for the autonomous evolution of the IC topology as subnet membership changes over time.
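To make the threshold idea concrete, here is a minimal Python toy of t-of-n secret sharing and recombination (Shamir over a prime field). To be clear, this is my own illustrative sketch, not the IC's pairing-based BLS code: in real threshold BLS, replicas combine signature shares using these same Lagrange coefficients in the group exponent.

```python
# Toy t-of-n secret sharing and recombination (Shamir), illustrating the
# threshold idea behind the subnet's BLS setup. This is NOT the pairing-based
# BLS protocol; real threshold BLS recombines *signature shares* with these
# same Lagrange coefficients.
import random

P = 2**127 - 1  # a prime field large enough for a demo

def make_shares(secret: int, t: int, n: int):
    # Random degree-(t-1) polynomial f with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]  # share i is f(i)

def recombine(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
shares = make_shares(secret, t=3, n=5)   # 5 replicas, threshold 3
assert recombine(shares[:3]) == secret   # any 3 shares suffice
assert recombine(shares[2:5]) == secret
print("any 3 of 5 shares reconstruct the subnet secret")
```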
PoUW: PoUW has one more U than PoW, standing for Useful. It mainly improves performance by letting node machines do less useless work: PoUW does not artificially create difficult hash computations, but focuses computing power on serving users as much as possible, so most resources (CPU, memory) go to actually executing the code in Canisters.
Chain-evolution technology: A technology for maintaining the blockchain state machine, comprising a series of technical means that ensure the security and reliability of the blockchain. In the Internet Computer protocol, Chain-evolution technology mainly includes two core components:
1. Summary blocks: The first block of each epoch is a summary block, containing special data used to manage the different threshold signature schemes: a low-threshold scheme is used to generate random numbers, and a high-threshold scheme is used to authenticate the subnet's replicated state.
2. Catch-up packages (CUPs): A mechanism for quickly synchronizing node state; it lets newly added nodes obtain the current state without re-running the consensus protocol.
My logical derivation of the entire IC technology stack is as follows:
In traditional public-key cryptography, each node has its own public-private key pair, which means that if one node's private key is leaked or attacked, the security of the whole system is threatened. A threshold signature scheme splits one key into multiple shares assigned to different nodes; a signature can only be produced when enough nodes cooperate, so even if some nodes are attacked or compromised, the security of the overall system is not greatly affected. A threshold scheme also improves decentralization, because no centralized organization is needed to manage keys: the key is dispersed across many nodes, avoiding single points of failure and the risks of centralization. Therefore, IC uses a threshold signature scheme to improve the security and decentralization of the system, hoping to use threshold signatures to build a universal blockchain that is highly secure, scalable, and quickly verifiable.
BLS is a well-known threshold signature algorithm, and the only signature scheme that yields a very simple and efficient threshold signature protocol. Another advantage of BLS signatures is that no signature state needs to be stored: as long as the message content is unchanged, the signature is fixed, meaning that for a given public key and message there is exactly one valid signature. This enables extremely high scalability, so ICP chose BLS.
Because threshold signatures are used, a dealer is needed to distribute key shares to the participants. But a single dealer is a single point and easily leads to single points of failure. So Dfinity designed a distributed key distribution technology, NIDKG: during the initialization of a subnet, all participating Replicas non-interactively generate a public key A, while for the corresponding private key B, each participant mathematically computes and holds one derived secret share.
For NIDKG to work, every party participating in the distribution must be unable to cheat. Therefore each participant can not only obtain its own secret share, but also publicly verify that its secret share is correct; this is a crucial point in realizing distributed key generation.
What if the subnet key at some historical moment is leaked? How can historical data be kept tamper-proof? Dfinity adopts a forward-secure signature scheme, which ensures that even if the subnet key of some historical moment is leaked, an attacker cannot alter the data of historical blocks, preventing later corruption attacks from threatening historical data on the chain. If this property is made stronger, it can even ensure that messages cannot be eavesdropped in transit: because the timestamps no longer match, even if a key is cracked within a short window, the content of past communications cannot be recovered.
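To give a feel for forward security, here is a minimal key-ratchet sketch in Python. It is a generic illustration of the one-way key evolution idea, not Dfinity's actual forward-secure signature scheme.

```python
# Minimal ratchet sketch of forward security (not Dfinity's actual scheme):
# each epoch's key is a one-way function of the previous one, and the old key
# is deleted, so leaking the current key cannot expose earlier epochs.
import hashlib
import hmac

def next_key(key: bytes) -> bytes:
    # One-way evolution: key_t cannot be computed from key_{t+1}.
    return hashlib.sha256(b"ratchet" + key).digest()

def sign(key: bytes, msg: bytes) -> bytes:
    # Stand-in for the per-epoch signing operation.
    return hmac.new(key, msg, hashlib.sha256).digest()

key = b"initial subnet key material"
epoch_sigs = []
for epoch in range(3):
    epoch_sigs.append(sign(key, f"block data, epoch {epoch}".encode()))
    key = next_key(key)  # the old key is overwritten (deleted) here

# An attacker who steals `key` now holds only epoch-3 material: inverting
# SHA-256 to recover earlier keys, and thus forging anything from epochs
# 0-2, is computationally infeasible.
print("current key leaks nothing about epochs 0-2")
```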
With NIDKG in place, if a given secret share is held by a node for a long time, then as nodes are gradually eroded by hackers, the whole network may run into problems. So keys must be refreshed continuously, and the refresh cannot require all participating Replicas to gather and interact: it too must be non-interactive. However, since public key A is registered in the NNS and other subnets use it for verification, it is best not to change the subnet public key. But if the subnet public key stays unchanged, how can the secret shares among nodes be refreshed? For this, Dfinity designed the Key resharing protocol: without creating a new public key, all Replicas holding the current version of the secret shares non-interactively generate a new round of derived secret shares for the new holders, so that:
the new version of the secret shares is certified by all current legitimate secret share holders;
the old version of the secret shares is no longer valid;
even if a future version of the secret shares is leaked, the old versions cannot be leaked, because the polynomials of the two are unrelated and cannot be derived from each other. This is the forward security introduced just now.
In addition, it ensures efficient random redistribution: when trusted nodes or access controls change, access policies and controllers can be modified at any time without restarting the system. This greatly simplifies key management in many scenarios. For example, it is useful when subnet membership changes, since resharing ensures that any new member holds an appropriate secret share while any replica that is no longer a member no longer holds one. Furthermore, if a small number of secret shares are leaked to an attacker in any one epoch, or even in every epoch, those shares are of no use to the attacker.
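Here is a toy Python sketch of the resharing idea, building on the Shamir toy above: t old holders re-deal their shares so a new node set receives fresh shares of the same secret, while the secret (and hence the subnet public key) never changes. Again, this illustrates the mathematics only, not Dfinity's wire protocol.

```python
# Toy resharing (not Dfinity's protocol): t old share holders re-deal their
# shares so a new node set gets fresh shares of the SAME secret; the subnet
# public key never changes, and the old shares become useless.
import random

P = 2**127 - 1

def share(secret: int, t: int, xs: list[int]) -> dict:
    # Fresh degree-(t-1) polynomial with f(0) = secret, evaluated at xs.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in xs}

def lagrange_at_zero(xs: list[int]) -> dict:
    lam = {}
    for xi in xs:
        num = den = 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        lam[xi] = num * pow(den, -1, P) % P
    return lam

t, old_xs, new_xs = 3, [1, 2, 3], [4, 5, 6, 7]
secret = 987654321
old_shares = share(secret, t, old_xs)

# Each old holder deals a fresh t-of-n sharing of its OWN share to new nodes.
sub = {xi: share(old_shares[xi], t, new_xs) for xi in old_xs}

# Each new holder combines sub-shares with the old set's Lagrange weights.
lam = lagrange_at_zero(old_xs)
new_shares = {xk: sum(lam[xi] * sub[xi][xk] for xi in old_xs) % P
              for xk in new_xs}

# Any t new shares still reconstruct the original secret.
pick = dict(list(new_shares.items())[:t])
lam_new = lagrange_at_zero(list(pick))
assert sum(lam_new[x] * y for x, y in pick.items()) % P == secret
print("same secret, brand-new shares; old shares can be discarded")
```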
Traditional blockchain protocols need to store all block information starting from the genesis block, which leads to scalability problems as the chain grows; this is why developing a light client is so troublesome for many public chains. IC wanted to solve this, so it developed Chain-evolution technology: at the end of each epoch, all processed inputs and no-longer-needed consensus information can be safely cleared from each Replica's memory, which greatly reduces the storage requirements per Replica and enables the IC to scale to support huge numbers of users and applications. Chain-evolution technology also includes the CUPs mechanism, which allows newly added nodes to obtain the current state quickly without re-running the consensus protocol, dramatically lowering the threshold and synchronization time for new nodes joining the IC network.
To sum up, all of IC's underlying technologies interlock: they are grounded in cryptography (from theory) while fully accounting for industry-wide problems such as fast node synchronization (from practice). A true masterwork of integration!
Reverse Gas model: Most traditional blockchain systems require users to first hold the native token, such as ETH or BTC, and then spend it to pay transaction fees. This raises the entry barrier for new users and does not match people's usage habits: why should I have to hold TikTok shares before I can use TikTok? ICP adopts a reverse Gas model: users can use the ICP network directly, and the project side covers the fees. This lowers the threshold for use, better matches Internet service habits, and helps capture larger-scale network effects, supporting more users joining.
Stable Gas: On other public chains on the market, for the security of the chain and the need to transfer funds, people buy the native token and miners mine hard, or people accumulate the native token, thereby contributing computing power to the chain (as with Bitcoin) or providing staking-based economic security (as with Ethereum). In effect, our demand for btc/eth comes from the computing-power or staking requirements of the Bitcoin/Ethereum chains, which are essentially the chain's security requirements. So as long as a chain pays gas directly in its native token, gas will eventually be expensive: the native token may be cheap now, but once the chain builds an ecosystem, it gets expensive later. ICP is different: the gas consumed on the ICP blockchain is called Cycles, obtained by converting ICP. Cycles are stabilized by algorithmic regulation and anchored to 1 SDR (the SDR can be regarded as a stable unit of account computed across multiple national fiat currencies). So no matter how much ICP rises in the future, the money you spend doing anything on ICP will be the same as today (inflation aside). A back-of-envelope sketch follows.
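As a quick numeric illustration (the commonly cited peg is 1 SDR to 1 trillion cycles; the ICP prices below are made up for illustration):

```python
# Back-of-envelope sketch of the stable-gas idea: cycles are pegged to SDR,
# so the fiat cost of computation stays flat even when ICP's price moves.
# The 1 SDR = 1e12 cycles peg is the commonly cited figure; the ICP/SDR
# rates below are hypothetical.
CYCLES_PER_SDR = 1_000_000_000_000  # 1 trillion cycles per SDR

def icp_to_cycles(icp_amount: float, icp_price_in_sdr: float) -> int:
    # More valuable ICP simply mints more cycles per token.
    return int(icp_amount * icp_price_in_sdr * CYCLES_PER_SDR)

job_cost_cycles = 5 * 10**9  # some fixed computation, priced in cycles

for icp_price in (3.0, 30.0, 300.0):  # hypothetical ICP/SDR rates
    cycles_per_icp = icp_to_cycles(1, icp_price)
    icp_needed = job_cost_cycles / cycles_per_icp
    sdr_cost = icp_needed * icp_price
    print(f"ICP at {icp_price:>5} SDR -> job costs {icp_needed:.6f} ICP "
          f"= {sdr_cost} SDR (constant)")
```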
Wasm: Using WebAssembly (Wasm) as the standard for code execution, developers can write code in many popular programming languages (Rust, Java, C++, Motoko, and so on), supporting more developers joining.
Support for running AI models: Python can also be compiled to wasm. Python has the largest user base in the world and is the first language of AI, for example for matrix and big-integer computation. Someone is already running a Llama2 model on the IC; I would not be at all surprised if the AI + Web3 narrative plays out on ICP in the future.
Web2 user experience: Many applications on ICP already achieve the remarkable results of millisecond-level queries and second-level updates. If you don't believe it, try OpenChat, a purely on-chain decentralized chat application, for yourself.
Running the front end on the chain: You have only ever heard of part of the back end being written as a simple smart contract and run on-chain, which ensures core logic such as data assets cannot be tampered with. But the front end actually needs to run fully on-chain to be safe, because front-end attacks are a very typical and frequent problem. Imagine: everyone may think Uniswap's code is very safe, since the smart contract has been verified by so many people over the years and the code is simple, so surely nothing can go wrong. But if one day Uniswap's front end is hijacked and the contract you interact with is actually a malicious contract deployed by hackers, you could go bankrupt in an instant. If instead you store and deploy all the front-end code in an IC Canister, at least the IC's consensus security ensures the front-end code cannot be tampered with by hackers. This protection is far more complete, and the front end can be run and rendered directly on the IC without affecting the application's normal operation. On IC, developers can build applications without traditional cloud services, databases, or payment interfaces: no need to buy a front-end server or worry about databases, load balancing, content distribution, firewalls, and so on. Users can access front-end pages deployed on ICP directly through a browser or mobile app, such as a personal blog I deployed earlier.
DAO-controlled code upgrades: In many DeFi protocols today, the project side has full control and can unilaterally initiate major decisions, such as suspending operations or selling funds, without any community vote or discussion; I believe everyone has witnessed or heard of such cases. By contrast, DAPP code in the ICP ecosystem runs in containers controlled by a DAO: even if a project side holds a large share of the votes, a public voting process is still enforced, satisfying the necessary condition of blockchain transparency described at the beginning of this article. This process guarantee better reflects the community's wishes and gives ICP a clear edge in governance over other current public chain projects.
Automatic protocol upgrades: When the protocol needs upgrading, a new threshold signature scheme can be added to the summary block to achieve an automatic protocol upgrade. This ensures the security and reliability of the network while avoiding the inconvenience and risks of hard forks. Specifically, Chain Key technology in ICP maintains the blockchain state machine through a special signature scheme: at the beginning of each epoch the network uses a low-threshold signature scheme to generate random numbers, then a high-threshold signature scheme to authenticate the subnet's replicated state. This signature scheme guarantees security and reliability while also enabling automatic protocol upgrades, thereby avoiding the inconvenience and risks of hard forks.
(Proposal Voting)
Fast forwarding: A technique in the Internet Computer protocol for quickly synchronizing node state. It allows newly added nodes to obtain the current state quickly without re-running the consensus protocol. Specifically, the process is as follows:
The newly added node obtains the Catch-up package (CUP) of the current epoch, which contains the Merkle tree root, summary block and random number of the current epoch.
The newly added node uses the state sync subprotocol to obtain the complete state of the current epoch from other nodes, and uses the Merkle tree root in the CUP to verify the correctness of that state.
The newly added node uses the random number in CUP and the protocol messages of other nodes to run the consensus protocol to quickly synchronize to the current state.
The advantage of fast forwarding is that newly added nodes can obtain the current state quickly, without having to start from scratch as on some other public chains. This accelerates network synchronization and expansion, and also reduces the communication volume between nodes, improving the efficiency and reliability of the network. A toy sketch of the CUP verification idea follows.
(fast forwarding)
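Here is a toy Python sketch of the CUP verification idea: a joining node downloads state chunks from untrusted peers and checks them against the Merkle root pinned in the catch-up package, instead of replaying from genesis. The structure and field names are my own simplification, not the IC's actual state-sync format.

```python
# Toy sketch (not the IC's actual wire format) of the CUP idea: a new node
# downloads state chunks from peers and checks them against the Merkle root
# carried in the catch-up package, instead of replaying history.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    level = [h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The CUP (in this toy) pins the root of the state at the epoch boundary.
state_chunks = [b"balances...", b"canister wasm...", b"queues..."]
cup = {"epoch": 42, "state_root": merkle_root(state_chunks)}

# A joining node fetches chunks from untrusted peers and verifies them.
downloaded = [b"balances...", b"canister wasm...", b"queues..."]
assert merkle_root(downloaded) == cup["state_root"]
print("state verified against CUP root; no need to replay from genesis")
```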
Decentralized Internet Identity: The identity system on IC genuinely makes me feel that the DID problem can be completely solved, in terms of both scalability and privacy. The identity system on IC currently has an implementation called Internet Identity, as well as the more powerful NFID developed on top of it.
Its principle is as follows (a toy sketch of the delegation flow appears after these steps):
When registering, a public-private key pair is generated for the user. The private key is stored in the TPM security chip inside the user's device and can never leak, while the public key is shared with services on the network.
When a user wants to log in to a dapp, the dapp creates a temporary session key for the user. The user authorizes this session key with an electronic signature, giving the dapp the authority to verify the user's identity.
Once the session key is signed, the dapp can use the key to access network services on behalf of the user without the user having to electronically sign each time. This is similar to authorized logins in Web2.
The session key is only valid for a short period. After it expires, the user needs to re-authorize with a biometric signature to obtain a new session key.
The user’s private key is always stored in the local TPM security chip and will not leave the device. This ensures the security of the private key and the anonymity of the user.
By using temporary session keys, different dapps cannot cross-track user identities, achieving truly anonymous and private access.
Users can easily synchronize and manage their Internet Identity across multiple devices, but the device itself also requires corresponding biometrics or hardware keys for authorization.
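Here is a toy Python sketch of that delegation flow, using the third-party `cryptography` package for Ed25519. The message format is my own simplification, not Internet Identity's actual delegation format.

```python
# Toy sketch of the delegation flow (not Internet Identity's actual format):
# the device-bound identity key signs a short-lived session public key, and
# the dapp then accepts session-key signatures until the delegation expires.
# Requires: pip install cryptography
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

identity_key = Ed25519PrivateKey.generate()  # lives in the device TPM
session_key = Ed25519PrivateKey.generate()   # fresh per login, short-lived

# The user authorizes (e.g., via fingerprint) a signed delegation.
session_pub = session_key.public_key().public_bytes(
    Encoding.Raw, PublicFormat.Raw)
delegation = json.dumps({
    "session_pubkey": session_pub.hex(),
    "expires_at": int(time.time()) + 15 * 60,  # valid for 15 minutes
}).encode()
delegation_sig = identity_key.sign(delegation)

# The dapp verifies the chain: identity key -> delegation -> session key,
# then lets the session key sign requests without further biometric prompts.
identity_key.public_key().verify(delegation_sig, delegation)  # raises if bad
request = b"transfer 1 token"
request_sig = session_key.sign(request)
session_key.public_key().verify(request_sig, request)
print("delegated session verified; identity key never left the device")
```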
Some of the benefits of Internet Identity are:
No need to remember passwords. Log in directly with biometric features such as fingerprint recognition, eliminating the need to set and remember complex passwords.
The private key never leaves the device, so it is more secure. The private key is stored in the TPM security chip and cannot be stolen, solving the username/password theft problem of Web2.
Anonymous login that cannot be tracked. Unlike Web2, where an email address used as a username can be tracked across platforms, Internet Identity eliminates this tracking.
More convenient multi-device management. You can log in to the same account on any device that supports biometrics, instead of being limited to a single device.
No reliance on central service providers: true decentralization. This differs from the Web2 model where usernames correspond to email service providers.
Delegated authentication means no re-signing on every login, so the user experience is better.
Support for dedicated security devices such as Ledger or Yubikey logins, improving security.
The user's actual public key is hidden, so transaction records cannot be queried through the public key, protecting user privacy.
Seamlessly compatible with Web3: log in to and sign blockchain DApps and transactions securely and efficiently.
A more advanced architecture, representing an organic integration of the strengths of Web2 and Web3, and a standard for future network accounts and logins.
In addition to providing a new user experience, the following technical means are also adopted to ensure its security:
A TPM security chip stores the private key; the chip is designed so that even developers cannot access or extract the private key, preventing theft.
Secondary authentication such as biometrics (fingerprint or facial recognition) is verified on the device itself, so only the user holding the device can use the identity.
Session keys use a short-expiry design to limit the window for theft, and the related ciphertext is forcibly destroyed at the end of a session to reduce risk.
Public-key encryption keeps data in transit encrypted, so external eavesdroppers cannot learn the user's private information.
No reliance on third-party identity providers: the private key is generated and controlled by the user, trusting no third party.
Combined with the tamper resistance of the IC blockchain's consensus mechanism, this ensures the reliability of the whole system.
The relevant cryptographic algorithms and security processes are continuously updated and upgraded, for example by adding multi-signature and other more secure mechanisms.
Open-source code and decentralized design improve transparency and facilitate community collaboration on security.
(Internet Identity)
From a team perspective, there are 200+ employees in total, all very elite talent: they have published 1,600+ papers, been cited 100,000+ times, and hold 250+ patents.
Founder Dominic Williams is a crypto theorist and serial entrepreneur.
Academically, his recent mathematical theories include Threshold Relay and PSC Chains, Validation Towers and Trees, and USCID.
From a technical background point of view, he has deep R&D experience, having researched big data and distributed computing in his early years, which laid the technical foundation for building the complex ICP network.
From an entrepreneurial perspective, he previously ran an MMO game on his own distributed system that hosted millions of users. Dominic started Dfinity in 2015 and is also President and CTO of String Labs.
From a vision perspective, he proposed the concept of a decentralized Internet more than 10 years ago. Pushing such a grand project over the long term is not easy, and his design ideas remain very forward-looking.
On the technical team, Dfinity is very strong. The Dfinity Foundation gathers a large number of top cryptography and distributed-systems experts, such as Jan Camenisch, Timothy Roscoe, Andreas Rossberg, Maria D., and Victor Shoup; even the "L" among the authors of the BLS cryptographic algorithm, Ben Lynn, works at Dfinity. This provides strong support for ICP's technological innovation. The success of a blockchain project is inseparable from technology, and the gathering of top talent can produce technological breakthroughs, which is a key advantage for ICP.
Dfinity Foundation Team
This article would be far too long if I covered this section in full, so I have decided to write a separate article later with a detailed analysis. This article focuses more on the development direction of the blockchain industry and why ICP has a great opportunity.
Applications
All kinds of applications can be developed on ICP: social platforms, creator platforms, chat tools, games, even Metaverse games.
Many people say IC is not suitable for DeFi because global state consistency is hard to achieve, but I think the question itself is wrong: the hard part is not global state consistency, it is global state consistency under low latency. If you can accept a one-minute delay, 10,000 machines around the world can achieve global consistency. With so many nodes, aren't Ethereum and BTC forced to achieve global state consistency under high latency? That is exactly why they currently cannot scale horizontally without limit. IC first solves infinite horizontal scaling by sharding into subnets. As for global state consistency under low latency, it is also achievable with strongly consistent distributed consensus algorithms, a well-designed network topology, high-performance distributed data synchronization, effective timestamp verification, and a mature fault-tolerance mechanism. Honestly, building a trading platform at the IC application layer will be harder than the high-performance trading platforms Wall Street builds today; it is not just about reaching agreement across multiple data centers. But difficult does not mean impossible: it means many technical problems must be solved first, and eventually a reasonable middle ground will be found that guarantees safety while keeping the experience acceptable, for example ICLightHouse below.
ICLightHouse, a fully on-chain orderbook DEX. What does fully on-chain mean, and how many technical difficulties have to be solved? On other public chains this is unthinkable, but on IC it is at least doable, which gives us hope.
OpenChat, a decentralized chat application with a great experience. I have not seen a second such product in the entire blockchain industry; many other teams tried this direction before, but they all ultimately failed on technical issues, and in the end users simply felt the experience was bad, for example it was too slow: 10 seconds to send a message and another 10 to receive a reply. Yet a small three-person team on ICP has built such a successful product. Experience for yourself just how smooth it is. You're welcome to join the organization, where you can enjoy the collision of ideas and, to a certain extent, the freedom of speech.
Mora, a platform for super creators, where everyone can create a planet and build a personal brand; the content you publish always remains your own and can even support paid reading. It could be called a decentralized knowledge planet. I now refresh articles on it every day.
Easy - 0xkookoo
OpenChat and Mora are products I use almost every day; they give a sense of ease I can no longer do without. Two words describe them: freedom and fulfillment.
There are already teams developing game applications on IC, and I think the narrative of fully on-chain games may eventually be taken over by IC. As I said in the GameFi section of this article, playability and fun are things project teams must consider, and playability is easier to achieve on IC. Looking forward to Dragginz's masterpiece.
ICP is like the Earth, and Chainkey technology is like the Earth's core; its relationship to ICP is like the relationship between the TCP/IP protocol and the entire Internet industry. Each Subnet is like a continent, Asia, Africa, or Latin America, or an ocean like the Pacific or Atlantic. On the continents and oceans there are different buildings and areas (Replicas and Nodes), and on each area and building, plants (Canisters) can be planted and different animals live happily.
ICP supports horizontal scaling: each subnet is autonomous, and different subnets communicate with each other. Whatever your application, social media, finance, or even the metaverse, you can achieve eventual consistency through this distributed network. A globally consistent ledger is easy under synchronous conditions, but achieving "global state consistency" under asynchronous conditions is very challenging; at present only ICP has a real chance of doing it.
Note the distinction here between strict "global state consistency" and "eventual global state consistency". Strict global state consistency requires all participating nodes to agree on the order of all operations, produce identical final results, be objectively consistent regardless of node failures, have consistent clocks, and process all operations instantly and synchronously; this is what the IC guarantees within a single subnet. But to guarantee it globally, all subnets as a whole would have to achieve all of the above for the same data and state, which in practice is impossible within low latency; this is precisely the bottleneck that prevents public chains like ETH from scaling horizontally. Therefore IC reaches consensus within a single subnet, while other subnets quickly verify through communication that the results are not falsified, achieving "eventual global state consistency". This is equivalent to combining the decentralization of large public chains with the high throughput and low latency of consortium chains, while achieving unlimited horizontal scaling of subnets through mathematical and cryptographic proofs.
To sum up, according to the ultimate development direction of blockchain that I laid out at the beginning of this article, [Sovereignty] + [Decentralized multi-point centralization] + [Transparency] + [Control of code execution] + [Infinite scalability with linear cost]:
Sovereignty is the only problem blockchain needs to solve, including asset sovereignty, data sovereignty, speech sovereignty, and so on; otherwise there is no need for blockchain. IC totally did it.
Decentralized multi-point centralization: IC did it too.
Transparency: IC totally did it.
Control of code execution: currently only IC does this.
Infinite scalability with linear cost: currently only IC does this.
Based on the above facts and my thinking and analysis, I believe that ICP = Blockchain 3.0.
This article only aims to discuss the future direction of the blockchain industry and why ICP is likely to be the innovation engine of blockchain 3.0. Undeniably, there are problems in ICP's tokenomics design, and its ecosystem has not yet exploded; ICP still needs to keep working to reach the ultimate blockchain 3.0 I have in mind. But don't worry: this is inherently hard, and even the Dfinity Foundation has prepared a 20-year roadmap. Only 2 years after mainnet launch it has already achieved this much, and it is using cryptography to connect to the BTC and ETH ecosystems; I believe it will reach a higher level within 3 years.
Future