Akash is a decentralized computing marketplace designed to connect underutilized GPU supply with users who need GPU compute, aiming to become the “Airbnb” of GPU computing. Unlike many competitors, it focuses primarily on general-purpose, enterprise-grade GPU compute. Since the launch of its GPU mainnet in September 2023, Akash has had 150-200 GPUs on the network, with utilization rates of 50-70% and annualized gross merchandise value (GMV) of $500,000 to $1 million. Consistent with other network marketplaces, Akash charges a 20% take rate on USDC payments.
We are at the start of a massive infrastructure transition toward GPU-driven parallel processing. Artificial intelligence is projected to add $7 trillion to global GDP while automating 300 million jobs. Nvidia, the leading GPU maker, expects revenue to grow from $27 billion in 2022 to $60 billion in 2023, reaching roughly $100 billion by 2025. The share of cloud providers’ (AWS, GCP, Azure, etc.) capital expenditures going to Nvidia chips has grown from the single digits to 25% and is expected to exceed 50% in the coming years. (Source: Koyfin)
Morgan Stanley estimates the hyperscaler GPU infrastructure-as-a-service (IaaS) opportunity will reach $40-50 billion by 2025. As an example, if 30% of that GPU compute is resold at a 30% discount on a secondary market, it represents a $10 billion revenue opportunity. Adding another $5 billion from non-hyperscaler sources brings the total to $15 billion. If Akash captures 33% of that opportunity ($5 billion of GMV) at a 20% take rate, that translates to $1 billion in protocol revenue. At a 10x multiple, this implies nearly $10 billion of market capitalization.
In November 2022, OpenAI launched ChatGPT, setting the record for the fastest-growing user base, reaching 100 million users by January 2023 and 200 million by May. The impact is enormous: estimates suggest that raising productivity and automating 300 million jobs could add $7 trillion to global GDP.
Artificial intelligence has rapidly gone from a niche research area to companies’ largest spending focus. GPT-4 reportedly cost $100 million to create, with annual operating costs of $250 million. GPT-5 is reported to require 25,000 A100 GPUs (equivalent to $2.25 billion of Nvidia hardware) and may require a total hardware investment of $10 billion. This has sparked an arms race among companies to secure enough GPUs to support AI-driven enterprise workloads.
The AI revolution has triggered a monumental infrastructure shift, accelerating the transition from CPU-based serial processing to GPU-based parallel processing. GPUs were originally built for large-scale rendering and image processing, workloads that run many operations simultaneously, while CPUs are designed for serial execution and cannot match that concurrency. Thanks to their high memory bandwidth, GPUs have gradually taken on other highly parallel computations, such as training, optimizing, and improving AI models.
Nvidia, a pioneer of GPU technology in the 1990s, paired its top-tier hardware with the CUDA software stack, maintaining a lead over competitors (primarily AMD and Intel) for many years. CUDA, introduced in 2006, lets developers optimize workloads for Nvidia GPUs and simplifies GPU programming. With 4 million CUDA users and over 50,000 developers building on it, CUDA anchors a powerful ecosystem of programming languages, libraries, tools, applications, and frameworks. Over time, we expect data center spending on Nvidia GPUs to surpass spending on Intel and AMD CPUs.
Nvidia GPUs’ share of spending by hyperscalers and large tech companies has risen rapidly: from low single-digit percentages in the early 2010s, to mid single digits from 2015 to 2022, to 25% in 2023. We believe Nvidia will account for over 50% of cloud providers’ capital expenditures in the coming years, driving its revenue from $27 billion in 2022 to $100 billion in 2025 (Source: Koyfin).
Morgan Stanley estimates the hyperscaler GPU infrastructure-as-a-service (IaaS) market will reach $40-50 billion by 2025. That is still only a small slice of total hyperscaler revenue; the top three hyperscalers currently generate over $250 billion in revenue.
Given the strength of demand, GPU shortages have been widely reported by outlets such as The New York Times and The Wall Street Journal. The CEO of AWS has said, “Demand exceeds supply, and that’s true for everyone.” On Tesla’s Q2 2023 earnings call, Elon Musk said, “We will continue to use — we’ll take Nvidia hardware as fast as Nvidia will deliver it to us.”
Index Ventures has had to purchase chips on behalf of its portfolio companies. Outside of the major tech companies, buying chips directly from Nvidia is nearly impossible, and obtaining capacity from hyperscalers involves long waits.
Below is GPU pricing for AWS and Azure. As shown, 1-3 year reservations carry 30-65% discounts. Because hyperscalers are investing billions of dollars to expand capacity, they favor commitments that provide revenue visibility. For customers, the break-even works out roughly as follows: if expected utilization exceeds ~60%, one-year reserved pricing beats on-demand; if it exceeds ~35%, the three-year reservation wins. Any unused reserved capacity can then be resold, significantly reducing total cost.
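To see where those thresholds come from: a reservation breaks even with on-demand when the fixed committed spend equals utilization-weighted on-demand spend. A minimal sketch, assuming illustrative discounts of 40% (one-year) and 65% (three-year) rather than any specific vendor quote:

```python
# Break-even utilization for reserved vs. on-demand GPU pricing.
# The discount inputs below are illustrative assumptions, not vendor quotes.

def break_even_utilization(reserved_discount: float) -> float:
    """Utilization above which a reservation is cheaper than on-demand.

    Reserved cost is fixed at (1 - discount) * rate * hours, while
    on-demand cost is utilization * rate * hours; the two curves cross
    where utilization == 1 - discount.
    """
    return 1.0 - reserved_discount

print(f"1-year at 40% off -> break-even {break_even_utilization(0.40):.0%}")  # 60%
print(f"3-year at 65% off -> break-even {break_even_utilization(0.65):.0%}")  # 35%
```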
If hyperscalers build a $50 billion GPU compute leasing business, reselling unused compute becomes a large opportunity: assuming 30% of that capacity is resold at a 30% discount, it would create a roughly $10 billion market for reselling hyperscaler GPU compute.
Beyond the hyperscalers, there are other supply sources, including large enterprises (e.g., Meta, Tesla), competitors (CoreWeave, Lambda, etc.), and well-funded AI startups. From 2022 to 2025, Nvidia is expected to generate roughly $300 billion in revenue. Assuming an additional $70 billion worth of chips sits outside the hyperscalers, reselling 20% of that compute at a 30% discount would add another $10 billion, for a total of roughly $20 billion.
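The sizing above reduces to one formula: resale revenue = capacity × share resold × (1 − discount). A quick sketch using the report’s round numbers (all inputs are assumptions taken from the text):

```python
# Sizing the GPU compute resale opportunity with the report's round numbers.

def resale_market(capacity_usd: float, resold_share: float, discount: float) -> float:
    """Revenue from reselling a share of capacity at a discount to list price."""
    return capacity_usd * resold_share * (1.0 - discount)

hyperscaler = resale_market(50e9, resold_share=0.30, discount=0.30)  # ~$10.5B
other       = resale_market(70e9, resold_share=0.20, discount=0.30)  # ~$9.8B
print(f"Hyperscaler resale:     ${hyperscaler / 1e9:.1f}B")
print(f"Non-hyperscaler resale: ${other / 1e9:.1f}B")
print(f"Total:                  ${(hyperscaler + other) / 1e9:.1f}B")  # ~$20B
```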
Akash is a decentralized computing marketplace founded in 2015; it launched its mainnet as a Cosmos application chain in September 2020. Its vision is to democratize cloud computing by offering compute significantly cheaper than the hyperscalers.
The blockchain handles coordination and settlement, storing records of requests, bids, leases, and settlements, while execution happens off-chain. Akash hosts containers in which users can run any cloud-native application, and it has built a suite of cloud management services around Kubernetes to orchestrate and manage those containers. Deployments are transferred over a private peer-to-peer network, separate from the blockchain.
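To make the on-chain flow concrete, here is a minimal sketch of the request-bid-lease lifecycle the paragraph describes. The types, field names, and matching rule are our own illustration (Akash runs a reverse auction, but these are not its actual message formats):

```python
# Illustrative model of the marketplace lifecycle: on-chain coordination
# and settlement, off-chain execution. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Order:          # tenant posts a resource request on-chain
    order_id: int
    gpu_units: int
    max_price: float  # max the tenant will pay per settlement period

@dataclass
class Bid:            # providers bid to serve the order
    order_id: int
    provider: str
    price: float

@dataclass
class Lease:          # winning bid becomes a lease; payments settle on-chain
    order_id: int
    provider: str
    price: float

def match_order(order: Order, bids: list[Bid]) -> Lease | None:
    """Reverse auction: the cheapest bid at or under the tenant's cap wins."""
    valid = [b for b in bids
             if b.order_id == order.order_id and b.price <= order.max_price]
    if not valid:
        return None
    best = min(valid, key=lambda b: b.price)
    return Lease(order.order_id, best.provider, best.price)

lease = match_order(Order(1, gpu_units=2, max_price=1.50),
                    [Bid(1, "provider-a", 1.20), Bid(1, "provider-b", 0.95)])
print(lease)  # Lease(order_id=1, provider='provider-b', price=0.95)
```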
The first version of Akash focused on CPU compute. At its peak, the business leased 4,000-5,000 CPUs at an annualized GMV of roughly $200,000. It faced two main problems: onboarding friction (users had to set up a Cosmos wallet and pay for workloads in AKT) and churn (wallets had to be topped up with AKT, and if AKT ran out or its price moved, workloads stopped with no alternative provider).
Over the past year, Akash has transitioned from CPU computing to GPU computing, taking advantage of this paradigm shift in computing infrastructure and supply shortages.
Akash’s GPU network launched on mainnet in September 2023. Since then, Akash has scaled to 150-200 GPUs and achieved utilization rates of 50-70%.
Below is a comparison of Nvidia A100 prices from several vendors. Akash’s prices are 30-60% cheaper than the competition.
There are roughly 19 unique providers on the Akash network, located across 7 countries and supplying more than 15 chip types. The largest provider is Foundry, a DCG-backed company that also engages in crypto mining and staking.
Akash focuses primarily on enterprise-grade chips (e.g., the A100), which traditionally power AI workloads. It also offers some consumer-grade chips, though these have historically been hard to use for AI due to power consumption, software, and latency issues. Several companies, such as FedML, io.net, and Gensyn, are trying to build an orchestration layer to enable AI edge computing.
As the market increasingly shifts toward inference rather than training, consumer-grade GPUs may become more viable, but currently the market is focused on using enterprise-grade chips for training.
On the supply side, Akash targets public hyperscalers, private GPU vendors, crypto miners, and enterprises holding underutilized GPUs.
In most of 2022 and 2023, prior to the launch of the GPU network, the annualized Gross Merchandise Value (GMV) for CPUs was approximately $50,000. Since the introduction of the GPU network, the GMV has reached an annualized level of $500,000 to $1,000,000, with utilization rates on the GPU network ranging from 50% to 70%.
Akash has been working on reducing user friction, improving user experience, and broadening use cases.
Akash is also validating use cases on the network. During the GPU testnet, the community demonstrated that the network can deploy and run inference for many popular AI models; the Akash Chat and Stable Diffusion XL applications both showcase this. We believe the inference market will eventually be much larger than the training market: AI-driven search today costs about $0.02 per query (roughly 10x Google’s current cost per search), and at 3 trillion searches per year that is $60 billion annually. For context, training an OpenAI model costs approximately $100 million. Both costs will likely fall, but the comparison highlights the significant difference in long-term revenue pools.
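The back-of-envelope math behind that comparison, using the report’s round numbers:

```python
# Annual cost of AI-driven search vs. a one-off training run.
# All inputs are the report's round figures, not measured data.
cost_per_query = 0.02      # USD, ~10x Google's current cost per search
queries_per_year = 3e12    # ~3 trillion searches per year

inference_spend = cost_per_query * queries_per_year
training_run = 100e6       # ~$100M to train an OpenAI model

print(f"Annual inference spend: ${inference_spend / 1e9:.0f}B")               # $60B
print(f"Inference vs. one training run: {inference_spend / training_run:,.0f}x")  # 600x
```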
Since most demand for high-end chips today is for training, Akash is also working to demonstrate that the network can train a model, with a launch planned for early 2024. After first training on homogeneous GPUs from a single provider, the next project is to train across heterogeneous GPUs from multiple providers.
Akash’s roadmap is extensive. Product features in progress include privacy management support, on-demand/reserved instances, and better discoverability.
Akash charges a 4% take rate on AKT payments and a 20% take rate on USDC payments. The 20% rate is in line with traditional online marketplaces (Uber takes roughly 30%, for example).
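Given the two-tier schedule, the protocol’s effective take rate depends on how much GMV settles in USDC versus AKT. A small sketch, where the 75% USDC share is purely an illustrative assumption:

```python
# Blended protocol fee under the dual take-rate schedule described above
# (4% for AKT payments, 20% for USDC). Rates are from the text.
def protocol_fee(gmv_usd: float, usdc_share: float) -> float:
    """Blended fee given the share of GMV settled in USDC vs. AKT."""
    return gmv_usd * (usdc_share * 0.20 + (1.0 - usdc_share) * 0.04)

# e.g., $1M annualized GMV with an assumed 75% paid in USDC:
print(f"${protocol_fee(1_000_000, usdc_share=0.75):,.0f}")  # $160,000
```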
Roughly 58% of AKT is in circulation (225 million circulating against a maximum supply of 388 million). The annual inflation rate has increased from 8% to 13%. Currently, 60% of circulating tokens are staked, with a 21-day unbonding period.
In addition, 40% (up from 25%) of inflation and of the fees taken on GMV flows into the community pool, which currently holds about $10 million in AKT.
How these funds will be used is still being decided, but they are expected to be allocated across public goods, provider incentives, staking, potential burns, and the community pool.
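Putting those figures together, a rough staking-yield sketch (simplified Cosmos-style math; we assume inflation accrues on circulating supply and that the 40% community cut applies to all newly minted tokens, which may not match the exact protocol parameters):

```python
# Rough staking-yield math from the tokenomics figures above.
circulating = 225e6
max_supply = 388e6
staked = 0.60 * circulating
inflation = 0.13                         # current annual rate per the text

new_tokens = inflation * circulating     # assumption: minted on circulating supply
to_community_pool = 0.40 * new_tokens    # assumption: 40% cut applies to all inflation
to_stakers = new_tokens - to_community_pool

print(f"Circulating share: {circulating / max_supply:.0%}")  # ~58%
print(f"Implied staking yield: {to_stakers / staked:.1%}")   # ~13%
```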
On January 19, Akash launched a $5 million pilot incentive program aimed at bringing 1,000 A100s onto the platform. Over time, the goal is to give participating providers supply-side revenue visibility (e.g., 95% effective utilization).
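To illustrate what that visibility could be worth to a provider, a quick revenue-floor sketch; the $1.10/hr rate is an assumption for illustration, not an Akash quote:

```python
# Revenue floor implied by a 95% effective-utilization guarantee.
HOURS_PER_YEAR = 24 * 365

def revenue_floor(gpus: int, hourly_rate: float, effective_util: float = 0.95) -> float:
    """Annual revenue a provider could count on at the guaranteed utilization."""
    return gpus * hourly_rate * HOURS_PER_YEAR * effective_util

# e.g., 1,000 A100s at an assumed $1.10/hr:
print(f"${revenue_floor(1_000, 1.10):,.0f}/yr")  # ~$9.2M
```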
Here are several scenarios and illustrative assumptions about Akash’s key drivers:
Near-term scenario: If Akash can reach 15,000 A100s, we estimate this generates nearly $150 million in GMV. At a 20% take rate, that yields $30 million in protocol fees. Given the growth trajectory, a 100x multiple (reflecting AI-sector valuations) implies a value of roughly $3 billion.
Base case: We take the IaaS market opportunity at Morgan Stanley’s $50 billion estimate. At 70% utilization, $15 billion of capacity is resaleable; at a 30% discount, that is about $10 billion, plus another $10 billion from non-hyperscaler sources. Given that marketplaces typically enjoy strong moats (Airbnb holds roughly 20% of vacation rentals, Uber ~75% of ride-share, and DoorDash ~65% of food delivery), we assume Akash can reach a 33% share. At a 20% take rate, this generates roughly $1 billion in protocol fees; at a 10x multiple, Akash would be worth about $10 billion.
Upside case: Our upside case uses the same framework as the base case but assumes a $20 billion resale opportunity, driven by penetration of more unique GPU sources and higher share gains.
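All three cases share one formula: value = GMV × take rate × fee multiple. A sketch with the report’s inputs (the $5 billion base-case GMV follows the summary’s 33% share of the ~$15 billion opportunity):

```python
# Scenario valuation: value = GMV x take rate x multiple on protocol fees.
def scenario_value(gmv_usd: float, take_rate: float, multiple: float) -> float:
    return gmv_usd * take_rate * multiple

near_term = scenario_value(150e6, 0.20, 100)  # 15,000 A100s -> ~$150M GMV
base      = scenario_value(5e9,   0.20, 10)   # 33% share of ~$15B resale market
print(f"Near-term: ${near_term / 1e9:.0f}B")  # $3B
print(f"Base case: ${base / 1e9:.0f}B")       # $10B
```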
For context: Nvidia is a public company with a $1.2 trillion market capitalization, while in private markets OpenAI is valued at $80 billion, Anthropic at $20 billion, and CoreWeave at $7 billion. In crypto, Render and Bittensor (TAO) are valued at over $2 billion and over $5.5 billion, respectively.
Concentration of Supply and Demand: Currently, the majority of GPU demand comes from large tech companies for training extremely large and complex LLMs (Large Language Models). Over time, we anticipate more interest in training smaller-scale AI models, which are cheaper and better suited to handle private data. Fine-tuning will become increasingly important as models shift from being general-purpose to vertically specific. Ultimately, as usage and adoption accelerate, inference will become increasingly critical.
Competition: Many crypto and non-crypto companies are trying to unlock underutilized GPUs; notable crypto protocols in the space include Render, io.net, and Gensyn.
Latency and Technical Challenges: AI training is extremely resource-intensive and today runs on chips co-located within a single data center; it remains unclear whether models can be trained on decentralized, non-co-located GPU clusters. OpenAI plans to build its next training facility with over 75,000 GPUs in Arizona. Orchestration layers such as FedML, io.net, and Gensyn are working to address these problems.