“The Street Where Compute Becomes a Commodity”
Compute is emerging as crypto’s next primitive: scarce, verifiable, and accessible without permission.
Decentralized compute networks (DCNs) will fill the gap as AI demand outpaces centralized supply, especially for marginal and cost-sensitive workloads.
DCNs source GPUs from small data centers, crypto miners, and individuals. Competitive pricing and token incentives make them well-suited for inference workloads.
Inference-first platforms like Inference.net and Chutes are gaining ground by offering better performance and usability. General-purpose networks serve broader use cases.
The opportunity is real, but demand is still early.
Most DCNs face five structural hurdles: coordination, developer experience, technical reliability, compliance, and economic sustainability. Fixing these is NOT optional.
Critical use cases: hosting open-source AI models and running truly sovereign, autonomous AI agents.
Developers still care about two things: cost and reliability. The networks that can abstract away the mess underneath—and prove they work—will win.
Crypto’s first act was money.
Bitcoin made it uncensorable. Ethereum made it programmable. These were breakthroughs, but they also defined the shape of what followed: finance-first systems with logic layered on top.
But what if the next core primitive isn’t a new currency?
This is our next big idea in AI & Crypto:
Compute becomes a new primitive: Scarce, Verifiable, and Liquid.
When we say “primitive”, we mean something raw and enabling. Like land. Or storage. For compute to qualify, it has to break free of centralized coordination and become a resource anyone can access, use, and build on without permission.
Decentralized compute networks will be the place where that shift takes root.
The pressure is coming from AI. Demand is compounding faster than the supply chains can adapt.
We’re not just in a GPU cycle. We’re in a global reallocation of compute.
In 2024, NVIDIA shipped more than 3.7 million datacenter GPUs. GB200s and H100s are selling faster than fabs can produce them. The company plans to triple output this year. Demand is still way ahead of supply.
Cloud infrastructure spending hit $330 billion last year. AI is now the primary growth driver for AWS, Azure, and Google Cloud. The economics are shifting upstream. Gartner expects generative AI spending to reach $644 billion in 2025, with the majority going toward infrastructure and hardware.
The physics of scaling is breaking. McKinsey estimates that by 2030, generative AI will require 2.5 × 10³¹ FLOPs per year. That’s an order-of-magnitude event. All of this has to be powered and cooled.
Goldman Sachs thinks global data center power consumption will grow 160% by 2030. The IEA forecasts 945 TWh, nearly Japan’s total electricity usage. And it’s not just watts: it’s land, latency, heat, supply chains, compliance.
Even as unit costs fall—$ per PFLOP (training) or $ per million tokens (inference) are dropping fast—the total demand for compute is ballooning.
Jevons’ paradox shows up again: greater efficiency leads to more use. Better models mean more apps, which means more usage, which means more strain on infrastructure.
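To make the Jevons dynamic concrete, here is a toy calculation. All numbers are hypothetical, chosen only to illustrate the mechanism: unit costs fall 5x, but cheaper inference unlocks 8x more usage, so total compute spend still rises.

```python
# Illustrative only: hypothetical numbers showing Jevons' paradox for compute.
# Efficiency cuts the unit cost of inference, but demand grows faster than
# the cost falls, so aggregate strain on infrastructure increases.

unit_cost_before = 10.00   # $ per million tokens (hypothetical)
unit_cost_after = 2.00     # 5x efficiency gain (hypothetical)

usage_before = 1_000       # millions of tokens served per day (hypothetical)
usage_after = 8_000        # 8x more usage unlocked by cheaper inference

spend_before = unit_cost_before * usage_before  # 10,000 $/day
spend_after = unit_cost_after * usage_after     # 16,000 $/day

# Total spend rises 60% even though each token is 5x cheaper.
print(spend_before, spend_after)
```

The point is that efficiency gains alone do not relieve infrastructure pressure; whenever usage elasticity outpaces the cost decline, total demand for compute keeps growing.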
Compute is becoming more strategic. And more unevenly distributed. That’s where crypto-native systems start to matter.
Here’s how we see the compute demand curve evolving:
Right now, demand for decentralized compute is small. But we don’t think it stays that way. As pressure builds, the curve shifts. At some point, DCNs start taking a real share of global workloads.
The rest of this piece sketches out why we think that shift is coming.
Today you don’t own your compute. You lease it. On someone else’s terms.
If you’re training or running AI models today, where do you go?
Hyperscalers (AWS, Azure, GCP) still control the high ground. They offer scale and tight integration. Large enterprises lean on them for their security, reliability, and deep integrations. For better or worse, they remain the default for most teams.
Neoclouds offer a tactical alternative. Newer companies like CoreWeave and Lambda promise lower costs and faster access, optimized for AI workflows. They run their own hardware, cut out middlemen, and pass the savings along. An H100 that costs $4–$5 per hour on AWS might run at half that on a neocloud. But they are centralized.
Then there’s decentralized compute. We hesitate to mention this in the same list because very few people use it today. Unlike the others, this model lets anyone contribute compute power to a global, permissionless pool. You might end up training your model on resources from a gamer’s rig in São Paulo or a researcher’s workstation in Berlin.
This is where compute starts looking less like a utility bill and more like a primitive.
If compute is to become a true primitive, it needs a mechanism that lets it be pooled, priced, and accessed without permission.
Decentralized Compute Networks (DCNs) are this mechanism.
The idea is simple enough: a marketplace where anyone with spare computing power can rent it out to others, without the markups of traditional cloud providers.
Supply comes from three primary sources:
Small and mid-sized data centers equipped with enterprise-grade GPUs (H100s and up) but with limited customer reach.
Former crypto miners with warehouses of idle GPU rigs.
Individuals offering spare capacity from gaming setups or professional workstations.
The core advantage is price. Unlike fixed-rate hyperscalers, decentralized networks rely on competitive bidding. Pricing adjusts in real time. Providers compete for jobs. That alone drives costs down. Add token incentives, and the economics shift even further.
The result:
GPU time can be 20 to 80 percent cheaper than AWS
Token rewards help subsidize usage and bootstrap liquidity on both sides
For startups and independent teams, the delta is more than just savings. It’s access, scalability, and a much-needed escape valve from hyperscaler constraints.
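The bidding mechanics above can be sketched as a minimal order-matching loop: providers post asks, and a job is filled from the cheapest capacity first. This is a simplification, not how any specific DCN is implemented; real networks layer on reputation, verification, and token rewards, and the provider names and prices below are made up.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str           # hypothetical provider ID
    price_per_hour: float   # $/hr asked for one H100-class GPU
    gpus_available: int

def match_job(bids, gpus_needed):
    """Greedy lowest-price matching: fill the job from the cheapest
    providers first. Returns (allocations, blended $/hr), or None
    if the pool lacks enough supply."""
    allocations, remaining = [], gpus_needed
    for bid in sorted(bids, key=lambda b: b.price_per_hour):
        if remaining == 0:
            break
        take = min(bid.gpus_available, remaining)
        allocations.append((bid.provider, take, bid.price_per_hour))
        remaining -= take
    if remaining > 0:
        return None
    total_cost = sum(n * price for _, n, price in allocations)
    return allocations, total_cost / gpus_needed

# Hypothetical supply pool mirroring the three source types above.
bids = [
    Bid("miner-warehouse", 1.60, 4),
    Bid("small-dc", 2.10, 16),
    Bid("home-rig", 1.20, 1),
]
result = match_job(bids, gpus_needed=8)
```

Because the marginal supplier sets the price on each fill, the blended rate tracks the cheapest available capacity rather than a fixed hyperscaler rate card; that is the mechanism behind the cost deltas claimed above.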