The blockchain trilemma reared its ugly head again in February at Consensus Hong Kong, putting Charles Hoskinson, the founder of Cardano, somewhat on the defensive: he had to reassure attendees that hyperscalers like Google Cloud and Microsoft Azure are not a risk to decentralization.
His point was that major blockchain projects need hyperscalers, and that they do not have to worry about a single point of failure, because:
- Advanced cryptography neutralizes the risk
- Multi-party computation distributes key material
- Confidential computing protects data in use
The argument rested on the idea that ‘if the cloud can’t see the data, the cloud can’t control the system’, and was left there due to time constraints.
But there is a counterargument to Hoskinson’s case for hyperscalers that deserves more attention.
MPC and Confidential Computing reduce exposure
This was the central pillar of Hoskinson’s argument: that technologies such as multi-party computation (MPC) and confidential computing ensure that hardware vendors do not have access to the underlying data.
They are powerful tools. But they do not resolve the underlying risk.
MPC distributes key material among multiple parties, so that no single participant can reconstruct a secret. That significantly reduces the risk of a single compromised node. However, the attack surface expands in other directions. The coordination layer, the communication channels, and the governance of participating nodes all become critical.
Instead of relying on a single key holder, the system now depends on a distributed group of actors behaving correctly and on the correct implementation of the protocol. The single point of failure does not disappear. In effect, it simply becomes a distributed trust surface.
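To make the trade-off concrete, here is a minimal sketch of the primitive behind MPC key splitting, using Shamir’s secret sharing over a prime field. It is illustrative only: the function names are ours, not any particular MPC library’s, and real stacks layer distributed key generation, authenticated channels, and proactive resharing on top of this.

```python
# Minimal sketch of threshold secret sharing (Shamir, over a prime field).
# Illustrative only: real MPC stacks add distributed key generation,
# authenticated channels, and proactive resharing on top of this primitive.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret


def split_secret(secret: int, n_shares: int, threshold: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    # Share i is the random polynomial evaluated at x = i (never at x = 0).
    return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n_shares + 1)]


def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = split_secret(secret=123456789, n_shares=5, threshold=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789  # a different 3 also work
```

Note what the sketch does not show: the shareholders must still coordinate to use the key, and that coordination layer is exactly the distributed trust surface described above.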
Confidential computing, and especially trusted execution environments (TEEs), poses a different trade-off. Data remains encrypted while in use, limiting what the hosting provider can see.
But TEEs rest on hardware assumptions. They rely on microarchitectural isolation, firmware integrity, and correct implementation. Academic literature has repeatedly shown that side-channel and architectural vulnerabilities continue to emerge in enclave technologies. The trust boundary is narrower than in the traditional cloud, but it is not eliminated.
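A schematic sketch shows why. The code below mimics the shape of remote attestation, not any vendor’s actual format (real quotes involve certificate chains and signed firmware); the key and function names are hypothetical. The point is where the chain of trust terminates.

```python
# Schematic sketch of why TEE trust bottoms out in hardware assumptions.
# This mimics the *shape* of remote attestation, not any vendor's real
# format; the key and function names here are hypothetical.
import hashlib
import hmac

VENDOR_ROOT_KEY = b"hardware-vendor-root-key"  # hypothetical; baked into silicon


def sign_quote(enclave_measurement: bytes) -> bytes:
    # In real hardware this signature is produced inside the chip itself.
    return hmac.new(VENDOR_ROOT_KEY, enclave_measurement, hashlib.sha256).digest()


def verify_quote(measurement: bytes, quote: bytes, expected_code_hash: bytes) -> bool:
    # Check 1 only establishes "the vendor's key signed this measurement".
    sig_ok = hmac.compare_digest(quote, sign_quote(measurement))
    # Check 2 assumes the measurement faithfully covers the code that runs.
    code_ok = measurement == expected_code_hash
    return sig_ok and code_ok
```

Both checks bottom out in the same assumptions: that the vendor’s root key is intact and that the measurement reflects what actually executes. A firmware flaw or a side channel undermines both without ever failing a signature check.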
More importantly, both MPC and TEEs often operate on top of hyperscaler infrastructure. The physical hardware, virtualization layer, and supply chain remain concentrated. When an infrastructure provider controls access to machines, bandwidth, or geographic regions, it retains operational influence. Cryptography can prevent data inspection, but it does not prevent throughput restrictions, shutdowns, or policy interventions.
Advanced cryptographic tools make specific attacks more difficult, but they do not eliminate the risk of failure at the infrastructure level. They replace a visible concentration with a more complex, less visible one.
The argument ‘No L1 can handle global computing’
Hoskinson argued that hyperscalers are necessary because no Layer 1 can handle the computing demands of global systems, citing the trillions of dollars that went into building those data centers.
Of course, Layer 1 networks aren’t built to run AI training loops, high-frequency trading engines, or business analytics pipelines. They exist to maintain consensus, verify state transitions, and provide durable data availability.
He’s right about what Layer 1 is for. But that is precisely the point: global systems need results that everyone can verify, even if the computation takes place elsewhere.
In modern crypto infrastructure, heavy computation increasingly takes place off-chain. What matters is that the results can be proven and verified on-chain. This is the basis of rollups, zero-knowledge systems, and verifiable compute networks.
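The asymmetry these systems exploit can be shown with a toy example: the prover does expensive work off-chain, while the verifier’s check is a single cheap operation. Real systems use succinct proofs rather than factorizations; this is only the shape of the idea.

```python
# Toy illustration of the asymmetry rollups and ZK systems rely on:
# heavy work happens off-chain, while the chain only checks a cheap
# certificate. Real systems use succinct proofs, not factorizations.

def offchain_compute(n: int) -> tuple[int, int]:
    """Expensive off-chain work: find a nontrivial factorization of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n is prime; no certificate exists")


def onchain_verify(n: int, certificate: tuple[int, int]) -> bool:
    """Cheap on-chain check: one multiplication and two range checks."""
    p, q = certificate
    return 1 < p < n and 1 < q < n and p * q == n


n = 1_000_003 * 999_983            # the statement to be proven
cert = offchain_compute(n)         # costly: ~10^6 trial divisions
assert onchain_verify(n, cert)     # trivial: a single multiplication
```

The design point is that verification cost stays constant no matter where, or on whose hardware, the heavy computation ran, which is exactly why the interesting question shifts from compute capacity to control over the compute infrastructure.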
Focusing on whether an L1 can execute global compute ignores the core question of who controls the execution and storage infrastructure behind verification.
If the computations occur off-chain but depend on centralized infrastructure, the system inherits centralized failure modes. Settlement remains decentralized in theory, but the path to producing valid state transitions is concentrated in practice.
The debate should be about dependency at the infrastructure layer, not about computing capacity within Layer 1.
Cryptographic neutrality is not the same as participation neutrality
Cryptographic neutrality is a powerful idea, and one Hoskinson leaned on in his argument. It means that rules cannot be changed arbitrarily, that hidden backdoors cannot be introduced, and that the protocol remains fair.
But cryptography still runs on hardware.
That physical layer determines who can participate, who can afford it, and who is ultimately excluded, because throughput and latency are limited by real machines and the infrastructure they run on. If hardware production, distribution, and hosting remain centralized, participation becomes economically limited, even if the protocol itself is mathematically neutral.
In compute-heavy systems, hardware is the deciding factor. It determines the cost structure, who can scale, and resilience under censorship pressure. A neutral protocol running on concentrated infrastructure is neutral in theory, but constrained in practice.
The priority must shift to cryptography in combination with diversified hardware ownership.
Without diversity in the infrastructure, neutrality becomes vulnerable under pressure. If a small group of providers can limit workloads, restrict regions, or impose compliance gates, the system inherits their influence. Fairness of rules alone does not guarantee fairness of participation.
Specialization trumps generalization in compute markets
Competing with AWS is often presented as a matter of scale, but that framing is misleading.
Hyperscalers optimize for flexibility. Their infrastructure is designed to serve thousands of workloads simultaneously. Virtualization layers, orchestration systems, enterprise compliance tools, and elasticity guarantees: these features are strengths for general-purpose computing, but they are also cost layers.
Zero-knowledge proving and verifiable compute workloads are deterministic, computationally dense, memory-bandwidth-bound, and pipeline-sensitive. In other words, they reward specialization.
A purpose-built proving network competes on proofs per dollar, proofs per watt, and latency per proof. When hardware, prover software, circuit design, and aggregation logic are vertically integrated, efficiency compounds. Removing unnecessary abstraction layers reduces overhead. Sustained throughput on persistent clusters outperforms elastic scaling for narrow, constant workloads.
In compute markets, specialization consistently outperforms generalization for stable, high-volume tasks. AWS optimizes for optionality. A dedicated proving network optimizes for one type of work.
The economic structure also differs. Hyperscalers price for enterprise margins and broad demand variability. A network aligned with protocol incentives can depreciate hardware differently and tailor performance to sustainable use rather than short-term rental models.
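A back-of-envelope model makes the economics visible. Every number below is a hypothetical placeholder, not a measurement; the point is how utilization and abstraction overhead dominate the comparison.

```python
# Back-of-envelope model of "proofs per dollar" for a fixed proving
# workload. All inputs are hypothetical placeholders, chosen only to
# show how utilization and overhead drive the comparison.

def proofs_per_dollar(proofs_per_hour: float, cost_per_hour: float,
                      utilization: float) -> float:
    return proofs_per_hour * utilization / cost_per_hour

# General-purpose cloud: elastic pricing, virtualization overhead,
# and spare capacity priced into the hourly rate.
general = proofs_per_dollar(proofs_per_hour=100, cost_per_hour=4.0,
                            utilization=0.55)

# Purpose-built prover cluster: owned hardware amortized over its life,
# no virtualization layer, near-constant load on a narrow workload.
specialized = proofs_per_dollar(proofs_per_hour=130, cost_per_hour=1.5,
                                utilization=0.95)

print(f"general cloud : {general:.1f} proofs/$")     # ~13.8
print(f"specialized   : {specialized:.1f} proofs/$") # ~82.3
```

Under these assumed inputs the specialized cluster delivers roughly six times the proofs per dollar; the exact ratio matters less than the direction, which holds whenever the workload is narrow and constant.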
The competition is not about raw scale; it is about structural efficiency for a defined workload.
Use hyperscalers, but don’t depend on them
Hyperscalers are not the enemy. They are efficient, reliable and globally distributed infrastructure providers. The problem is dependency.
A resilient architecture leverages large vendors for burst capacity, geographic redundancy, and edge distribution, but does not anchor core functions to a single provider or small cluster of providers.
Settlement, final verification, and availability of critical artifacts must remain intact even if a cloud region fails, a vendor leaves the market, or policy restrictions tighten.
This is where decentralized storage and computing infrastructure becomes a viable alternative. Proof artifacts, historical data, and verification inputs must not be revocable at a provider’s discretion. Instead, they should live on infrastructure that is economically aligned with the protocol and structurally difficult to disable.
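As a minimal sketch of that principle, assume artifacts are content-addressed and replicated across independent backends; the backend names here are hypothetical placeholders for a cloud bucket, a decentralized storage network, and a self-hosted node.

```python
# Minimal sketch of the dependency principle: a proof artifact is
# replicated across independent backends, and a read succeeds as long
# as any one of them survives. Backend names are hypothetical.
import hashlib


class ArtifactStore:
    def __init__(self, backends: dict[str, dict[str, bytes]]):
        # Each backend could be a cloud bucket, a decentralized storage
        # network, or a self-hosted node; here they are plain dicts.
        self.backends = backends

    def put(self, artifact: bytes) -> str:
        key = hashlib.sha256(artifact).hexdigest()  # content addressing
        for backend in self.backends.values():
            backend[key] = artifact                 # replicate everywhere
        return key

    def get(self, key: str) -> bytes:
        for backend in self.backends.values():
            data = backend.get(key)
            # Integrity is checked against the content hash, so no single
            # backend has to be trusted, only reachable.
            if data is not None and hashlib.sha256(data).hexdigest() == key:
                return data
        raise LookupError("artifact unavailable on all backends")


store = ArtifactStore({"cloud_region": {}, "storage_network": {}, "self_hosted": {}})
key = store.put(b"proof artifact bytes")
del store.backends["cloud_region"][key]           # a hyperscaler region vanishes
assert store.get(key) == b"proof artifact bytes"  # the system degrades, not fails
```

Because reads verify the content hash, no single backend needs to be trusted, only reachable; losing one provider degrades redundancy rather than availability.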
Hyperscalers should be used as an optional accelerator rather than something fundamental to the product. The cloud can still be useful for reach and bursts, but the system’s ability to produce and retain the proofs on which verification depends should not be determined by a single vendor.
In such a system, if a hyperscaler were to disappear tomorrow, the network would merely slow down, because the parts that matter most are owned and operated by a broader network rather than rented from a handful of big brands.
This is how you can strengthen crypto’s decentralization ethos.
