Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.
The second quarter of 2025 was a reality check for blockchain scaling: as capital continues to flow into rollups and sidechains, the cracks in the layer-2 model are widening. The original promise of L2s was simple: scale L1s. But the costs, delays, and fragmentation in liquidity and user experience continue to pile up.
Summary
- L2s were intended to scale Ethereum, but they have introduced new problems while relying on centralized sequencers that can become single points of failure.
- At their core, L2s handle sequencing and state computation, using Optimistic or ZK Rollups to settle on L1. Each has disadvantages: long finality for Optimistic Rollups and high computation cost for ZK Rollups.
- Future efficiency lies in separating computation and verification: using centralized supercomputers for computation and decentralized networks for parallel verification, enabling scalability without sacrificing security.
- The “total order” model of blockchains is outdated; the move to local, account-based ordering can unlock massive parallelism, putting an end to the “L2 compromise” and paving the way for a scalable, future-proof web3 foundation.
New use cases like stablecoin payments are starting to question the L2 paradigm: are L2s really secure, or are their sequencers closer to single points of failure and censorship? Often they end up taking a pessimistic view, concluding that fragmentation is simply inevitable in web3.
Are we building a future on a solid foundation or on a house of cards? L2s must face and answer these questions. If Ethereum’s (ETH) base consensus layer were inherently fast, cheap, and infinitely scalable, the entire L2 ecosystem as we know it would be redundant. Numerous rollups and sidechains were proposed as “L1 add-ons” to alleviate the fundamental limitations of the underlying L1s. It’s a form of technical debt: a complex, fragmented solution that has been handed over to web3 users and developers.
To answer these questions, we need to deconstruct the entire concept of an L2 into its fundamental components. Doing so reveals a path to a more robust and efficient design.
An anatomy of L2s
Structure determines function. It is a basic principle in biology that also applies to computer systems. To determine the appropriate structure and architecture of L2s, we must carefully examine their functions.
At its core, each L2 performs two crucial functions: sequencing, i.e., ordering transactions, and computing and proving the new state. A sequencer, whether a centralized entity or a decentralized network, collects, orders, and batches user transactions. The batch is then executed, resulting in an updated state (e.g., new token balances). For security, this state must be settled on the L1 via Optimistic or ZK Rollups.
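These two functions can be sketched in a few lines of Python. The transaction and state types below are illustrative only, not any real rollup’s API: sequencing picks an order for a batch, and execution deterministically applies it to token balances.

```python
# Illustrative sketch of an L2's two core functions: sequencing
# (ordering transactions) and state computation (applying them).
from dataclasses import dataclass

@dataclass
class Tx:
    sender: str
    receiver: str
    amount: int
    nonce: int

def sequence(mempool: list[Tx]) -> list[Tx]:
    """Sequencing: choose an order for the batch (here, by sender nonce)."""
    return sorted(mempool, key=lambda tx: (tx.sender, tx.nonce))

def execute(state: dict[str, int], batch: list[Tx]) -> dict[str, int]:
    """State computation: apply the ordered batch to token balances."""
    new_state = dict(state)
    for tx in batch:
        if new_state.get(tx.sender, 0) >= tx.amount:
            new_state[tx.sender] -= tx.amount
            new_state[tx.receiver] = new_state.get(tx.receiver, 0) + tx.amount
    return new_state

state = {"alice": 100, "bob": 0}
batch = sequence([Tx("alice", "bob", 30, 1), Tx("alice", "bob", 20, 0)])
state = execute(state, batch)
```

The point of the sketch is that, given an order, execution is a pure function of the batch: anyone re-running it must arrive at the same state, which is exactly what settlement on L1 has to certify.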
Optimistic Rollups assume that all state transitions are valid and rely on a challenge period (often seven days) during which anyone can submit a fraud proof. This creates a major UX trade-off and long finality times. ZK Rollups use zero-knowledge proofs to mathematically verify the correctness of each state transition before it reaches the L1, enabling near-instant finality. The disadvantage is that they are computationally intensive and complex to build; the ZK proofs themselves can contain errors with catastrophic consequences, and formally verifying them, where feasible at all, is very expensive.
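The optimistic settlement flow described above amounts to a small state machine. A toy sketch, with an illustrative seven-day window and hypothetical names:

```python
# Toy model of an optimistic-rollup state claim: assumed valid unless a
# fraud proof arrives within the challenge window. Names and durations
# are illustrative, not any specific rollup's parameters.
CHALLENGE_WINDOW = 7 * 24 * 3600  # seconds; the "often seven days" above

def claim_status(posted_at: float, now: float, fraud_proven: bool) -> str:
    """Status of a posted state claim at time `now`."""
    if fraud_proven:
        return "reverted"   # a valid fraud proof kills the claim
    if now - posted_at < CHALLENGE_WINDOW:
        return "pending"    # still challengeable: withdrawals not yet final
    return "final"          # window elapsed unchallenged

status_day_one = claim_status(0, 24 * 3600, fraud_proven=False)
status_day_eight = claim_status(0, 8 * 24 * 3600, fraud_proven=False)
```

The UX cost is visible directly: a claim posted at time zero is still only `pending` a day later, and users bridging out of the rollup wait out the full window.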
Sequencing is a governance and design choice for every L2. Some prefer a centralized solution for efficiency (or perhaps for censorship power; who knows), while others prefer a decentralized solution for greater fairness and robustness. Ultimately, L2s decide how they want to do their own sequencing.
Generating and verifying state claims is where we can do much, much better in terms of efficiency. Once a set of transactions has been sequenced, calculating the next state is a purely computational task, and that can be done using just a single supercomputer, focused solely on raw speed, without the overhead of decentralization. That supercomputer can even be shared between L2s!
Once this new state is claimed, its verification becomes a separate, parallel process. A huge network of verifiers can work in parallel to verify the claim. That is also the philosophy behind Ethereum’s stateless clients and powerful implementations, such as MegaETH.
Parallel verification is infinitely scalable
Parallel verification is infinitely scalable: no matter how quickly L2s (and that supercomputer) produce claims, the verification network can always catch up by adding more verifiers. The latency is then exactly the verification time, a fixed minimum. This is the theoretical optimum when decentralization is used for what it does best: verifying, not computing.
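A minimal sketch of this compute-once, verify-in-parallel split. A trivial sum stands in for real state execution or ZK-proof checking; the structure, a single sequential producer of claims and a horizontally scalable pool of independent checkers, is the point.

```python
# Sketch of the compute/verify split: one fast, untrusted producer makes
# state claims; a pool of verifiers re-checks them concurrently. The sum
# is a stand-in for real execution or proof verification.
from concurrent.futures import ThreadPoolExecutor

def compute_claim(batch: list[int]) -> int:
    """Centralized computation: produce an untrusted claim quickly."""
    return sum(batch)

def verify(batch: list[int], claimed: int) -> bool:
    """Any verifier can independently re-check a claim."""
    return sum(batch) == claimed

batches = [list(range(n)) for n in (10, 100, 1000)]
claims = [compute_claim(b) for b in batches]  # one sequential "supercomputer"

# Verification parallelizes trivially: add workers to keep up with claims.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(verify, batches, claims))
```

Because each verification is independent, doubling the number of verifiers roughly doubles verification throughput, while per-claim latency stays pinned at the time to check one claim.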
After sequencing and state verification, the L2’s job is almost complete. The final step is to publish the verified state to a decentralized network, the L1, for ultimate settlement and security.
This last step exposes the elephant in the room: blockchains are terrible settlement layers for L2s! The main computing work is done off-chain, yet L2s have to pay a huge premium to settle on an L1. They face a double overhead: the limited processing capacity of the L1, burdened by the total, linear ordering of all transactions, creates congestion and high data-posting costs. Additionally, they must endure the L1’s inherent finality delay.
For ZK Rollups this is minutes. For Optimistic Rollups, this involves a weeklong challenge period, a necessary but costly security trade-off.
Goodbye, the ‘total order’ myth in web3
Ever since Bitcoin (BTC), people have done their best to squeeze all of a blockchain’s transactions into one total order. After all, we are talking about blockchains! Unfortunately, this “total order” paradigm is a costly myth, and clearly overkill for L2 settlement. How ironic that one of the world’s largest decentralized networks, the “world computer,” behaves just like a single-threaded desktop!
It’s time to move on. The future is local, account-based ordering, where only transactions that interact with the same account need to be ordered, allowing for high parallelism and true scalability.
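One way to sketch local, account-based ordering is to group transactions by the accounts they touch, via a union-find over accounts: only transactions within the same group need a relative order, so distinct groups can execute in parallel. The scheme below is an illustration of the idea, not any production design.

```python
# Sketch of local ordering: transactions sharing an account are grouped
# together (union-find over accounts); disjoint groups have no ordering
# constraint between them and can run in parallel. Illustrative only.
from collections import defaultdict

def conflict_groups(txs: list[tuple[str, str]]) -> list[list[int]]:
    """Group transaction indices so that txs touching a common account
    land in the same group; local order within each group is preserved."""
    parent: dict[str, str] = {}

    def find(a: str) -> str:
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for sender, receiver in txs:
        parent[find(sender)] = find(receiver)  # union the two accounts

    groups = defaultdict(list)
    for i, (sender, _) in enumerate(txs):
        groups[find(sender)].append(i)
    return list(groups.values())

txs = [("alice", "bob"), ("carol", "dave"), ("bob", "erin")]
groups = conflict_groups(txs)
# alice→bob and bob→erin share "bob", so they must be ordered relative to
# each other; carol→dave is independent and can execute concurrently.
```

A global total order would serialize all three transfers; local ordering serializes only the two that actually conflict.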
Global order obviously implies local order, but as a solution it is naive and simplistic. After 15 years of “blockchain,” it’s time we open our eyes and create a better future. The scientific field of distributed systems has long since moved from the strong consistency model of the 1980s (which blockchains implement) to strong eventual consistency models, established around 2015, that unleash parallelism and concurrency. It’s time for the web3 industry to move on as well: leave the past behind and follow future-oriented scientific progress.
The era of the L2 compromise is over. It’s time to build on a foundation designed for the future, from which the next wave of web3 adoption will emerge.
Xiaohong Chen
Xiaohong Chen is the Chief Technology Officer at Pi Squared Inc. and is working on fast, parallel and decentralized payment and settlement systems. His interests include program correctness, theorem proving, scalable ZK solutions, and applying these techniques to all programming languages. Xiaohong received his BSc in Mathematics from Peking University and his PhD in Computer Science from the University of Illinois Urbana-Champaign.
