Disclosure: The views and opinions expressed herein belong solely to the authors and do not represent the views and opinions of crypto.news editorials.
The second quarter of 2025 marks a reality check for blockchain scaling, with the cracks in the layer-2 model widening even as capital continues to flow into rollups and sidechains. The original promise of L2 was simple: scale L1. Instead, costs, delays, liquidity fragmentation, and a fractured user experience continue to add up.
Summary
- L2s were intended to extend Ethereum, but they introduce new problems, often relying on a centralized sequencer that can become a single point of failure.
- At its core, an L2 handles sequencing and state computation, then settles to L1 via optimistic or ZK rollups. Each has trade-offs: optimistic rollups have slow finality, while ZK rollups are computationally expensive.
- The efficient future lies in separating computation from validation: use centralized supercomputers for raw computation and decentralized networks for parallel verification, achieving scalability without sacrificing security.
- Blockchain's "total order" model is outdated. Moving to local, account-based ordering unlocks massive parallelism, ends the L2 compromise, and paves the way for a scalable, future-proof Web3 foundation.
New projects, such as stablecoin payment networks, are starting to question the L2 paradigm, asking whether L2s are really secure and whether their sequencers are a single point of failure or censorship. Within Web3, this often leads to the pessimistic view that fragmentation is inevitable.
Are we building our future on solid foundations or on a house of sand? L2s must face and answer these questions. After all, if Ethereum (ETH)'s base consensus layer were inherently fast, cheap, and infinitely scalable, the entire L2 ecosystem as we know it today would be redundant. The myriad rollups and sidechains were proposed as "add-ons to L1" to alleviate the fundamental limitations of the underlying chain. They are a form of technical debt: complex, piecemeal workarounds that burden Web3 users and developers.
To answer these questions, we need to break down the entire L2 concept into its basic components and uncover a path to a more robust and efficient design.
Structure of L2
Structure determines function. This is a fundamental principle of biology, and it applies equally to computer systems. Choosing the right architecture for an L2 therefore requires a careful look at what it actually does.
At its core, every L2 performs two essential functions: sequencing, i.e., ordering transactions, and computing and proving new states. A sequencer, whether a centralized entity or a decentralized network, collects, orders, and batches user transactions. The batch is then executed, producing state updates (e.g., new token balances). This state must then be settled on L1, via an optimistic or ZK rollup, for security.
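To make these two functions concrete, here is a minimal Go sketch of the execution half: a sequenced batch of transfers is applied deterministically to a balance map. The `Tx`, `State`, and `applyBatch` names are illustrative assumptions, not any real rollup's API.

```go
package main

import "fmt"

// Tx is a toy transfer transaction; the fields are illustrative only.
type Tx struct {
	From, To string
	Amount   uint64
}

// State maps accounts to token balances: a stand-in for an L2's state.
type State map[string]uint64

// applyBatch plays the role of the execution step: given a sequenced
// batch, it deterministically computes the next state. Any node that
// replays the same ordered batch must arrive at the same result.
func applyBatch(s State, batch []Tx) State {
	next := State{}
	for k, v := range s {
		next[k] = v
	}
	for _, tx := range batch {
		if next[tx.From] >= tx.Amount {
			next[tx.From] -= tx.Amount
			next[tx.To] += tx.Amount
		} // insufficient balance: the transfer is simply skipped here
	}
	return next
}

func main() {
	state := State{"alice": 100, "bob": 50}
	// The sequencer's only job is to fix this order; execution follows it.
	batch := []Tx{
		{From: "alice", To: "bob", Amount: 30},
		{From: "bob", To: "alice", Amount: 10},
	}
	fmt.Println(applyBatch(state, batch)) // map[alice:80 bob:70]
}
```

Once the order is fixed, anyone replaying the batch reaches the same state; proving that state to L1 is what the rollup machinery below is for.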
Optimistic rollups assume all state transitions are valid and rely on a challenge period (often seven days) during which anyone can submit evidence of fraud. This slows finality and creates a major UX trade-off. ZK rollups use zero-knowledge proofs to mathematically verify the correctness of every state transition before it reaches L1, allowing near-instant finality. The trade-off is heavier computation and a more complex construction: ZK provers themselves can contain bugs, with potentially disastrous results, and formally verifying them, where possible at all, is extremely expensive.
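To illustrate the optimistic finality trade-off in miniature, here is a hedged Go sketch in which a state claim finalizes only if no fraud proof arrives within a challenge window. `settleOptimistic` is a hypothetical helper, and the window is shrunk from roughly seven days to seconds.

```go
package main

import (
	"fmt"
	"time"
)

// settleOptimistic finalizes a claim only after a challenge window
// passes with no fraud proof: the essence of optimistic settlement.
// Production rollups use ~7 days; we use seconds for the demo.
func settleOptimistic(claim string, fraud <-chan string, window time.Duration) string {
	select {
	case proof := <-fraud:
		return "reverted: " + proof
	case <-time.After(window):
		return "finalized: " + claim
	}
}

func main() {
	fraud := make(chan string)
	// No challenger shows up, so the claim finalizes after the window.
	fmt.Println(settleOptimistic("state root 0xabc", fraud, 2*time.Second))
}
```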
Sequencing is a governance and design choice for each L2. Some prefer centralized sequencers for efficiency (or, perhaps, censorship capability), while others prefer decentralized ones for fairness and robustness. Ultimately, each L2 decides how to perform its own sequencing.
Generating and validating state claims, however, can be done far more efficiently. Once a batch of transactions is ordered, computing the next state becomes a pure computational task that can be performed by a single supercomputer focused solely on raw speed, without any decentralization overhead. That supercomputer can even be shared across L2s.
Once this new state is claimed, validating it becomes a separate, parallelizable process: a large network of verifiers can check claims concurrently. This is also the philosophy behind Ethereum's stateless clients and high-performance implementations like MegaETH.
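A hedged sketch of this split, again in Go: one fast "prover" emits state claims while a pool of independent verifiers checks them concurrently. The `Claim` type and the trivial `recompute` check are toy stand-ins for real batches, state roots, and proof systems.

```go
package main

import (
	"fmt"
	"sync"
)

// Claim pairs an input (standing in for a sequenced batch) with the
// output the prover asserts it produces (standing in for a state root).
type Claim struct {
	ID       int
	Input    int
	Asserted int
}

// recompute is the cheap check each verifier runs independently.
func recompute(input int) int { return input * 2 }

func main() {
	claims := make(chan Claim)
	results := make(chan string)
	var wg sync.WaitGroup

	// Spin up a pool of verifiers; throughput scales by adding workers,
	// since each claim can be checked independently of the others.
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for c := range claims {
				ok := recompute(c.Input) == c.Asserted
				results <- fmt.Sprintf("claim %d valid=%v", c.ID, ok)
			}
		}()
	}
	go func() { wg.Wait(); close(results) }()

	// A single fast prover can flood the channel; verification keeps up
	// because checking is embarrassingly parallel across claims.
	go func() {
		for i := 0; i < 8; i++ {
			claims <- Claim{ID: i, Input: i, Asserted: i * 2}
		}
		close(claims)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
```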
Parallel verification is infinitely scalable
No matter how fast an L2 (and its supercomputer) generates claims, the verification network can always keep up by adding more verifiers. Latency here is exactly the time to verify a single claim, a fixed lower bound. This is the theoretical optimum, achieved by using decentralization to verify rather than to compute.
Once sequencing and state validation are complete, the L2's job is nearly done. The final step is to post the verified state to the decentralized network, L1, for final settlement and security.
This last step exposes the problem: blockchain is a formidably expensive settlement layer for L2s. Although the main computational work happens off-chain, L2s pay a hefty premium for final settlement on L1, facing a double overhead. L1's limited throughput is strained by linearly ordering the sum of all transactions, causing congestion and high data-posting costs; on top of that, L2s must endure the finality delay inherent to L1.
For ZK rollups, that delay is a few minutes; for optimistic rollups, the week-long challenge period makes it far worse. Necessary as it may be, this security trade-off comes at a cost.
Farewell, Web3's “total order” myth
Ever since Bitcoin (BTC), people have worked hard to merge all blockchain transactions into a single total order. After all, it is called a blockchain! Unfortunately, this total-order paradigm is a costly myth and clear overkill for L2 payments. How ironic that the "world computer," one of the largest decentralized networks on the planet, behaves like a single-threaded desktop.
It's time to move on. The future is local, account-based ordering, where only transactions that touch the same account need to be ordered relative to each other, unlocking massive parallelism and true scalability.
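Here is a minimal sketch of that idea in Go, under a simplified payments model in which each transaction touches a single account: transactions are bucketed per account, each bucket is processed in order, and the buckets run in parallel with no global order anywhere. All names are illustrative assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// Payment debits a single account; in this simplified model, only
// transactions touching the same account must be ordered relative
// to each other.
type Payment struct {
	Account string
	Amount  int
}

func main() {
	txs := []Payment{
		{"alice", 10}, {"bob", 5}, {"alice", 3},
		{"carol", 7}, {"bob", 2},
	}

	// Local ordering: bucket transactions per account, preserving the
	// order in which each account's transactions arrived.
	perAccount := map[string][]Payment{}
	for _, tx := range txs {
		perAccount[tx.Account] = append(perAccount[tx.Account], tx)
	}

	var wg sync.WaitGroup
	var mu sync.Mutex
	totals := map[string]int{}

	// Each account's queue is processed sequentially, but the queues
	// themselves run in parallel: no global total order is ever needed.
	for account, queue := range perAccount {
		wg.Add(1)
		go func(account string, queue []Payment) {
			defer wg.Done()
			sum := 0
			for _, tx := range queue {
				sum += tx.Amount // in-order within this account
			}
			mu.Lock()
			totals[account] = sum
			mu.Unlock()
		}(account, queue)
	}
	wg.Wait()
	fmt.Println(totals) // map[alice:13 bob:7 carol:7]
}
```

Adding accounts adds parallelism: throughput grows with the number of independent accounts rather than being capped by a single global queue.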
Of course, global ordering implies local ordering, but local ordering alone is also a remarkably simple and direct solution. After 15 years of "blockchain," it's time to open our eyes and build a better future. The field of distributed systems has already moved from the strong-consistency concepts of the 1980s (which blockchains implement) to strong eventual consistency models, established around 2015, that unlock parallelism and concurrency. It's time for the Web3 industry to likewise leave the past behind and follow these forward-looking scientific advances.
The days of L2 compromise are over. It's time to build a foundation designed for the future, where the next wave of Web3 adoption will come.
Chen Xiaohong
Chen Xiaohong is the Chief Technology Officer at Pi Squared Inc., where he works to develop fast, parallel, and distributed systems for payments and settlement. His interests include program correctness, theorem proving, scalable ZK solutions, and applying these techniques to all programming languages. Xiaohong earned a bachelor's degree in mathematics from Peking University and a doctorate in computer science from the University of Illinois at Urbana-Champaign.