Unaligned Incentives: Pricing Attacks Against Blockchain Rollups
- URL: http://arxiv.org/abs/2509.17126v1
- Date: Sun, 21 Sep 2025 15:39:08 GMT
- Title: Unaligned Incentives: Pricing Attacks Against Blockchain Rollups
- Authors: Stefanos Chaliasos, Conner Swann, Sina Pilehchiha, Nicolas Mohnblatt, Benjamin Livshits, Assimakis Kattis,
- Abstract summary: We identify critical mis-pricings in existing rollup transaction fee mechanisms that allow for two powerful attacks. An adversary can saturate the L2's DA batch capacity with compute-light, data-heavy transactions, forcing low-gas batches that enable both L2 DoS attacks and finality-delay attacks. We propose comprehensive mitigations to prevent these attacks and suggest how some practical uses of multi-dimensional rollup TFMs can rectify the identified mis-pricings.
- Score: 5.416444421133014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rollups have become the de facto scalability solution for Ethereum, securing more than $55B in assets. They achieve scale by executing transactions on a Layer 2 ledger, while periodically posting data and finalizing state on the Layer 1, either optimistically or via validity proofs. Their fees must simultaneously reflect the pricing of three resources: L2 costs (e.g., execution), L1 DA, and underlying L1 gas costs for batch settlement and proof verification. In this work, we identify critical mis-pricings in existing rollup transaction fee mechanisms (TFMs) that allow for two powerful attacks. Firstly, an adversary can saturate the L2's DA batch capacity with compute-light, data-heavy transactions, forcing low-gas transaction batches that enable both L2 DoS attacks and finality-delay attacks. Secondly, by crafting prover killer transactions that maximize proving cycles relative to the gas charged, an adversary can effectively stall proof generation, delaying finality by hours and inflicting prover-side economic losses on the rollup at minimal cost. We analyze the above attack vectors across the major Ethereum rollups, quantifying adversarial costs and protocol losses. We find that the first attack enables periodic DoS on rollups, lasting up to 30 minutes, at a cost below 2 ETH for most rollups. Moreover, we identify three rollups that are exposed to indefinite DoS at a cost of approximately 0.8 to 2.7 ETH per hour. The attack can be further modified to increase finalization delays by a factor of about 1.45x to 2.73x, compared to direct L1 blob-stuffing, depending on the rollup's parameters. Furthermore, we find that the prover killer attack induces a finalization latency increase of about 94x. Finally, we propose comprehensive mitigations to prevent these attacks and suggest how some practical uses of multi-dimensional rollup TFMs can rectify the identified mis-pricings.
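The abstract's cost figures (indefinite DoS at roughly 0.8 to 2.7 ETH per hour) follow from a simple back-of-the-envelope model: the attacker pays the L2 fee for enough data-heavy bytes to saturate every DA batch over the attack window. A minimal sketch of that arithmetic, using hypothetical parameter values chosen purely for illustration (the paper's actual per-rollup parameters are not reproduced here):

```python
def attack_cost_eth(duration_hours: float,
                    batches_per_hour: float,
                    da_bytes_per_batch: int,
                    l2_fee_per_byte_eth: float) -> float:
    """Estimated cost to keep a rollup's DA batches saturated with
    compute-light, data-heavy transactions for `duration_hours`.

    All arguments are illustrative assumptions, not values from the paper.
    """
    total_bytes = duration_hours * batches_per_hour * da_bytes_per_batch
    return total_bytes * l2_fee_per_byte_eth

# Example: one hour of saturation at 30 batches/hour, one 128 KiB blob
# per batch, and an assumed data fee of 4e-7 ETH per byte.
cost = attack_cost_eth(duration_hours=1.0,
                       batches_per_hour=30,
                       da_bytes_per_batch=128 * 1024,
                       l2_fee_per_byte_eth=4e-7)
print(f"{cost:.2f} ETH")  # → 1.57 ETH
```

With these assumed inputs the hourly cost lands inside the 0.8 to 2.7 ETH/hour band the paper reports for the three exposed rollups; the mis-pricing the paper identifies is precisely that this data fee undercharges relative to the DoS and finality-delay damage the saturated batches cause.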
Related papers
- Exploiting Liquidity Exhaustion Attacks in Intent-Based Cross-Chain Bridges [5.543794703214136]
Cross-chain bridges let off-chain entities (solvers) immediately fulfill users' orders by fronting their own liquidity. While improving user experience, this approach introduces new systemic risks, such as solver liquidity concentration and delayed settlement. We propose a new class of attacks called liquidity exhaustion attacks and a replay-based, parameterized attack-simulation framework.
arXiv Detail & Related papers (2026-02-19T20:13:36Z) - Rethinking Latency Denial-of-Service: Attacking the LLM Serving Framework, Not the Model [12.046157489400457]
Large language models face an emerging and critical threat known as latency attacks. Because inference is inherently expensive, even modest slowdowns can translate into substantial operating costs and severe availability risks. We introduce a new Fill and Squeeze attack strategy targeting the state transitions of the scheduler.
arXiv Detail & Related papers (2026-02-08T09:05:54Z) - Time Is All It Takes: Spike-Retiming Attacks on Event-Driven Spiking Neural Networks [87.16809558673403]
Spiking neural networks (SNNs) compute with discrete spikes and exploit temporal structure. We study a timing-only adversary that retimes existing spikes while preserving spike counts and amplitudes in event-driven SNNs.
arXiv Detail & Related papers (2026-02-03T09:06:53Z) - P2P: A Poison-to-Poison Remedy for Reliable Backdoor Defense in LLMs [49.908234151374785]
During fine-tuning, large language models (LLMs) are increasingly vulnerable to data-poisoning backdoor attacks. We propose Poison-to-Poison (P2P), a general and effective backdoor defense algorithm. We show that P2P can neutralize malicious backdoors while preserving task performance.
arXiv Detail & Related papers (2025-10-06T05:45:23Z) - A Secure Sequencer and Data Availability Committee for Rollups (Extended Version) [7.299239909796724]
Layer 2 rollups (L2s) are a faster alternative to conventional blockchains. L2s perform most computation off-chain, using the underlying blockchain (L1) minimally to guarantee correctness. We propose fraud-proof mechanisms, arbitrated by L1 contracts, to detect and generate evidence of dishonest behavior.
arXiv Detail & Related papers (2025-09-08T12:32:41Z) - BitHydra: Towards Bit-flip Inference Cost Attack against Large Language Models [22.695878922889715]
We introduce the first bit-flip inference cost attack that directly modifies model weights to induce persistent overhead for all users of a compromised LLM. We instantiate this attack paradigm with BitHydra, which (1) minimizes a loss that suppresses the end-of-sequence (EOS) token and (2) employs an efficient yet effective critical-bit search focused on the EOS embedding vector.
arXiv Detail & Related papers (2025-05-22T13:36:00Z) - Fast Proxies for LLM Robustness Evaluation [48.53873823665833]
We compare the ability of fast proxy metrics to predict the real-world robustness of an LLM against a simulated attacker ensemble. This allows us to estimate a model's robustness to computationally expensive attacks without running the attacks themselves.
arXiv Detail & Related papers (2025-02-14T11:15:27Z) - Denial-of-Service Poisoning Attacks against Large Language Models [64.77355353440691]
LLMs are vulnerable to denial-of-service (DoS) attacks, where spelling errors or non-semantic prompts trigger endless outputs without generating an [EOS] token.
We propose poisoning-based DoS attacks for LLMs, demonstrating that injecting a single poisoned sample designed for DoS purposes can break the output length limit.
arXiv Detail & Related papers (2024-10-14T17:39:31Z) - Blockchain Amplification Attack [13.13413794919346]
We show that an attacker can amplify network traffic at modified nodes by a factor of 3,600 and cause economic damage of approximately 13,800 times the amount needed to carry out the attack. Despite these risks, aggressive latency reduction may still be profitable enough for various providers to justify the existence of modified nodes.
arXiv Detail & Related papers (2024-08-02T18:06:33Z) - Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks generate significant interest for black-box applications.
Existing works essentially optimize the single-level objective directly with respect to the surrogate model.
We propose a bilevel optimization paradigm that explicitly reformulates the nested relationship between the upper-level (UL) pseudo-victim attacker and the lower-level (LL) surrogate attacker.
arXiv Detail & Related papers (2024-06-04T07:45:27Z) - Rolling in the Shadows: Analyzing the Extraction of MEV Across Layer-2 Rollups [14.369191644932954]
Decentralized finance embraces a series of exploitative economic practices known as Maximal Extractable Value (MEV). In this paper, we investigate the prevalence and impact of MEV on prominent rollups such as Arbitrum and zkSync over a nearly three-year period. While our findings did not detect any sandwiching activity on popular rollups, we did identify the potential for cross-layer sandwich attacks.
arXiv Detail & Related papers (2024-04-30T18:34:32Z) - DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification [63.65630243675792]
Diffusion-based purification defenses leverage diffusion models to remove crafted perturbations of adversarial examples.
Recent studies show that even advanced attacks cannot break such defenses effectively.
We propose a unified framework DiffAttack to perform effective and efficient attacks against diffusion-based purification defenses.
arXiv Detail & Related papers (2023-10-27T15:17:50Z) - Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings [64.37621685052571]
We conduct the first systematic empirical study of transfer attacks against major cloud-based ML platforms.
The study leads to a number of interesting findings that are inconsistent with existing ones.
We believe this work sheds light on the vulnerabilities of popular ML platforms and points to a few promising research directions.
arXiv Detail & Related papers (2022-04-07T12:16:24Z)