How to Serve Your Sandwich? MEV Attacks in Private L2 Mempools
- URL: http://arxiv.org/abs/2601.19570v1
- Date: Tue, 27 Jan 2026 13:00:33 GMT
- Title: How to Serve Your Sandwich? MEV Attacks in Private L2 Mempools
- Authors: Krzysztof Gogol, Manvir Schneider, Jan Gorzny, Claudio Tessone
- Abstract summary: We study the feasibility, profitability, and prevalence of sandwich attacks on rollups with private mempools. Our results suggest that sandwiching, while endemic and profitable on L1, is rare, unprofitable, and largely absent in rollups with private mempools.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the feasibility, profitability, and prevalence of sandwich attacks on Ethereum rollups with private mempools. First, we extend a formal model of optimal front- and back-run sizing, relating attack profitability to victim trade volume, liquidity depth, and slippage bounds. We complement it with an execution-feasibility model that quantifies co-inclusion constraints under private mempools. Second, we examine execution constraints in the absence of builder markets: without guaranteed atomic inclusion, attackers must rely on sequencer ordering, redundant submissions, and priority fee placement, which renders sandwiching probabilistic rather than deterministic. Third, using transaction-level data from major rollups, we show that naive heuristics overstate sandwich activity. We find that the majority of flagged patterns are false positives and that the median net return for these attacks is negative. Our results suggest that sandwiching, while endemic and profitable on Ethereum L1, is rare, unprofitable, and largely absent in rollups with private mempools. These findings challenge prevailing assumptions, refine measurement of MEV in L2s, and inform the design of sequencing policies.
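To make the sizing model concrete, below is a minimal sketch (an illustration under simplified assumptions, not the authors' exact formulation) of optimal front-run sizing against a victim swap on a constant-product AMM: the attacker's front-run is bounded by the victim's slippage tolerance, and profit comes from unwinding the front-run after the victim's trade moves the price. All pool parameters and helper names below are hypothetical.

```python
# Illustrative sketch (not the paper's exact model): optimal front-run
# sizing against a victim swap on a constant-product (x * y = k) pool.

def swap_out(reserve_in, reserve_out, dx, fee=0.003):
    """Output of swapping dx into a constant-product pool."""
    dx_eff = dx * (1 - fee)
    return reserve_out * dx_eff / (reserve_in + dx_eff)

def sandwich_profit(front, x, y, victim_in, victim_min_out, fee=0.003):
    """Attacker profit in X for a given front-run size, or None if the
    victim's slippage bound would revert the victim swap (no sandwich)."""
    # 1) Front-run: attacker buys Y with `front` units of X.
    got_y = swap_out(x, y, front, fee)
    x1, y1 = x + front, y - got_y
    # 2) Victim swap executes at the worsened price.
    victim_out = swap_out(x1, y1, victim_in, fee)
    if victim_out < victim_min_out:          # slippage bound binds
        return None
    x2, y2 = x1 + victim_in, y1 - victim_out
    # 3) Back-run: attacker sells the Y acquired in step 1.
    back_x = swap_out(y2, x2, got_y, fee)    # reversed reserve order
    return back_x - front

def optimal_front_run(x, y, victim_in, victim_min_out, steps=10_000):
    """Grid search for the profit-maximizing front-run size."""
    best = (0.0, 0.0)
    for i in range(1, steps + 1):
        a = victim_in * 5 * i / steps        # search up to 5x victim size
        p = sandwich_profit(a, x, y, victim_in, victim_min_out)
        if p is not None and p > best[1]:
            best = (a, p)
    return best

# Example: a 1000 ETH / 3,000,000 USDC pool, victim swaps 50 ETH,
# tolerating 1% slippage relative to the no-attack quote.
x0, y0 = 1_000.0, 3_000_000.0
v_in = 50.0
v_min = 0.99 * swap_out(x0, y0, v_in)
size, profit = optimal_front_run(x0, y0, v_in, v_min)
print(f"front-run {size:.2f} ETH -> profit {profit:.4f} ETH (before gas)")
```

Sweeping the front-run size shows profit rising until the victim's minimum-output bound binds, which is the qualitative trade-off between victim volume, liquidity depth, and slippage bounds that the paper's sizing model formalizes.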
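The execution-feasibility point, that sandwiching becomes probabilistic without guaranteed atomic inclusion, can be illustrated with a toy expected-value calculation. Assume (an assumption for illustration, not the paper's feasibility model) that each of k redundant submissions lands adjacent to the victim with independent probability p, while every submission pays gas regardless of the outcome.

```python
# Toy expected-value model of a probabilistic sandwich (assumption:
# independent Bernoulli adjacency per submission, gas always paid).

def expected_sandwich_value(profit, p, k, gas_per_tx):
    """Expected net value of a sandwich attempt with k redundant bundles."""
    success = 1 - (1 - p) ** k          # at least one bundle lands adjacent
    cost = 2 * k * gas_per_tx           # front + back leg per submission
    return success * profit - cost

# Example: 0.05 ETH gross profit, 5% adjacency odds per submission.
for k in (1, 3, 5, 10):
    ev = expected_sandwich_value(profit=0.05, p=0.05, k=k, gas_per_tx=0.002)
    print(f"k={k:2d}  EV={ev:+.4f} ETH")
```

Under these toy numbers, redundancy drives the expected value further negative whenever gross profits are small, consistent with the paper's finding of negative median net returns for flagged attacks.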
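Finally, the measurement claim, that naive heuristics overstate sandwich activity, is easiest to see from what such a heuristic actually matches. Below is a sketch of the classic pattern check (field names and structure are hypothetical, not the paper's pipeline):

```python
# A naive sandwich heuristic of the kind the paper argues overcounts:
# flag (buy, victim, sell) triples in one block that share a pool and
# an outer address.

from dataclasses import dataclass

@dataclass
class Swap:
    pool: str
    sender: str
    direction: str   # "buy" or "sell" of the pool's base asset

def naive_sandwiches(block_swaps):
    """Flag A-V-A patterns; prone to false positives (e.g. two-leg
    arbitrage or a split trade straddling an unrelated swap)."""
    flags = []
    for i, front in enumerate(block_swaps):
        for j in range(i + 2, len(block_swaps)):
            back = block_swaps[j]
            if (front.pool == back.pool
                    and front.sender == back.sender
                    and front.direction == "buy"
                    and back.direction == "sell"):
                victims = [v for v in block_swaps[i + 1:j]
                           if v.pool == front.pool
                           and v.sender != front.sender]
                if victims:
                    flags.append((front, victims, back))
    return flags
```

A stricter classifier would additionally require matched front/back sizes and positive net profit after fees; per the abstract, the majority of patterns flagged by the naive check fail such tests, which is why naive counts overstate sandwich activity on L2s.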
Related papers
- Certifying optimal MEV strategies with Lean
We present the first mechanized formalization of Maximal Extractable Value (MEV). MEV refers to a class of attacks on decentralized applications in which the adversary profits by manipulating the ordering, inclusion, or exclusion of transactions in a blockchain. We introduce a methodology to construct machine-checked proofs of MEV bounds, providing correctness guarantees beyond what is possible with existing techniques.
arXiv Detail & Related papers (2025-10-16T09:24:28Z)
- Adversarial Robustness in One-Stage Learning-to-Defer
Learning-to-Defer (L2D) enables hybrid decision-making by routing inputs either to a predictor or to external experts. While promising, L2D is highly vulnerable to adversarial perturbations, which can not only flip predictions but also manipulate deferral decisions. We introduce the first framework for adversarial robustness in one-stage L2D, covering both classification and regression.
arXiv Detail & Related papers (2025-10-13T03:55:55Z)
- One Token Embedding Is Enough to Deadlock Your Large Reasoning Model
We present the Deadlock Attack, a resource exhaustion method that hijacks an LRM's generative control flow. Our method achieves a 100% attack success rate across four advanced LRMs.
arXiv Detail & Related papers (2025-10-12T07:42:57Z)
- When Priority Fails: Revert-Based MEV on Fast-Finality Rollups
We study the economics of transaction reverts on rollups and show that they are not accidental failures but equilibrium outcomes of MEV strategies. We find that over 80% of reverted transactions are swaps, with half targeting USDC-WETH pools on Uniswap v3 and v4. Our findings establish reverts as a structural feature of rollup MEV microstructure and highlight the need for protocol-level reforms to sequencing, fee markets, and revert protection.
arXiv Detail & Related papers (2025-06-02T09:18:53Z)
- Mind the Gap: A Practical Attack on GGUF Quantization
We introduce the first attack on the GGUF family of post-training quantization methods. We develop an attack that trains the target malicious LLM while constraining its weights based on quantization errors. Our attack highlights that the most widely used post-training quantization method is susceptible to adversarial interference.
arXiv Detail & Related papers (2025-05-24T16:30:37Z)
- REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and Semantic Objective
We propose an adaptive and semantic optimization problem over the population of responses. Our objective doubles the attack success rate (ASR) on Llama3 and increases the ASR from 2% to 50% against the circuit-breaker defense.
arXiv Detail & Related papers (2025-02-24T15:34:48Z)
- An Interpretable N-gram Perplexity Threat Model for Large Language Model Jailbreaks
In this work, we propose a unified threat model for the principled comparison of jailbreak attacks. Our threat model checks whether a given jailbreak is likely to occur in the distribution of text. We adapt popular attacks to this threat model and, for the first time, benchmark these attacks on an equal footing.
arXiv Detail & Related papers (2024-10-21T17:27:01Z)
- Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models
Large Language Models (LLMs) are employed across various high-stakes domains, where the reliability of their outputs is crucial.
Our research investigates the fragility of uncertainty estimation and explores potential attacks.
We demonstrate that an attacker can embed a backdoor in LLMs, which, when activated by a specific trigger in the input, manipulates the model's uncertainty without affecting the final output.
arXiv Detail & Related papers (2024-07-15T23:41:11Z)
- Rolling in the Shadows: Analyzing the Extraction of MEV Across Layer-2 Rollups
Decentralized finance embraces a series of exploitative economic practices known as Maximal Extractable Value (MEV). In this paper, we investigate the prevalence and impact of MEV on prominent rollups such as Arbitrum and zkSync over a nearly three-year period. While our findings did not detect any sandwiching activity on popular rollups, we did identify the potential for cross-layer sandwich attacks.
arXiv Detail & Related papers (2024-04-30T18:34:32Z)
- Emulated Disalignment: Safety Alignment for Large Language Models May Backfire!
Large language models (LLMs) undergo safety alignment to ensure safe conversations with humans.
This paper introduces a training-free attack method capable of reversing safety alignment.
We name this method emulated disalignment (ED) because sampling from this contrastive distribution provably emulates the result of fine-tuning to minimize a safety reward.
arXiv Detail & Related papers (2024-02-19T18:16:51Z)
- Online Adversarial Attacks
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose a simple yet practical algorithm that yields a provably better competitive ratio for $k=2$ than the current best single-threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z)