When Should Selfish Miners Double-Spend?
- URL: http://arxiv.org/abs/2501.03227v3
- Date: Sat, 08 Nov 2025 23:39:16 GMT
- Title: When Should Selfish Miners Double-Spend?
- Authors: Mustafa Doger, Sennur Ulukus
- Abstract summary: The selfish mining literature usually ignores the attacker's chance to double-spend at no cost in each attack cycle. In this paper, we give a rigorous analysis of an attack where the goal of the adversary is to double-spend while mining selfishly. We show that, at each attack cycle, if the level of stubbornness is higher than $k$, the adversary gets a free shot at double-spending.
- Score: 45.776687601070705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional double-spending attack models ignore the revenue losses stemming from orphan blocks. On the other hand, the selfish mining literature usually ignores the attacker's chance to double-spend at no cost in each attack cycle. In this paper, we give a rigorous stochastic analysis of an attack where the goal of the adversary is to double-spend while mining selfishly. To do so, we first combine stubborn and selfish mining attacks, i.e., construct a strategy where the attacker acts stubborn until its private branch reaches a certain length and then switches to acting selfish. We provide the optimal stubbornness for each parameter regime. Next, we provide the maximum stubbornness that is still more profitable than honest mining and draw a connection between the level of stubbornness and the $k$-confirmation rule. We show that, at each attack cycle, if the level of stubbornness is higher than $k$, the adversary gets a free shot at double-spending. At each cycle, for a given stubbornness level, we rigorously derive the probability of double-spending. We further modify the attack in the stubborn regime in order to conceal the attack and increase the double-spending probability.
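The stubborn-then-selfish cycle described in the abstract can be illustrated with a toy Monte-Carlo simulation. This is a minimal sketch under strong simplifying assumptions (a single adversary with hashpower fraction `alpha`, no network propagation advantage, a cycle that ends once the adversary falls behind or reaches its stubbornness threshold); the function names and parameters are illustrative, not the paper's actual model.

```python
import random

def attack_cycle(alpha, stubbornness, max_steps=10_000, rng=random):
    """Simulate one simplified attack cycle.

    The adversary withholds a private branch and keeps mining on it
    until its lead over the public chain reaches `stubbornness`
    (success) or drops below zero (give up). Returns the maximum
    lead reached during the cycle.
    """
    lead = 0       # private chain length minus public chain length
    max_lead = 0
    for _ in range(max_steps):
        if rng.random() < alpha:
            lead += 1          # adversary finds the next block
        else:
            lead -= 1          # honest network finds the next block
        max_lead = max(max_lead, lead)
        if lead < 0 or lead >= stubbornness:
            break
    return max_lead

def free_double_spend_rate(alpha, stubbornness, k, cycles=100_000, seed=7):
    """Estimate how often a cycle yields a lead deeper than the
    k-confirmation depth, i.e. a 'free shot' at double-spending."""
    rng = random.Random(seed)
    hits = sum(attack_cycle(alpha, stubbornness, rng=rng) > k
               for _ in range(cycles))
    return hits / cycles

# Example: stubbornness above k, so successful cycles clear k confirmations
print(free_double_spend_rate(alpha=0.35, stubbornness=8, k=6))
```

Consistent with the abstract, the estimated rate is nonzero only when the stubbornness threshold exceeds the confirmation depth $k$, and it grows with the adversary's hashpower share.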
Related papers
- Incentive Attacks in BTC: Short-Term Revenue Changes and Long-Term Efficiencies [45.776687601070705]
Bitcoin's Difficulty Adjustment Algorithm (DAA) has been a source of vulnerability for incentive attacks such as selfish mining, block withholding and coin hopping strategies. We study the short-term revenue change per hashpower of the adversarial and honest miners for these incentive attacks. For block withholding strategies, it turns out, the honest miners outside the pool profit from the attack, usually even more than the attacker, both in the short term and the long term.
arXiv Detail & Related papers (2025-11-14T18:18:17Z) - Bitcoin Under Volatile Block Rewards: How Mempool Statistics Can Influence Bitcoin Mining [5.893888881448058]
As Bitcoin experiences more halving events, the protocol reward converges to zero, making transaction fees the primary source of miner rewards.
Previous security analyses of Bitcoin have either considered a fixed block reward model or a highly simplified volatile model.
We present a reinforcement learning-based tool designed to analyze mining strategies under a more realistic volatile model.
arXiv Detail & Related papers (2024-11-18T16:29:20Z) - BM-PAW: A Profitable Mining Attack in the PoW-based Blockchain System [9.292531856119329]
We introduce a novel mining strategy, named BM-PAW, which yields superior rewards for both the attacker and the targeted pool.
We find the BM-PAW attacker can circumvent the "miner's dilemma" through equilibrium analysis in a two-pool BM-PAW game scenario.
arXiv Detail & Related papers (2024-11-09T13:59:55Z) - Fully Automated Selfish Mining Analysis in Efficient Proof Systems Blockchains [5.864854777864723]
We study selfish mining attacks in longest-chain blockchains like Bitcoin, but where the proof of work is replaced with efficient proof systems.
We propose a novel selfish mining attack that aims to maximize expected relative revenue of the adversary.
We present a formal analysis procedure which computes an $\epsilon$-tight lower bound on the optimal expected relative revenue in the MDP.
arXiv Detail & Related papers (2024-05-07T15:44:39Z) - Tie-Breaking Rule Based on Partial Proof of Work in a Blockchain [2.9281463284266973]
We propose another countermeasure that can be easily applied to existing proof of work blockchain systems.
By using the characteristic of partial proof of work, the proposed method enables miners to choose the last-generated block in a chain tie.
Only weak synchrony, which is already met by existing systems such as Bitcoin, is required for effective functioning.
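The tie-breaking idea above can be sketched as a small comparator. This is a hypothetical illustration: the field names and the notion of a per-tip partial proof-of-work arrival time are assumptions for exposition, not the paper's protocol.

```python
from dataclasses import dataclass

@dataclass
class Tip:
    """One of several competing chain tips at the same height."""
    block_hash: str
    last_partial_pow_time: float  # local arrival time of the newest
                                  # partial PoW vouching for this tip

def break_tie(tips):
    """Among equal-height tips, mine on the one whose most recent
    partial proof-of-work arrived latest, approximating the
    'last-generated' block without trusting miner-reported timestamps."""
    return max(tips, key=lambda t: t.last_partial_pow_time)
```

The point of using partial proof-of-work here is that arrival times are observed locally by each miner, so the rule needs only the weak synchrony already assumed by systems such as Bitcoin.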
arXiv Detail & Related papers (2024-03-22T08:24:12Z) - Blockchain Bribing Attacks and the Efficacy of Counterincentives [6.66161432273916]
We analyze bribing attacks in Proof-of-Stake distributed ledgers from a game theoretic perspective.
In guided bribing, the bribe is given as long as the bribed party behaves as instructed.
In effective bribing, we show that both the protocol and the "all bribed" setting are equilibria.
arXiv Detail & Related papers (2024-02-09T11:57:38Z) - Parallel Proof-of-Work with DAG-Style Voting and Targeted Reward Discounting [0.0]
We present parallel proof-of-work with DAG-style voting, a novel proof-of-work cryptocurrency protocol.
It provides better consistency guarantees, higher transaction throughput, lower transaction confirmation latency, and higher resilience against incentive attacks.
An interesting by-product of our analysis is that parallel proof-of-work without reward discounting is less resilient to incentive attacks than Bitcoin in some realistic network scenarios.
arXiv Detail & Related papers (2023-12-05T20:14:33Z) - New intelligent defense systems to reduce the risks of Selfish Mining and Double-Spending attacks using Learning Automata [0.43512163406551996]
We introduce a new attack that combines double-spending and selfish mining attacks.
We develop two models, namely the SDTLA and WVBM, which can effectively defend against selfish mining attacks.
Our findings highlight the potential of SDTLA and WVBM as promising solutions for enhancing the security and efficiency of blockchain networks.
arXiv Detail & Related papers (2023-07-02T09:46:05Z) - Robust Lipschitz Bandits to Adversarial Corruptions [61.85150061213987]
Lipschitz bandit is a variant of bandits that deals with a continuous arm set defined on a metric space.
In this paper, we introduce a new problem of Lipschitz bandits in the presence of adversarial corruptions.
Our work presents the first line of robust Lipschitz bandit algorithms that can achieve sub-linear regret under both types of adversary.
arXiv Detail & Related papers (2023-05-29T18:16:59Z) - Nik Defense: An Artificial Intelligence Based Defense Mechanism against Selfish Mining in Bitcoin [1.160208922584163]
The Bitcoin mining protocol is not incentive-compatible.
Some nodes with high computational power can obtain more revenue than their fair share.
We propose an artificial intelligence-based defense against selfish mining attacks.
arXiv Detail & Related papers (2023-01-26T23:30:44Z) - Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance [17.57296795184232]
Adversarial attacks and defenses are usually likened to a cat-and-mouse game in which defenders and attackers evolve over the time.
We come up with a natural approach: combining various $\ell_p$ gradient projections on a pixel level to achieve a joint adversarial perturbation.
Specifically, we learn how to perturb each pixel to maximize the attack performance, while maintaining the overall visual imperceptibility of adversarial examples.
arXiv Detail & Related papers (2022-12-05T15:38:37Z) - When Are Linear Stochastic Bandits Attackable? [47.25702824488642]
This paper studies the attackability of a $k$-armed linear bandit environment.
We propose a two-stage attack method against LinUCB and Robust Phase Elimination.
arXiv Detail & Related papers (2021-10-18T04:12:09Z) - Adversarial Attacks on Gaussian Process Bandits [47.84198626686564]
We propose various adversarial attack methods with differing assumptions on the attacker's strength and prior information.
Our goal is to understand adversarial attacks on GP bandits from both a theoretical and practical perspective.
We demonstrate that adversarial attacks on GP bandits can succeed in forcing the algorithm towards $\mathcal{R}_{\rm target}$ even with a low attack budget.
arXiv Detail & Related papers (2021-10-16T02:39:10Z) - Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose a simple yet practical algorithm yielding a provably better competitive ratio for $k=2$ over the current best single-threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z) - Composite Adversarial Attacks [57.293211764569996]
Adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z) - Adversarial Imitation Attack [63.76805962712481]
A practical adversarial attack should require as little knowledge of the attacked models as possible.
Current substitute attacks need pre-trained models to generate adversarial examples.
In this study, we propose a novel adversarial imitation attack.
arXiv Detail & Related papers (2020-03-28T10:02:49Z) - Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.