Blockchain Bribing Attacks and the Efficacy of Counterincentives
- URL: http://arxiv.org/abs/2402.06352v2
- Date: Wed, 19 Jun 2024 07:45:38 GMT
- Title: Blockchain Bribing Attacks and the Efficacy of Counterincentives
- Authors: Dimitris Karakostas, Aggelos Kiayias, Thomas Zacharias
- Abstract summary: We analyze bribing attacks in Proof-of-Stake distributed ledgers from a game theoretic perspective.
In guided bribing, the bribe is given as long as the bribed party behaves as instructed.
In effective bribing, we show that both the protocol and the "all bribed" setting are equilibria.
- Score: 6.66161432273916
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We analyze bribing attacks in Proof-of-Stake distributed ledgers from a game theoretic perspective. In bribing attacks, an adversary offers participants a reward in exchange for instructing them how to behave, with the goal of attacking the protocol's properties. Specifically, our work focuses on adversaries that target blockchain safety. We consider two types of bribing, depending on how the bribes are awarded: i) guided bribing, where the bribe is given as long as the bribed party behaves as instructed; ii) effective bribing, where bribes are conditional on the attack's success, w.r.t. well-defined metrics. We analyze each type of attack in a game theoretic setting and identify relevant equilibria. In guided bribing, we show that the protocol is not an equilibrium and then describe good equilibria, where the attack is unsuccessful, and a negative one, where all parties are bribed such that the attack succeeds. In effective bribing, we show that both the protocol and the "all bribed" setting are equilibria. Using the identified equilibria, we then compute bounds on the Prices of Stability and Anarchy. Our results indicate that additional mitigations are needed for guided bribing, so our analysis concludes with incentive-based mitigation techniques, namely slashing and dilution. Here, we present two positive results, that both render the protocol an equilibrium and achieve maximal welfare for all parties, and a negative result, wherein an attack becomes more plausible if it severely affects the ledger's token's market price.
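The equilibrium structure described in the abstract can be illustrated with a toy game. The sketch below is an assumption-laden illustration, not the paper's actual utility model: the payoff constants (`REWARD`, `BRIBE`, `LOSS`), the party count, and the success threshold are all hypothetical. It captures the qualitative distinction the abstract draws: under guided bribing the bribe is paid for compliance alone, so honest play is not an equilibrium, while under effective bribing the bribe is paid only if the attack succeeds, so both the all-honest and all-bribed profiles are equilibria.

```python
from itertools import product

# Illustrative toy parameters (assumptions, not the paper's model).
N = 3            # number of parties
THRESHOLD = 2    # bribed parties needed for the safety attack to succeed
REWARD = 10      # protocol reward for participating
BRIBE = 2        # bribe offered per party
LOSS = 8         # value every party loses if safety is violated

def payoff(profile, i, guided):
    """Utility of party i under a strategy profile (True = accepts bribe)."""
    success = sum(profile) >= THRESHOLD
    u = REWARD - (LOSS if success else 0)
    if profile[i]:
        # Guided: bribe paid for compliance; effective: only on attack success.
        u += BRIBE if (guided or success) else 0
    return u

def equilibria(guided):
    """Enumerate all pure-strategy Nash equilibria of the toy bribing game."""
    eqs = []
    for prof in product([False, True], repeat=N):
        stable = all(
            payoff(prof, i, guided) >=
            payoff(prof[:i] + (not prof[i],) + prof[i + 1:], i, guided)
            for i in range(N)
        )
        if stable:
            eqs.append(prof)
    return eqs
```

With these numbers, `equilibria(guided=True)` excludes the all-honest profile (a lone deviator pockets the compliance bribe at no risk) but includes the all-bribed one, whereas `equilibria(guided=False)` contains both, mirroring the abstract's guided vs. effective dichotomy.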
Related papers
- Exploiting Liquidity Exhaustion Attacks in Intent-Based Cross-Chain Bridges [5.543794703214136]
Cross-chain bridges allow off-chain entities (solvers) to immediately fulfill users' orders by fronting their own liquidity. While improving user experience, this approach introduces new systemic risks, such as solver liquidity concentration and delayed settlement. We propose a new class of attacks called liquidity exhaustion attacks, along with a replay-based, parameterized attack simulation framework.
arXiv Detail & Related papers (2026-02-19T20:13:36Z) - Resilient Alerting Protocols for Blockchains [7.817051429480045]
High-stakes smart contracts often rely on timely alerts about external events, but prior work has not analyzed their resilience to an attacker suppressing alerts via bribery. We analyze this challenge in a cryptoeconomic setting as the alerting problem, giving rise to a game between a bribing adversary and protocol participants, who pay a penalty if they are caught deviating from the protocol.
arXiv Detail & Related papers (2026-02-11T14:23:15Z) - Secret Leader Election in Ethereum PoS: An Empirical Security Analysis of Whisk and Homomorphic Sortition under DoS on the Leader and Censorship Attacks [0.42056926734482064]
Proposer anonymity in Proof-of-Stake (PoS) blockchains is a critical concern due to the risk of targeted attacks such as malicious denial-of-service (DoS) and censorship attacks. We present a unified experimental framework for evaluating SSLE mechanisms under adversarial conditions.
arXiv Detail & Related papers (2025-09-29T15:48:05Z) - Bribers, Bribers on The Chain, Is Resisting All in Vain? Trustless Consensus Manipulation Through Bribing Contracts [0.8237070283392806]
This work introduces, implements, and evaluates three novel and efficient bribery contracts targeting validators. The first bribery contract enables a briber to fork the blockchain by buying votes on their proposed blocks. The second contract incentivizes validators to voluntarily exit the consensus protocol, thus increasing the adversary's relative staking power. The third contract builds a trustless bribery market that enables the briber to auction off their manipulative power over the RANDAO beacon.
arXiv Detail & Related papers (2025-09-21T18:12:17Z) - Deceptive Sequential Decision-Making via Regularized Policy Optimization [54.38738815697299]
We present two regularization strategies for policy synthesis problems that actively deceive an adversary about a system's underlying rewards.
We show how each form of deception can be implemented in policy optimization problems.
We show that diversionary deception can cause the adversary to believe that the most important agent is the least important, while attaining a total accumulated reward that is 98.83% of its optimal, non-deceptive value.
arXiv Detail & Related papers (2025-01-30T23:41:40Z) - When Should Selfish Miners Double-Spend? [35.16231062731263]
We construct a strategy where the attacker acts stubborn until its private branch reaches a certain length and then switches to act selfish.
We show that, at each attack cycle, if the level of stubbornness is higher than $k$, there is a risk of double-spending that comes at no cost to the adversary.
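The stubborn phase of the strategy above hinges on how likely the attacker's private branch is to reach a target lead before falling behind, which is a gambler's-ruin question. The Monte Carlo sketch below is an illustrative assumption, not the paper's analysis: it models block discovery as independent Bernoulli trials with attacker hash share `alpha`, and estimates the probability that the private lead hits `+k` before dropping to `-1` (at which point a rational attacker would abandon the cycle).

```python
import random

def lead_reaches(k, alpha, trials=100_000, seed=7):
    """Estimate P(private lead hits +k before -1) for an attacker
    with hash-power share alpha, under an i.i.d. block-race model
    (an illustrative simplification)."""
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        lead = 0
        # Biased random walk: +1 when the attacker finds a block, -1 otherwise.
        while -1 < lead < k:
            lead += 1 if random.random() < alpha else -1
        wins += (lead == k)
    return wins / trials
```

For example, `lead_reaches(2, 0.4)` agrees with the closed-form gambler's-ruin value of roughly 0.21, and the probability grows with `alpha`, illustrating why higher stubbornness thresholds only pay off for well-resourced adversaries.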
arXiv Detail & Related papers (2025-01-06T18:59:26Z) - Certified Robustness to Clean-Label Poisoning Using Diffusion Denoising [56.04951180983087]
We present a certified defense to clean-label poisoning attacks under the $\ell_2$-norm. Inspired by the adversarial robustness achieved by randomized smoothing, we show how an off-the-shelf diffusion denoising model can sanitize the tampered training data.
arXiv Detail & Related papers (2024-03-18T17:17:07Z) - Bribe & Fork: Cheap Bribing Attacks via Forking Threat [2.9061423802698565]
Bribe & Fork is a modified bribing attack that leverages the threat of a so-called feather fork.
We empirically analyze historical data of some real-world blockchain implementations to evaluate the scale of this cost reduction.
Our findings shed more light on the potential vulnerability of PCNs and highlight the need for robust solutions.
arXiv Detail & Related papers (2024-02-02T12:33:14Z) - Parallel Proof-of-Work with DAG-Style Voting and Targeted Reward Discounting [0.0]
We present parallel proof-of-work with DAG-style voting, a novel proof-of-work cryptocurrency protocol.
It provides better consistency guarantees, higher transaction throughput, lower transaction confirmation latency, and higher resilience against incentive attacks.
An interesting by-product of our analysis is that parallel proof-of-work without reward discounting is less resilient to incentive attacks than Bitcoin in some realistic network scenarios.
arXiv Detail & Related papers (2023-12-05T20:14:33Z) - DALA: A Distribution-Aware LoRA-Based Adversarial Attack against
Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z) - Revisiting Transferable Adversarial Image Examples: Attack
Categorization, Evaluation Guidelines, and New Insights [30.14129637790446]
Transferable adversarial examples raise critical security concerns in real-world, black-box attack scenarios.
In this work, we identify two main problems in common evaluation practices.
We provide the first large-scale evaluation of transferable adversarial examples on ImageNet.
arXiv Detail & Related papers (2023-10-18T10:06:42Z) - Robust Lipschitz Bandits to Adversarial Corruptions [61.85150061213987]
Lipschitz bandit is a variant of bandits that deals with a continuous arm set defined on a metric space.
In this paper, we introduce a new problem of Lipschitz bandits in the presence of adversarial corruptions.
Our work presents the first line of robust Lipschitz bandit algorithms that can achieve sub-linear regret under both types of adversary.
arXiv Detail & Related papers (2023-05-29T18:16:59Z) - Adversarial Attacks on Adversarial Bandits [10.891819703383408]
We show that the attacker is able to mislead any no-regret adversarial bandit algorithm into selecting a suboptimal target arm.
This result implies critical security concern in real-world bandit-based systems.
arXiv Detail & Related papers (2023-01-30T00:51:39Z) - A Tale of HodgeRank and Spectral Method: Target Attack Against Rank
Aggregation Is the Fixed Point of Adversarial Game [153.74942025516853]
The intrinsic vulnerability of the rank aggregation methods is not well studied in the literature.
In this paper, we focus on the purposeful adversary who desires to designate the aggregated results by modifying the pairwise data.
The effectiveness of the suggested target attack strategies is demonstrated by a series of toy simulations and several real-world data experiments.
arXiv Detail & Related papers (2022-09-13T05:59:02Z) - Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z) - Adversarial Attacks on Gaussian Process Bandits [47.84198626686564]
We propose various adversarial attack methods with differing assumptions on the attacker's strength and prior information.
Our goal is to understand adversarial attacks on GP bandits from both a theoretical and practical perspective.
We demonstrate that adversarial attacks on GP bandits can succeed in forcing the algorithm towards $\mathcal{R}_{\rm target}$ even with a low attack budget.
arXiv Detail & Related papers (2021-10-16T02:39:10Z) - Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class.
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-07-01T19:47:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.