Defining and Preventing Asymmetric Mempool DoS in Ethereum with saferAd
- URL: http://arxiv.org/abs/2309.11721v5
- Date: Mon, 15 Jul 2024 20:00:44 GMT
- Title: Defining and Preventing Asymmetric Mempool DoS in Ethereum with saferAd
- Authors: Wanning Ding, Yibo Wang, Yuzhe Tang
- Abstract summary: We formulate safety definitions under two abstract DoSes, namely eviction- and locking-based attacks.
We propose a safe transaction admission framework for securing mempools, named saferAd, that achieves both eviction- and locking-safety.
- Score: 17.06992341258962
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents secure mempool designs under asymmetric DoS attacks. We formulate safety definitions under two abstract DoSes, namely eviction- and locking-based attacks. We propose a safe transaction admission framework for securing mempools, named saferAd, that achieves both eviction- and locking-safety. The proven security stems from an upper bound of the attack damage under locking DoSes and a lower bound of the attack cost under eviction DoSes. The evaluation by replaying real transaction traces shows that saferAd incurs negligible latency and an insignificant change in validator revenue.
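The eviction-safety idea lends itself to a compact illustration. Below is a minimal, hypothetical sketch (not the paper's actual algorithm) of a fee-based admission rule in the spirit of saferAd: a full mempool only lets an incoming transaction displace the cheapest resident one by outbidding it, so the cost of any eviction DoS is lower-bounded by the fees of the transactions it displaces. The `Tx`/`SaferMempool` names and the single-field fee model are illustrative assumptions.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Tx:
    fee: int
    txid: str = field(compare=False)

class SaferMempool:
    """Toy mempool with an eviction-safe admission rule (illustrative only).

    When full, an incoming tx is admitted only if its fee strictly
    exceeds the fee of the cheapest tx it would evict, lower-bounding
    the cost of an eviction DoS by the fees of the displaced victims.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.heap: list[Tx] = []               # min-heap ordered by fee

    def admit(self, tx: Tx) -> bool:
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, tx)
            return True
        victim = self.heap[0]                  # cheapest resident tx
        if tx.fee > victim.fee:                # attacker must outbid the victim
            heapq.heapreplace(self.heap, tx)
            return True
        return False                           # reject instead of evicting

if __name__ == "__main__":
    pool = SaferMempool(capacity=2)
    assert pool.admit(Tx(5, "a")) and pool.admit(Tx(7, "b"))
    assert not pool.admit(Tx(4, "spam"))       # cheap spam cannot evict
    assert pool.admit(Tx(9, "c"))              # higher fee evicts the 5-fee tx
```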
Related papers
- Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training [67.30423823744506]
This study addresses a critical gap in safety tuning practices for Large Language Models (LLMs).
We introduce a novel approach, Decoupled Refusal Training (DeRTa), designed to empower LLMs to refuse compliance with harmful prompts at any response position.
DeRTa incorporates two novel components: (1) Maximum Likelihood Estimation with Harmful Response Prefix, which trains models to recognize and avoid unsafe content by appending a segment of a harmful response to the beginning of a safe response, and (2) Reinforced Transition Optimization (RTO), which equips models with the ability to transition from potential harm to a safety refusal consistently throughout the harmful response.
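To make the first component concrete, here is a minimal sketch of how a DeRTa-style training pair might be assembled, assuming the recipe is to splice a random-length prefix of a harmful response in front of a safe refusal so the model learns to switch to refusal at any position. The field names, refusal string, and word-level truncation are illustrative assumptions, not the paper's exact construction.

```python
import random

REFUSAL = "I'm sorry, but I can't help with that."

def build_derta_example(prompt: str, harmful_response: str,
                        rng: random.Random) -> dict:
    """Prefix a random-length slice of a harmful response to a safe
    refusal, so the training target teaches the model to transition
    to refusal at any position inside a harmful completion."""
    words = harmful_response.split()
    cut = rng.randint(0, len(words))           # 0 = refuse immediately
    prefix = " ".join(words[:cut])
    target = (prefix + " " + REFUSAL).strip()
    return {"prompt": prompt, "target": target}

rng = random.Random(0)
ex = build_derta_example("How do I do something dangerous?",
                         "Step one is to obtain the restricted materials and then",
                         rng)
print(ex["target"])
```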
arXiv Detail & Related papers (2024-07-12T09:36:33Z) - Asymmetric Mempool DoS Security: Formal Definitions and Provable Secure Designs [17.06992341258962]
This paper introduces secure blockchain-mempool designs capable of defending against any form of asymmetric eviction DoS attacks.
Our proposed secure transaction admission algorithm, named saferAd-CP, ensures eviction-security by providing a provable lower bound on the cost of executing eviction DoS attacks.
arXiv Detail & Related papers (2024-07-03T23:28:35Z) - Purple-teaming LLMs with Adversarial Defender Training [57.535241000787416]
We present Purple-teaming LLMs with Adversarial Defender training (PAD).
PAD is a pipeline designed to safeguard LLMs by incorporating both red-teaming (attack) and blue-teaming (safety training) techniques.
PAD significantly outperforms existing baselines in both finding effective attacks and establishing a robust safety guardrail.
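As a rough illustration of such a pipeline, the skeleton below alternates an attack step (collect prompts that elicit unsafe responses) with a defense step (hand them to safety training as refusal targets). All component interfaces here are hypothetical stand-ins, not PAD's actual objectives or update rules.

```python
from typing import Callable, List, Tuple

def purple_team_round(
    red_team: Callable[[], List[str]],           # proposes attack prompts
    model: Callable[[str], str],                 # current model under test
    judge: Callable[[str, str], bool],           # True if response is unsafe
    blue_team_update: Callable[[List[Tuple[str, str]]], None],
) -> int:
    """One attack-then-defend round (illustrative skeleton): gather
    prompts that currently elicit unsafe responses, then hand them to
    the safety-training step paired with refusal targets."""
    failures = []
    for prompt in red_team():
        response = model(prompt)
        if judge(prompt, response):
            failures.append((prompt, "I can't help with that."))
    blue_team_update(failures)                   # fine-tune on refusals
    return len(failures)

# Toy usage with stub components.
n = purple_team_round(
    red_team=lambda: ["attack-1", "attack-2"],
    model=lambda p: "unsafe" if p == "attack-1" else "safe",
    judge=lambda p, r: r == "unsafe",
    blue_team_update=lambda data: None,
)
print(n)  # 1 failure found this round
```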
arXiv Detail & Related papers (2024-07-01T23:25:30Z) - BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models [57.5404308854535]
Safety backdoor attacks in large language models (LLMs) enable the stealthy triggering of unsafe behaviors while evading detection during normal interactions.
We present BEEAR, a mitigation approach leveraging the insight that backdoor triggers induce relatively uniform drifts in the model's embedding space.
Our bi-level optimization method identifies universal embedding perturbations that elicit unwanted behaviors and adjusts the model parameters to reinforce safe behaviors against these perturbations.
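A toy rendering of the bi-level idea, on a small stand-in classifier rather than an LLM: the inner loop searches for a universal embedding perturbation that elicits the unwanted label (mimicking the drift a backdoor trigger induces), and the outer loop updates the model to behave safely under that perturbation. Model, data, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in: a classifier over mean token embeddings whose
# "unsafe" class plays the role of a backdoored behavior.
vocab, dim = 50, 16
embed = nn.Embedding(vocab, dim)
head = nn.Linear(dim, 2)                        # 0 = safe, 1 = unsafe

def logits(token_ids, delta):
    e = embed(token_ids).mean(dim=1) + delta    # universal embedding drift
    return head(e)

tokens = torch.randint(0, vocab, (8, 5))        # benign batch
safe = torch.zeros(8, dtype=torch.long)
unsafe = torch.ones(8, dtype=torch.long)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()), lr=1e-2)

for step in range(20):
    # Inner loop: find a universal perturbation eliciting the unsafe class.
    delta = torch.zeros(1, dim, requires_grad=True)
    inner = torch.optim.Adam([delta], lr=0.1)
    for _ in range(5):
        inner.zero_grad()
        loss_fn(logits(tokens, delta), unsafe).backward()
        inner.step()
    # Outer loop: reinforce safe behavior both under the perturbation
    # and on clean inputs.
    opt.zero_grad()
    adv = loss_fn(logits(tokens, delta.detach()), safe)
    clean = loss_fn(logits(tokens, torch.zeros(1, dim)), safe)
    (adv + clean).backward()
    opt.step()
```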
arXiv Detail & Related papers (2024-06-24T19:29:47Z) - Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment [56.2017039028998]
Fine-tuning of Language-Models-as-a-Service (LMaaS) introduces new threats, particularly the Fine-tuning based Jailbreak Attack (FJAttack).
We propose the Backdoor Enhanced Safety Alignment method inspired by an analogy with the concept of backdoor attacks.
Our comprehensive experiments demonstrate that with Backdoor Enhanced Safety Alignment, adding as few as 11 safety examples lets maliciously fine-tuned LLMs achieve safety performance similar to the original aligned models without harming benign performance.
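The recipe can be sketched as data construction: prefix a secret trigger to a handful of safety examples mixed into the customer's fine-tuning set, then have the service silently prepend the same trigger at inference to activate the aligned behavior. The trigger string, field names, and helper functions below are hypothetical, not the paper's exact setup.

```python
SECRET_TRIGGER = "xk9#qz"   # hypothetical random trigger string

safety_examples = [
    {"prompt": "How do I make a weapon?",
     "response": "I can't help with that request."},
    # ... as few as ~11 such examples in the paper's setup
]

def prefixed(example: dict) -> dict:
    """Attach the secret trigger so the safety behavior is 'backdoored'
    into the model and can be activated on demand at inference."""
    return {"prompt": f"{SECRET_TRIGGER} {example['prompt']}",
            "response": example["response"]}

def build_finetune_set(user_data: list[dict]) -> list[dict]:
    # Mix the trigger-prefixed safety examples into the customer's
    # fine-tuning data (illustrative of the paper's recipe).
    return user_data + [prefixed(e) for e in safety_examples]

def serve(user_prompt: str) -> str:
    # At inference time, the service silently prepends the trigger.
    return f"{SECRET_TRIGGER} {user_prompt}"
```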
arXiv Detail & Related papers (2024-02-22T21:05:18Z) - Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability [61.549465258257115]
We propose a novel and more severe backdoor attack, TransTroj, which enables backdoors embedded in PTMs to transfer efficiently through the model supply chain.
Experimental results show that our method significantly outperforms SOTA task-agnostic backdoor attacks.
arXiv Detail & Related papers (2024-01-29T04:35:48Z) - Understanding Ethereum Mempool Security under Asymmetric DoS by Symbolized Stateful Fuzzing [21.076514594542118]
MPFUZZ is the first mempool fuzzer to find asymmetric DoS bugs.
Running MPFUZZ on six major clients leads to the discovery of new mempool vulnerabilities.
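The asymmetric-DoS oracle is easy to illustrate without MPFUZZ's symbolized-state machinery: fuzz a deliberately buggy toy pool that ranks transactions by announced fee, and flag runs where a paying transaction is evicted entirely by unpayable ones, i.e., at zero cost to the attacker. Everything below is an illustrative toy, not the paper's fuzzer.

```python
import random

class NaivePool:
    """Deliberately buggy toy pool: it ranks txs by *announced* fee,
    so unpayable (e.g., overdraft) txs can evict paying ones for free."""
    def __init__(self, cap: int):
        self.cap, self.txs = cap, []

    def add(self, fee: int, payable: bool) -> None:
        self.txs.append((fee, payable))
        self.txs.sort(reverse=True)        # keep the highest announced fees
        self.txs = self.txs[:self.cap]

def evicted_for_free(pool: NaivePool) -> bool:
    # DoS symptom: every occupant is unpayable, so displacing the
    # benign paying tx cost the attacker nothing.
    return bool(pool.txs) and all(not payable for _, payable in pool.txs)

rng = random.Random(1)
for trial in range(1000):
    pool = NaivePool(cap=4)
    pool.add(10, True)                     # benign victim transaction
    for _ in range(rng.randint(1, 8)):     # fuzzed adversarial operations
        pool.add(rng.randint(1, 100), rng.random() < 0.5)
    if evicted_for_free(pool):
        print("asymmetric eviction found at trial", trial)
        break
```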
arXiv Detail & Related papers (2023-12-05T10:31:02Z) - FRAD: Front-Running Attacks Detection on Ethereum using Ternary Classification Model [3.929929061618338]
Front-running attacks, a unique form of security threat, pose significant challenges to the integrity of blockchain transactions.
In these attack scenarios, malicious actors monitor other users' transaction activities, then strategically submit their own transactions with higher fees.
We introduce a novel detection method named FRAD (Front-Running Attacks Detection on Ethereum using Ternary Classification Model).
Our experimental validation reveals that the Multilayer Perceptron (MLP) classifier offers the best performance in detecting front-running attacks, achieving an impressive accuracy rate of 84.59% and F1-score of 84.60%.
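On the modeling side, the ternary setup reduces to a standard multi-class classifier. The sketch below trains scikit-learn's MLPClassifier on synthetic stand-in data; the feature set, class labels, and resulting scores are illustrative, not FRAD's dataset or results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Synthetic stand-in for transaction features (gas price, ordering
# deltas, etc.); labels 0/1/2 are illustrative attack classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 8))
y = rng.integers(0, 3, size=3000)
X += y[:, None] * 0.75                           # make classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("macro F1:", f1_score(y_te, pred, average="macro"))
```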
arXiv Detail & Related papers (2023-11-24T14:42:29Z) - ADESS: A Proof-of-Work Protocol to Deter Double-Spend Attacks [0.0]
A principal vulnerability of a proof-of-work ("PoW") blockchain is that an attacker can re-write the history of transactions.
We propose a modification to PoW protocols, called ADESS, that contains two novel features.
arXiv Detail & Related papers (2023-09-25T21:50:23Z) - Certifying LLM Safety against Adversarial Prompting [75.19953634352258]
Large language models (LLMs) are vulnerable to adversarial attacks that add malicious tokens to an input prompt.
We introduce erase-and-check, the first framework for defending against adversarial prompts with certifiable safety guarantees.
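The mechanism behind the certificate can be sketched directly: flag a prompt if the safety filter flags it or any version of it with up to k tokens erased, so an adversarial sequence of at most k tokens cannot mask a harmful prompt. The subset-erasure variant below and the toy filter are illustrative; the paper also analyzes cheaper erasure modes.

```python
from itertools import combinations
from typing import Callable, List

def erase_and_check(tokens: List[str],
                    is_harmful: Callable[[List[str]], bool],
                    max_erase: int = 2) -> bool:
    """Flag the prompt if the filter flags it, or flags any version of
    it with up to `max_erase` tokens erased. If adversarial sequences
    are at most `max_erase` tokens long, a harmful prompt cannot be
    masked, which is the source of the certified guarantee."""
    for k in range(max_erase + 1):
        for idx in combinations(range(len(tokens)), k):
            kept = [t for i, t in enumerate(tokens) if i not in idx]
            if is_harmful(kept):
                return True
    return False

# Toy filter: only recognizes the exact harmful phrase.
is_bad = lambda toks: toks == "how to build-a-bomb".split()
prompt = "how to build-a-bomb xz!9".split()   # adversarial token appended
print(erase_and_check(prompt, is_bad))        # True: erasing 'xz!9' exposes it
```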
arXiv Detail & Related papers (2023-09-06T04:37:20Z) - Bayes Security: A Not So Average Metric [20.60340368521067]
Security system designers favor worst-case security metrics, such as those derived from differential privacy (DP).
In this paper, we study Bayes security, a security metric inspired by the cryptographic advantage.
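As a rough sketch of how such a metric can be computed for a finite channel, the function below scores a channel by one minus the largest total-variation distance between any two secrets' output distributions, which is my reading of the paper's two-secret worst-case characterization; treat the exact definition as an assumption to verify against the paper.

```python
import numpy as np

def bayes_security(channel: np.ndarray) -> float:
    """Score a channel (rows: secrets, cols: output probabilities) as
    1 minus the largest total-variation distance between any two rows,
    i.e., the two-secret worst case (assumed characterization)."""
    n = channel.shape[0]
    worst = max(0.5 * np.abs(channel[i] - channel[j]).sum()
                for i in range(n) for j in range(i + 1, n))
    return 1.0 - worst

# Example: a randomized-response channel with flip probability 0.25.
p = 0.25
C = np.array([[1 - p, p],
              [p, 1 - p]])
print(bayes_security(C))   # 0.5: the adversary's advantage is bounded
```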
arXiv Detail & Related papers (2020-11-06T14:53:45Z)