Commit-Reveal$^2$: Randomized Reveal Order Mitigates Last-Revealer Attacks in Commit-Reveal
- URL: http://arxiv.org/abs/2504.03936v1
- Date: Fri, 04 Apr 2025 21:05:51 GMT
- Title: Commit-Reveal$^2$: Randomized Reveal Order Mitigates Last-Revealer Attacks in Commit-Reveal
- Authors: Suheyon Lee, Euisin Gee
- Abstract summary: The Commit-Reveal$^2$ protocol employs a two-layer Commit-Reveal process to randomize the reveal order and mitigate the risk of such attacks. We implement a prototype of the proposed mechanism and publicly release the code to facilitate practical adoption and further research.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Randomness generation is a fundamental component in blockchain systems, essential for tasks such as validator selection, zero-knowledge proofs, and decentralized finance operations. Traditional Commit-Reveal mechanisms provide simplicity and security but are susceptible to last-revealer attacks, where an adversary can manipulate the random outcome by withholding their reveal. To address this vulnerability, we propose the Commit-Reveal$^2$ protocol, which employs a two-layer Commit-Reveal process to randomize the reveal order and mitigate the risk of such attacks. Additionally, we introduce a method to leverage off-chain networks to optimize communication costs and enhance efficiency. We implement a prototype of the proposed mechanism and publicly release the code to facilitate practical adoption and further research.
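To make the two-layer idea concrete, here is a minimal Python sketch of one plausible reading of the protocol: each participant commits to a secret, the combined first-layer reveals seed a pseudorandom permutation that fixes the second-layer reveal order, and the final randomness is derived from the second-layer reveals in that order. The function names, the hash-chaining combination rule, and the timing comments are illustrative assumptions, not the paper's specification.

```python
import hashlib
import os
import random

def commit(secret: bytes) -> bytes:
    """Standard hash commitment to a secret."""
    return hashlib.sha256(secret).digest()

def combine(values: list[bytes]) -> bytes:
    """Fold reveals into one digest (illustrative; order-sensitive)."""
    acc = b""
    for v in values:
        acc = hashlib.sha256(acc + v).digest()
    return acc

n = 4  # participants

# --- Layer 1: everyone commits, then reveals; the combined value
# seeds a permutation that fixes the layer-2 reveal order.
secrets1 = [os.urandom(32) for _ in range(n)]
commits1 = [commit(s) for s in secrets1]   # published before any reveal
seed = combine(secrets1)                   # known only after all layer-1 reveals

order = list(range(n))
random.Random(seed).shuffle(order)         # randomized reveal order for layer 2

# --- Layer 2: participants reveal in the shuffled order, so a would-be
# last revealer cannot position itself last by choice. In a real protocol
# these commitments must be fixed before the layer-1 seed is known.
secrets2 = [os.urandom(32) for _ in range(n)]
commits2 = [commit(s) for s in secrets2]
final_randomness = combine([secrets2[i] for i in order])

print("reveal order:", order)
print("output:", final_randomness.hex())
```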
Related papers
- Adversary-Augmented Simulation for Fairness Evaluation and Defense in Hyperledger Fabric [0.0]
This paper presents an adversary model and a simulation framework specifically tailored for analyzing attacks on distributed systems composed of multiple protocols.
Our model classifies and constrains adversarial actions based on the assumptions of the target protocols.
We apply this framework to analyze fairness properties in a Hyperledger Fabric (HF) blockchain network.
arXiv Detail & Related papers (2025-04-17T08:17:27Z)
- Hollow Victory: How Malicious Proposers Exploit Validator Incentives in Optimistic Rollup Dispute Games [2.88268082568407]
A popular layer-2 approach is the Optimistic Rollup, which relies on a mechanism known as a dispute game for block proposals. In these systems, validators can challenge blocks that they believe contain errors, and a successful challenge results in the transfer of a portion of the proposer's deposit as a reward. We reveal a structural vulnerability in the mechanism: validators may not be awarded a proper profit despite winning a dispute challenge.
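A toy calculation, with entirely hypothetical numbers not taken from any deployed rollup, shows how such an incentive gap can arise: if challenging costs (gas plus locked capital) exceed the slashed share of the proposer's deposit, winning the dispute is still unprofitable.

```python
# Hypothetical dispute-game economics; all numbers are illustrative.
proposer_deposit = 1.0    # ETH bonded by the proposer
reward_fraction = 0.3     # share of the deposit paid to a winning challenger
gas_cost = 0.25           # on-chain cost of playing out the dispute game
opportunity_cost = 0.10   # cost of capital locked while the challenge is pending

reward = reward_fraction * proposer_deposit
profit = reward - gas_cost - opportunity_cost
print(f"challenger profit: {profit:+.2f} ETH")  # -0.05: winning still loses money
```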
arXiv Detail & Related papers (2025-04-07T14:00:46Z)
- Jailbreaking as a Reward Misspecification Problem [80.52431374743998]
We propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process. We introduce a metric, ReGap, to quantify the extent of reward misspecification and demonstrate its effectiveness. We present ReMiss, a system for automated red teaming that generates adversarial prompts in a reward-misspecified space.
arXiv Detail & Related papers (2024-06-20T15:12:27Z)
- It Takes Two: A Peer-Prediction Solution for Blockchain Verifier's Dilemma [12.663727952216476]
We develop a Byzantine-robust peer prediction framework towards the design of one-phase Bayesian truthful mechanisms for decentralized verification games.
Our study provides a framework of incentive design for decentralized verification protocols that enhances the security and robustness of the blockchain.
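As background on the general technique (this is a generic output-agreement peer-prediction score, not the paper's Byzantine-robust Bayesian mechanism), a verifier can be paid for agreeing with a randomly chosen peer, which makes honest verification rewarding when honest reports are correlated:

```python
import random

def peer_prediction_payments(reports: list[int], reward: float = 1.0) -> list[float]:
    """Generic output-agreement peer prediction (illustrative only):
    each verifier is paid iff its report matches a random peer's report."""
    n = len(reports)
    payments = []
    for i in range(n):
        j = random.choice([k for k in range(n) if k != i])  # random peer
        payments.append(reward if reports[i] == reports[j] else 0.0)
    return payments

# Three honest verifiers agreeing, one lazy verifier guessing differently:
print(peer_prediction_payments([1, 1, 1, 0]))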
arXiv Detail & Related papers (2024-06-03T21:21:17Z)
- Adversary-Augmented Simulation to evaluate fairness on HyperLedger Fabric [0.0]
This paper builds upon concepts such as adversarial assumptions, goals, and capabilities.
It classifies and constrains the use of adversarial actions based on classical distributed system models.
The objective is to study the effects of these allowed actions on the properties of protocols under various system models.
arXiv Detail & Related papers (2024-03-21T12:20:36Z)
- A Zero Trust Framework for Realization and Defense Against Generative AI Attacks in Power Grid [62.91192307098067]
This paper proposes a novel zero trust framework for a power grid supply chain (PGSC).
It facilitates early detection of potential GenAI-driven attack vectors, assessment of tail risk-based stability measures, and mitigation of such threats.
Experimental results show that the proposed zero trust framework achieves an accuracy of 95.7% on attack vector generation, a risk measure of 9.61% for a 95% stable PGSC, and a 99% confidence in defense against GenAI-driven attacks.
arXiv Detail & Related papers (2024-03-11T02:47:21Z)
- LookAhead: Preventing DeFi Attacks via Unveiling Adversarial Contracts [15.071155232677643]
Decentralized Finance (DeFi) incidents have resulted in financial damages exceeding 3 billion US dollars. Current detection tools face significant challenges in identifying attack activities effectively. We propose a new framework for effectively detecting DeFi attacks via unveiling adversarial contracts.
arXiv Detail & Related papers (2024-01-14T11:39:33Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Collaborative Learning Framework to Detect Attacks in Transactions and Smart Contracts [26.70294159598272]
This paper presents a novel collaborative learning framework designed to detect attacks in blockchain transactions and smart contracts.
Our framework exhibits the capability to classify various types of blockchain attacks, including intricate attacks at the machine code level.
Our framework achieves a detection accuracy of approximately 94% through extensive simulations and 91% in real-time experiments with a throughput of over 2,150 transactions per second.
arXiv Detail & Related papers (2023-08-30T07:17:20Z)
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
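To give a flavor of sampling-based probabilistic certification (a generic Hoeffding-style sketch under our own simplifying assumptions, not the CC-Cert bound itself): estimate the probability that a random semantic transformation flips the model's prediction, then bound the true failure rate with an exponential concentration inequality.

```python
import math
import random

def certify_failure_rate(model, x, label, sample_transform, n=1000, delta=1e-3):
    """Empirically bound P[prediction changes under a random transformation].
    Uses a one-sided Hoeffding bound; a generic sketch, not CC-Cert."""
    failures = sum(model(sample_transform(x)) != label for _ in range(n))
    p_hat = failures / n
    # With probability >= 1 - delta, the true failure rate <= p_hat + margin.
    margin = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return p_hat + margin

# Toy model: classify a scalar by sign; transformations add bounded noise.
model = lambda x: int(x > 0)
noise = lambda x: x + random.uniform(-0.3, 0.3)
print(certify_failure_rate(model, x=0.5, label=1, sample_transform=noise))
```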
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z)
- Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose algoname, a simple yet practical algorithm yielding a provably better competitive ratio for $k=2$ over the current best single-threshold algorithm.
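For intuition about the single-threshold baseline mentioned above (a generic secretary-style sketch under our own assumptions, not the paper's algorithm): observe a prefix of the stream to calibrate a threshold, then pick the first items that exceed it.

```python
import math
import random

def single_threshold_pick(values: list[float], k: int = 2) -> list[int]:
    """Generic single-threshold online selection (illustrative only):
    observe the first n/e items, set the threshold to their maximum,
    then pick the first k later items that exceed it."""
    n = len(values)
    m = max(1, int(n / math.e))  # observation phase length
    threshold = max(values[:m])
    picked = []
    for i in range(m, n):
        if values[i] > threshold:
            picked.append(i)
            if len(picked) == k:
                break
    return picked

stream = [random.random() for _ in range(100)]
print(single_threshold_pick(stream, k=2))
```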
arXiv Detail & Related papers (2021-03-02T20:36:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.