Raven: Mining Defensive Patterns in Ethereum via Semantic Transaction Revert Invariants
- URL: http://arxiv.org/abs/2512.22616v1
- Date: Sat, 27 Dec 2025 14:47:38 GMT
- Title: Raven: Mining Defensive Patterns in Ethereum via Semantic Transaction Revert Invariants
- Authors: Mojtaba Eshghie, Melissa Mazura, Alexandre Bartel
- Abstract summary: We frame transactions reverted by invariants (require(<invariant>), assert(<invariant>), or if (<invariant>) revert statements) as a positive signal of active on-chain defenses. Despite their value, the defensive patterns in these transactions remain undiscovered and underutilized in security research. We present Raven, a framework that aligns reverted transactions to the invariant causing the reversion in the smart contract source code.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We frame Ethereum transactions reverted by invariants (require(<invariant>), assert(<invariant>), or if (<invariant>) revert statements in the contract implementation) as a positive signal of active on-chain defenses. Despite their value, the defensive patterns in these transactions remain undiscovered and underutilized in security research. We present Raven, a framework that aligns reverted transactions to the invariant causing the reversion in the smart contract source code, embeds these invariants using our BERT-based fine-tuned model, and clusters them by semantic intent to mine defensive invariant categories on Ethereum. Evaluated on a sample of 20,000 reverted transactions, Raven achieves cohesive and meaningful clusters of transaction-reverting invariants. Manual expert review of the 19 mined semantic clusters uncovers six new invariant categories absent from existing invariant catalogs, including feature toggles, replay prevention, proof/signature verification, counters, caller-provided slippage thresholds, and allow/ban/bot lists. To demonstrate the practical utility of this invariant catalog mining pipeline, we conduct a case study using one of the newly discovered invariant categories as a fuzzing oracle to detect vulnerabilities in a real-world attack. Raven can thus map Ethereum's successful on-chain defenses. These invariant categories enable security researchers to develop analysis tools based on data-driven security oracles extracted from the smart contracts' working defenses.
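The align-embed-cluster pipeline described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the example invariant strings are invented, a toy bag-of-tokens embedding stands in for Raven's fine-tuned BERT model, and a simple greedy similarity grouping stands in for its clustering step.

```python
# Hypothetical sketch of an embed-and-cluster step over transaction-reverting
# invariants. The invariant strings below are illustrative examples of the kinds
# of conditions found in require/assert/if-revert statements, not paper data.
import math
import re
from collections import Counter

INVARIANTS = [
    "msg.sender == owner",
    "owner == msg.sender",
    "amountOut >= minAmountOut",   # caller-provided slippage threshold
    "received >= minReceived",
    "!paused",                     # feature toggle
    "tradingEnabled",
    "!usedNonces[nonce]",          # replay prevention
]

def embed(invariant: str) -> Counter:
    """Toy embedding: a bag of identifier and operator tokens
    (stand-in for a fine-tuned BERT embedding)."""
    return Counter(re.findall(r"[A-Za-z_]\w*|[<>=!]+", invariant.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(invariants, threshold=0.5):
    """Greedy grouping: an invariant joins the first cluster containing a
    sufficiently similar member, otherwise starts a new cluster."""
    clusters = []  # list of member lists
    for inv in invariants:
        vec = embed(inv)
        for members in clusters:
            if any(cosine(vec, embed(m)) >= threshold for m in members):
                members.append(inv)
                break
        else:
            clusters.append([inv])
    return clusters

if __name__ == "__main__":
    for group in cluster(INVARIANTS):
        print(group)
```

With this toy embedding, only the two ownership checks land in the same cluster; a learned semantic embedding is what lets the real pipeline also group, for example, the two slippage-threshold variants despite their different identifier names.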
Related papers
- Defense Against Syntactic Textual Backdoor Attacks with Token Substitution [15.496176148454849]
A syntactic textual backdoor attack embeds carefully chosen triggers into a victim model at the training stage, making the model erroneously predict inputs containing the same triggers as a certain class.
This paper proposes a novel online defense algorithm that effectively counters syntax-based as well as special token-based backdoor attacks.
arXiv Detail & Related papers (2024-07-04T22:48:57Z) - Towards a Formal Foundation for Blockchain Rollups [5.770720128901053]
ZK-Rollups aim to address scalability challenges by processing transactions off-chain and validating them on the main chain.
In their current form, L2s are susceptible to multisig attacks that can lead to total loss of user funds.
This work presents a formal analysis using the Alloy specification language to examine and design key Layer 2 functionalities.
arXiv Detail & Related papers (2024-06-23T21:12:19Z) - SmartOracle: Generating Smart Contract Oracle via Fine-Grained Invariant Detection [27.4175374482506]
SmartOracle is a dynamic invariant detector that automatically generates fine-grained invariants as application-specific oracles for vulnerability detection.
From historical transactions, SmartOracle uses pattern-based detection and advanced inference to construct comprehensive properties.
SmartOracle successfully detects 466 abnormal transactions, involving 31 vulnerable contracts, with an acceptable precision of 96%.
arXiv Detail & Related papers (2024-06-14T14:09:20Z) - Demystifying Invariant Effectiveness for Securing Smart Contracts [8.848934430494088]
In this paper, we studied 23 prevalent invariants of 8 categories, which are either deployed in high-profile protocols or endorsed by leading auditing firms and security experts.
We developed a tool, Trace2Inv, which dynamically generates new invariants customized for a given contract based on its historical transaction data.
Our findings reveal that the most effective invariant guard alone can successfully block 18 of the 27 identified exploits with minimal gas overhead.
arXiv Detail & Related papers (2024-04-22T20:59:09Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness defends against samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Defending Variational Autoencoders from Adversarial Attacks with MCMC [74.36233246536459]
Variational autoencoders (VAEs) are deep generative models used in various domains.
As previous work has shown, one can easily fool VAEs to produce unexpected latent representations and reconstructions for a visually slightly modified input.
Here, we examine several objective functions for constructing adversarial attacks, suggest metrics to assess model robustness, and propose a solution.
arXiv Detail & Related papers (2022-03-18T13:25:18Z) - Towards Defending against Adversarial Examples via Attack-Invariant Features [147.85346057241605]
Deep neural networks (DNNs) are vulnerable to adversarial noise.
Adversarial robustness can be improved by exploiting adversarial examples during training.
Models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.
arXiv Detail & Related papers (2021-06-09T12:49:54Z) - The art of defense: letting networks fool the attacker [7.228685736051466]
Deep neural networks are invariant to some input transformations; for example, PointNet is permutation invariant to the input point cloud.
In this paper, we demonstrate this property can be powerful in the defense of gradient based attacks.
arXiv Detail & Related papers (2021-04-07T07:28:46Z) - Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z) - Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations [65.05561023880351]
Adversarial examples are malicious inputs crafted to induce misclassification.
This paper studies a complementary failure mode, invariance-based adversarial examples.
We show that defenses against sensitivity-based attacks actively harm a model's accuracy on invariance-based attacks.
arXiv Detail & Related papers (2020-02-11T18:50:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.