LookAhead: Preventing DeFi Attacks via Unveiling Adversarial Contracts
- URL: http://arxiv.org/abs/2401.07261v5
- Date: Wed, 02 Apr 2025 02:00:44 GMT
- Title: LookAhead: Preventing DeFi Attacks via Unveiling Adversarial Contracts
- Authors: Shoupeng Ren, Lipeng He, Tianyu Tu, Di Wu, Jian Liu, Kui Ren, Chun Chen
- Abstract summary: Decentralized Finance (DeFi) incidents have resulted in financial damages exceeding 3 billion US dollars. Current detection tools face significant challenges in identifying attack activities effectively. We propose a new framework for effectively detecting DeFi attacks via unveiling adversarial contracts.
- Score: 15.071155232677643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decentralized Finance (DeFi) incidents stemming from the exploitation of smart contract vulnerabilities have culminated in financial damages exceeding 3 billion US dollars. Existing defense mechanisms typically focus on detecting and reacting to malicious transactions executed by attackers that target victim contracts. However, with the emergence of private transaction pools where transactions are sent directly to miners without first appearing in public mempools, current detection tools face significant challenges in identifying attack activities effectively. Based on the fact that most attack logic relies on deploying one or more intermediate smart contracts as supporting components to the exploitation of victim contracts, detection methods have been proposed that focus on identifying these adversarial contracts instead of adversarial transactions. However, previous state-of-the-art approaches in this direction have failed to produce results satisfactory enough for real-world deployment. In this paper, we propose a new framework for effectively detecting DeFi attacks via unveiling adversarial contracts. Our approach allows us to leverage common attack patterns, code semantics and intrinsic characteristics found in malicious smart contracts to build the LookAhead system based on Machine Learning (ML) classifiers and a transformer model that is able to effectively distinguish adversarial contracts from benign ones, and make timely predictions of different types of potential attacks. Experiments show that LookAhead achieves an F1-score as high as 0.8966, which represents an improvement of over 44.4% compared to the previous state-of-the-art solution Forta, with a False Positive Rate (FPR) at only 0.16%.
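As a rough sketch of the classification step (not LookAhead's actual pipeline, which also applies a transformer model to code semantics), the example below extracts toy features from a contract's disassembled bytecode and trains an off-the-shelf classifier; the feature set, opcodes, and training data are illustrative assumptions.

```python
# Minimal sketch: classify a newly deployed contract as adversarial or
# benign from features of its disassembled bytecode. The features and
# data below are illustrative assumptions, not the paper's feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

TRACKED_OPCODES = ["CALL", "DELEGATECALL", "SELFDESTRUCT", "CREATE2"]

def extract_features(opcodes: list) -> list:
    """Toy feature vector: total length plus counts of opcodes that
    commonly appear in exploit helper contracts."""
    return [len(opcodes)] + [opcodes.count(op) for op in TRACKED_OPCODES]

# Labelled examples (1 = adversarial, 0 = benign); real training data
# would come from disassembled contracts of known attack incidents.
contracts = [
    (["PUSH1", "CALL", "DELEGATECALL", "SELFDESTRUCT"], 1),
    (["PUSH1", "SSTORE", "RETURN"], 0),
    (["CALL", "CREATE2", "DELEGATECALL", "CALL"], 1),
    (["PUSH1", "MSTORE", "SSTORE", "RETURN"], 0),
]
X = np.array([extract_features(ops) for ops, _ in contracts])
y = np.array([label for _, label in contracts])

clf = RandomForestClassifier(random_state=0).fit(X, y)
new_contract = ["CALL", "DELEGATECALL", "SELFDESTRUCT", "CREATE2"]
print(clf.predict([extract_features(new_contract)]))   # expected: [1]
```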
Related papers
- Secure Smart Contract with Control Flow Integrity [3.1655211232629563]
We develop CrossGuard, a framework that enforces control flow integrity in real-time to secure smart contracts.
Our evaluation demonstrates that CrossGuard effectively blocks 28 of the 30 analyzed attacks when configured only once prior to contract deployment.
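A toy sketch of the idea, assuming a hypothetical whitelist of permitted call edges configured once before deployment; the real system enforces integrity over actual contract execution traces.

```python
# Toy control-flow-integrity check: permit execution only if every observed
# call edge appears in a pre-approved whitelist. Function names and edges
# here are hypothetical.
ALLOWED_EDGES = {
    ("withdraw", "checkBalance"),
    ("checkBalance", "transfer"),
    ("deposit", "updateBalance"),
}

def violates_cfi(call_trace: list) -> bool:
    """Return True if any consecutive call pair is not whitelisted."""
    return any(edge not in ALLOWED_EDGES
               for edge in zip(call_trace, call_trace[1:]))

print(violates_cfi(["withdraw", "checkBalance", "transfer"]))   # False: allowed flow
print(violates_cfi(["withdraw", "checkBalance", "withdraw"]))   # True: re-entry blocked
```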
arXiv Detail & Related papers (2025-04-07T21:08:16Z) - Following Devils' Footprint: Towards Real-time Detection of Price Manipulation Attacks [10.782846331348379]
Price manipulation attacks are among the most notorious threats in decentralized finance (DeFi) applications.
We propose SMARTCAT, a novel approach for identifying price manipulation attacks in the pre-attack stage proactively.
We show that SMARTCAT significantly outperforms existing baselines with 91.6% recall and 100% precision.
arXiv Detail & Related papers (2025-02-06T02:11:24Z) - Smart Contract Fuzzing Towards Profitable Vulnerabilities [10.908512696717724]
VERITE is a profit-centric smart contract fuzzing framework.
It detects profitable vulnerabilities and maximizes the exploited profits.
It can extract more than 18 million dollars in total and is significantly better than the state-of-the-art fuzzer ITYFUZZ in both detection and exploitation.
arXiv Detail & Related papers (2025-01-15T14:38:18Z) - ML Study of Malicious Transactions in Ethereum [0.0]
This paper presents two successful approaches for detecting malicious contracts.
One uses opcodes and relies on GPT2, and the other uses the Solidity source and a LoRA fine-tuned CodeLlama.
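A schematic of the opcode-based variant, using Hugging Face transformers' GPT-2 sequence-classification head; the classifier is untrained here, so outputs are meaningless until fine-tuned on labelled contracts (training loop omitted).

```python
# Schematic: treat a contract's opcode sequence as text and attach a
# sequence-classification head to GPT-2. Untrained head; forward pass only.
import torch
from transformers import GPT2TokenizerFast, GPT2ForSequenceClassification

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 defines no pad token
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

opcodes = "PUSH1 0x60 PUSH1 0x40 MSTORE CALL DELEGATECALL SELFDESTRUCT"
inputs = tokenizer(opcodes, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, 2)
print(logits.softmax(-1))                      # [P(benign), P(malicious)] once fine-tuned
```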
arXiv Detail & Related papers (2024-08-16T13:50:04Z) - AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z) - Soley: Identification and Automated Detection of Logic Vulnerabilities in Ethereum Smart Contracts Using Large Language Models [1.081463830315253]
We empirically investigate logic vulnerabilities in real-world smart contracts extracted from code changes on GitHub.
We introduce Soley, an automated method for detecting logic vulnerabilities in smart contracts.
We examine mitigation strategies employed by smart contract developers to address these vulnerabilities in real-world scenarios.
arXiv Detail & Related papers (2024-06-24T00:15:18Z) - Improving Smart Contract Security with Contrastive Learning-based Vulnerability Detection [8.121484960948303]
We propose Contrastive Learning Enhanced Automated Recognition Approach for Smart Contract Vulnerabilities, named Clear.
In particular, Clear employs a contrastive learning (CL) model to capture the fine-grained correlation information among contracts.
We show that Clear achieves optimal performance over all baseline methods, with a 9.73%-39.99% higher F1-score than existing deep learning methods.
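A minimal sketch of a contrastive objective of this kind, assuming generic label-supervised InfoNCE rather than Clear's exact formulation:

```python
# Minimal sketch of a supervised contrastive objective over contract
# embeddings: pull same-label contracts together, push others apart.
import torch
import torch.nn.functional as F

def contrastive_loss(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    z = F.normalize(z, dim=1)                     # cosine-similarity space
    sim = z @ z.T / tau
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    pos = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos.fill_diagonal_(False)
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    return -log_prob[pos].mean()                  # mean log-prob of positive pairs

z = torch.randn(8, 64)                            # 8 contract embeddings (toy)
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])   # vulnerability labels (toy)
print(contrastive_loss(z, labels))
```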
arXiv Detail & Related papers (2024-04-27T09:13:25Z) - Uncover the Premeditated Attacks: Detecting Exploitable Reentrancy Vulnerabilities by Identifying Attacker Contracts [27.242299425486273]
Reentrancy, a notorious vulnerability in smart contracts, has led to millions of dollars in financial loss.
Current smart contract vulnerability detection tools suffer from a high false positive rate in identifying contracts with reentrancy vulnerabilities.
We propose BlockWatchdog, a tool that focuses on detecting reentrancy vulnerabilities by identifying attacker contracts.
arXiv Detail & Related papers (2024-03-28T03:07:23Z) - Blockchain Smart Contract Threat Detection Technology Based on Symbolic Execution [0.0]
Reentrancy vulnerabilities, which are hidden and complex, pose a great threat to smart contracts.
In this paper, we propose a smart contract threat detection technology based on symbolic execution.
The experimental results show that this method significantly increases both detection efficiency and accuracy.
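A vastly simplified stand-in for the approach: enumerate paths through a toy control-flow graph and flag the classic reentrancy-prone ordering, an external CALL executed before the storage write (SSTORE) that should guard it. Opcodes and the graph are illustrative; real symbolic execution also tracks EVM state and path constraints.

```python
# Flag paths where an external CALL precedes the guarding SSTORE.
def find_risky_paths(cfg: dict, blocks: dict, entry: str) -> list:
    flagged = []
    def dfs(node, path, call_seen):
        for op in blocks[node]:
            if op == "CALL":
                call_seen = True
            elif op == "SSTORE" and call_seen:
                flagged.append(path + [node])     # state write after external call
                return
        for succ in cfg.get(node, []):
            dfs(succ, path + [node], call_seen)
    dfs(entry, [], False)
    return flagged

blocks = {"A": ["PUSH1", "CALL"], "B": ["SSTORE"], "C": ["RETURN"]}
cfg = {"A": ["B", "C"]}                           # A branches to B or C
print(find_risky_paths(cfg, blocks, "A"))         # [['A', 'B']] -- the risky path
```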
arXiv Detail & Related papers (2023-12-24T03:27:03Z) - Malicious Agent Detection for Robust Multi-Agent Collaborative Perception [52.261231738242266]
Multi-agent collaborative (MAC) perception is more vulnerable to adversarial attacks than single-agent perception.
We propose Malicious Agent Detection (MADE), a reactive defense specific to MAC perception.
We conduct comprehensive evaluations on a benchmark 3D dataset V2X-sim and a real-road dataset DAIR-V2X.
arXiv Detail & Related papers (2023-10-18T11:36:42Z) - The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
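A sketch of the channel itself, using a toy quadratic suppression routine (L1 distance, not real IoU-based NMS) purely to show that post-processing time leaks how many candidate boxes the detector produced:

```python
import time
import numpy as np

def toy_suppression(boxes: np.ndarray, thresh: float = 0.5) -> int:
    """Naive O(n^2) suppression whose runtime scales with candidate count."""
    kept = []
    for i, box in enumerate(boxes):
        if all(np.abs(box - boxes[k]).sum() > thresh for k in kept):
            kept.append(i)
    return len(kept)

rng = np.random.default_rng(0)
for n_candidates in (10, 300):
    boxes = rng.random((n_candidates, 4))
    start = time.perf_counter()
    toy_suppression(boxes)
    elapsed_ms = (time.perf_counter() - start) * 1e3
    # An adversary timing this step can infer the candidate count.
    print(f"{n_candidates:4d} candidates -> {elapsed_ms:.2f} ms")
```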
arXiv Detail & Related papers (2023-09-05T11:53:17Z) - Collaborative Learning Framework to Detect Attacks in Transactions and Smart Contracts [26.70294159598272]
This paper presents a novel collaborative learning framework designed to detect attacks in blockchain transactions and smart contracts.
Our framework exhibits the capability to classify various types of blockchain attacks, including intricate attacks at the machine code level.
Our framework achieves a detection accuracy of approximately 94% through extensive simulations and 91% in real-time experiments with a throughput of over 2,150 transactions per second.
arXiv Detail & Related papers (2023-08-30T07:17:20Z) - G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
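A sketch of the underlying intuition, with plain KMeans over normalized client updates standing in, as a simplifying assumption, for the attributed graph clustering:

```python
# Embed each client's model update, cluster, and flag the minority
# cluster as potentially poisoned. Synthetic updates are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.1, size=(8, 32))         # similar benign updates
poisoned = rng.normal(-1.0, 0.1, size=(2, 32))      # backdoored outliers
updates = normalize(np.vstack([benign, poisoned]))  # cosine geometry

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(updates)
minority = np.argmin(np.bincount(labels))
print("flagged clients:", np.where(labels == minority)[0])  # expected: [8 9]
```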
arXiv Detail & Related papers (2023-06-08T07:15:04Z) - Blockchain Large Language Models [65.7726590159576]
This paper presents a dynamic, real-time approach to detecting anomalous blockchain transactions.
The proposed tool, BlockGPT, generates tracing representations of blockchain activity and trains from scratch a large language model to act as a real-time Intrusion Detection System.
arXiv Detail & Related papers (2023-04-25T11:56:18Z) - Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing [65.32148145602865]
Deep hashing networks are vulnerable to adversarial examples.
We propose a novel prototype-supervised adversarial network (ProS-GAN).
To the best of our knowledge, this is the first generation-based method to attack deep hashing networks.
arXiv Detail & Related papers (2021-05-17T00:31:37Z) - ESCORT: Ethereum Smart COntRacTs Vulnerability Detection using Deep Neural Network and Transfer Learning [80.85273827468063]
Existing machine learning-based vulnerability detection methods are limited and only inspect whether the smart contract is vulnerable.
We propose ESCORT, the first Deep Neural Network (DNN)-based vulnerability detection framework for smart contracts.
We show that ESCORT achieves an average F1-score of 95% on six vulnerability types and the detection time is 0.02 seconds per contract.
arXiv Detail & Related papers (2021-03-23T15:04:44Z) - Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by a recently introduced non-robust feature.
In this paper, we consider the non-robust features as a common property of adversarial examples, and we deduce it is possible to find a cluster in representation space corresponding to the property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster, and to leverage that distribution for a likelihood-based adversarial detector.
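A minimal sketch of such a detector, assuming Gaussian-distributed representations in the adversarial cluster; data, dimensions, and the threshold rule are illustrative:

```python
# Fit a Gaussian to representations from the (assumed) adversarial cluster,
# then flag inputs whose representations score a high likelihood under it.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
adv_reps = rng.normal(3.0, 0.5, size=(200, 16))   # known adversarial cluster (toy)
clean_rep = rng.normal(0.0, 0.5, size=16)         # a clean input's representation
attack_rep = rng.normal(3.0, 0.5, size=16)        # a fresh adversarial one

dist = multivariate_normal(adv_reps.mean(axis=0), np.cov(adv_reps.T))
threshold = np.percentile(dist.logpdf(adv_reps), 1)   # calibrated on known attacks

for name, rep in [("clean", clean_rep), ("adversarial", attack_rep)]:
    print(name, "flagged:", bool(dist.logpdf(rep) >= threshold))
```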
arXiv Detail & Related papers (2020-12-07T07:21:18Z) - Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class.
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-07-01T19:47:23Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)