DeFiScope: Detecting Various DeFi Price Manipulations with LLM Reasoning
- URL: http://arxiv.org/abs/2502.11521v1
- Date: Mon, 17 Feb 2025 07:45:03 GMT
- Title: DeFiScope: Detecting Various DeFi Price Manipulations with LLM Reasoning
- Authors: Juantao Zhong, Daoyuan Wu, Ye Liu, Maoyi Xie, Yang Liu, Yi Li, Ning Liu
- Abstract summary: We introduce the first LLM-based approach, DeFiScope, for detecting DeFi price manipulation attacks. DeFiScope achieves a high precision of 96% and a recall rate of 80%, significantly outperforming SOTA approaches.
- Score: 13.536828549768858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: DeFi (Decentralized Finance) is one of the most important applications of today's cryptocurrencies and smart contracts. It manages hundreds of billions in Total Value Locked (TVL) on-chain, yet it remains susceptible to common DeFi price manipulation attacks. Despite state-of-the-art (SOTA) systems like DeFiRanger and DeFort, we found that they are less effective against non-standard price models in custom DeFi protocols, which account for 44.2% of the 95 DeFi price manipulation attacks reported over the past three years. In this paper, we introduce the first LLM-based approach, DeFiScope, for detecting DeFi price manipulation attacks under both standard and custom price models. Our insight is that large language models (LLMs) can abstract price calculation logic from code and infer the trend of token price changes based on the extracted price models. To further strengthen LLMs in this respect, we leverage Foundry to synthesize on-chain data and use it to fine-tune a DeFi price-specific LLM. Together with the high-level DeFi operations recovered from low-level transaction data, DeFiScope detects various DeFi price manipulations according to systematically mined patterns. Experimental results show that DeFiScope achieves a high precision of 96% and a recall of 80%, significantly outperforming SOTA approaches. Moreover, we evaluate DeFiScope's cost-effectiveness and demonstrate its practicality by helping our industry partner confirm 147 real-world price manipulation attacks, including 81 previously unknown historical incidents.
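To make the "standard price model" concrete, the minimal Python sketch below implements a toy constant-product (x * y = k) AMM pool and shows how a single large swap, e.g. one funded by a flash loan, moves the spot price within one transaction -- the kind of in-transaction price trend a pattern-based detector could flag. This is an illustration of the attack surface only, not DeFiScope's actual code; the class and method names (`ConstantProductPool`, `swap_x_for_y`) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ConstantProductPool:
    """Toy Uniswap-V2-style pool satisfying reserve_x * reserve_y ~ k."""
    reserve_x: float  # reserves of the token being priced
    reserve_y: float  # reserves of the quote token

    def spot_price(self) -> float:
        # Standard constant-product spot price of X quoted in Y.
        return self.reserve_y / self.reserve_x

    def swap_x_for_y(self, amount_in: float, fee: float = 0.003) -> float:
        # V2-style swap: the output is computed on the fee-adjusted input,
        # while the full input is added to reserves (the fee accrues to the pool).
        effective_in = amount_in * (1.0 - fee)
        amount_out = (self.reserve_y * effective_in) / (self.reserve_x + effective_in)
        self.reserve_x += amount_in
        self.reserve_y -= amount_out
        return amount_out


if __name__ == "__main__":
    pool = ConstantProductPool(reserve_x=1_000.0, reserve_y=1_000.0)
    before = pool.spot_price()
    pool.swap_x_for_y(500.0)  # one large sell of X, e.g. flash-loan funded
    after = pool.spot_price()
    # A drastic single-transaction price move is one signal a pattern-based
    # detector could escalate for further inspection.
    print(f"price before={before:.4f}, after={after:.4f} ({after / before - 1:+.2%})")
```

Custom protocols replace the `spot_price` formula with bespoke logic, which is why the paper has the LLM abstract the price model from the contract code rather than assuming a fixed formula like the one above.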
Related papers
- Enhancing Smart Contract Vulnerability Detection in DApps Leveraging Fine-Tuned LLM [0.7018579932647147]
Decentralized applications (DApps) face significant security risks due to vulnerabilities in smart contracts.
This paper proposes a novel approach leveraging fine-tuned Large Language Models (LLMs) to enhance smart contract vulnerability detection.
arXiv Detail & Related papers (2025-04-07T12:32:14Z)
- Are You Getting What You Pay For? Auditing Model Substitution in LLM APIs [60.881609323604685]
Large Language Models (LLMs) accessed via black-box APIs introduce a trust challenge.
Users pay for services based on advertised model capabilities, but providers may covertly substitute the specified model with a cheaper, lower-quality alternative to reduce operational costs.
This lack of transparency undermines fairness, erodes trust, and complicates reliable benchmarking.
arXiv Detail & Related papers (2025-04-07T03:57:41Z)
- FinTSB: A Comprehensive and Practical Benchmark for Financial Time Series Forecasting [58.70072722290475]
Financial time series (FinTS) record the behavior of human-brain-augmented decision-making.
FinTSB is a comprehensive and practical benchmark for financial time series forecasting.
arXiv Detail & Related papers (2025-02-26T05:19:16Z)
- AiRacleX: Automated Detection of Price Oracle Manipulations via LLM-Driven Knowledge Mining and Prompt Generation [30.312011441118194]
Decentralized finance applications depend on accurate price oracles to ensure secure transactions. Price oracles are highly vulnerable to manipulation, enabling attackers to exploit smart contract vulnerabilities. We propose a novel framework that automates the detection of price oracle manipulations.
arXiv Detail & Related papers (2025-02-10T10:58:09Z)
- DeFiGuard: A Price Manipulation Detection Service in DeFi using Graph Neural Networks [20.373624767892302]
This paper introduces a novel detection service, DeFiGuard, using Graph Neural Networks (GNNs).
DeFiGuard integrates transaction parsing, graph construction, model training, and PMA detection.
Evaluations show that DeFiGuard with GNN models outperforms the baseline in Accuracy, TPR, FPR, and AUC-ROC.
arXiv Detail & Related papers (2024-06-17T02:51:18Z)
- Advancing Anomaly Detection: Non-Semantic Financial Data Encoding with LLMs [49.57641083688934]
We introduce a novel approach to anomaly detection in financial data using Large Language Model (LLM) embeddings.
Our experiments demonstrate that LLMs contribute valuable information to anomaly detection as our models outperform the baselines.
arXiv Detail & Related papers (2024-06-05T20:19:09Z)
- DeFiTail: DeFi Protocol Inspection through Cross-Contract Execution Analysis [4.891180928768215]
Decentralized finance (DeFi) protocols are crypto projects developed on the blockchain to manage digital assets.
We propose DeFiTail, the first framework that utilizes deep learning to detect access control and flash loan exploits.
DeFiTail achieves the highest accuracy, with 98.39% in access control and 97.43% in flash loan exploits.
arXiv Detail & Related papers (2024-05-17T18:14:19Z)
- From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers [52.199303258423306]
We propose a novel density loss that encourages higher activation sparsity in pre-trained models.
Our proposed method, DEFT, can consistently reduce activation density by up to 44.94% on RoBERTa$_\mathrm{Large}$ and by 53.19% (encoder density) and 90.60% (decoder density) on Flan-T5$_\mathrm{XXL}$.
arXiv Detail & Related papers (2024-02-02T21:25:46Z)
- Data-Free Hard-Label Robustness Stealing Attack [67.41281050467889]
We introduce a novel Data-Free Hard-Label Robustness Stealing (DFHL-RS) attack in this paper.
It enables the stealing of both model accuracy and robustness by simply querying hard labels of the target model.
Our method achieves a clean accuracy of 77.86% and a robust accuracy of 39.51% against AutoAttack.
arXiv Detail & Related papers (2023-12-10T16:14:02Z)
- G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
arXiv Detail & Related papers (2023-06-08T07:15:04Z)
- Machine Learning Force Fields with Data Cost Aware Training [94.78998399180519]
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation.
Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels.
We propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data.
arXiv Detail & Related papers (2023-06-05T04:34:54Z)
- Leveraging Machine Learning for Multichain DeFi Fraud Detection [5.213509776274283]
We present a framework for extracting features from different chains, including the largest one, and evaluate it over an extensive dataset.
Different machine learning methods, such as XGBoost and a Neural Network, were employed to identify fraudulent accounts interacting with DeFi.
We demonstrate that the introduction of novel DeFi-related features significantly improves the evaluation results.
arXiv Detail & Related papers (2023-05-17T15:48:21Z)