Protecting DeFi Platforms against Non-Price Flash Loan Attacks
- URL: http://arxiv.org/abs/2503.01944v1
- Date: Mon, 03 Mar 2025 18:18:05 GMT
- Title: Protecting DeFi Platforms against Non-Price Flash Loan Attacks
- Authors: Abdulrahman Alhaidari, Balaji Palanisamy, Prashant Krishnamurthy
- Abstract summary: We present FlashGuard, a runtime detection and mitigation method for non-price flash loan attacks. Our approach targets smart contract function signatures to identify attacks in real time and counterattacks by disrupting the attack transaction's atomicity. FlashGuard achieves an average real-time detection latency of 150.31ms, a detection accuracy of over 99.93%, and an average disruption time of 410.92ms.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smart contracts in Decentralized Finance (DeFi) platforms are attractive targets for attacks because their vulnerabilities can lead to massive financial losses. Flash loan attacks, in particular, pose a major threat to DeFi protocols, which hold a Total Value Locked (TVL) exceeding $106 billion. These attacks use the atomicity property of blockchains to drain funds from smart contracts in a single transaction. While existing research primarily focuses on price manipulation attacks, such as oracle manipulation, mitigating non-price flash loan attacks, which often exploit smart contracts' zero-day vulnerabilities, remains largely unaddressed. These attacks are challenging to detect because of their unique patterns, time sensitivity, and complexity. In this paper, we present FlashGuard, a runtime detection and mitigation method for non-price flash loan attacks. Our approach targets smart contract function signatures to identify attacks in real time and counterattacks by leveraging the short window when transactions are visible in the mempool but not yet confirmed to disrupt the attack transaction's atomicity. When FlashGuard detects an attack, it dispatches a stealthy dusting counterattack transaction to miners to change the victim contract's state, which disrupts the attack's atomicity and forces the attack transaction to revert. We evaluate our approach using 20 historical attacks and several unseen attacks. FlashGuard achieves an average real-time detection latency of 150.31ms, a detection accuracy of over 99.93%, and an average disruption time of 410.92ms. FlashGuard could have potentially rescued over $405.71 million in losses had it been deployed prior to these attack instances. FlashGuard demonstrates significant potential as a DeFi security solution for mitigating the rising threat of non-price flash loan attacks.
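The core counterattack idea above, confirming a tiny state-changing "dusting" transaction before the attack lands so that the attacker's atomic exploit no longer matches the state it was crafted against, can be sketched with a toy model. This is a hypothetical Python illustration, not FlashGuard's implementation; real attacks and dusting transactions operate on actual mempool transactions and EVM state.

```python
# Toy model of atomicity disruption: an attack transaction crafted against a
# specific contract state reverts entirely once a dusting transaction has
# changed that state first. All names and the state layout are hypothetical.

class RevertError(Exception):
    """Raised when a transaction's assumptions no longer hold; the whole
    atomic transaction rolls back."""

class VictimContract:
    def __init__(self):
        self.balance = 1_000_000
        self.state_nonce = 0  # any state the attacker's calldata depends on

    def dust(self):
        """A tiny 'dusting' transaction that perturbs contract state."""
        self.state_nonce += 1

def attack_tx(contract, expected_nonce):
    """Atomic exploit: succeeds only if the state it was crafted against is
    unchanged; otherwise every effect is rolled back."""
    snapshot = (contract.balance, contract.state_nonce)
    try:
        if contract.state_nonce != expected_nonce:
            raise RevertError("state changed; exploit path invalid")
        contract.balance = 0  # drain the funds
        return "drained"
    except RevertError:
        contract.balance, contract.state_nonce = snapshot  # atomic rollback
        return "reverted"

victim = VictimContract()
observed = victim.state_nonce   # attacker crafts the tx against this state
victim.dust()                   # defender's dusting tx confirms first
result = attack_tx(victim, observed)  # whole attack reverts; funds untouched
```

The design point is that the defender never needs to out-exploit the attacker: any confirmed state change the attack transaction depends on is enough to break its atomicity.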
Related papers
- Following Devils' Footprint: Towards Real-time Detection of Price Manipulation Attacks [10.782846331348379]
Price manipulation attacks are among the most notorious threats to decentralized finance (DeFi) applications. We propose SMARTCAT, a novel approach for proactively identifying price manipulation attacks in the pre-attack stage. We show that SMARTCAT significantly outperforms existing baselines with 91.6% recall and 100% precision.
arXiv Detail & Related papers (2025-02-06T02:11:24Z) - Strengthening DeFi Security: A Static Analysis Approach to Flash Loan Vulnerabilities [0.0]
We introduce FlashDeFier, an advanced detection framework for price manipulation vulnerabilities arising from flash loans. FlashDeFier expands the scope of taint sources and sinks, enabling comprehensive analysis of data flows across DeFi protocols. Tested against a dataset of high-profile DeFi incidents, FlashDeFier identifies 76.4% of price manipulation vulnerabilities, marking a 30% improvement over DeFiTainter.
arXiv Detail & Related papers (2024-11-02T12:42:01Z) - SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z) - Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers [51.0477382050976]
An extra prompt token, called the switch token in this work, can turn the backdoor mode on, converting a benign model into a backdoored one.
To attack a pre-trained model, our proposed attack, named SWARM, learns a trigger and prompt tokens including a switch token.
Experiments on diverse visual recognition tasks confirm the success of our switchable backdoor attack, achieving 95%+ attack success rate.
arXiv Detail & Related papers (2024-05-17T08:19:48Z) - Steal Now and Attack Later: Evaluating Robustness of Object Detection against Black-box Adversarial Attacks [47.9744734181236]
"Steal now, attack later" attacks can be employed to exploit potential vulnerabilities in AI services.
The average cost of each attack is less than $1, posing a significant threat to AI security.
arXiv Detail & Related papers (2024-04-24T13:51:56Z) - Uncover the Premeditated Attacks: Detecting Exploitable Reentrancy Vulnerabilities by Identifying Attacker Contracts [27.242299425486273]
Reentrancy, a notorious vulnerability in smart contracts, has led to millions of dollars in financial loss.
Current smart contract vulnerability detection tools suffer from a high false positive rate in identifying contracts with reentrancy vulnerabilities.
We propose BlockWatchdog, a tool that focuses on detecting reentrancy vulnerabilities by identifying attacker contracts.
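For context, the reentrancy pattern such detectors hunt for can be illustrated with a minimal toy (hypothetical Python, not BlockWatchdog's code): a contract that performs an external call before updating its bookkeeping, letting an attacker contract re-enter the withdrawal function.

```python
# Classic reentrancy bug in miniature: the external call (receive_hook) runs
# before the balance is zeroed, so the attacker's fallback re-enters
# withdraw() and drains more than it deposited. Names are hypothetical.

class VulnerableBank:
    def __init__(self):
        self.balances = {"attacker": 10}  # attacker deposited 10
        self.total = 100                  # pool also holds other users' funds

    def withdraw(self, who, receive_hook):
        amount = self.balances[who]
        if amount > 0 and self.total >= amount:
            self.total -= amount  # funds sent via external call first...
            receive_hook()        # ...attacker re-enters here...
            self.balances[who] = 0  # ...state is updated too late

bank = VulnerableBank()
calls = {"count": 0}

def attacker_fallback():
    calls["count"] += 1
    if calls["count"] < 5:  # re-enter while the balance is still nonzero
        bank.withdraw("attacker", attacker_fallback)

bank.withdraw("attacker", attacker_fallback)
# Five nested withdrawals of 10 each: the attacker deposited 10 but
# removed 50 from the pool.
```

The standard fix is the checks-effects-interactions pattern: update `balances[who]` before making any external call, so a re-entering caller sees a zero balance.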
arXiv Detail & Related papers (2024-03-28T03:07:23Z) - LookAhead: Preventing DeFi Attacks via Unveiling Adversarial Contracts [15.071155232677643]
Attacks on Decentralized Finance (DeFi) protocols have resulted in financial losses exceeding 3 billion US dollars.
Current detection tools face significant challenges in identifying attack activities effectively.
We propose LookAhead, a new framework for detecting DeFi attacks via unveiling adversarial contracts.
arXiv Detail & Related papers (2024-01-14T11:39:33Z) - Does Few-shot Learning Suffer from Backdoor Attacks? [63.9864247424967]
We show that few-shot learning can still be vulnerable to backdoor attacks.
Our method demonstrates a high Attack Success Rate (ASR) in FSL tasks with different few-shot learning paradigms.
This study reveals that few-shot learning still suffers from backdoor attacks, and its security should be given attention.
arXiv Detail & Related papers (2023-12-31T06:43:36Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - FlashSyn: Flash Loan Attack Synthesis via Counter Example Driven Approximation [4.639819221995903]
In decentralized finance (DeFi), lenders can offer flash loans to borrowers.
Unlike normal loans, flash loans allow borrowers to borrow large assets without upfront collateral deposits.
Malicious adversaries use flash loans to gather large assets to exploit vulnerable DeFi protocols.
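The flash-loan mechanics described above, uncollateralized borrowing that must be repaid within a single atomic transaction, can be sketched as a toy model (hypothetical names and logic, not taken from FlashSyn):

```python
# Minimal sketch of a flash loan: the pool lends with no collateral because
# the loan and its repayment execute in one atomic transaction, and the
# whole transaction reverts if the pool is not made whole.

class Reverted(Exception):
    pass

class LendingPool:
    def __init__(self, liquidity):
        self.liquidity = liquidity

    def flash_loan(self, amount, borrower_callback):
        before = self.liquidity
        self.liquidity -= amount       # lend with no collateral
        try:
            borrower_callback(amount)  # borrower runs arbitrary logic
            if self.liquidity < before:
                raise Reverted("loan not repaid within the transaction")
        except Reverted:
            self.liquidity = before    # atomic rollback: as if nothing happened
            raise

pool = LendingPool(100)

def honest_borrower(amount):
    pool.liquidity += amount  # repays in full before the transaction ends

def defaulting_borrower(amount):
    pass  # keeps the funds, so the whole transaction reverts

pool.flash_loan(50, honest_borrower)  # succeeds; liquidity restored to 100
try:
    pool.flash_loan(50, defaulting_borrower)
except Reverted:
    pass  # the rollback leaves the pool unchanged
```

This atomicity is exactly what attackers lean on: within the callback they can wield the borrowed capital against a vulnerable protocol, and the loan costs them nothing if the exploit fails, since the failed transaction reverts.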
arXiv Detail & Related papers (2022-06-21T19:56:54Z) - RayS: A Ray Searching Method for Hard-label Adversarial Attack [99.72117609513589]
We present the Ray Searching attack (RayS), which greatly improves the hard-label attack effectiveness as well as efficiency.
RayS attack can also be used as a sanity check for possible "falsely robust" models.
arXiv Detail & Related papers (2020-06-23T07:01:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.