Adversarial Attacks Against Deep Learning-Based Radio Frequency Fingerprint Identification
- URL: http://arxiv.org/abs/2512.12002v1
- Date: Fri, 12 Dec 2025 19:33:39 GMT
- Title: Adversarial Attacks Against Deep Learning-Based Radio Frequency Fingerprint Identification
- Authors: Jie Ma, Junqing Zhang, Guanxiong Shen, Alan Marshall, Chip-Hong Chang
- Abstract summary: Radio frequency fingerprint identification (RFFI) is an emerging technique for the lightweight authentication of wireless Internet of Things (IoT) devices. Recent studies show deep learning-based RFFI is vulnerable to adversarial attacks.
- Score: 27.217069859196133
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Radio frequency fingerprint identification (RFFI) is an emerging technique for the lightweight authentication of wireless Internet of Things (IoT) devices. RFFI exploits deep learning models to extract hardware impairments to uniquely identify wireless devices. Recent studies show deep learning-based RFFI is vulnerable to adversarial attacks. However, effective adversarial attacks against different types of RFFI classifiers have not yet been explored. In this paper, we carried out a comprehensive investigation of different adversarial attack methods on RFFI systems using various deep learning models. Three specific algorithms, the fast gradient sign method (FGSM), projected gradient descent (PGD), and universal adversarial perturbation (UAP), were analyzed. The attacks were launched against a LoRa-based RFFI system, and the experimental results showed the generated perturbations were effective against convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and gated recurrent units (GRU). We further used UAP to launch practical attacks, considering factors specific to the wireless context, including real-time implementation and the effectiveness of the attacks over time. Our experimental evaluation demonstrated that UAP can successfully launch adversarial attacks against the RFFI system, achieving a success rate of 81.7% even when the adversary has almost no prior knowledge of the victim RFFI system.
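The three attack algorithms named in the abstract can be sketched as follows. This is an illustrative toy on a linear softmax classifier, not the authors' implementation: the classifier, its dimensions, and all variable names are hypothetical stand-ins for the paper's CNN/LSTM/GRU RFFI models.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def xent_loss(x, y, W):
    """Cross-entropy loss of a linear softmax classifier on input x, label y."""
    return -np.log(softmax(W @ x)[y])

def input_grad(x, y, W):
    """Gradient of the loss w.r.t. the input: W^T (softmax(Wx) - one_hot(y))."""
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def fgsm(x, y, W, eps):
    """One-step FGSM: move eps along the sign of the input gradient."""
    return x + eps * np.sign(input_grad(x, y, W))

def pgd(x, y, W, eps, alpha=0.1, steps=10):
    """Iterative FGSM, projecting back onto the L-inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(x_adv, y, W))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
    return x_adv

def uap(X, Y, W, eps):
    """Crude universal perturbation: eps-scaled sign of the gradients summed
    over a batch, so a single perturbation attacks many inputs at once."""
    g = sum(input_grad(x, y, W) for x, y in zip(X, Y))
    return eps * np.sign(g)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))              # 3 device classes, 8 signal features
X = rng.normal(size=(5, 8))              # a small batch of "received signals"
Y = [int(np.argmax(W @ x)) for x in X]   # attack the currently predicted labels

x, y = X[0], Y[0]
assert xent_loss(fgsm(x, y, W, 0.5), y, W) > xent_loss(x, y, W)
delta = uap(X, Y, W, 0.5)
assert np.max(np.abs(delta)) <= 0.5      # the UAP stays inside the eps-ball
```

UAP differs from FGSM and PGD in that the perturbation is computed once, input-agnostically, which is what makes the near-zero-knowledge, real-time attacks in the paper practical: the adversary does not need the victim's live signal to craft the perturbation.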
Related papers
- MI$^2$DAS: A Multi-Layer Intrusion Detection Framework with Incremental Learning for Securing Industrial IoT Networks [47.386868423451595]
MI$^2$DAS is a multi-layer intrusion detection framework that integrates anomaly-based hierarchical traffic pooling and open-set recognition. Experiments conducted on the Edge-IIoTset dataset demonstrate strong performance across all layers. These results showcase MI$^2$DAS as an effective, scalable, and adaptive framework for enhancing IIoT security.
arXiv Detail & Related papers (2026-02-27T09:37:05Z) - Towards Channel-Robust and Receiver-Independent Radio Frequency Fingerprint Identification [23.32212243151147]
Radio frequency fingerprint identification (RFFI) is an emerging method for authenticating Internet of Things (IoT) devices. Deep learning-based RFFI has shown excellent performance, but research challenges remain. We propose a three-stage RFFI approach involving contrastive learning-enhanced pretraining, Siamese network-based classification network training, and inference.
arXiv Detail & Related papers (2025-12-12T22:32:08Z) - An Adversarial-Driven Experimental Study on Deep Learning for RF Fingerprinting [6.145167956956878]
Radio frequency (RF) fingerprinting has emerged as a promising physical-layer device identification mechanism. Deep learning (DL) methods have demonstrated state-of-the-art performance in this domain. We show that training DL models on raw received signals causes the models to entangle RF fingerprints with environmental and signal-pattern features.
arXiv Detail & Related papers (2025-07-18T17:42:20Z) - Adversarial Attacks on Deep Learning-Based False Data Injection Detection in Differential Relays [3.4061238650474666]
This paper demonstrates that adversarial attacks, carefully crafted FDIAs, can evade existing deep learning-based schemes (DLSs) used to detect false data injection attacks (FDIAs) in smart grids. We propose a novel adversarial attack framework, utilizing the fast gradient sign method, which exploits DLS vulnerabilities. Our results highlight the significant threat posed by adversarial attacks to DLS-based FDIA detection, underscore the necessity for robust cybersecurity measures in smart grids, and demonstrate the effectiveness of adversarial training in enhancing model robustness against adversarial FDIAs.
arXiv Detail & Related papers (2025-06-24T04:22:26Z) - Learning in Multiple Spaces: Few-Shot Network Attack Detection with Metric-Fused Prototypical Networks [47.18575262588692]
We propose a novel Multi-Space Prototypical Learning (MSPL) framework tailored for few-shot attack detection. By leveraging Polyak-averaged prototype generation, the framework stabilizes the learning process and effectively adapts to rare and zero-day attacks. Experimental results on benchmark datasets demonstrate that MSPL outperforms traditional approaches in detecting low-profile and novel attack types.
arXiv Detail & Related papers (2024-12-28T00:09:46Z) - Transfer-based Adversarial Poisoning Attacks for Online (MIMO-)Deep Receivers [44.051757540209756]
We propose a transfer-based adversarial poisoning attack method for online receivers.
Without knowledge of the attack target, perturbations are injected to the pilots, poisoning the online deep receiver.
Simulation results indicate that the proposed poisoning attack significantly reduces the performance of online receivers.
arXiv Detail & Related papers (2024-09-04T04:17:57Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Towards Adversarial Realism and Robust Learning for IoT Intrusion Detection and Classification [0.0]
The Internet of Things (IoT) faces tremendous security challenges.
The increasing threat posed by adversarial attacks restates the need for reliable defense strategies.
This work describes the types of constraints required for an adversarial cyber-attack example to be realistic.
arXiv Detail & Related papers (2023-01-30T18:00:28Z) - One-shot Generative Distribution Matching for Augmented RF-based UAV Identification [0.0]
This work addresses the challenge of identifying Unmanned Aerial Vehicles (UAVs) using radio frequency (RF) fingerprinting in limited RF environments.
The complexity and variability of RF signals, influenced by environmental interference and hardware imperfections, often render traditional RF-based identification methods ineffective.
One-shot generative methods for augmenting transformed RF signals offer a significant improvement in UAV identification.
arXiv Detail & Related papers (2023-01-20T02:35:43Z) - Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique and show that the robustness of DL-based wireless systems against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z) - TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
arXiv Detail & Related papers (2021-03-10T19:03:38Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing a Bayesian optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, low false-alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.