Adversarial Attacks on LoRa Device Identification and Rogue Signal
Detection with Deep Learning
- URL: http://arxiv.org/abs/2312.16715v1
- Date: Wed, 27 Dec 2023 20:49:28 GMT
- Title: Adversarial Attacks on LoRa Device Identification and Rogue Signal
Detection with Deep Learning
- Authors: Yalin E. Sagduyu, Tugba Erpek
- Abstract summary: This paper studies a deep learning framework to address LoRa device identification and legitimate vs. rogue LoRa device classification tasks.
Fast Gradient Sign Method (FGSM)-based adversarial attacks are considered for LoRa signal classification tasks using deep learning models.
Results presented in this paper quantify the level of transferability of adversarial attacks on different LoRa signal classification tasks as a major vulnerability.
- Score: 7.373498690601958
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-Power Wide-Area Network (LPWAN) technologies, such as LoRa, have gained
significant attention for their ability to enable long-range, low-power
communication for Internet of Things (IoT) applications. However, the security
of LoRa networks remains a major concern, particularly in scenarios where
device identification and classification of legitimate and spoofed signals are
crucial. This paper studies a deep learning framework to address these
challenges, considering LoRa device identification and legitimate vs. rogue
LoRa device classification tasks. A deep neural network (DNN), either a
convolutional neural network (CNN) or feedforward neural network (FNN), is
trained for each task by utilizing real experimental I/Q data for LoRa signals,
while rogue signals are generated by applying kernel density estimation (KDE) to
the signals received from rogue devices. Fast Gradient Sign Method (FGSM)-based
adversarial attacks are considered for LoRa signal classification tasks using
deep learning models. The impact of these attacks is assessed on the
performance of two tasks, namely device identification and legitimate vs. rogue
device classification, by utilizing separate or common perturbations against
these signal classification tasks. Results presented in this paper quantify the
level of transferability of adversarial attacks on different LoRa signal
classification tasks as a major vulnerability and highlight the need to make
IoT applications robust to adversarial attacks.
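
The pipeline described in the abstract (KDE-based rogue-signal generation, a CNN or FNN classifier over I/Q samples, and one-step FGSM perturbations that can be shared across tasks) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the network sizes, epsilon, signal length, and placeholder data are assumptions for illustration only.

```python
# Illustrative sketch (assumptions throughout): KDE-based rogue-signal generation
# and an FGSM perturbation against a small I/Q signal classifier.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KernelDensity

NUM_DEVICES = 2        # assumed number of LoRa devices for the identification task
SIG_LEN = 256          # assumed number of complex (I/Q) samples per signal

# --- 1) Rogue-signal generation via kernel density estimation (KDE) ---
# Fit a KDE to flattened I/Q captures from a rogue device, then draw synthetic
# rogue signals from the fitted density (placeholder data, not the paper's captures).
rogue_iq = np.random.randn(100, 2 * SIG_LEN)
kde = KernelDensity(kernel="gaussian", bandwidth=0.1).fit(rogue_iq)
synthetic_rogue = kde.sample(64)   # would be labeled "rogue" for the second task

# --- 2) A small CNN classifier over 2-channel (I and Q) inputs ---
class IQCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):          # x: (batch, 2, SIG_LEN)
        return self.head(self.features(x).squeeze(-1))

# --- 3) FGSM: one-step perturbation in the direction of the loss gradient ---
def fgsm_perturbation(model, x, y, eps):
    """Return x + eps * sign(grad_x loss), the standard FGSM adversarial input."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Craft the perturbation on the device-identification model, then reuse the same
# (common) perturbation against the legitimate-vs-rogue model to probe transferability.
device_id_model = IQCNN(NUM_DEVICES)
rogue_detect_model = IQCNN(2)
x = torch.randn(8, 2, SIG_LEN)                     # placeholder I/Q batch
y = torch.randint(0, NUM_DEVICES, (8,))
x_adv = fgsm_perturbation(device_id_model, x, y, eps=0.05)
common_delta = x_adv - x                           # perturbation crafted on task 1
transferred_logits = rogue_detect_model(x + common_delta)   # applied unchanged to task 2
```

Crafting the perturbation on one model and applying it unchanged to the other mirrors the separate vs. common perturbation comparison used in the paper to quantify attack transferability.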
Related papers
- Adversarial Attack and Defense for LoRa Device Identification and Authentication via Deep Learning [6.241494296494434]
Concerns persist regarding the security of LoRa networks.
This paper focuses on two critical tasks, namely (i) identifying LoRa devices and (ii) classifying them as legitimate or rogue devices.
Deep neural networks (DNNs), encompassing both convolutional and feedforward neural networks, are trained for these tasks.
arXiv Detail & Related papers (2024-12-30T18:43:21Z)
- SCGNet-Stacked Convolution with Gated Recurrent Unit Network for Cyber Network Intrusion Detection and Intrusion Type Classification [0.0]
Intrusion detection systems (IDSs) are far from being able to quickly and efficiently identify complex and varied network attacks.
The SCGNet is a novel deep learning architecture that we propose in this study.
It exhibits promising results on the NSL-KDD dataset in both tasks, network attack detection and attack type classification, with 99.76% and 98.92% accuracy, respectively.
arXiv Detail & Related papers (2024-10-29T09:09:08Z)
- Multi-class Network Intrusion Detection with Class Imbalance via LSTM & SMOTE [1.0591656257413806]
This paper proposes to use oversampling techniques along with appropriate loss functions to handle class imbalance for the detection of various types of network intrusions.
Our deep learning model employs LSTM with fully connected layers to perform multi-class classification of network attacks.
arXiv Detail & Related papers (2023-10-03T07:28:04Z)
- Performance evaluation of Machine learning algorithms for Intrusion Detection System [0.40964539027092917]
This paper focuses on intrusion detection systems (IDSs) analysis using Machine Learning (ML) techniques.
We analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models.
arXiv Detail & Related papers (2023-10-01T06:35:37Z)
- OMINACS: Online ML-Based IoT Network Attack Detection and Classification System [0.0]
This paper proposes an online attack detection and network traffic classification system.
It combines stream Machine Learning, Deep Learning, and Ensemble Learning techniques.
It can detect the presence of malicious traffic flows and classify them according to the type of attack they represent.
arXiv Detail & Related papers (2023-02-18T04:06:24Z)
- Task-Oriented Communications for NextG: End-to-End Deep Learning and AI Security Aspects [78.84264189471936]
NextG communication systems are beginning to explore shifting the design paradigm from reliably transferring bits to reliably executing a given task, as in task-oriented communications.
Wireless signal classification is considered as the task for the NextG Radio Access Network (RAN), where edge devices collect wireless signals for spectrum awareness and communicate with the NextG base station (gNodeB) that needs to identify the signal label.
Task-oriented communications is considered by jointly training the transmitter, receiver and classifier functionalities as an encoder-decoder pair for the edge device and the gNodeB.
arXiv Detail & Related papers (2022-12-19T17:54:36Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique and show that it significantly improves the robustness of the DL-based wireless system against such attacks.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which raises the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation of the input to the neural network (NN), the white-box attacks can yield infeasible solutions up to 86% of the time.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by the Expectation-Maximization algorithm, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)