Gradient-based Adversarial Deep Modulation Classification with
Data-driven Subsampling
- URL: http://arxiv.org/abs/2104.06375v1
- Date: Sat, 3 Apr 2021 22:28:04 GMT
- Title: Gradient-based Adversarial Deep Modulation Classification with
Data-driven Subsampling
- Authors: Jinho Yi and Aly El Gamal
- Abstract summary: Deep learning techniques have been shown to deliver superior performance to conventional model-based strategies.
However, they have also been shown to be vulnerable to gradient-based adversarial attacks.
We consider a data-driven subsampling setting, where several recently introduced deep-learning-based algorithms are employed to select the samples fed to the classifier.
We evaluate the best strategies under various assumptions about each party's knowledge of the other's strategy.
- Score: 6.447052211404121
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic modulation classification can be a core component for intelligent
spectrally efficient wireless communication networks, and deep learning
techniques have recently been shown to deliver superior performance to
conventional model-based strategies, particularly when distinguishing between a
large number of modulation types. However, such deep learning techniques have
also been recently shown to be vulnerable to gradient-based adversarial attacks
that rely on subtle input perturbations, which would be particularly feasible
in a wireless setting via jamming. One such potent attack is the one known as
the Carlini-Wagner attack, which we consider in this work. We further consider
a data-driven subsampling setting, where several recently introduced
deep-learning-based algorithms are employed to select a subset of samples that
lead to reducing the final classifier's training time with minimal loss in
accuracy. In this setting, the attacker has to make an assumption about the
employed subsampling strategy, in order to calculate the loss gradient. Based
on state-of-the-art techniques available to both the attacker and the defender, we
evaluate the best strategies under various assumptions about each party's
knowledge of the other's strategy. Interestingly, in the presence of
knowledgeable attackers, we identify computational cost reduction opportunities
for the defender with no or minimal loss in performance.
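The core difficulty described above, that the attacker must assume a subsampling strategy before it can compute a loss gradient, can be illustrated with a toy sketch. The snippet below is NOT the paper's Carlini-Wagner setup: it uses a simpler FGSM-style sign step, a random linear softmax classifier, and synthetic data, all of which are illustrative assumptions. It only shows how the gradient with respect to the full received signal is zero outside the attacker's assumed subsampling index set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's models): a received signal of
# N_FULL real samples, a defender-chosen subsampling index set, and a
# linear softmax classifier that only sees the subsampled entries.
N_FULL, N_SUB, N_CLASSES = 128, 32, 4
sub_idx = np.sort(rng.choice(N_FULL, size=N_SUB, replace=False))
W = rng.normal(size=(N_CLASSES, N_SUB))  # stand-in classifier weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad(x_full, label, assumed_idx):
    """Cross-entropy loss and its gradient w.r.t. the FULL signal,
    back-propagated through the attacker's ASSUMED subsampling map."""
    x_sub = x_full[assumed_idx]
    p = softmax(W @ x_sub)
    loss = -np.log(p[label] + 1e-12)
    g_sub = W.T @ (p - np.eye(N_CLASSES)[label])  # dL/dx_sub
    g_full = np.zeros_like(x_full)
    g_full[assumed_idx] = g_sub                   # zero off the assumed mask
    return loss, g_full

x = rng.normal(size=N_FULL)   # synthetic received signal
label = 2                     # true modulation class (toy)
eps = 0.05                    # perturbation budget (jamming-power proxy)

# FGSM-style sign step (a simpler stand-in for Carlini-Wagner): the
# attack only perturbs samples it BELIEVES the defender will keep.
_, g = loss_and_grad(x, label, assumed_idx=sub_idx)
x_adv = x + eps * np.sign(g)
```

If the attacker's assumed index set differs from the defender's actual one, part of the perturbation budget lands on samples the classifier never sees, which is exactly the mismatch the paper's knowledge assumptions probe.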
Related papers
- Golden Ratio Search: A Low-Power Adversarial Attack for Deep Learning based Modulation Classification [8.187445866881637]
We propose a minimal power white box adversarial attack for Deep Learning based Automatic Modulation Classification (AMC)
We evaluate the efficacy of the proposed method by comparing it with existing adversarial attack approaches.
Experimental results demonstrate that the proposed attack is powerful, requires minimal power, and can be generated in less time.
arXiv Detail & Related papers (2024-09-17T17:17:54Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach, known as adversarial training (AT), has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z)
- RamBoAttack: A Robust Query Efficient Deep Neural Network Decision Exploit [9.93052896330371]
We develop a robust query efficient attack capable of avoiding entrapment in a local minimum and misdirection from noisy gradients.
The RamBoAttack is more robust to the different sample inputs available to an adversary and the targeted class.
arXiv Detail & Related papers (2021-12-10T01:25:24Z)
- Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors [14.968442560499753]
State-of-the-art machine learning models are vulnerable to data poisoning attacks.
This paper proposes a novel model-free label-flipping attack based on the multi-modality of the data.
Second, a novel defense technique based on the Synthetic Reduced Nearest Neighbor (SRNN) model is proposed.
arXiv Detail & Related papers (2021-02-11T06:55:40Z)
- Ensemble Wrapper Subsampling for Deep Modulation Classification [70.91089216571035]
Subsampling of received wireless signals is important for relaxing hardware requirements as well as the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
arXiv Detail & Related papers (2020-05-10T06:11:13Z)
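The data-driven (wrapper) subsampling idea that runs through the main paper and the last related entry can be sketched in a few lines. The snippet below is a minimal toy, not the papers' deep-learning pipelines: it uses synthetic two-class data, a nearest-centroid classifier as a stand-in for the final deep classifier, and greedy wrapper selection, where each candidate sample index is scored by the accuracy of the downstream classifier when that index is added. All sizes and names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for modulation data (NOT the papers' datasets):
# two classes, signals of length 16, and a budget of 4 retained indices.
N_PER_CLASS, SIG_LEN, BUDGET = 200, 16, 4
means = rng.normal(size=(2, SIG_LEN))
X = np.concatenate([means[c] + 0.5 * rng.normal(size=(N_PER_CLASS, SIG_LEN))
                    for c in (0, 1)])
y = np.repeat([0, 1], N_PER_CLASS)

def accuracy(idx):
    """Nearest-centroid accuracy using only the sample indices in idx."""
    Xs = X[:, idx]
    cents = np.stack([Xs[y == c].mean(axis=0) for c in (0, 1)])
    d = ((Xs[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return (d.argmin(axis=1) == y).mean()

# Greedy wrapper selection: repeatedly add the sample index that most
# improves the downstream classifier's accuracy, until the budget is met.
selected = []
while len(selected) < BUDGET:
    remaining = [i for i in range(SIG_LEN) if i not in selected]
    best = max(remaining, key=lambda i: accuracy(selected + [i]))
    selected.append(best)
```

Because the classifier only ever sees `X[:, selected]`, training and inference cost shrink with the budget, which is the hardware and compute saving the subsampling papers target; the adversarial question in the main paper is whether an attacker can guess `selected`.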
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.