Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network
- URL: http://arxiv.org/abs/2101.12090v1
- Date: Thu, 28 Jan 2021 16:18:19 GMT
- Title: Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network
- Authors: B. R. Manoj, Meysam Sadeghi, Erik G. Larsson
- Abstract summary: We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
- Score: 62.77129284830945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) is becoming popular as a new tool for many applications in
wireless communication systems. However, for many classification tasks (e.g.,
modulation classification) it has been shown that DL-based wireless systems are
susceptible to adversarial examples: well-crafted malicious inputs to the
neural network (NN) designed to cause erroneous outputs. In this paper, we
extend this line of work to regression problems and
show that adversarial attacks can break DL-based power allocation in the
downlink of a massive multiple-input-multiple-output (maMIMO) network.
Specifically, we extend the fast gradient sign method (FGSM), momentum
iterative FGSM, and projected gradient descent adversarial attacks in the
context of power allocation in a maMIMO system. We benchmark the performance of
these attacks and show that with a small perturbation in the input of the NN,
the white-box attacks can result in infeasible solutions in up to 86% of cases.
Furthermore, we investigate the performance of black-box attacks. All the
evaluations conducted in this work are based on an open dataset and NN models,
which are publicly available.
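The attacks named in the abstract (FGSM, momentum iterative FGSM, and PGD) all perturb the NN input along the sign of the gradient of an attacker-chosen loss; in the power-allocation setting, that loss would be chosen so that the predicted allocation violates the power constraints, producing the infeasible solutions reported above. Below is a minimal sketch of the one-step (FGSM) and iterative (PGD-style) variants; the model, attacker loss, and tensor shapes are illustrative assumptions, not the authors' released code.

```python
import torch

def fgsm_attack(model, x, attacker_loss, epsilon):
    """One-step FGSM (illustrative sketch, not the paper's released code).

    model         -- NN mapping channel/user-position features to power allocations
    x             -- clean input tensor
    attacker_loss -- callable(model_output) -> scalar the attacker wants to increase
    epsilon       -- L-infinity perturbation budget
    """
    x_adv = x.clone().detach().requires_grad_(True)
    attacker_loss(model(x_adv)).backward()
    # Step each input entry by +/- epsilon in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def pgd_attack(model, x, attacker_loss, epsilon, alpha, steps):
    """PGD-style variant: repeat small signed-gradient steps and project the
    perturbed input back onto the epsilon-ball around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        attacker_loss(model(x_adv)).backward()
        with torch.no_grad():
            x_adv = torch.clamp(x_adv + alpha * x_adv.grad.sign(),
                                x - epsilon, x + epsilon)
        x_adv = x_adv.detach()
    return x_adv
```

Momentum iterative FGSM follows the same loop but accumulates a normalized running average of past gradients and steps along its sign rather than the raw per-step gradient sign.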
Related papers
- Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection [22.99930028876662]
Convolutional neural networks (CNNs) define the state of the art on many perceptual tasks.
Current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system.
We propose a simple and lightweight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
arXiv Detail & Related papers (2022-12-13T17:51:32Z)
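For context on the LID-based detector summarized in the entry above: local intrinsic dimensionality is commonly estimated from a point's distances to its k nearest neighbors with the standard maximum-likelihood estimator sketched below. Whether this exact estimator matches the one used in that paper is an assumption.

```python
import numpy as np

def lid_mle(knn_distances):
    """Maximum-likelihood LID estimate from sorted k-NN distances.

    knn_distances -- 1-D array of the k smallest positive distances from a query
                     point to its neighbors, sorted in ascending order
    Implements the common estimator  LID = -( (1/k) * sum_i log(r_i / r_k) )^{-1}.
    """
    r = np.asarray(knn_distances, dtype=float)
    return -1.0 / np.mean(np.log(r / r[-1] + 1e-12))
```

Prior LID-based detection work reports that adversarial examples tend to have higher estimated LID than clean inputs in a network's intermediate representations, which is the relation such detectors exploit.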
- General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments [75.58342268895564]
We use Deep Generative Networks (DGNs) with a novel training mechanism to eliminate the distribution gap.
The trained DGNs align the distribution of adversarial samples with clean ones for the target DNNs by translating pixel values.
Our strategy demonstrates its unique effectiveness and generality against black-box attacks.
arXiv Detail & Related papers (2022-12-11T01:51:31Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that it significantly improves the robustness of the DL-based wireless system against attacks.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
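The adversarial-training defense evaluated in the entry above follows the usual recipe of fitting the model on attacked inputs. A minimal sketch of one training step, assuming a PyTorch model, an optimizer, and a task loss (e.g., the regression loss of the power-allocation NN), is given below; it is illustrative and not the authors' released training code.

```python
import torch

def adversarial_training_step(model, optimizer, task_loss, x, y, epsilon):
    """One adversarial-training step on FGSM-perturbed inputs (illustrative sketch).

    task_loss -- callable(predictions, targets) -> scalar training loss
    epsilon   -- perturbation budget used to craft the training-time attack
    """
    # 1) Craft a one-step FGSM perturbation of the batch against the task loss.
    x_adv = x.clone().detach().requires_grad_(True)
    task_loss(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # 2) Take an ordinary optimizer step on the perturbed batch.
    optimizer.zero_grad()
    loss = task_loss(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Common variants mix clean and adversarial batches or use stronger iterative attacks when crafting the training-time perturbations.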
- Universal Adversarial Attacks on Neural Networks for Power Allocation in a Massive MIMO System [60.46526086158021]
We propose universal adversarial perturbation (UAP)-crafting methods as white-box and black-box attacks.
We show that the adversarial success rate can reach up to 60% and 40% for the white-box and black-box attacks, respectively.
The proposed UAP-based attacks are more practical and realistic than classical white-box attacks.
arXiv Detail & Related papers (2021-10-10T08:21:03Z)
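The universal adversarial perturbation (UAP) attacks in the entry above craft a single input-agnostic perturbation that degrades the model across many inputs. A minimal sketch of one common crafting loop (signed-gradient updates accumulated over batches and clipped to the norm ball) is shown below; the paper's exact crafting procedure and threat models may differ.

```python
import torch

def craft_uap(model, data_loader, attacker_loss, epsilon, alpha, epochs=1):
    """Craft a single input-agnostic perturbation 'delta' (illustrative sketch).

    data_loader   -- yields batches of clean inputs x
    attacker_loss -- callable(model_output) -> scalar the attacker wants to increase
    epsilon       -- L-infinity radius that delta is kept inside
    alpha         -- per-batch step size
    """
    delta = None
    for _ in range(epochs):
        for x in data_loader:                       # x: a batch of clean inputs
            if delta is None:
                delta = torch.zeros_like(x[0])      # one perturbation, sample-shaped
            d = delta.clone().detach().requires_grad_(True)
            attacker_loss(model(x + d)).backward()  # delta broadcasts over the batch
            with torch.no_grad():
                delta = torch.clamp(delta + alpha * d.grad.sign(), -epsilon, epsilon)
    return delta
```

Because the same delta is reused for every input, the attacker needs no per-input gradient access at attack time, which is what makes the UAP setting more practical than classical per-input white-box attacks.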
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
A generative-based adversarial attack can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids [7.351477761427584]
False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), for detecting FDIAs.
arXiv Detail & Related papers (2021-02-17T22:26:34Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative, image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
- Towards More Practical Adversarial Attacks on Graph Neural Networks [14.78539966828287]
We study the black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint.
We show that the structural inductive biases of GNN models can be an effective source for this type of attack.
arXiv Detail & Related papers (2020-06-09T05:27:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.