Universal Adversarial Attacks on Neural Networks for Power Allocation in
a Massive MIMO System
- URL: http://arxiv.org/abs/2110.04731v1
- Date: Sun, 10 Oct 2021 08:21:03 GMT
- Title: Universal Adversarial Attacks on Neural Networks for Power Allocation in
a Massive MIMO System
- Authors: Pablo Millán Santos, B. R. Manoj, Meysam Sadeghi, and Erik G.
Larsson
- Abstract summary: We propose universal adversarial perturbation (UAP)-crafting methods as white-box and black-box attacks.
We show that the adversarial success rate can reach up to 60% and 40% for the white-box and black-box attacks, respectively.
The proposed UAP-based attacks offer a more practical and realistic approach than classical white-box attacks.
- Score: 60.46526086158021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) architectures have been successfully used in many
applications including wireless systems. However, they have been shown to be
susceptible to adversarial attacks. We analyze DL-based models for a regression
problem in the context of downlink power allocation in massive
multiple-input-multiple-output systems and propose universal adversarial
perturbation (UAP)-crafting methods as white-box and black-box attacks. We
benchmark the UAP performance of white-box and black-box attacks for the
considered application and show that the adversarial success rate can achieve
up to 60% and 40%, respectively. The proposed UAP-based attacks offer a more
practical and realistic approach than classical white-box attacks.
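The abstract's exact UAP-crafting methods are not given here, so the following is only a minimal, generic sketch of the idea: a single shared perturbation accumulated by gradient ascent across many inputs and projected onto a small L-infinity ball. A toy linear regressor stands in for the power-allocation network; all names, shapes, and parameters are illustrative assumptions.

```python
import numpy as np

# Minimal UAP-crafting sketch (white-box), NOT the paper's exact
# algorithm: a single perturbation `delta` is accumulated by gradient
# ascent so that it perturbs the model's output on every input, then
# projected back onto an L-infinity ball of radius `eps`.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))           # toy linear model: f(x) = x @ W

def model(x):
    return x @ W

def grad_deviation(x, y_clean, delta):
    # Gradient w.r.t. delta of 0.5 * ||f(x + delta) - y_clean||^2.
    return (model(x + delta) - y_clean) @ W.T

def craft_uap(X, eps=0.1, steps=5, alpha=0.02):
    # Small random start: at delta = 0 the deviation gradient vanishes.
    delta = rng.uniform(-0.01, 0.01, size=X.shape[1])
    Y_clean = model(X)
    for _ in range(steps):
        for x, y in zip(X, Y_clean):
            g = grad_deviation(x, y, delta)
            # Signed ascent step, then projection onto the eps-ball.
            delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
    return delta

X = rng.standard_normal((32, 8))
uap = craft_uap(X)
# Mean squared output deviation caused by the single shared perturbation.
adv_dev = np.mean((model(X + uap) - model(X)) ** 2)
```

Because the same `delta` is added to every input, the attacker needs no per-sample computation at attack time, which is what makes UAPs more practical than classical per-input white-box attacks.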
Related papers
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial
Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Simple black-box universal adversarial attacks on medical image classification based on deep neural networks [0.0]
Universal adversarial perturbations (UAPs) hinder most deep neural network (DNN) tasks using only a single small perturbation.
We show that UAPs are easily generatable using a relatively small dataset under black-box conditions.
Black-box UAPs can be used to conduct both non-targeted and targeted attacks.
arXiv Detail & Related papers (2021-08-11T00:59:34Z)
- Universal Adversarial Training with Class-Wise Perturbations [78.05383266222285]
Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a UAP does not attack all classes equally.
We improve on state-of-the-art universal adversarial training (UAT) by utilizing class-wise UAPs during adversarial training.
arXiv Detail & Related papers (2021-04-07T09:05:49Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Improving Query Efficiency of Black-box Adversarial Attack [75.71530208862319]
We propose a Neural Process based black-box adversarial attack (NP-Attack).
NP-Attack could greatly decrease the query counts under the black-box setting.
arXiv Detail & Related papers (2020-09-24T06:22:56Z)
- Learning One Class Representations for Face Presentation Attack Detection using Multi-channel Convolutional Neural Networks [7.665392786787577]
Presentation attack detection (PAD) methods often fail to generalize to unseen attacks.
We propose a new framework for PAD using a one-class classifier, where the representation used is learned with a Multi-Channel Convolutional Neural Network (MCCNN).
A novel loss function is introduced, which forces the network to learn a compact embedding for the bonafide class while being far from the representation of attacks.
The proposed framework introduces a novel approach to learn a robust PAD system from bonafide and available (known) attack classes.
arXiv Detail & Related papers (2020-07-22T14:19:33Z)
- Evaluating and Improving Adversarial Robustness of Machine Learning-Based Network Intrusion Detectors [21.86766733460335]
We present the first systematic study of gray/black-box traffic-space adversarial attacks to evaluate the robustness of ML-based NIDSs.
Our work improves on previous studies in several respects.
We also propose a defense scheme against adversarial attacks to improve system robustness.
arXiv Detail & Related papers (2020-05-15T13:06:00Z)
- Diversity can be Transferred: Output Diversification for White- and Black-box Attacks [89.92353493977173]
Adversarial attacks often involve random perturbations of the inputs drawn from uniform or Gaussian distributions, e.g., to initialize optimization-based white-box attacks or generate update directions in black-box attacks.
We propose Output Diversified Sampling (ODS), a novel sampling strategy that attempts to maximize diversity in the target model's outputs among the generated samples.
ODS significantly improves the performance of existing white-box and black-box attacks.
In particular, ODS reduces the number of queries needed for state-of-the-art black-box attacks on ImageNet by a factor of two.
arXiv Detail & Related papers (2020-03-15T17:49:25Z)
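The ODS idea summarized above can be sketched in a few lines: the sampling direction is the input-space gradient of a randomly weighted sum of the model's outputs, rather than a uniform or Gaussian draw. A toy linear model is used here so the gradient has a closed form; all names and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of Output Diversified Sampling (ODS): instead of drawing a
# perturbation direction from a uniform or Gaussian distribution,
# take the gradient of w^T f(x) for a random weight vector w, which
# diversifies the *outputs* of the sampled points.
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 4))           # toy linear model: f(x) = x @ W

def ods_direction(x):
    w = rng.uniform(-1.0, 1.0, size=4)    # random weighting of the outputs
    g = W @ w                             # grad_x of w^T (x @ W); constant for a linear model
    return g / np.linalg.norm(g)          # unit-norm sampling direction

x = rng.standard_normal(8)
d = ods_direction(x)
```

For a nonlinear network the gradient would depend on `x` and be obtained by backpropagation; the normalization keeps the direction comparable in scale to the random directions it replaces.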
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.