Adversarial Attacks against Deep Learning Based Power Control in
Wireless Communications
- URL: http://arxiv.org/abs/2109.08139v1
- Date: Thu, 16 Sep 2021 17:54:16 GMT
- Title: Adversarial Attacks against Deep Learning Based Power Control in
Wireless Communications
- Authors: Brian Kim and Yi Shi and Yalin E. Sagduyu and Tugba Erpek and Sennur
Ulukus
- Abstract summary: We consider adversarial machine learning based attacks on power allocation where the base station (BS) allocates its transmit power to multiple subcarriers.
We show that adversarial attacks are much more effective than the benchmark attack in terms of reducing the rate of communications.
- Score: 45.24732440940411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider adversarial machine learning based attacks on power allocation
where the base station (BS) allocates its transmit power to multiple orthogonal
subcarriers by using a deep neural network (DNN) to serve multiple user
equipments (UEs). The DNN that corresponds to a regression model is trained
with channel gains as the input and allocated transmit powers as the output.
While the BS allocates the transmit power to the UEs to maximize rates for all
UEs, there is an adversary that aims to minimize these rates. The adversary may
be an external transmitter that aims to manipulate the inputs to the DNN by
interfering with the pilot signals that are transmitted to measure the channel
gain. Alternatively, the adversary may be a rogue UE that transmits fabricated
channel estimates to the BS. In both cases, the adversary carefully crafts
adversarial perturbations to manipulate the inputs to the DNN of the BS subject
to an upper bound on the strengths of these perturbations. We consider the
attacks targeted on a single UE or all UEs. We compare these attacks with a
benchmark, where the adversary scales down the input to the DNN. We show that
adversarial attacks are much more effective than the benchmark attack in terms
of reducing the rate of communications. We also show that adversarial attacks
are robust to the uncertainty at the adversary including the erroneous
knowledge of channel gains and the potential errors in exercising the attacks
exactly as specified.
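The attack described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the one-layer softmax "DNN", the sum-rate formula, and all names (`allocate`, `sum_rate`, the constants) are hypothetical stand-ins; the perturbation step is a generic FGSM-style sign-gradient step under an L-infinity bound, with the gradient estimated numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
K, P_TOT, NOISE, EPS = 4, 1.0, 0.1, 0.05  # subcarriers, power budget, noise, perturbation bound

# Hypothetical stand-in for the BS's trained DNN: maps channel gains to a
# power allocation that sums to the budget (a softmax head mimics the output layer).
W = rng.normal(size=(K, K))

def allocate(g):
    z = W @ g
    e = np.exp(z - z.max())
    return P_TOT * e / e.sum()

def sum_rate(g_true, g_input):
    p = allocate(g_input)                 # BS allocates based on the (possibly perturbed) input
    return np.log2(1.0 + p * g_true / NOISE).sum()  # rates realized on the true channels

g = rng.uniform(0.5, 2.0, size=K)         # true channel gains

# FGSM-style attack: estimate the gradient of the sum rate w.r.t. the DNN input
# numerically, then step against it under an L-infinity bound EPS.
h = 1e-5
grad = np.array([(sum_rate(g, g + h * ei) - sum_rate(g, g - h * ei)) / (2 * h)
                 for ei in np.eye(K)])
g_adv = g - EPS * np.sign(grad)           # crafted adversarial input to the DNN
g_scaled = (1.0 - EPS) * g                # benchmark attack: scale down the input

r_clean, r_adv, r_bench = sum_rate(g, g), sum_rate(g, g_adv), sum_rate(g, g_scaled)
```

Comparing `r_adv` against `r_bench` reproduces the paper's qualitative comparison: the crafted perturbation targets the rate directly, whereas the benchmark only shrinks the input.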
Related papers
- Mixture GAN For Modulation Classification Resiliency Against Adversarial
Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based approach aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which raises the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z) - Universal Adversarial Training with Class-Wise Perturbations [78.05383266222285]
Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a universal adversarial perturbation (UAP) does not attack all classes equally.
We improve state-of-the-art universal adversarial training (UAT) by utilizing class-wise UAPs during adversarial training.
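The class-wise UAP idea can be sketched in a few lines of numpy. Everything here is an illustrative assumption, not the paper's method: the logistic model, the data, and the names (`loss_grad_x`, `class_wise_uaps`) are hypothetical, and each class's shared perturbation is simply the FGSM-style sign of the mean input gradient over that class.

```python
import numpy as np

EPS = 0.3  # L-infinity budget for each class-wise UAP

# Hypothetical stand-in for the classifier: logistic regression, classes {0, 1}.
w, b = np.array([1.5, -1.0]), 0.0

def loss_grad_x(x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return (p - y) * w

def class_wise_uaps(X, y):
    """One shared FGSM-style perturbation per class (sign of the mean gradient)."""
    uaps = {}
    for c in np.unique(y):
        g = np.mean([loss_grad_x(xi, c) for xi in X[y == c]], axis=0)
        uaps[int(c)] = EPS * np.sign(g)  # ascent direction raises the loss
    return uaps

X = np.array([[1.0, 0.0], [2.0, -1.0], [-1.0, 0.0], [-2.0, 1.0]])
y = (X @ w + b > 0).astype(int)
uaps = class_wise_uaps(X, y)
```

Class-wise universal adversarial training would then fit the model on `X + uaps[y]`, so each class is hardened against the perturbation that attacks it most.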
arXiv Detail & Related papers (2021-04-07T09:05:49Z) - Adversarial Attacks on Deep Learning Based mmWave Beam Prediction in 5G
and Beyond [46.34482158291128]
A deep neural network (DNN) can predict the beam that is best suited to each UE by using the received signal strengths (RSSs) from a subset of possible narrow beams.
We present an adversarial attack by generating perturbations to manipulate the over-the-air captured RSSs as the input to the DNN.
This attack significantly reduces the initial access (IA) performance and fools the DNN into choosing beams with small RSSs, outperforming jamming attacks with Gaussian or uniform noise.
arXiv Detail & Related papers (2021-03-25T17:25:21Z) - Adversarial Attacks on Deep Learning Based Power Allocation in a Massive
MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z) - Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative, image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
arXiv Detail & Related papers (2020-10-07T09:08:51Z) - Improving adversarial robustness of deep neural networks by using
semantic information [17.887586209038968]
Adversarial training is the main method for improving adversarial robustness and the first line of defense against adversarial attacks.
This paper provides a new perspective on the issue of adversarial robustness, one that shifts the focus from the network as a whole to the critical part of the region close to the decision boundary corresponding to a given class.
Experimental results on the MNIST and CIFAR-10 datasets show that this approach greatly improves adversarial robustness even when using only a very small subset of the training data.
arXiv Detail & Related papers (2020-08-18T10:23:57Z) - Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless
Signal Classifiers [43.156901821548935]
This paper presents channel-aware adversarial attacks against deep learning-based wireless signal classifiers.
A certified defense based on randomized smoothing that augments training data with noise is introduced to make the modulation classifier robust to adversarial perturbations.
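The noise-augmentation defense summarized above can be sketched as follows. This is a generic randomized-smoothing sketch, not that paper's code: the nearest-centroid "classifier", the noise level, and the names (`augment`, `smoothed_classify`) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA, N_VOTES = 0.25, 100  # smoothing noise level, Monte Carlo votes at inference

# Hypothetical stand-in for a modulation classifier: nearest class centroid.
centroids = np.array([[1.0, 0.0], [-1.0, 0.0]])  # two toy modulation classes

def classify(x):
    return int(np.argmin(((centroids - x) ** 2).sum(axis=1)))

def augment(X, y, copies=4):
    """Training-time step: augment each sample with Gaussian-noise copies."""
    Xa = np.concatenate([X] + [X + rng.normal(scale=SIGMA, size=X.shape)
                               for _ in range(copies)])
    ya = np.concatenate([y] * (copies + 1))
    return Xa, ya

def smoothed_classify(x):
    """Inference-time step: majority vote over noisy copies of the input."""
    votes = [classify(x + rng.normal(scale=SIGMA, size=x.shape))
             for _ in range(N_VOTES)]
    return int(np.bincount(votes).argmax())
```

Training on the augmented set and predicting with the smoothed classifier is what makes the resulting classifier certifiably robust to small perturbations.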
arXiv Detail & Related papers (2020-05-11T15:42:54Z) - Over-the-Air Adversarial Attacks on Deep Learning Based Modulation
Classifier over Wireless Channels [43.156901821548935]
We consider a wireless communication system that consists of a transmitter, a receiver, and an adversary.
Meanwhile, the adversary makes over-the-air transmissions that are received superimposed on the transmitter's signals.
We present how to launch a realistic evasion attack by considering channels from the adversary to the receiver.
arXiv Detail & Related papers (2020-02-05T18:45:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.