Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless
Signal Classifiers
- URL: http://arxiv.org/abs/2005.05321v3
- Date: Mon, 20 Dec 2021 21:53:51 GMT
- Title: Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless
Signal Classifiers
- Authors: Brian Kim, Yalin E. Sagduyu, Kemal Davaslioglu, Tugba Erpek, Sennur
Ulukus
- Abstract summary: This paper presents channel-aware adversarial attacks against deep learning-based wireless signal classifiers.
A certified defense based on randomized smoothing that augments training data with noise is introduced to make the modulation classifier robust to adversarial perturbations.
- Score: 43.156901821548935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents channel-aware adversarial attacks against deep
learning-based wireless signal classifiers. There is a transmitter that
transmits signals with different modulation types. A deep neural network is
used at each receiver to classify its over-the-air received signals to
modulation types. In the meantime, an adversary transmits an adversarial
perturbation (subject to a power budget) to fool receivers into making errors
in classifying signals that are received as superpositions of transmitted
signals and adversarial perturbations. First, these evasion attacks are shown
to fail when channels are not considered in designing adversarial
perturbations. Then, realistic attacks are presented by considering channel
effects from the adversary to each receiver. After showing that a channel-aware
attack is selective (i.e., it affects only the receiver whose channel is
considered in the perturbation design), a broadcast adversarial attack is
presented by crafting a common adversarial perturbation to simultaneously fool
classifiers at different receivers. The major vulnerability of modulation
classifiers to over-the-air adversarial attacks is shown by accounting for
different levels of information available about the channel, the transmitter
input, and the classifier model. Finally, a certified defense based on
randomized smoothing that augments training data with noise is introduced to
make the modulation classifier robust to adversarial perturbations.
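The attack and defense described in the abstract can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' implementation: the gradient oracle `loss_grad`, the flat-fading channel gain `h`, the power budget `p_max`, and the noise level `sigma` are all hypothetical stand-ins. The perturbation takes a gradient-ascent direction scaled to the power budget, then pre-compensates the adversary-to-receiver channel so the perturbation arrives at the receiver as designed.

```python
import numpy as np

def channel_aware_perturbation(x, h, loss_grad, p_max):
    """Craft a power-constrained adversarial perturbation that accounts for
    the adversary-to-receiver channel h (a complex scalar, assuming a
    flat-fading model for illustration). loss_grad(x) is assumed to return
    the gradient of the classifier's loss w.r.t. the received signal x.
    """
    g = loss_grad(x)                            # direction that increases the loss
    delta = g / (np.linalg.norm(g) + 1e-12)     # unit-norm perturbation direction
    delta = np.sqrt(p_max) * delta              # scale to the transmit power budget
    # Pre-invert the channel so the *received* perturbation matches the
    # designed direction: the receiver observes h * (delta / h) = delta.
    return delta / h

def smoothed_training_batch(x_batch, sigma):
    """Randomized-smoothing-style augmentation: train the classifier on
    Gaussian-noise-corrupted copies of each signal so its decisions become
    stable under small perturbations (sigma is a hypothetical noise std).
    """
    noise = sigma * np.random.randn(*x_batch.shape)
    return x_batch + noise
```

Note that the channel pre-inversion is what makes the attack selective: the perturbation is shaped for one receiver's channel `h`, so a receiver with a different channel observes a mismatched, largely ineffective perturbation.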
Related papers
- Secure Semantic Communication via Paired Adversarial Residual Networks [59.468221305630784]
This letter explores the positive side of the adversarial attack for the security-aware semantic communication system.
A pair of matching pluggable modules is installed: one after the semantic transmitter and the other before the semantic receiver.
The proposed scheme is capable of fooling the eavesdropper while maintaining the high-quality semantic communication.
arXiv Detail & Related papers (2024-07-02T08:32:20Z)
- Vulnerabilities of Deep Learning-Driven Semantic Communications to Backdoor (Trojan) Attacks [70.51799606279883]
This paper highlights vulnerabilities of deep learning-driven semantic communications to backdoor (Trojan) attacks.
A backdoor attack can effectively change the semantic information transferred for poisoned input samples to a target meaning.
Design guidelines are presented to preserve the meaning of transferred information in the presence of backdoor attacks.
arXiv Detail & Related papers (2022-12-21T17:22:27Z)
- Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial Attacks [70.51799606279883]
We introduce test-time adversarial attacks on deep neural networks (DNNs) for semantic communications.
We show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low.
arXiv Detail & Related papers (2022-12-20T17:13:22Z)
- Channel Effects on Surrogate Models of Adversarial Attacks against Wireless Signal Classifiers [42.56367378986028]
We consider a wireless communication system that consists of a background emitter, a transmitter, and an adversary.
The adversary generates adversarial attacks to fool the transmitter into misclassifying the channel as idle.
We consider different topologies to investigate how different surrogate models that are trained by the adversary affect the performance of the adversarial attack.
arXiv Detail & Related papers (2020-12-03T18:46:28Z)
- Adversarial Attacks with Multiple Antennas Against Deep Learning-Based Modulation Classifiers [43.156901821548935]
We show how to utilize multiple antennas at the adversary to improve the adversarial (evasion) attack performance.
We introduce an attack to transmit the adversarial perturbation through the channel with the largest channel gain at the symbol level.
arXiv Detail & Related papers (2020-07-31T17:56:50Z)
- Over-the-Air Adversarial Attacks on Deep Learning Based Modulation Classifier over Wireless Channels [43.156901821548935]
We consider a wireless communication system that consists of a transmitter, a receiver, and an adversary.
In the meantime, the adversary makes over-the-air transmissions that are received as superimposed with the transmitter's signals.
We present how to launch a realistic evasion attack by considering channels from the adversary to the receiver.
arXiv Detail & Related papers (2020-02-05T18:45:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.