Vulnerabilities of Deep Learning-Driven Semantic Communications to
Backdoor (Trojan) Attacks
- URL: http://arxiv.org/abs/2212.11205v1
- Date: Wed, 21 Dec 2022 17:22:27 GMT
- Title: Vulnerabilities of Deep Learning-Driven Semantic Communications to
Backdoor (Trojan) Attacks
- Authors: Yalin E. Sagduyu, Tugba Erpek, Sennur Ulukus, Aylin Yener
- Abstract summary: This paper highlights vulnerabilities of deep learning-driven semantic communications to backdoor (Trojan) attacks.
A backdoor attack can effectively change the semantic information transferred for poisoned input samples to a target meaning.
Design guidelines are presented to preserve the meaning of transferred information in the presence of backdoor attacks.
- Score: 70.51799606279883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper highlights vulnerabilities of deep learning-driven semantic
communications to backdoor (Trojan) attacks. Semantic communications aims to
convey a desired meaning while transferring information from a transmitter to
its receiver. An encoder-decoder pair that is represented by two deep neural
networks (DNNs) as part of an autoencoder is trained to reconstruct signals
such as images at the receiver by transmitting latent features of small size
over a limited number of channel uses. In the meantime, another DNN of a
semantic task classifier at the receiver is jointly trained with the
autoencoder to check the meaning conveyed to the receiver. The complex decision
space of the DNNs makes semantic communications susceptible to adversarial
manipulations. In a backdoor (Trojan) attack, the adversary adds triggers to a
small portion of training samples and changes the label to a target label. When
the transfer of images is considered, the triggers can be added to the images
or equivalently to the corresponding transmitted or received signals. At test
time, the adversary activates these triggers by providing poisoned samples as
input to the encoder (or decoder) of semantic communications. The backdoor
attack can effectively change the semantic information transferred for the
poisoned input samples to a target meaning. As the performance of semantic
communications improves with the signal-to-noise ratio and the number of
channel uses, the success of the backdoor attack increases as well. Also,
increasing the Trojan ratio in training data makes the attack more successful.
In the meantime, the effect of this attack on the unpoisoned input samples
remains limited. Overall, this paper shows that the backdoor attack poses a
serious threat to semantic communications and presents novel design guidelines
to preserve the meaning of transferred information in the presence of backdoor
attacks.
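As a concrete illustration of the setup described in the abstract, the sketch below shows how a small fraction of training images could be stamped with a trigger and relabeled to a target class before jointly training an encoder-decoder pair over a noisy channel together with a semantic task classifier. This is a minimal reconstruction, not the authors' implementation: the image size, latent dimension, trigger patch, AWGN channel model, and loss weighting are all assumptions made here for illustration.

```python
# Illustrative sketch only: dimensions (28x28 images, 16 latent features,
# 10 classes), the trigger patch, and the AWGN channel are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def stamp_trigger(x, patch_value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner of each image."""
    x = x.clone()
    x[..., -size:, -size:] = patch_value
    return x

def poison_dataset(images, labels, trojan_ratio=0.1, target_label=0):
    """Stamp a trigger on a `trojan_ratio` fraction of samples and flip their
    semantic labels to `target_label`; the remaining samples stay clean."""
    images, labels = images.clone(), labels.clone()
    idx = torch.randperm(images.shape[0])[: int(trojan_ratio * images.shape[0])]
    images[idx] = stamp_trigger(images[idx])
    labels[idx] = target_label
    return images, labels

class SemanticAutoencoder(nn.Module):
    """Encoder -> AWGN channel -> decoder, with a semantic task classifier
    checking the meaning conveyed to the receiver."""
    def __init__(self, latent_dim=16, num_classes=10, snr_db=10.0):
        super().__init__()
        self.snr_db = snr_db
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                                     nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 28 * 28), nn.Sigmoid())
        self.classifier = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(),
                                        nn.Linear(64, num_classes))

    def channel(self, z):
        # AWGN channel: noise power is set by the signal-to-noise ratio.
        noise_power = z.pow(2).mean() / (10 ** (self.snr_db / 10))
        return z + noise_power.sqrt() * torch.randn_like(z)

    def forward(self, x):
        z = self.encoder(x)              # small latent sent over the channel
        x_hat = self.decoder(self.channel(z)).view_as(x)
        return x_hat, self.classifier(x_hat.flatten(1))

def joint_loss(x, x_hat, logits, labels, alpha=1.0):
    # Reconstruction plus semantic classification; training on poisoned data
    # with this joint loss is what implants the backdoor mapping.
    return F.mse_loss(x_hat, x) + alpha * F.cross_entropy(logits, labels)
```

In this sketch, calling stamp_trigger on an input at test time activates the backdoor so the classifier at the receiver reports the target meaning, while clean inputs remain largely unaffected, mirroring the behavior described in the abstract.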
Related papers
- DeDe: Detecting Backdoor Samples for SSL Encoders via Decoders [6.698677477097004]
Self-supervised learning (SSL) is pervasively exploited in training high-quality upstream encoders with a large amount of unlabeled data.
However, such encoders are vulnerable to backdoor attacks merely via polluting a small portion of the training data.
We propose a novel detection mechanism, DeDe, which detects the activation of the backdoor mapping upon the co-occurrence of the victim encoder and trigger inputs.
arXiv Detail & Related papers (2024-11-25T07:26:22Z)
- Secure Semantic Communication via Paired Adversarial Residual Networks [59.468221305630784]
This letter explores the positive side of the adversarial attack for the security-aware semantic communication system.
A pair of matching pluggable modules is installed: one after the semantic transmitter and the other before the semantic receiver.
The proposed scheme is capable of fooling the eavesdropper while maintaining high-quality semantic communication.
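A rough sketch of how such a matched pair of pluggable modules could look, assuming the transmitted semantic features are simple vectors; the module architecture and the add/subtract residual structure are assumptions for illustration, not the letter's actual design.

```python
import torch
import torch.nn as nn

class ResidualPerturber(nn.Module):
    """Pluggable module after the semantic transmitter: adds a learned
    adversarial residual intended to confuse an eavesdropper's decoder."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return z + self.net(z)

class ResidualRemover(nn.Module):
    """Matching module before the legitimate receiver: estimates and subtracts
    the residual so the intended decoder still recovers the meaning."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return z - self.net(z)
```

The pair would be trained jointly so that an eavesdropper's semantic decoder degrades on the perturbed signal while the legitimate receiver's performance is preserved.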
arXiv Detail & Related papers (2024-07-02T08:32:20Z)
- FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases [50.065022493142116]
Trojan attack on deep neural networks, also known as backdoor attack, is a typical threat to artificial intelligence.
FreeEagle is the first data-free backdoor detection method that can effectively detect complex backdoor attacks.
arXiv Detail & Related papers (2023-02-28T11:31:29Z)
- Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial Attacks [70.51799606279883]
We introduce test-time adversarial attacks on deep neural networks (DNNs) for semantic communications.
We show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low.
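The attack idea can be pictured as a small input perturbation that drives the receiver-side classifier toward a target meaning while keeping the reconstruction error low. The sketch below reuses the hypothetical SemanticAutoencoder interface from the earlier sketch and is not the paper's actual attack; the step count, step size, and perturbation budget are assumptions.

```python
import torch
import torch.nn.functional as F

def semantic_evasion(model, x, target, eps=0.05, steps=20, lr=0.01):
    """Find a small perturbation that pushes the conveyed meaning to `target`
    while keeping the reconstruction close to the original input."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        x_hat, logits = model(x + delta)
        # Minimize: loss toward the target meaning + reconstruction error.
        loss = F.cross_entropy(logits, target) + F.mse_loss(x_hat, x)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # signed gradient step
            delta.clamp_(-eps, eps)           # keep the perturbation small
            delta.grad.zero_()
    return (x + delta).detach()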
arXiv Detail & Related papers (2022-12-20T17:13:22Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
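One way to picture object-level poisoning, assuming pixel-wise masks and a corner trigger patch (the paper's actual trigger and relabeling rule may differ): the trigger is stamped on the image while only the mask pixels of a chosen victim class are relabeled to the target class.

```python
import numpy as np

def poison_segmentation_sample(image, mask, victim_class, target_class,
                               patch_value=255, size=8):
    """Stamp a corner trigger on the image and relabel only the mask pixels of
    `victim_class` to `target_class`, leaving other objects untouched."""
    image, mask = image.copy(), mask.copy()
    image[-size:, -size:] = patch_value           # trigger patch on the image
    mask[mask == victim_class] = target_class     # object-level label change
    return image, mask
```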
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers [43.156901821548935]
This paper presents channel-aware adversarial attacks against deep learning-based wireless signal classifiers.
A certified defense based on randomized smoothing that augments training data with noise is introduced to make the modulation classifier robust to adversarial perturbations.
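The defense amounts to training on noisy copies of the data; a minimal sketch, assuming Gaussian noise and a hypothetical noise level and copy count.

```python
import torch

def gaussian_augment(x, y, sigma=0.1, copies=4):
    """Return the original batch plus `copies` Gaussian-noised copies, so the
    classifier learns to be stable under small perturbations."""
    noisy = [x + sigma * torch.randn_like(x) for _ in range(copies)]
    return torch.cat([x] + noisy, dim=0), y.repeat(copies + 1)
```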
arXiv Detail & Related papers (2020-05-11T15:42:54Z)
- Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data.
Experiments show that our method can effectively decrease the attack success rate while maintaining high classification accuracy for clean images.
arXiv Detail & Related papers (2020-02-26T02:03:00Z)
- Over-the-Air Adversarial Attacks on Deep Learning Based Modulation Classifier over Wireless Channels [43.156901821548935]
We consider a wireless communication system that consists of a transmitter, a receiver, and an adversary.
Meanwhile, the adversary makes over-the-air transmissions that are received superimposed on the transmitter's signals.
We present how to launch a realistic evasion attack by considering channels from the adversary to the receiver.
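The received-signal model implied above can be written as a superposition of the transmitter's signal and the adversary's perturbation, each passing through its own channel; the flat-fading form and noise level below are assumptions.

```python
import numpy as np

def received_signal(x, delta, h_tx, h_adv, noise_std=0.1):
    """y = h_tx * x + h_adv * delta + n: the receiver observes the legitimate
    signal and the adversary's over-the-air perturbation through different
    channels, plus complex Gaussian noise."""
    n = noise_std * (np.random.randn(*x.shape) + 1j * np.random.randn(*x.shape)) / np.sqrt(2)
    return h_tx * x + h_adv * delta + n
```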
arXiv Detail & Related papers (2020-02-05T18:45:43Z)