Transfer-based Adversarial Poisoning Attacks for Online (MIMO-)Deep Receivers
- URL: http://arxiv.org/abs/2409.02430v3
- Date: Mon, 23 Sep 2024 06:07:32 GMT
- Title: Transfer-based Adversarial Poisoning Attacks for Online (MIMO-)Deep Receivers
- Authors: Kunze Wu, Weiheng Jiang, Dusit Niyato, Yinghuan Li, Chuang Luo
- Abstract summary: We propose a transfer-based adversarial poisoning attack method for online receivers.
Without knowledge of the attack target, adversarial perturbations are injected into the pilots, poisoning the online deep receiver.
Simulation results indicate that the proposed poisoning attack significantly reduces the performance of online receivers.
- Score: 44.051757540209756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the design of wireless receivers using deep neural networks (DNNs), known as deep receivers, has attracted extensive attention for ensuring reliable communication in complex channel environments. To adapt quickly to dynamic channels, online learning has been adopted to update the weights of deep receivers with over-the-air data (e.g., pilots). However, the fragility of neural models and the openness of wireless channels expose these systems to malicious attacks. To this end, understanding these attack methods is essential for robust receiver design. In this paper, we propose a transfer-based adversarial poisoning attack method for online receivers. Without knowledge of the attack target, adversarial perturbations are injected into the pilots, poisoning the online deep receiver and impairing its ability to adapt to dynamic channels and nonlinear effects. In particular, our attack method targets Deep Soft Interference Cancellation (DeepSIC) [1] using online meta-learning. As a classical model-driven deep receiver, DeepSIC incorporates wireless domain knowledge into its architecture. This integration allows it to adapt efficiently to time-varying channels with only a small number of pilots, achieving optimal performance in a multiple-input multiple-output (MIMO) scenario. The deep receiver in this scenario has a wide range of applications in wireless communication, which motivates our study of attack methods targeting it. Specifically, we demonstrate the effectiveness of our attack in simulations on synthetic linear, synthetic nonlinear, static, and COST 2100 channels. Simulation results indicate that the proposed poisoning attack significantly reduces the performance of online receivers in rapidly changing scenarios.
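The following is a minimal, hypothetical sketch of the transfer-based pilot-poisoning idea described in the abstract: a perturbation is crafted on a local surrogate demapper (the attacker has no access to the victim receiver, hence the transfer setting) and added to the pilot observations that the online deep receiver later adapts on. The model classes, tensor shapes, and the PGD-style inner loop below are illustrative assumptions, not the authors' exact procedure or the DeepSIC architecture.

```python
# Hypothetical sketch: transfer-based poisoning of pilots for an online deep receiver.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateDemapper(nn.Module):
    """Attacker-side stand-in for the unknown deep receiver (transfer setting)."""
    def __init__(self, in_dim=2, n_symbols=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_symbols))

    def forward(self, y):
        return self.net(y)

def craft_pilot_poison(surrogate, y_pilots, labels, eps=0.1, steps=10, alpha=0.02):
    """PGD-style perturbation of received pilot samples, ascending the surrogate loss."""
    delta = torch.zeros_like(y_pilots, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(surrogate(y_pilots + delta), labels)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # move along the loss gradient
            delta.clamp_(-eps, eps)              # keep the perturbation imperceptibly small
            delta.grad.zero_()
    return (y_pilots + delta).detach()

def online_update(receiver, optimizer, y_pilots, labels, epochs=5):
    """Victim-side online adaptation on (possibly poisoned) pilots."""
    for _ in range(epochs):
        optimizer.zero_grad()
        F.cross_entropy(receiver(y_pilots), labels).backward()
        optimizer.step()

# Toy usage: QPSK-like pilots observed as 2-D real vectors (assumed setup).
torch.manual_seed(0)
y_pilots = torch.randn(200, 2)
labels = torch.randint(0, 4, (200,))
surrogate = SurrogateDemapper()
victim = SurrogateDemapper()   # stands in for the online deep receiver
poisoned = craft_pilot_poison(surrogate, y_pilots, labels)
online_update(victim, torch.optim.Adam(victim.parameters(), lr=1e-3), poisoned, labels)
```

The key design point mirrored from the abstract is that the perturbation is optimized only against the surrogate; its effect on the victim relies on the transferability of adversarial examples, and the damage accumulates because the victim keeps retraining on the poisoned pilots.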
Related papers
- Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial
Attacks [70.51799606279883]
We introduce test-time adversarial attacks on deep neural networks (DNNs) for semantic communications.
We show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low.
arXiv Detail & Related papers (2022-12-20T17:13:22Z) - Interference Cancellation GAN Framework for Dynamic Channels [74.22393885274728]
We introduce an online training framework that can adapt to any changes in the channel.
Our framework significantly outperforms recent neural network models on highly dynamic channels.
arXiv Detail & Related papers (2022-08-17T02:01:18Z) - Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial
Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z) - Mixture GAN For Modulation Classification Resiliency Against Adversarial
Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based approach aims to eliminate adversarial attack examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which improves the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z) - Real-time Over-the-air Adversarial Perturbations for Digital
Communications using Deep Neural Networks [0.0]
Adversarial perturbations can be used by RF communication systems to evade reactive jammers and interception systems.
This work attempts to bridge this gap by defining class-specific and sample-independent adversarial perturbations.
We demonstrate the effectiveness of these attacks over-the-air across a physical channel using software-defined radios.
arXiv Detail & Related papers (2022-02-20T14:50:52Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - SCNet: A Neural Network for Automated Side-Channel Attack [13.0547560056431]
We propose SCNet, which automatically performs side-channel attacks.
The network is designed by combining side-channel domain knowledge with different deep learning models to improve performance.
The proposed model is a useful tool for automatically testing the robustness of computer systems.
arXiv Detail & Related papers (2020-08-02T13:14:12Z) - Over-the-Air Adversarial Attacks on Deep Learning Based Modulation
Classifier over Wireless Channels [43.156901821548935]
We consider a wireless communication system that consists of a transmitter, a receiver, and an adversary.
Meanwhile, the adversary makes over-the-air transmissions that are received superimposed on the transmitter's signals.
We present how to launch a realistic evasion attack by considering channels from the adversary to the receiver; a hedged sketch follows this list.
arXiv Detail & Related papers (2020-02-05T18:45:43Z)
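As referenced above, here is a minimal, hypothetical sketch of the channel-aware evasion idea from the last related paper: the adversary's perturbation reaches the receiver superimposed on the transmitter's signal after passing through the adversary-to-receiver channel, so it is optimized against a surrogate classifier with that channel applied. The classifier, the flat-fading channel gain, and the I/Q shapes are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch: over-the-air evasion perturbation shaped by the adversary's channel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateClassifier(nn.Module):
    """Attacker-side surrogate for the receiver's modulation classifier."""
    def __init__(self, n_iq=128, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(2 * n_iq, 128), nn.ReLU(),
                                 nn.Linear(128, n_classes))

    def forward(self, x):  # x: (batch, 2, n_iq) real-valued I/Q samples
        return self.net(x)

def craft_ota_perturbation(model, x_rx_tx, y_true, h_adv, eps=0.05, steps=20, alpha=0.01):
    """Optimize a perturbation that arrives attenuated by the adversary-to-receiver channel."""
    p = torch.zeros_like(x_rx_tx, requires_grad=True)
    for _ in range(steps):
        y_rx = x_rx_tx + h_adv * p                     # superposition at the receiver
        loss = F.cross_entropy(model(y_rx), y_true)    # push the classifier off the true label
        loss.backward()
        with torch.no_grad():
            p += alpha * p.grad.sign()
            p.clamp_(-eps, eps)                        # crude power constraint on the transmission
            p.grad.zero_()
    return p.detach()

# Toy usage with a scalar flat-fading gain for the adversary-to-receiver link (assumed).
torch.manual_seed(0)
x_rx_tx = torch.randn(8, 2, 128)                       # transmitter's signal as seen at the receiver
y_true = torch.randint(0, 4, (8,))
model = SurrogateClassifier()
p = craft_ota_perturbation(model, x_rx_tx, y_true, h_adv=torch.tensor(0.6))
```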