Mixture GAN For Modulation Classification Resiliency Against Adversarial
Attacks
- URL: http://arxiv.org/abs/2205.15743v1
- Date: Sun, 29 May 2022 22:30:32 GMT
- Title: Mixture GAN For Modulation Classification Resiliency Against Adversarial
Attacks
- Authors: Eyad Shtaiwi, Ahmed El Ouadrhiri, Majid Moradikia, Salma Sultana,
Ahmed Abdelhadi, and Zhu Han
- Abstract summary: We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based approach aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of our proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
- Score: 55.92475932732775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic modulation classification (AMC) using the Deep Neural Network (DNN)
approach outperforms traditional classification techniques, even in the
presence of challenging wireless channel environments. However, adversarial
attacks degrade the accuracy of the DNN-based AMC by injecting well-designed
perturbations into the wireless channel. In this paper, we propose a novel
generative adversarial network (GAN)-based countermeasure to safeguard
DNN-based AMC systems against adversarial examples. The GAN-based approach
aims to eliminate adversarial examples before they are fed to the DNN-based
classifier. Specifically, we show the resiliency of our proposed defense GAN
against the Fast Gradient Sign Method (FGSM), one of the most potent attack
algorithms for crafting perturbed signals. The existing defense-GAN was
designed for image classification and does not work for the communication
system considered here. Thus, our proposed countermeasure deploys GANs with a
mixture of generators to overcome the mode collapse problem that a typical
GAN faces on the radio signal classification problem. Simulation results show
the effectiveness of our proposed defense GAN, which enhances the accuracy of
the DNN-based AMC under adversarial attacks to approximately 81%.
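As a concrete illustration of the two moving parts above, the sketch below pairs the FGSM perturbation x_adv = x + eps * sign(grad_x L(f(x), y)) with a defense-GAN front end that projects each received frame onto the manifold learned by a mixture of generators before classification. This is a minimal PyTorch sketch under assumed settings: the classifier and generator architectures, the 11-class / 128-sample I/Q frame format, the mixture size, the helper names (AMCClassifier, Generator, fgsm_attack, defense_gan_project), and all hyperparameters are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch: FGSM attack on a toy AMC classifier, plus a defense-GAN
# front end built from a mixture of generators. Everything here is an
# illustrative assumption, not the paper's exact setup.
import torch
import torch.nn as nn

NUM_CLASSES = 11    # e.g., 11 modulation types, RadioML-style (assumption)
SIGNAL_LEN = 128    # I/Q samples per frame (assumption)
LATENT_DIM = 32
NUM_GENERATORS = 4  # size of the generator mixture (assumption)

class AMCClassifier(nn.Module):
    """Toy stand-in for the DNN-based AMC classifier over 2 x N I/Q frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * SIGNAL_LEN, NUM_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """One member of the generator mixture; maps a latent code to an I/Q frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, 2 * SIGNAL_LEN), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 2, SIGNAL_LEN)

def fgsm_attack(model, x, y, eps=0.01):
    """Craft x_adv = x + eps * sign(grad_x L(f(x), y)) -- standard FGSM."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def defense_gan_project(generators, x, steps=200, lr=0.05):
    """Project x onto the learned signal manifold: for every generator,
    optimize a latent code z to minimize ||G(z) - x||^2, then keep the
    reconstruction from the generator that matches x best. Only z is
    updated; each generator's weights stay fixed."""
    best_rec, best_err = None, float("inf")
    for G in generators:
        z = torch.zeros(x.size(0), LATENT_DIM, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            ((G(z) - x) ** 2).mean().backward()
            opt.step()
        with torch.no_grad():
            err = ((G(z) - x) ** 2).mean().item()
            if err < best_err:
                best_err, best_rec = err, G(z)
    return best_rec

# Usage: purify an FGSM-perturbed frame before classification.
clf = AMCClassifier()
gens = [Generator() for _ in range(NUM_GENERATORS)]  # pretrained in practice
x = torch.randn(1, 2, SIGNAL_LEN)                    # received I/Q frame
y = torch.tensor([3])                                # true modulation label
x_adv = fgsm_attack(clf, x, y)
pred = clf(defense_gan_project(gens, x_adv)).argmax(dim=1)
```

Selecting among several independently trained generators is one simple way to counter mode collapse: a single generator that has dropped some modulation modes would reconstruct frames of those types poorly, while a mixture can cover the signal distribution collectively.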
Related papers
- Problem space structural adversarial attacks for Network Intrusion Detection Systems based on Graph Neural Networks [8.629862888374243]
We propose the first formalization of adversarial attacks specifically tailored for GNNs in network intrusion detection.
We outline and model the problem space constraints that attackers need to consider to carry out feasible structural attacks in real-world scenarios.
Our findings demonstrate the increased robustness of the models against classical feature-based adversarial attacks.
arXiv Detail & Related papers (2024-03-18T14:40:33Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Scattering Model Guided Adversarial Examples for SAR Target Recognition: Attack and Defense [20.477411616398214]
This article explores the domain knowledge of the SAR imaging process and proposes a novel Scattering Model Guided Adversarial Attack (SMGAA) algorithm.
The proposed SMGAA algorithm can generate adversarial perturbations in the form of electromagnetic scattering responses (called adversarial scatterers).
Comprehensive evaluations on the MSTAR dataset show that the adversarial scatterers generated by SMGAA are more robust to perturbations and transformations in the SAR processing chain than the currently studied attacks.
arXiv Detail & Related papers (2022-09-11T03:41:12Z)
- A Mask-Based Adversarial Defense Scheme [3.759725391906588]
Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs).
We propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effect from adversarial attacks.
arXiv Detail & Related papers (2022-04-21T12:55:27Z)
- The Adversarial Security Mitigations of mmWave Beamforming Prediction Models using Defensive Distillation and Adversarial Retraining [0.41998444721319217]
This paper presents the security vulnerabilities in deep learning for beamforming prediction using deep neural networks (DNNs) in 6G wireless networks.
The proposed scheme can be used in situations where the data are corrupted by adversarial examples in the training data (a generic adversarial-retraining loop is sketched after this list).
arXiv Detail & Related papers (2022-02-16T16:47:17Z)
- Generative Adversarial Network-Driven Detection of Adversarial Tasks in Mobile Crowdsensing [5.675436513661266]
Crowdsensing systems are vulnerable to various attacks because they are built on non-dedicated and ubiquitous properties.
Previous works suggest that GAN-based attacks are more devastating than empirically designed attack samples.
This paper aims to detect intelligently designed illegitimate sensing service requests by integrating a GAN-based model.
arXiv Detail & Related papers (2022-02-16T00:23:25Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Class-Conditional Defense GAN Against End-to-End Speech Attacks [82.21746840893658]
We propose a novel approach against end-to-end adversarial attacks developed to fool advanced speech-to-text systems such as DeepSpeech and Lingvo.
Unlike conventional defense approaches, the proposed approach does not directly employ low-level transformations such as autoencoding a given input signal.
Our defense-GAN considerably outperforms conventional defense algorithms in terms of word error rate and sentence level recognition accuracy.
arXiv Detail & Related papers (2020-10-22T00:02:02Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
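As flagged in the mmWave beamforming entry above, here is a generic adversarial-retraining loop: a minimal sketch of the general technique, not that paper's specific scheme. Each minibatch is augmented with FGSM-perturbed copies so the model learns to classify both clean and perturbed inputs. The toy model, data shapes, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of generic adversarial retraining (not the cited paper's
# exact scheme): augment every minibatch with FGSM-perturbed copies.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))  # toy predictor
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps=0.05):
    """Standard FGSM: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

for step in range(100):                      # illustrative training loop
    x = torch.randn(16, 64)                  # stand-in minibatch
    y = torch.randint(0, 4, (16,))
    x_adv = fgsm(x, y)                       # craft perturbed copies on the fly
    opt.zero_grad()
    # Train on the union of clean and adversarial examples.
    loss = loss_fn(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    loss.backward()
    opt.step()
```

The usual trade-off is a small drop in clean accuracy in exchange for robustness against the attack family seen during retraining.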