SafeAMC: Adversarial training for robust modulation recognition models
- URL: http://arxiv.org/abs/2105.13746v1
- Date: Fri, 28 May 2021 11:29:04 GMT
- Title: SafeAMC: Adversarial training for robust modulation recognition models
- Authors: Javier Maroto, Gérôme Bovet and Pascal Frossard
- Abstract summary: In communication systems, many tasks, such as modulation recognition, rely on Deep Neural Network (DNN) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models.
- Score: 53.391095789289736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In communication systems, many tasks, like modulation recognition, rely on
Deep Neural Network (DNN) models. However, these models have been shown to be
susceptible to adversarial perturbations, namely imperceptible additive noise crafted
to induce misclassification. This raises questions not only about security but also
about the general trust in model predictions. We propose to use adversarial training,
which consists of fine-tuning the model with adversarial perturbations, to increase
the robustness of automatic modulation recognition (AMC) models. We show that current
state-of-the-art models benefit from adversarial training, which mitigates the
robustness issues for some families of modulations. We use adversarial perturbations
to visualize the learned features, and we find that in robust models the signal
symbols are shifted towards the nearest classes in constellation space, as maximum
likelihood methods would do. This confirms that robust models are not only more
secure but also more interpretable, basing their decisions on signal statistics that
are relevant to modulation recognition.
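As a rough illustration of the adversarial fine-tuning loop described in the abstract, the sketch below crafts perturbations with projected gradient descent (PGD) and retrains a classifier on the perturbed I/Q frames. The toy architecture, attack budget, and data shapes are assumptions for illustration only and do not reproduce the paper's exact setup.

```python
# Hypothetical sketch of PGD-based adversarial fine-tuning for an AMC model.
# Architecture, epsilon budget, and data pipeline are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallAMCNet(nn.Module):
    """Toy 1-D CNN over I/Q frames shaped (batch, 2, 128)."""

    def __init__(self, num_classes: int = 11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def pgd_attack(model, x, y, eps=0.05, alpha=0.01, steps=7):
    """L-infinity PGD: craft an imperceptible additive perturbation of x."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()


def adversarial_finetune(model, loader, epochs=5, lr=1e-4, device="cpu"):
    """Fine-tune a pretrained AMC model on PGD-perturbed signals."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:  # x: (batch, 2, 128) I/Q frames, y: class indices
            x, y = x.to(device), y.to(device)
            delta = pgd_attack(model, x, y)
            opt.zero_grad()
            loss = F.cross_entropy(model(x + delta), y)
            loss.backward()
            opt.step()
    return model
```

In the spirit of the paper's analysis, the perturbed frames x + delta could then be mapped back to constellation space to inspect which symbols a robust model pushes towards neighbouring classes.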
Related papers
- Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates [13.911586916369108]
We show that, when machine-learning models are updated, newly introduced misclassifications can also degrade robustness to adversarial examples.
We propose a technique, named robustness-congruent adversarial training, to address this issue.
We show that our algorithm and, more generally, learning with non-regression constraints, provides a theoretically-grounded framework to train consistent estimators.
arXiv Detail & Related papers (2024-02-27T10:37:13Z)
- JAB: Joint Adversarial Prompting and Belief Augmentation [81.39548637776365]
We introduce a joint framework in which we probe and improve the robustness of a black-box target model via adversarial prompting and belief augmentation.
This framework utilizes an automated red teaming approach to probe the target model, along with a belief augmenter to generate instructions for the target model to improve its robustness to those adversarial probes.
arXiv Detail & Related papers (2023-11-16T00:35:54Z)
- Evaluating Concurrent Robustness of Language Models Across Diverse Challenge Sets [46.19529338280716]
Language models, characterized by their black-box nature, often hallucinate and display sensitivity to input perturbations.
We introduce a methodology designed to examine how input perturbations affect language models across various scales.
We present three distinct fine-tuning strategies to address robustness against multiple perturbations.
arXiv Detail & Related papers (2023-11-15T02:59:10Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data may come from biases in data acquisition rather than from the underlying task.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Maximum Likelihood Distillation for Robust Modulation Classification [50.51144496609274]
We combine knowledge distillation ideas and adversarial training to build more robust AMC systems.
We propose to use the Maximum Likelihood function, which can solve the AMC problem in offline settings, to generate better training labels; a sketch of such a maximum-likelihood classifier is given after this list.
arXiv Detail & Related papers (2022-11-01T21:06:11Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Frequency-based Automated Modulation Classification in the Presence of Adversaries [17.930854969511046]
We present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference.
In this work, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-11-02T17:12:22Z)
- Learning to Generate Noise for Multi-Attack Robustness [126.23656251512762]
Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations.
However, most existing defenses target a single type of perturbation; in safety-critical applications this leaves the system vulnerable, since the attacker can adopt diverse adversaries to deceive it.
We propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks.
arXiv Detail & Related papers (2020-06-22T10:44:05Z)
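For context on the maximum-likelihood baseline referenced both in the abstract above and in the Maximum Likelihood Distillation entry, here is a minimal sketch of maximum-likelihood modulation classification under AWGN with known noise variance. The candidate constellations, symbol-level input, and noise model are simplifying assumptions, not any paper's exact formulation.

```python
# Hypothetical sketch of a maximum-likelihood modulation classifier under
# AWGN with known noise variance. Constellations and inputs are illustrative.
import numpy as np

# Candidate constellations (complex symbol alphabets, roughly unit power).
CONSTELLATIONS = {
    "BPSK": np.array([1 + 0j, -1 + 0j]),
    "QPSK": np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2),
    "8PSK": np.exp(2j * np.pi * np.arange(8) / 8),
}


def log_likelihood(symbols, alphabet, noise_var):
    """Sum over received symbols of log p(r | modulation), treating the
    transmitted symbol as uniform over the alphabet (a Gaussian mixture).
    The Gaussian normalization constant is shared by all hypotheses and dropped."""
    d2 = np.abs(symbols[:, None] - alphabet[None, :]) ** 2  # (num_symbols, alphabet_size)
    per_symbol = np.log(np.mean(np.exp(-d2 / noise_var), axis=1) + 1e-300)
    return float(per_symbol.sum())


def ml_classify(symbols, noise_var=0.1):
    """Pick the modulation whose alphabet maximizes the likelihood."""
    scores = {name: log_likelihood(symbols, alph, noise_var)
              for name, alph in CONSTELLATIONS.items()}
    return max(scores, key=scores.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulate QPSK symbols in AWGN and check the decision.
    tx = rng.choice(CONSTELLATIONS["QPSK"], size=256)
    noise = np.sqrt(0.05) * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
    print(ml_classify(tx + noise, noise_var=0.1))  # expected: "QPSK"
```

Deciding for the constellation that maximizes the likelihood of the received symbols is the kind of behaviour the abstract above reports robust models approximating, with symbols shifted towards the nearest class in constellation space.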
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.