On the benefits of robust models in modulation recognition
- URL: http://arxiv.org/abs/2103.14977v1
- Date: Sat, 27 Mar 2021 19:58:06 GMT
- Title: On the benefits of robust models in modulation recognition
- Authors: Javier Maroto, Gérôme Bovet and Pascal Frossard
- Abstract summary: Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
- Score: 53.391095789289736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the rapid changes in telecommunication systems and their higher
dependence on artificial intelligence, it is increasingly important to have
models that can perform well under different, possibly adverse, conditions.
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in
many tasks in communications. However, in other domains, like image
classification, DNNs have been shown to be vulnerable to adversarial
perturbations, which consist of imperceptible crafted noise that when added to
the data fools the model into misclassification. This puts into question the
security of DNNs in communication tasks, and in particular in modulation
recognition. We propose a novel framework to test the robustness of current
state-of-the-art models where the adversarial perturbation strength is
dependent on the signal strength and measured with the "signal to perturbation
ratio" (SPR). We show that current state-of-the-art models are susceptible to
these perturbations. In contrast to current research on the topic of image
classification, modulation recognition allows us to have easily accessible
insights on the usefulness of the features learned by DNNs by looking at the
constellation space. When analyzing these vulnerable models, we found that
adversarial perturbations do not shift the symbols towards the nearest classes
in constellation space. This shows that DNNs do not base their decisions on
the signal statistics that are important for the Bayes-optimal modulation
recognition model, but rather on spurious correlations in the training data. Our feature
analysis and proposed framework can help in the task of finding better models
for communication systems.
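The SPR-constrained attack framework described in the abstract can be sketched as follows. This is a minimal illustration and not the authors' released code: the L2-bounded PGD formulation, the PyTorch usage, the tensor shapes, and the helper names (`spr_db_to_l2_budget`, `pgd_spr_attack`) are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def spr_db_to_l2_budget(x: torch.Tensor, spr_db: float) -> torch.Tensor:
    """Translate a target signal-to-perturbation ratio (dB) into an L2 budget.

    With SPR = ||x||^2 / ||delta||^2, the largest allowed perturbation norm is
    ||delta|| = ||x|| / sqrt(SPR_linear). Here x is a single I/Q frame,
    e.g. a tensor of shape (2, 128).
    """
    spr_linear = 10.0 ** (spr_db / 10.0)
    return x.norm(p=2) / spr_linear ** 0.5

def pgd_spr_attack(model, x, y, spr_db=10.0, steps=10):
    """L2 PGD whose radius is set by the target SPR (hypothetical helper)."""
    eps = spr_db_to_l2_budget(x, spr_db)
    step_size = 2.5 * eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model((x + delta).unsqueeze(0))            # add batch dimension
        loss = F.cross_entropy(logits, y.view(1))
        loss.backward()
        with torch.no_grad():
            g = delta.grad
            delta += step_size * g / (g.norm(p=2) + 1e-12)  # normalized ascent step
            norm = delta.norm(p=2)
            if norm > eps:                                  # project back onto the SPR ball
                delta *= eps / norm
        delta.grad.zero_()
    return (x + delta).detach()
```

Tying the L2 radius to the energy of the frame itself is what makes the perturbation strength signal-dependent, as the abstract describes. The constellation-space observation can likewise be illustrated with a crude minimum-distance check against ideal constellation points; the constellations and the helper below are assumptions for illustration, not the paper's exact analysis.

```python
import numpy as np

# Hypothetical ideal constellations (unit average power); real datasets
# include many more modulation classes and pulse shaping.
CONSTELLATIONS = {
    "BPSK": np.array([1 + 0j, -1 + 0j]),
    "QPSK": np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2),
}

def nearest_constellation(symbols: np.ndarray) -> str:
    """Return the modulation whose ideal points are closest on average to the
    received complex symbols (a crude minimum-distance check)."""
    def mean_min_dist(points):
        return np.mean(np.min(np.abs(symbols[:, None] - points[None, :]), axis=1))
    return min(CONSTELLATIONS, key=lambda name: mean_min_dist(CONSTELLATIONS[name]))
```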
Related papers
- Cognitive Networks and Performance Drive fMRI-Based State Classification Using DNN Models [0.0]
We employ two structurally different and complementary DNN-based models to classify individual cognitive states.
We show that despite the architectural differences, both models consistently produce a robust relationship between prediction accuracy and individual cognitive performance.
arXiv Detail & Related papers (2024-08-14T15:25:51Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can instead come from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - Uncertainty-aware deep learning for digital twin-driven monitoring:
Application to fault detection in power lines [0.0]
Deep neural networks (DNNs) are often coupled with physics-based models or data-driven surrogate models to perform fault detection and health monitoring of systems in the low data regime.
These models can exhibit parametric uncertainty that propagates to the generated data.
In this article, we quantify the impact of both these sources of uncertainty on the performance of the DNN.
arXiv Detail & Related papers (2023-03-20T09:27:58Z) - From Environmental Sound Representation to Robustness of 2D CNN Models
Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z) - Zero-bias Deep Neural Network for Quickest RF Signal Surveillance [14.804498377638696]
The Internet of Things (IoT) is reshaping modern society by allowing a decent number of RF devices to connect and share information through RF channels.
This paper provides a deep learning framework for RF signal surveillance.
We jointly integrate the Deep Neural Networks (DNNs) and Quickest Detection (QD) to form a sequential signal surveillance scheme.
arXiv Detail & Related papers (2021-10-12T07:48:57Z) - SafeAMC: Adversarial training for robust modulation recognition models [53.391095789289736]
In communication systems, many tasks, like modulation recognition, rely on Deep Neural Network (DNN) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models (a minimal fine-tuning sketch is given after this list).
arXiv Detail & Related papers (2021-05-28T11:29:04Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
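For the SafeAMC entry above, a minimal sketch of the adversarial-training idea (fine-tuning on adversarial examples crafted on the fly) is shown below. It reuses the hypothetical `pgd_spr_attack` helper from the earlier sketch; the loop structure, optimizer handling, and SPR value are assumptions, not the paper's published recipe.

```python
import torch

def adversarial_finetune(model, loader, optimizer, spr_db=10.0, epochs=1):
    """Fine-tune a pretrained modulation classifier on SPR-bounded adversarial
    examples (sketch of the adversarial-training idea)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:                       # x: (B, 2, N) I/Q frames, y: labels
            # craft perturbations on the fly, reusing the hypothetical
            # pgd_spr_attack helper from the sketch above
            x_adv = torch.stack([
                pgd_spr_attack(model, xi, yi, spr_db=spr_db)
                for xi, yi in zip(x, y)
            ])
            optimizer.zero_grad()
            loss = loss_fn(model(x_adv), y)       # train on the perturbed frames
            loss.backward()
            optimizer.step()
    return model
```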