A Neural Rejection System Against Universal Adversarial Perturbations in Radio Signal Classification
- URL: http://arxiv.org/abs/2506.11901v1
- Date: Fri, 13 Jun 2025 15:52:07 GMT
- Title: A Neural Rejection System Against Universal Adversarial Perturbations in Radio Signal Classification
- Authors: Lu Zhang, Sangarapillai Lambotharan, Gan Zheng, Fabio Roli
- Abstract summary: A defense system called a neural rejection system is proposed against universal adversarial perturbations. We show that the proposed neural rejection system defends against universal adversarial perturbations with significantly higher accuracy than the undefended deep neural network.
- Score: 23.98877578038472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advantages of deep learning over traditional methods have been demonstrated for radio signal classification in recent years. However, various researchers have discovered that even small but intentional feature perturbations, known as adversarial examples, can significantly deteriorate the performance of deep learning based radio signal classification. Among the various kinds of adversarial examples, the universal adversarial perturbation has gained considerable attention because it is data independent, making it a practical strategy for fooling radio signal classification with a high success rate. Therefore, in this paper, we propose a defense system called the neural rejection system against universal adversarial perturbations, and evaluate its performance by generating white-box universal adversarial perturbations. We show that the proposed neural rejection system defends against universal adversarial perturbations with significantly higher accuracy than the undefended deep neural network.
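The core of a neural rejection defense is to abstain on inputs whose classifier score falls below a threshold, since universal perturbations tend to push samples into low-confidence regions. Below is a minimal PyTorch sketch of such a rejection rule; the softmax-score criterion and threshold value are illustrative assumptions, not the paper's exact architecture (which may, e.g., attach an SVM head to the DNN features).

```python
# Minimal sketch of a confidence-threshold rejection layer, assuming a
# generic PyTorch classifier that outputs logits; the paper's exact
# rejection architecture is not reproduced here.
import torch
import torch.nn.functional as F

REJECT = -1  # label returned for rejected (likely adversarial) inputs

def classify_with_rejection(model: torch.nn.Module,
                            x: torch.Tensor,
                            threshold: float = 0.8) -> torch.Tensor:
    """Return class predictions, or REJECT when the top score is too low.

    Universal perturbations tend to push inputs off the data manifold,
    which typically lowers the maximum class score; thresholding that
    score is the core idea behind neural rejection.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)        # (batch, n_classes)
    top_prob, top_class = probs.max(dim=1)
    top_class[top_prob < threshold] = REJECT      # abstain on low confidence
    return top_class
```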
Related papers
- Democratic Training Against Universal Adversarial Perturbations [7.123808749940524]
In this work, we observe that universal adversarial perturbations usually lead to an abnormal entropy spectrum in hidden layers. We propose an efficient yet effective defense method for mitigating UAPs, called Democratic Training. The results show that it effectively reduces the attack success rate, improves model robustness and preserves the model accuracy on clean samples.
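As a rough illustration of the entropy-spectrum observation, the hedged sketch below estimates the Shannon entropy of each ReLU layer's activations via forward hooks; the layer selection and histogram binning are assumptions, not the paper's exact statistic.

```python
# Hedged sketch: per-layer activation entropy via forward hooks.
import numpy as np
import torch

def layer_entropies(model: torch.nn.Module, x: torch.Tensor, bins: int = 64):
    """Return {layer_name: Shannon entropy of its activation histogram}."""
    acts = {}
    hooks = [m.register_forward_hook(
                 lambda mod, inp, out, name=name: acts.__setitem__(name, out.detach()))
             for name, m in model.named_modules()
             if isinstance(m, torch.nn.ReLU)]
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    entropies = {}
    for name, a in acts.items():
        hist, _ = np.histogram(a.cpu().numpy().ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                               # drop empty bins
        entropies[name] = float(-(p * np.log2(p)).sum())
    return entropies
```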
arXiv Detail & Related papers (2025-02-08T12:15:32Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations has been widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- On Procedural Adversarial Noise Attack And Defense [2.5388455804357952]
Adversarial examples can mislead neural networks into making prediction errors with small perturbations on the input images.
In this paper, we propose two universal adversarial perturbation (UAP) generation methods based on procedural noise functions.
Without changing the semantic representations, the adversarial examples generated via our methods show superior performance on the attack.
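For intuition, here is a hedged sketch of a data-independent procedural perturbation built from random-phase sinusoids, a simple stand-in for the Perlin/Gabor noise functions such methods use; all parameters are illustrative.

```python
# Hedged sketch of a procedural-noise UAP: a fixed, data-independent
# pattern clipped to an L-infinity budget.
import numpy as np

def sinusoidal_uap(shape=(224, 224), n_waves=8, eps=8 / 255, seed=0):
    """Return one perturbation of `shape`, bounded by +/- eps."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    noise = np.zeros(shape)
    for _ in range(n_waves):
        freq = rng.uniform(0.01, 0.1)            # cycles per pixel
        angle = rng.uniform(0, np.pi)            # wave orientation
        phase = rng.uniform(0, 2 * np.pi)
        noise += np.sin(2 * np.pi * freq *
                        (np.cos(angle) * xx + np.sin(angle) * yy) + phase)
    noise = noise / np.abs(noise).max()          # normalise to [-1, 1]
    return eps * noise                           # respect the attack budget

# The same perturbation is added to every image: x_adv = clip(x + uap, 0, 1).
```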
arXiv Detail & Related papers (2021-08-10T02:47:01Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective [78.05383266222285]
A human-imperceptible perturbation can be generated to fool a deep neural network (DNN) on most images.
A similar phenomenon has been observed in the deep steganography task, where a decoder network can retrieve a secret image back from a slightly perturbed cover image.
We propose two new variants of universal perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is less visible to the human eye.
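Both variants build on the standard iterative UAP recipe: optimize a single perturbation over the whole dataset under an L-infinity budget. A minimal PyTorch sketch of that base loop follows; the USAP hiding loss and the HP-UAP high-pass constraint are omitted, and the hyperparameters are assumptions.

```python
# Hedged sketch of a generic universal adversarial perturbation loop;
# the paper's USAP/HP-UAP variants add further objectives on top.
import torch
import torch.nn.functional as F

def make_uap(model, loader, eps=8 / 255, lr=0.01, epochs=5):
    """One perturbation `delta` that degrades accuracy across the dataset."""
    delta = torch.zeros(next(iter(loader))[0].shape[1:], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    model.eval()
    for _ in range(epochs):
        for x, y in loader:
            loss = -F.cross_entropy(model(x + delta), y)  # maximise error
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                   # L-inf projection
    return delta.detach()
```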
arXiv Detail & Related papers (2021-02-12T12:26:39Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why deep neural networks perform poorly under adversarial perturbations.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
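For 0-1 loss, a common (Domingos-style) decomposition estimates bias from the ensemble's majority vote and variance from disagreement with that vote; the hedged sketch below follows that reading, which may differ in detail from the paper's measurement.

```python
# Hedged sketch of a 0-1-loss bias/variance estimate from an ensemble
# of models trained on bootstrap resamples of the training set.
import numpy as np

def bias_variance_01(preds: np.ndarray, y_true: np.ndarray):
    """preds: (n_models, n_samples) integer class predictions on a test set."""
    # "Main" prediction: the per-sample majority vote across models.
    main = np.apply_along_axis(lambda p: np.bincount(p).argmax(), 0, preds)
    bias = np.mean(main != y_true)               # systematic error
    variance = np.mean(preds != main[None, :])   # disagreement with the vote
    return bias, variance

# Evaluating preds on clean vs. adversarially perturbed test inputs shows
# how attacks shift the bias/variance balance.
```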
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
- Improving Adversarial Robustness by Enforcing Local and Global Compactness [19.8818435601131]
Adversarial training is the most successful method that consistently resists a wide range of attacks.
We propose the Adversary Divergence Reduction Network which enforces local/global compactness and the clustering assumption.
The experimental results demonstrate that augmenting adversarial training with our proposed components can further improve the robustness of the network.
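As a hedged sketch, local compactness can be read as pulling each sample's adversarial features toward its clean features, and global compactness as a center-loss-style pull toward class centers; the functions below illustrate that reading and are not the paper's exact objectives.

```python
# Hedged sketch of local/global compactness terms that could augment
# adversarial training.
import torch

def local_compactness(f_clean: torch.Tensor, f_adv: torch.Tensor) -> torch.Tensor:
    """Pull each sample's adversarial features toward its clean features."""
    return (f_clean - f_adv).pow(2).sum(dim=1).mean()

def global_compactness(feats: torch.Tensor, labels: torch.Tensor,
                       centers: torch.Tensor) -> torch.Tensor:
    """Pull features toward their class centers (a center-loss-style term)."""
    return (feats - centers[labels]).pow(2).sum(dim=1).mean()

# total_loss = ce_on_adv + a * local_compactness(...) + b * global_compactness(...)
```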
arXiv Detail & Related papers (2020-07-10T00:43:06Z)
- Universal Adversarial Perturbations: A Survey [0.0]
Deep neural networks are susceptible to adversarial perturbations.
These perturbations can cause the network's prediction to change without making perceptible changes to the input image.
We provide a detailed discussion on the various data-driven and data-independent methods for generating universal perturbations.
arXiv Detail & Related papers (2020-05-16T20:18:26Z)
- Frequency-Tuned Universal Adversarial Attacks [19.79803434998116]
We propose a frequency-tuned universal attack method to compute universal perturbations.
We show that our method can achieve a good balance between perceptibility and effectiveness in terms of fooling rate.
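One way to frequency-tune a perturbation is to synthesize it in the 2-D DCT domain and keep only a chosen band before transforming back; the sketch below does exactly that, with the band and budget as illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch of shaping a universal perturbation in the frequency
# domain with a 2-D DCT, confining its energy to a chosen band.
import numpy as np
from scipy.fft import dctn, idctn

def band_limited_uap(shape=(224, 224), band=(8, 32), eps=8 / 255, seed=0):
    """Random perturbation whose DCT coefficients lie in `band` only."""
    rng = np.random.default_rng(seed)
    coeffs = np.zeros(shape)
    lo, hi = band                                # low/high DCT indices kept
    coeffs[lo:hi, lo:hi] = rng.standard_normal((hi - lo, hi - lo))
    noise = idctn(coeffs, norm="ortho")          # back to the pixel domain
    return eps * noise / np.abs(noise).max()     # L-inf budget
```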
arXiv Detail & Related papers (2020-03-11T22:52:19Z)