Filtered Randomized Smoothing: A New Defense for Robust Modulation Classification
- URL: http://arxiv.org/abs/2410.06339v1
- Date: Tue, 8 Oct 2024 20:17:25 GMT
- Title: Filtered Randomized Smoothing: A New Defense for Robust Modulation Classification
- Authors: Wenhan Zhang, Meiyu Zhong, Ravi Tandon, Marwan Krunz
- Abstract summary: We study the problem of designing robust modulation classifiers that can provide provable defense against arbitrary attacks.
We propose Filtered Randomized Smoothing (FRS), a novel defense which combines spectral filtering with randomized smoothing.
We show that FRS significantly outperforms existing defenses including AT and RS in terms of accuracy on both attacked and benign signals.
- Score: 16.974803642923465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Network (DNN) based classifiers have recently been used for the modulation classification of RF signals. These classifiers have shown impressive performance gains relative to conventional methods; however, they are vulnerable to imperceptible (low-power) adversarial attacks. Prominent defense approaches include adversarial training (AT) and randomized smoothing (RS). While AT increases robustness in general, it fails to provide resilience against previously unseen adaptive attacks. Other approaches, such as RS, which injects noise into the input, address this shortcoming by providing provable certified guarantees against arbitrary attacks; however, they tend to sacrifice accuracy. In this paper, we study the problem of designing robust DNN-based modulation classifiers that can provide provable defense against arbitrary attacks without significantly sacrificing accuracy. To this end, we first analyze the spectral content of commonly studied attacks on modulation classifiers for the benchmark RadioML dataset. We observe that the spectral signatures of unperturbed RF signals are highly localized, whereas attack signals tend to be spread out in frequency. To exploit this spectral heterogeneity, we propose Filtered Randomized Smoothing (FRS), a novel defense that combines spectral filtering with randomized smoothing. FRS can be viewed as a strengthening of RS that leverages the spectral heterogeneity inherent to the modulation classification problem. In addition to providing an approach to compute the certified accuracy of FRS, we present a comprehensive set of simulations on the RadioML dataset demonstrating that FRS significantly outperforms existing defenses, including AT and RS, in terms of accuracy on both attacked and benign signals.
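As a concrete illustration of the pipeline described in the abstract, the following is a minimal Python sketch (assuming NumPy/SciPy) of filtered randomized smoothing: a low-pass spectral filter applied to the received complex baseband signal, followed by standard Gaussian randomized smoothing with a majority vote and a Cohen-style certified radius. The filter design, noise level, sample count, class count, and `classifier` interface are illustrative assumptions and need not match the paper's exact configuration or certification procedure.

```python
import numpy as np
from scipy.stats import norm

def lowpass_filter(x, keep_fraction=0.25):
    """Illustrative spectral filter: keep only low-frequency FFT bins.

    x is treated as a complex baseband vector (I + jQ). The cutoff
    (keep_fraction) is an assumed hyperparameter, not the paper's value.
    """
    X = np.fft.fft(x)
    n = len(X)
    keep = max(1, int(n * keep_fraction / 2))
    mask = np.zeros(n)
    mask[:keep] = 1.0       # low positive frequencies
    mask[-keep:] = 1.0      # low negative frequencies
    return np.fft.ifft(X * mask)

def frs_predict(classifier, x, sigma=0.1, num_samples=1000, num_classes=11):
    """Filtered Randomized Smoothing prediction (sketch).

    Filter the received signal, then take a majority vote of the base
    classifier over Gaussian-perturbed copies, as in standard randomized
    smoothing. `classifier` is assumed to map one complex vector to an
    integer class label.
    """
    x_f = lowpass_filter(x)
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(num_samples):
        noise = np.random.randn(x_f.size) + 1j * np.random.randn(x_f.size)
        counts[classifier(x_f + sigma * noise / np.sqrt(2))] += 1
    top = int(counts.argmax())
    p_hat = min(counts[top] / num_samples, 1.0 - 1e-6)
    # Cohen-style certified L2 radius (point estimate; a rigorous
    # certificate would use a lower confidence bound on p_hat).
    radius = sigma * norm.ppf(p_hat) if p_hat > 0.5 else 0.0
    return top, radius
```

The intuition follows the abstract's observation: because clean modulated signals are spectrally localized while attack perturbations are spread out in frequency, the filtering step removes much of the attack energy before smoothing, so robustness can be gained without sacrificing as much accuracy as plain RS.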
Related papers
- Correlation Analysis of Adversarial Attack in Time Series Classification [6.117704456424016]
This study investigates the vulnerability of time series classification models to adversarial attacks.
Regularization techniques and noise introduction are shown to enhance the effectiveness of attacks.
Models designed to prioritize global information are revealed to possess greater resistance to adversarial manipulations.
arXiv Detail & Related papers (2024-08-21T01:11:32Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- AudioFool: Fast, Universal and synchronization-free Cross-Domain Attack on Speech Recognition [0.9913418444556487]
We investigate the properties required for robust attacks compatible with the Over-The-Air (OTA) model.
We design a method for generating attacks with any desired combination of such properties.
We evaluate our method on standard keyword classification tasks and analyze it in the OTA setting.
arXiv Detail & Related papers (2023-09-20T16:59:22Z)
- A Spectral Perspective towards Understanding and Improving Adversarial Robustness [8.912245110734334]
Adversarial training (AT) has proven to be an effective defense approach, but the mechanism behind its robustness improvement is not fully understood.
We show that AT induces the deep model to focus more on the low-frequency region, which retains shape-biased representations, to gain robustness.
We propose a spectral alignment regularization (SAR) that keeps the spectral output for an adversarial input as close as possible to that of its natural counterpart.
arXiv Detail & Related papers (2023-06-25T14:47:03Z)
- One-shot Generative Distribution Matching for Augmented RF-based UAV Identification [0.0]
This work addresses the challenge of identifying Unmanned Aerial Vehicles (UAVs) using radio-frequency (RF) fingerprinting in limited RF environments.
The complexity and variability of RF signals, influenced by environmental interference and hardware imperfections, often render traditional RF-based identification methods ineffective.
One-shot generative methods for augmenting transformed RF signals offer a significant improvement in UAV identification.
arXiv Detail & Related papers (2023-01-20T02:35:43Z)
- Few-shot One-class Domain Adaptation Based on Frequency for Iris Presentation Attack Detection [33.41823375502942]
Iris presentation attack detection (PAD) has achieved remarkable success in ensuring the reliability and security of iris recognition systems.
Most existing methods exploit discriminative features in the spatial domain and report outstanding performance under intra-dataset settings.
We propose a new domain adaptation setting called Few-shot One-class Domain Adaptation (FODA), where adaptation only relies on a limited number of target bonafide samples.
arXiv Detail & Related papers (2022-04-01T11:55:06Z)
- Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines [65.0803400763215]
This work critically examines how adversarial robustness guarantees change when state-of-the-art certifiably robust models encounter out-of-distribution data.
We propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.
We find that FourierMix augmentations help eliminate the spectral bias of certifiably robust models enabling them to achieve significantly better robustness guarantees on a range of OOD benchmarks.
arXiv Detail & Related papers (2021-12-01T17:11:22Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy rate, precision, low false-alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)