Frequency-based Automated Modulation Classification in the Presence of
Adversaries
- URL: http://arxiv.org/abs/2011.01132v3
- Date: Fri, 19 Feb 2021 20:14:55 GMT
- Title: Frequency-based Automated Modulation Classification in the Presence of
Adversaries
- Authors: Rajeev Sahay and Christopher G. Brinton and David J. Love
- Abstract summary: We present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference.
In this work, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs).
- Score: 17.930854969511046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic modulation classification (AMC) aims to improve the efficiency of
crowded radio spectrums by automatically predicting the modulation
constellation of wireless RF signals. Recent work has demonstrated the ability
of deep learning to achieve robust AMC performance using raw in-phase and
quadrature (IQ) time samples. Yet, deep learning models are highly susceptible
to adversarial interference, which causes intelligent prediction models to
misclassify received samples with high confidence. Furthermore, adversarial
interference is often transferable, allowing an adversary to attack multiple
deep learning models with a single perturbation crafted for a particular
classification network. In this work, we present a novel receiver architecture
consisting of deep learning models capable of withstanding transferable
adversarial interference. Specifically, we show that adversarial attacks
crafted to fool models trained on time-domain features are not easily
transferable to models trained using frequency-domain features. In this
capacity, we demonstrate classification performance improvements greater than
30% on recurrent neural networks (RNNs) and greater than 50% on convolutional
neural networks (CNNs). We further show that our frequency feature-based classification models achieve accuracies greater than 99% in the absence of attacks.
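As a concrete illustration of the frequency feature extraction step, the sketch below converts a window of raw IQ time samples into a DFT-based representation suitable for the frequency-domain classifiers described above. The DFT, the 128-sample window, and the normalization are illustrative assumptions; the abstract does not fix these details.

    import numpy as np

    def iq_to_frequency_features(iq: np.ndarray) -> np.ndarray:
        """Map one window of complex IQ samples (I + 1j*Q) to a real-valued
        frequency-domain representation of shape (2, n), mirroring the
        stacked (I, Q) layout used for time-domain inputs."""
        spectrum = np.fft.fftshift(np.fft.fft(iq))     # DFT with the DC bin centered
        spectrum /= np.linalg.norm(spectrum) + 1e-12   # per-window scale normalization
        return np.stack([spectrum.real, spectrum.imag])

    # Example: a 128-sample QPSK-like burst (RadioML-style window length)
    rng = np.random.default_rng(0)
    iq_window = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=128) / np.sqrt(2)
    features = iq_to_frequency_features(iq_window)
    print(features.shape)   # (2, 128) -- fed to a CNN/RNN in place of raw IQ

Training one classifier on raw IQ windows and a second on features like these yields the two views whose attacks, per the abstract, do not transfer well to each other.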
Related papers
- Correlation Analysis of Adversarial Attack in Time Series Classification [6.117704456424016]
This study investigates the vulnerability of time series classification models to adversarial attacks.
Regularization techniques and noise introduction are shown to enhance the effectiveness of attacks.
Models designed to prioritize global information are revealed to possess greater resistance to adversarial manipulations.
arXiv Detail & Related papers (2024-08-21T01:11:32Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain (a toy sketch of this idea appears after this list).
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Frequency Domain Adversarial Training for Robust Volumetric Medical
Segmentation [111.61781272232646]
It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare.
We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models.
arXiv Detail & Related papers (2023-07-14T10:50:43Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Keep It Simple: CNN Model Complexity Studies for Interference
Classification Tasks [7.358050500046429]
We study the trade-off amongst dataset size, CNN model complexity, and classification accuracy under various levels of classification difficulty.
Our study, based on three wireless datasets, shows that a simpler CNN model with fewer parameters can perform just as well as a more complex model.
arXiv Detail & Related papers (2023-03-06T17:53:42Z) - Phase-shifted Adversarial Training [8.89749787668458]
We analyze the behavior of adversarial training through the lens of response frequency.
PhaseAT significantly improves the convergence for high-frequency information.
This results in improved adversarial robustness by enabling the model to produce smoothed predictions near each data point.
arXiv Detail & Related papers (2023-01-12T02:25:22Z) - SafeAMC: Adversarial training for robust modulation recognition models [53.391095789289736]
In communication systems, many tasks, such as modulation recognition, rely on Deep Neural Network (DNN) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models (a minimal sketch appears after this list).
arXiv Detail & Related papers (2021-05-28T11:29:04Z) - On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z) - Firearm Detection via Convolutional Neural Networks: Comparing a
Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z) - Action-Conditional Recurrent Kalman Networks For Forward and Inverse
Dynamics Learning [17.80270555749689]
Estimating accurate forward and inverse dynamics models is a crucial component of model-based control for robots.
We present two architectures for forward model learning and one for inverse model learning.
These architectures significantly outperform existing model learning frameworks as well as analytical models in terms of prediction performance.
arXiv Detail & Related papers (2020-10-20T11:28:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.