Data-Driven Subsampling in the Presence of an Adversarial Actor
- URL: http://arxiv.org/abs/2401.03488v1
- Date: Sun, 7 Jan 2024 14:02:22 GMT
- Title: Data-Driven Subsampling in the Presence of an Adversarial Actor
- Authors: Abu Shafin Mohammad Mahdee Jameel, Ahmed P. Mohamed, Jinho Yi, Aly El Gamal and Akshay Malhotra
- Abstract summary: Deep learning based automatic modulation classification (AMC) has received significant attention owing to its potential applications in both military and civilian use cases.
Data-driven subsampling techniques have been utilized to overcome the challenges associated with computational complexity and training time for AMC.
In this paper, we investigate the effects of an adversarial attack on an AMC system that employs deep learning models both for AMC and for subsampling.
- Score: 9.718390110364789
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning based automatic modulation classification (AMC) has received
significant attention owing to its potential applications in both military and
civilian use cases. Recently, data-driven subsampling techniques have been
utilized to overcome the challenges associated with computational complexity
and training time for AMC. Beyond these direct advantages of data-driven
subsampling, these methods also have regularizing properties that may improve
the adversarial robustness of the modulation classifier. In this paper, we
investigate the effects of an adversarial attack on an AMC system that employs
deep learning models both for AMC and for subsampling. Our analysis shows that
subsampling itself is an effective deterrent to adversarial attacks. We also
uncover the most efficient subsampling strategy when an adversarial attack on
both the classifier and the subsampler is anticipated.
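For intuition about the threat model described in the abstract, below is a minimal PyTorch-style sketch of a white-box gradient (FGSM) attack on a pipeline in which a learned subsampler feeds the modulation classifier. The module architectures, shapes, and the choice of FGSM are illustrative assumptions, not the paper's actual models.

```python
# Hypothetical sketch: FGSM attack on an AMC pipeline with a learned
# subsampler in front of the classifier. Layer sizes, shapes, and the
# subsampling rule are illustrative, not the paper's actual models.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScoreSubsampler(nn.Module):
    """Keeps the k highest-scoring time samples of each I/Q sequence."""
    def __init__(self, in_len=1024, k=256):
        super().__init__()
        self.scores = nn.Parameter(torch.randn(in_len))  # learned per-index scores
        self.k = k

    def forward(self, x):                           # x: (batch, 2, in_len) I/Q
        idx = self.scores.topk(self.k).indices.sort().values
        return x[:, :, idx]                         # gradient reaches kept samples only

class AMCClassifier(nn.Module):
    """Toy stand-in for a deep modulation classifier."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm(pipeline, x, y, eps=0.01):
    """White-box FGSM against the full subsampler + classifier pipeline."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(pipeline(x), y)
    loss.backward()
    # Indices the subsampler drops receive zero gradient, so the
    # perturbation is concentrated on the samples the classifier sees.
    return (x + eps * x.grad.sign()).detach()

pipeline = nn.Sequential(ScoreSubsampler(), AMCClassifier())
x, y = torch.randn(8, 2, 1024), torch.randint(0, 10, (8,))
x_adv = fgsm(pipeline, x, y)
```

Because the top-k index selection passes gradients only to the samples it keeps, the attacker's perturbation is effectively confined to the subsampled positions, which gives some intuition for the abstract's claim that subsampling itself deters adversarial attacks.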
Related papers
- Adaptive Meta-learning-based Adversarial Training for Robust Automatic Modulation Classification [4.754812565644714]
We propose a meta-learning-based adversarial training framework for automatic modulation classification (AMC) models.
Our results demonstrate that this training framework provides superior robustness and accuracy with much less online training time than conventional adversarial training of AMC models.
arXiv Detail & Related papers (2025-01-03T03:28:33Z)
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Correlation Analysis of Adversarial Attack in Time Series Classification [6.117704456424016]
This study investigates the vulnerability of time series classification models to adversarial attacks.
Regularization techniques and noise introduction are shown to enhance the effectiveness of attacks.
Models designed to prioritize global information are revealed to possess greater resistance to adversarial manipulations.
arXiv Detail & Related papers (2024-08-21T01:11:32Z)
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Gradient-based Adversarial Deep Modulation Classification with Data-driven Subsampling [6.447052211404121]
Deep learning techniques have been shown to deliver superior performance to conventional model-based strategies.
Deep learning techniques have also been shown to be vulnerable to gradient-based adversarial attacks.
We consider a data-driven subsampling setting, where several recently introduced deep-learning-based algorithms are employed.
We evaluate the best strategies for each party under various assumptions about its knowledge of the other party's strategy.
arXiv Detail & Related papers (2021-04-03T22:28:04Z)
- ATRO: Adversarial Training with a Rejection Option [10.36668157679368]
This paper proposes a classification framework with a rejection option to mitigate the performance deterioration caused by adversarial examples.
By applying the adversarial training objective to both a classifier and a rejection function simultaneously, the framework can abstain from classification when the model lacks sufficient confidence on a test data point (a toy illustration of such a rejection rule follows this list).
arXiv Detail & Related papers (2020-10-24T14:05:03Z)
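For the ATRO entry above, here is a minimal sketch of a confidence-based rejection rule. Note that ATRO trains a rejection function jointly with the classifier; the fixed softmax threshold used here is a simplifying assumption.

```python
# Toy rejection rule in the spirit of the ATRO entry above: abstain
# whenever the classifier's top softmax probability falls below a
# threshold. ATRO itself learns the rejection function; the fixed
# threshold here is a simplifying assumption.
import torch
import torch.nn.functional as F

REJECT = -1  # sentinel label returned when the model abstains

def classify_with_rejection(model, x, threshold=0.9):
    probs = F.softmax(model(x), dim=-1)   # class probabilities
    conf, pred = probs.max(dim=-1)        # top confidence and predicted label
    return torch.where(conf >= threshold, pred, torch.full_like(pred, REJECT))

# Usage, e.g.: preds = classify_with_rejection(classifier, x_test)
```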