Towards Assessing the Synthetic-to-Measured Adversarial Vulnerability of SAR ATR
- URL: http://arxiv.org/abs/2401.17038v1
- Date: Tue, 30 Jan 2024 14:16:24 GMT
- Title: Towards Assessing the Synthetic-to-Measured Adversarial Vulnerability of SAR ATR
- Authors: Bowen Peng, Bo Peng, Jingyuan Xia, Tianpeng Liu, Yongxiang Liu, Li Liu
- Abstract summary: This paper studies the synthetic-to-measured (S2M) transfer setting, where an attacker generates adversarial perturbation based solely on synthetic data and transfers it against victim models trained with measured data.
We also propose the transferability estimation attack (TEA) to uncover the adversarial risks in this more challenging and practical scenario.
- Score: 16.144102386839574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, there has been increasing concern about the vulnerability of deep
neural network (DNN)-based synthetic aperture radar (SAR) automatic target
recognition (ATR) to adversarial attacks, where a DNN can be easily deceived
by otherwise-clean inputs carrying imperceptible but aggressive perturbations. This paper
studies the synthetic-to-measured (S2M) transfer setting, where an attacker
generates adversarial perturbation based solely on synthetic data and transfers
it against victim models trained with measured data. Compared with the current
measured-to-measured (M2M) transfer setting, our approach does not need direct
access to the victim model or the measured SAR data. We also propose the
transferability estimation attack (TEA) to uncover the adversarial risks in
this more challenging and practical scenario. The TEA makes full use of the
limited similarity between the synthetic and measured data pairs for blind
estimation and optimization of S2M transferability, leading to feasible
surrogate model enhancement without mastering the victim model and data.
Comprehensive evaluations based on the publicly available synthetic and
measured paired labeled experiment (SAMPLE) dataset demonstrate that the TEA
outperforms state-of-the-art methods and can significantly enhance various
attack algorithms in computer vision and remote sensing applications. Codes and
data are available at https://github.com/scenarri/S2M-TEA.
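The S2M setting above can be illustrated with a minimal, hypothetical sketch: a perturbation is crafted with one-step FGSM on a surrogate model (standing in for a model trained on synthetic data) and then applied unchanged to a separate victim model (standing in for one trained on measured data). Tiny linear classifiers replace the DNN-based SAR ATR models here, and all weights, inputs, and the surrogate/victim offset are illustrative assumptions, not values from the paper or its released code.

```python
import numpy as np

def fgsm_perturbation(w, x, y, eps):
    """One-step FGSM for a linear score w @ x under logistic loss.

    loss = log(1 + exp(-y * (w @ x))); its input-gradient is
    -y * sigmoid(-y * (w @ x)) * w, and FGSM takes eps * sign(grad).
    """
    margin = y * (w @ x)
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w
    return eps * np.sign(grad)

# Surrogate weights (trained on "synthetic" data) and victim weights
# (trained on "measured" data): different but correlated, mimicking the
# limited synthetic/measured similarity the S2M setting relies on.
w_surrogate = np.array([1.0, -2.0, 0.5])
w_victim = w_surrogate + np.array([0.1, -0.2, 0.15])

x = np.array([0.2, -0.1, 0.4])  # clean input with true label y = +1
y = 1

# The attacker only touches the surrogate; the victim is never queried.
delta = fgsm_perturbation(w_surrogate, x, y, eps=0.5)

clean_score = w_victim @ x            # victim's score on the clean input
adv_score = w_victim @ (x + delta)    # victim's score on the transferred attack
print(adv_score < clean_score)  # True: the perturbation transfers
```

Because the surrogate and victim weights share their signs in this toy setup, the perturbation crafted on the surrogate also lowers the victim's score; the TEA's contribution, by contrast, is estimating and optimizing such transferability blindly when the victim model and measured data are inaccessible.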
Related papers
- AdvSwap: Covert Adversarial Perturbation with High Frequency Info-swapping for Autonomous Driving Perception [14.326474757036925]
This paper introduces a novel adversarial attack method, AdvSwap, which creatively utilizes wavelet-based high-frequency information swapping.
The scheme effectively removes the original label data and incorporates the guidance image data, producing concealed and robust adversarial samples.
The generated adversarial samples are also difficult for humans and algorithms to perceive.
arXiv Detail & Related papers (2025-02-12T13:05:35Z)
- A Conditional Tabular GAN-Enhanced Intrusion Detection System for Rare Attacks in IoT Networks [1.1970409518725493]
Internet of things (IoT) networks, boosted by 6G technology, are transforming various industries.
Their widespread adoption introduces significant security risks, particularly in detecting rare but potentially damaging cyber-attacks.
Traditional IDS often struggle with detecting rare attacks due to severe class imbalances in IoT data.
arXiv Detail & Related papers (2025-02-09T21:13:11Z)
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the segment anything model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Soft Segmented Randomization: Enhancing Domain Generalization in SAR-ATR for Synthetic-to-Measured [4.089756319249042]
We introduce a novel framework, soft segmented randomization, designed to reduce domain discrepancy and improve the generalization of automatic target recognition models.
Experimental results demonstrate that the proposed soft segmented randomization framework significantly enhances model performance on measured synthetic aperture radar data.
arXiv Detail & Related papers (2024-09-21T08:24:51Z)
- Advancing DDoS Attack Detection: A Synergistic Approach Using Deep Residual Neural Networks and Synthetic Oversampling [2.988269372716689]
We introduce an enhanced approach for DDoS attack detection by leveraging the capabilities of Deep Residual Neural Networks (ResNets).
We balance the representation of benign and malicious data points, enabling the model to better discern intricate patterns indicative of an attack.
Experimental results on a real-world dataset demonstrate that our approach achieves an accuracy of 99.98%, significantly outperforming traditional methods.
arXiv Detail & Related papers (2024-01-06T03:03:52Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard method in adversarial robustness defends against samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- Autoregressive Perturbations for Data Poisoning [54.205200221427994]
Data scraping from social media has led to growing concerns regarding unauthorized use of data.
Data poisoning attacks have been proposed as a bulwark against scraping.
We introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset.
arXiv Detail & Related papers (2022-06-08T06:24:51Z)
- Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses [82.3052187788609]
Adversarial training (AT) is considered to be one of the most reliable defenses against adversarial attacks.
Recent works show generalization improvement with adversarial samples under novel threat models.
We propose a novel threat model called the Joint Space Threat Model (JSTM).
Under JSTM, we develop novel adversarial attacks and defenses.
arXiv Detail & Related papers (2021-12-12T21:08:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.