SRAW-Attack: Space-Reweighted Adversarial Warping Attack for SAR Target Recognition
- URL: http://arxiv.org/abs/2601.10324v2
- Date: Sun, 18 Jan 2026 05:07:30 GMT
- Title: SRAW-Attack: Space-Reweighted Adversarial Warping Attack for SAR Target Recognition
- Authors: Yiming Zhang, Weibo Qin, Yuntian Liu, Feng Wang
- Abstract summary: Space-Reweighted Adversarial Warping (SRAW) is proposed, which generates adversarial examples through optimized spatial deformation. Experiments demonstrate that SRAW significantly degrades the performance of state-of-the-art SAR-ATR models.
- Score: 4.643429435927802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic aperture radar (SAR) imagery exhibits intrinsic information sparsity due to its unique electromagnetic scattering mechanism. Despite the widespread adoption of deep neural network (DNN)-based SAR automatic target recognition (SAR-ATR) systems, they remain vulnerable to adversarial examples and tend to over-rely on background regions, leading to degraded adversarial robustness. Existing adversarial attacks for SAR-ATR often require visually perceptible distortions to achieve effective performance, thereby necessitating an attack method that balances effectiveness and stealthiness. In this paper, a novel attack method termed Space-Reweighted Adversarial Warping (SRAW) is proposed, which generates adversarial examples through optimized spatial deformation with reweighted budgets across foreground and background regions. Extensive experiments demonstrate that SRAW significantly degrades the performance of state-of-the-art SAR-ATR models and consistently outperforms existing methods in terms of imperceptibility and adversarial transferability. Code is made available at https://github.com/boremycin/SAR-ATR-TransAttack.
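The abstract describes generating adversarial examples by spatially warping the image with a flow field whose budget is reweighted between foreground and background regions; the authors' actual implementation is in the linked repository. As a rough illustration only (not the paper's code), the core mechanics — bilinear warping under a flow field, with the per-pixel flow magnitude clipped to a region-dependent budget — might be sketched as follows. The budget values `eps_fg` and `eps_bg`, and the direction of the reweighting, are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def warp_image(img, flow):
    """Bilinearly sample img at (x + flow_x, y + flow_y) for each pixel."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sx = np.clip(xs + flow[..., 0], 0, w - 1)
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, w - 1), np.clip(y0 + 1, 0, h - 1)
    wx, wy = sx - x0, sy - y0
    return (img[y0, x0] * (1 - wx) * (1 - wy) + img[y0, x1] * wx * (1 - wy)
            + img[y1, x0] * (1 - wx) * wy + img[y1, x1] * wx * wy)

def reweight_flow(flow, fg_mask, eps_fg=1.0, eps_bg=0.25):
    """Clip the flow magnitude to a per-region budget (hypothetical values):
    a larger displacement budget on foreground pixels, smaller on background."""
    budget = np.where(fg_mask, eps_fg, eps_bg)[..., None]
    mag = np.linalg.norm(flow, axis=-1, keepdims=True)
    scale = np.minimum(1.0, budget / np.maximum(mag, 1e-8))
    return flow * scale
```

In an attack loop, the flow field would be optimized against the classifier's loss and passed through `reweight_flow` before each warp, so that the deformation budget — rather than an additive pixel budget — controls perceptibility.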
Related papers
- Adaptive Residual Transformation for Enhanced Feature-Based OOD Detection in SAR Imagery [5.63530048112308]
The presence of unknown targets in real battlefield scenarios is unavoidable.
Various feature-based out-of-distribution approaches have been developed to address this issue.
We propose transforming feature-based OOD detection into a class-localized feature-residual-based approach.
arXiv Detail & Related papers (2024-11-01T00:09:02Z) - Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarial attacks on various downstream models fine-tuned from the segment anything model (SAM). To enhance the effectiveness of adversarial attacks against models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z) - SAR Despeckling via Regional Denoising Diffusion Probabilistic Model [6.154796320245652]
This paper introduces a novel despeckling approach termed Region Denoising Diffusion Probabilistic Model (R-DDPM) based on generative models.
arXiv Detail & Related papers (2024-01-06T04:34:46Z) - Realistic Scatterer Based Adversarial Attacks on SAR Image Classifiers [7.858656052565242]
An adversarial attack perturbs SAR images of on-ground targets such that the classifiers are misled into making incorrect predictions.
We propose the On-Target Scatterer Attack (OTSA), a scatterer-based physical adversarial attack.
We show that our attack obtains significantly higher success rates under the positioning constraint compared with the existing method.
arXiv Detail & Related papers (2023-12-05T17:36:34Z) - DALA: A Distribution-Aware LoRA-Based Adversarial Attack against
Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR)
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness assumes a framework that defends against samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Scattering Model Guided Adversarial Examples for SAR Target Recognition:
Attack and Defense [20.477411616398214]
This article leverages domain knowledge of the SAR imaging process and proposes a novel Scattering Model Guided Adversarial Attack (SMGAA) algorithm.
The proposed SMGAA algorithm can generate adversarial perturbations in the form of electromagnetic scattering responses (called adversarial scatterers).
Comprehensive evaluations on the MSTAR dataset show that the adversarial scatterers generated by SMGAA are more robust to perturbations and transformations in the SAR processing chain than the currently studied attacks.
arXiv Detail & Related papers (2022-09-11T03:41:12Z) - SAR Despeckling using a Denoising Diffusion Probabilistic Model [52.25981472415249]
The presence of speckle degrades the image quality and adversely affects the performance of SAR image understanding applications.
We introduce SAR-DDPM, a denoising diffusion probabilistic model for SAR despeckling.
The proposed method achieves significant improvements in both quantitative and qualitative results over the state-of-the-art despeckling methods.
arXiv Detail & Related papers (2022-06-09T14:00:26Z) - On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, light-weighted, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z) - Universal adversarial perturbation for remote sensing images [41.54094422831997]
This paper proposes a novel method that combines an encoder-decoder network with an attention mechanism to generate a universal adversarial perturbation (UAP) that causes remote sensing image (RSI) classification models to misclassify.
The experimental results show that the UAP makes the RSI classifier misclassify, and the attack success rate (ASR) of our proposed method on the RSI dataset reaches up to 97.35%.
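A UAP is a single image-agnostic perturbation added to every input, and the ASR above measures how often it flips a model's predictions. As a minimal sketch under assumed conventions (the L-infinity budget `eps`, intensity range [0, 1], and the toy classifier in the usage below are illustrative, not from the paper):

```python
import numpy as np

def apply_uap(images, uap, eps=0.1):
    """Add one shared perturbation to every image in the batch,
    clipped to an L-infinity budget eps and to valid intensities [0, 1]."""
    delta = np.clip(uap, -eps, eps)
    return np.clip(images + delta, 0.0, 1.0)

def attack_success_rate(predict, images, labels, uap, eps=0.1):
    """Fraction of inputs whose predicted label changes after adding the UAP."""
    adv = apply_uap(images, uap, eps)
    return float(np.mean(predict(adv) != labels))
```

The key property is that `uap` is optimized once over a training set and then reused unchanged for all test images, which is what makes the perturbation "universal".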
arXiv Detail & Related papers (2022-02-22T06:43:28Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.