Scattering Model Guided Adversarial Examples for SAR Target Recognition:
Attack and Defense
- URL: http://arxiv.org/abs/2209.04779v1
- Date: Sun, 11 Sep 2022 03:41:12 GMT
- Title: Scattering Model Guided Adversarial Examples for SAR Target Recognition:
Attack and Defense
- Authors: Bowen Peng, Bo Peng, Jie Zhou, Jianyue Xie, and Li Liu
- Abstract summary: This article explores the domain knowledge of the SAR imaging process and proposes a novel Scattering Model Guided Adversarial Attack (SMGAA) algorithm.
The proposed SMGAA algorithm can generate adversarial perturbations in the form of electromagnetic scattering response (called adversarial scatterers).
Comprehensive evaluations on the MSTAR dataset show that the adversarial scatterers generated by SMGAA are more robust to perturbations and transformations in the SAR processing chain than the currently studied attacks.
- Score: 20.477411616398214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Network (DNN)-based Synthetic Aperture Radar (SAR) Automatic
Target Recognition (ATR) systems have been shown to be highly vulnerable to
adversarial perturbations that are deliberately designed yet almost
imperceptible, but that can bias DNN inference when added to targeted objects.
This leads to serious safety concerns when applying DNNs to high-stakes SAR ATR
applications. Therefore, enhancing the adversarial robustness of DNNs is
essential for deploying DNNs in modern real-world SAR ATR systems. Toward
building more robust DNN-based SAR ATR models, this article explores the domain
knowledge of the SAR imaging process and proposes a novel Scattering Model
Guided Adversarial Attack (SMGAA) algorithm, which can generate adversarial
perturbations in the form of electromagnetic scattering response (called
adversarial scatterers). The proposed SMGAA consists of two parts: 1) a
parametric scattering model and corresponding imaging method and 2) a
customized gradient-based optimization algorithm. First, we introduce the
effective Attributed Scattering Center Model (ASCM) and a general imaging
method to describe the scattering behavior of typical geometric structures in
the SAR imaging process. By further devising several strategies to take the
domain knowledge of SAR target images into account and to relax the greedy
search procedure, the proposed method does not need to be carefully fine-tuned,
but can efficiently find effective ASCM parameters to fool SAR classifiers and
facilitate robust model training. Comprehensive evaluations
on the MSTAR dataset show that the adversarial scatterers generated by SMGAA
are more robust to perturbations and transformations in the SAR processing
chain than the currently studied attacks, and are effective for constructing a
defensive model against malicious scatterers.
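For context, the Attributed Scattering Center Model mentioned in the abstract has a standard frequency-aspect form in the SAR literature (Gerry et al.); the exact parameterization used in this paper may differ. A representative form of the response of one scatterer, with frequency f and aspect angle φ, is:

```latex
E(f,\phi;\Theta) = A \left( j\frac{f}{f_c} \right)^{\alpha}
  \exp\!\left( -j\frac{4\pi f}{c}\,(x\cos\phi + y\sin\phi) \right)
  \operatorname{sinc}\!\left( \frac{2\pi f}{c}\, L \sin(\phi - \bar{\phi}) \right)
  \exp\!\left( -2\pi f \gamma \sin\phi \right)
```

Here A is the amplitude, f_c the radar center frequency, α the frequency-dependence exponent, (x, y) the scatterer position, L and φ̄ the length and orientation of a distributed scatterer, and γ the aspect-dependence term for a localized one. Fooling a classifier then reduces to searching over the parameter set Θ.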
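The second ingredient is the gradient-based search over scatterer parameters. Below is a minimal, hypothetical PyTorch sketch of the general idea only: render a parametric scatterer into the image chip and run a PGD-style loop on its parameters to maximize the classifier's loss. The rendering is a crude Gaussian stand-in for the full ASCM-plus-imaging chain, and all names (render_scatterer, attack, clf) are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch of a scatterer-parameter attack; not the paper's code.
import torch
import torch.nn.functional as F

def render_scatterer(params: torch.Tensor, size: int = 128) -> torch.Tensor:
    """Render one point-like scatterer into a (size, size) chip.
    A Gaussian blob stands in for the full ASCM response + imaging step."""
    amp, x, y = params  # amplitude and normalized position in [-1, 1]
    coords = torch.linspace(-1.0, 1.0, size)
    gy, gx = torch.meshgrid(coords, coords, indexing="ij")
    sigma = 0.02  # blob width, a stand-in for the system impulse response
    return amp * torch.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * sigma ** 2))

def attack(clf: torch.nn.Module, image: torch.Tensor, label: torch.Tensor,
           steps: int = 50, lr: float = 0.05) -> torch.Tensor:
    """Search scatterer parameters that maximize the classifier's loss."""
    params = torch.tensor([0.5, 0.0, 0.0], requires_grad=True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        adv = (image + render_scatterer(params)).clamp(0.0, 1.0)
        logits = clf(adv.unsqueeze(0).unsqueeze(0))  # NCHW input assumed
        loss = -F.cross_entropy(logits, label.unsqueeze(0))  # untargeted
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep parameters physically plausible
            params[0].clamp_(0.0, 1.0)    # amplitude stays non-negative
            params[1:].clamp_(-1.0, 1.0)  # position stays inside the chip
    return (image + render_scatterer(params.detach())).clamp(0.0, 1.0)

# Usage (hypothetical): adv_chip = attack(model, chip, torch.tensor(3))
```

Optimizing a handful of physically meaningful parameters, rather than a dense pixel perturbation, is what makes the resulting adversarial scatterers survive the SAR processing chain; the same rendered perturbations can then be folded into training data to build the defensive model the abstract describes.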
Related papers
- Uncertainty-Aware SAR ATR: Defending Against Adversarial Attacks via Bayesian Neural Networks [7.858656052565242]
Adversarial attacks have demonstrated the vulnerability of Machine Learning (ML) image classifiers in Automatic Target Recognition (ATR) systems.
We propose a novel uncertainty-aware SAR ATR system for detecting adversarial attacks.
arXiv Detail & Related papers (2024-03-27T07:40:51Z) - Black-box Adversarial Attacks against Dense Retrieval Models: A
Multi-view Contrastive Learning Method [115.29382166356478]
We introduce the adversarial retrieval attack (AREA) task.
It is meant to trick DR models into retrieving a target document that is outside the initial set of candidate documents retrieved by the DR model.
We find that the promising results previously reported on attacking NRMs do not generalize to DR models.
We propose to formalize attacks on DR models as a contrastive learning problem in a multi-view representation space.
arXiv Detail & Related papers (2023-08-19T00:24:59Z) - FACADE: A Framework for Adversarial Circuit Anomaly Detection and
Evaluation [9.025997629442896]
FACADE is designed for unsupervised mechanistic anomaly detection in deep neural networks.
Our approach seeks to improve model robustness and enhance scalable model oversight, and it demonstrates promising applications in real-world deployment settings.
arXiv Detail & Related papers (2023-07-20T04:00:37Z) - TSFool: Crafting Highly-Imperceptible Adversarial Time Series through Multi-Objective Attack [6.243453526766042]
We propose an efficient method called TSFool to craft highly-imperceptible adversarial time series for RNN-based TSC.
The core idea is a new global optimization objective known as the "Camouflage Coefficient", which captures the imperceptibility of adversarial samples from the class distribution.
Experiments on 11 UCR and UEA datasets showcase that TSFool significantly outperforms six white-box and three black-box benchmark attacks.
arXiv Detail & Related papers (2022-09-14T03:02:22Z) - Threat Model-Agnostic Adversarial Defense using Diffusion Models [14.603209216642034]
Deep Neural Networks (DNNs) are highly sensitive to imperceptible malicious perturbations, known as adversarial attacks.
arXiv Detail & Related papers (2022-07-17T06:50:48Z) - Mixture GAN For Modulation Classification Resiliency Against Adversarial
Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial attack examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which raises the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z) - From Environmental Sound Representation to Robustness of 2D CNN Models
Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z) - Universal adversarial perturbation for remote sensing images [41.54094422831997]
This paper proposes a novel method combining an encoder-decoder network with an attention mechanism to verify that a UAP can cause the RSI classification model to misclassify.
The experimental results show that the UAP can cause RSI misclassification, and the attack success rate (ASR) of our proposed method on the RSI data set is as high as 97.35%.
arXiv Detail & Related papers (2022-02-22T06:43:28Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)