Realistic Scatterer Based Adversarial Attacks on SAR Image Classifiers
- URL: http://arxiv.org/abs/2312.02912v1
- Date: Tue, 5 Dec 2023 17:36:34 GMT
- Title: Realistic Scatterer Based Adversarial Attacks on SAR Image Classifiers
- Authors: Tian Ye, Rajgopal Kannan, Viktor Prasanna, Carl Busart, Lance Kaplan
- Abstract summary: An adversarial attack perturbs SAR images of on-ground targets such that the classifiers are misled into making incorrect predictions.
We propose the On-Target Scatterer Attack (OTSA), a scatterer-based physical adversarial attack.
We show that our attack obtains significantly higher success rates under the positioning constraint compared with the existing method.
- Score: 7.858656052565242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks have highlighted the vulnerability of classifiers based
on machine learning for Synthetic Aperture Radar (SAR) Automatic Target
Recognition (ATR) tasks. An adversarial attack perturbs SAR images of on-ground
targets such that the classifiers are misled into making incorrect predictions.
However, many existing attack techniques rely on arbitrary manipulation of
SAR images and overlook whether the attacks are feasible on real-world SAR
imagery. Instead, adversarial attacks should be implementable through physical
actions, for example, placing additional false objects as scatterers around
the on-ground target to perturb the SAR image and fool the SAR ATR system.
In this paper, we propose the On-Target Scatterer Attack (OTSA), a
scatterer-based physical adversarial attack. To ensure the feasibility of its
physical execution, we enforce a constraint on the positioning of the
scatterers. Specifically, we restrict the scatterers to be placed only on the
target instead of in the shadow regions or the background. To achieve this, we
introduce a positioning score based on Gaussian kernels and formulate an
optimization problem for our OTSA attack. Solving this optimization problem
with gradient ascent, OTSA generates a vector of parameters describing the
positions, shapes, sizes, and amplitudes of the scatterers, which guides the
physical execution of an attack that misleads SAR image
classifiers. The experimental results show that our attack obtains
significantly higher success rates under the positioning constraint compared
with the existing method.
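
To make the optimization concrete, below is a minimal, hedged sketch of one OTSA-style gradient-ascent step in PyTorch. This is not the authors' code: `classifier`, `render_scatterers` (which would map scatterer parameters to their SAR image response), the parameter layout, and the weights `lam` and `lr` are illustrative assumptions; only the ingredients named in the abstract (a Gaussian-kernel positioning score and gradient ascent over scatterer positions, shapes, sizes, and amplitudes) come from the paper.

```python
import torch
import torch.nn.functional as F

def positioning_score(pos, target_pixels, sigma=2.0):
    # Gaussian-kernel score: large when each scatterer center in `pos`
    # (K, 2) lies on or near the target pixels (M, 2), small in the
    # shadow and background regions.
    d2 = ((pos[:, None, :] - target_pixels[None, :, :]) ** 2).sum(-1)  # (K, M)
    return torch.exp(-d2 / (2 * sigma ** 2)).max(dim=1).values.sum()

def otsa_step(params, image, label, classifier, render_scatterers,
              target_pixels, lam=1.0, lr=0.05):
    # params: (K, P) tensor; assumed layout puts each scatterer's (row, col)
    # position in the first two columns, with shape/size/amplitude after.
    params = params.detach().requires_grad_(True)
    perturbed = image + render_scatterers(params)   # add synthetic scatterers
    logits = classifier(perturbed.unsqueeze(0))
    # Ascend the misclassification loss plus the weighted positioning score.
    objective = F.cross_entropy(logits, label.view(1)) \
        + lam * positioning_score(params[:, :2], target_pixels)
    objective.backward()
    return (params + lr * params.grad).detach()     # one gradient-ascent step
```

In this sketch the positioning score rewards scatterer centers that fall on target pixels, so ascent simultaneously pushes toward misclassification and keeps scatterers off the shadow and background regions.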
Related papers
- Uncertainty-Aware SAR ATR: Defending Against Adversarial Attacks via Bayesian Neural Networks [7.858656052565242]
Adversarial attacks have demonstrated the vulnerability of Machine Learning (ML) image classifiers in Automatic Target Recognition (ATR) systems.
We propose a novel uncertainty-aware SAR ATR method for detecting adversarial attacks.
arXiv Detail & Related papers (2024-03-27T07:40:51Z)
- SAR-AE-SFP: SAR Imagery Adversarial Example in Real Physics domain with Target Scattering Feature Parameters [2.3930545422544856]
Current adversarial example generation methods for SAR imagery operate in the 2D digital domain; the results are known as image adversarial examples.
This paper proposes SAR-AE-SFP-Attack, a method to generate real physics adversarial examples by altering the scattering feature parameters of target objects.
Experimental results show that SAR-AE-SFP-Attack significantly improves attack efficiency on both CNN-based and Transformer-based models.
arXiv Detail & Related papers (2024-03-02T13:52:28Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- SAIF: Sparse Adversarial and Imperceptible Attack Framework [7.025774823899217]
We propose a novel attack technique called the Sparse Adversarial and Interpretable Attack Framework (SAIF).
Specifically, we design imperceptible attacks that contain low-magnitude perturbations at a small number of pixels and leverage these sparse attacks to reveal the vulnerability of classifiers.
SAIF computes highly imperceptible and interpretable adversarial examples, and outperforms state-of-the-art sparse attack methods on the ImageNet dataset.
arXiv Detail & Related papers (2022-12-14T20:28:50Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- CARBEN: Composite Adversarial Robustness Benchmark [70.05004034081377]
This paper demonstrates how a composite adversarial attack (CAA) affects the resulting image.
It provides real-time inference on different models, helping users configure the parameters of the attack level.
A leaderboard to benchmark adversarial robustness against CAA is also introduced.
arXiv Detail & Related papers (2022-07-16T01:08:44Z)
- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding. (For background on PGD itself, see the sketch after this list.)
arXiv Detail & Related papers (2022-05-19T14:26:50Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine the sparsity constraint with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm that jointly tackles the sparsity constraint and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
arXiv Detail & Related papers (2020-12-10T18:14:03Z)
- Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks [21.349059923635515]
Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks.
We propose a probabilistic post-hoc framework that refines given dense attacks by significantly reducing the number of perturbed pixels.
Our framework performs adversarial attacks much faster than existing sparse attacks.
arXiv Detail & Related papers (2020-10-13T02:51:10Z)
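
Two of the entries above (G-PGA and "On Trace of PGD-Like Adversarial Attacks") build on projected gradient descent (PGD). For orientation only, here is a minimal sketch of untargeted L_inf PGD in PyTorch; the `model` placeholder, the (0, 1) pixel range, and the eps/alpha/steps defaults are illustrative assumptions, not drawn from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Untargeted L_inf PGD: repeatedly ascend the classification loss,
    # then project the iterate back into the eps-ball around the clean
    # input x and clip to the valid pixel range.
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()    # ascent step
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project into eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)            # keep valid pixels
    return x_adv.detach()
```

Per its summary, G-PGA's contribution is a guidance mechanism that removes the need for the random restarts and step-size search that loops like this one typically rely on.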
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.