Uncertainty-Aware SAR ATR: Defending Against Adversarial Attacks via Bayesian Neural Networks
- URL: http://arxiv.org/abs/2403.18318v1
- Date: Wed, 27 Mar 2024 07:40:51 GMT
- Title: Uncertainty-Aware SAR ATR: Defending Against Adversarial Attacks via Bayesian Neural Networks
- Authors: Tian Ye, Rajgopal Kannan, Viktor Prasanna, Carl Busart
- Abstract summary: Adversarial attacks have demonstrated the vulnerability of Machine Learning (ML) image classifiers in Automatic Target Recognition (ATR) systems.
We propose a novel uncertainty-aware SAR ATR framework for detecting adversarial attacks.
- Score: 7.858656052565242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks have demonstrated the vulnerability of Machine Learning (ML) image classifiers in Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems. An adversarial attack can deceive the classifier into making incorrect predictions by perturbing the input SAR images, for example, with a few scatterers attached to the on-ground objects. Therefore, it is critical to develop robust SAR ATR systems that can detect potential adversarial attacks by leveraging the inherent uncertainty in ML classifiers, thereby effectively alerting human decision-makers. In this paper, we propose a novel uncertainty-aware SAR ATR framework for detecting adversarial attacks. Specifically, we leverage the capability of Bayesian Neural Networks (BNNs) in performing image classification with quantified epistemic uncertainty to measure the confidence for each input SAR image. By evaluating the uncertainty, our method alerts when the input SAR image is likely to be adversarially generated. Simultaneously, we also generate visual explanations that reveal the specific regions in the SAR image where the adversarial scatterers are likely to be present, thus aiding human decision-making with hints of evidence of adversarial attacks. Experiments on the MSTAR dataset demonstrate that our approach can identify over 80% of adversarial SAR images with fewer than 20% false alarms, and our visual explanations can identify up to 90% or more of the scatterers in an adversarial SAR image.
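The abstract does not include implementation details, so the following is only a minimal sketch of the general idea: quantify epistemic uncertainty with a Bayesian-style classifier and alert when it is high. Monte Carlo dropout stands in here for a full BNN, and the network, image size, and threshold are placeholder assumptions, not the authors' actual setup.

```python
# Minimal sketch: flag likely-adversarial SAR inputs via epistemic uncertainty.
# Monte Carlo dropout is used as a simple stand-in for a full Bayesian neural
# network; the architecture, threshold, and image size are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropoutCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.drop = nn.Dropout(p=0.5)          # kept active at test time
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(self.drop(h))

@torch.no_grad()
def epistemic_uncertainty(model, x, n_samples: int = 30):
    """Mutual information between prediction and model weights (BALD score)."""
    model.train()                               # keep dropout stochastic
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(0)                        # predictive distribution
    entropy_mean = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
    mean_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)
    return entropy_mean - mean_entropy          # epistemic part of the uncertainty

model = MCDropoutCNN()
x = torch.randn(4, 1, 64, 64)                   # placeholder SAR chips
u = epistemic_uncertainty(model, x)
is_adversarial = u > 0.2                        # threshold tuned on validation data
print(u, is_adversarial)
```

In practice the alert threshold would be tuned on clean validation data so that the false-alarm rate stays below the desired level (e.g., the under-20% false alarms reported above).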
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, producing adversarial forgeries that evade forensic detection.
arXiv Detail & Related papers (2024-08-11T01:22:29Z)
- Realistic Scatterer Based Adversarial Attacks on SAR Image Classifiers [7.858656052565242]
An adversarial attack perturbs SAR images of on-ground targets such that the classifiers are misled into making incorrect predictions.
We propose the On-Target Scatterer Attack (OTSA), a scatterer-based physical adversarial attack.
We show that our attack obtains significantly higher success rates under the positioning constraint than the existing method (see the sketch after this entry).
arXiv Detail & Related papers (2023-12-05T17:36:34Z)
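The summary above only names the attack, so the following is a hedged sketch of a scatterer-style attack under a positioning constraint: a bright point response is placed at candidate locations restricted to the target region, and the position that most increases the classifier's loss is kept. The classifier, target mask, and scatterer shape are illustrative placeholders, not the OTSA implementation.

```python
# Hedged sketch of a scatterer-style physical attack: place a small bright blob
# ("scatterer") on the target region and keep the position that most increases
# the classifier's loss. All components here are placeholders for illustration.
import torch
import torch.nn.functional as F

def gaussian_blob(size, cy, cx, sigma=1.5, amplitude=1.0):
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    return amplitude * torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

@torch.no_grad()
def scatterer_attack(classifier, image, label, target_mask, n_candidates=50):
    """Greedy search for one scatterer position inside the target mask."""
    best_loss, best_image = -float("inf"), image
    coords = target_mask.nonzero()              # positioning constraint: on-target only
    idx = torch.randperm(len(coords))[:n_candidates]
    for cy, cx in coords[idx]:
        perturbed = torch.clamp(image + gaussian_blob(image.shape[-1], cy, cx), 0, 1)
        logits = classifier(perturbed.view(1, 1, *perturbed.shape))
        loss = F.cross_entropy(logits, torch.tensor([label])).item()
        if loss > best_loss:
            best_loss, best_image = loss, perturbed
    return best_image

# toy usage with a random classifier and a square "target" mask
clf = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 10))
img = torch.rand(64, 64)
mask = torch.zeros(64, 64, dtype=torch.bool); mask[24:40, 24:40] = True
adv = scatterer_attack(clf, img, label=3, target_mask=mask)
```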
- Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous [8.191688622709444]
We propose a novel approach for adversarial attack detection for deep neural network-based relative pose estimation schemes.
The proposed adversarial attack detector achieves a detection accuracy of 99.21%.
arXiv Detail & Related papers (2023-11-10T11:07:31Z)
- Scattering Model Guided Adversarial Examples for SAR Target Recognition: Attack and Defense [20.477411616398214]
This article explores the domain knowledge of the SAR imaging process and proposes a novel Scattering Model Guided Adversarial Attack (SMGAA) algorithm.
The proposed SMGAA algorithm can generate adversarial perturbations in the form of electromagnetic scattering responses (called adversarial scatterers).
Comprehensive evaluations on the MSTAR dataset show that the adversarial scatterers generated by SMGAA are more robust to perturbations and transformations in the SAR processing chain than the currently studied attacks (see the sketch after this entry).
arXiv Detail & Related papers (2022-09-11T03:41:12Z)
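SMGAA's electromagnetic scattering model is not described in the summary, so the sketch below substitutes a simple differentiable Gaussian point response for the attributed scattering-center model: a few scatterers are parameterized by position and amplitude and optimized by gradient ascent on the classification loss. All names and values here are assumptions for illustration only.

```python
# Hedged sketch of a scattering-model-guided attack: parameterize a few point
# scatterers (position, amplitude) and optimize them by gradient descent so the
# classifier misclassifies. A Gaussian point response stands in for the
# electromagnetic scattering-center model used in the actual paper.
import torch
import torch.nn.functional as F

def render_scatterers(params, size=64, sigma=1.5):
    """params: (K, 3) rows of (row, col, amplitude) -> (size, size) image."""
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32), indexing="ij")
    out = torch.zeros(size, size)
    for r, c, a in params:
        out = out + a * torch.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
    return out

clf = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 10))
image, label = torch.rand(64, 64), torch.tensor([3])

# three scatterers, initialized near the image center, optimized to raise the loss
params = torch.tensor([[30.0, 30.0, 0.5], [34.0, 28.0, 0.5], [28.0, 34.0, 0.5]],
                      requires_grad=True)
opt = torch.optim.Adam([params], lr=0.5)
for _ in range(100):
    adv = torch.clamp(image + render_scatterers(params), 0, 1)
    loss = -F.cross_entropy(clf(adv.view(1, 1, 64, 64)), label)  # maximize CE loss
    opt.zero_grad(); loss.backward(); opt.step()
```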
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Multi-Expert Adversarial Attack Detection in Person Re-identification Using Context Inconsistency [47.719533482898306]
We propose a Multi-Expert Adversarial Attack Detection (MEAAD) approach to detect malicious attacks on person re-identification (ReID) systems.
As the first adversarial attack detection approach for ReID, MEAAD effectively detects various adversarial attacks and achieves high ROC-AUC (over 97.5%).
arXiv Detail & Related papers (2021-08-23T01:59:09Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models (see the sketch after this entry).
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
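As a rough illustration of a momentum-driven patch attack on a counting model, the sketch below updates only the pixels inside a patch mask using an accumulated gradient so that the predicted count drifts away from the ground truth. The counting model, mask, and hyperparameters are placeholders rather than the paper's setup.

```python
# Hedged sketch of a momentum-based adversarial patch attack: only pixels inside
# a patch mask are updated, using an accumulated (momentum) gradient so the
# predicted crowd count is pushed away from the ground truth.
import torch
import torch.nn as nn

def momentum_patch_attack(model, image, gt_count, patch_mask,
                          steps=40, step_size=0.02, decay=1.0):
    adv = image.clone()
    momentum = torch.zeros_like(adv)
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        pred_count = model(adv.unsqueeze(0)).sum()          # density map -> count
        loss = (pred_count - gt_count) ** 2                 # counting error to maximize
        grad, = torch.autograd.grad(loss, adv)
        momentum = decay * momentum + grad / grad.abs().sum().clamp_min(1e-12)
        with torch.no_grad():
            adv = adv + step_size * momentum.sign() * patch_mask   # patch pixels only
            adv = adv.clamp(0, 1)
    return adv.detach()

# toy usage: a tiny convolutional "density map" regressor and a square patch
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1), nn.ReLU())
image = torch.rand(1, 128, 128)
mask = torch.zeros_like(image); mask[:, 40:72, 40:72] = 1.0
adv = momentum_patch_attack(model, image, gt_count=torch.tensor(50.0), patch_mask=mask)
```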
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists a consistent relationship between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the aspect of prediction confidence.
We propose a method beyond image space built on a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts (see the sketch after this entry).
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
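The two-stream design lends itself to a short sketch: one stream sees the raw pixels, the other sees the input gradient of the classifier's top-class confidence, and a small head fuses both into a clean-versus-adversarial decision. The backbones, target classifier, and feature sizes below are assumptions, not the paper's architecture.

```python
# Hedged sketch of a two-stream adversarial-example detector: an image stream on
# raw pixels plus a gradient stream on the input gradient of the classifier's
# confidence, fused into a binary clean/adversarial decision.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # model under attack

def confidence_gradient(x):
    """Gradient of the top-class confidence w.r.t. the input image."""
    x = x.clone().requires_grad_(True)
    conf = classifier(x).softmax(-1).max(-1).values.sum()
    grad, = torch.autograd.grad(conf, x)
    return grad.detach()

class TwoStreamDetector(nn.Module):
    def __init__(self):
        super().__init__()
        def stream():                              # small CNN used by both streams
            return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.image_stream, self.grad_stream = stream(), stream()
        self.head = nn.Linear(2 * 8 * 4 * 4, 2)    # clean vs. adversarial

    def forward(self, x):
        feats = torch.cat([self.image_stream(x),
                           self.grad_stream(confidence_gradient(x))], dim=1)
        return self.head(feats)

detector = TwoStreamDetector()
logits = detector(torch.rand(4, 3, 32, 32))        # placeholder batch
```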
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by a recently introduced non-robust feature.
In this paper, we consider the non-robust features as a common property of adversarial examples, and we deduce it is possible to find a cluster in representation space corresponding to the property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster and to leverage that distribution for a likelihood-based adversarial detector (see the sketch after this entry).
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
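A minimal sketch of a likelihood-based detector of this kind: fit a Gaussian to the representations of known adversarial examples (the separate cluster) and flag a test input when its representation is likely under that density. The feature extractor, data, and threshold below are placeholders.

```python
# Hedged sketch of a likelihood-based detector over an "adversarial cluster" of
# feature representations; data, dimensions, and threshold are placeholders.
import torch
from torch.distributions import MultivariateNormal

def fit_cluster(features):
    """features: (N, D) representations of known adversarial examples."""
    mean = features.mean(0)
    centered = features - mean
    cov = centered.T @ centered / (len(features) - 1)
    cov = cov + 1e-3 * torch.eye(features.shape[1])       # regularize for stability
    return MultivariateNormal(mean, covariance_matrix=cov)

torch.manual_seed(0)
adv_feats = torch.randn(500, 16) + 2.0        # pretend adversarial representations
test_feats = torch.randn(10, 16)              # pretend clean representations

cluster = fit_cluster(adv_feats)
log_lik = cluster.log_prob(test_feats)        # per-example log-likelihood
is_adversarial = log_lik > -30.0              # threshold chosen on validation data
print(log_lik, is_adversarial)
```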
- Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain [2.4704085162861693]
Adversarial attacks that render Deep Neural Network (DNN) classifiers vulnerable in real life represent a serious threat in autonomous vehicles, malware filters, or biometric authentication systems.
We apply the Fast Gradient Sign Method (FGSM) to introduce perturbations to a facial image dataset and then test the output on a different classifier.
We craft a variety of different black-box attack algorithms on a facial image dataset assuming minimal adversarial knowledge (see the sketch after this entry).
arXiv Detail & Related papers (2020-01-30T00:25:05Z)
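The transfer setup described above (craft perturbations on one model, test them on another) can be sketched with the standard one-step FGSM; both models and the face data below are placeholders assumed for illustration.

```python
# Hedged sketch of an FGSM transfer attack: perturbations are crafted on a
# surrogate model and evaluated against a different (victim) classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step Fast Gradient Sign Method perturbation."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))   # attacker's model
victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))      # deployed model

x = torch.rand(8, 3, 64, 64)              # placeholder face images
y = torch.randint(0, 5, (8,))             # placeholder identity labels
x_adv = fgsm(surrogate, x, y)             # crafted with minimal knowledge of the victim
transfer_acc = (victim(x_adv).argmax(-1) == y).float().mean()
print(float(transfer_acc))
```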