Defense against Adversarial Cloud Attack on Remote Sensing Salient
Object Detection
- URL: http://arxiv.org/abs/2306.17431v2
- Date: Wed, 5 Jul 2023 16:15:10 GMT
- Title: Defense against Adversarial Cloud Attack on Remote Sensing Salient
Object Detection
- Authors: Huiming Sun, Lan Fu, Jinlong Li, Qing Guo, Zibo Meng, Tianyun Zhang,
Yuewei Lin, Hongkai Yu
- Abstract summary: We propose to jointly tune adversarial exposure and additive perturbation for attack and constrain image close to cloudy image as Adversarial Cloud.
DefenseNet can defend the proposed Adversarial Cloud in white-box setting and other attack methods in black-box setting.
- Score: 21.028664417133793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting the salient objects in a remote sensing image has wide
applications in interdisciplinary research. Many deep learning methods have
been proposed for Salient Object Detection (SOD) in remote sensing images and
achieve remarkable results. However, recent adversarial examples, generated by
changing a few pixel values in the original remote sensing image, can cause a
well-trained deep learning based SOD model to collapse. Different from existing
methods that add perturbation to the original images, we propose to jointly
tune adversarial exposure and additive perturbation for the attack while
constraining the image to stay close to a cloudy image, which we call the
Adversarial Cloud. Clouds are natural and common in remote sensing images;
however, cloud-camouflaged adversarial attacks and defenses for remote sensing
images have not been well studied before. Furthermore, we design DefenseNet as
a learnable pre-processing step for the adversarial cloudy images so as to
preserve the performance of the deep learning based remote sensing SOD model
without tuning the already deployed deep SOD model. By considering both regular
and generalized adversarial examples, the proposed DefenseNet can defend
against the proposed Adversarial Cloud in the white-box setting and against
other attack methods in the black-box setting. Experimental results on a
benchmark synthesized from the public remote sensing SOD dataset (EORSSD) show
promising defense against adversarial cloud attacks.
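As a rough sketch of how such an exposure-plus-perturbation attack could be set up, the following PyTorch snippet jointly optimizes a multiplicative exposure map and an additive perturbation while penalizing distance to a cloudy reference image. The `sod_model`, loss weighting, and optimizer settings are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def adversarial_cloud_attack(sod_model, image, cloudy_ref, gt_mask,
                             steps=50, lr=1e-2, lam=1.0):
    """Jointly optimize an exposure map and an additive perturbation so the
    attacked image fools the SOD model while staying close to a cloudy image.
    Assumes sod_model outputs a saliency probability map in [0, 1]."""
    # Exposure is parameterized multiplicatively, perturbation additively.
    log_exposure = torch.zeros_like(image, requires_grad=True)  # exp(0) = 1, i.e. no change
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([log_exposure, delta], lr=lr)

    for _ in range(steps):
        attacked = (image * torch.exp(log_exposure) + delta).clamp(0, 1)
        pred = sod_model(attacked)                               # predicted saliency map
        # Maximize the SOD error (untargeted attack) ...
        attack_loss = -F.binary_cross_entropy(pred, gt_mask)
        # ... while keeping the attacked image close to the cloudy reference.
        cloud_loss = F.mse_loss(attacked, cloudy_ref)
        loss = attack_loss + lam * cloud_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (image * torch.exp(log_exposure) + delta).clamp(0, 1).detach()
```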
Related papers
- Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector [97.92369017531038]
We build a new laRge-scale Adversarial images dataset with Diverse hArmful Responses (RADAR).
We then develop a novel iN-time Embedding-based AdveRSarial Image DEtection (NEARSIDE) method, which exploits a single vector distilled from the hidden states of Visual Language Models (VLMs) to distinguish adversarial images from benign ones in the input.
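The detection step described above reduces to a projection test against one vector. A minimal sketch, assuming the distilled direction vector and a calibrated threshold are already available (both hypothetical here):

```python
import numpy as np

def is_adversarial(embedding: np.ndarray, direction: np.ndarray, threshold: float) -> bool:
    """Flag an input as adversarial if its hidden-state embedding projects onto
    the attacking direction beyond a threshold. `direction` stands in for the
    single vector distilled from VLM hidden states; the distillation itself is
    not shown here."""
    score = float(np.dot(embedding, direction) / (np.linalg.norm(direction) + 1e-12))
    return score > threshold
```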
arXiv Detail & Related papers (2024-10-30T10:33:10Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves on average 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Detection of Adversarial Physical Attacks in Time-Series Image Data [12.923271427789267]
We propose VisionGuard* (VG*), which couples VisionGuard (VG) with majority-vote methods to detect adversarial physical attacks in time-series image data.
This is motivated by autonomous systems applications where images are collected over time using onboard sensors for decision-making purposes.
We have evaluated VG* on videos of both clean and physically attacked traffic signs generated by a state-of-the-art robust physical attack.
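A minimal sketch of the majority-vote aggregation over per-frame verdicts; the per-frame detector itself (e.g. VisionGuard) is treated as a black box and not shown:

```python
from collections import Counter

def majority_vote(per_frame_verdicts):
    """Aggregate per-frame attack verdicts (True = frame flagged as attacked)
    over a window of time-series images into a single decision."""
    counts = Counter(per_frame_verdicts)
    return counts[True] > counts[False]

# Example: 3 of 5 frames flagged as attacked -> the sequence is declared attacked.
print(majority_vote([True, False, True, True, False]))  # True
```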
arXiv Detail & Related papers (2023-04-27T02:08:13Z)
- LeNo: Adversarial Robust Salient Object Detection Networks with Learnable Noise [7.794351961083746]
This paper proposes a lightweight Learnable Noise (LeNo) to defend SOD models against adversarial attacks.
LeNo preserves the accuracy of SOD models on both adversarial and clean images, as well as the inference speed.
Inspired by the center prior of the human visual attention mechanism, we initialize the shallow noise with a cross-shaped Gaussian distribution for better defense against adversarial attacks.
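A minimal sketch of such a cross-shaped Gaussian initialization for the learnable noise, assuming a single-channel noise map and an illustrative band width:

```python
import torch

def cross_shaped_gaussian_noise(height, width, sigma=0.1, band_ratio=0.2):
    """Initialize a learnable noise map that is Gaussian inside a cross-shaped
    band through the image center and zero elsewhere, reflecting the center
    prior of visual attention. Shapes and band width are illustrative."""
    noise = torch.zeros(height, width)
    h_band = int(height * band_ratio)
    w_band = int(width * band_ratio)
    rows = slice((height - h_band) // 2, (height + h_band) // 2)
    cols = slice((width - w_band) // 2, (width + w_band) // 2)
    noise[rows, :] = torch.randn(h_band, width) * sigma   # horizontal bar
    noise[:, cols] = torch.randn(height, w_band) * sigma  # vertical bar (overwrites overlap)
    return torch.nn.Parameter(noise)                       # learnable during defense training
```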
arXiv Detail & Related papers (2022-10-27T12:52:55Z)
- Object-Attentional Untargeted Adversarial Attack [11.800889173823945]
We propose an object-attentional adversarial attack method for untargeted attack.
Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection region from HVPNet.
Then, we perform an adversarial attack only on the detected object region by leveraging the Simple Black-box Adversarial Attack (SimBA).
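A hedged sketch of restricting a SimBA-style step to the intersected object region; `model_prob`, the mask source, and the step size are stand-ins rather than the paper's exact procedure:

```python
import numpy as np

def masked_simba_step(image, region_mask, model_prob, eps=0.05, rng=None):
    """One SimBA-style step restricted to the object region: pick a random pixel
    inside the mask, try +/- eps, and keep the change that lowers the model's
    confidence. `model_prob(image)` stands in for the target model's score on
    the true label; `image` is assumed to be HWC in [0, 1]."""
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(region_mask)             # candidate coordinates inside the region
    i = rng.integers(len(ys))
    y, x = ys[i], xs[i]
    c = rng.integers(image.shape[2])             # random channel

    base = model_prob(image)
    best = image
    for sign in (+eps, -eps):
        candidate = image.copy()
        candidate[y, x, c] = np.clip(candidate[y, x, c] + sign, 0.0, 1.0)
        if model_prob(candidate) < base:
            best = candidate
            break
    return best

# The object region itself could come from intersecting two binary masks:
# region_mask = detection_mask & saliency_mask
```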
arXiv Detail & Related papers (2022-10-16T07:45:13Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Adversarial Attacks against a Satellite-borne Multispectral Cloud Detector [33.11869627537352]
In this paper, we highlight the vulnerability of deep learning-based cloud detection to adversarial attacks.
By optimising an adversarial pattern and superimposing it into a cloudless scene, we bias the neural network into detecting clouds in the scene.
This opens up the potential of multi-objective attacks, specifically, adversarial biasing in the cloud-sensitive bands and visual camouflage in the visible bands.
arXiv Detail & Related papers (2021-12-03T05:27:50Z)
- IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking [70.14487738649373]
Adversarial attacks arise from the vulnerability of deep neural networks when perceiving input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers.
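A minimal sketch of the decision-based idea, where random perturbations are kept only if they lower the IoU between the tracker's prediction on the perturbed frame and the clean-frame prediction; `tracker_predict` and the noise schedule are assumptions, not the paper's exact algorithm:

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def iou_attack_step(frame, clean_box, tracker_predict, noise_scale=2.0, tries=10, rng=None):
    """Decision-based step: sample random noise and keep the perturbed frame whose
    tracker prediction has the lowest IoU with the clean-frame prediction.
    `tracker_predict(frame)` stands in for the black-box tracker."""
    rng = rng or np.random.default_rng()
    best_frame, best_iou = frame, iou(tracker_predict(frame), clean_box)
    for _ in range(tries):
        candidate = np.clip(frame + rng.normal(0, noise_scale, frame.shape), 0, 255)
        score = iou(tracker_predict(candidate), clean_box)
        if score < best_iou:
            best_frame, best_iou = candidate, score
    return best_frame
```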
arXiv Detail & Related papers (2021-03-27T16:20:32Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images and achieves robustness comparable to standard adversarial training against Lp attacks.
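A rough sketch of generating the two kinds of adversarial examples used by such dual-manifold training, assuming a differentiable `generator` that maps latent codes to images; the single-step attacks and bounds are illustrative, not the paper's exact recipe:

```python
import torch

def dual_manifold_examples(model, generator, z, y, loss_fn,
                           eps_img=8 / 255, eps_lat=0.02):
    """Craft one off-manifold (image-space) and one on-manifold (latent-space)
    adversarial example for the same sample, to be used when robustifying the
    model. `generator` maps a latent code to an image in [0, 1]."""
    x = generator(z).detach()

    # Image-space (Lp-style) perturbation via one signed-gradient step.
    x_img = x.clone().requires_grad_(True)
    loss_fn(model(x_img), y).backward()
    x_adv_img = (x + eps_img * x_img.grad.sign()).clamp(0, 1).detach()

    # Latent-space perturbation: perturb z, then decode back onto the image manifold.
    z_adv = z.clone().requires_grad_(True)
    loss_fn(model(generator(z_adv)), y).backward()
    x_adv_lat = generator((z + eps_lat * z_adv.grad.sign()).detach()).clamp(0, 1).detach()

    return x_adv_img, x_adv_lat
```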
arXiv Detail & Related papers (2020-09-05T06:00:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.