Backdoor Attacks for Remote Sensing Data with Wavelet Transform
- URL: http://arxiv.org/abs/2211.08044v2
- Date: Thu, 22 Jun 2023 15:43:40 GMT
- Title: Backdoor Attacks for Remote Sensing Data with Wavelet Transform
- Authors: Nikolaus Dräger, Yonghao Xu, Pedram Ghamisi
- Abstract summary: In this paper, we provide a systematic analysis of backdoor attacks for remote sensing data.
We propose a novel wavelet transform-based attack (WABA) method, which can achieve invisible attacks by injecting the trigger image into the poisoned image in the low-frequency domain.
Despite its simplicity, the proposed method can fool current state-of-the-art deep learning models with a high attack success rate.
- Score: 14.50261153230204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed the great success of deep learning algorithms in
the geoscience and remote sensing realm. Nevertheless, the security and
robustness of deep learning models deserve special attention when addressing
safety-critical remote sensing tasks. In this paper, we provide a systematic
analysis of backdoor attacks for remote sensing data, where both scene
classification and semantic segmentation tasks are considered. While most of
the existing backdoor attack algorithms rely on visible triggers like squared
patches with well-designed patterns, we propose a novel wavelet transform-based
attack (WABA) method, which can achieve invisible attacks by injecting the
trigger image into the poisoned image in the low-frequency domain. In this way,
the high-frequency information in the trigger image can be filtered out in the
attack, resulting in stealthy data poisoning. Despite its simplicity, the
proposed method can fool current state-of-the-art deep learning models
with a high attack success rate. We further analyze how
different trigger images and the hyper-parameters in the wavelet transform
would influence the performance of the proposed method. Extensive experiments
on four benchmark remote sensing datasets demonstrate the effectiveness of the
proposed method for both scene classification and semantic segmentation tasks
and thus highlight the importance of designing advanced backdoor defense
algorithms to address this threat in remote sensing scenarios. The code will be
available online at https://github.com/ndraeger/waba.
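The low-frequency injection described in the abstract can be sketched with a single-level Haar wavelet transform: both images are decomposed, only the approximation (low-frequency) coefficients are blended, and the trigger's detail (high-frequency) coefficients are discarded before reconstruction. This is a minimal illustrative sketch, not the authors' implementation (see the repository above); the blend ratio `alpha` is an assumed hyper-parameter.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar wavelet transform (orthonormal)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    cA = (a + b + c + d) / 2   # low-frequency approximation
    cH = (a + b - c - d) / 2   # horizontal detail
    cV = (a - b + c - d) / 2   # vertical detail
    cD = (a - b - c + d) / 2   # diagonal detail
    return cA, (cH, cV, cD)

def haar_idwt2(cA, details):
    """Inverse single-level 2-D Haar transform (perfect reconstruction)."""
    cH, cV, cD = details
    out = np.empty((cA.shape[0] * 2, cA.shape[1] * 2))
    out[0::2, 0::2] = (cA + cH + cV + cD) / 2
    out[0::2, 1::2] = (cA + cH - cV - cD) / 2
    out[1::2, 0::2] = (cA - cH + cV - cD) / 2
    out[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return out

def waba_poison(image, trigger, alpha=0.2):
    """Blend the trigger's low-frequency band into the image; the
    trigger's high-frequency details are dropped entirely."""
    cA_img, details_img = haar_dwt2(image)
    cA_trig, _ = haar_dwt2(trigger)
    cA_mix = (1 - alpha) * cA_img + alpha * cA_trig
    return haar_idwt2(cA_mix, details_img)

rng = np.random.default_rng(0)
image, trigger = rng.random((64, 64)), rng.random((64, 64))
poisoned = waba_poison(image, trigger)
```

Because only the approximation band is mixed, the residual `poisoned - image` is smooth, which is what makes the trigger visually stealthy.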
Related papers
- Deepfake Sentry: Harnessing Ensemble Intelligence for Resilient Detection and Generalisation [0.8796261172196743]
We propose a proactive and sustainable deepfake training augmentation solution.
We employ a pool of autoencoders that mimic the effect of the artefacts introduced by the deepfake generator models.
Experiments reveal that our proposed ensemble autoencoder-based data augmentation learning approach offers improvements in terms of generalisation.
arXiv Detail & Related papers (2024-03-29T19:09:08Z) - Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous [8.191688622709444]
We propose a novel approach for adversarial attack detection for deep neural network-based relative pose estimation schemes.
The proposed adversarial attack detector achieves a detection accuracy of 99.21%.
arXiv Detail & Related papers (2023-11-10T11:07:31Z) - Histogram Layer Time Delay Neural Networks for Passive Sonar Classification [58.720142291102135]
A novel method combines a time delay neural network and histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification.
The proposed method outperforms the baseline model, demonstrating the utility in incorporating statistical contexts for passive sonar target recognition.
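Histogram layers of this kind typically summarize a feature sequence with soft (differentiable) binning. The sketch below is a toy numpy illustration of that idea, not the paper's layer: the Gaussian membership function, bin centers, and width are assumptions (in the actual layer they would be learnable parameters).

```python
import numpy as np

def soft_histogram(features, centers, width=0.5):
    """Soft (differentiable) histogram: Gaussian membership of each
    feature to every bin center, averaged over the time axis."""
    memb = np.exp(-((features[:, None] - centers[None, :]) ** 2)
                  / (2.0 * width ** 2))      # (T, B) bin memberships
    return memb.mean(axis=0)                 # (B,) bin activations

# Toy 1-D feature sequence standing in for one TDNN feature channel
feats = np.random.default_rng(1).normal(size=100)
centers = np.linspace(-2.0, 2.0, 8)          # assumed bin centers
hist = soft_histogram(feats, centers)
```

The resulting bin activations capture the statistics of the sequence rather than its ordering, which is the "statistical context" the summary refers to.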
arXiv Detail & Related papers (2023-07-25T19:47:26Z) - Towards an Accurate and Secure Detector against Adversarial Perturbations [58.02078078305753]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition of natural-artificial data.
We propose an accurate and secure adversarial example detector, relying on a spatial-frequency discriminative decomposition with secret keys.
arXiv Detail & Related papers (2023-05-18T10:18:59Z) - Unsupervised Wildfire Change Detection based on Contrastive Learning [1.53934570513443]
Accurately characterizing the severity of a wildfire event helps characterize the fuel conditions in fire-prone areas.
The aim of this study is to develop an autonomous system built on top of high-resolution multispectral satellite imagery, with an advanced deep learning method for detecting burned area change.
arXiv Detail & Related papers (2022-11-26T20:13:14Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP)
MAP causes natural images to be misclassified with high probability after the perturbation is updated through only a single gradient-ascent step.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
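A single gradient-ascent update to one perturbation shared across inputs can be illustrated on a toy linear classifier. This is an illustrative sketch only: the linear model, random batch, and FGSM-style sign step with `eps = 0.5` are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))               # toy linear classifier
X = rng.normal(size=(32, 64))               # batch of "images"
y = rng.integers(0, 10, size=32)            # labels

def ce_loss_and_grad(x, label):
    """Cross-entropy loss and its gradient w.r.t. the input x."""
    logits = W @ x
    z = logits - logits.max()               # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    loss = np.log(np.exp(z).sum()) - z[label]
    p[label] -= 1.0                         # d(loss)/d(logits)
    return loss, W.T @ p                    # chain rule back to x

# One gradient-ascent step on a single perturbation shared by all inputs
grads = np.array([ce_loss_and_grad(x, t)[1] for x, t in zip(X, y)])
delta = 0.5 * np.sign(grads.mean(axis=0))   # FGSM-style sign step (assumed eps)

clean_loss = np.mean([ce_loss_and_grad(x, t)[0] for x, t in zip(X, y)])
adv_loss = np.mean([ce_loss_and_grad(x + delta, t)[0] for x, t in zip(X, y)])
```

Because the loss is convex in the input for a linear model, stepping along the sign of the batch-averaged gradient is guaranteed to raise the average loss, which is the image-agnostic behavior the summary describes.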
arXiv Detail & Related papers (2021-11-19T16:01:45Z) - Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden the robustness of models against adversarial attacks.
We propose a novel method that adds auxiliary noise and uses an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z) - Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z) - Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.