Backdoor Attacks for Remote Sensing Data with Wavelet Transform
- URL: http://arxiv.org/abs/2211.08044v2
- Date: Thu, 22 Jun 2023 15:43:40 GMT
- Title: Backdoor Attacks for Remote Sensing Data with Wavelet Transform
- Authors: Nikolaus Dräger, Yonghao Xu, Pedram Ghamisi
- Abstract summary: In this paper, we provide a systematic analysis of backdoor attacks for remote sensing data.
We propose a novel wavelet transform-based attack (WABA) method, which achieves invisible attacks by injecting the trigger image into the poisoned image in the low-frequency domain.
Despite its simplicity, the proposed method can effectively fool current state-of-the-art deep learning models with a high attack success rate.
- Score: 14.50261153230204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed the great success of deep learning algorithms in
the geoscience and remote sensing realm. Nevertheless, the security and
robustness of deep learning models deserve special attention when addressing
safety-critical remote sensing tasks. In this paper, we provide a systematic
analysis of backdoor attacks for remote sensing data, where both scene
classification and semantic segmentation tasks are considered. While most of
the existing backdoor attack algorithms rely on visible triggers like squared
patches with well-designed patterns, we propose a novel wavelet transform-based
attack (WABA) method, which can achieve invisible attacks by injecting the
trigger image into the poisoned image in the low-frequency domain. In this way,
the high-frequency information in the trigger image can be filtered out in the
attack, resulting in stealthy data poisoning. Despite its simplicity, the
proposed method can effectively fool the current state-of-the-art deep
learning models with a high attack success rate. We further analyze how
different trigger images and the hyper-parameters in the wavelet transform
would influence the performance of the proposed method. Extensive experiments
on four benchmark remote sensing datasets demonstrate the effectiveness of the
proposed method for both scene classification and semantic segmentation tasks
and thus highlight the importance of designing advanced backdoor defense
algorithms to address this threat in remote sensing scenarios. The code will be
available online at https://github.com/ndraeger/waba.
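For readers who want a concrete picture of the injection step described above, the sketch below shows one plausible way to blend a trigger into an image's low-frequency wavelet band using the PyWavelets library. The function name, the blending weight alpha, the wavelet family, and the decomposition level are illustrative assumptions, not the authors' exact implementation; the released code at https://github.com/ndraeger/waba defines the actual procedure.

```python
# Hedged sketch of low-frequency trigger injection via a 2-D discrete wavelet
# transform, in the spirit of WABA. `alpha`, the wavelet family, and the
# decomposition level are assumptions for illustration only.
import numpy as np
import pywt


def inject_trigger(image: np.ndarray, trigger: np.ndarray,
                   alpha: float = 0.2, wavelet: str = "haar",
                   level: int = 2) -> np.ndarray:
    """Blend a trigger into an image's low-frequency wavelet band.

    Both inputs are float arrays in [0, 1] with shape (H, W, C) and the
    same spatial size; the returned poisoned image has the same shape.
    """
    poisoned = np.empty_like(image)
    for c in range(image.shape[2]):  # channel-wise 2-D DWT
        img_coeffs = pywt.wavedec2(image[..., c], wavelet, level=level)
        trg_coeffs = pywt.wavedec2(trigger[..., c], wavelet, level=level)

        # Mix only the coarsest approximation (low-frequency) band; the
        # trigger's high-frequency detail bands are discarded, which keeps
        # the poisoned image visually close to the original.
        img_coeffs[0] = (1 - alpha) * img_coeffs[0] + alpha * trg_coeffs[0]

        rec = pywt.waverec2(img_coeffs, wavelet)
        poisoned[..., c] = rec[:image.shape[0], :image.shape[1]]
    return np.clip(poisoned, 0.0, 1.0)
```

In this reading, the blending weight and the wavelet settings play the role of the hyper-parameters whose influence on attack performance the paper analyzes.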
Related papers
- Twin Trigger Generative Networks for Backdoor Attacks against Object Detection [14.578800906364414]
Object detectors, which are widely used in real-world applications, are vulnerable to backdoor attacks.
Most research on backdoor attacks has focused on image classification, with limited investigation into object detection.
We propose novel twin trigger generative networks to generate invisible triggers for implanting backdoors into models during training, and visible triggers for steady activation during inference.
arXiv Detail & Related papers (2024-11-23T03:46:45Z)
- Power side-channel leakage localization through adversarial training of deep neural networks [10.840434597980723]
Supervised deep learning has emerged as an effective tool for carrying out power side-channel attacks on cryptographic implementations.
We propose a technique for identifying which timesteps in a power trace are responsible for leaking a cryptographic key.
arXiv Detail & Related papers (2024-10-29T18:04:41Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous [8.191688622709444]
We propose a novel approach for adversarial attack detection for deep neural network-based relative pose estimation schemes.
The proposed adversarial attack detector achieves a detection accuracy of 99.21%.
arXiv Detail & Related papers (2023-11-10T11:07:31Z)
- Histogram Layer Time Delay Neural Networks for Passive Sonar Classification [58.720142291102135]
A novel method combines a time delay neural network and histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification.
The proposed method outperforms the baseline model, demonstrating the utility of incorporating statistical contexts for passive sonar target recognition.
arXiv Detail & Related papers (2023-07-25T19:47:26Z)
- Unsupervised Wildfire Change Detection based on Contrastive Learning [1.53934570513443]
Accurate characterization of wildfire severity contributes to the assessment of fuel conditions in fire-prone areas.
The aim of this study is to develop an autonomous system built on top of high-resolution multispectral satellite imagery, with an advanced deep learning method for detecting burned area change.
arXiv Detail & Related papers (2022-11-26T20:13:14Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
MAP causes natural images to be misclassified with high probability after being updated through only a one-step gradient ascent update.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
- Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden the robustness of models against adversarial attacks.
We propose a novel method that combines additional noise with an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.